id: string (length 10)
title: string (length 5-246)
abstract: string (length 42-3.32k)
authors: string (length 5-21.5k)
published_date: timestamp[s]
link: string (length 33-34)
markdown: string (length 140-1.08M)
abstract_ja: string (length 0-1.35k)
2302.14354
Deep Learning for Identifying Iran's Cultural Heritage Buildings in Need of Conservation Using Image Classification and Grad-CAM
The cultural heritage buildings (CHB), which are part of mankind's history and identity, are in constant danger of damage or in extreme situations total destruction. That being said, it's of utmost importance to preserve them by identifying the existent, or presumptive, defects using novel methods so that renovation processes can be done in a timely manner and with higher accuracy. The main goal of this research is to use new deep learning (DL) methods in the process of preserving CHBs (situated in Iran); a goal that has been neglected especially in developing countries such as Iran, as these countries still preserve their CHBs using manual, and even archaic, methods that need direct human supervision. Having proven their effectiveness and performance when it comes to processing images, the convolutional neural networks (CNN) are a staple in computer vision (CV) literacy and this paper is not exempt. When lacking enough CHB images, training a CNN from scratch would be very difficult and prone to overfitting; that's why we opted to use a technique called transfer learning (TL) in which we used pre-trained ResNet, MobileNet, and Inception networks, for classification. Even more, the Grad-CAM was utilized to localize the defects to some extent. The final results were very favorable based on those of similar research. The final proposed model can pave the way for moving from manual to unmanned CHB conservation, hence an increase in accuracy and a decrease in human-induced errors.
Mahdi Bahrami, Amir Albadvi
2023-02-28T07:14:15
http://arxiv.org/abs/2302.14354v1
Deep Learning for Identifying Iran's Cultural Heritage Buildings in Need of Conservation Using Image Classification and Grad-CAM+ ###### Abstract The cultural heritage buildings (CHB), which are part of mankind's history and identity, are in constant danger of damage or in extreme situations total destruction. That being said, it's of utmost importance to preserve them by identifying the existent, or presumptive, defects using novel methods so that renovation processes can be done in a timely manner and with higher accuracy. The main goal of this research is to use new deep learning (DL) methods in the process of preserving CHBs (situated in Iran); a goal that has been neglected especially in developing countries such as Iran, as these countries still preserve their CHBs using manual, and even archaic, methods that need direct human supervision. Having proven their effectiveness and performance when it comes to processing images, the convolutional neural networks (CNN) are a staple in computer vision (CV) literacy and this paper is not exempt. When lacking enough CHB images, training a CNN from scratch would be very difficult and prone to overfitting; that's why we opted to use a technique called transfer learning (TL) in which we used pre-trained ResNet, MobileNet, and Inception networks, for classification. Even more, the Grad-CAM was utilized to localize the defects to some extent. The final results were very favorable based on those of similar research. The final proposed model can pave the way for moving from manual to unmanned CHB conservation, hence an increase in accuracy and a decrease in human-induced errors. built cultural heritage conservation deep learning image processing convolutional neural networks (CNN) gradient weighted class activation mapping (Grad-CAM) Structural health monitoring transfer learning ## 1 Introduction Two main categories of Cultural Heritage (CH) are tangible and intangible heritages, and the CHBs fall under the former category. The tangible CHs have universal values which must be physically preserved for future generations as an irreplaceable legacy [1, 2]. CHBs are indubitably an integral part of the history and culture of human beings. Throughout the years many of these precious buildings have been in danger of damage due to several reasons, namely material deterioration, natural disasters, presence of visitors, vandalism, etc. [3, 4, 5]. Currently, the topic of CH has attracted increasing global attention from scientists and researchers alike, and the scope of its concept is constantly expanding. Most social scientists emphasize on its utility in supporting ethnic and national interests, while many others point to its creative and counter-hegemonic aspects [5, 6]. ### Importance Endowed with rich CHBs, Iran is ranked 10th in 2022, among all other countries, with 26 UNESCO world heritage sites [7]. Although only 26 of the CHBs in Iran have been registered in UNESCO and not all of them are buildings, the number of CHBs in Iran is of the order of thousands and according to archaeological findings, Iranian architecture dates back to 6,000-8,000 B.C. [8]. One of the reasons why Iran has been unsuccessful in registering more CHBs is the fact that most of these CHBs have not been preserved correctly, if not at all. Even some CHBs are beyond restoration. The CHBs, which fall under the category of immovable tangible CHs, demand more sophisticated methods for conservation since we cannot move them to museums to preserve. 
Lack of resources in terms of skilled practitioners, budget, and new technologies is just one of the shortcomings that introduce many problems into the conservation process. As regards the usage of state-of-the-art technologies, Iran as a developing country still uses archaic, and sometimes obsolete, manned methods to preserve these precious treasures of humanity. From a broader perspective, many CHBs around the world suffer from such problems as well, so the use of artificial intelligence (AI) techniques such as ML and DL is not a luxury anymore but a necessity. Using ML and DL, we can move toward unmanned conservation of CHBs, hence an increase in accuracy and a decrease in human-induced error. ### Research Aim The aim of this paper was to develop a highly generalized, yet simple, deep learning pipeline for the identification of CHBs in need of preservation, which can be used even in poor countries. We achieved this by making our model as lightweight as possible using a wealth of novel methods, as not all countries have access to expensive resources. This mindset allows for having less data and processing power but still reaping satisfying results (Table 3). ### Contribution **Unprecedented in Iran:** To the best of our knowledge, and to our surprise, not a single scientific study had been conducted using ML or DL in the conservation of Iran's CHBs. The body of research outside Iran is not extensive either; according to Fiorucci et al. [9], the use of ML in the CH literature has been quite limited in contrast to other fields. We believe that more research at the intersection of AI and CH can change this situation, pave the way for the prevalence of such techniques in the process of CHB conservation around the world, and accrue many benefits to the CHB literature as well. **First-hand Complex Data:** We used first-hand data, which had been collected from different sources, as discussed in subsection 3.1. Using first-hand data is important in the sense that our experiment would be unprecedented not only in Iran but globally as well, since no known CHB dataset to date [9] covers the diversity of building types, defect types, and color nuances of both Persian and Islamic architecture like ours. **New Combination of Methods:** This paper proposes an automated deep learning pipeline for identifying surface damage of CHBs. With developing countries in mind, we used a combination of state-of-the-art methods to cater to their conservation needs with as little budget as possible. That said, the final deep learning pipeline, using a pre-trained MobileNet, can be run on low-cost devices, for instance a budget mobile phone, to make inferences. * Image classification: to decide whether a CHB needs preservation or not. * MobileNet: a very lightweight CNN architecture, but with approximately the same performance as much heavier CNNs (e.g., ResNet and/or Inception). * Grad-CAM: to approximately localize the defects. * Transfer learning: to reap great results without the need for expensive servers or manpower to take copious images. * A valid data augmentation pipeline: allows the model to learn more features from the same data. * Compound regularization: a combination of four regularization methods together, namely augmentation, dropout, L2 regularization, and batch normalization. ## 2 Related works Globally, many attempts have been made to use deep learning for damage detection in CHB images. Wang et al.
[10] used object detection (OD) with the aid of FasterR-CNN based on a ResNet101 CNN to detect damage in images of masonry buildings with bounding boxes. In another research, Wang et al. [11] used instance segmentation (IS), by the means of a Mask R-CNN model, for damage detection, using a masked colored layer, in glazed tiled CHBs. An interesting work by Pathak et al. [12] used Faster-RCNN to detect damage in CHBs, but with one major difference to other works. They used point clouds data, instead of images, as the input to their proposed model, and instead rendered point clouds as images which increased the versatility of their model, since capturing photogrammetry doesn't have the same limitations of manually taking photos. Expectedly, damage detection using deep learning is not limited to CHB literacy; for instance, Perez and Tah [13] used OD to detect defects on the images of modern buildings. As highly revered as OD and IS are, they have some downsides, namely (1) a time-consuming data labeling process with bounding boxes (for OD) or color annotation (for IS); (2) the need for a huge amount of accurately labeled data; (3) detecting only pre-specified types of defects; and (4) much higher computational complexity, in comparison with image classification. This is especially important in the case of developing countries (e.g., Iran), where budgets and resources are limited. That's why despite the prevalence of OD and IS in computer vision, many researchers opted to use the simpler image classification, where each image will be given a label as a whole, and the position of damage is not delineated. As an example, Perez et al. [14] used image classification and CAM layers to classify and localize defects. The downside of their work was not the use of image classification, but using cropped images, which would have been more suitable for object detection rather than image classification. The usage of image classification and deep learning has not been just for damage detection, but aspects of CHB can benefit from them, as was the case with Llamas et al. [15] who attempted to classify different architectural elements in historical buildings. In terms of methodology, we followed the footsteps of Llamas et al. [15] and Perez et al. [14] by using image classification over OD and/or IS. Although our work is different in terms of the details of methodology and data. Unlike them, we used data augmentation and a combination of four regularization methods together, which in our case resulted in a 4-5% improvement in metrics (Table 3 and 4). **Research Gap:** To the best of our knowledge, most of the works regarding deep learning and CHB use either simplistic data or use the data belonging to a single CHB. As a result, the final trained model lacks the generalization needed to be used for a wide range of buildings in the country of origin. We believe that the data must reflect the variety of real-world data with no editing or cropping. This way the research can come as close as possible to the practical application of using deep learning in the conservation of CHBs. Despite being known as de facto in CV, OD and/or IS need substantial computational resources to process images and detect damage, therefore making these methods infeasible for developing and/or poor countries with so many CHBs (e.g., Iran). Using more lightweight and sophisticated techniques, we can achieve reasonable results but with low-budget and simple devices (e.g., Mobile Phones). 
## 3 Materials and Methods ### Data For this experiment, we curated a labeled dataset of approximately 10,500 CHB images. In the following, the data curation process is discussed. #### 3.1.1 Data Collection The data were gathered from four different sources: (i) the archives of Iran's cultural heritage ministry; (ii) the authors' (M.B.) personal archives; (iii) images captured on site by the authors (M.B.) during the research process; and (iv) pictures crawled from the Internet, which were kept to a minimum as their distribution differed due to heavy edits and effects. The images that didn't meet the desired quality were removed, to avoid introducing noise into our dataset. Our collected images proved to be very challenging in terms of complexity, peculiarity, level of detail, and variation in size, color, characteristics, etc. (Figure 1). Regarding the population of data, as it was infeasible to have access to all the CHBs of Iran, or to manually take pictures of them, we took a random but fair approach to increase the richness of the data by sampling from a wide variety of buildings in terms of architectural style, color theme, quality, time of construction, etc. In the process of collecting data, several criteria were foremost in our minds: * **Locations**: Semnan, Hamedan, Tehran, Ghazvin, etc.; * **Building types**: Mosques, Shrines, Churches, Palaces, etc.; * **Style**: Islamic, Roman, Persian, etc.; * **Defect types**: cracks, deterioration, mold, etc.; * **Color nuances**: we have images from different times of the day and in different seasons. #### 3.1.2 Data cleaning and preprocessing A number of preprocessing steps were taken before creating our final datasets: 1. Cleaning low-quality images, in terms of relevance, corruption, aspect ratio, grayscale, lighting condition, etc. (Figure A.1). 2. Fixing the auto-rotation EXIF metadata. 3. Finding a good enough resolution and resizing all images to it (i.e., 224x224). 4. Normalizing pixel values to a range of \([-1,1]\). #### 3.1.3 Data labeling So as not to exacerbate the existing data imbalance, we chose binary classification over multi-class classification. The negative class (label 0) was used for images that didn't include physical defects and the positive class (label 1) for the ones that did. To avoid bias in the labeling phase, we had three different highly qualified CHB practitioners label the images individually. This way, the final label of a single image was determined by the majority vote of these three labelers. When it comes to labeling, especially image data, we almost always have to deal with some degree of inconsistency, as different practitioners have different experiences, expertise, criteria, etc. To mitigate this effect, we defined some criteria by which each labeler had a more consistent and clear guideline for labeling the images. Figure A.2 shows why it was so crucial to have criteria that distinctly determine what should be considered a defect (e.g., in terms of length or depth). As regards the types of physical defects considered in the labeling process, we can enumerate cracks, mold, stains, and deterioration as the most important ones with enough samples in our dataset. #### 3.1.4 Creating the datasets After cleaning and preprocessing our data, it was divided into three mutually exclusive and jointly exhaustive sets, namely train, validation (aka dev), and test (Figure 1).
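A minimal Python sketch of the preprocessing steps listed in 3.1.2 (EXIF orientation fix, resizing to 224x224, and scaling to \([-1,1]\)) might look as follows; the paper does not publish its implementation, so the use of Pillow/NumPy and the function name here are assumptions.

```python
import numpy as np
from PIL import Image, ImageOps

IMG_SIZE = 224  # common resolution chosen in step 3

def load_and_preprocess(path: str) -> np.ndarray:
    """Load one image, apply its EXIF rotation, resize, and scale to [-1, 1]."""
    img = Image.open(path).convert("RGB")
    img = ImageOps.exif_transpose(img)       # step 2: bake the EXIF auto-rotation into the pixels
    img = img.resize((IMG_SIZE, IMG_SIZE))   # step 3: resize everything to 224x224
    arr = np.asarray(img, dtype=np.float32)
    return arr / 127.5 - 1.0                 # step 4: map [0, 255] to [-1, 1]
```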
To ensure a random but fair division, we used a stratified shuffle, which is why we have approximately the same ratio between the numbers of images for each label (Table 1). Figure 1: A few sample images which show the complexity, diversity, and variation of our data. As is evident in Table 1, the notorious yet prevalent problem of data imbalance can be identified. As will be discussed in subsection 4.2, we used a weighted loss function to mitigate this problem by a large margin. ### Convolutional Neural Networks (CNNs) Synonymous with unassailable performance when it comes to processing image data, CNNs have been a staple in the field of CV since their introduction in 1989 by LeCun et al. [16, 17]. Therefore it was somewhat indubitable that we needed to process our CHB images with this type of NN to benefit from all the advantages that CNNs could bring to our models. Goodfellow et al. [18] believe CNNs to have three main benefits: translation equivariance, sparse connections, and parameter sharing. A CNN has fewer learnable parameters in comparison with its conventional fully connected (FC) counterpart. This reduction in the number of parameters is the product of having sparse connections and parameter sharing, which enables CNNs to: (i) train faster; (ii) be less prone to overfitting and, as a result, demand less training data; and (iii) work with high-dimensional data (e.g., images) that their FC counterparts are incapable of handling. A CNN does the onerous work of feature extraction automatically; a task that, without CNNs, used to be done by hand-engineering the features [19]. In this experiment, we used three of the most prestigious CNN architectures, which have shown compelling results and interesting loss convergence, namely ResNet [20], Inception [21], and MobileNet [22]. ### Transfer Learning Dealing with several constraints such as the lack of enough data and powerful computers, a methodology called transfer learning was employed to drastically mitigate these impediments. TL tries to transfer the knowledge a pre-trained model has already learned from a large amount of data to another model [23]. Generally, TL consists of two main parts. The first part is responsible for customizing the output layer to our problem. The second part fine-tunes the pre-trained model to adapt it more to our specific data. ### Class Activation Mapping (CAM) In spite of the merits of image classification, there is a notorious drawback that lies within, and that is the black-box nature of artificial neural networks (NNs). That being said, we don't know whether the model considers pertinent features in an image when deciding its class or not. That's why researchers came up with a solution named class activation mapping (CAM) [24]. In this experiment we used gradient-weighted class activation maps (Grad-CAM) [25], a CAM method that merges the gradients (aka derivatives) of the final classification, that is, the output layer deciding the label of the image, with the output of the final Conv layer of the model to generate a heatmap. The heatmap is then applied to the original image to localize the regions that were taken into account when deciding its class/label. ### Regularization As one of the salient reasons for the occurrence of overfitting is the lack of enough data, which is ubiquitous in CV, we are always in need of more data. Unfortunately, getting more brand-new data is not always possible.
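The Grad-CAM procedure described above can be sketched generically as follows, assuming a TensorFlow/Keras classifier with a single sigmoid output; this is not the authors' code, and `last_conv_name` is a placeholder for whichever Conv layer is tapped.

```python
import tensorflow as tf

def grad_cam(model, image, last_conv_name):
    """Weight the last Conv layer's feature maps by the gradient of the
    positive-class score with respect to them, then ReLU and normalize."""
    grad_model = tf.keras.Model(
        model.inputs,
        [model.get_layer(last_conv_name).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, pred = grad_model(image[None, ...])   # add a batch dimension
        score = pred[:, 0]                              # probability of the positive class
    grads = tape.gradient(score, conv_out)              # d(score) / d(feature maps)
    weights = tf.reduce_mean(grads, axis=(1, 2))        # global-average-pool the gradients
    cam = tf.reduce_sum(weights[:, None, None, :] * conv_out, axis=-1)
    cam = tf.nn.relu(cam)[0]                            # keep only positive evidence
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()  # heatmap in [0, 1]
```

The heatmap is then resized to the input resolution and overlaid on the original image.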
A workaround is to use the data we already have to increase the number of valid labeled training examples, hence a decrease in overfitting, as the model is now less capable of naively memorizing the train set [26]. As data augmentation is a staple in CV [26], we almost always opt to use it, and this paper is not exempt. Finally, in Figure 2 the result of our proposed data augmentation pipeline after nine runs on the same image can be seen. The data augmentation methods used in this paper can be found in Table 2. Briefly, to decrease overfitting, which is commonplace in DL models due to their high capacity in terms of the number of parameters, a combination of four well-known methods was used, namely L2 regularization [27], dropout [28], batch normalization [29], and data augmentation [26]. The results of this combined approach, as discussed in section 5, were quite satisfactory and resulted in a very small amount of overfitting (i.e., \(<1\%\)) for all of our models. \begin{table} \begin{tabular}{c c c c c} \hline \hline **class/label** & **Total images** & **Train set** & **Validation set** & **Test set** \\ \hline **negative/0** & 1432 & 1018 (13.8\%) & 207 (12.99\%) & 207 (13.28\%) \\ **positive/1** & 9096 & 6358 (86.2\%) & 1386 (87.01\%) & 1352 (86.72\%) \\ \hline **Total** & 10528 & 7376 (70.06\%) & 1593 (15.13\%) & 1559 (14.80\%) \\ \hline \hline \end{tabular} \end{table} Table 1: The distribution of data; both aggregated and for each dataset separately. ## 4 Implementation ### Network Architecture In Figure 3 the holistic architecture of our proposed method is represented. To avoid processing new input images through a data preprocessing pipeline every time, we embedded both the resizing and the normalization preprocessing functions into our network (i.e., pink box). This way, there is no need to preprocess unknown images before executing the prediction on them once the model has been trained. It was alluded to before that in this experiment we made use of several pre-eminent CNN architectures to tackle the problem at hand and not to be biased toward a certain architecture. As a result, four different networks were implemented, namely ResNet50-v2, ResNet152-v2, InceptionResNet-v2, and MobileNet-v2. One main difference between the ResNet50-v2 and the other models is that we trained the ResNet50-v2 from scratch and with randomly initialized weights, while the other three were pre-trained models which were accompanied by TL. \begin{table} \begin{tabular}{c c|c c} \hline **method** & **value** & **method** & **value** \\ \hline random flip & Horizontal & random brightness & 0.05 \\ random rotation & 0.005 & random saturation & 0.6 - 1.2 \\ random crop & 5\% & random contrast & 0.75 - 1.1 \\ random quality & 80 - 100 & random hue & 0.03 \\ \hline \end{tabular} \end{table} Table 2: The data augmentation methods used in this paper and their corresponding values. Figure 2: An example of applying the proposed data augmentation methods on a train image (i.e., nine times). Notice how random, realistic, and valid the augmented versions are. The responsibility of the Global Average Pooling layer (i.e., purple box) was to flatten the output of the last Conv layer into a matrix, which is the desired shape of the input of a fully connected (FC) layer.
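As an illustration of Table 2, a possible augmentation pass over a single \([0,1]\)-valued 224x224 image is sketched below with `tf.image`; the authors' exact implementation and the order of the transforms are not given in the paper, so this is indicative only.

```python
import tensorflow as tf

IMG_SIZE = 224

def augment(image: tf.Tensor) -> tf.Tensor:
    """One random pass of the Table 2 augmentations over a [0, 1] RGB image."""
    image = tf.image.random_flip_left_right(image)        # horizontal flip
    image = tf.image.random_brightness(image, max_delta=0.05)
    image = tf.image.random_saturation(image, 0.6, 1.2)
    image = tf.image.random_contrast(image, 0.75, 1.1)
    image = tf.image.random_hue(image, max_delta=0.03)
    image = tf.image.random_jpeg_quality(image, 80, 100)  # "random quality" 80-100
    crop = int(IMG_SIZE * 0.95)                            # roughly a 5% random crop
    image = tf.image.random_crop(image, [crop, crop, 3])
    image = tf.image.resize(image, [IMG_SIZE, IMG_SIZE])
    # the small random rotation (factor 0.005) can be added with
    # tf.keras.layers.RandomRotation(0.005) inside the model itself
    return tf.clip_by_value(image, 0.0, 1.0)
```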
Before replacing the output of the pre-trained model with a layer of our own, an FC layer (i.e., light blue box) was added to decrease underfitting; the bigger our network becomes, the less underfitting we experience, but overfitting also increases. That's why a single FC layer proved to provide the desired trade-off, reducing underfitting by a large margin without increasing overfitting too much. As shown in Figure 3, our model has two outputs. The first (i.e., green box) is responsible for the task of classification, by which each image will be given a label (i.e., negative or positive). The second output, on the other hand, localizes the parts on which the model based its decision for a certain label for a specific image; this task is done by the Grad-CAM method (i.e., orange box). ### Evaluation To evaluate the implemented networks, several metrics have been used in an endeavor to meticulously monitor the behavior of the networks at different stages of training. All these metrics will be scrutinized in the following subsections. #### 4.2.1 Cost function As mentioned in subsubsection 3.1.4, our two classes were imbalanced, and since this would nudge our model to be biased toward the class with more examples (i.e., the positive class), we had to tackle this problem somehow. Having decided in favor of the class weight method due to its numerous merits, Equation 1 was used to calculate the weight of each class. It's worth noting that there are myriad ways to calculate the weights, but as we would fine-tune the calculated weights later in the hyperparameter tuning phase, we chose the most widely used one: \[w_{c}=\frac{n_{t}}{n_{l}*n_{c}} \tag{1}\] where \(w_{c}\), \(n_{t}\), \(n_{l}\), and \(n_{c}\) indicate the calculated weight of class \(c\), the total number of images in the dataset, the number of classes, and the number of images in class \(c\), respectively. These weights will then be used in the cost function of our networks so that the importance of images belonging to the minority class outweighs that of the majority class, in a way that the network will be rewarded or penalized more when it comes to images of the class with fewer examples. The binary cross-entropy cost function was used, and the way it calculates the cost before and after applying class weights can be seen in Equations 2 and 3, respectively. To make it more concrete, the former is used in validation, test, and prediction, while the latter is employed at training time; that is, we only care about data imbalance during training, which makes sense as the network only updates its internal parameters (e.g., weights) during training and backpropagation. \[L(\hat{y},y)=-\bigg{(}ylog(\hat{y})+(1-y)log(1-\hat{y})\bigg{)} \tag{2}\] \[L(\hat{y},y)=-\bigg{(}(w_{1})(y)log(\hat{y})+(w_{0})(1-y)log(1-\hat{y})\bigg{)} \tag{3}\] where \(y\) refers to the true label and \(\hat{y}\) to the predicted label of the given record. Note that as we performed binary classification with a sigmoid activation function for the output layer, \(\hat{y}\) is actually the probability (in \([0,1]\)) of the record belonging to the positive class. Figure 3: The overall architecture of our proposed model/network. The values shown in parentheses below each layer represent the layer’s output shape. \(N\), \(n_{H}^{[L]}\), \(n_{W}^{[L]}\), and \(n_{C}^{[L]}\) refer to the batch size, height, width, and channels of the last layer (\(L\)) of the embedded CNN model, respectively.
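Equations 1-3 translate directly into a few lines of Python; the sketch below is illustrative rather than the authors' code, and the example counts are the whole-dataset figures from Table 1.

```python
import numpy as np

def class_weights(counts: dict) -> dict:
    """Equation (1): w_c = n_t / (n_l * n_c)."""
    n_t, n_l = sum(counts.values()), len(counts)
    return {c: n_t / (n_l * n_c) for c, n_c in counts.items()}

def weighted_bce(y_true, y_hat, w0, w1, eps=1e-7):
    """Equation (3): class-weighted binary cross-entropy used at training time."""
    y_hat = np.clip(y_hat, eps, 1 - eps)
    return -np.mean(w1 * y_true * np.log(y_hat) + w0 * (1 - y_true) * np.log(1 - y_hat))

# Counts from Table 1 (negative: 1432, positive: 9096) give
# w_0 ~= 3.68 and w_1 ~= 0.58, up-weighting the minority class.
w = class_weights({0: 1432, 1: 9096})
```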
#### 4.2.2 Performance measures and metrics When it comes to the evaluation of our model, several metrics were incorporated to ensure the rigor of our results. As we suffer from imbalanced data the Accuracy can be quite misleading if the model gets biased toward the superior class, so to address this issue four more performance measures were used, namely Precision, Recall, F-Score, and AUC. If anything, the F-Score is the harmonic mean of the Precision and Recall, thus it takes into account both of them to give us a balanced score of the two. Mathematically, Accuracy, Precision and Recall, and F-Score are defined as: \[Accuracy=\frac{TP+TN}{TP+FP+TN+FN} \tag{4}\] \[Precision=\frac{TP}{TP+FP} \tag{5}\] \[Recall=\frac{TP}{TP+FN} \tag{6}\] \[F\text{-}Score=\frac{2*Precision*Recall}{Precision+Recall} \tag{7}\] Where TP, TN, FP, and FN are True Positive, True Negative, False Positive, and False Negative respectively. In this paper the FN takes precedence over FP, thus the Recall is more important than precision as the FN is in the denominator of the Recall's Equation 6, however, we tried to balance them as much as possible. The reason is that if an image is falsely labeled as positive then in the worst-case scenario we lose time, but in the case of an image being falsely labeled as negative, then a building in dire need of conservation can be overlooked which might lead to irredeemable destruction. The area under the ROC curve, abbreviated as AUC, was employed in an endeavor to refrain from creating a model biased toward a certain class. AUC demonstrates the power of the model in distinguishing different classes. ## 5 Results After slogging through the onerous task of training and fine-tuning the hyperparameters several times, we achieved highly satisfactory results (Table 3). Note that the training process of the ResNet50-v2 doesn't have the fine-tuning step as we trained it from the ground up and with random initial weights. Considering the lack of enough data and computational power, which were alluded to before, it was of no surprise that the networks trained with TL fared the best. Among the networks that used TL, there is no definite winner, but the MobileNet-v2 had the best performance considering both the performance measures and the computational complexity for both the training and making an inference. That said, MobileNet's lightweight architecture is conducive to training and predicting faster which is especially important for devices with low computational power such as mobile phones, edge devices, etc. which are considered de facto pieces of equipment to monitor CHBs [30]. ### Evaluation of MobileNet-v2's Performance As mentioned before and according to Table 3 the fine-tuned model made with pre-trained MobileNet-v2 was the winner among the other three networks. and its lightweight architecture which is conducive to training and predicting faster is especially important for devices with low computational power such as mobile phones, edge devices, etc. That being said, as the winner among all four network architectures let's scrutinize MobileNet-v2's performance even more. The results of other networks in detail can be found in Figure A.3-A.5. The Table A.1 displays the most important hyperparameters used during the training and fine-tuning of our multiple networks. The fine-tuned MobileNet-v2 doesn't suffer from underfitting nor overfitting (Figure 4). 
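Equations 4-7 can be checked directly against the confusion-matrix counts reported in Table 3; the helper below is a plain Python sketch, not code from the paper.

```python
def classification_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Equations (4)-(7) computed from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_score = 2 * precision * recall / (precision + recall)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f_score": f_score}

# Fine-tuned MobileNet-v2 on the test set (Table 3): TP=1274, FP=76, TN=139, FN=79
# reproduces the reported ~0.90 accuracy and ~0.94 precision/recall/F-score.
print(classification_metrics(tp=1274, tn=139, fp=76, fn=79))
```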
As regards the second output of the fine-tuned MobileNet-v2, the localizations seemed completely relevant and attest to the fact that the model had learned the correct features in the train data (Figure 5). The outputs of several Conv layers, aka feature maps, from our fine-tuned MobileNet-v2 network are visualized in Figure 6; we purposefully chose one layer from the beginning, one from the middle, and another from the end of the network to demonstrate that the deeper we go into the network, the more holistic and abstract the detected features become, and vice versa. ## 6 Discussion This work demonstrates the capabilities of DL in the conservation of CHBs by means of damage detection. As we have collected a diverse set of intricate CHB images, the trained model is very robust and achieved a minimum of 90% for all the metrics we used on the test set. Beyond our diverse data, using TL, data augmentation, and three different regularization methods in combination was conducive to reducing overfitting and increasing the generalization power of our model. The salient reasons that attest to why our results are considered good enough are (i) the Bayes error rate and (ii) the values of the performance measures. Although measuring the Bayes error rate is a hard and time-consuming task, which was not in the scope of this experiment, we can argue that its value is high: for instance, even a highly skilled CHB practitioner from the south of Iran would have had a hard time detecting the defects in CHBs from the north of the country, considering the peculiarity and idiosyncrasies of each building in our dataset. Figure 4: The changes in performance measures reported after each epoch for both the train and validation sets during the training and fine-tuning phase; belonging to the MobileNet-v2 network. The green line indicates the point, in terms of epoch number, where we started to fine-tune some late layers in the pre-trained model. \begin{table} \begin{tabular}{c c c|c c c|c c c|c c c} \hline \hline \multirow{2}{*}{**Measure**} & \multicolumn{3}{c}{**ResNet50V2**} & \multicolumn{3}{c}{**ResNet152V2**} & \multicolumn{3}{c}{**MobileNetV2**} & \multicolumn{3}{c}{**InceptionResNetV2**} \\ \cline{2-13} & **train** & **val** & **test** & **train** & **val** & **test** & **train** & **val** & **test** & **train** & **val** & **test** \\ \hline **Loss** & 0.48 & 0.47 & 0.48 & 0.38 & 0.38 & 0.38 & 0.31 & 0.32 & 0.33 & 0.36 & 0.36 & 0.37 \\ **Accuracy** & 0.83 & 0.84 & 0.83 & 0.88 & 0.89 & 0.89 & 0.90 & 0.90 & 0.90 & 0.88 & 0.88 & 0.88 \\ **Precision** & 0.87 & 0.87 & 0.87 & 0.92 & 0.92 & 0.92 & 0.95 & 0.94 & 0.94 & 0.91 & 0.91 & 0.91 \\ **Recall** & 0.95 & 0.95 & 0.95 & 0.95 & 0.95 & 0.96 & 0.94 & 0.94 & 0.94 & 0.96 & 0.95 & 0.96 \\ **F-Score** & 0.91 & 0.91 & 0.91 & 0.93 & 0.94 & 0.94 & 0.94 & 0.94 & 0.94 & 0.93 & 0.93 & 0.93 \\ **AUC** & 0.54 & 0.54 & 0.89 & 0.88 & 0.88 & 0.93 & 0.92 & 0.90 & 0.87 & 0.86 & 0.85 \\ **TP** & 6040 & 1310 & 1287 & 6056 & 1319 & 1296 & 5961 & 1311 & 1274 & 6082 & 1319 & 1295 \\ **FP** & 923 & 189 & 192 & 551 & 107 & 114 & 328 & 78 & 76 & 623 & 123 & 135 \\ **TN** & 95 & 21 & 22 & 467 & 103 & 100 & 690 & 127 & 139 & 395 & 87 & 79 \\ **FN** & 318 & 73 & 67 & 302 & 64 & 302 & 397 & 77 & 79 & 276 & 64 & 59 \\ \hline \hline \end{tabular} \end{table} Table 3: Final results, after hyperparameter tuning.
According to Mandrekar [31], in the field of CV, values larger than 90% are considered excellent, so it's safe to assume that the MobileNet-v2 had excellent performance, recording values above 90% for all of our metrics. Besides reaching the best performance among the models, the MobileNet-v2 is particularly interesting as it is a faster NN, which is particularly important for real-time damage detection on devices with low computational resources, such as mobile phones or edge devices. Using our proposed model based on MobileNet-v2 can pave the way for the wide usage of such models in CH sites in Iran and/or around the world with the fewest possible resources. Figure 5: Some samples of the output of the Grad-CAM layer of the fine-tuned MobileNet-v2 network. The localized defects are shown by a heatmap (from blue to red). Figure 6: A few samples (i.e., 8) of feature maps from the beginning (top), mid-section (middle), and end (bottom) of our fine-tuned MobileNet-v2 network. The input image was the same as that of subfigure (c) in Figure 5. To compare our results with those of similar researchers, the papers of Llamas et al. [15] and Perez et al. [14] were used, as these were the ones that used image classification, CNNs, and TL, just like this experiment. As both of these papers used multiclass classification whereas we used binary classification, we took the average of each metric (e.g., Recall) over all classes (Llamas et al. had ten classes and Perez et al. had four); this way we could make their results comparable to ours. The comparison of the results on the test set is shown in Table 4. The most important challenges and limitations that we faced during this experiment were: (i) needing more data, which is a perennial problem in CV; (ii) lack of suitable computational power; and (iii) inconsistency in labeling due to personal preference and differences in the labelers' level of expertise. ## 7 Conclusion This experiment is concerned with applying novel yet mature methods such as DL and CNNs to make the process of conserving CHBs less prone to errors and more efficient than doing it manually under direct human supervision. By getting Iran's CHB practitioners, the main beneficiaries of this experiment, to use our proposed models alongside their old methods, a higher rate of success in detecting physical defects in such buildings can be achieved. We firmly believe that CHB practitioners using DL models, such as our proposed one, can identify physical defects more often than either could alone and hopefully, as a result, lower the prospect of CHBs deteriorating in structural health. In an endeavor to practically demonstrate the utility of DL in the CH literature, we developed a fully fledged DL model that classifies the images in need of conservation and, even more, approximately localizes the defects to help CH practitioners identify defects in a timely manner, thereby speeding up the process of CHB conservation as well as increasing its accuracy. In spite of all the limitations, we achieved very good results with a score of at least 94% for Precision, Recall, and F1-Score, which were about 4-5% higher than those of similar works (Table 4). As regards future works, addressing the limitations we faced can open up a plethora of opportunities in terms of methods and outputs.
For instance, if we had access to a large amount of labeled data and powerful servers, physical or in the cloud, then object detection or instance segmentation would be more useful and could elicit more accurate and user-friendly results from our data. Having gained traction in the past few years, generative adversarial networks (GANs) could be utilized in our network architecture to propose restorations based on the label and localizations our proposed model offers.
Cultural heritage buildings (CHB), which are part of mankind's history and identity, are in constant danger of damage and, in extreme situations, total destruction. It is therefore of the utmost importance to identify existing or presumptive defects using novel methods so that renovation can proceed in a timely manner and with higher accuracy. The main goal of this research is to apply new deep learning (DL) methods to the preservation of Iran's cultural heritage buildings, a goal that has been neglected particularly in developing countries, where such buildings are still preserved with manual, even archaic, methods. The use of convolutional neural networks (CNN) in computer vision (CV)
2308.16518
MS23D: A 3D Object Detection Method Using Multi-Scale Semantic Feature Points to Construct 3D Feature Layer
LiDAR point clouds can effectively depict the motion and posture of objects in three-dimensional space. Many studies accomplish the 3D object detection by voxelizing point clouds. However, in autonomous driving scenarios, the sparsity and hollowness of point clouds create some difficulties for voxel-based methods. The sparsity of point clouds makes it challenging to describe the geometric features of objects. The hollowness of point clouds poses difficulties for the aggregation of 3D features. We propose a two-stage 3D object detection framework, called MS23D. (1) We propose a method using voxel feature points from multi-branch to construct the 3D feature layer. Using voxel feature points from different branches, we construct a relatively compact 3D feature layer with rich semantic features. Additionally, we propose a distance-weighted sampling method, reducing the loss of foreground points caused by downsampling and allowing the 3D feature layer to retain more foreground points. (2) In response to the hollowness of point clouds, we predict the offsets between deep-level feature points and the object's centroid, making them as close as possible to the object's centroid. This enables the aggregation of these feature points with abundant semantic features. For feature points from shallow-level, we retain them on the object's surface to describe the geometric features of the object. To validate our approach, we evaluated its effectiveness on both the KITTI and ONCE datasets.
Yongxin Shao, Aihong Tan, Binrui Wang, Tianhong Yan, Zhetao Sun, Yiyang Zhang, Jiaxin Liu
2023-08-31T08:03:25
http://arxiv.org/abs/2308.16518v9
MS\({}^{2}\)3D: A 3D Object Detection Method Using Multi-Scale Semantic Feature Points to Construct 3D Feature Layer ###### Abstract Lidar point clouds, as a type of data with accurate distance perception, can effectively represent the motion and posture of objects in three-dimensional space. However, the sparsity and disorderliness of point clouds make it challenging to extract features directly from them. Many studies have addressed this issue by transforming point clouds into regular voxel representations. However, the sparsity of point clouds poses challenges in effectively aggregating features within a 3D feature layer using voxel-based two-stage methods. To mitigate these issues, we propose a two-stage 3D detection framework named MS\({}^{2}\)3D in this paper. Within MS\({}^{2}\)3D, a novel approach is introduced to construct a 3D feature layer using multi-scale semantic feature points, effectively converting the sparse 3D feature layer into a more compact representation. Additionally, we predict the offset between the feature points in the 3D feature layer and the object's centroid, aiming to position the feature points as close to the object's center as possible. This method significantly enhances the efficiency of feature aggregation. Voxel-based methods often result in the loss of fine-grained local feature information during downsampling. By leveraging voxel encoding at different scales, we acquire feature information with varying receptive fields, mitigating the deficiency of fine-grained feature information to some extent. To validate the effectiveness of our approach, we conducted evaluations on both the KITTI dataset and the ONCE dataset. 3D object detection, Point cloud, Lidar, Deep learning. ## I Introduction With the continuous development of artificial intelligence technologies, 2D object detection methods [1, 2, 3, 4, 5] have become highly mature. However, in autonomous driving scenarios, 2D object detection lacks depth information, making it difficult to accurately represent objects' motion and pose. Although subsequent 3D object detection methods using stereo cameras [6, 7, 8, 9, 10, 11] or depth cameras [12, 13, 14, 15, 16, 17] can describe the pose of objects in three-dimensional space, they are susceptible to lighting conditions and occlusions, often resulting in poor performance in complex outdoor scenes. In comparison, Lidar is unaffected by environmental lighting conditions and can provide point clouds that are more suitable for representing the position of objects in three-dimensional space [18, 19, 20, 62]. However, point clouds possess specific characteristics [21] (sparsity, disorderliness, rotation invariance) that make direct feature extraction challenging. To address these issues, some methods convert point clouds into regular voxel representations and utilize 3D convolutions for feature extraction. Fig. 1 illustrates the basic structure of voxel-based methods. Voxel-based methods can be categorized into one-stage and two-stage methods. In one-stage methods [22, 23], the point clouds are voxelized, then the mean or maximum value of the feature points within each voxel is computed. Subsequently, a 3D convolutional neural network (CNN) is employed for feature extraction. The resulting features are transformed into bird's-eye view feature maps, and object detection is performed from that perspective. Two-stage methods [24] utilize the bounding boxes detected in the first stage as region proposals. 
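As a rough sketch of the voxelization-and-averaging step described for one-stage methods above, the NumPy snippet below groups points into voxels and takes the mean of the points falling in each voxel; the voxel size is an arbitrary example and this is not the MS\({}^{2}\)3D implementation.

```python
import numpy as np

def voxelize_mean(points: np.ndarray, voxel_size=(0.2, 0.2, 0.2)):
    """Group an (N, C) point cloud (x, y, z, ...) into voxels and average the
    points that fall into each voxel (the 'mean' variant mentioned above)."""
    coords = np.floor(points[:, :3] / np.asarray(voxel_size)).astype(np.int64)
    voxels, inverse = np.unique(coords, axis=0, return_inverse=True)
    inverse = inverse.reshape(-1)
    sums = np.zeros((len(voxels), points.shape[1]))
    counts = np.zeros(len(voxels))
    np.add.at(sums, inverse, points)        # accumulate point features per voxel
    np.add.at(counts, inverse, 1)
    return voxels, sums / counts[:, None]   # integer voxel coords, mean feature per voxel
```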
These region proposals are then used to aggregate feature information on the 3D feature layer to accomplish the final detection task. While voxel-based methods have shown promise, they still face certain challenges that require attention: (1) In the context of autonomous driving, LiDAR often captures only one or two measurements of small or distant objects, while the corresponding RGB sensor can capture hundreds of pixels [25]. Consequently, accurately representing the original geometric features of the point cloud, especially for small or distant objects, can be challenging. This challenge is further amplified when averaging or taking the maximum value of feature points within each voxel. It will exacerbate the sparsity of the point cloud. In the two-stage method, constructing a compact 3D Fig. 1: Overview of voxel-based 3D object detection methods. The blue portion represents the basic workflow of one-stage methods, while the combination of the blue and yellow portions represents the basic workflow of two-stage methods.
LiDAR point clouds can effectively depict the motion and posture of objects in three-dimensional space. Many studies accomplish 3D object detection by voxelizing point clouds. However, in autonomous driving scenarios, the sparsity and hollowness of point clouds pose several difficulties for voxel-based methods. The sparsity of point clouds makes it hard to describe the geometric features of objects, and their hollowness makes it difficult to aggregate 3D features. We therefore propose a two-stage 3D object detection framework called MS23D. (1) We propose a method that uses voxel feature points from multiple branches to construct the 3D feature layer. Using voxel feature points from different branches, we construct a relatively compact 3D feature layer with rich semantic features. Furthermore, a distance-weighted sampling method
2305.20056
Rare Life Event Detection via Mobile Sensing Using Multi-Task Learning
Rare life events significantly impact mental health, and their detection in behavioral studies is a crucial step towards health-based interventions. We envision that mobile sensing data can be used to detect these anomalies. However, the human-centered nature of the problem, combined with the infrequency and uniqueness of these events makes it challenging for unsupervised machine learning methods. In this paper, we first investigate granger-causality between life events and human behavior using sensing data. Next, we propose a multi-task framework with an unsupervised autoencoder to capture irregular behavior, and an auxiliary sequence predictor that identifies transitions in workplace performance to contextualize events. We perform experiments using data from a mobile sensing study comprising N=126 information workers from multiple industries, spanning 10106 days with 198 rare events (<2%). Through personalized inference, we detect the exact day of a rare event with an F1 of 0.34, demonstrating that our method outperforms several baselines. Finally, we discuss the implications of our work from the context of real-world deployment.
Arvind Pillai, Subigya Nepal, Andrew Campbell
2023-05-31T17:29:24
http://arxiv.org/abs/2305.20056v1
# Rare Life Event Detection via Mobile Sensing Using Multi-Task Learning ###### Abstract Rare life events significantly impact mental health, and their detection in behavioral studies is a crucial step towards health-based interventions. We envision that mobile sensing data can be used to detect these anomalies. However, the human-centered nature of the problem, combined with the infrequency and uniqueness of these events makes it challenging for unsupervised machine learning methods. In this paper, we first investigate granger-causality between life events and human behavior using sensing data. Next, we propose a multi-task framework with an unsupervised autoencoder to capture irregular behavior, and an auxiliary sequence predictor that identifies transitions in workplace performance to contextualize events. We perform experiments using data from a mobile sensing study comprising N=126 information workers from multiple industries, spanning 10106 days with 198 rare events (\(<2\%\)). Through personalized inference, we detect the exact day of a rare event with an F1 of 0.34, demonstrating that our method outperforms several baselines. Finally, we discuss the implications of our work from the context of real-world deployment. Data and Code Availability.Tesserae study data (Mattingly et al., 2019) can be obtained through a data usage agreement ([https://tesserae.nd.edu/](https://tesserae.nd.edu/)). Code is not publicly available, but provided on request. Institutional Review Board (IRB).The study protocol is fully approved by the Institutional Review Boards at Dartmouth College, University of Notre Dame, University of California-Irvine, Georgia Tech, Carnegie Mellon University, University of Colorado-Boulder, University of Washington, University of Texas-Austin, and Ohio State University. This research is based upon work supported in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via IARPA Contract No. 2017-17042800007. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ODNI, IARPA, or the U.S. Government. ## 1 Introduction Life events (LE) are significant changes in an individual's circumstances that affect interpersonal relationships, work, and leisure (Hill, 2002). The nature of these events inevitably affect mental well-being and general health (Goodyer, 2001). The detrimental effects of adverse LEs (e.g., death of a loved one, losing a job, terminal illness) have been widely studied and known to be associated with a higher incidence of depression and altered brain network structure (Falkingham et al., 2020; Gupta et al., 2017). In contrast, positive LEs (e.g., taking a vacation, job promotion, childbirth) are associated with increased life satisfaction, and higher cognitive function (Castanho et al., 2021). Moreover, LEs affect cardiovascular vascular disease risk factors, such as increased central adiposity, heightened exposure to inflammation, and elevated resting blood pressure (Steptoe and Kivimaki, 2013). A study by Steptoe and Kivimaki (2012) suggests that even minor LEs can trigger cardiac events like myocardial ischemia. Sensing data is a multivariate time series, and traditional ML approaches for anomaly detection include the One-Class SVM (OCSVM) (Ma and Perkins, 2003), Isolation Forest (IF) (Liu et al., 2008), and logistic regression (Hilbe, 2009). 
However, these approaches do not capture temporal dependencies. Additionally, creating user-specific thresholds is critical in human-centered tasks. Thus, methods which directly predict anomalies (IF) or require threshold parameter tuning (OCSVM) are not ideal. Recently, timeseries based autoencoders have received significant attention (Rumelhart et al., 1985; Zhou and Paffenroth, 2017; Audibert et al., 2020; Su et al., 2019), and many methods use RNN variants such as LSTM (Hochreiter and Schmidhuber, 1997) and GRU (Cho et al., 2014) to capture temporal dependencies. In an autoencoder, the reconstruction error is used to detect anomalies. However, the complexity of human behavior and class imbalance creates biased learned representations, making it challenging to distinguish normal and rare events (Pang et al., 2021). An intuitive solution to this problem involves increasing the error of rare events without affecting normal events. Multi-task learning achieves this by incorporating additional information from a related task(s) to learn a shared representation, and thus compensate for the limitations of a purely unsupervised approach (Sadhu et al., 2019; Wu et al., 2021). In this paper, we first use statistical analysis to examine whether LEs result in behavioral shifts observable through mobile sensing. Next, we propose Multi-Task Anomaly Detection (MTAD) to detect "in-the-wild" rare LEs using behavioral mobile sensing data. We hypothesize that a standalone unsupervised autoencoder cannot effectively capture differences between normal and rare events using the reconstruction error because of the heterogeneity in human data and the significant class imbalance (\(<2\%\) rare events). Thus, MTAD trains an auxiliary sequence predictor to contextualize events by capturing changes in workplace performance due to the event. For example, a participant reported that getting promoted positively impacted their work performance, while another person mentioned that visiting their sick parents had a negative effect. We aim to identify such changes at inference and compute a scaling factor that magnifies the reconstruction error of rare events. ### Contributions Toward the vision of detecting life events from mobile sensing data collected in in-the-wild settings, our main contributions are as follows. First, we perform granger-causality testing to detect if the days before an event can be used to predict days after the event. Thus, establishing a relationship between LEs and behavior (section 4). Second, we propose a multi-task learning architecture consisting of two components: (1) an LSTM based encoder-decoder to calculate an anomaly score, and (2) a sequence predictor to contextualize the anomaly score by inferring a transition in workplace performance (section 5). Third, we perform empirical analysis on a real-world dataset to compare MTAD with five state-of-the-art baselines to analyze performance and robustness (section 5.4). Finally, we rigorously evaluate parameters that affect MTAD (section 5.4) and discuss implications of our research (section 6). ## 2 Related Work ### Change Point Detection Change point detection (CPD) refers to the change in the state of a time series. CPD has been applied to various physiological signals (Shvetsov et al., 2020; Fotoohinasab et al., 2020; Stival et al., 2022). For example, Shvetsov et al. (2020) propose an unsupervised approach based on point clouds and Wasserstein distances to detect six different types of arrhythmias. 
Designing appropriate metrics that can identify change is vital in CPD. Consequently, Chen et al. (2019) design a metric for EEG CPD by modifying several similarity metrics from other domains. In fitness, Stival et al. (2022) propose a method combining CPD and Gaussian state space models to detect actionable information such as physical discomfort and de-training. ### Anomaly detection methods Approaches for anomaly detection in multivariate time series are varying, and several challenges exist based on the method, and applied area (Pang et al., 2021). In deep learning, the LSTM encoder-decoder (or LSTM autoencoder) has received a lot of attention. Malhotra et al. (2016) demonstrate the robustness of using an LSTM in an autoencoder framework. Similarly, Park et al. (2018) propose the LSTM-VAE, which combines the temporal modeling strength of LSTM with the variational inference capability of a VAE. The resulting model obtains better generalization for multimodal sensory signals. The Deep Autoencoding Gaussian Mixture Model (DAGMM) jointly optimizes two components to enable optimal anomaly detection, an autoencoder computes a low-dimensional representation, and a gaussian mixture model that predicts sample membership from the compressed data (Zong et al., 2018). Audibert et al. (2020) propose an adversarially trained autoencoder framework to detect anomalies. For anomalous driving detection, Sadhu et al. (2019) introduce a multi-task architecture that magnifies rare maneuvers using domain knowledge regarding the frequency of driving actions. ### Life event detection To the best of our knowledge, there are two works similar to ours. First, Faust et al. (2021) use OCSVM to assess the response to an adverse life event using physiological and behavioral signals from wrist-worn wearables. They focus on examining the responses to the adverse event and the coping strategies employed by the participant. Their findings suggest the existence of behavioral deviations after a negative event, motivating us to focus on prediction. Second, Burghardt et al. (2021) detect abnormal LEs using wrist-worn wearables from hospital and aerospace workers. Their method works by first creating a time series embedding using a hidden markov model variant and then uses a random forest or logistic regression for classification. Our work differs from the previous studies in several ways: (1) we use smartphone behavioral data instead of wearable physiological data, (2) we consider postive, negative, and multiple LEs (differing from Faust et al. (2021)), (3) we focus on deep models instead of traditional ML, (4) our data has an extremely low anomaly ratio (\(<2\%\)) compared to Burghardt et al. (2021) (\(11.7\%\) and \(14.9\%\)). Thus, we view our problem to be significantly challenging. Moreover, we provide crucial statistical motivation to pursue LE detection. ## 3 Study The Tesserae study (Mattingly et al., 2019) recruited 757 information workers across different companies in the USA for one year where participants respond to several surveys. They were instrumented with a Garmin vivoSmart 3 wearable and a continuous sensing app is installed on their phone. Participants are instructed to maintain data compliance level of 80% to warrant eligibility for monetary remuneration. The sub-cohort's age ranged from 21 to 64, with an average of 34. Of the 126 individuals, the dataset is fairly balanced with 67 and 59 identified as male and female, respectively. 
The top 3 areas of occupation were Computer and Mathematical Sciences, Business and Finance, and Engineering. Roughly 98% of the participants had at least a college degree. In terms of mobile platform, the cohort had 66 Android users and 60 iOS users. Please refer to the link in the data availability statement to learn about the Tesserae study. Additional demographic information is listed in Appendix A.3. ### Features In contrast to studies using wearable physiology data (Faust et al., 2021; Burghardt et al., 2021), we use daily summaries of behavioral mobile sensing features in our analyses. Overall, we used walking duration, sedentary duration, running duration, distance traveled, phone unlock duration, number of phone unlocks, number of locations visited, and number of unique locations visited. Further, to better understand user behavior, we divide the features (except number of unique locations visited) into 4 "epochs" for modelling: epoch 1 (12am - 6am), epoch 2 (6am - 12pm), epoch 3 (12pm - 6pm), and epoch 4 (6pm - 12am). Ultimately, 29 features were used for analyses. Previous studies elucidate the importance of these features to understand human behavior from a workplace context (Nepal et al., 2020; Mirjafari et al., 2019). ### Ground Truth The definition of a **significant life event** is subjective and depends on the individual. We adopt a widely accepted definition from job stress literature, which describes these events as situations where psychological demand exceeds available resources (Karasek et al., 1981). After study completion, participants were asked to describe significant LEs using their diaries, calendars, and other documents. Participants provided free-text descriptions for every event, start and end dates, date confidence, significance, valence (positive/negative), type of event, and workplace performance impact. Valence, date confidence, and workplace performance are reported on a 1-7 likert scale as follows: (1) Valence: "1" indicated "Extremely Positive" and "7" indicated "Extremely Negative", and (2) Date confidence: "1" indicated "Lowest confidence" and "7" indicated "Highest confidence". Workplace performance impact is assigned to one of the following - "Large Negative Effect", "Medium Negative Effect", "Small Negative Effect", "No Effect", "Small Positive Effect", "Medium Positive Effect", "Large Positive Effect". Our selection criteria is as follows: (1) Valence must be Extremely Positive, Very Positive, Very Negative, or Extremely Negative, and (2) Date confidence must be "Moderately High", "High", or "Highest". Next, we set a 30-day date limit before and after an event for analysis. For overlapping events, the limit is set 30 days before and after the first and last events, respectively. These choices are based on a study by Faust et al. (2021) examining the impact of LEs within a 60-day period. Finally, the missingness and uneven spacing (discontinuous days) within this time frame must be \(<25\%\). Every day was labelled as "1" indicating a rare event or "0" indicating a normal event. For workplace performance, the label is forward filled, and an "Unknown" label is assigned to days before the rare event. Our final dataset consists of 10106 days from 126 participants with 198 rare LEs (\(<2\%\)). ## 4 Statistical Testing Initially, we ask the question: "Does the behavior of an individual change after an LE?". If so, what features are significant in most of the time series? To this end, we applied the granger-causality test as follows. 
First, we split the 159 multivariate time series (29 features) into two parts, one before and including the rare event (\(T^{pre}\)) and the other after the rare event (\(T^{post}\)). Next, for each feature, a Granger causality test is applied to investigate if \(T^{pre}\) Granger-causes \(T^{post}\). A \(p<0.05\) implies the time series for the specific feature is significant. Finally, we sum the significant time series across each feature. For example, in Figure 1, loc_visit_num_ep_1 has a value of 42, which implies that 42 out of 159 time series were Granger-cause significant. We used the statsmodels package for Python to apply the Granger causality test with a lag of up to 6. The SSR-based F-test is used to evaluate statistical significance at \(p=0.05\). Significance at any lag is considered Granger causality, and the total number of significant time series (out of 159) for each feature is displayed in Figure 1.

Figure 1: Heatmap indicating the number of time series (out of 159) that are Granger-cause significant at \(p<0.05\). Larger values imply that the corresponding feature changes after an LE in many participants, and could be important for detection. The x-axis is the number of significant time series, i.e., the count of significant time series out of 159. The y-axis shows the mobile sensing features. "act" is an activity which can be still, walking, or running. "loc" specifies location. "dist" is distance. "num" is number.

From Figure 1, we observe that the number of locations visited and location distance between 12am-6am and 12pm-6pm are significantly impacted by LEs in several cases, suggesting location behaviors are crucial to LE detection. In addition, we observe that walking and running have approximately the same number of Granger-cause time series across epochs, suggesting an overall change throughout the day rather than a shift in the timing of these activities. In contrast, sedentary action varies across epochs, suggesting that LEs might affect the sedentary state at different times of the day. Also, unlock duration and count vary due to a life event. While the number of unlocks between 12am-6am has the largest number of significant time series, the unlock duration between 6am-12pm has more significance comparatively. ## 5 Multi-task anomaly detection ### Problem formulation Given a set of \(I\) participants, we define a multivariate time series for each user \(u\in\{1,\ldots,I\}\) with \(T\) days as \(\mathcal{T}^{u}=\{\mathbf{x}_{1}^{u},\ldots,\mathbf{x}_{T}^{u}\}\), where \(\mathbf{x}\in\mathbb{R}^{m}\), \(m\) is the number of mobile sensing features, and \(t\in\{1,\ldots,T\}\) is a specific day. To model temporal dependencies, we apply a rolling window approach to the time series. A window at day \(t\) with a predefined length \(l\) is given by: \[W_{t}=\{\mathbf{x}_{t-l+1},\ldots,\mathbf{x}_{t-1},\mathbf{x}_{t}\} \tag{1}\] Using equation (1), the user's multivariate time series \(\mathcal{T}^{u}\) is transformed into a collection of windows \(\mathcal{W}=\{W_{1},\ldots,W_{T}\}\), where \(W\in\mathbb{R}^{l\times m}\). Next, a window is assigned a binary label \(y^{R}\in\{0,1\}\), where \(y^{R}=1\) indicates a rare life event at the time \(t\) (i.e., \(y^{R}_{t}=1\)) and \(y^{R}=0\) indicates a normal event in all other cases. Observe that we only consider the exact day of the rare event to be a rare window.
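A minimal sketch of this window construction and labeling, assuming `series` is a \(T\times m\) array of one participant's daily features and `rare_days` is the set of day indices with a rare event (both names are illustrative, not from the paper):

```python
import numpy as np

def make_windows(series, rare_days, l=10):
    """Build rolling windows W_t = {x_{t-l+1}, ..., x_t} (Eq. 1) with binary labels.

    A window is labeled rare (y^R = 1) only when its most recent day t is a
    rare event day; all other windows are labeled normal (y^R = 0).
    """
    windows, labels = [], []
    for t in range(l - 1, len(series)):
        windows.append(series[t - l + 1:t + 1])    # shape (l, m)
        labels.append(1 if t in rare_days else 0)
    return np.stack(windows), np.array(labels)
```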
Given that each participant's windows are transformed separately, we generalize the entire collection of normal and rare event windows across participants as \(\mathcal{W}_{normal}=\{W_{1},\ldots,W_{N}\}\) and \(\mathcal{W}_{rare}=\{W_{1},\ldots,W_{R}\}\), respectively. In our context, we define a multi-task model with two related tasks trained using \(\mathcal{W}_{normal}\). First, an unsupervised learning task is trained to reconstruct the input \(\mathcal{W}_{normal}\); it produces higher errors, or anomaly scores, when reconstructing \(\mathcal{W}_{rare}\), thus facilitating rare life event detection. Second, a supervised learning task is trained to contextualize and scale the anomaly score. Here, the model is trained on \(\mathcal{W}_{normal}\) to predict a workplace performance vector \(\mathbf{y}\in\mathbb{R}^{l}\), where each day \(t\in\{1,\ldots,l\}\) in \(W\), represented by \(y_{t}\), belongs to one of the workplace performance labels described in Section 3.2. _Problem Statement._ Given a participant's multivariate time series window \(W_{t}\) and the corresponding workplace performance vector \(\mathbf{y}\), the objective of our problem is to train a multi-task framework capable of detecting a rare life event at time \(t\). ### Multi-task Architecture Our multi-task framework (Figure 2) consists of three components: an encoder \(E\) which maps a window \(W\) to a low-dimensional representation (latent space) \(Z\), a decoder \(D\) to reconstruct \(W\) from \(Z\) (5.2.1), and a sequence predictor \(P\) to predict the workplace performance vector \(\mathbf{y}\) (5.2.2). #### 5.2.1 Unsupervised Autoencoder (Task A) We capture temporal dependencies from the multivariate time series using LSTMs (Hochreiter and Schmidhuber, 1997) to build the encoder-decoder architecture. An LSTM encoder learns from an input window \(W\) by running through the day-wise input sequence and computes a fixed-size latent representation \(Z\). Next, \(Z\) is copied multiple times to match the length of the window. Finally, the LSTM decoder \(D\) uses \(Z\) to reconstruct the input sequence, and the reconstructed sequence is represented as \(\overline{W}\). We train the LSTM encoder-decoder (LSTM-ED) by minimizing the reconstruction error between \(W\) and \(\overline{W}\) using the mean squared error defined as: \[\mathcal{L}_{A}=\frac{1}{l\times m}\|W-\overline{W}\|_{F}^{2} \tag{2}\] where \(\overline{W}=D(Z)\); \(Z=E(W)\); \(\|\cdot\|_{F}\) is the Frobenius norm. Recall that we only use \(\mathcal{W}_{normal}\) to train the LSTM-ED to learn normal event representations. Therefore, by using the reconstruction error as an anomaly score \(\alpha\), we can detect rare events based on their higher \(\alpha\) values. However, it is possible that some participants or events do not exhibit significant behavior changes that can be captured by our LSTM-ED through \(\alpha\). To address this challenge, we attempt to identify anomalies through a supervised learning setup in the next section. Srivastava et al. (2015) describe LSTM encoder-decoder architectures in detail.

```
Input: \(\mathcal{D}_{train}\) with \(\mathcal{W}_{normal}=\{W_{1},\ldots,W_{N}\}\), \(\{Y_{1},\ldots,Y_{N}\}\), class weight vector \(\mathbf{w}\), and number of epochs \(E\).
Output: Trained \(E\), \(D\), \(P\)
\(E\), \(D\), \(P\) \(\leftarrow\) initialize weights; \(e\gets 1\);
repeat
  for \(n\gets 1\) to \(N\) do
    \(Z_{n}\gets E(W_{n})\); \(\overline{W_{n}}\gets D(Z_{n})\); \(\widehat{Y_{n}}\gets P(Z_{n})\);
    \(\mathcal{L}_{A}\leftarrow\frac{1}{l\times m}\|W_{n}-\overline{W_{n}}\|_{F}^{2}\);
    \(\mathcal{L}_{B}\leftarrow-\sum_{i=1}^{l}\sum_{j=1}^{c}Y_{nij}\times\ln(\widehat{Y_{nij}})\times w_{j}\);
    \(\mathcal{L}\leftarrow\mathcal{L}_{A}+\mathcal{L}_{B}\);
    \(E,D,P\) \(\leftarrow\) update weights using \(\mathcal{L}\);
  end for
  \(e\gets e+1\);
until \(e=E\);
```
**Algorithm 1** Training

#### 5.2.2 Sequence Prediction (Task B) To scale the anomaly score \(\alpha\), we train a supervised sequence predictor \(P\) to detect day-wise workplace performance. The window \(W\) and a true workplace performance label vector \(\mathbf{y}\in\mathbb{R}^{l}\) are used as training inputs, where the label for day \(t\in\{1,\ldots,l\}\) in \(W\), represented by \(y_{t}\), has one of the performance labels described in Section 3.2. Moreover, \(Y\in\mathbb{R}^{l\times c}\) represents one-hot vectors from \(\mathbf{y}\) with \(c\) classes (\(c=8\)). Observe that \(W\) is the same for tasks A and B. Hence, our architecture shares the LSTM encoder network \(E\) described in Section 5.2.1. For Task B, \(P\) is composed of an LSTM network to further extract temporal features from the latent representation \(Z\), followed by a fully connected layer \(FC\) (softmax activation) to predict day-wise class probabilities \(\widehat{Y}\). At inference, \(\widehat{Y}\) is mapped to the predicted workplace performance label vector \(\widehat{\mathbf{y}}\). The model is optimized using the weighted categorical cross-entropy loss function defined by: \[\mathcal{L}_{B}=-\sum_{i=1}^{l}\sum_{j=1}^{c}Y_{ij}\times\ln(\widehat{Y_{ij}}) \times w_{j} \tag{3}\] where \(\widehat{Y}=\text{softmax}(FC)\); \(w_{j}\) is the weight for class \(j\). From equations (2) and (3), we can represent the final loss function as \(\mathcal{L}=\mathcal{L}_{A}+\mathcal{L}_{B}\). The proposed multi-task architecture is trained in an end-to-end fashion, where tasks A and B are jointly optimized by minimizing \(\mathcal{L}\) (Algorithm 1). #### 5.2.3 Inference The detection phase workflow (Algorithm 2) computes the anomaly score \(\alpha\) from Task A and a scaling factor \(s\) from Task B for each test window. **Anomaly Score.** Recall that our goal is to identify a rare event on the exact day. Thus, we unroll \(W\) and \(\overline{W}\) to compute the score for the most recent day \(t\) as follows: \[\alpha=\frac{1}{m}\sum_{j=1}^{m}(x_{j}-\overline{x_{j}})^{2} \tag{4}\] where \(\mathbf{x}\) and \(\overline{\mathbf{x}}\) are the true and reconstructed multivariate time series, respectively; \(m\) is the number of mobile sensing features. **Scaling factor.** Stressful life events affect work-life balance in US workers and reduce productivity (Hobson et al., 2001). Thus, there is reason to believe workplace performance shifts after an LE. To capture this change, we first transform the predicted workplace performance vector \(\widehat{\mathbf{y}}\) into a binary vector \(\mathbf{r}\in\mathbb{R}^{l-1}\) defined as: \[r_{t-1}=\begin{cases}1&\widehat{y}_{t}\neq\widehat{y}_{t-1}\\ 0&otherwise\end{cases}\] where \(t\in\{2,\ldots,l\}\). In essence, we identify the day of workplace performance change and use it as a proxy for rare event detection.
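For concreteness, a minimal PyTorch sketch of the shared-encoder architecture in Figure 2 and the joint loss \(\mathcal{L}=\mathcal{L}_{A}+\mathcal{L}_{B}\) might look as follows; the layer sizes, the use of the final hidden state as \(Z\), and feeding the repeated latent vector to the predictor are our assumptions rather than details from the paper:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MTADNet(nn.Module):
    """Shared LSTM encoder with a reconstruction decoder (Task A) and a
    day-wise workplace-performance predictor (Task B)."""
    def __init__(self, n_features=29, latent_dim=32, n_classes=8):
        super().__init__()
        self.encoder = nn.LSTM(n_features, latent_dim, batch_first=True)
        self.decoder = nn.LSTM(latent_dim, n_features, batch_first=True)
        self.predictor = nn.LSTM(latent_dim, latent_dim, batch_first=True)
        self.fc = nn.Linear(latent_dim, n_classes)

    def forward(self, w):                                  # w: (batch, l, m)
        _, (h, _) = self.encoder(w)
        z = h[-1]                                          # latent Z: (batch, latent_dim)
        z_rep = z.unsqueeze(1).repeat(1, w.size(1), 1)     # copy Z for each day
        w_hat, _ = self.decoder(z_rep)                     # reconstruction of W
        p_out, _ = self.predictor(z_rep)
        y_logits = self.fc(p_out)                          # (batch, l, c) day-wise logits
        return w_hat, y_logits

def joint_loss(w, w_hat, y_true, y_logits, class_weights):
    """L = L_A + L_B: MSE reconstruction plus weighted cross-entropy (Eqs. 2 and 3)."""
    loss_a = F.mse_loss(w_hat, w)
    loss_b = F.cross_entropy(y_logits.reshape(-1, y_logits.size(-1)),
                             y_true.reshape(-1), weight=class_weights)
    return loss_a + loss_b
```

At inference, the reconstruction error of the most recent day gives \(\alpha\), and the argmax over `y_logits` gives the predicted label vector \(\widehat{\mathbf{y}}\).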
As an example of the transition vector defined above, consider \(\widehat{\mathbf{y}}=\) ["Unknown", "Unknown", "Large Negative Effect"] with \(l=3\); the corresponding transition vector is \(r=[0,1]\). Initially, we assumed that a value "1" on the most recent day could be directly used to detect the rare event. However, this idea had two major limitations. First, it is possible that larger window sizes might have multiple transitions (\(r_{t}=1\)). Second, erroneous predictions might hinder detection. Thus, we exponentially weight our transition vector \(r\). Intuitively, more recent workplace performance shifts have a larger impact on behavioral changes owing to an LE. The scaling factor \(s\) aims to capture the abovementioned effect as follows: \[s=\frac{1}{l-1}\sum_{t=1}^{l-1}e^{-\lambda tr_{t}} \tag{5}\] where \(\lambda\) is a constant decay factor.

Figure 2: The proposed multi-task learning architecture illustrating training information flow for a window \(W\) of length \(l\).

**Detection.** The final scaled anomaly score \(\delta\) is computed from equations (4) and (5) in the following way: \(\delta=\frac{\alpha}{s}\). Observe that \(\delta=\alpha\) when \(\mathbf{r}\) is a zero vector, i.e., a vector with no workplace performance changes. Ultimately, a window \(W_{t}\) with a scaled anomaly score \(\delta\) has a rare life event at \(t\) (\(y_{t}^{R}=1\)) if \(\delta\) is greater than a threshold \(\gamma\). However, the scarcity of rare events hindered threshold tuning based on performance metrics. Thus, \(\gamma\) is set to the 95th percentile of anomaly scores from the validation data set.

```
Input: \(\mathcal{D}_{test}\) with \(\mathcal{W}=\{W_{1},\dots,W_{N+R}\}\), \(\gamma\) from \(D_{val}\), \(l\), \(\lambda\).
Output: \(\mathbf{y^{R}}=\{y_{1}^{R},\dots,y_{N+R}^{R}\}\)
for \(n\gets 1\) to \(N+R\) do
  \(Z_{n}\gets E(W_{n})\); \(\overline{W_{n}}\gets D(Z_{n})\); \(\widehat{Y_{n}}\gets P(Z_{n})\);
  \(\mathbf{x},\overline{\mathbf{x}}\leftarrow\text{unroll}(W_{n})\), \(\text{unroll}(\overline{W_{n}})\);
  \(\alpha\leftarrow\frac{1}{m}\sum_{j=1}^{m}(x_{j}-\overline{x_{j}})^{2}\);
  \(\widehat{\mathbf{y}}\leftarrow\) get class labels from probabilities \(\widehat{Y_{n}}\);
  \(\mathbf{r}\leftarrow\) compute binary transition vector from \(\widehat{\mathbf{y}}\);
  \(s\leftarrow\frac{1}{l-1}\sum_{t=1}^{l-1}e^{-\lambda tr_{t}}\);
  \(\delta\leftarrow\frac{\alpha}{s}\);
  if \(\delta>\gamma\) then \(y_{n}^{R}\gets 1\); else \(y_{n}^{R}\gets 0\); end if
end for
```
**Algorithm 2** Inference

### Experimental Setup We performed several pre-processing steps for optimal training and inference. First, we impute missing data and unevenly sampled time series using the mean. Second, we forward fill missing rare event and workplace performance labels. Third, within-subject feature normalization is applied as a precursor for personalization. Finally, each participant's time series is transformed into windows of length \(l=10\) using equation (1). In our analysis, we divide \(\mathcal{W}_{normal}\) and the corresponding \(y\) into training (\(\mathcal{D}_{train}\)), validation (\(\mathcal{D}_{val}\)), and test (\(\mathcal{D}_{test}\)) sets with a ratio of 80:10:10, respectively. Next, we append \(\mathcal{W}_{rare}\) to \(\mathcal{D}_{test}\). Note that rare events are held out for testing and are neither used for training nor validation. Moreover, we generate ten different user-dependent splits to ensure the training, validation, and test data sets consist of time series from all participants.
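A small sketch of the scoring step of Algorithm 2, assuming `alpha` is the reconstruction error of the most recent day and `y_hat` the predicted day-wise labels for one window (the variable names and the default decay constant are illustrative):

```python
import numpy as np

def scaled_anomaly_score(alpha, y_hat, lam=2.0):
    """Compute delta = alpha / s, where s exponentially down-weights windows with
    recent workplace-performance transitions (Eq. 5), inflating their scores."""
    l = len(y_hat)
    # r_t = 1 where the predicted label changes between consecutive days.
    r = np.array([1.0 if y_hat[t] != y_hat[t - 1] else 0.0 for t in range(1, l)])
    t = np.arange(1, l)
    s = np.mean(np.exp(-lam * t * r))
    return alpha / s

def detect(alpha, y_hat, gamma, lam=2.0):
    """Flag a rare life event when the scaled score exceeds the threshold gamma."""
    return int(scaled_anomaly_score(alpha, y_hat, lam) > gamma)
```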
To assess positive class performance in imbalanced data, we use precision (P), recall (R), and the F1-score, and report the mean and standard deviation across the splits. The windows are unrolled at inference to detect a rare event on the exact day. Thus, identifying if a rare event is present within a window is not considered to be an accurate detection. ### Results In this section, we examine the properties of the proposed framework by comparing it with other baselines (5.4.1), analyzing the strengths of personalization (5.4.2), and estimating the changes in performance at different window sizes and decay constants for the scaling factor (5.4.3). Next, we perform an ablation study to assess the necessity of the sequence predictor (5.4.4). Finally, we assess the types of events identified by MTAD (5.4.5). #### 5.4.1 Performance We evaluate the performance of our algorithm by comparing it with five state-of-the-art baselines for anomaly detection, namely OCSVM, IF, LSTM-VAE, DAGMM, and LSTM-ED. As shown in Table 1, MTAD performs significantly better than all traditional machine learning and deep learning methods in terms of P, R, and F1. Particularly, MTAD's 0.29 F1 score is 2.6 times greater than that of a standard LSTM autoencoder (LSTM-ED). Unlike other methods, DAGMM does not compute a normal event decision boundary, i.e., it is not trained only with normal event data. Consequently, we observe that DAGMM has a higher recall than methods like LSTM-ED, LSTM-VAE, and OCSVM, but it has poor precision. Interestingly, IF performs better than deep models like LSTM-ED and LSTM-VAE. IF directly predicts a rare event without considering temporal information, whereas the LSTM approaches used windows that might contain unknown rare events or behavioral discrepancies, thus resulting in poor performance. Moreover, we observe that unsupervised LSTM autoencoder approaches are sensitive to variance in human behavior (Table 1), suggesting that the computed latent representation might be biased toward a specific user "persona". To address this, we attempt to personalize our approach to improve performance. #### 5.4.2 Personalization Towards personalization, we applied within-subject normalization to capture user-specific behavior changes and computed user-specific thresholds \(\gamma^{u}\) from each individual's validation data. Detecting rare events using these personalized thresholds (PT) yields performance improvements for all threshold-based methods, as shown in Table 2. Overall, MTAD-PT is the best-performing method with an F1 of 0.34, a 0.05 increase over the general model. From Tables 1 and 2, we see that unsupervised methods like LSTM-VAE-PT and LSTM-ED-PT show the largest F1 score improvements of 0.14 and 0.15, respectively. Interestingly, by personalizing MTAD, we observe a trade-off between precision and recall. Additionally, methods like IF directly predict an anomaly without thresholds and cannot be personalized without training. Our experiments show that methods like MTAD can achieve performance improvement simply by personalized thresholding without additional training, demonstrating their advantage in human-centered problems.
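A sketch of this personalized thresholding, assuming `val_scores` maps each user to the scaled anomaly scores of their own validation windows (the dictionary layout is our assumption):

```python
import numpy as np

def personalized_thresholds(val_scores, q=95):
    """Per-user threshold gamma_u: the q-th percentile of that user's validation
    anomaly scores; no model retraining is required."""
    return {user: np.percentile(scores, q) for user, scores in val_scores.items()}

def detect_for_user(user, delta, thresholds):
    """Flag a rare life event for `user` when the scaled score exceeds gamma_u."""
    return int(delta > thresholds[user])
```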
| Algorithm | P (std) | R (std) | F1 (std) |
| --- | --- | --- | --- |
| OCSVM | 0.32 (0.04) | 0.07 (0.00) | 0.12 (0.06) |
| IF | 0.22 (0.01) | 0.18 (0.01) | 0.20 (0.00) |
| LSTM-VAE | 0.28 (0.04) | 0.07 (0.01) | 0.12 (0.01) |
| DAGMM | 0.04 (0.01) | 0.11 (0.02) | 0.06 (0.01) |
| LSTM-ED | 0.25 (0.03) | 0.07 (0.01) | 0.11 (0.01) |
| MTAD | **0.47 (0.05)** | **0.21 (0.03)** | **0.29 (0.04)** |

Table 1: Comparison of MTAD with baselines using precision (P), recall (R), and F1-score (F1).

| Algorithm | P (std) | R (std) | F1 (std) |
| --- | --- | --- | --- |
| LSTM-VAE-PT | 0.25 (0.02) | 0.27 (0.02) | 0.26 (0.02) |
| LSTM-ED-PT | 0.26 (0.02) | 0.26 (0.02) | 0.26 (0.01) |
| MTAD-PT | **0.33 (0.02)** | **0.35 (0.03)** | **0.34 (0.03)** |

Table 2: Comparison of personalized threshold models using precision (P), recall (R), and F1-score (F1).

Figure 3: Comparison between general LSTM-ED and MTAD at different window sizes.

Figure 4: Comparison between LSTM-ED-PT and MTAD-PT at different window sizes.

#### 5.4.3 Effect of parameters **Window size.** Optimal window size is critical for sufficiently discerning differences between normal and rare events. Smaller window sizes allow each day to have more impact, whereas larger ones spread importance across many days. Thus, we evaluate the performance of MTAD, LSTM-ED, MTAD-PT, and LSTM-ED-PT using four different window sizes \(l\in\{6,8,10,12\}\). We observe several interesting trends from Figures 3 and 4. First, the performance of MTAD and MTAD-PT increases with larger window sizes from 6 to 10. However, the difference in F1 score of MTAD-PT at \(l=10\) and \(l=12\) (0.34 vs 0.34) is insignificant. Second, the performance of LSTM-ED deteriorates gradually with increasing window sizes. Conversely, the LSTM-ED-PT F1 score increases, demonstrating the robustness of using user-specific thresholds. Third, by personalizing, we observe a trade-off between P and R for MTAD at all window sizes. For our problem, the higher recall is acceptable, and we use a window size of \(l=10\). **Decay constant.** The constant \(\lambda\) used in exponential weighting determines the intensity of decay: higher values drastically reduce the weight of days farther from the current day compared to lower values. We evaluated the sensitivity of MTAD and MTAD-PT at different decay constants \(\lambda\in\{0.5,2,5,10\}\) and identified no significant changes (Appendix A.4). Intuitively, we expect this behavior as the proposed method only magnifies the anomaly score of windows with at least one rare event, leaving windows with normal events unchanged. #### 5.4.4 Utility of the sequence predictor We evaluate the necessity of having both tasks by performing an ablation study at inference. After training, we treat the sequence predictor as a supervised classifier in which a window \(W\) has a rare event at time \(t\) if the transition vector value \(r_{t-1}=1\), i.e., there is a workplace performance change between the two most recent days. The model obtained a P, R, and F1 of 0.52, 0.04, and 0.07, respectively. An F1 score of 0.07 for the standalone sequence predictor is poor compared to the baselines in Table 1. Additionally, this experiment shows that the combined multi-task model has superior performance compared to the standalone methods.
#### 5.4.5 Analyzing event type and valence We analyze the events identified by our model and observe that it is capable of identifying personal, work, health, and financial events (Table 3). These types of events directly affect the participant or their relatives. Our method is unable to identify societal and miscellaneous events. These events could be related to politics, sports, or other local activities which could indirectly affect mood. From Table 3, we observe that MTAD-PT is fairly balanced at detecting positive and negative events, with similar recall (0.40 vs 0.39).

| Event Type | Total Events | Events Detected (R) |
| --- | --- | --- |
| _Type_ | | |
| Personal | 92 | 41 (0.44) |
| Work | 69 | 21 (0.30) |
| Health | 14 | 7 (0.50) |
| Financial | 13 | 9 (0.69) |
| Societal | 8 | 0 (0.00) |
| Other | 2 | 0 (0.00) |
| _Valence_ | | |
| Positive | 136 | 54 (0.40) |
| Negative | 62 | 24 (0.39) |

Table 3: Events detected and recall (R) using MTAD-PT, distributed by type and valence.

## 6 Discussion ### Summary of results Initially, Granger causality testing suggested that location features are crucial in detecting behavioral changes after an event. Perhaps both negative and positive events, such as the loss of a loved one or a vacation, result in changes in location dynamics. We observed that 46 time series were significant for location distance between 12am-6am. This is considerably large from an extreme anomaly detection perspective, motivating us to build a multi-task learning model to detect rare LEs. The results of rare life event detection presented in Section 5.4 highlight the advantages of our deep learning approach. A multi-task setup can be used to overcome the deficiencies of a purely unsupervised deep learning model. Specifically, the presence of a severe class imbalance (\(<2\%\)) can be addressed using our method. In comparison to Burghardt et al. (2021), we achieve a comparable F1 of 0.34 on a more challenging problem because of the aforementioned class imbalance (11.7% & 14.9% vs. 1.9%). Moreover, our method can detect both positive and negative LEs in addition to different event types such as personal, work, health, and financial (Burghardt et al., 2021). Our approach can be extended to other applications by appropriately identifying auxiliary tasks to positively transfer knowledge to the main task. With the future vision of deploying models on mobile devices, it is imperative that models are not retrained on the phone. Higher computational costs of training models result in slower application performance. Consequently, human interaction with the application might be reduced. Personalizing the thresholds for each individual without re-training addresses this issue while simultaneously improving the performance of MTAD and other unsupervised models. ### Implications Two major directions could greatly benefit from our work. First, as LEs are difficult to identify without explicit questions, detecting them using mobile phones is valuable. Second, interventions can alleviate the stressful impact of LEs. **Detection.** Generally, LEs are identified only through self-reports. Automated detection in-the-wild is difficult because of the subtleties of human behavior. Our analysis of data from Android and iOS devices illustrates the effectiveness of using passive sensing data instead of conducting monthly interviews or similar alternative methods with employees.
Thus, mobile phones are a low-burden option offering continuous real-time sensing to detect rare LEs. Ultimately, we envision detection leading to helpful and empathetic interventions. **Ubiquitous health interventions.** Individuals can use our detection algorithm for teletherapy and self-monitoring. First, in teletherapy, smartphone sensing applications can connect people to counselors, mentors, or therapists offering event-specific services. For example, a death in the family requires the expertise of a grief counselor, whereas a mentor can help tackle the stressors of a new job promotion. Applications like Talkspace and Betterhelp offer several services in this sector, and our methods can be integrated with similar applications. Second, our algorithm can be extended for self-monitoring, where an app tracks anomalous behaviors longitudinally--ultimately suggesting intervention strategies for significant behavioral deviations. Organizations should be proactive in improving the mental health and wellness of their employees. Here, we describe three intervention scenarios using smartphones. First, having helpful co-workers reduces negative triggers. Our method may be used to detect a life event passively and provide incentives to an information worker to help colleagues. Second, Spector and Fox (2002) suggest that emotion about control over a stressful event affects health. Thus, emotion regulation strategies such as cognitive re-appraisal (re-interpreting a situation) can be prompted to the user through mobile apps (Katana et al., 2019). Third, organized leisure crafting has been shown to positively affect mental health and can be used as an intervention tool (Ugwu, 2017). ### Limitations Some limitations of this work should be noted. First, we do not address the "cold-start problem" to evaluate performance for an unseen user. Thus, our model with personalization requires user behavioral data to construct specific thresholds and latent spaces for detecting LEs in the future. Second, it is useful to understand which mobile features contribute the most to detection. The latent features constructed by autoencoders are unobserved feature spaces compressed into a low-dimensional representation for reconstruction and prediction. Therefore, interpretation of these features is not straightforward, and the additional scaling of the auxiliary task further hinders this ability. Third, some rare events cannot be detected because of their similarity to normal events. In essence, there are several confounding factors that may or may not elicit a behavioral change in an individual. For example, if a person's responsibilities are similar after a job promotion, their routine and actions might not be significantly different. Conversely, it is also possible that normal days are anomalies and not related to the events themselves. ### Ethical considerations While monitoring worker behavior has benefits to health, it also highlights the need for ethical data usage. For instance, organizations analyzing mobile phone data should use informed consent to protect private data. The primary intention of life event detection must be to offer help and support. Nevertheless, sensing data can be used adversarially to monitor employee productivity to maximize benefits for the organization. This sacrifices trust between employee and organization while damaging the interaction between people and technology. We do not advocate the use of mobile sensing methods like event detection in these scenarios.
Future studies must collect data only through transparent and ethical processes to protect employees' data. Moreover, extreme anomaly detection has higher error rates owing to its challenging nature. Therefore, having a human-in-the-loop is a necessity. We discuss two scenarios of good and bad interventions. **Scenario 1.** A recently promoted information worker is overwhelmed and unable to meet product goals. A **good** intervention offers resources for mentorship and stress management. A **bad** intervention gives ultimatums that affect job security. **Scenario 2.** An employee recovering from a physical injury struggles to keep up with their old workload. A **good** intervention connects them to an accessibility expert or counselor to help them with their specific issues. A **bad** intervention monitors the employee's performance and puts them on performance review. ## 7 Conclusion In this paper, we showed that mobile sensing can address the challenging task of detecting rare LEs in-the-wild. We proposed MTAD, a multi-task framework for life event detection using behavioral sensing data from mobile phones. MTAD's use of an auxiliary sequence predictor addresses several challenges like extreme class imbalance (\(<2\%\)) and a biased reconstruction score. We demonstrated the superior performance of our approach on a real-world longitudinal dataset by comparing it with state-of-the-art baselines. From a human-centered perspective, MTAD's effectiveness in personalization without additional training, robustness to the decay constant, and balanced prediction of positive and negative events are desirable qualities. Ultimately, we envision that our work will motivate ubiquitous health-based intervention strategies through smartphones. ## Acknowledgments This work is supported in part by the Army Research Laboratory (ARL) under Award W911NF202011. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ARO or the U.S. Government.
Rare life events significantly affect mental health, and detecting them is an important step in behavioral studies that paves the way toward health-based interventions. We believe that mobile sensing data can be used to detect these anomalies. However, the person-centered nature of this problem, together with the rarity and uniqueness of these events, poses challenges for unsupervised machine learning methods. In this paper, we first investigate life events and the randomness of human behavior using sensing data. We then propose a multi-task framework with an unsupervised autoencoder to capture irregular behavior and an auxiliary sequence predictor to identify changes in workplace performance. We conduct experiments on data from a mobile sensing study of 126 information workers, covering 10,106 days of data and 198 rare events (<2%).
2305.00386
Importance Weighted Expectation-Maximization for Protein Sequence Design
Designing protein sequences with desired biological function is crucial in biology and chemistry. Recent machine learning methods use a surrogate sequence-function model to replace the expensive wet-lab validation. How can we efficiently generate diverse and novel protein sequences with high fitness? In this paper, we propose IsEM-Pro, an approach to generate protein sequences towards a given fitness criterion. At its core, IsEM-Pro is a latent generative model, augmented by combinatorial structure features from separately learned Markov random fields (MRFs). We develop a Monte Carlo Expectation-Maximization method (MCEM) to learn the model. During inference, sampling from its latent space enhances diversity while its MRFs features guide the exploration in high fitness regions. Experiments on eight protein sequence design tasks show that our IsEM-Pro outperforms the previous best methods by at least 55% on average fitness score and generates more diverse and novel protein sequences.
Zhenqiao Song, Lei Li
2023-04-30T04:56:36
http://arxiv.org/abs/2305.00386v3
# Importance Weighted Expectation-Maximization for Protein Sequence Design ###### Abstract Designing protein sequences with desired biological function is crucial in biology and chemistry. Recent machine learning methods use a surrogate sequence-function model to replace the expensive wet-lab validation. How can we efficiently generate diverse and novel protein sequences with high fitness? In this paper, we propose IsEM-Pro, an approach to generate protein sequences towards a given fitness criterion. At its core, IsEM-Pro is a latent generative model, augmented by combinatorial structure features from separately learned Markov random fields (MRFs). We develop a Monte Carlo Expectation-Maximization method (MCEM) to learn the model. During inference, sampling from its latent space enhances diversity while its MRFs features guide the exploration in high fitness regions. Experiments on eight protein sequence design tasks show that our IsEM-Pro outperforms the previous best methods by at least 55% on average fitness score and generates more diverse and novel protein sequences.
## 1 Introduction In this work, we propose an importance sampling based expectation-maximization approach to design novel, diverse and desirable **Pro**tein sequences (IsEM-Pro). Specifically, we introduce a latent variable in the generative model to capture the inter-dependencies in protein sequences. Sampling in latent space leads to more diverse candidates and can escape from locally optimal fitness regions. Instead of using standard variational inference models such as variational auto-encoder (VAE) (Kingma and Welling, 2013), we leverage importance sampling inside the EM algorithm to learn the latent generative model. As illustrated in Figure 1, our approach can navigate through multiple local optima and yield better overall performance. We further incorporate the combinatorial structure of amino acids in protein sequences using Markov random fields (MRFs). This guides the model towards higher fitness regions, leading to a faster uphill path to desired proteins. We carry out extensive experiments on eight protein sequence design tasks and compare the proposed method with previous strong baselines. The contributions of this paper are as follows: * We propose a structure-enhanced latent generative model for protein sequence design. * We develop an efficient method to learn the proposed generative model, based on importance sampling inside the EM algorithm. * Experiments on eight protein datasets with different objectives demonstrate that our IsEM-Pro generates protein sequences with at least 55% higher average fitness score and higher diversity and novelty than previous best methods. Further analyses show that the protein sequences designed by our model can fold stably, giving empirical evidence that our proposed IsEM-Pro has the ability to generate real proteins. ## 2 Background In this section, we review protein sequence design and basic variational inference. ### Protein Sequence Design upon Wild-Type The protein sequence design problem is to search for a sequence with a desired property in the sequence space \(\mathcal{V}^{L}\), where \(\mathcal{V}\) denotes the vocabulary of amino acids and L denotes the desired sequence length. The target is to find a protein sequence with the highest fitness given by a protein fitness function \(f:\mathcal{V}^{L}\rightarrow\mathbb{R}\), which can be measured through wet-lab experiments. Wild-type refers to a protein occurring in nature. Evolutionary search based methods are widely used (Bloom and Arnold, 2009; Arnold, 2018; Angermueller et al., 2019; Ren et al., 2022). They use the wild-type sequence as the starting point during iterative search. In this paper, we do not focus on modifications of the wild-type sequence, but aim to efficiently generate novel and diverse sequences with improved protein functional properties.
### Monte Carlo Expectation-Maximization A latent generative model assumes data x (e.g. a protein sequence) is generated from a latent variable z. A classic algorithm to learn a latent generative model is Monte Carlo expectation-maximization (MCEM). The optimization procedure for maximizing the log marginal likelihood alternates between an expectation step (E-step) and a maximization step (M-step) (Neal and Hinton, 1998; Jordan et al., 1999). EM directly targets the log marginal likelihood of an observation x by involving a variational distribution \(q_{\phi}(z)\): \[\begin{split}\log p_{\theta}(x)&=E_{q_{\phi}(z)}[ \log p_{\theta}(x,z)-\log q_{\phi}(z)]\\ &+D_{KL}(q_{\phi}(z)||p_{\theta}(z|x))\end{split} \tag{1}\] where \(p_{\theta}(z|x)\) is the true posterior distribution and \(p_{\theta}(x,z)=p_{\theta}(x|z)p_{\theta}(z)\) is the joint distribution, composed of the conditional likelihood \(p_{\theta}(x|z)\) and the prior \(p_{\theta}(z)\). In MCEM, the E-step samples a set of z from \(q_{\phi}(z)\) to estimate the expectation in Equation 1 using the Monte Carlo method, and the M-step then fits the model parameters \(\theta\) by maximizing the Monte Carlo estimate (Levine and Casella, 2001). It can be proved that this process will never decrease the log marginal likelihood (Dieng and Paisley, 2019). We will develop our method based on this MCEM framework. ## 3 Proposed Method: IsEM-Pro In this section, we describe our method in detail. We will first present the probabilistic model and its learning algorithm. To make the learning more efficient, we describe how to uncover and use the potential constraints conveyed in the protein sequences.

Figure 2: Workflow of traditional protein sequence design. We aim to accelerate this process by directly generating desirable sequences.

### Problem Formulation Our goal is to search over a space of discrete protein sequences \(\mathcal{V}^{L}\), where \(\mathcal{V}\) consists of 20 amino acids and \(L\) is the sequence length, for a sequence \(x\in\mathcal{V}^{L}\) that maximizes a given fitness function \(f:\mathcal{V}^{L}\rightarrow\mathbb{R}\). Let the fitness value be \(y=f(x)\). Given a predefined threshold \(\lambda\), we define a conditional likelihood function: \[P(\mathcal{S}|x)=\begin{cases}1,&f(x)\geq\lambda,\\ 0,&\text{otherwise}\end{cases} \tag{2}\] where \(\mathcal{S}\) represents the event that the fitness of x is ideal (\(y\geq\lambda\)). Using \(P_{d}(x)\) to denote the protein distribution in nature, we assume we have access to a set of observations of x drawn from it. We also assume a class of generative models \(P_{\theta}(x)\) that can be trained with these samples and can approximate \(P_{d}\) well. Since the search space is exponentially large (O(\(20^{L}\))), random search would be time-intensive. We formulate the protein design problem as generating satisfactory sequences from the posterior distribution \(P_{\theta}(x|\mathcal{S})\): \[P_{\theta}(x|\mathcal{S})=\frac{P_{\theta}(x)P(\mathcal{S}|x)}{P_{\theta}( \mathcal{S})} \tag{3}\] where \(P_{\theta}(\mathcal{S})=\int_{x}P_{\theta}(x)P(\mathcal{S}|x)dx\) is a normalization constant which does not rely on x. Protein sequences generated from \(P_{\theta}(x|\mathcal{S})\) are not only more likely to be real proteins, but also have higher functional scores (i.e., fitness). The higher \(\lambda\) is, the higher the fitness of the discovered protein sequences.
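As a small illustration of Equations 2 and 3, the unnormalized posterior weight of a candidate sequence could be computed as below; `fitness_fn` stands in for the oracle \(f\) and `prior_logprob` for \(\log P_{\theta}(x)\), both of which are placeholders rather than components defined in the paper:

```python
import math

def conditional_likelihood(x, fitness_fn, lam):
    """P(S|x): 1 if the fitness of sequence x reaches the threshold lambda, else 0 (Eq. 2)."""
    return 1.0 if fitness_fn(x) >= lam else 0.0

def unnormalized_posterior(x, fitness_fn, prior_logprob, lam):
    """P_theta(x) * P(S|x), the numerator of Eq. 3; the normalizer P_theta(S) is intractable."""
    return math.exp(prior_logprob(x)) * conditional_likelihood(x, fitness_fn, lam)
```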
### Probabilistic Model Directly generating satisfactory sequences from the posterior distribution \(P_{\theta}(x|\mathcal{S})\) is highly efficient compared with randomly search over the exponentially discrete space. However, realizing this idea is difficult as computing \(P_{\theta}(\mathcal{S})=\int_{x}P(\mathcal{S}|x)P_{\theta}(x)dx\) needs an integration over all possible x, which is intractable. Instead, we propose to learn a proposal distribution \(Q_{\phi}(x)\) with learnable parameter \(\phi\) to approximate \(P_{\theta}(x|\mathcal{S})\). In order to find the optimal \(\phi\) of the proposal distribution, we choose to minimize the KL divergence between the posterior distribution \(P_{\theta}(x|\mathcal{S})\) and the proposal distribution \(Q_{\phi}(x)\): \[\begin{split}\phi^{*}&=\operatorname*{argmax}_{ \phi}-D_{KL}(P_{\theta}(x|\mathcal{S})||Q_{\phi}(x))\\ &=\operatorname*{argmax}_{\phi}E_{P_{\theta}(x|\mathcal{S})}\log Q _{\phi}(x)+\mathcal{H}(\mathcal{P}_{\theta})\end{split} \tag{4}\] where \(\mathcal{H}(\mathcal{P}_{\theta})=-E_{P_{\theta}(x|\mathcal{S})}\log P_{ \theta}(x|\mathcal{S})\) is the entropy of \(P_{\theta}(x|\mathcal{S})\) and can be dropped because it does not matter \(\phi\). Diversity is a key consideration in our protein design procedure, which not only satisfies the diverse nature of species, but also can reduce undesired inter-domain misfolding (Wright et al., 2005). In order to promote the diversity of the designed protein sequences, we introduce a latent variable z into our model to capture the high-order dependencies among amino acids in protein sequences. Thus our final goal is to maximize the log marginal likelihood \(\log Q_{\phi}(x)\) of sequence x from the posterior distribution \(P_{\theta}(x|\mathcal{S})\) integrating over z: \[\begin{split}\mathcal{L}&=E_{P_{\theta}(x|\mathcal{ S})}\log Q_{\phi}(x)\\ &=E_{P_{\theta}(x|\mathcal{S})}\{E_{R_{\omega}(z)}[\log Q_{\phi}( x,z)-\log R_{\omega}(z)]\\ &\qquad\qquad+D_{KL}(R_{\omega}(z)||Q_{\phi}(z|x)\}\\ &=E_{P_{\theta}(x|\mathcal{S})}\mathcal{F}(R_{\omega}(z),\phi) \end{split} \tag{5}\] where the second equality is the EM objective defined in Equation 1 with approximate posterior distribution \(R_{\omega}(z)\). \(\omega\) is jointly learned with \(\phi\) by maximizing the above expectation. For the joint distribution \(Q_{\phi}(x,z)=P(z)Q_{\phi}(x|z)\), we use standard normal distribution for \(P(z)\) and Transformer decoder (Vaswani et al., 2017) for \(Q_{\phi}(x|z)\), which is augmented by the combinatorial structure features introduced in subsection 3.4. ### Importance Weighted EM To maximize the objective defined in Equation 5, we plan to learn our proposal distribution \(Q_{\phi}\) through importance sampling based EM, of which the sampling procedure and iterative optimization process can lead to a better estimate (Figure 1 (b)), resulting in novel and diverse proteins with higher fitness. Since we can not directly sample from \(P_{\theta}(x|\mathcal{S})\) due to the intractable integration factor \(P_{\theta}(\mathcal{S})\), we propose to approximate the expectation using importance sampling with proposal distribution \(Q_{\phi}(x)\)(Neal, 2001; Dieng and Paisley, 2019). Because \(\mathcal{S}\) is only conditioned on x, we have \(P_{\theta}(x,z|\mathcal{S})=\frac{P_{\theta}(x,z)P(\mathcal{S}|x)}{P_{\theta}( \mathcal{S})}\). We assume the latent variable z in \(P_{\theta}(x,z)\) and \(Q_{\phi}(x,z)\) are defined on the same latent space \(\mathcal{Z}\) and have the same prior \(P(z)\). 
Then the final objective in Equation 5 can be reformulated as: \[\begin{split}\mathcal{L}&=E_{Q_{\phi}(x)}\frac{P_{ \theta}(x|\mathcal{S})}{Q_{\phi}(x)}\mathcal{F}(R_{\omega}(z),\phi)\\ &=E_{Q_{\phi}(x,z)}\frac{P_{\theta}(x,z|\mathcal{S})}{Q_{\phi}(x,z )}\mathcal{F}(R_{\omega}(z),\phi)\\ &=\frac{1}{\mathcal{C}}E_{Q_{\phi}(x,z)}\frac{P_{\theta}(x|z)}{Q_{ \phi}(x|z)}P(\mathcal{S}|x)\mathcal{F}(R_{\omega}(z),\phi)\end{split} \tag{6}\] The first equality is due to the importance sampling, the second equality holds because \(\mathcal{F}(R_{\omega}(z),\phi)\) is an integration over z and does not rely on z. In the third equality, \(\mathcal{C}=P_{\theta}(\mathcal{S})\) is a constant which does not rely on \(\phi\) and \(\omega\), and will be dropped in the following learning process. We use importance sampling based EM to approximate the above objective (Hastings, 1970; Levine and Casella, 2001) with joint samples \((x_{n},z_{n})\sim Q_{\phi}(x,z)\). Specifically, at iteration t, the optimization process can be reformulated as : E-step: \[\mathcal{L}_{t}=\frac{\sum_{n=1}^{N}w(x_{n},z_{n})\mathcal{F}(R_{\omega}(z_{n}), \phi)}{\sum_{n=1}^{N}w(x_{n},z_{n})} \tag{7}\] M-step: \[\phi^{(t+1)}=\operatorname*{argmax}_{\phi}\mathcal{L}_{t} \tag{8}\] where \(w(x_{n},z_{n})=\frac{P_{\phi}(x_{n}|z_{n})}{Q_{\phi^{(t)}}(x_{n}|z_{n})}P( \mathcal{S}|x_{n})\) is the unnormalized importance weight, and N is the sample size. Through the combined sampling procedure and iterative optimization process, we can obtain a good proposal distribution \(Q_{\phi}\), from which we can generate satisfactory sequences with small time cost. ### Guiding Model Climbing through Combinatorial Structure As shown in previous work, the combinatorial structure of amino acids in protein sequences can be learned from a generative graphical model Markov random fields (MRFs) fitted on the sequences from the same family (Hopf et al., 2017; Luo et al., 2021). These structure constraints are the results of the evolutionary process under natural selection and may reveal clues on which amino-acid combinations are more favorable than others. Thus we incorporate these features into our model to guide it towards higher fitness landscape to faster find desired protein sequences. Given a protein sequence \(x=(x_{1},x_{2},..,x_{L})\) with \(L\) amino acids, the generative model generates it with likelihood \(P_{L}(x)=\frac{\exp(E(x))}{Z}\) where \(Z=\int_{x}\exp(E(x))dx\) is a normalization constant and E(x) is the corresponding energy function, which is defined as the sum of all pairwise constraints and single-site constraints as follows: \[E(x)=\sum_{i=1}^{L}\varepsilon_{i}(x_{i})+\sum_{i=1}^{L}\sum_{j=1,j\neq i}^{L} \varepsilon_{ij}(x_{i},x_{j}) \tag{9}\] where \(\varepsilon_{i}(x_{i})\) denotes the single-site constraint of \(x_{i}\) at position i and \(\varepsilon_{ij}(x_{i},x_{j})\) denotes the pairwise constraint of \(x_{i}\) and \(x_{j}\) at position i, j. The above graphical model is illustrated in Figure 3. We train the model following CCMpred (Seemayer et al., 2014) using a pseudo-likelihood \(\hat{P}_{L}(x)\) (provided in Appendix E) combined with Ridge regularization to make the learning of \(P_{L}(x)\) easier. 
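A minimal sketch of the energy function in Equation 9, assuming the learned single-site constraints are stored as an \(L\times 20\) array `eps_i` and the pairwise constraints as an \(L\times L\times 20\times 20\) array `eps_ij` indexed by position and amino-acid id (this storage layout is our assumption, not CCMpred's):

```python
import numpy as np

def energy(seq_ids, eps_i, eps_ij):
    """E(x) = sum_i eps_i(x_i) + sum_{i != j} eps_ij(x_i, x_j) for an index-encoded sequence.

    seq_ids: length-L sequence of amino-acid indices in [0, 20).
    """
    L = len(seq_ids)
    single = sum(eps_i[i, seq_ids[i]] for i in range(L))
    pairwise = sum(eps_ij[i, j, seq_ids[i], seq_ids[j]]
                   for i in range(L) for j in range(L) if i != j)
    return single + pairwise
```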
Figure 3: Learning combinatorial structure constraints through Markov random fields.

Different from CCMpred, we additionally add a lasso regularizer to the training objective to make the graph sparse, with the regularization coefficients set to the same values as those of the ridge regularizer: \[\begin{split} L&=\sum_{x}\log\hat{P}_{L}(x)- \mathcal{L}(\varepsilon)-\mathcal{R}(\varepsilon)\\ \mathcal{L}(\varepsilon)&=\lambda_{\text{single}} \sum_{i=1}^{L}||\varepsilon_{i}||_{1}^{1}+\lambda_{\text{pair}}\sum_{i,j=1,i \neq j}||\varepsilon_{ij}||_{1}^{1}\\ \mathcal{R}(\varepsilon)&=\lambda_{\text{single}} \sum_{i=1}^{L}||\varepsilon_{i}||_{2}^{2}+\lambda_{\text{pair}}\sum_{i,j=1,i \neq j}||\varepsilon_{ij}||_{2}^{2}\end{split} \tag{10}\] where \(\varepsilon_{i}=[\varepsilon_{i}(a_{1}),\varepsilon_{i}(a_{2}),...,\varepsilon _{i}(a_{20})]\) is the vector of the single-site constraints of the 20 amino acids at position i, and \(\varepsilon_{ij}=[\varepsilon_{ij}(a_{1},a_{2}),\varepsilon_{ij}(a_{1},a_{3}),...,\varepsilon_{ij}(a_{L},a_{L-1})]\) is the vector of all possible pairwise constraints at position i, j. Following (Kamisetty et al., 2013), we set \(\lambda_{\text{single}}=1\) and \(\lambda_{\text{pair}}=0.2*(L-1)\). After training the MRFs, we can encode a protein sequence x with the learned constraints. Specifically, we first encode the i-th amino acid by concatenating its corresponding single-site constraint as well as the possible pairwise ones: \[\begin{split}\mathbf{\varepsilon_{i}}(x_{i})=[\varepsilon_{i}(x_{i}), \varepsilon_{i1}(x_{i},a_{1}.),...,\varepsilon_{iL}(x_{i},a_{L}.)]\\ \varepsilon_{ij}(x_{i},a_{j}.)=[\varepsilon_{ij}(x_{i},a_{1}), \varepsilon_{ij}(x_{i},a_{2}),...,\varepsilon_{ij}(x_{i},a_{20})]\end{split} \tag{11}\] where \(\varepsilon_{ij}(x_{i},a_{j}.)\) gathers the constraints over the 20 amino acids for any position \(j\neq i\). Then we map \(\mathbf{\varepsilon_{i}}(x_{i})\) to the amino-acid embedding space with a trainable parameter \(W_{\varepsilon}\), and add the mapped vector to the original amino-acid embedding \(e(x_{i})\) to get the final feature vector as our model input: \[\begin{split}\hat{e}(x_{i})=e(x_{i})+W_{\varepsilon}*\mathbf{ \varepsilon_{i}}(x_{i})\\ H_{0}=\tilde{z},\quad H_{i}=\hat{e}(x_{i-1})\text{ for }1\leq i<L \end{split} \tag{12}\] which means that for the autoregressive Transformer decoder, the first token input is set to the sampled latent vector \(\tilde{z}\), and the input at every other position is set to the combinatorial-structure-augmented feature vector of the previous token. At iteration t in MCEM, the learning process for the combinatorial structure enhanced latent generative model becomes: E-step: \[\mathcal{L}_{t}=\frac{\sum_{n=1}^{N}w(x_{n},z_{n})\mathcal{F}(R_{\omega}(z_{n}), \phi;\mathbf{\varepsilon})}{\sum_{n=1}^{N}w(x_{n},z_{n})} \tag{13}\] M-step: \[\phi^{(t+1)}=\operatorname*{argmax}_{\phi}\mathcal{L}_{t} \tag{14}\] \(\mathbf{\varepsilon}\) is fixed during latent generative model learning and we omit it in the following parts to make the description simple. The overall learning algorithm is given in Appendix B.2. ## 4 Experiments In this section, we conduct extensive experiments to validate the effectiveness of our proposed IsEM-Pro on the protein sequence design task. ### Implementation Details Our model is built based on Transformer (Vaswani et al., 2017) with a \(6\)-layer encoder initialized by ESM-2 (Lin et al., 2022)1 and a \(2\)-layer decoder with random initialization, of which the encoder parameters are fixed during the training process.
Thus the MRF features are only incorporated in the decoder. The model hidden size and feed-forward hidden size are set to \(320\) and \(1280\) respectively, as in ESM-2. We use the [CLS] representation from the last layer of the encoder to calculate the mean and variance vectors of the latent variable through a single-layer mapping. Then the sampled latent vector is used as the first token input of the decoder. The latent vector size is correspondingly set to \(320\). We first train a VAE model as \(P_{\theta}\) for \(30\) epochs, and \(\phi^{(0)}\) is initialized by \(\theta\). The number of iterations in the importance sampling based VEM is set to \(10\). The protein combinatorial structure constraints \(\mathbf{\varepsilon}\) are learned on the training sequences for each dataset instead of real multiple sequence alignments (MSAs) to keep a fair comparison. Footnote 1: [https://dl.fbaipublicfiles.com/fair-esm/models/esm2_t6_8M_UR50D.pt](https://dl.fbaipublicfiles.com/fair-esm/models/esm2_t6_8M_UR50D.pt) The mini-batch size and learning rate are set to \(4,096\) tokens and \(1\)e-\(5\) respectively. The model is trained with \(1\) NVIDIA RTX A\(6000\) GPU card. We apply the Adam algorithm (Kingma and Ba, 2014) as the optimizer with a linear warm-up over the first \(4,000\) steps and linear decay for later steps. We randomly split each dataset into training/validation sets with the ratio of \(9\):\(1\). We run all the experiments five times and report the average scores. More experimental settings are given in Appendix B.1. At inference, we design protein sequences by taking the wild-type as the encoder input, and the latent vector is sampled from the prior distribution \(P(z)\). The sequences are decoded using top-\(5\) sampling. The candidate number is set to K=\(128\) following the setting of Jain et al. (2022) on the GFP dataset. ### Datasets Following Ren et al. (2022), we evaluate our method on the following eight protein engineering benchmarks: (1) **Green Fluorescent Protein (avGFP)**: The goal is to design sequences with higher log-fluorescence intensity values. We collect data following Sarkisyan et al. (2016). (2) **Adeno-Associated Viruses (AAV)**: The target is to generate an amino-acid segment (positions \(561-588\)) for higher gene therapeutic efficiency. We collect data following Bryant et al. (2021). (3) **TEM-1 \(\beta\)-Lactamase (TEM)**: The goal is to design sequences with high thermodynamic stability. We merge the data from Firnberg et al. (2014). (4) **Ubiquitination Factor Ube4b (E4B)**: The objective is to design sequences with higher enzyme activity. We gather data following Starita et al. (2013). (5) **Aliphatic Amide Hydrolase (AMIE)**: The goal is to produce amidase sequences with higher enzyme activity. We merge data following Wrenbeck et al. (2017). (6) **Levoglucosan Kinase (LGK)**: The target is to optimize LGK protein sequences with improved enzyme activity. We collect data following Klesmith et al. (2015). (7) **Poly(A)-binding Protein (Pab1)**: The goal is to design sequences with higher binding fitness to multiple adenosine monophosphates. We gather data following Melamed et al. (2013). (8) **SUMO E2 Conjugase (UBE2I)**: We aim to find human SUMO E2 conjugase sequences with a higher growth rescue rate. Data are obtained following Weile et al. (2017). The detailed data statistics, including protein sequence length, data size and data source, are provided in Appendix A. 
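Returning to the implementation details above, the following is a minimal sketch (our own, with invented module and variable names) of one way the latent variable can be wired in: the [CLS] vector is mapped to a mean and log-variance, z is sampled by reparameterization, and z is fed as the first decoder input; at inference z is drawn from the standard normal prior instead.

```python
import torch
import torch.nn as nn

class LatentHead(nn.Module):
    """Maps the encoder [CLS] vector to a Gaussian posterior and samples z (sketch)."""
    def __init__(self, hidden_size=320, latent_size=320):
        super().__init__()
        self.to_mean = nn.Linear(hidden_size, latent_size)
        self.to_logvar = nn.Linear(hidden_size, latent_size)

    def forward(self, cls_vec):
        mean, logvar = self.to_mean(cls_vec), self.to_logvar(cls_vec)
        z = mean + torch.randn_like(mean) * torch.exp(0.5 * logvar)  # reparameterization
        return z, mean, logvar

def decoder_inputs(z, token_embeddings):
    # First decoder position receives z; every later position receives the
    # (feature-augmented) embedding of the previous token, as in Equation 12.
    return torch.cat([z.unsqueeze(1), token_embeddings[:, :-1, :]], dim=1)

# At inference, z is drawn from the prior instead of the posterior:
# z = torch.randn(batch_size, latent_size)
```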
### Baseline Models We compare our method against the following representative baselines: (1) **CMA-ES** (Hansen and Ostermeier, 2001) is a well-known evolutionary search algorithm. (2) **FBGAN**, proposed by Gupta and Zou (2019), is a novel feedback-loop architecture built on a GAN generative model. (3) **DbAS** (Brookes and Listgarten, 2018) is a probabilistic modeling framework that uses an adaptive sampling algorithm. (4) **CbAS** (Brookes et al., 2019) improves on DbAS by conditioning on the desired properties. (5) **PEX**, proposed by Ren et al. (2022), is a model-guided sequence design algorithm using proximal exploration. (6) **GFlowNet-AL** (Jain et al., 2022) applies GFlowNets to design biological sequences. We use the implementations of CMA-ES, DbAS and CbAS provided in Trabucco et al. (2022), and for the other baselines we apply their released code. To better analyze the influence of different components in our model, we also conduct ablation tests as follows: (1) **IsEM-Pro-w/o-ESM** removes ESM-2 as the encoder initialization. (2) **IsEM-Pro-w/o-ISEM** removes the iterative optimization process. (3) **IsEM-Pro-w/o-MRFs** removes the MRFs features and the iterative optimization process. (4) **IsEM-Pro-w/o-LV** removes the latent variable, the MRFs features and the iterative optimization process. (5) **ESM-Search** samples sequences from the softmax distribution obtained by finetuning ESM-2 on the protein datasets and taking the wild-type as input. ### Evaluation Metrics We use three automatic metrics to evaluate the performance of the designed sequences: (1) **MFS**: Maximum fitness score. The oracle model adopted to evaluate MFS is described in Appendix B.1; (2) **Diversity** proposed by Jain et al. (2022) is used to evaluate how different the designed candidates are from each other; (3) **Novelty** proposed by Jain et al. 
(2022) is used to evaluate how different the pro \begin{table} \begin{tabular}{l c c c c c c c c c} \hline \hline Models & avGFP & AAV & TEM & E4B & AMIE & LGK & Pab1 & UBE2I & Average \\ \hline CMA-ES & \(4.492\) & \(-3.417\) & \(0.375\) & \(-0.768\) & \(-8.224\) & \(-0.077\) & \(0.164\) & \(2.461\) & \(-0.624\) \\ FBGAN & \(1.251\) & \(-4.227\) & \(0.006\) & \(0.369\) & \(-2.410\) & \(-1.206\) & \(0.029\) & \(0.208\) & \(-0.747\) \\ DbAS & \(3.548\) & \(4.327\) & \(0.003\) & \(-1.286\) & \(-2.658\) & \(-1.148\) & \(1.524\) & \(3.088\) & \(0.924\) \\ CbAS & \(3.550\) & \(4.336\) & \(0.106\) & \(-1.000\) & \(-1.306\) & \(-0.362\) & \(1.842\) & \(3.263\) & \(1.303\) \\ PEX & \(3.764\) & \(3.265\) & \(0.121\) & \(5.019\) & \(-0.474\) & \(0.007\) & \(1.153\) & \(1.995\) & \(1.856\) \\ GFlowNet-AL & \(5.062\) & \(1.205\) & \(1.552\) & \(3.155\) & \(0.059\) & \(0.027\) & \(2.168\) & \(3.576\) & \(2.101\) \\ ESM-Search & \(2.610\) & \(-5.099\) & \(0.148\) & \(-1.860\) & \(-2.351\) & \(-0.029\) & \(1.406\) & \(3.244\) & \(-0.241\) \\ IsEM-Pro & **6.185** & **4.813** & **1.850** & **3.737** & **0.062** & **0.035** & **2.923** & **4.356** & **3.267** \\ – w/o ESM & \(1.214\) & \(-4.313\) & \(0.005\) & \(-1.352\) & \(-6.376\) & \(-0.225\) & \(0.072\) & \(1.843\) & \(-1.141\) \\ – w/o ISEM & \(4.708\) & \(1.130\) & \(0.708\) & \(0.046\) & \(-2.335\) & \(-0.077\) & \(1.913\) & \(0.475\) & \(1.342\) \\ – w/o MRFs & \(4.376\) & \(1.008\) & \(0.952\) & \(0.045\) & \(-1.771\) & \(-0.012\) & \(1.652\) & \(2.418\) & \(1.083\) \\ – w/o LV & \(4.274\) & \(2.251\) & \(0.078\) & \(-1.612\) & \(-2.266\) & \(-0.931\) & \(0.041\) & \(-0.262\) & \(0.196\) \\ \hline \hline \end{tabular} \end{table} Table 1: Maximum fitness scores (MFS) of all methods on eight datasets. Higher values indicate better functional properties in the dataset. \begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline Models & avGFP & AAV & TEM & E4B & AMIE & LGK & Pab1 & UBE2I & Average \\ \hline CMA-ES & \(221.55\) & \(22.73\) & \(269.25\) & \(93.78\) & \(256.07\) & \(415.63\) & \(59.35\) & \(128.10\) & \(183.30\) \\ FBGAN & \(0.05\) & \(2.76\) & \(0.08\) & \(0.63\) & \(57.87\) & \(39.36\) & \(0.75\) & \(0.80\) & \(12.43\) \\ DbAS & \(1.01\) & \(3.01\) & \(1.47\) & \(1.09\) & \(1.12\) & \(1.63\) & \(1.64\) & \(2.05\) & \(1.66\) \\ CbAS & \(4.02\) & \(3.03\) & \(2.06\) & \(1.90\) & \(1.33\) & \(1.09\) & \(2.71\) & \(2.95\) & \(1.92\) \\ PEX & \(3.59\) & \(1.88\) & \(8.57\) & \(4.08\) & \(4.50\) & \(10.53\) & \(3.63\) & \(10.24\) & \(5.87\) \\ GFlowNet-AL & \(221.95\) & \(22.83\) & \(266.99\) & \(86.78\) & \(316.79\) & \(412.58\) & \(61.33\) & \(143.28\) & \(191.56\) \\ ESM-Search & \(1.50\) & \(1.83\) & \(7.32\) & \(1.21\) & \(0.92\) & \(0.91\) & \(5.46\) & \(6.14\) & \(3.16\) \\ IsEM-Pro & **226.31** & **23.81** & **270.27** & **96.57** & **332.23** & **420.93** & **70.09** & **153.27** & **199.18** \\ – w/o ESM & \(198.08\) & \(9.25\) & \(176.20\) & \(3.79\) & \(264.36\) & \(340.62\) & \(1.49\) & \(110.53\) & \(138.04\) \\ – w/o ISEM & \(195.85\) & \(16.36\) & \(244.05\) & \(85.81\) & \(306.22\) & \(382.12\) & \(53.01\) & \(99.51\) & \(180.40\) \\ – w/o MRFs & \(207.59\) & \(9.53\) & \(221.49\) & \(90.81\) & \(244.14\) & \(327.14\) & \(58.44\) & \(129.75\) & \(161.11\) \\ – w/o LV & \(16.67\) & \(0.89\) & \(4.39\) & \(0.48\) & \(3.98\) & \(9.64\) & \(0.21\) & \(1.76\) & \(4.75\) \\ \hline \hline \end{tabular} \end{table} Table 2: Diversity scores of all models on eight datasets. Higher values indicate more diverse protein sequences. 
posed candidates are from the sequences in training data. [Table 3: Novelty scores of all models on eight datasets; higher values indicate more novel protein sequences.] ### Main Results Table 1, 2 and 3 respectively report the maximum fitness scores, diversity scores and novelty scores of all models. IsEM-Pro achieves the highest fitness scores on all protein families and outperforms the previous best method GFlowNet-AL by 55% on average (Table 1). The reasons are two-fold. On one hand, the importance sampling based VEM can help our model navigate to a better region instead of getting trapped in a worse local optimum. On the other hand, the combinatorial structure features help to recognize the preferred mutation patterns which have a higher success rate under natural selection pressure, potentially leading to sequences with higher fitness scores. IsEM-Pro achieves the highest average diversity score over the eight tasks (Table 2). Our model gains the highest diversity on \(4\) out of \(8\) tasks while GFlowNet-AL gains \(2\) and CMA-ES gains \(1\). It indicates that though involving combinatorial structure constraints can give guidance toward preferred protein patterns, it might also limit the sequence design to these patterns to some extent. The involved latent variable can capture complex inter-dependencies among amino acids, which benefits more diverse protein design. IsEM-Pro can design more novel protein sequences on all datasets (Table 3). Our model achieves higher novelty scores on all datasets because more new samples are involved during the importance sampling based iterative optimization process, which is beneficial for more novel protein design. ### Ablation Study The bottom halves of Tables 1, 2 and 3 report the results of the ablation tests. IsEM-Pro-w/o-MRFs improves the average diversity and novelty scores by as much as \(33\)x compared with IsEM-Pro-w/o-LV, which demonstrates that introducing a latent variable can significantly help to generate diverse proteins. IsEM-Pro-w/o-MRFs achieves higher maximum fitness scores than IsEM-Pro-w/o-ESM on all datasets, validating that adopting a pretrained protein language model as the encoder helps to design more satisfactory protein sequences. However, directly finetuning ESM-2 to sample candidates (ESM-Search) drops \(1.3\) points in average fitness score compared with taking ESM-2 as an encoder (IsEM-Pro-w/o-MRFs), demonstrating that ESM-2 is not suitable for direct sequence design. Incorporating combinatorial structure features can further improve the fitness of the designed proteins (IsEM-Pro-w/o-ISEM vs. IsEM-Pro-w/o-MRFs), based on which learning the proposal distribution by importance sampling based VEM can better promote more desirable, diverse and novel protein generation. ## 5 Analysis In this section, we present rigorous analyses to demonstrate the effectiveness of our method from different aspects. 
### Approximate KL Divergence To validate how close the proposal distribution \(Q_{\phi}(x)\) is to the posterior distribution \(P_{\theta}(x|\mathcal{S})\), we calculate the KL divergence through Monte Carlo approximation. Here we calculate the KL divergence between \(Q_{\phi}(x)\) and \(P_{\theta}(x|\mathcal{S})\) as we have proven in Lemma \(C.1\) (provided in Appendix C.2) that the sampling difference between these two distributions can be bounded under this divergence. We leverage an unbiased and low-variance estimator (proof shown in Appendix C.1) to approximate the KL divergence as follows: \[D_{KL}(Q_{\phi}(x)||P_{\theta}(x|\mathcal{S}))=E_{Q_{\phi}(x)}[r(x)-1-\log r(x)] \tag{15}\] where \(r(x)=\frac{P_{\theta}(x|\mathcal{S})}{Q_{\phi}(x)}\). \begin{table} \begin{tabular}{l l l l} \hline \hline Methods & MFS & Diversity & Novelty \\ \hline Latent-Add & \(4.102\) & \(218.26\) & \(209.85\) \\ Latent-Memory & \(4.040\) & **219.00** & \(211.38\) \\ IsEM-Pro & **6.185** & \(218.62\) & **226.31** \\ \hline \hline \end{tabular} \end{table} Table 4: Results of different schemes of introducing a latent variable with a pretrained encoder evaluated on avGFP dataset. Figure 4: Approximate KL divergence on eight protein datasets. The approximate KL divergence on the eight protein datasets over \(1\)k-\(10\)k samples is illustrated in Figure 4. From the figure, we can see that the variance of the KL divergence is very small over different sample sizes for all datasets. Besides, the KL divergence finally arrives at a small value, such as \(0.018\) for E4B and \(0.033\) for avGFP. It gives empirical evidence that when we sample from the ultimate \(Q_{\phi}(x)\), there is only a minor difference compared with sampling from the posterior distribution \(P_{\theta}(x|\mathcal{S})\). ### Effect of VAE Implementation Method Next, we study the effect of different implementation schemes for involving a latent variable with a pretrained encoder. Some works have tried adding the latent representation to the original embedding layer (Latent-Add) or using it as an additional memory (Latent-Memory) when adopting a pretrained language model as the encoder (Li et al., 2020). We also implement our model with these two schemes, and evaluate the model performance on the avGFP dataset. Table 4 shows that our method, which takes the latent representation as the first token input of the decoder, achieves higher fitness and novelty scores, though with a mild decrease in diversity. ### Case Study To gain insight into how good the designed proteins are, we analyze the generated avGFP sequence with the highest fitness in detail using the Phyre2 tool (Kelley et al., 2015). Figure 5 (a) illustrates that the generated variant can fold stably. According to the software, the most similar protein is Cytochrome b562 integral fusion with enhanced green fluorescent protein (EGFP) (Edwards et al., 2008). 227 residues (about 96% of the candidate sequence) have been modeled with 100.0% confidence by using this protein as the template. Details are given in Appendix D. Figure 5(b) visualizes the superposition of the top-5 most similar templates to our sequence in the Protein Data Bank, which are all fluorescent proteins and show highly consistent structure in most regions, validating that our model can design a real fluorescent protein. ## 6 Related Work **Machine Learning for Protein Fitness Landscape Prediction.** Machine learning has been increasingly used for modeling the protein fitness landscape, which is crucial for protein engineering. 
Some works leverage co-evolution information from multiple sequence alignments to predict fitness scores (Kamisetty et al., 2013; Luo et al., 2021). Melamed et al. (2013) propose to construct a deep latent generative model to capture higher-order mutations. Meier et al. (2021) propose to use pretrained protein language models to enable zero-shot prediction. The learned protein landscape models can be used to replace the expensive wet-lab validation to screen enormous designed sequences (Rao et al., 2019; Ren et al., 2022). **Methods for Protein Sequence Design.** Protein sequence design has been studied with a wide variety of methods, including traditional directed evolution (Arnold, 1998; Dalby, 2011; Packer and Liu, 2015; Arnold, 2018) and machine learning methods. The mainly used machine learning algorithms include reinforcement learning (Angermueller et al., 2019; Jain et al., 2022), Bayesian optimization (Belanger et al., 2019; Moss et al., 2020; Terayama et al., 2021), search using deep generative models (Brookes and Listgarten, 2018; Brookes et al., 2019; Madani et al., 2020; Kumar and Levine, 2020; Das et al., 2021; Hoffman et al., 2022; Melnyk et al., 2021; Ren et al., 2022), adaptive evolution methods (Hansen, 2006; Swersky et al., 2020; Sinai et al., 2020), as well as likelihood-free inference (Zhang et al., 2021). Different from the existing work, we propose importance sampling based VEM to learn a latent generative model which is enhanced by combinatorial structure features of the protein space. The whole framework can not only help the generative model to climb to a better region in either a Fujiyama landscape or a Badlands landscape (Kauffman and Weinberger, 1989), but also significantly promote design diversity and novelty. Figure 5: 3-D visualization of a designed sequence of green fluorescent protein. ## 7 Conclusion This paper proposes IsEM-Pro, a latent generative model for protein sequence design, which incorporates additional combinatorial structure features learned by MRFs. We use importance weighted EM to learn the model, which can not only enhance design diversity and novelty, but also lead to protein sequences with higher fitness. Experimental results on eight protein sequence design tasks show that our method outperforms several strong baselines on all metrics.
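As a small supplement to the analysis in Section 5, the sketch below shows the Monte Carlo form of the estimator in Equation 15; the function and argument names are ours, and the availability of log-densities at the sampled sequences is an assumption.

```python
import numpy as np

def approx_kl(log_q, log_p_posterior):
    """Monte Carlo estimate of KL(Q_phi || P_theta(.|S)) from Equation 15.

    log_q and log_p_posterior are arrays of log Q_phi(x_n) and log P_theta(x_n|S)
    evaluated at samples x_n ~ Q_phi(x).
    """
    log_r = log_p_posterior - log_q      # log r(x) = log P_theta(x|S) - log Q_phi(x)
    r = np.exp(log_r)                    # r(x)
    return np.mean(r - 1.0 - log_r)      # average of r(x) - 1 - log r(x), nonnegative
```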
Designing proteins with desired biological functions plays a very important role in both biology and chemistry. Recently, machine learning methods have used surrogate sequence-function models to replace expensive wet-lab validation. How can we efficiently generate diverse, novel protein sequences with high fitness? This paper proposes IsEM-Pro, an approach for generating protein sequences toward a given fitness criterion. At its core, IsEM-Pro is a latent generative model augmented with combinatorial structure features learned by independently trained Markov random fields (MRFs). We develop a Monte Carlo Expectation-Maximization (MCEM) method to learn the model. At inference time, sampling from its latent space
2309.09711
Asymptotic Performance of the GSVD-Based MIMO-NOMA Communications with Rician Fading
In recent years, the multiple-input multiple-output (MIMO) non-orthogonal multiple-access (NOMA) systems have attracted a significant interest in the relevant research communities. As a potential precoding scheme, the generalized singular value decomposition (GSVD) can be adopted in MIMO-NOMA systems and has been proved to have high spectral efficiency. In this paper, the performance of the GSVD-based MIMO-NOMA communications with Rician fading is studied. In particular, the distribution characteristics of generalized singular values (GSVs) of channel matrices are analyzed. Two novel mathematical tools, the linearization trick and the deterministic equivalent method, which are based on operator-valued free probability theory, are exploited to derive the Cauchy transform of GSVs. An iterative process is proposed to obtain the numerical values of the Cauchy transform of GSVs, which can be exploited to derive the average data rates of the communication system. In addition, the special case when the channel is modeled as Rayleigh fading, i.e., the line-of-sight propagation is trivial, is analyzed. In this case, the closed-form expressions of average rates are derived from the proposed iterative process. Simulation results are provided to validate the derived analytical results.
Chenguang Rao, Zhiguo Ding, Kanapathippillai Cumanan, Xuchu Dai
2023-09-18T12:28:09
http://arxiv.org/abs/2309.09711v1
# Asymptotic Performance of the GSVD-Based MIMO-NOMA Communications with Rician Fading ###### Abstract In recent years, the multiple-input multiple-output (MIMO) non-orthogonal multiple-access (NOMA) systems have attracted a significant interest in the relevant research communities. As a potential precoding scheme, the generalized singular value decomposition (GSVD) can be adopted in MIMO-NOMA systems and has been proved to have high spectral efficiency. In this paper, the performance of the GSVD-based MIMO-NOMA communications with Rician fading is studied. In particular, the distribution characteristics of generalized singular values (GSVs) of channel matrices are analyzed. Two novel mathematical tools, the linearization trick and the deterministic equivalent method, which are based on operator-valued free probability theory, are exploited to derive the Cauchy transform of GSVs. An iterative process is proposed to obtain the numerical values of the Cauchy transform of GSVs, which can be exploited to derive the average data rates of the communication system. In addition, the special case when the channel is modeled as Rayleigh fading, i.e., the line-of-sight propagation is trivial, is analyzed. In this case, the closed-form expressions of average rates are derived from the proposed iterative process. Simulation results are provided to validate the derived analytical results. Free deterministic equivalents, generalized singular value decomposition (GSVD), linearization trick, multiple-input multiple-output (MIMO), non-orthogonal multiple access (NOMA), operator-valued free probability, Rician fading channel. ## I Introduction In recent years, with the increasing demand for high-quality and extremely high-throughput communications, multiple-input multiple-output (MIMO) has been considered to be one of the crucial technologies for beyond fifth-generation (B5G) and sixth-generation (6G) systems, and has been widely analyzed and applied in wireless communications [1, 2, 3, 4]. In many MIMO communication scenarios, more than one user is served by one base station simultaneously. To effectively use the spectrum resources in multi-user MIMO communication scenarios, non-orthogonal multiple access (NOMA) technology has been widely adopted [5, 6, 7]. With the MIMO-NOMA scheme, the base station serves more than one user in the same time-frequency resource blocks, which has been proved to have superior performance compared to the traditional MIMO-orthogonal multiple access (OMA) scheme, especially at high signal-to-noise ratios (SNRs) [8]. However, since the base station and the users are equipped with multiple antennas in a MIMO-NOMA system, the interference between subchannels cannot be ignored, which motivates the analysis of precoding schemes for MIMO-NOMA [9, 10, 11]. Among the precoding schemes, the generalized singular value decomposition (GSVD) stands out due to its trade-off between complexity and performance [12, 13, 14, 15]. In the GSVD-based MIMO-NOMA communication system, the channel matrices of the two users are diagonalized simultaneously, whereupon the MIMO channels are converted into several single-input single-output (SISO) subchannels, and there is no interference between subchannels. The performance of the GSVD-based MIMO-NOMA scheme has been studied in [14, 16], and [15]. However, all these studies have only analyzed the case of Rayleigh fading, i.e., only non-line-of-sight (nLoS) propagation components are considered. 
In many communication scenarios, the line-of-sight (LoS) propagation cannot be ignored, which demands that the channel matrix be modeled with Rician fading. To the best knowledge of the authors, the performance of GSVD-based MIMO-NOMA communications with Rician fading has not been studied in the literature, which motivates our work in this paper. The main challenge is to find an approach to obtain the distribution characteristics of the generalized singular values (GSVs) of the channel matrices. In [14], the GSVs of channel matrices \(\mathbf{H}_{1}\) and \(\mathbf{H}_{2}\) are proved to be equal to the eigenvalues of a matrix \(\mathbf{L}=\mathbf{H}_{1}(\mathbf{H}_{2}^{H}\mathbf{H}_{2})^{-1}\mathbf{H}_{1}^{H}\) when \(\mathbf{H}_{2}\) has full column rank. In the case of Rayleigh fading, the channel matrix \(\mathbf{H}_{i}\) is modeled as a Gaussian random matrix, where random matrix theory can be applied to obtain the probability density function (PDF) of the eigenvalues of \(\mathbf{L}\). However, in the case of Rician fading, the channel matrix is modeled as the sum of a Gaussian random matrix and a deterministic matrix, where random matrix theory cannot be applied directly. Therefore, the operator-valued free probability, which is an extension of free probability, is used in this paper. Two useful tools based on the operator-valued free probability, the linearization trick and free deterministic equivalents, are exploited. In [17], the linearization trick was proposed and used to convert a complex Gaussian random matrix polynomial problem to a linear addition problem in block random matrices, which can be easily solved by using the subordination theorem. To study more general cases, in [18], the free deterministic equivalents method was proposed to derive the asymptotic distribution of the eigenvalues of matrix polynomials consisting of self-adjoint and non-self-adjoint Gaussian random matrices, deterministic matrices, and Haar-distributed unitary matrices. By exploiting this method, the matrices are replaced with operator-valued random variables, and the matrix polynomial is converted to a new polynomial consisting of operator-valued random variables, whose distribution characteristics can be derived by applying the properties of operator-valued free probability theory. The gap between the analytical results derived with the free deterministic equivalents and the actual values is proved to vanish when the dimension goes to infinity. However, that study dealt with polynomials in which each element only has positive degrees, while the matrix \(\mathbf{L}\) contains a degree of \(-1\), so the method cannot be applied directly. Motivated by this, we apply the linearization trick and free deterministic equivalents to find a new way to derive the asymptotic distribution characteristics of the eigenvalues of \(\mathbf{L}\), as well as the GSVs of two matrices modeled with Rician fading. The main contributions of this paper are summarized as follows: * A GSVD-based MIMO-NOMA transmission system with Rician fading is considered. For performance analysis, the asymptotic distribution characteristics of the GSVs of channel matrices \(\mathbf{H}_{1}\) and \(\mathbf{H}_{2}\) are derived. Two cases, depending on the numbers of antennas, are discussed respectively. When \(\mathbf{H}_{2}\) has full column rank, the problem is equivalent to deriving the asymptotic distribution characteristics of the eigenvalues of a matrix \(\mathbf{L}=\mathbf{H}_{1}(\mathbf{H}_{2}^{H}\mathbf{H}_{2})^{-1}\mathbf{H}_{1}^{H}\). 
The linearization trick is used twice to convert \(\mathbf{L}\) to a new matrix \(\mathbf{J}\), each element of which is a polynomial of \(\mathbf{H}_{i}\) with degree \(0\) or \(1\), \(i=1,2\). Then the free deterministic equivalents method is applied to construct an iterative process to obtain the distribution of \(\mathbf{G}\), as well as the GSVs. When \(\mathbf{H}_{2}\) does not have full column rank, a full-rank matrix \(\mathbf{H}_{3}\) with an approximation parameter \(\epsilon\) is constructed, while the pair \(\{\mathbf{H}_{1},\mathbf{H}_{3}\}\) is proved to contain the same GSVs as the pair \(\{\mathbf{H}_{1},\mathbf{H}_{2}\}\) when \(\epsilon\to 0\). Then the distribution of \(\mathbf{G}\) can be derived in a similar way. * The average rates of the two users are derived based on the distribution characteristics of the GSVs of the channel matrices. The high accuracy of these results is verified by simulation, even when the numbers of antennas are small. * When the channel is modeled as Rayleigh fading, the results derived in the case of Rician fading are simplified, and closed-form expressions of the average rates are derived in this special case. ## II System Model Consider a MIMO-NOMA downlink communication system with one base station and two users, which are denoted by \(\mathrm{S}\), \(\mathrm{U}_{1}\) and \(\mathrm{U}_{2}\), respectively.1 \(\mathrm{S}\) is equipped with \(N\) antennas, while \(\mathrm{U}_{i}\) is equipped with \(M_{i}\) antennas. The channel matrix between \(\mathrm{S}\) and \(\mathrm{U}_{i}\) is denoted by \(\mathbf{H}_{i}\in\mathbb{C}^{M_{i}\times N}\), which is modeled with the Rician fading model, \(\mathbf{H}_{i}=\bar{\mathbf{H}}_{i}+\widetilde{\mathbf{H}}_{i}\), where \(\bar{\mathbf{H}}_{i}\) is a deterministic matrix that represents the line-of-sight (LoS) component, and \(\widetilde{\mathbf{H}}_{i}\) is a random Gaussian matrix with mean \(\mathbf{O}\) and covariance matrix \(\mathbf{I}\) that represents the non-line-of-sight (nLoS) component. \(\mathbf{I}\) represents an identity matrix, and \(\mathbf{O}\) represents a zero matrix. According to the NOMA scheme, \(\mathrm{S}\) broadcasts the superposed message for the two users. Denote \(\mathbf{s}_{i}\) as the message for \(\mathrm{U}_{i}\); \(\mathbf{s}_{i}\) satisfies \(E\{\mathbf{s}_{i}\mathbf{s}_{i}^{H}\}=\mathbf{I}_{N}\). The message broadcast by \(\mathrm{S}\) can be expressed as \(\mathbf{x}=\sqrt{l_{1}}\mathbf{s}_{1}+\sqrt{l_{2}}\mathbf{s}_{2}\). \(0<l_{i}<1\) represents the power allocation coefficient for \(\mathrm{U}_{i}\) and satisfies \(l_{1}+l_{2}=1\). Denote \(\mathbf{P}\in\mathbb{C}^{N\times N}\) as the precoding matrix, and \(\mathbf{Q}_{i}\in\mathbb{C}^{M_{i}\times M_{i}}\) as the decoding matrix. The received message by \(\mathrm{U}_{i}\) can be expressed as follows: Footnote 1: For the case with more than two users, a hybrid approach can be employed in which users are divided into groups of two users, each group is allocated orthogonal resources, and within each two-user group, the proposed GSVD-based MIMO-NOMA scheme can be utilized. \[\mathbf{\widetilde{y}}_{i}=\mathbf{Q}_{i}\mathbf{y}_{i}=\frac{\sqrt{P}}{t\sqrt{d_{i}^{\tau}}}\mathbf{Q}_{i}\mathbf{H}_{i}\mathbf{P}\mathbf{x}+\mathbf{n}, \tag{1}\] where \(d_{i}\) represents the distance between \(\mathrm{S}\) and \(\mathrm{U}_{i}\), \(\tau\) represents the large-scale path loss exponent, \(t\) represents the long-term power normalization coefficient, and \(P\) represents the transmission power. 
\(\mathbf{n}\) is a white noise vector with power \(P_{N}\). Without loss of generality, the average channel gain of \(\mathrm{U}_{1}\) is assumed to be larger than that of \(\mathrm{U}_{2}\), i.e., \(\frac{E\{\mathrm{tr}\{\mathbf{H}_{1}\mathbf{H}_{1}^{H}\}\}}{d_{1}^{\tau}}\geq\frac{E\{\mathrm{tr}\{\mathbf{H}_{2}\mathbf{H}_{2}^{H}\}\}}{d_{2}^{\tau}}\). To cancel the interference between subchannels, the GSVD precoding scheme is applied. The GSVD of \(\{\mathbf{H}_{1},\,\mathbf{H}_{2}\}\) is denoted as follows: \[\mathbf{H}_{i}=\mathbf{U}_{i}^{H}\mathbf{\Sigma}_{i}\mathbf{V}^{-1},\,i=1,2, \tag{2}\] where \(\mathbf{U}_{i}\in\mathbb{C}^{M_{i}\times M_{i}}\) is a unitary matrix, \(\mathbf{V}\in\mathbb{C}^{N\times N}\) is an invertible matrix, and \(\mathbf{\Sigma}_{i}\in\mathbb{C}^{M_{i}\times N}\) is a rectangular diagonal matrix with \(S\) non-zero diagonal elements. From [12], \(S\) is known to be \(S=\min\{M_{1},N\}+\min\{M_{2},N\}-\min\{M_{1}+M_{2},N\}\). Denote \(\mathbf{\Sigma}_{i}=\text{diag}\{\sigma_{i,1},\sigma_{i,2},...,\sigma_{i,S}\}\). The GSV \(\omega_{j},j=1,2,...,S\) is defined as follows: \[\omega_{j}=\frac{\sigma_{1,j}^{2}}{\sigma_{2,j}^{2}}. \tag{3}\] Then \(\sigma_{i,j}\) can be expressed in terms of \(\omega_{j}\) as \(\sigma_{1,j}^{2}=\frac{\omega_{j}}{1+\omega_{j}}\) and \(\sigma_{2,j}^{2}=\frac{1}{1+\omega_{j}}\). Set the precoding and decoding matrices as \(\mathbf{P}=\mathbf{V}\) and \(\mathbf{Q}_{i}=\mathbf{U}_{i}\); then the received message \(\mathbf{\widetilde{y}}_{i}\) can be expressed as follows: \[\mathbf{\widetilde{y}}_{i}=\frac{\sqrt{P}}{t\sqrt{d_{i}^{\tau}}}\mathbf{U}_{i}\mathbf{H}_{i}\mathbf{V}\mathbf{x}+\mathbf{n}=\frac{\sqrt{P}}{t\sqrt{d_{i}^{\tau}}}\mathbf{\Sigma}_{i}\mathbf{x}+\mathbf{n}, \tag{4}\] and the message at the \(j\)-th subchannel of \(\mathbf{\widetilde{y}}_{i}\) can be expressed as follows: \[y_{i,j}=\frac{\sqrt{P}}{t\sqrt{d_{i}^{\tau}}}\sigma_{i,j}x_{j}+n_{j}, \tag{5}\] where \(x_{j}=\sqrt{l_{1}}s_{1,j}+\sqrt{l_{2}}s_{2,j}\) represents the \(j\)-th element of \(\mathbf{x}\). The MIMO channel is now converted into several parallel SISO subchannels, where successive interference cancellation (SIC) can be applied to eliminate interference. In this paper, the statistical channel state information (CSI)-based SIC is used [19], i.e., SIC is applied at the user who has the larger average power of channel fading. Since \(\frac{E\{\mathrm{tr}\{\mathbf{H}_{1}\mathbf{H}_{1}^{H}\}\}}{d_{1}^{\tau}}\geq\frac{E\{\mathrm{tr}\{\mathbf{H}_{2}\mathbf{H}_{2}^{H}\}\}}{d_{2}^{\tau}}\), SIC is applied at \(\mathrm{U}_{1}\). Specifically, \(\mathrm{U}_{1}\) decodes \(s_{2}\) first, and then decodes \(s_{1}\) after eliminating \(s_{2}\). \(\mathrm{U}_{2}\) decodes \(s_{2}\) by directly regarding \(s_{1}\) as noise. Then the rate of \(s_{i,j}\) can be expressed as follows: \[R_{1,j}=\log\left(1+\frac{\omega_{j}}{1+\omega_{j}}\frac{l_{1}\rho}{td_{1}^{\tau}}\right), \tag{6}\] \[R_{2,j}=\log\left(1+\frac{l_{2}}{l_{1}+(1+\omega_{j})\rho^{-1}td_{2}^{\tau}}\right), \tag{7}\] where \(\rho=\frac{P}{P_{N}}\) is the transmit SNR. In this paper, the average performance \(E\{R_{i,j}\}\) of each sub-channel is considered, i.e., \(\omega_{j}\) is regarded as identically distributed for all \(j\). 
Thus, the average rate of \(\mathrm{U}_{i}\) can be expressed as follows: \[\bar{R}_{1}=S\int_{0}^{+\infty}\log\left(1+\frac{x}{1+x}\frac{l_{1}\rho}{td_{1}^{\tau}}\right)dF_{\omega}(x), \tag{8}\] \[\bar{R}_{2}=S\int_{0}^{+\infty}\log\left(1+\frac{l_{2}}{l_{1}+(1+x)\rho^{-1}td_{2}^{\tau}}\right)dF_{\omega}(x), \tag{9}\] where \(F_{\omega}(x)\) denotes the cumulative density function (CDF) of \(\omega\). ## III Performance Analysis In this work, the numbers of antennas are assumed to satisfy \(M_{1}\leq M_{2}\) and \(M_{1}+M_{2}>N\). Since the GSVs of \(\{\mathbf{H}_{1},\mathbf{H}_{2}\}\) are the reciprocals of the GSVs of \(\{\mathbf{H}_{2},\mathbf{H}_{1}\}\), the case \(M_{1}>M_{2}\) is equivalent to the case \(M_{1}\leq M_{2}\) by swapping \(\mathbf{H}_{1}\) and \(\mathbf{H}_{2}\). As for the case of \(M_{1}+M_{2}\leq N\), it can be understood from [14] that \(\omega_{j}\) is deterministic, and has little value for analysis. From [14], it is known that when \(M_{2}\geq N\), \(\omega_{j}\) equals the non-zero eigenvalues of a matrix \(\mathbf{L}\), which is expressed as follows: \[\mathbf{L}=\mathbf{H}_{1}(\mathbf{H}_{2}^{H}\mathbf{H}_{2})^{-1}\mathbf{H}_{1}^{H}. \tag{10}\] Then we focus on the approach to derive the eigenvalue distribution of \(\mathbf{L}\). Before the analysis, the Cauchy transform is introduced, which captures the asymptotic distribution characteristics of random variables [20]. **Definition 1**.: _Let \(\mu\) be a random variable with CDF \(F_{\mu}(x)\); then the Cauchy transform of \(\mu\) is defined as follows:_ \[G_{\mu}(z)=\int_{R}\frac{1}{z-x}dF_{\mu}(x). \tag{11}\] In this paper, \(z\) is restricted to \(\text{Re}(z)<0\), where \(\text{Re}\{z\}\) represents the real part of \(z\). The problem is now transformed into obtaining \(G_{\omega}(z)\). Before the analysis, the operator-valued free probability is introduced as the mathematical preliminary in the following subsection. ### _Free Probability and Operator-Valued Free Probability_ Free probability theory was first proposed by Voiculescu in 1985 as a tool to analyze non-commutative random variables, for example, matrices. Operator-valued free probability is an extension of free probability, which was first proposed by Voiculescu in [21] to solve matrix polynomial problems. In this subsection, the fundamental concepts of free probability and operator-valued free probability are briefly introduced. More details on free probability and operator-valued free probability can be found in Appendix A of [22] and in [23]. The definition of the non-commutative probability space is presented as follows: **Definition 2**.: _Denote \(\mathcal{A}\) as a unital non-commutative algebra over \(\mathbb{C}\) with a unit \(1_{\mathcal{A}}\) and a unital linear functional \(\phi:\mathcal{A}\rightarrow\mathbb{C}\) satisfying \(\phi(1_{\mathcal{A}})=1\). Then the pair \((\mathcal{A},\phi)\) is defined as a non-commutative probability space. The elements \(a\in\mathcal{A}\) are called non-commutative random variables._ Based on the definition of the non-commutative probability space, the operator-valued non-commutative probability space is defined as follows: **Definition 3**.: _Denote \(\mathcal{A}\) as a unital non-commutative algebra over \(\mathbb{C}\) with a unit \(1_{\mathcal{A}}\), and denote \(\mathcal{B}\subset\mathcal{A}\) as a unital subalgebra of \(\mathcal{A}\). 
Define a unital linear functional \(\phi:\mathcal{A}\rightarrow\mathcal{B}\) satisfying:_ * \(\phi(B)=B\)_, for any_ \(B\in\mathcal{B}\)_._ * \(\phi(B_{1}AB_{2})=B_{1}\phi(A)B_{2}\)_, for any_ \(A\in\mathcal{A}\) _and_ \(B_{1},B_{2}\in\mathcal{B}\)_._ _Then the triplet \((\mathcal{A},\phi,\mathcal{B})\) is defined as a \(\mathcal{B}\)-valued non-commutative probability space._ In this paper, free probability theory and operator-valued free probability theory are applied to random matrix theory. Denote \(\mathcal{M}_{n}\) as the algebra of \(n\times n\) complex random matrices. For an element \(\mathbf{X}\in\mathcal{M}_{n}\), define \(\phi(\mathbf{X})=\frac{1}{n}E\{\text{Tr}\{\mathbf{X}\}\}\) and \(1_{\mathcal{M}_{n}}=\mathbf{I}_{n}\). Then \((\mathcal{M}_{n},\phi)\) is a non-commutative probability space. Denote \(\mathcal{D}_{n}\) as the algebra of \(n\times n\) complex diagonal matrices. Define a map \(E_{\mathcal{D}_{n}}:\mathcal{M}_{n}\rightarrow\mathcal{D}_{n}\) as follows: \[E_{\mathcal{D}_{n}}\{\mathbf{X}\}=\text{diag}\{E\{X_{1,1}\},E\{X_{2,2}\},...,E\{X_{n,n}\}\}. \tag{12}\] The triplet \((\mathcal{M}_{n},E_{\mathcal{D}_{n}},\mathcal{D}_{n})\) is a \(\mathcal{D}_{n}\)-valued probability space. Then the Cauchy transform can be extended over \((\mathcal{M}_{n},E_{\mathcal{D}_{n}})\) as the \(\mathcal{D}_{n}\)-valued Cauchy transform, which is defined as follows: **Definition 4**.: \((\mathcal{M}_{n},E_{\mathcal{D}_{n}},\mathcal{D}_{n})\) _is a \(\mathcal{D}_{n}\)-valued probability space. Let \(\mathbf{X}\in\mathcal{M}_{n}\); then the \(\mathcal{D}_{n}\)-valued Cauchy transform of \(\mathbf{X}\) is defined as follows:_ \[\mathcal{G}_{\mathbf{X}}^{\mathcal{D}_{n}}(\mathbf{A})=E_{\mathcal{D}_{n}}\{(\mathbf{A}-\mathbf{X})^{-1}\}, \tag{13}\] _where \(\mathbf{A}\in\mathcal{D}_{n}\) satisfies \(\frac{1}{2i}(\mathbf{A}-\mathbf{A}^{H})\succ 0\)._ Assuming the expected cumulative distribution of the eigenvalues of \(\mathbf{X}\) converges to a random variable \(x\) when \(n\rightarrow+\infty\), it is known that \(G_{x}(z)=\frac{1}{n}\text{Tr}\{\mathcal{G}_{\mathbf{X}}^{\mathcal{D}_{n}}(z\mathbf{I}_{n})\}\)[22]. Thus, \(G_{\omega}(z)\) can be obtained by deriving \(\mathcal{G}_{\mathbf{L}}^{\mathcal{D}_{n}}(z\mathbf{I}_{n})\). However, \(\mathcal{G}_{\mathbf{L}}^{\mathcal{D}_{n}}(z\mathbf{I}_{n})\) is still difficult to derive directly. Thus, another method, the linearization trick, is exploited, which will be introduced in the following subsection. ### _Linearization Trick_ In [17, 24], the linearization trick is introduced, which is required in our work. The main theorem of the linearization trick is introduced as follows: **Theorem 1**.: _Assume \(\mathbf{X}_{i},i=1,2,...,M\) are complex random matrices in the \(\mathcal{D}_{n}\)-valued probability space \((\mathcal{M}_{n},E^{\mathcal{D}_{n}})\). Assume \(\mathbf{P}\) is a polynomial of \(\mathbf{X}_{i}\) and can be expressed as follows:_ \[\mathbf{P}=-\mathbf{u}^{H}\mathbf{Q}^{-1}\mathbf{v}, \tag{14}\] _where \(\mathbf{u},\mathbf{v}\in\mathbb{C}^{K-1\times 1}\), and \(\mathbf{Q}\in\mathbb{C}^{K-1\times K-1}\) is invertible. Each element in \(\mathbf{u},\mathbf{v}\) and \(\mathbf{Q}\) is a polynomial of \(\mathbf{X}_{i}\) with degree \(\leq 1\). Then a new matrix over the \(\mathcal{D}_{K}\)-valued probability space can be constructed as follows:_ \[\hat{\mathbf{P}}=\begin{pmatrix}0&\mathbf{u}^{H}\\ \mathbf{v}&\mathbf{Q}\end{pmatrix}. 
\tag{15}\] _Define a matrix_ \[\mathbf{B}=\begin{pmatrix}z\mathbf{I}_{n}&\mathbf{O}\\ \mathbf{O}&\mathbf{O}_{K-n}\end{pmatrix}. \tag{16}\] _Then the \(\mathcal{D}_{K}\)-valued Cauchy transform of \(\hat{\mathbf{P}}\) can be obtained as_ \[\mathcal{G}_{\hat{\mathbf{P}}}^{\mathcal{D}_{K}}(\mathbf{B})=\begin{pmatrix}\mathcal{G}_{\mathbf{P}}^{\mathcal{D}_{n}}(z\mathbf{I}_{n})&*\\ *&*\end{pmatrix}. \tag{17}\] Proof.: See [17] and [24]. In this theorem, \(\hat{\mathbf{P}}\) is called the linearization of \(\mathbf{P}\). \(\mathcal{G}_{\mathbf{P}}^{\mathcal{D}_{n}}(z\mathbf{I}_{n})\) can be obtained by deriving \(\mathcal{G}_{\hat{\mathbf{P}}}^{\mathcal{D}_{K}}(\mathbf{B})\). Since \(\hat{\mathbf{P}}\) can be considered as the sum of self-adjoint operator-valued matrix polynomials of \(\mathbf{X}_{i},i=1,2,...,M\), \(\mathcal{G}_{\hat{\mathbf{P}}}^{\mathcal{D}_{K}}(\mathbf{B})\) can be derived by using the free deterministic equivalents and the subordination formulation. **Remark 1**.: _There is a condition in the linearization trick that the polynomial entries in \(\mathbf{u},\mathbf{v}\) and \(\mathbf{Q}\) must have degree \(\leq 1\). However, (17) also holds without this condition. In this paper, we will use this theorem without the condition._ ### _The Approach to Obtain \(G_{\omega}(z)\)_ In this subsection, the cases of \(M_{2}\geq N\) and \(M_{2}<N<M_{1}+M_{2}\) are discussed. #### III-C1 The case \(M_{2}\geq N\) When \(M_{2}\geq N\), \(\omega\) equals the non-zero eigenvalues of \(\mathbf{L}=\mathbf{H}_{1}(\mathbf{H}_{2}^{H}\mathbf{H}_{2})^{-1}\mathbf{H}_{1}^{H}\). Based on this precondition, the main process to obtain \(G_{\omega}(z)\) can be divided into several steps as follows: * Apply the linearization trick to construct a matrix \(\mathbf{J}\) such that \(\mathcal{G}_{\mathbf{L}}^{\mathcal{D}_{M_{1}}}(z\mathbf{I}_{M_{1}})\) is contained in \(\mathcal{G}_{\mathbf{J}}^{\mathcal{D}_{4n}}(\mathbf{C})\), while each element in \(\mathbf{J}\) is a polynomial of \(\mathbf{H}_{i}\) with degree \(0\) or \(1\). * Apply the free deterministic equivalents method [22] to construct a matrix \(\boldsymbol{\mathcal{J}}\) satisfying \(\lim\limits_{n\to+\infty}\mathcal{G}_{\mathbf{J}}^{\mathcal{D}_{4n}}(z\mathbf{I}_{n})-\mathcal{G}_{\boldsymbol{\mathcal{J}}}^{\mathcal{D}_{4n}}(z\mathbf{I}_{n})=0\). * Derive \(\mathcal{G}_{\boldsymbol{\mathcal{J}}}^{\mathcal{D}_{4n}}(z\mathbf{I}_{n})\) with the subordination theorem. Then propose an iterative approach to obtain the approximated results of \(\mathcal{G}_{\mathbf{J}}^{\mathcal{D}_{4n}}(z\mathbf{I}_{n})\) and \(\mathcal{G}_{\mathbf{L}}^{\mathcal{D}_{M_{1}}}(z\mathbf{I}_{n})\). * Find the relationship between \(G_{\omega}(z)\) and \(\mathcal{G}_{\mathbf{L}}^{\mathcal{D}_{M_{1}}}(z\mathbf{I}_{n})\) to obtain \(G_{\omega}(z_{0})\). The detailed process is presented as follows. First, by applying Theorem 1, a matrix \(\hat{\mathbf{L}}\) can be constructed as follows: \[\hat{\mathbf{L}}=\begin{pmatrix}\mathbf{O}&\mathbf{H}_{1}\\ \mathbf{H}_{1}^{H}&-\mathbf{H}_{2}^{H}\mathbf{H}_{2}\end{pmatrix}. \tag{18}\] Denote \(n=M_{1}+N\); then \[\mathcal{G}_{\hat{\mathbf{L}}}^{\mathcal{D}_{n}}(\mathbf{B})=\begin{pmatrix}\mathcal{G}_{\mathbf{L}}^{\mathcal{D}_{M_{1}}}(z\mathbf{I}_{M_{1}})&\mathbf{O}\\ \mathbf{O}&*\end{pmatrix}, \tag{19}\] where \[\mathbf{B}=\begin{pmatrix}z\mathbf{I}_{M_{1}}&\mathbf{O}\\ \mathbf{O}&\mathbf{O}_{N}\end{pmatrix}. \tag{20}\] Now the target is to obtain \(\mathcal{G}_{\hat{\mathbf{L}}}^{\mathcal{D}_{n}}(\mathbf{B})\). 
However, one of the elements in \(\hat{\mathbf{L}}\) has degree \(2\) in \(\mathbf{H}_{2}\), which makes the problem challenging to solve. Therefore, it is necessary to apply Theorem 1 to \(\hat{\mathbf{L}}\) again. Denote \(\mathbf{X}_{i}\in\mathbb{C}^{n\times n}\) as follows: \[\mathbf{X}_{1}=\begin{pmatrix}\mathbf{O}&\mathbf{H}_{1}\\ \mathbf{H}_{1}^{H}&\mathbf{O}\end{pmatrix},\mathbf{X}_{2}=\begin{pmatrix}\mathbf{O}_{(n-M_{2})\times M_{1}}&\mathbf{O}\\ \mathbf{O}&\mathbf{H}_{2}\end{pmatrix}. \tag{21}\] Then \(\hat{\mathbf{L}}\) can be expressed as \(\hat{\mathbf{L}}=\mathbf{X}_{1}-\mathbf{X}_{2}^{H}\mathbf{X}_{2}\). Now rewrite \(\hat{\mathbf{L}}\) as \[\hat{\mathbf{L}}=-\begin{pmatrix}\mathbf{X}_{2}^{H}&\mathbf{O}&\mathbf{I}_{n}\end{pmatrix}\begin{pmatrix}\mathbf{I}_{n}&\mathbf{O}&\mathbf{O}\\ \mathbf{O}&\mathbf{X}_{1}&-\mathbf{I}_{n}\\ \mathbf{O}&-\mathbf{I}_{n}&\mathbf{O}\end{pmatrix}^{-1}\begin{pmatrix}\mathbf{X}_{2}\\ \mathbf{O}\\ \mathbf{I}_{n}\end{pmatrix}. \tag{22}\] By applying Theorem 1, \(\mathbf{J}\) can be constructed as follows: \[\mathbf{J}=\begin{pmatrix}\mathbf{O}&\mathbf{X}_{2}^{H}&\mathbf{O}&\mathbf{I}_{n}\\ \mathbf{X}_{2}&\mathbf{I}_{n}&\mathbf{O}&\mathbf{O}\\ \mathbf{O}&\mathbf{O}&\mathbf{X}_{1}&-\mathbf{I}_{n}\\ \mathbf{I}_{n}&\mathbf{O}&-\mathbf{I}_{n}&\mathbf{O}\end{pmatrix}. \tag{23}\] Let \[\mathbf{C}=\begin{pmatrix}\mathbf{B}&\mathbf{O}\\ \mathbf{O}&\mathbf{O}_{3n}\end{pmatrix}, \tag{24}\] then \(\mathcal{G}_{\mathbf{L}}^{\mathcal{D}_{M_{1}}}(z\mathbf{I}_{M_{1}})\) can be derived from: \[\mathcal{G}_{\mathbf{J}}^{\mathcal{D}_{4n}}(\mathbf{C})=\begin{pmatrix}\mathcal{G}_{\hat{\mathbf{L}}}^{\mathcal{D}_{n}}(\mathbf{B})&\mathbf{O}\\ \mathbf{O}&*\end{pmatrix}=\begin{pmatrix}\mathcal{G}_{\mathbf{L}}^{\mathcal{D}_{M_{1}}}(z\mathbf{I}_{M_{1}})&\mathbf{O}\\ \mathbf{O}&*\end{pmatrix}. \tag{25}\] Note that \(\mathbf{J}\) consists of terms of degree at most \(1\) in \(\mathbf{H}_{1}\) and \(\mathbf{H}_{2}\), so it can be expressed as the sum of a deterministic matrix and a random Hermitian matrix as \(\mathbf{J}=\bar{\mathbf{J}}+\widetilde{\mathbf{J}}\). The detailed matrices are expressed as follows: \[\bar{\mathbf{J}}=\begin{pmatrix}\mathbf{O}&\bar{\mathbf{X}}_{2}^{H}&\mathbf{O}&\mathbf{I}_{n}\\ \bar{\mathbf{X}}_{2}&\mathbf{I}_{n}&\mathbf{O}&\mathbf{O}\\ \mathbf{O}&\mathbf{O}&\bar{\mathbf{X}}_{1}&-\mathbf{I}_{n}\\ \mathbf{I}_{n}&\mathbf{O}&-\mathbf{I}_{n}&\mathbf{O}\end{pmatrix},\widetilde{\mathbf{J}}=\begin{pmatrix}\mathbf{O}&\widetilde{\mathbf{X}}_{2}^{H}&\mathbf{O}&\mathbf{O}\\ \widetilde{\mathbf{X}}_{2}&\mathbf{O}&\mathbf{O}&\mathbf{O}\\ \mathbf{O}&\mathbf{O}&\widetilde{\mathbf{X}}_{1}&\mathbf{O}\\ \mathbf{O}&\mathbf{O}&\mathbf{O}&\mathbf{O}\end{pmatrix}, \tag{26}\] \[\bar{\mathbf{X}}_{1}=\begin{pmatrix}\mathbf{O}&\bar{\mathbf{H}}_{1}\\ \bar{\mathbf{H}}_{1}^{H}&\mathbf{O}\end{pmatrix},\widetilde{\mathbf{X}}_{1}=\begin{pmatrix}\mathbf{O}&\widetilde{\mathbf{H}}_{1}\\ \widetilde{\mathbf{H}}_{1}^{H}&\mathbf{O}\end{pmatrix},\bar{\mathbf{X}}_{2}=\begin{pmatrix}\mathbf{O}_{(n-M_{2})\times M_{1}}&\mathbf{O}\\ \mathbf{O}&\bar{\mathbf{H}}_{2}\end{pmatrix},\widetilde{\mathbf{X}}_{2}=\begin{pmatrix}\mathbf{O}_{(n-M_{2})\times M_{1}}&\mathbf{O}\\ \mathbf{O}&\widetilde{\mathbf{H}}_{2}\end{pmatrix}. \tag{27}\] Denote \(\mathcal{M}_{n}(\mathcal{A})\) as the algebra of \(n\times n\) matrices with the elements in \(\mathcal{A}\). 
With an element \(\boldsymbol{\mathcal{X}}\in\mathcal{M}_{n}(\mathcal{A})\), define a map \(\mathcal{E}_{\mathcal{D}_{n}}:\mathcal{M}_{n}(\mathcal{A})\rightarrow\mathcal{D}_{n}\) as follows: \[\mathcal{E}_{\mathcal{D}_{n}}\{\boldsymbol{\mathcal{X}}\}=\text{diag}\{\phi(\mathcal{X}_{1,1}),\phi(\mathcal{X}_{2,2}),...,\phi(\mathcal{X}_{n,n})\}. \tag{29}\] It can be inferred that for any deterministic matrix \(\mathbf{Y}\in\mathcal{M}_{n}\), \(\mathcal{E}_{\mathcal{D}_{N}}\{\widetilde{\boldsymbol{\mathcal{H}}}_{i}^{H}\mathbf{Y}\widetilde{\boldsymbol{\mathcal{H}}}_{i}\}=E_{\mathcal{D}_{N}}\{\widetilde{\mathbf{H}}_{i}^{H}\mathbf{Y}\widetilde{\mathbf{H}}_{i}\}\). Define the \(\mathcal{D}_{n}\)-valued Cauchy transform of \(\boldsymbol{\mathcal{X}}\) as follows: \[\mathcal{G}_{\boldsymbol{\mathcal{X}}}^{\mathcal{D}_{n}}(\mathbf{A})=\mathcal{E}_{\mathcal{D}_{n}}\{(\mathbf{A}-\boldsymbol{\mathcal{X}})^{-1}\}, \tag{30}\] where \(\mathbf{A}\in\mathcal{D}_{n}\) satisfies \(\frac{1}{2i}(\mathbf{A}-\mathbf{A}^{H})\succ 0\). Then the following theorem is presented: **Theorem 2**.: \(\boldsymbol{\mathcal{J}}\) _and \(\mathbf{J}\) satisfy:_ \[\lim_{n\rightarrow+\infty}\mathcal{G}_{\mathbf{J}}^{\mathcal{D}_{n}}(z\mathbf{I}_{n})-\mathcal{G}_{\boldsymbol{\mathcal{J}}}^{\mathcal{D}_{n}}(z\mathbf{I}_{n})=0. \tag{31}\] Proof.: See Appendix A-B in [22]; this theorem is a special case of the one in [22]. Then, by deriving \(\mathcal{G}_{\boldsymbol{\mathcal{J}}}^{\mathcal{D}_{n}}(z\mathbf{I}_{n})\), \(\mathcal{G}_{\mathbf{J}}^{\mathcal{D}_{n}}(\mathbf{C})\) can be obtained. Before that, a required lemma is introduced: **Lemma 1**.: _Assume \(\mathbf{X}\in\mathbb{C}^{m\times n}\) is a Gaussian random matrix with mean \(\mathbf{O}\) and covariance matrix \(\mathbf{I}\), and \(\mathbf{Y}\in\mathbb{C}^{n\times n}\) is a deterministic diagonal matrix independent of \(\mathbf{X}\); then_ \[E_{\mathcal{D}_{m}}\{\mathbf{X}\mathbf{Y}\mathbf{X}^{H}\}=\text{Tr}\{\mathbf{Y}\}\mathbf{I}_{m}. \tag{32}\] Proof.: See Appendix A. Then the approach to derive \(\mathcal{G}_{\boldsymbol{\mathcal{J}}}^{\mathcal{D}_{n}}(z\mathbf{I}_{n})\) is presented as follows: **Theorem 3**.: _Divide \(\mathcal{G}_{\boldsymbol{\mathcal{J}}}^{\mathcal{D}_{n}}(\mathbf{C})\) into several block matrices as follows: Denote \(\mathcal{G}_{\boldsymbol{\mathcal{J}}}^{\mathcal{D}_{n}}(\mathbf{C})=\text{blkdiag}\{\mathbf{E}_{1},\mathbf{E}_{2},\mathbf{E}_{3},\mathbf{E}_{4}\}\), and \(\mathbf{E}_{i}=\text{blkdiag}\{\mathbf{E}_{i,1},\mathbf{E}_{i,2}\}\), where \(\mathbf{E}_{1,1}\in\mathbb{C}^{M_{1}\times M_{1}}\), \(\mathbf{E}_{1,2}\in\mathbb{C}^{N\times N}\), \(\mathbf{E}_{2,1}\in\mathbb{C}^{(n-M_{2})\times(n-M_{2})}\), \(\mathbf{E}_{2,2}\in\mathbb{C}^{M_{2}\times M_{2}}\), \(\mathbf{E}_{3,1}\in\mathbb{C}^{M_{1}\times M_{1}}\), and \(\mathbf{E}_{3,2}\in\mathbb{C}^{N\times N}\) are diagonal matrices. 
Then \(\mathbf{E}_{i,j}\) satisfy the following equations:_ \[\mathbf{E}_{1,1}=\text{diag}\{((z-\text{Tr}\{\mathbf{E}_{1,2}\})\mathbf{I}_{M_{1}}-\bar{\mathbf{H}}_{1}\mathbf{A}_{1}^{-1}\bar{\mathbf{H}}_{1}^{H})^{-1}\}, \tag{33}\] \[\mathbf{E}_{1,2}=\text{diag}\{(\mathbf{A}_{1}-(z-\text{Tr}\{\mathbf{E}_{1,2}\})^{-1}\bar{\mathbf{H}}_{1}^{H}\bar{\mathbf{H}}_{1})^{-1}\}, \tag{34}\] \[\mathbf{E}_{2,2}=\text{diag}\{(\bar{\mathbf{H}}_{2}\mathbf{A}_{2}^{-1}\bar{\mathbf{H}}_{2}^{H}-(1+\text{Tr}\{\mathbf{E}_{1,2}\})\mathbf{I}_{M_{2}})^{-1}\}, \tag{35}\] _where_ \[\mathbf{A}_{1}=(1+\text{Tr}\{\mathbf{E}_{1,2}\})^{-1}\bar{\mathbf{H}}_{2}^{H}\bar{\mathbf{H}}_{2}-(\text{Tr}\{\mathbf{E}_{2,2}\}+\text{Tr}\{\mathbf{E}_{1,1}\})\mathbf{I}_{N}, \tag{36}\] \[\mathbf{A}_{2}=(z-\text{Tr}\{\mathbf{E}_{1,2}\})^{-1}\bar{\mathbf{H}}_{1}^{H}\bar{\mathbf{H}}_{1}+(\text{Tr}\{\mathbf{E}_{2,2}\}+\text{Tr}\{\mathbf{E}_{1,1}\})\mathbf{I}_{N}. \tag{37}\] Proof.: See Appendix B. From the above theorem, the numerical results of \(\mathbf{E}_{i,j}\) for a certain input \(z_{0}\) can be obtained by iteration. Then the asymptotic result of \(G_{\mathbf{L}}(z_{0})\) can be obtained by \[G_{\mathbf{L}}(z_{0})\approx\frac{1}{M_{1}}\text{Tr}\{\mathbf{E}_{1,1}\}. \tag{38}\] The accuracy improves when \(n\) increases, i.e., when the number of antennas increases. Therefore, the approach provides more accurate results in large-scale MIMO scenarios. To obtain \(G_{\omega}(z_{0})\), the following theorem is introduced: **Theorem 4**.: \(G_{\omega}(z_{0})\) _can be expressed as follows:_ \[G_{\omega}(z_{0})=\frac{M_{1}}{S}G_{\mathbf{L}}(z_{0})-\frac{M_{1}-S}{Sz_{0}}. \tag{39}\] Proof.: See Appendix C. #### III-C2 The case \(M_{2}<N<M_{1}+M_{2}\) When \(M_{2}<N<M_{1}+M_{2}\), it is difficult to find a matrix constructed from \(\mathbf{H}_{1}\) and \(\mathbf{H}_{2}\) which has the same eigenvalues as \(\omega\). Therefore, we add rows to the bottom of \(\mathbf{H}_{2}\) to transfer this case to the \(M_{2}=N\) case. Before that, an important theorem is introduced: **Theorem 5**.: _For two matrices \(\mathbf{H}_{1}\in\mathbb{C}^{M_{1}\times N}\), \(\mathbf{H}_{2}\in\mathbb{C}^{M_{2}\times N}\), \(M_{2}<N<M_{1}+M_{2}\), and \(S=M_{1}+M_{2}-N\), construct a new matrix_ \[\mathbf{H}_{3}=\begin{pmatrix}\mathbf{H}_{2}\\ \epsilon\mathbf{F}\end{pmatrix}, \tag{40}\] _where \(\mathbf{F}\in\mathbb{C}^{(N-M_{2})\times N}\) satisfies \(\text{rank}\{\mathbf{H}_{3}\}=N\), and \(\epsilon>0\) is an approximation parameter. Assume the GSVs of the pair \(\{\mathbf{H}_{1},\mathbf{H}_{3}\}\) are sorted from the smallest to the largest as \(\mu_{1}<\mu_{2}<...<\mu_{M_{1}}\). Then when \(\epsilon\to 0\), \(\mu_{1},\mu_{2},...,\mu_{S}\) continuously converge to the GSVs of the pair \(\{\mathbf{H}_{1},\mathbf{H}_{2}\}\), and \(\mu_{S+1},\mu_{S+2},...,\mu_{M_{1}}\rightarrow+\infty\), respectively._ Proof.: See [25]. Choose \(\mathbf{F}\) as the first \(N-M_{2}\) rows of \(\mathbf{I}_{N}\) and construct \(\mathbf{H}_{3}\) as in (40). Denote \(\mu\) as the GSV of \(\{\mathbf{H}_{1},\mathbf{H}_{3}\}\); then \(\mu\) equals the eigenvalues of \(\mathbf{L}=\mathbf{H}_{1}(\mathbf{H}_{3}^{H}\mathbf{H}_{3})^{-1}\mathbf{H}_{1}^{H}\), which can be derived by using a similar approach to the case of \(M_{2}\geq N\). 
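Before constructing the linearization for this second case, the fixed-point procedure of Theorem 3 for \(M_{2}\geq N\) can be made concrete as in the sketch below. This is our own illustrative implementation of (33)-(39), not the authors' code; only the traces of the diag blocks in (33)-(35) are needed, and the initialization and damping choices are heuristic assumptions (convergence is not guaranteed by the sketch itself).

```python
import numpy as np

def gsv_cauchy_transform(z, H1_bar, H2_bar, n_iter=500, damp=0.5):
    """Approximate G_omega(z) for a real z < 0 via the Theorem 3 fixed point (M2 >= N)."""
    M1, N = H1_bar.shape
    M2 = H2_bar.shape[0]
    S = min(M1, N) + min(M2, N) - min(M1 + M2, N)
    I_N, I_M1, I_M2 = np.eye(N), np.eye(M1), np.eye(M2)
    # initial guesses for Tr{E_11}, Tr{E_12}, Tr{E_22}, roughly the large-|z| behaviour
    t11, t12, t22 = M1 / z, N / z, M2 / z
    for _ in range(n_iter):
        A1 = (H2_bar.conj().T @ H2_bar) / (1.0 + t12) - (t22 + t11) * I_N          # (36)
        A2 = (H1_bar.conj().T @ H1_bar) / (z - t12) + (t22 + t11) * I_N            # (37)
        E11 = np.linalg.inv((z - t12) * I_M1
                            - H1_bar @ np.linalg.solve(A1, H1_bar.conj().T))       # (33)
        E12 = np.linalg.inv(A1 - (H1_bar.conj().T @ H1_bar) / (z - t12))           # (34)
        E22 = np.linalg.inv(H2_bar @ np.linalg.solve(A2, H2_bar.conj().T)
                            - (1.0 + t12) * I_M2)                                  # (35)
        old = np.array([t11, t12, t22], dtype=complex)
        new = np.array([np.trace(E11), np.trace(E12), np.trace(E22)])
        t11, t12, t22 = damp * old + (1.0 - damp) * new                            # damped update
    G_L = t11 / M1                                   # (38): (1/M1) Tr{E_11}
    return (M1 / S) * G_L - (M1 - S) / (S * z)       # (39)
```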
Denote \(n=M_{1}+N\) and construct \[\mathbf{J}=\begin{pmatrix}\mathbf{O}&\mathbf{X}_{3}^{H}&\mathbf{O}&\mathbf{I}_{n}\\ \mathbf{X}_{3}&\mathbf{I}_{n}&\mathbf{O}&\mathbf{O}\\ \mathbf{O}&\mathbf{O}&\mathbf{X}_{1}&-\mathbf{I}_{n}\\ \mathbf{I}_{n}&\mathbf{O}&-\mathbf{I}_{n}&\mathbf{O}\end{pmatrix}, \tag{41}\] where \(\mathbf{X}_{1}\) is defined as in (21), \(\mathbf{X}_{3}\) is obtained from \(\mathbf{X}_{2}\) in (21) by replacing \(\mathbf{H}_{2}\) with \(\mathbf{H}_{3}\), and \[\bar{\mathbf{H}}_{3}=\begin{pmatrix}\bar{\mathbf{H}}_{2}\\ \epsilon\mathbf{F}\end{pmatrix},\quad\widetilde{\mathbf{H}}_{3}=\begin{pmatrix}\widetilde{\mathbf{H}}_{2}\\ \mathbf{O}\end{pmatrix}. \tag{46}\] Then the free deterministic equivalent of \(\mathbf{J}\) can be constructed as follows. Define \(\widetilde{\boldsymbol{\mathcal{H}}}_{i}\) as the same matrix as in the case of \(M_{2}\geq N\). Then, by replacing all \(\mathbf{H}_{1}\) with \(\boldsymbol{\mathcal{H}}_{1}=\bar{\mathbf{H}}_{1}+\widetilde{\boldsymbol{\mathcal{H}}}_{1}\) and \(\mathbf{H}_{3}\) with \[\boldsymbol{\mathcal{H}}_{3}=\begin{pmatrix}\bar{\mathbf{H}}_{2}\\ \epsilon\mathbf{F}\end{pmatrix}+\begin{pmatrix}\widetilde{\boldsymbol{\mathcal{H}}}_{2}\\ \mathbf{O}\end{pmatrix}, \tag{47}\] in \(\mathbf{J}\), \(\boldsymbol{\mathcal{J}}\) can be constructed as the free deterministic equivalent of \(\mathbf{J}\). \(\boldsymbol{\mathcal{J}}\) can be written as \(\boldsymbol{\mathcal{J}}=\bar{\mathbf{J}}+\widetilde{\boldsymbol{\mathcal{J}}}\), where \(\widetilde{\boldsymbol{\mathcal{J}}}\) can be constructed by replacing \(\widetilde{\mathbf{H}}_{i}\) with \(\widetilde{\boldsymbol{\mathcal{H}}}_{i}\) in \(\widetilde{\mathbf{J}}\). \(\boldsymbol{\mathcal{J}}\) and \(\mathbf{J}\) satisfy (31). The approach to derive \(\mathcal{G}_{\boldsymbol{\mathcal{J}}}^{\mathcal{D}_{4n}}(z\mathbf{I}_{n})\) is presented as follows: **Theorem 6**.: _Divide \(\mathcal{G}_{\boldsymbol{\mathcal{J}}}^{\mathcal{D}_{4n}}(\mathbf{C})\) into several block matrices as follows: Denote \(\mathcal{G}_{\boldsymbol{\mathcal{J}}}^{\mathcal{D}_{4n}}(\mathbf{C})=\text{blkdiag}\{\mathbf{E}_{1},\mathbf{E}_{2},\mathbf{E}_{3},\mathbf{E}_{4}\}\), and \(\mathbf{E}_{i}=\text{blkdiag}\{\mathbf{E}_{i,1},\mathbf{E}_{i,2}\}\), where \(\mathbf{E}_{1,1}\in\mathbb{C}^{M_{1}\times M_{1}}\), \(\mathbf{E}_{1,2}\in\mathbb{C}^{N\times N}\), \(\mathbf{E}_{2,1}\in\mathbb{C}^{M_{1}\times M_{1}}\), \(\mathbf{E}_{2,2}\in\mathbb{C}^{N\times N}\), \(\mathbf{E}_{3,1}\in\mathbb{C}^{M_{1}\times M_{1}}\), and \(\mathbf{E}_{3,2}\in\mathbb{C}^{N\times N}\) are diagonal matrices. 
Then \(\mathbf{E}_{i,j}\) satisfy the following equations:_ \[\mathbf{E}_{1,1}=\text{diag}\{((z-\text{Tr}\{\mathbf{E}_{1,2}\})\mathbf{I}_{M_{1}}-\bar{\mathbf{H}}_{1}\mathbf{A}_{1}^{-1}\bar{\mathbf{H}}_{1}^{H})^{-1}\}, \tag{48}\] \[\mathbf{E}_{1,2}=\text{diag}\{(\mathbf{A}_{1}-(z-\text{Tr}\{\mathbf{E}_{1,2}\})^{-1}\bar{\mathbf{H}}_{1}^{H}\bar{\mathbf{H}}_{1})^{-1}\}, \tag{49}\] \[\mathbf{E}_{2,2}^{(1)}=\text{diag}\{(\bar{\mathbf{H}}_{2}\mathbf{A}_{2}^{-1}\bar{\mathbf{H}}_{2}^{H}-(1+\text{Tr}\{\mathbf{E}_{1,2}\})\mathbf{I}_{M_{2}}-\epsilon^{2}\bar{\mathbf{H}}_{2}\mathbf{A}_{2}^{-1}\mathbf{F}^{H}(\epsilon^{2}\mathbf{F}\mathbf{A}_{2}^{-1}\mathbf{F}^{H}-\mathbf{I}_{N-M_{2}})^{-1}\mathbf{F}\mathbf{A}_{2}^{-1}\bar{\mathbf{H}}_{2}^{H})^{-1}\}, \tag{50}\] _where_ \[\mathbf{A}_{1}=(1+\text{Tr}\{\mathbf{E}_{1,2}\})^{-1}\bar{\mathbf{H}}_{2}^{H}\bar{\mathbf{H}}_{2}+\epsilon^{2}\mathbf{F}^{H}\mathbf{F}-(\text{Tr}\{\mathbf{E}_{2,2}^{(1)}\}+\text{Tr}\{\mathbf{E}_{1,1}\})\mathbf{I}_{N}, \tag{51}\] \[\mathbf{A}_{2}=(z-\text{Tr}\{\mathbf{E}_{1,2}\})^{-1}\bar{\mathbf{H}}_{1}^{H}\bar{\mathbf{H}}_{1}+(\text{Tr}\{\mathbf{E}_{2,2}^{(1)}\}+\text{Tr}\{\mathbf{E}_{1,1}\})\mathbf{I}_{N}, \tag{52}\] \[\mathbf{F}=\big{(}\mathbf{I}_{N-M_{2}},\mathbf{O}_{(N-M_{2})\times M_{2}}\big{)}, \tag{53}\] _and \(\mathbf{E}_{2,2}^{(1)}\) is the upper-left \(M_{2}\times M_{2}\) block of \(\mathbf{E}_{2,2}\)._ Proof.: See Appendix D. Similarly, the numerical results of \(\mathbf{E}_{i,j}\) for a certain input \(z_{0}\) can be obtained with an iterative approach. Then \(G_{\mathbf{L}}(z_{0})\) can be obtained by using (38). To obtain \(G_{\omega}(z_{0})\), the following theorem is introduced: **Theorem 7**.: _When \(\epsilon\to 0\), \(G_{\omega}(z_{0})\) converges to the following result:_ \[G_{\omega}(z_{0})\rightarrow\frac{M_{1}}{S}G_{\mathbf{L}}(z_{0}). \tag{55}\] Proof.: See Appendix E. Now \(G_{\omega}(z)\) can be obtained in both cases, and can be used to derive \(\bar{R}_{i}\), which will be discussed in the next section. ### _The Results of Average Rates_ In this subsection, the relationship between \(\bar{R}_{i}\) and \(G_{\omega}(z)\) is studied. This requires the following theorem. **Theorem 8**.: \(\bar{R}_{i}\) _can be computed from \(G_{\omega}(z)\) as follows:_ \[\bar{R}_{1}=\log\left(1+\frac{l_{1}\rho}{td_{1}^{\tau}}\right)+\int_{0}^{\frac{l_{1}\rho}{td_{1}^{\tau}}}\left(\frac{1}{(y+1)^{2}}G_{\omega}(-(y+1)^{-1})\right)dy, \tag{56}\] \[\bar{R}_{2}=\int_{0}^{\frac{\rho}{td_{2}^{\tau}}}\left(-G_{\omega}(-(1+y))+l_{1}G_{\omega}(-(1+l_{1}y))\right)dy. \tag{57}\] Proof.: See Appendix F. By applying this theorem, the approximate numerical values of \(\bar{R}_{i}\) can be obtained by computing the numerical integrations in (56) and (57). ## IV The Special Case of Rayleigh Fading In [14], the average rates of GSVD-based MIMO-NOMA communication systems with Rayleigh fading were provided. However, the expressions are not in closed form. Besides, the numbers of antennas are restricted to \(M_{1}=M_{2}\). Motivated by this, the closed-form expressions of \(\bar{R}_{i}\) with Rayleigh fading are analyzed and derived based on the results presented in the previous section. 
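Before specializing to Rayleigh fading, the sketch below illustrates how the integrals in (56)-(57) can be evaluated numerically once a routine for \(G_{\omega}(z)\) with \(\text{Re}(z)<0\) is available (for instance the Rician fixed point sketched earlier). The quadrature choice and function names are our assumptions.

```python
import numpy as np

def average_rates(G_omega, rho, l1, t, d1, d2, tau, n_grid=2000):
    """Evaluate the integrals in (56)-(57) with a simple trapezoidal rule (sketch)."""
    trap = lambda f, y: np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(y))
    # Equation (56)
    up1 = l1 * rho / (t * d1 ** tau)
    y1 = np.linspace(0.0, up1, n_grid)
    f1 = np.array([np.real(G_omega(-1.0 / (y + 1.0))) / (y + 1.0) ** 2 for y in y1])
    R1 = np.log(1.0 + up1) + trap(f1, y1)
    # Equation (57), with rho/(t*d2^tau) as the upper limit
    up2 = rho / (t * d2 ** tau)
    y2 = np.linspace(0.0, up2, n_grid)
    f2 = np.array([np.real(-G_omega(-(1.0 + y)) + l1 * G_omega(-(1.0 + l1 * y))) for y in y2])
    R2 = trap(f2, y2)
    return R1, R2
```

Any routine returning \(G_{\omega}(z)\) for \(\text{Re}(z)<0\) can be passed as G_omega, including a closed-form expression in the Rayleigh case.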
When the channel is modeled as Rayleigh fading, i.e., \(\bar{\mathbf{H}}_{i}=\mathbf{O}\), for the case of \(M_{2}\geq N\), (33) - (37) can be rewritten as follows: \[\text{Tr}\{\mathbf{E}_{1,1}\} =\text{Tr}\{(z-\text{Tr}\{\mathbf{E}_{1,2}\})^{-1}\mathbf{I}_{M_{1}}\} \tag{58}\] \[=M_{1}(z-\text{Tr}\{\mathbf{E}_{1,2}\})^{-1},\] \[\text{Tr}\{\mathbf{E}_{1,2}\} =-\text{Tr}\{(\text{Tr}\{\mathbf{E}_{2,2}\}+\text{Tr}\{\mathbf{E}_{1,1} \})^{-1}\mathbf{I}_{N}\} \tag{59}\] \[=-N(\text{Tr}\{\mathbf{E}_{2,2}\}+\text{Tr}\{\mathbf{E}_{1,1}\})^{-1},\] \[\text{Tr}\{\mathbf{E}_{2,2}\} =-\text{Tr}\{(1+\text{Tr}\{\mathbf{E}_{1,2}\})^{-1}\mathbf{I}_{M_{2}}\} \tag{60}\] \[=-M_{2}(1+\text{Tr}\{\mathbf{E}_{1,2}\})^{-1}.\] And for the case of \(M_{2}<N<M_{1}+M_{2}\). (48) - (53) can be rewritten as follows: \[\text{Tr}\{\mathbf{E}_{1,1}\}=M_{1}(z-\text{Tr}\{\mathbf{E}_{1,2}\})^{-1}, \tag{61}\] \[\text{Tr}\{\mathbf{E}_{1,2}\} =-\text{Tr}\{(\text{Tr}\{\mathbf{E}_{2,2}^{(1)}\}+\text{Tr}\{ \mathbf{E}_{1,1}\}-\epsilon^{2}\mathbf{F}^{H}\mathbf{F})^{-1}\mathbf{I}_{N}\}\] (62) \[=-M_{2}(\ **Theorem 9**.: _When \(\bar{\mathbf{H}}_{i}=\mathbf{O}\), the closed-form asymptotic results of \(G_{\mathbf{L}}(z)\) can be derived as follows:_ \[\begin{split}& G_{\mathbf{L}}(z)\approx\\ &\left\{\frac{M_{1}-N+2M_{1}z+M_{2}z-Nz+\sqrt{\Delta(z)}}{2M_{1}z (z+1)},z\neq-1,\right.\\ &-\frac{M_{1}+M_{2}-N}{M_{1}+M_{2}},z=-1,\end{split} \tag{63}\] _where_ \[\Delta(z)=(M_{1}-N)^{2}-2Qz+(M_{2}-N)^{2}z^{2}, \tag{64}\] \[Q=NM_{1}+NM_{2}+M_{1}M_{2}-N^{2}. \tag{65}\] Proof.: By solving (57) - (59), (63) can be obtained. Define \(I(a,b)=\int_{a}^{b}G_{\mathbf{L}}(z)dz\). The close-formed expression of \(I(a,b)\) can be derived by taking the definite integral of (63) as follows: \[I(a,b)=\frac{M_{1}-N}{2M_{1}}\log\frac{b}{a}+\frac{1}{2M_{1}}(I_{1}(a,b)+I_{2 }(a,b)+I_{3}(a,b)), \tag{66}\] where \[\begin{split}& I_{1}(a,b)=-|M_{1}-N|\times\\ &\log\left(\frac{a}{b}\frac{(M_{1}-N)^{2}-Qb+|M_{1}-N|\sqrt{ \Delta(b)}}{(M_{1}-N)^{2}-Qa+|M_{1}-N|\sqrt{\Delta(a)}}\right),\end{split} \tag{67}\] \[\begin{split}& I_{2}(a,b)=-|M_{2}-N|\times\\ &\log\left(\frac{Q-(M_{2}-N)^{2}b+|M_{2}-N|\sqrt{\Delta(b)}}{Q-(M _{2}-N)^{2}a+|M_{2}-N|\sqrt{\Delta(a)}}\right),\end{split} \tag{68}\] \[\begin{split}& I_{3}(a,b)=(M_{1}+M_{2})\times\\ &\log\left(\frac{Q_{1}-Q_{2}b+(M_{1}+M_{2})\sqrt{\Delta(b)}}{Q_{1} -Q_{2}a+(M_{1}+M_{2})\sqrt{\Delta(a)}}\right),\end{split} \tag{69}\] \[\begin{split}& Q_{1}=M_{1}^{2}+M_{1}M_{2}+NM_{2}-NM_{1},\\ & Q_{2}=M_{2}^{2}+M_{1}M_{2}+NM_{1}-NM_{2}.\end{split} \tag{70}\] \(\Delta(a)\) and \(\Delta(b)\) are defined by substituting \(z\) with \(a\) and \(b\) in (64), respectively. The integration only involves basic mathematical knowledge but is tedious, so the detailed process is not concluded in this paper. 
Then closed-form expressions of \(\bar{R}_{i}\) can be obtained as follows: **Theorem 10**.: _When \(\bar{\mathbf{H}}_{i}=\mathbf{O}\), the asymptotic result of \(\bar{R}_{i}\) can be expressed as follows:_ * _When_ \(M_{2}\geq N\)_._ \[\bar{R}_{1}\approx\frac{M_{1}}{S}\log\left(1+\frac{t_{1}\rho}{td_{1}^{r}}\right) +\frac{M_{1}}{S}I(-1,-\frac{td_{1}^{r}}{t_{1}\rho+td_{1}^{r}}),\] (71) \[\bar{R}_{2}\approx\frac{M_{1}}{S}I(-1-\frac{\rho l_{1}}{d_{2}^{r}},-1-\frac{\rho}{d_{2}^{r}})-\frac{M_{1}-S}{S}\ln\frac{d_{2}^{r}+\rho}{d_{2}^{ r}+\rho l_{1}}.\] (72) * _When_ \(M_{2}<N<M_{1}+M_{2}\)__ \[\bar{R}_{1}\approx\log\left(1+\frac{l_{1}\rho}{td_{1}^{r}}\right)+\frac{M_{1}}{S }I(-1,-\frac{td_{1}^{r}}{l_{1}\rho+td_{1}^{r}}),\] (73) \[\bar{R}_{2}\approx\frac{M_{1}}{S}I(-1-\frac{\rho l_{1}}{td_{2}^{r}},-1-\frac{\rho}{td_{2}^{r}}).\] (74) Proof.: See Appendix G. In [14], the PDF of \(\omega\) is derived when \(M_{1}=M_{2}\). To compare the results, \(f_{\omega}(x)\) is derived from (63) as follows: **Theorem 11**.: _When \(\bar{\mathbf{H}}_{i}=\mathbf{O}\), the PDF of \(\omega\) can be expressed as follows:_ \[f_{\omega}(x)\!=\!\begin{cases}\frac{\sqrt{2Qx\!-\!(M_{2}\!-\!N)^{2}x^{2}\!-\! (M_{1}\!-\!N)^{2}}}{2\pi Sx(x+1)},x_{1}\!<\!x\!<\!x_{2}\\ 0,\text{otherwise}\end{cases}, \tag{75}\] _where_ \[\begin{split}& x_{1}=\frac{Q-2\sqrt{M_{1}M_{2}(NM_{1}+NM_{2}-N^{2} )}}{(M_{2}-N)^{2}},\\ & x_{2}=\frac{Q+2\sqrt{M_{1}M_{2}(NM_{1}+NM_{2}-N^{2})}}{(M_{2}-N)^{2 }}.\end{split} \tag{76}\] Proof.: See Appendix H. When \(M_{1}=M_{2}\), \(f_{\omega}(x)\) can be verified as the same as the results in [14]. This can also verify the analytical results in Theorem 3, 6 and 9. ## V Simulation Results In this section, numerical results are presented to validate the proposed analytical results. All the numerical results are obtained from \(10^{6}\) simulation experiments with Matlab. The parameters of the channel are chosen as \(d_{1}=200\) m, \(d_{2}=2000\) m, \(\tau=2\),. The power of the white noise is set as \(-20\) dBm. Fig. 1 shows the numerical and analytical results of sum rates \(\bar{R}_{1}+\bar{R}_{2}\) achieved by NOMA. The two sets of antenna numbers are chosen as \((M_{1},M_{2},N)=(24,24,36)\) and \((M_{1},M_{2},N)=(36,48,60)\). The power allocation coefficient is set as \(l_{1}=0.9\). The deterministic matrices \(\bar{\mathbf{H}}_{i}\) are randomly generated. The analytical results are obtained from Theorem 8. For comparison, the results of the traditional OMA scheme are also presented in Fig. 1. As shown in this figure, the gaps between numerical and analytical results are negligible, which verifies the accuracy of the proposed approach. Besides, it can be observed from the figure that the NOMA scheme has higher sum rates than that of OMA scheme, which shows the superior performance of the NOMA scheme. In particular, the gaps between the rates of GSVD-NOMA and the other schemes increase as \(P\) increases, which demonstrates the great benefits of GSVD at high SNR. Fig. 2 presents the numerical and analytical results of rates with different numbers of antennas. The transmission power is set as \(P=40\) dBm. The power allocation coefficient is set as \(l_{1}=0.05\). For the case of \(M_{2}<N<M_{1}+M_{2}\), the approximation parameter \(\epsilon\) is set as \(\epsilon=10^{-5}\). The numbers of antennas \(M_{1}\), \(M_{2}\) and \(N\) increase proportionally, with \(\mu\geq 1\) as the coefficient. 
The proportions are chosen as \((M_{1},M_{2},N)=(\mu,\mu,\mu)\), \((M_{1},M_{2},N)=(\mu,2\mu,\mu)\), \((M_{1},M_{2},N)=(2\mu,2\mu,3\mu)\) and \((M_{1},M_{2},N)=(3\mu,4\mu,5\mu)\), respectively. The analytical results are obtained from Theorem 8. To have the same channel conditions for different \(M_{1},M_{2},N\), the deterministic matrices \(\bar{\mathbf{H}}_{i}\) are set as all-one matrices. From Fig. 2, the numerical and analytical curves agree closely, which verifies the analytical results. In addition, it is worth noting that the analytical results remain very close to the numerical results even when the numbers of antennas are very small, for example, \((M_{1},M_{2},N)=(1,1,1)\). This indicates that the proposed approach performs well even in small-scale MIMO communications, which demonstrates the generality of the approach. Fig. 3 presents the numerical results of the channel average data rate \(R_{i}=\frac{1}{S}\sum_{j=1}^{S}R_{i,j}\) as scatter points. The results of \(\frac{1}{S}\bar{R}_{i}\) are also presented as solid lines for comparison. Channel proportions are chosen as \((M_{1},M_{2},N)=(\mu,2\mu,\mu)\) and \((M_{1},M_{2},N)=(2\mu,2\mu,3\mu)\), respectively. From Fig. 3, \(R_{i}\) can be observed to converge to \(\frac{1}{S}\bar{R}_{i}\) as \(\mu\) increases. When \(\mu\geq 20\), the scatter points fall on the solid lines, and \(\frac{1}{S}\bar{R}_{i}\) can be used to approximate the value of \(R_{i}\). This behavior reflects the asymptotic property of large-scale MIMO communication systems. Fig. 4 presents the numerical and analytical results of rates with different \(\epsilon\) when \(M_{2}<N<M_{1}+M_{2}\). The two sets of antennas are chosen as \((M_{1},M_{2},N)=(24,24,36)\) and \((M_{1},M_{2},N)=(36,48,60)\), respectively. \(\epsilon\) ranges from \(10\) down to \(0.1\). The deterministic matrices \(\bar{\mathbf{H}}_{i}\) are randomly generated. Since the only parameter that changes is the approximation parameter \(\epsilon\), the numerical results of the average rates remain unchanged. It can be observed from Fig. 4 that the accuracy of the analytical results increases rapidly as \(\epsilon\) decreases. Specifically, when \(-\log\epsilon\geq 0.6\), i.e., \(\epsilon\leq 0.25\), the gap between the numerical and analytical results is hardly distinguishable. This demonstrates the high accuracy of the proposed method. Fig. 5 presents the numerical and analytical results of rates with different numbers of antennas when \(\bar{\mathbf{H}}_{i}=\mathbf{O}\), i.e., the channel is modeled as Rayleigh fading. The transmission power is set to \(P=40\) dBm. The power allocation coefficient is set as \(l_{1}=0.05\). The proportions are chosen as \((M_{1},M_{2},N)=(2\mu,2\mu,\mu)\) and \((M_{1},M_{2},N)=(3\mu,4\mu,5\mu)\), respectively. The analytical results are obtained from Theorem 10. It can be seen from Fig. 5 that the differences between the numerical and analytical results are almost invisible, which verifies the accuracy of the closed-form expressions in Theorem 10. In addition, as in the Rician case, the analytical results coincide well with the numerical results even when \(\mu\) is very small, which shows the practical significance of the proposed approach. Fig. 6 presents the closed-form analytical results in Theorem 10, analytical results obtained with numerical integration in Theorem 8, and the results in [14]. Numerical results are also presented for comparison.
Since the results in [14] only considered the case of \(M_{1}=M_{2}\), the two sets of antennas are chosen as \((M_{1},M_{2},N)=(24,24,12)\) and \((M_{1},M_{2},N)=(48,48,60)\), respectively. The power allocation coefficient is set as \(l_{1}=0.05\). It can be seen that there is only a little difference between these results, which demonstrates the accuracy of these three methods.

Fig. 1: Numerical and analytical results of average sum rates \(\bar{R}_{1}+\bar{R}_{2}\) for the NOMA and OMA schemes with different SNR.

Fig. 2: Numerical and analytical results of average rates with different numbers of antennas. \(\mu\) represents the proportionality coefficient of the number of antennas.

Fig. 3: Numerical results of the average channel data rate \(R_{i}=\frac{1}{S}\sum_{j=1}^{S}R_{i,j}\) and analytical results of \(\frac{1}{S}\bar{R}_{i}\) with different numbers of antennas. \(\mu\) represents the proportionality coefficient of the number of antennas.

Fig. 4: Numerical and analytical results of rates with different choices of \(\epsilon\) when \(M_{2}<N\).

Fig. 5: Numerical and analytical results of rates with different numbers of antennas when \(\bar{\mathbf{H}}_{i}=\mathbf{O}\). \(\mu\) represents the proportionality coefficient of the number of antennas.

Fig. 6: Numerical results and analytical results obtained by different methods when \(\bar{\mathbf{H}}_{i}=\mathbf{O}\) and \(M_{1}=M_{2}\).

## VI Conclusion

In this paper, a GSVD-based MIMO-NOMA communication system with Rician fading was considered. Based on operator-valued free probability theory, the linearization trick and the deterministic equivalents method were exploited to obtain \(G_{\omega}(z_{0})\), the Cauchy transform of the GSVs of the channel matrices. Then the average rates \(\bar{R}_{i}\) were obtained from \(G_{\omega}(z_{0})\). In addition, the special case when \(\bar{\mathbf{H}}_{i}=\mathbf{O}\) was considered, and closed-form expressions of the average rates were derived. Simulation results were provided to verify the accuracy of the analytical results.

## Appendix A Proof of Lemma 1

Denote \(\mathbf{X}=\{\mathbf{x}_{1},\mathbf{x}_{2},...,\mathbf{x}_{n}\}\), \(\mathbf{Y}=\text{diag}\{y_{1},...,y_{n}\}\), then \(E_{\mathcal{D}_{m}}\{\mathbf{X}\mathbf{Y}\mathbf{X}^{H}\}\) can be derived as \[\begin{split} E_{\mathcal{D}_{m}}\{\mathbf{X}\mathbf{Y}\mathbf{X}^{H}\}&=E_{\mathcal{D}_{m}}\{\sum_{i=1}^{n}y_{i}\mathbf{x}_{i}\mathbf{x}_{i}^{H}\}\\ &=\sum_{i=1}^{n}y_{i}E_{\mathcal{D}_{m}}\{\mathbf{x}_{i}\mathbf{x}_{i}^{H}\}=\text{Tr}\{\mathbf{Y}\}\mathbf{I}_{m}.\end{split} \tag{77}\] The lemma is now proved.

## Appendix B Proof of Theorem 3

Since \(\mathcal{J}=\bar{\mathbf{J}}+\widetilde{\mathcal{J}}\), \(\mathcal{G}_{\mathcal{J}}^{\mathcal{D}_{\mathbf{z}_{n}}}(\mathbf{C})\) can be expressed with \(\bar{\mathbf{J}}\) and \(\widetilde{\mathcal{J}}\) by applying the subordination theorem as follows: \[\mathcal{G}_{\mathcal{J}}^{\mathcal{D}_{\mathbf{z}_{n}}}(\mathbf{C}) =\mathcal{G}_{\bar{\mathbf{J}}}^{\mathcal{D}_{\mathbf{z}_{n}}}(\mathbf{C}-\mathcal{R}_{\widetilde{\mathcal{J}}}^{\mathcal{D}_{\mathbf{z}_{n}}}(\mathcal{G}_{\mathcal{J}}^{\mathcal{D}_{\mathbf{z}_{n}}}(\mathbf{C}))) \tag{78}\] \[=E_{\mathcal{D}_{\mathbf{z}_{n}}}\{(\mathbf{C}-\mathcal{R}_{\widetilde{\mathcal{J}}}^{\mathcal{D}_{\mathbf{z}_{n}}}(\mathcal{G}_{\mathcal{J}}^{\mathcal{D}_{\mathbf{z}_{n}}}(\mathbf{C}))-\bar{\mathbf{J}})^{-1}\}.\] where \(\mathcal{R}_{\widetilde{\mathcal{J}}}^{\mathcal{D}_{\mathbf{z}_{n}}}()\) is the \(\mathcal{D}_{4n}\)-valued R-transform [22].
Since \(\widetilde{\mathcal{J}}\) is a hermitian matrix whose elements on and above the diagonal are freely independent, it is semicircular over \(\mathcal{D}_{12n}\) and free from the deterministic matrix \(\tilde{G}\)[22]. From [26], Th. 7.2, the \(\mathcal{D}_{4n}\)-valued R-transform of a semicircular variable can be written as follows: \[\mathcal{R}_{\widetilde{\mathcal{J}}}^{\mathcal{D}_{\mathbf{z}_{n}}}(\mathbf{ D})=\mathcal{E}_{\mathcal{D}_{4n}}\{\widetilde{\mathcal{J}}\mathbf{D}\widetilde{ \mathcal{J}}\}. \tag{79}\] Assume \(\mathcal{G}_{\mathcal{D}_{\mathbf{z}}}^{\mathcal{D}_{\mathbf{z}_{n}}}( \mathbf{C})=\text{blkdiag}\{\mathbf{E}_{1},\mathbf{E}_{2},\mathbf{E}_{3}, \mathbf{E}_{4}\}\) and substitute it into \(\mathcal{R}_{\widetilde{\mathcal{J}}}^{\mathcal{D}_{\mathbf{z}}}(\mathcal{G} _{\mathcal{J}}^{\mathcal{D}_{\mathbf{z}}}(\mathbf{C}))\), we have \[\mathcal{R}_{\widetilde{\mathcal{J}}}^{\mathcal{D}_{\mathbf{z}_{n}}}( \mathcal{G}_{\mathcal{J}}^{\mathcal{D}_{\mathbf{z}_{n}}}(\mathbf{C}))=\text{ blkdiag}\{\mathcal{E}_{\mathcal{D}_{\mathbf{z}}}\{\widetilde{\mathcal{X}}_{2}^{H} \mathbf{E}_{2}\widetilde{\mathcal{X}}_{2}\},\] \[\mathcal{E}_{\mathcal{D}_{\mathbf{z}}}\{\widetilde{\mathcal{X}}_ {2}\mathbf{E}_{1}\widetilde{\mathcal{X}}_{2}^{H}\},\mathcal{E}_{\mathcal{D}_{ \mathbf{z}}}\{\widetilde{\mathcal{X}}_{1}\mathbf{E}_{3}\widetilde{\mathcal{ X}}_{1}\},\mathbf{O}\}. \tag{80}\] To make it more concise, we denote \[\mathcal{R}_{\widetilde{\mathcal{J}}}^{\mathcal{D}_{\mathbf{z}_{n}}}(\mathcal{G }_{\mathcal{J}}^{\mathcal{D}_{\mathbf{z}_{n}}}(\mathbf{C}))=\text{blkdiag}\{ \mathbf{K}_{1},\mathbf{K}_{2},\mathbf{K}_{3},\mathbf{O}\}. \tag{81}\] Then by substituting (80) into (78), the following equations can be derived: \[\mathbf{E}_{1} =E_{\mathcal{D}_{\mathbf{z}}}\{(\mathbf{B}-\mathbf{K}_{1}- \mathbf{K}_{3}-\bar{\mathbf{X}}_{1}+\bar{\mathbf{X}}_{2}^{H}(\mathbf{K}_{2} +\mathbf{I}_{n})^{-1}\bar{\mathbf{X}}_{2})^{-1}\}, \tag{82}\] \[\mathbf{E}_{2} =E_{\mathcal{D}_{\mathbf{z}}}\{(-\mathbf{K}_{2}\!-\!\mathbf{I}_{ n}\!-\!\bar{\mathbf{X}}_{2}(\mathbf{B}\!-\!\mathbf{K}_{1}\!-\!\mathbf{K}_{3} \!-\!\bar{\mathbf{X}}_{1})^{-1}\bar{\mathbf{X}}_{2}^{H})^{-1}\},\] (83) \[\mathbf{E}_{3} =E_{\mathcal{D}_{\mathbf{z}}}\{(\mathbf{B}-\mathbf{K}_{1}- \mathbf{K}_{3}-\bar{\mathbf{X}}_{1}+\bar{\mathbf{X}}_{2}^{H}(\mathbf{K}_{2} +\mathbf{I}_{n})^{-1}\bar{\mathbf{X}}_{2})^{-1}\}. \tag{84}\] Then we are going to derive \(\mathbf{K}_{i}\). Denote the diagonal matrix \(\mathbf{E}_{i}=\text{blkdiag}\{\mathbf{E}_{i,1},\mathbf{E}_{i,2}\}\), where \(\mathbf{E}_{1,1}\in\mathbb{C}^{M_{1}\times M_{1}}\), \(\mathbf{E}_{1,2}\in\mathbb{C}^{N\times N}\), \(\mathbf{E}_{2,1}\in\mathbb{C}^{n-M_{2}\times n-M_{2}}\), \(\mathbf{E}_{2,2}\in\mathbb{C}^{M_{2}\times M_{2}}\), \(\mathbf{E}_{3,1}\in\mathbb{C}^{M_{1}\times M_{1}}\), \(\mathbf{E}_{3,2}\in\mathbb{C}^{N\times N}\). By substituting \(\mathbf{E}_{2}\) into \(\mathbf{K}_{1}=E_{\mathcal{D}_{\mathbf{z}_{n}}}\{\widetilde{\mathcal{X}}_{2}^{H }\mathbf{E}_{2}\widetilde{\mathcal{X}}_{2}\}\), \(\mathbf{K}_{1}\) can be derived as follows: \[\mathbf{K}_{1}=\text{blkdiag}\{\mathbf{O},E_{\mathcal{D}_{N}}\{\mathbf{\tilde{ H}}_{2}^{H}\mathbf{E}_{2,2}\widetilde{\mathbf{H}}_{2}\}\}. \tag{85}\] From Lemma 1, \(\mathbf{K}_{1}\) can be derived as follows: \[\mathbf{K}_{1}=\text{blkdiag}\{\mathbf{O},\text{Tr}\{\mathbf{E}_{2,2}\}\mathbf{ I}_{N}\}. 
\tag{86}\] Similarly, \(\mathbf{K}_{2}\) and \(\mathbf{K}_{3}\) can be derived as follows: \[\mathbf{K}_{2}=\text{blkdiag}\{\mathbf{O},\text{Tr}\{\mathbf{E}_{1,2}\}\mathbf{ I}_{M_{2}}\}, \tag{87}\] \[\mathbf{K}_{3}=\text{blkdiag}\{\text{Tr}\{\mathbf{E}_{3,2}\}\mathbf{I}_{M_{1}}, \text{Tr}\{\mathbf{E}_{3,1}\}\mathbf{I}_{N}\}. \tag{88}\] By substituting \(\mathbf{K}_{2}\) with the above results, \(\tilde{\mathbf{X}}_{2}^{H}(\mathbf{K}_{2}+\mathbf{I}_{n})^{-1}\tilde{\mathbf{X}}_ {2}\) can be derived as follows: \[\tilde{\mathbf{X}}_{2}^{H}(\mathbf{K}_{2}+\mathbf{I}_{n})^{-1}\tilde{\mathbf{X}}_ {2}=\text{blkdiag}\{\mathbf{O},(1+\text{Tr}\{\mathbf{E}_{1,2}\})^{-1}\tilde{ \mathbf{H}}_{2}^{H}\tilde{\mathbf{H}}_{2}\}. \tag{89}\] Denote \(\mathbf{A}_{1}\) as (36), note that \(\mathbf{E}_{1,j}=\mathbf{E}_{3,j}\), then \(\mathbf{E}_{1}\) and \(\mathbf{E}_{3}\) can be derived as follows: \[\mathbf{E}_{1}=\mathbf{E}_{3}=E_{\mathcal{D}_{\mathbf{z}}}\{\begin{pmatrix}(z- \text{Tr}\{\mathbf{E}_{1,2}\})\mathbf{I}_{M_{1}}&-\bar{\mathbf{H}}_{1}\\ -\bar{\mathbf{H}}_{1}^{H}&\mathbf{A}_{1}\end{pmatrix}^{-1}\}. \tag{90}\] Extend (90) by applying the block matrix inverse formula in [27], (33) and (34) can be derived. Similarly, by substituting \(\mathbf{K}_{1}\) and \(\mathbf{K}_{3}\) with the results above, \(\bar{\mathbf{X}}_{2}(\mathbf{B}-\mathbf{K}_{1}-\mathbf{K}_{3}-\bar{\mathbf{X}}_{1})^ {-1}\bar{\mathbf{X}}_{2}^{H}\) can be derived as follows: \[\bar{\mathbf{X}}_{2}(\mathbf{B}-\mathbf{K}_{1}\!-\!\mathbf{K}_{3}-\bar{\mathbf{X}}_ {1})^{-1}\bar{\mathbf{X}}_{2}^{H}=\text{blkdiag}\{\mathbf{O},-\bar{\mathbf{H}}_{2} \mathbf{A}_{2}^{-1}\bar{\mathbf{H}}_{2}^{H}\}. \tag{91}\] Then \(\mathbf{E}_{2}\) can be derived as follows: \[\mathbf{E}_{2}\] \[=E_{\mathcal{D}_{\mathbf{z}}}\{\begin{pmatrix}-\mathbf{I}_{n\!-\!M Similarly, \(\mathbf{K}_{2}\) can be derived as follows: \[\mathbf{K}_{2}=\text{blkdiag}\{\mathbf{O},\text{Tr}\{\mathbf{E}_{1,2}\}\mathbf{I} _{M_{2}},\mathbf{O}_{N-M_{2}}\}. \tag{100}\] \(\mathbf{K}_{3}\) has the same expression with (88). By substituting \(\mathbf{K}_{2}\) with the results above, \(\bar{\mathbf{X}}_{3}^{H}(\mathbf{K}_{2}+\mathbf{I}_{n})^{-1}\bar{\mathbf{X}}_{3}\) can be derived as follows: \[\begin{split}&\bar{\mathbf{X}}_{3}^{H}(\mathbf{K}_{2}+\mathbf{I}_ {n})^{-1}\bar{\mathbf{X}}_{3}\\ &=\text{blkdiag}\{\mathbf{O},(1+\text{Tr}\{\mathbf{E}_{1,2}\})^{ -1}\bar{\mathbf{H}}_{2}^{H}\mathbf{H}_{2}+\epsilon^{2}\mathbf{J}^{H}\mathbf{J }\}.\end{split} \tag{101}\] Denote \(\mathbf{A}_{1}\) as (36), and \(\mathbf{E}_{1}\) and \(\mathbf{E}_{3}\) can be derived as the same expression as (90). Then (48) and (49) can be obtained. 
Similarly, by substituting \(\mathbf{K}_{1}\) and \(\mathbf{K}_{3}\) with the results above, \(\bar{\mathbf{X}}_{3}(\mathbf{B}-\mathbf{K}_{1}-\mathbf{K}_{3}-\bar{\mathbf{ X}}_{1})^{-1}\bar{\mathbf{X}}_{3}^{H}\) can be derived as follows: \[\begin{split}&\bar{\mathbf{X}}_{3}(\mathbf{B}-\mathbf{K}_{1}- \mathbf{K}_{3}-\bar{\mathbf{X}}_{1})^{-1}\bar{\mathbf{X}}_{3}^{H}=\text{blk diag}\{\mathbf{O},-\bar{\mathbf{H}}_{3}\mathbf{A}_{2}^{-1}\bar{\mathbf{H}}_{3}^{H}\}\\ &=\begin{pmatrix}\mathbf{O}&\mathbf{O}&\mathbf{O}\\ \mathbf{O}&\bar{\mathbf{H}}_{2}\mathbf{A}_{2}^{-1}\bar{\mathbf{H}}_{2}^{H}& \epsilon\bar{\mathbf{H}}_{2}\mathbf{A}_{2}^{-1}\mathbf{J}^{H}\\ \mathbf{O}&\epsilon\mathbf{J}\mathbf{A}_{2}^{-1}\bar{\mathbf{H}}_{2}^{H}& \epsilon^{2}\mathbf{J}\mathbf{A}_{2}^{-1}\bar{\mathbf{J}}^{H}\\ \end{pmatrix}.\end{split} \tag{102}\] Then \(\mathbf{E}_{2}\) can be derived as follows: \[\mathbf{E}_{2}=\begin{pmatrix}-\mathbf{I}_{M_{1}}&\mathbf{O}\\ \mathbf{O}&E_{\mathcal{D}_{n}}\{\mathbf{T}^{-1}\}\\ \end{pmatrix}^{-1}, \tag{103}\] where \[\begin{split}&\mathbf{T}=\\ &\begin{pmatrix}\bar{\mathbf{H}}_{2}\mathbf{A}_{2}^{-1}\bar{ \mathbf{H}}_{2}^{H}-(1+\text{Tr}\{\mathbf{E}_{1,2}\})\mathbf{I}_{M_{2}}& \epsilon\bar{\mathbf{H}}_{2}\mathbf{A}_{2}^{-1}\mathbf{J}^{H}\\ &\epsilon\mathbf{J}\mathbf{A}_{2}^{-1}\bar{\mathbf{H}}_{2}^{H}& \epsilon^{2}\mathbf{J}\mathbf{A}_{2}^{-1}\bar{\mathbf{J}}^{H}-\mathbf{I}_{N-M _{2}}\\ \end{pmatrix}.\end{split} \tag{104}\] \(\mathbf{E}_{2,2}^{(1)}\) can be obtained by applying the block matrix inverse formula to (104), and the theorem is proved. ## Appendix E Proof of Theorem 7 Denote \(d\) as a randomly chosen eigenvalue of \(\mathbf{L}\). Then \(G_{\mathbf{L}}(z_{0})\) can be expressed as follows: \[\begin{split} G_{\mathbf{L}}(z_{0})&=E\{(z_{0}-d)^{-1}\}\\ &=\frac{S}{M_{1}}E\{(z_{0}-\omega^{\prime})^{-1}\}+\frac{M_{1}-S}{ M_{1}}E\{(z_{0}-e)^{-1}\},\end{split} \tag{105}\] where \(\omega^{\prime}\) denotes randomly chosen element from \(\{\mu_{1},\mu_{2},...,\mu_{S}\}\), \(e\) denotes the randomly chosen element from \(\{\mu_{S+1},\mu_{S+2},...,\mu_{M_{1}}\}\). From Theorem 5, when \(\epsilon\to 0\), \(\omega^{\prime}\to\omega\ e\to+\infty\). Then the approximated value of \(G_{\mathbf{L}}(z_{0})\) can be known as follows: \[G_{\mathbf{L}}(z_{0})\to\frac{S}{M_{1}}E\{(z_{0}-\omega)^{-1}\}=\frac{S}{M_{1} }G_{\omega}(z_{0}), \tag{106}\] which leads to (54) can be derived, and the theorem is proved. ## Appendix F Proof of Theorem 8 Denote \(a=\frac{l_{1}\rho}{td_{1}^{*}}\), by differentiating (8) with respect to \(a\), we have that \[\begin{split}\frac{d\bar{R}_{1}}{da}&=\int_{0}^{+ \infty}\frac{x}{(a+1)x+1}f_{\omega}(x)dx\\ &=\frac{1}{a+1}+\frac{1}{(a+1)^{2}}G_{\omega}(-(a+1)^{-1}).\end{split} \tag{107}\] Since \(\bar{R}_{1}=0\) when \(a=0\), \(\bar{R}_{1}\) can be expressed as (55). Denote \(b=\frac{\rho}{td_{2}^{*}}\), then by differentiating (9) with respect to \(b\), we have that \[\begin{split}\frac{d\bar{R}_{2}}{db}=&\int_{0}^{+ \infty}\frac{1}{x+1+b}\cdot\frac{l_{2}x+l_{2}}{x+1+l_{1}b}f_{\omega}(x)dx\\ &-\frac{l_{1}l_{2}}{1-l_{1}}\int_{0}^{+\infty}\frac{1}{x+1+l_{1} b}f_{\omega}(x)dx\\ =&-G_{\omega}(-(1+b))+l_{1}G_{\omega}(-(1+l_{1}b)). \end{split} \tag{108}\] Since \(\bar{R}_{2}=0\) when \(b=0\), \(\bar{R}_{2}\) can be expressed as (56). The theorem is proved. 
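To make the integrals of Theorem 8 concrete, the following sketch (ours, not part of the paper) evaluates (56) and (57) by numerical quadrature for a user-supplied Cauchy transform \(G_{\omega}\), and checks both against the closed forms obtained by carrying out the same integration for a point mass \(\omega\equiv x_{0}\), whose Cauchy transform is \(1/(z-x_{0})\); in that case (107) and (108) integrate to \(\log(1+ax_{0}/(1+x_{0}))\) and \(\log\big((1+x_{0}+b)/(1+x_{0}+l_{1}b)\big)\), respectively. The upper limits \(a\) and \(b\) and the coefficient \(l_{1}\) are treated as plain inputs here.

```python
import numpy as np
from scipy.integrate import quad

def rate_1(G_omega, a):
    """R1_bar from (56); a is the upper integration limit."""
    integrand = lambda y: G_omega(-1.0 / (y + 1.0)) / (y + 1.0) ** 2
    val, _ = quad(integrand, 0.0, a)
    return np.log(1.0 + a) + val

def rate_2(G_omega, b, l1):
    """R2_bar from (57); b is the upper integration limit, l1 the power-allocation coefficient."""
    integrand = lambda y: -G_omega(-(1.0 + y)) + l1 * G_omega(-(1.0 + l1 * y))
    val, _ = quad(integrand, 0.0, b)
    return val

if __name__ == "__main__":
    # Point-mass sanity check: omega == x0 with Cauchy transform 1/(z - x0).
    x0, a, b, l1 = 2.0, 3.0, 5.0, 0.3
    G_point = lambda z: 1.0 / (z - x0)
    print(rate_1(G_point, a), np.log(1.0 + a * x0 / (1.0 + x0)))   # both ~ log(3) ~ 1.0986
    print(rate_2(G_point, b, l1), np.log((1.0 + x0 + b) / (1.0 + x0 + l1 * b)))
```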
## Appendix G Proof of Theorem 10 When, \(M_{2}\geq N\), from Theorem 4, \(\int_{a}^{b}G_{\omega}(z)dz\) can be expressed by \(I(a,b)=\int_{a}^{b}G_{\mathbf{L}}(z)dz\) as follows: \[\int_{a}^{b}G_{\omega}(z)dz=\frac{M_{1}}{S}I(a,b)-\frac{M_{1}-S}{S}\ln\frac{b}{ a}. \tag{109}\] Denote \(x=-\frac{1}{y+1}\), (55) can be rewritten as follows: \[\begin{split}\bar{R}_{1}&=\log\left(1+\frac{l_{1} \rho}{td_{1}^{*}}\right)+\int_{-1}^{-\frac{td_{1}^{*}}{t_{1}\rho+td_{1}^{*}}}G_{ \omega}(x)dx\\ &=\log\left(1+\frac{l_{1}\rho}{td_{1}^{*}}\right)+\frac{M_{1}}{S}I (-1,-\frac{td_{1}^{*}}{l_{1}\rho+td_{1}^{*}})\\ &-\frac{M_{1}-S}{S}\ln(\frac{td_{1}^{*}}{l_{1}\rho+td_{1}^{*}}), \end{split} \tag{110}\] and then (71) can be derived. Denote \(y_{1}=-(1+y)\), \(y_{2}=-(1+l_{1}y)\), (56) can be rewritten as follows: \[\begin{split}\bar{R}_{2}&=\int_{-1}^{-1-\frac{\rho}{td_ {2}^{*}}}G_{\omega}(y_{1})dy_{1}-\int_{-1}^{-1-\frac{\rho}{td_{2}^{*}}}G_{ \omega}(y_{2})dy_{2}\\ &=\frac{M_{1}}{S}I(-1,-1-\frac{\rho}{td_{2}^{*}})-\frac{M_{1}-S}{S} \ln(1+\frac{\rho}{td_{2}^{*}})\\ &-\frac{M_{1}}{S}I(-1,-1-\frac{\rho l_{1}}{td_{2}^{*}})+\frac{M_{1}- S}{S}\ln(1+\frac{\rho l_{1}}{td_{2}^{*}}),\end{split} \tag{111}\] and then (72) can be derived. Similarly, when \(M_{2}<N<M_{1}+M_{2}\), from Theorem 7, \(\int_{a}^{b}G_{\omega}(z)dz\) can be expressed as follows: \[\begin{split}\int_{a}^{b}G_{\omega}(z)dz=\frac{M_{1}}{S}I(a,b). \end{split} \tag{112}\] Then (55) and (56) can be rewritten as follows: \[\bar{R}_{1}=log\left(1+\frac{l_{1}\rho}{td_{1}^{*}}\right)+\frac{M_{1}}{S}I(-1,- \frac{td_{1}^{*}}{l_{1}\ ## Appendix H Proof of Theorem 11 When \(M_{2}\geq N\), from [23], the PDF of \(\omega\) can be derived from \[\begin{split} f_{\omega}(x)=&-\lim_{y\to 0^{+}}\frac{1}{ \pi}\text{Im}\{G_{\omega}(x+iy)\}\\ =&-\lim_{y\to 0^{+}}\frac{M_{1}}{\pi S}\text{Im}\{G_{ \mathbf{L}}(x+iy)\}-\frac{M_{1}-S}{S(x+iy)},\end{split} \tag{115}\] where \(\text{Im}\{z\}\) represents the imaginary part of \(z\). It is obvious that \[\lim_{y\to 0^{+}}\text{Im}\{\frac{M_{1}\!-\!N\!+\!2M_{1}z\!+\!M_{2}z\!-\!Nz}{(x +iy)(x+1+iy)}\!-\!\frac{M_{1}\!-\!S}{S(x\!+\!iy)}\}=0. \tag{116}\] When \(z=x+iy\), \(x>0\), \(\lim_{y\to 0^{+}}\text{Im}\{\frac{\sqrt{\Delta(z)}}{(z)(z)}\}\) can be written as follows: \[\begin{split}\lim_{y\to 0^{+}}\text{Im}\{\frac{\sqrt{\Delta(z)}}{ z(z+1)}\}&=\lim_{y\to 0^{+}}\text{Im}\{\frac{\sqrt{\Delta(x+iy)}}{(x+ iy)(x+1+iy)}\}\\ &=\lim_{y\to 0^{+}}\text{Im}\{\frac{\sqrt{|\Delta(x+iy)|}e^{j \frac{\theta}{2}}}{|x+iy||x+1+iy|}\}\\ &=\lim_{y\to 0^{+}}\frac{\sqrt{|\Delta(x)|}}{x(x+1)}\sin(\frac{ \theta}{2}),\end{split} \tag{117}\] where \(\theta\) denotes the angles of \(\Delta(x+iy)\). Extend \(\Delta(x+iy)\) as \[\begin{split}\Delta(x+iy)=&((M_{2}-N)^{2}(x^{2}-y ^{2})+(M_{2}-N)^{2}-2Qx)\\ &+2((M_{1}-N)^{2}x-Q)yi,\end{split} \tag{118}\] then \(\lim_{y\to 0^{+}}\theta\) can be known as follows: \[\lim_{y\to 0^{+}}\theta=\begin{cases}\pi,x_{1}<x<x_{2}\\ 0,x<x_{1}\text{ or }x>x_{2}\end{cases}, \tag{119}\] where \(x_{1}\) and \(x_{2}\) are roots of \((M_{2}-N)^{2}x^{2}+(M_{1}-N)^{2}-2Qx=0\), which can be derived as (76). Then \(f_{\omega}(x)\) can be derived as follows: \[f_{\omega}(x)=\begin{cases}\frac{\sqrt{-\Delta(x)}}{2\pi Sx(x+1)},x_{1}<x<x_{ 2}\\ 0,\text{otherwise}\end{cases}. \tag{120}\] When \(M_{2}<N<M_{1}+M_{2}\), \(f_{\omega}(x)\) can be written as follows: \[f_{\omega}(x)=-\lim_{y\to 0^{+}}\frac{M_{1}}{\pi S}\text{Im}\{G_{\mathbf{L}}(x +iy)\}, \tag{121}\] which is equivalent to the case of \(M_{2}\geq N\). 
Thus the expression of the PDF is equal to (120). The theorem is now proved.
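As a small numerical illustration of Theorem 11 (ours, not part of the paper), the sketch below evaluates the support endpoints (76) and confirms that they are the zeros of the expression under the square root in (75), which equals \(-\Delta(x)\) from (64), and then evaluates the density at a few interior points. The value of \(S\) is taken as a plain input, as it is defined earlier in the paper and not reproduced here.

```python
import numpy as np

def radicand(x, M1, M2, N):
    """Expression under the square root in (75): 2*Q*x - (M2-N)^2*x^2 - (M1-N)^2."""
    Q = N * M1 + N * M2 + M1 * M2 - N ** 2            # (65)
    return 2 * Q * x - (M2 - N) ** 2 * x ** 2 - (M1 - N) ** 2

def f_omega(x, M1, M2, N, S):
    """Density of Theorem 11, Eq. (75); valid on the support (x1, x2)."""
    return np.sqrt(np.maximum(radicand(x, M1, M2, N), 0.0)) / (2 * np.pi * S * x * (x + 1))

if __name__ == "__main__":
    M1, M2, N, S = 36, 48, 30, 30                     # example dimensions; S is an assumed input
    Q = N * M1 + N * M2 + M1 * M2 - N ** 2
    root = 2 * np.sqrt(M1 * M2 * (N * M1 + N * M2 - N ** 2))
    x1, x2 = (Q - root) / (M2 - N) ** 2, (Q + root) / (M2 - N) ** 2   # (76)
    print(radicand(x1, M1, M2, N), radicand(x2, M1, M2, N))           # both ~ 0
    xs = np.linspace(x1, x2, 7)[1:-1]
    print(f_omega(xs, M1, M2, N, S))                                  # positive inside the support
```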
In recent years, multiple-input multiple-output (MIMO) non-orthogonal multiple access (NOMA) systems have attracted considerable attention in the research community. As a promising precoding scheme, the generalized singular value decomposition (GSVD) has been applied to MIMO-NOMA systems and shown to provide high spectral efficiency. In this paper, the performance of GSVD-based MIMO-NOMA communications is investigated under Rician fading. In particular, the distribution of the generalized singular values (GSVs) of the channel matrices is analyzed. To derive the Cauchy transform of the GSVs, two mathematical tools, the linearization trick and the deterministic equivalents method, are applied; these tools are based on operator-valued free probability theory. An iterative process is proposed to compute the numerical values of the Cauchy transform of the GSVs.
2309.12457
Time-Reversal Symmetry Breaking Superconductivity in CaSb$_2$
CaSb$_2$ is a bulk superconductor and a topological semimetal, making it a great platform for realizing topological superconductivity. In this work, we investigate the superconducting upper and lower critical field anisotropy using magnetic susceptibility, and study the superconducting state using muon spin-relaxation. The temperature dependence of transverse-field relaxation rate can be fitted with a single-gap model or two-gap model. Zero-field relaxation shows little temperature dependence when the muon-spin is parallel to the $c*$-axis, while an increase in relaxation appears below 1 K when the muon-spin is parallel to the $ab$-plane. We conclude an $s+is$ order parameter considering the breaking of time-reversal symmetry (TRS), which originates from competing interband interactions between the three bands of CaSb$_2$. To explain the direction-dependent breaking of TRS we suggest loop currents developing in the plane of distorted square-net of Sb atoms.
M. Oudah, Y. Cai, M. V. De Toro Sanchez, J. Bannies, M. C. Aronson, K. M. Kojima, D. A. Bonn
2023-09-21T20:00:23
http://arxiv.org/abs/2309.12457v3
Critical Field Anisotropy and Muon Spin Relaxation Study of Superconducting Dirac-Semimetal CaSb\({}_{2}\) ###### Abstract CaSb\({}_{2}\) has been identified as a bulk superconductor and a topological semimetal, which makes it a great platform for realizing topological superconductivity. In this work, we investigate the superconducting upper and lower critical field anisotropy using magnetic susceptibility, and study the superconducting state using muon spin-relaxation. The temperature dependence of transverse-field relaxation can be fitted with a single-gap model or two-gap model, consistent with previous tunnel-diode oscillator measurements. We highlight that the normal state of CaSb\({}_{2}\) shows a large diamagnetic signal, which is likely related to its Dirac semimetal nature. Zero-field relaxation shows little temperature dependence when the muon-spin is parallel to the \(c\)-axis, while an increase in relaxation appears below 1 K when the muon-spin is parallel to the \(ab\)-plane. This may be related to a second superconducting phase appearing at low temperature below the bulk \(T_{c}\). However, we find no discernible anomaly in \(\mu_{0}H_{\rm c1}(0)\) around this temperature as has been seen in other superconductors with secondary superconducting states that appear at lower temperatures. ## I Introduction When a material enters the superconducting state it breaks U(1) gauge symmetry, and breaking of any additional symmetries is typically an indication of unconventional superconductivity [1]. In some unconventional superconductors, time-reversal symmetry (TRS) is broken as the material enters the superconducting state as proven by detection of spontaneous magnetic fields below the onset of superconductivity. This spontaneous magnetic field has been detected in zero-field muon relaxation measurements in unconventional superconductors such as UPt\({}_{3}\)[2] and Sr\({}_{2}\)RuO\({}_{4}\)[3; 4]. Spontaneous magnetic fields can emerge in non-centrosymmetric superconductors, where the lack of inversion symmetry results in a spin-split Fermi surface due to antisymmetric spin-orbit coupling (SOC) and a singlet-triplet mixing in the superconducting state [5], such as LaNiC\({}_{2}\)[6], Re\({}_{2}\)Zr [7], and La\({}_{7}\)Ir\({}_{3}\)[8]. This breaking of TRS can even appear in centrosymmetric multi-gap superconductors, for specific compositions, such as the case in FeSe [9], Fe(Se,Te) [10] and Ba\({}_{1-x}\)K\({}_{x}\)Fe\({}_{2}\)As\({}_{2}\)[11; 12] and single \(s\)-wave gap locally non-centrosymmetric SrPtAs and CaPtAs with strong SOC [13; 14]. In PrOs\({}_{4}\)Sb\({}_{12}\) TRS breaking appears below \(T_{\rm c}\) and is discussed in relation to nonmagnetic quadrupolar fluctuations in the normal state [15]. In the Sr-doped topological insulator Bi\({}_{2}\)Se\({}_{3}\), TRS breaking has been discussed in relation to the anisotropic Dirac cone dispersion in the normal state band structure of doped Bi\({}_{2}\)Se\({}_{3}\) allowing for triplet pairing [16]. In all of the above materials, the onset of TRS breaking coincides with \(T_{\rm c}\) under ambient conditions and none are reported to break TRS in manner such that it can only be detected when muon spins are in a specific direction. CaSb\({}_{2}\) belongs to the family of non-symmorphic antimonides \(M\)Sb\({}_{2}\) (\(M\)= Alkaline-Earth, Rare-Earth) containing screw rotation symmetry. 
CaSb\({}_{2}\) is a compensated semimetal [17; 18] and the calculated Fermi-surface demonstrating compensation supports CaSb\({}_{2}\) being a topological nodal-line semimetal [17; 19] due to the non-symmorphic space group 11, \(P2_{1}/m\)[20]. The compensated semimetal state is related to three bands crossing the Fermi level, two electron-like bands dominated by contributions from the Sb site forming a distorted square-net and a hole-like band dominated by contributions from the other Sb site forming a zig-zag chain along the \(b\)-direction [17]. Superconductivity was discovered recently in polycrystalline samples [21], and further confirmed in single crystal samples [17]. Recently, the anisotropy of the upper critical field of CaSb\({}_{2}\) based on resistivity measurements and the lower critical field estimate based on magnetization at 0.55 K have been reported [22]. The specific heat transition in single crystal samples suggests deviation from a single \(s\)-wave gap, with the possibility of multiple gaps in the superconduct ing state [17]. A coherence peak was observed near \(T_{\rm c}\) in Sb-nuclear quadrupole resonance (NQR) suggesting \(s\)-wave superconductivity [23], at least near \(T_{\rm c}\). Using a tunnel diode oscillator (TDO), the temperature dependence of the penetration depth in CaSb\({}_{2}\) reveals the presence of multiple gaps and exponential behaviour at low temperature that indicates nodeless superconductivity [24]. These reports warrant further investigations of the anisotropy of the superconducting state of CaSb\({}_{2}\) and studying the superconductivity using a local probe such as muon spin rotation/relaxation (\(\mu\)SR). Here we report the critical field anisotropy from magnetic susceptibility, estimate the penetration depth and the temperature dependence of the superconducting gap from transverse-field (TF) muon data, and find evidence for TRS breaking below \(T_{\rm c}\) in zero-field (ZF) \(\mu\)SR only when muons spin lies in the \(ab\)-plane, suggesting a spontaneous field perpendicular to the \(ab\)-plane emerges in CaSb\({}_{2}\). The breaking of TRS appears below \(\sim 1\) K, which is well below the \(T_{\rm c}\) of 1.6 K. The observation of TRS breaking despite the lack of magnetism or strong spin-orbit-coupling in CaSb\({}_{2}\) is intriguing, and may be related to the topological nature of the superconducting state reflecting the topologically non-trivial normal state band structure. ## II Methods Single crystals of CaSb\({}_{2}\) were grown using an Sb self flux method as described in Ref. [17]. This yields shiny, plate-like crystals with dimensions up to \(3\times 3\times 0.5\,\rm{mm}^{3}\) that are stable in air for several weeks. Phase purity and orientation of the crystals were checked by X-ray diffraction (XRD) using a Bruker D8 with Cu K\(\alpha_{1}\) radiation (1.54056 A). We confirmed bulk superconductivity in our samples using magnetization and specific heat measurements, which exhibited superconducting transitions at \(1.6\pm 0.02\) K. Measurements of the magnetic susceptibility were done using a Magnetic Property Measurements System 3 (MPMS3) from Quantum Design, equipped with a \({}^{3}\)He insert. Muon spectroscopy measurements were performed at the M15 beamline at TRIUMF's Centre for Molecular and Material Science, which is equipped with a dilution refrigerator. 
Multiple crystals were mounted on a silver plate with GE varnish, then the silver plate was attached to a cold finger utilizing copper filled grease for good thermal conductivity in the dilution refrigerator. We further secured the samples with a thin silver foil over the samples before mounting them into the dilution refrigerator. We achieved true-zero field for zero-field (ZF) \(\mu\)SR using the method previously described by Morris and Heffner [25] to accurately detect any spontaneous fields. We used non-spin-rotated mode for the ZF-\(\mu\)SR measurements and spin-rotated mode for the TF-\(\mu\)SR measurements. In spin-rotated mode the muon spins are rotated perpendicular to the beam velocity, spin lying in the \(ab\)-plane, before landing in the sample, and the field is applied to the sample along the beam direction, perpendicular to the \(ab\)-plane. The \(\mu\)SR data were analyzed with musrfit software [26] to obtain physical parameters. ## III Results and Discussion ### Critical Field Anisotropy Measurements of the anisotropy of the field-dependent dc magnetic susceptibility around the superconducting transition were performed down to 0.4 K, as shown in Fig. 1(a) and (b). The temperature dependent magnetic susceptibility was measured for fields applied parallel or perpendicular to the plate of the crystal, where the \(ab\)-plane lies in the plane of plate-like crystals. We define \(c*\) as the direction perpendicular to the \(ab\)-plane for this monoclinic crystal structure. In both cases, the transition temperature was defined as the 5% volume drop in the volume susceptibility where the 100% volume was defined as the signal in the lowest field measured (1 mT). Figure 1: Temperature dependence of the dc susceptibility measured with different applied fields using a zero-field-cooling procedure measured with \(H//c*\) and \(H//ab\) shown in (a) and (b), respectively. Data represented as volume susceptibility, \(\chi_{\rm V}\), and data in (b) are from Ref [17]. (c) The temperature dependence of the upper critical field \(\mu_{0}H_{c2}\) determined from \(\chi_{\rm V}\) with \(H//c*\) and \(H//ab\), from resistivity [17], and from TF-\(\mu\)SR with \(H//c*\). Solid lines are WHH fits described in the text to the data used to estimate \(\mu_{0}H_{c2}(0)\). The demagnetization correction based on the Brandt formula [27] yields a 100% volume fraction for the case of \(H//ab\), but the same correction results in a 30% volume fraction in the case of \(H//c*\). This is due to our plate-like crystals not being perfect rectangular slabs, as assumed in the calculation of the demagnetization factor. For the \(H//c*\) direction, we take the 5% volume drop relative the signal measured in the lowest field being the 100% volume fraction, as shown in Fig. 1(a). The upper critical field was estimated using the Werthamer-Helfand-Hohenberg (WHH) relation [28] for the data as shown with the lines in Fig. 1(c). The temperature dependence of \(H_{c2}^{ab}\) is typical of a type-II superconductor, as the transition moves to lower and lower field values upon increasing the temperature. Similar behavior is observed for measurements with the other field orientation \(H_{c2}^{c}\). The estimated upper critical fields are \(H_{c2}^{c}=8.1\) mT and \(H_{c2}^{ab}=24.7\) mT, thus yielding an anisotropy ratio \(\gamma_{\rm anisotropy}=H_{c2}^{ab}/H_{c2}^{c}\) in CaSb\({}_{2}\) of about 3.1. 
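As context for the \(T_{\rm c}\) criterion and the WHH fits referred to in Fig. 1, the following is a minimal sketch (ours, with made-up numbers, not the analysis applied to the data above) of the two steps: picking \(T_{\rm c}(H)\) from the 5% volume-drop criterion, and extrapolating \(\mu_{0}H_{\rm c2}(0)\) from the slope near \(T_{\rm c}\) using the common simplified single-band WHH estimate \(H_{\rm c2}(0)\approx 0.69\,T_{\rm c}\,|dH_{\rm c2}/dT|_{T_{\rm c}}\); the fits shown in Fig. 1(c) follow the WHH relation of Ref. [28].

```python
import numpy as np

def tc_from_chi(T, chi_v, chi_full):
    """Temperature of the 5% volume drop relative to full shielding (chi_full < 0)."""
    threshold = 0.05 * chi_full          # 5% of the full diamagnetic signal
    mask = chi_v <= threshold            # points that are at least 5% shielded
    return T[mask].max() if mask.any() else np.nan

def whh_hc2_zero(tc_values, fields_mT):
    """Simplified WHH estimate from the (assumed near-linear) slope of Hc2(T) near Tc."""
    slope, intercept = np.polyfit(tc_values, fields_mT, 1)   # mT per K
    tc0 = -intercept / slope                                  # extrapolated zero-field Tc
    return 0.69 * tc0 * abs(slope), tc0

if __name__ == "__main__":
    # Illustrative numbers only, of the same order as the text (Tc ~ 1.6 K, Hc2 ~ 10 mT).
    fields_mT = np.array([1.0, 2.0, 4.0, 6.0])
    tcs = np.array([1.55, 1.35, 0.95, 0.55])
    hc2_0, tc0 = whh_hc2_zero(tcs, fields_mT)
    print(f"estimated mu0*Hc2(0) ~ {hc2_0:.1f} mT, Tc(0) ~ {tc0:.2f} K")
```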
We estimate the coherence length based on the upper critical field, and obtain 202 nm and 66 nm for \(\xi_{\rm GL,ab}\) and \(\xi_{\rm GL,c}\), respectively. We note that deviation from WHH relation near \(T_{\rm c}\), where \(\mu_{0}H_{\rm c2}(0)\) increases slowly with decreasing temperature, is consistent with nodal-line \(s\)-wave superconductivity [29]. This warrants further investigation in future studies. To estimate the lower critical field \(\mu_{0}H_{\rm c1}(0)\) we measured the field-dependent magnetization (\(M\)) at different temperatures below the critical temperature in both directions, as shown in Fig. 2(a) and (d). We use a linear fit to the low-field data measured at 0.4 K in each direction, and subtract this fit from all the measured curves for measurements in each direction, as demonstrated in the inset of Fig. 2(a). The value of the uncorrected \(\mu_{0}H_{\rm c1}^{*}(0)\) at each temperature was determined based on the intersect of a linear fit to the upturn of this subtracted data, \(M-M_{fit}\), and the horizontal line where \(M-M_{fit}=0\) marked in the inset of Fig. 2(a) for 0.4 K and 1.5 K. The \(\mu_{0}H_{\rm c1}^{*}(0)\) plotted against the corresponding temperature for the \(H//c*\) and \(H//ab\) direction are shown in Fig. 2(b) and (e), and the same data plotted against temperature squared are shown in Fig. 2(c) and (f). We note that the temperature dependence of \(\mu_{0}H_{\rm c1}^{*}(0)\) is fitted well by the equation [30] \[\mu_{0}H_{c1}^{*}(T)=\mu_{0}H_{c1}^{*}(0)\left[1-\left(\frac{T}{T_{c}}\right) ^{2}\right] \tag{1}\] where \(\mu_{0}H_{\rm c1}^{*}(0)\) is the lower critical field at 0 K, where fits are shown with green line in Fig. 2(c) and (f). Although the data fit the equation above well, we attempt two independent fits to the temperature regions above and below 1 K in search of any anomaly that can be reconciled with our zero-field \(\mu\)SR. Enhancement of the lower critical field was observed in UPt\({}_{3}\)[31] and PrOs\({}_{4}\)Sb\({}_{12}\)[32], and this was related to the emergence of a second super Figure 2: The magnetization (\(M\)) as a function of applied field \(\mu_{0}H\) measured at different temperatures \(T\) below \(T_{c}\) with field applied parallel to \(c*\) direction and \(ab\)-plane shown in (a) and (d), respectively. A degaussing procedure was carried out between measurements, and a linear fit was applied to the low field region of the 0.4 K data. Lower critical field \(\mu_{0}H_{\rm c1}^{*}(0)\) as a function of temperature (\(T\)) for \(H//c*\) and \(H//ab\), shown in (b) and (e), estimated using the magnetization data in (a) and (d) by subtracting the linear fit to 0.4 K data from all the curves, shown in inset of (a). Linear fit is applied to the upturn data and the intersect is defined as \(\mu_{0}H_{\rm c1}^{*}(0)\). (c) and (f) show \(\mu_{0}H_{\rm c1}^{*}(0)\) as a function of \(T^{2}\) and fits to the data \(<1\) K (dashed blue line), \(>1\) K (dashed red line), and over the entire temperature range are shown (solid green line). conducting phase at low temperature. We discuss this further below in Sec. III.3. The typical equations used for estimating the penetration depth (\(\lambda\)) from \(\mu_{0}H_{\mathrm{c1}}(0)\) do not have a solution for the values measured in our experiment. 
Instead, we estimate the \(\mu_{0}H_{\mathrm{c}}(0)\) from \(\mu_{0}H_{\mathrm{c1}}(0)\) and \(\mu_{0}H_{\mathrm{c2}}(0)\) with the following equation [33]: \[\mu_{0}H_{c}=\sqrt{\mu_{0}H_{\mathrm{c1}}\times\mu_{0}H_{\mathrm{c2}}} \tag{2}\] Here we assume the \(\mu_{0}H_{\mathrm{c1}}(0)\) values for \(H//c*\) and \(H//ab\) are equivalent to the \(\mu_{0}H_{\mathrm{c1}}^{*}(0)\) measured, without applying any demagnetization correction. This is due to the difficulty of calculating the demagnetization correction for the \(H//c*\). The value of thermodynamic critical field \(\mu_{0}H_{\mathrm{c}}(0)\) is expected to be equivalent for both directions, but does not match for the current measurements on our samples. We take the average in both directions and estimate \(\mu_{0}H_{\mathrm{c}}(0)=6.0\pm 0.5\) mT, as shown in Table 1, which is consistent with the value previously reported using the integral of \(M(H)\) curve [22]. Despite this consistency of \(\mu_{0}H_{\mathrm{c}}(0)\), we suspect that the currently measured lower critical field values are inaccurate and should be the subject of future studies. We estimate the Ginzburg-Landau parameter for each direction with the following equation: \[\mu_{0}H_{c2}=\sqrt{2}\kappa_{GL}\mu_{0}H_{c} \tag{3}\] where \(\kappa_{GL}\) is the Ginzburg-Landauer parameter. We summarize our characterization of the superconducting state based on magnetic susceptibility measurements in Table 1. We estimate the upper critical field \(H//c*\) extracted from the 50% drop in resistivity measurement, previously published by some of the current authors [17], to be about \(11.5\pm 0.8\) mT in Fig. 1. This value is different from that reported recently by another group [22], which may be due to different sample quality or due to different current densities in the resistivity measurements. Nevertheless, the anisotropy of the upper critical field of \(\sim 3\) from the resistivity measurements reported [22] is consistent with our anisotropy estimates based on magnetic susceptibility measurements. Here we note that for \(H//c*\), it seems that the superconducting state of CaSb\({}_{2}\) is at the border of type-I/type-II regime. This poses challenges for our muon measurement as highlighted below. ### Transverse-Field Muon Study We perform transverse field \(\mu\)SR measurements on CaSb\({}_{2}\) with applied magnetic field perpendicular to the \(ab\)-plane, which can be used to determine the temperature dependence of the penetration depth. With the temperature dependence of the penetration depth we can infer properties of the energy gap of the superconducting state of CaSb\({}_{2}\). For these measurements we extract the field from the precession of muons inside the background silver, which is the most precise way of measuring the applied field. We performed the measurement with applied fields ranging from \(1.3-11.5\) mT, and all measurements were performed on samples cooled in the applied field. We employed a beam of muons polarized such that the spin is perpendicular to their momentum, while the applied field is parallel to the momentum. The spin of the muon precesses with a frequency proportional to the local magnetic field, and, upon decay, emits a positron preferentially in the muon spin direction. Typically with a field that is above the lower critical field, we expect a well ordered flux line lattice when using a field cooling procedure. Typical time evolution of the asymmetry for CaSb\({}_{2}\) is shown in Fig. 
3(a), measured in 5.8 mT at 2.00 K and 0.03 K, above and below \(T_{\mathrm{c}}\) respectively. In the mixed state, we have an inhomogenous field distribution due to the presence of a flux line lattice (FLL), which results in a decay of the precession signal as a function of time. We fit the asymmetry spectra using a two term sinusoidal decaying function \[G_{\mathrm{TF}}(t) =A\left[F\exp\left(\frac{-\sigma^{2}t^{2}}{2}\right)\cos\left( \omega_{1}t+\phi\right)\right.\] \[\left.+(1-F)\exp(-\psi t)\cos\left(\omega_{2}t+\phi\right)\right]\] where the first term captures the signal from muons stopping in the sample and the second term captures the signal from muons stopping in the silver sample holder. \(F\) is the fraction of the signal coming from the sample, while \(\omega_{1}\) and \(\omega_{2}\) are the muon precession frequencies in the sample and the background, respectively. The \(A\) term is the total asymmetry and the \(\phi\) is the initial phase of the muons. \(\sigma\) and \(\psi\) are the depolarization rates for the sample and the background signals, respectively. The \(\sigma\) term contains a contribution from the field distribution caused by the vortex lattice in the superconducting state (\(\sigma_{sc}\)) and the smaller, temperature independent, contribution from randomly oriented nuclear dipole moments (\(\sigma_{N}\)). These two signals are added in quadrature, such that the contribution from the FLL can be obtained as \(\sigma_{sc}=\sqrt{\sigma^{2}-\sigma_{\mathrm{N}}^{2}}\). The superconducting relaxation rate (\(\sigma_{sc}\) \begin{table} \begin{tabular}{|c c c|} \hline **Direction** & \(ab\) & \(c*\) \\ \hline \(\mu_{0}H_{\mathrm{c1}}(0)\) & \(2.1\pm 0.4\) mT & \(2.8\pm 0.4\) \\ \(\mu_{0}H_{\mathrm{c2}}(0)\) & \(24.7\pm 0.8\) mT & \(8.1\pm 0.8\) mT \\ \(\mu_{0}H_{\mathrm{c}}(0)\) & \(6.0\pm 0.5\) mT \\ \(\xi_{\mathrm{GL}}\) & 202 nm & 66 nm \\ \(\kappa_{\mathrm{GL}}\) & \(2.90\pm 0.26\) & \(0.95\pm 0.12\) \\ \hline \end{tabular} \end{table} Table 1: Superconducting parameters derived from our measurements of CaSb\({}_{2}\). \(\mu_{0}H_{\mathrm{c1}}(0)\) is the average of that calculated for each direction based on Eq. 2. indicates the mean square inhomogeniety in the field experiend by muons, \(\left\langle\left(\Delta B\right)^{2}\right\rangle,\) due to the FLL [34], where \(\left\langle\left(\Delta B\right)^{2}\right\rangle=\left\langle\left(B-\left\langle B \right\rangle\right)^{2}\right\rangle\), which results in the relaxation rate for the FLL \[\sigma_{sc}^{2}=\gamma_{\mu}^{2}\left\langle\left(\Delta B\right)^{2}\right\rangle\] where \(\gamma_{\mu}(=2\pi\times 135.5\)MHz/T) is the muon gyromagnetic ratio. The Fourier power against internal magnetic field, shown in Fig. 3(b), shows a large peak corresponding to \(\omega_{2}\) of the silver sample holder. The relaxation rate \(\sigma\) as a function of temperature extracted from TF-\(\mu\)SR in various fields for CaSb\({}_{2}\) is plotted in Fig. 3(c). We extract \(T_{\text{c}}\) from the TF-\(\mu\)SR in various fields, where we define \(T_{\text{c}}\) as the intersection of the line of best fit with sharpest slope in the transition seen in \(\sigma\) and the normal state nuclear contribution \(\sigma_{\text{N}}\sim 0.210\). We calculate the expected nuclear magnetic moment by first estimating the muon stopping site based on the Hartree potential, where we find the preferred muon stopping site is (0.65,0.75,0.15). 
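The following is a minimal, self-contained sketch (ours, not the musrfit analysis used for the data above) of the two-component transverse-field fit function \(G_{\rm TF}(t)\) defined earlier in this section and of the quadrature subtraction \(\sigma_{sc}=\sqrt{\sigma^{2}-\sigma_{\rm N}^{2}}\), applied to synthetic asymmetry data; all parameter values are illustrative only, of the same order as those quoted in the text.

```python
import numpy as np
from scipy.optimize import curve_fit

def g_tf(t, A, F, sigma, omega1, psi, omega2, phi):
    """Gaussian-damped sample precession plus exponentially damped silver-backing precession."""
    sample = F * np.exp(-0.5 * (sigma * t) ** 2) * np.cos(omega1 * t + phi)
    backing = (1.0 - F) * np.exp(-psi * t) * np.cos(omega2 * t + phi)
    return A * (sample + backing)

def sigma_sc(sigma_total, sigma_nuclear=0.210):
    """FLL contribution sigma_sc = sqrt(sigma^2 - sigma_N^2), in inverse microseconds."""
    return np.sqrt(np.maximum(sigma_total ** 2 - sigma_nuclear ** 2, 0.0))

if __name__ == "__main__":
    gamma_mu = 2 * np.pi * 135.5        # gamma_mu/2pi = 135.5 MHz/T, so omega = gamma_mu*B in rad/us
    B_applied = 5.8e-3                  # T, as in the measurement above
    t = np.linspace(0, 8, 400)          # microseconds
    # synthetic data: sample frequency slightly shifted from the applied-field (silver) frequency
    true = (0.25, 0.7, 0.45, gamma_mu * 0.95 * B_applied, 0.05, gamma_mu * B_applied, 0.1)
    rng = np.random.default_rng(0)
    y = g_tf(t, *true) + rng.normal(0, 0.003, t.size)
    popt, _ = curve_fit(g_tf, t, y, p0=(0.2, 0.5, 0.3, 4.5, 0.1, 4.9, 0.0))
    print("fitted sigma =", popt[2], "-> sigma_sc =", sigma_sc(popt[2]))
```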
Based on the magnetic active nuclei, only Sb in our case, we find an expected nuclear dipolar field of 3.5323 \(\mu\)N. This corresponds to a \(\sigma_{\text{N,calc}}\sim 0.210\)\(\mu\)s, in agreement with the value measured experimentally, as shown in Fig. 3(c). The applied magnetic fields, as extracted from the precession of muons in the Ag sample holder \(\omega_{2}\), are plotted against \(T_{\text{c}}\) in Fig. 1(c), and we fit the WHH relation to obtain the \(\mu_{\text{\tiny{H}}}\)= 5.8 mT upper critical field from TF-\(\mu\)SR as \(10.5\pm 0.4\) mT. This \(\mu_{0}H_{\text{c2}}(0)\) value is consistent with estimates based on 50% drop in resistivity and those extracted from magnetization measurement with field applied perpendicular to the \(ab\)-plane. From the field dependence of \(\sigma\) measured well below \(T_{\text{c}}\) at \(25\pm 5\) m, shown in Fig. 4(b), we find a peak at low fields. The FLL state is only realized at fields well above this peak region, where ideally in strong type-II superconductors we expect a relatively weak field dependence above this peak and below the upper critical field. Since CaSb\({}_{2}\) is barely type-II, we do not have a wide range of weak field dependence, but nevertheless choose 5.8 mT, which is well above the peak position, as a field representing the highest likelihood of realizing a homogeneous FLL state. For fields approaching the upper critical field, \(\left[H/H_{c2}=0.5\right]\), the penetration depth can be calculated from the relaxation rate using the Brandt formula [35] for a triangular Abrikosov vortex lattice: \[\sigma_{\text{sc}}(T)=\frac{0.0609\times\gamma_{\mu}\phi_{0}}{\lambda^{2}(T)}\] where \(\sigma_{\text{sc}}(T)\) is in \(\mu\)s\({}^{-1}\) and \(\lambda(T)\) is in nm. \(\phi_{0}\) is the magnetic flux quantum, \(\left(2.067\times 10^{-15}\text{ Wb}\right)\). We can relate the temperature dependence of the relaxation rate to the penetration depth with \[\frac{\sigma_{sc}(T)}{\sigma_{sc}(0)}=\frac{\lambda^{-2}(T)}{\lambda^{-2}(0)}.\] The temperature dependence of the energy gap \(\Delta(T,\hat{k})\) within BCS theory [36] is given by: \[\Delta(T,\hat{k})=\Delta(0)\tanh\left\{1.82\left(1.018\left(\frac{T_{c}}{T}-1 \right)\right)^{0.51}\right\}g_{\hat{k}}\] where \(\Delta(0)\) is the gap magnitude at zero temperature, and the \(g_{\hat{k}}\) term accounts for the orientation (\(\hat{k}\)) dependence of the gap function, which can, for example, be substituted with 1 for an \(s\)-wave model and \(|\cos(2\phi)|\) for a \(d\)-wave model, where \(\phi\) is the azimuthal angle. CaSb\({}_{2}\) has a coherence length over normal state mean free path of about 1.78 [17], which places it at the border between clean and dirty limit. The temperature dependence of the superconducting gap can be obtained from the temperature dependence of the penetration depth in the clean limit with the relation Figure 3: Representative TF-\(\mu\)SR signals collected above and below \(T_{\text{c}}\) in CaSb\({}_{2}\) under an applied magnetic field of 5.8 mT. The solid lines are fits using the sinusoidal decaying function described in the text. (b) The Fourier transform of the M15 \(\mu\)SR Asymmetry for measurements in different applied magnetic fields at \(\sim\) 30 mK, representing the field distribution of the local field probed by muons. The sharp peaks in the data indicate the applied field experienced by muons stopping in the silver cold finger. 
The broad features at lower/higher internal fields represents the muons stopping in the CaSb\({}_{2}\) sample. (c) The muon Gaussian relaxation rate \(\sigma\) as a function of temperature in different applied magnetic fields. \[\frac{\lambda^{-2}(T)}{\lambda^{-2}(0)}=1+2\left\langle\int_{|\Delta(T,\hat{k})|}^ {\infty}\left(\frac{\delta f}{\delta E}\right)\frac{EdE}{\sqrt{E^{2}-\Delta^{2}( T,\hat{k})}}\right\rangle\] while in the dirty limit with the relation \[\frac{\lambda^{-2}(T,\hat{k})}{\lambda^{-2}(0)}=\left\langle\frac{\Delta(T, \hat{k})}{\Delta(0)}\tanh\left[\frac{\Delta(T,\hat{k})}{2k_{B}T}\right]\right\rangle\] where \(f=\left[1+\exp\left(E/k_{B}T\right)\right]^{-1}\) is the Fermi function, and quantities in brackets are the average over the Fermi surface. Considering previous specific heat measurements showing deviation from single-gap BCS [17] and that tunnel diode oscillator measurements on CaSb\({}_{2}\) are better fitted with a two-gap model, we utilized a two-gap model fit for our muon data where the total depolarization is expressed as the sum of two components: \[\frac{\sigma_{FLL}^{-2}(T)}{\sigma_{FLL}^{-2}(0)}=x\frac{\sigma_{FLL}^{-2} \left(T,\Delta_{0,1}\right)}{\sigma_{FLL}^{-2}\left(0,\Delta_{0,1}\right)}+(1 -x)\frac{\sigma_{FLL}^{-2}\left(T,\Delta_{0,2}\right)}{\sigma_{FLL}^{-2}\left( 0,\Delta_{0,2}\right)}\] where \(\Delta_{0,1}\) and \(\Delta_{0,2}\) are the gap values at zero temperature and \(x\) is the fraction of gap-1 over the sum of two gaps. We fit the gap using an \(s+s\) wave model as shown in Fig. 4(a). Assuming the zero-field tunnel diode oscillator measurement performed on CaSb\({}_{2}\) is representative of our samples, supported by the similarity of the specific heat data reported in the same paper [24] with our specific heat measurement on our samples [17], we accept the presence of two gaps in zero field in CaSb\({}_{2}\). Two gaps with the same value as reported (\(\Delta_{1}(0)/k_{B}T_{\rm c}\)= 1.8 and \(\Delta_{2}(0)/k_{B}T_{\rm c}\)= 0.81) [24] and single gap (\(\Delta(0)/k_{B}T_{\rm c}\)= 1.59) fit are shown in Fig. 4(a) in red and black, respectively. We note that for data measured in 5.8 mT a single \(s\)-wave gap is sufficient to fit the data, and the two-gap model does not significantly improve the fit. We note that the evolution of two gaps with applied magnetic fields has been demonstrated in measurements on NbSe\({}_{2}\)[37], and a similar evolution of the two gaps with applied magnetic field may appear in other multi-gap superconductors, such as CaSb\({}_{2}\). However, such a conclusion cannot be drawn from the current \(\mu\)SR data due to the large error bars. The strong magnetic field dependence of (\(\sigma_{sc}\)) in CaSb\({}_{2}\), shown in Fig. 4(b), is associated with the low Figure 4: (a) The temperature dependence of superconducting contribution to the relaxation rate \(\sigma_{\rm SC}\) (green symbols), and the fits with a single \(s\)-wave gap and two \(s\)-wave gaps in black and red, respectively. (b) The muon Gaussian relaxation rate \(\sigma\) as a function of applied magnetic field at base temperature (\(\sim 25\pm 5\) mK) and the average at low temperature, below \(0.25T_{\rm c}\) for each curve in Fig. 3(c). Figure 5: Uemura plot showing the superconducting transition temperature \(T_{\rm c}\) vs the Fermi temperature \(T_{\rm F}\), where CaSb\({}_{2}\) is shown as blue triangle assuming a 2D and 3D nature of charge carriers. 
The Bose-Einstein condensation temperature is shown as the \(T_{\rm B}\) line and the Fermi temperature as the \(T_{\rm F}\) line. Data for materials in literature are also plotted [38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49]. We highlight the region of unconventional superconductors in grey at the top left region and conventional superconductors in the bottom right region. \(\mu_{0}H_{c2}(0)\) compared with the applied fields, and may be related to the faster suppression of smaller gap to the superfluid density. Such behavior has been discussed in other two-gap superconductors such as NbSe\({}_{2}\)[50], MgB\({}_{2}\)[51], and SmFeAsO\({}_{0.85}\)[52]. We notice a peak in the low-temperature field-dependence around 3.3 mT, where such a peak typically appears around \(\mu_{0}H_{c1}(0)\). The likely presence of multiple gaps along with the possibility of gap anisotropy both can affect the temperature dependence of FLL, and the high field compared with upper critical field used in our TF-\(\mu\)SR experiments make it difficult to makes a definitive statement on the gap symmetry based on our relaxation rate data. The relatively high field used in our TF-\(\mu\)SR experiment makes corrections to extract the penetration depth from the relaxation rate data difficult, due to the likely distortions to the FLL state in our case. Nevertheless, we give an estimate using of the penetration depth \(\lambda_{ab}=426\) nm. We compare the superconducting state in CaSb\({}_{2}\) with other known superconductors using the well-known Uemura plot. We plot the superconducting transition temperature against the Fermi temperature for CaSb\({}_{2}\) along with various other superconductors in Fig. 5, where we highlight the region of unconventional superconductors in grey at the top left region and conventional superconductors in the bottom right region. Considering the quasi-2D nature of CaSb\({}_{2}\), we estimate the Fermi temperature assuming a 2D system via the relation \(T_{F}=\frac{(\hbar^{2}\pi)m_{2D}}{k_{B}m^{2}}\) and for a 3D system via the relation \(T_{F}=(\hbar^{2}/2)(3\pi^{2})^{2/3}n^{2/3}/k_{B}m^{*}\)[38]. We use the previously reported carrier concentration and effective mass \(m*\)[17]. Based on our estimates, CaSb\({}_{2}\) appears in a region in between conventional superconductors and unconventional superconductors, where the estimate assuming a 2D system falls closer to the unconventional region. ### Zero-Field Muon and Time Reversal Symmetry Breaking We utilize muon spin relaxation measurements in zero-field (ZF) to search for any sign of spontaneous magnetic fields associated with breaking of TRS in the superconducting state. ZF spectra for CaSb\({}_{2}\) collected above \(T_{\rm c}\) and at the lowest temperature \(\sim 30\) mK are shown in Fig. 6(b) and (c) for muon spins perpendicular to the \(ab\)-plane and parallel to the \(ab\)-plane, respectively. In the absence of any static electronic moments, the muon polarization decay is due to the randomly oriented nuclear magnetic moments, which is generally described by the Gaussian Kubo-Toyabe function G\({}_{KT}\)(t) \[G_{\rm KT}(t)=\frac{1}{3}+\frac{2}{3}\left(1-\sigma^{2}t^{2}\right)\exp\left( -\frac{\sigma^{2}t^{2}}{2}\right)\] where \(\sigma\) reflects the width of the field experienced by muons due to nuclear dipoles. We fit the ZF spectra with the following relaxation function \[A(t)=A_{1}G_{\rm KT}(t)\exp(-\Lambda t)+A_{\rm BG}\] where \(A_{1}\) is the sample asymmetry and \(A_{BG}\) is the background asymmetry. 
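For concreteness, the sketch below (ours, not the authors' musrfit analysis) implements the zero-field relaxation function just defined, a Gaussian Kubo-Toyabe term multiplied by \(\exp(-\Lambda t)\) plus a flat background, and fits it to synthetic data. All numerical values are illustrative only, with \(\sigma\) of the order of the nuclear contribution quoted in the transverse-field analysis above.

```python
import numpy as np
from scipy.optimize import curve_fit

def kubo_toyabe(t, sigma):
    """Static Gaussian Kubo-Toyabe function for randomly oriented nuclear dipole fields."""
    return 1.0 / 3.0 + (2.0 / 3.0) * (1.0 - (sigma * t) ** 2) * np.exp(-0.5 * (sigma * t) ** 2)

def zf_asymmetry(t, A1, sigma, Lam, A_bg):
    """A(t) = A1 * G_KT(t) * exp(-Lambda*t) + A_BG, as defined in the text."""
    return A1 * kubo_toyabe(t, sigma) * np.exp(-Lam * t) + A_bg

if __name__ == "__main__":
    t = np.linspace(0, 10, 500)          # microseconds
    rng = np.random.default_rng(1)
    # synthetic spectrum with a small extra Lambda, mimicking an additional relaxation channel
    y = zf_asymmetry(t, 0.24, 0.20, 0.03, 0.02) + rng.normal(0, 0.004, t.size)
    popt, _ = curve_fit(zf_asymmetry, t, y, p0=(0.2, 0.15, 0.0, 0.0))
    print("fitted Lambda =", popt[2], "per microsecond")
```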
The additional \(\exp(-\Lambda t)\) represents any additional relaxation in the sample, such as broken TRS. The zero field muon spin relaxation rates of CaSb\({}_{2}\) with muon spin perpendicular and parallel to the \(ab\)-plane show a consistent contribution from nuclear dipole moments as expected. However, the contribution to the additional term shows a change in \(\Lambda\) at low temperature only when the muon spin is parallel to the \(ab\)-plane. Interestingly, this increase in \(\Lambda\) shows a linear dependence at low temperature and seems to appear at 1.0 K, which is well below \(T_{\rm c}\)\(\sim\) 1.6 K. In the case of muon spin perpendicular to the \(ab\)-plane we find no significant change at \(T_{\rm c}\), nor do we find any change at lower temperatures. We explore possible interpretations for this spontaneous TRS breaking, taking into account its dependence on the muon spin direction and the temperature being well below \(T_{\rm c}\). One possible explanation is triplet pairing in the superconducting state, but we exclude this based on Knight shift measurements revealing a decrease below \(T_{\rm c}\)[53]. Breaking of TRS was reported below \(T_{\rm c}\) in SrPtAs, a multigap \(s\)-wave superconductor, and this is discussed in relation to the multicomponent order parameter, belonging to a multidimensional representation, and grain-boundaries in the polycrystalline samples. This is unlikely in CaSb\({}_{2}\) due to the single crystal nature of the sample in our experiment. Another distinction is the appearance of the TRS breaking when muon spins are parallel to the \(ab\)-plane only, which implies that the TRS breaking in CaSb\({}_{2}\) is such that only muons spins parallel to the plane are sensitive to it. We consider the possibility of another superconducting phase breaking TRS that appears at 1 K, analogous with TRS breaking that appears below \(T_{\rm c}\) in UPt\({}_{3}\) in \(\mu\)SR [2] and Kerr effect [54] measurements. A second superconducting phase emerging below \(T_{\rm c}\) has been discussed for PrOs\({}_{4}\)Sb\({}_{12}\), where an enhancement of \(\mu_{0}H_{c1}(0)\) associated with the secondary phase is reported [32]. We considered the possibility of a secondary phase emerging at \(\sim 1\) K in CaSb\({}_{2}\), so we analyzed the \(\mu_{0}H_{c1}(0)\) estimates in Fig. 2 by fitting the data above and below 1 K. We see a slight increase of estimated \(\mu_{0}H_{c1}(0)\) based on data below 1 K compared with that above 1 K when \(H//c*\), while smaller difference appears for data above and below 1 K of \(\mu_{0}H_{c1}(0)\) with \(H//ab\). The possible anomaly in \(\mu_{0}H_{c1}(0)\) in CaSb\({}_{2}\) is similar to that observed in PrOs\({}_{4}\)Sb\({}_{12}\)[32], although if present it is much weaker in CaSb\({}_{2}\) and is only observed with field applied \(H//c*\). The appearance of TRS breaking with a spontaneous field in the \(c*\) direction may be related to the change in the \(T^{2}\) dependence in \(\mu_{0}H_{c1}(0)\) measured in the same direction. However, for both directions a single fit to \(\mu_{0}H_{c1}(0)\) over all temperatures falls within the error bars, which makes the above analysis mere speculation in an attempt to understand the TRS breaking observed. We consider the topology of CaSb\({}_{2}\) in the normal state to explain the observation of TRS breaking. 
The topologically non-trivial Dirac nodal lines in CaSb\({}_{2}\) have been shown theoretically to support topological superconductivity with \(B_{g}\) pairing symmetry, which has been termed nodal-line superconductivity [55]. Also, the topologically trivial \(A_{g}\) symmetry is also supported in CaSb\({}_{2}\)[55], which is more likely considering the nodeless gap behaviour observed in our TF-\(\mu\)SR measurements and previous works [23; 24]. This leads us to conclude that even if a second superconducting phase emerges in CaSb\({}_{2}\) at low temperatures it must have no extended nodes in its gap. ### Large Diamagnetism in the Normal State Large diamagnetism in the normal state has been observed in Dirac semimetals [56; 57; 58; 59; 60], and discussed in relation to a linear dispersion near the Fermi level [56]. In non-magnetic compounds like CaSb\({}_{2}\), the normal state magnetic response can be dominated by Pauli paramagnetism originating from the spin of free electrons. As well, diamagnetic contributions can arise from Landau diamagnetism, originating from the orbital motion, and a minor contribution from Larmor diamagnetism, originating from core ions. The Larmor diamagnetism in the case of CaSb\({}_{2}\) is expected to be around \(\sim 1.6\times 10^{-4}\) emu mol\({}^{-1}\) Oe\({}^{-1}\)[61], while the experimentally observed value is larger than \(\sim 2.5\times 10^{-4}\) emu mol\({}^{-1}\) Oe\({}^{-1}\). The weak temperature dependence of the magnetic susceptibility down to 50 K, shown in Fig. 7(b), is consistent with strong diamagnetic contribution related to the Dirac nodal-lines [62]. This strong diamagnetic signal is observed despite the paramagnetic contribution expected in the normal states, and indicates a strong contribution from orbital diamagnetism that is likely related to the Dirac electrons in semimetallic CaSb\({}_{2}\). Further investigations of this diamagnetic signal in future studies will help us clarify its origin. The possible contribution from Dirac electrons in the normal state of CaSb\({}_{2}\) to the magnetic properties and evidence for the validity of band structure calculations showing Dirac nodal-line states [17; 19; 22] deepens the interest in the superconducting state realized in CaSb\({}_{2}\). The carrier concentration is reported to be highly influenced by synthesis conditions [19], where changes seem to have little effect on the three-dimensional hole pocket and superconducting \(T_{\rm c}\). This suggests that topology may be tuned in CaSb\({}_{2}\) without affecting the superconductivity observed, such that topology and superconductivity can Figure 6: (a) Temperature dependence of the electronic relaxation rate \(\Lambda\) with muon spins parallel to \(ab\)-plane and perpendicular to \(ab\)-plane (parallel to \(c\)-axis). A clear increase in the extracted rate can be seen below 1 K with muon spin parallel to \(ab\)-plane, indicating the appearance of spontaneous fields inside the superconducting state. (b) ZF \(\mu\)SR spectra collected at 3.5 K and 34 mK for spin perpendicular to \(ab\)-plane (parallel to \(c\)-axis), with fit using Kubo-Toyabe function. (b) ZF \(\mu\)SR spectra collected at 2.00 K and 21 mK for spin parallel to \(ab\)-plane, fit using Kubo-Toyabe function. We can see a clear difference between the asymmetry shown in (c), indicating the presence of spontaneous fields inside the superconducting state. 
(d) Muon stopping site inside the CaSb\({}_{2}\) crystal structure and spin direction for the experiment in (b) with \(S_{\mu}//c\)-axis. (e) Same muon stopping site as (d), but with \(S_{\mu}//ab\)-plane as in experimental result of (c). be realized in the same sample. In summary, we demonstrated upper and lower critical field anisotropy in the superconducting state using magnetic susceptibility, and study the superconducting state using muon spin-relaxation. The temperature dependence of transverse-field relaxation can be fitted equally well with a single-gap model or two-gap model. A two-gap scenario is more likely considering previously reported tunnel-diode oscillator measurements on CaSb\({}_{2}\). The normal state of CaSb\({}_{2}\) shows a large diamagnetic signal, which is likely related to its Dirac semimetal nature. Zero-field relaxation shows little temperature dependence when the muon-spin is parallel to the \(c\)\(\ast\)-axis, while an increase in relaxation appears below 1 K when the muon-spin is parallel to the \(ab\)-plane. However, we find no discernible anomaly around this temperature in other measurements on CaSb\({}_{2}\). In various materials, such as UPt\({}_{3}\), PrOs\({}_{4}\)Sb\({}_{12}\), and Sr\({}_{2}\)RuO\({}_{4}\), the onset of TRS breaking coincides with \(T_{\rm c}\) under ambient conditions and its detection is independent of the direction of muon spins. Considering the anisotropy and two gap nature of the superconducting state and the emergence of direction-dependent TRS breaking at 1 K, \(\sim 2/3T_{\rm c}\), further investigation of CaSb\({}_{2}\) to clarify the possible emergence of a secondary phase is needed. Interactions between band structure topology, normal state diamagnetism, and superconductivity should be the subject of future studies. Finally, considering that the band structure of CaSb\({}_{2}\) is dominated by Sb atoms sitting at the two distinct sites in the material, investigation of other \(M\)Sb\({}_{2}\) antimonides with similar structures is of great interest to contrast with our current findings. ## IV Acknowledgements We thank TRIUMF staff for their technical support during muon experiment. We thank M. Sigrist, G. Luke, Y. Uemura, J. Sonier, and A. Ramires for discussion on superconductivity. We also thank H. Matsuura and M. Ogata for discussion on diamagnetism in Dirac semimetals. MO acknowledges the support by Stewart Blusson Quantum Matter Institute and the Max Planck-UBC-UTokyo Center for Quantum Materials. JB, DAB, and MCA acknowledge the support by the Natural Sciences and Engineering Research Council of Canada (NSERC). Figure 7: (a) Field-dependent isothermal magnetization of CaSb\({}_{2}\) (\(H//c\ast\)) measured in range of 0 to 7 T at 3 K and 10 K. Analysis of dHvA oscillations was shown in earlier report [17]. (b) Temperature dependent magnetization \(M\) of CaSb\({}_{2}\) measured in an applied field of 1.0 T, where \(H//c\ast\).
CaSb$_2$ is a bulk superconductor and also a topological semimetal, making it an excellent platform for realizing topological superconductivity. In this work, we investigated the anisotropy of the upper and lower critical fields of the superconducting state using magnetic susceptibility, and studied the superconducting state using muon spin relaxation. The temperature dependence of the transverse-field relaxation rate can be fitted with either a single-gap or a two-gap model. The zero-field relaxation shows almost no temperature dependence when the muon spin is parallel to the $c*$-axis, whereas it increases below 1 K when the muon spin is parallel to the $ab$-plane. Considering the breaking of time-reversal symmetry (TRS), we concluded an $s+is$ order parameter. This is CaSb$_2$'s
2309.13991
Magnetism on the thermal dynamics of 2D antiferromagnetic membranes
We developed a theoretical scheme for incorporating the magnetoelastic contribution into the thermal elastic dynamics of thin membranes of 2D antiferromagnetic material with restricted geometry. We extended the elastic Gr\"uneisen relation into an effective version that includes the magnetic counterpart of the volume change of the internal energy. Based on the specific heat and thermal conductivity of elastic and magnetic origin, we predicted the temperature dependence of observables such as the effective Gr\"uneisen parameter, the thermal expansion coefficient, and the damping factor over a wide range of temperatures across the phase transition. Our model of analysis has been validated by applying it to the case of a FePS3 flake resonator, and the theoretical predictions fit well with the reported experimental data.
Xiang Zhang, Makars Siskins, Yaroslav Blanter
2023-09-25T09:43:45
http://arxiv.org/abs/2309.13991v2
# Magnetism on the thermal dynamics of 2D antiferromagnetic membranes ###### Abstract We developed a theoretical scheme of incorporating the magnetoelastic contribution into the thermal elastic dynamics for the thin membranes of 2D antiferromagnetic material with restricted geometry. We extended the elastic Gruneisen relation into an effective version which includes the magnetic counterpart to the volume change of internal energy. Based on the specific heat and thermal conductivity from the elastic and magnetic origins we predicted the dependency of observables, such as effective Gruneisen parameter, thermal expansion coefficient, and the damping factor, with respect to a wide range of temperature across the phase transition. Our model of analysis as been validated by applying to the case of FePS\({}_{3}\) flake resonator and the theoretical predictions fits well with the reported experiment data. ## I Introduction In recent decades the 2D magnetic (van der Waals) layered materials have consistently attained the focus of research from both theoretical and experimental aspects [1; 2]. Compared to the three-dimensional counterpart, the 2D magnetic membranes constitute ideal platform to explore fundamental physics of magnetism and also its coupling to other degrees of freedom in the low dimensional regime [3]. The heterostructures build upon the 2D magnetism show susceptibility with respect to external stimuli leading to the emergent interfacial phenomena and novel spintronic devices [1; 4]. Within these materials, the FePS\({}_{3}\) compound is of particular interest because it is measured to be a 2D Ising model with zigzag antiferromagnetic (AFM) order in which the magnetic Fe atom constitute honeycomb lattice [5; 6]. Although the magnetic and electronic structure of this material has been studied intensively, there is limited understanding of its thermal properties and especially the magnetic contribution to the specific heat and thermal flux in the restricted geometry such as the thin membranes of several nanometers in thickness and micrometers in the planar dimension [5; 7; 8]. The knowledge of its thermal properties is important for further application in spin-caloritronics [4] and also stands for another tool of investigating the magnetic phase transition apart from the Raman spectroscopy [5; 9]. In this Chapter, we extend the analysis of magnetoelastic coupling into a wide range of temperature beyond the phase transition, aiming at providing a theoretical explanation for the observed anomaly [9] in thermal transport of FePS\({}_{3}\) flake resonator. Showing in the Fig. 1, the membranes suspended over a cavity undergo a drum-like vibration whose eigenfrequency is related to the planar strain which can be tuned by the gate voltage and also by the environment temperature due to the thermal expansion. At a fixed gate voltage the membrane is pushed down, and the increase of temperature leads to the drop of strain that at around the Neel temperature (\(T_{N}\approx 114\,\)K) the breaking of magnetic stiffness would soften this material and a sudden drop of resonance frequency has been observed [9]. Moreover, the vanishing of magnons as additional thermal carrier after \(T>T_{N}\) would lead to a drop of the overall thermal conductivity which has been measured through the damping factor \(Q^{-1}\) as function of temperature. 
In order to quantitatively explain experimental findings for thermal phenomena of the hybrid system, we develop a scheme of merging the magnetic contribution into the thermoelastic dynamics and predict the temperature dependence for observables including heat capacity, linear expansion coefficient, and damping factor for the clamped FePS\({}_{3}\) membranes. Starting from the non-magnetic thermoelastic free energy we firstly derive the expression for the damping factor \(Q^{-1}\) of thin membrane/plate which turns out to be a function of the overall thermal expansion coefficient, the specific heat, and Figure 1: (a) Schematic figure for the FePS\({}_{3}\) resonator setup. The device is settled in nearly vacuum environment so that the thermal transfer through air damping can be ignored. Thermal expansion coefficient of the SiO\({}_{2}\) substrate is tiny and the silicon base is also small compared to the FePS\({}_{3}\). The flake thickness is \(h=45\,\)nm and diameter \(d=10\,\mu\)m. (b) Fixed gate voltage pushes down the membranes and as temperature increases the flake expands leading to a decrease of planar tension. Figure quoted from publication [9]. the thermal conductivity (See section II). Then we derive the total specific heat \(C_{V}\) which has origins including the phonon and magnon excitations and also the part of energy required to break the Ising coherence around phase transition. We calculate the thermal conductivity \(\kappa\) as a sum of the phonon and magnon both as heat carriers and showed its magnitude are much smaller than the bulk compound because the limited particle lifetime due to the restricted geometry. Most importantly, by including the magnetoelastic Hamiltonian into the thermoelastic free energy we prove the total thermal expansion coefficient \(\tilde{\alpha}\) retains the usual formalism of Gruneisen relation but with the incorporated effective Gruneisen parameter \(\tilde{\gamma}\). It essentially describes the variation of internal energy including all the components ascribed to the volume change (See section III). Using real material parameters we fitted experimental measurements with our model of analysis. Good agreement with recent experiment data [9; 10] supports the validity of our results (See section IV). The strong magnetic _weight_ as part of the internal energy for this geometry restricted membranes making it an ideal platform to study the optomechanics integrated with the magnetism tuning. It is also expected that the model developed in this work can be useful for further analysis in the 2D spin-caloritronic devices. ## II Bending of thin plate with the temperature gradient In order to calculate the damping coefficient \(Q^{-1}\), we firstly have to solve the coupled dynamics including the degree of freedom from elasticity, magnetism, and temperature field. In the following section III.3, one shall see that the contribution of magnetoelastic coupling can be incorporated into the effective thermoelastic coupling and the governing equations of motion can be narrowed to including only the dynamics of elastic vibration and temperature gradient. In this section, we deal with the round plate with its undeformed surface lying on the \(X-Y\) plane and study its out-of-plane (\(\hat{z}\)) vibration. We use the cylinder coordinate \((r,\varphi,z)\) and assume its thickness \(h\) is much smaller than the plate diameter \(d\), i.e. \(h\ll d\). 
The displacement \(u_{z}\) and deformation \(\epsilon_{ij}\) for plate are also considered to be small such that \(u_{i}\ll h\) and \(\epsilon_{ij}\ll h\). The displacement fields along \((\hat{r},\hat{\varphi})\) direction are respectively represented by \(u_{r}\) meaning the radial extension and \(u_{\varphi}\) meaning the circumferential distortion. One should note that \(u_{\varphi}\) represents the displaced distance along the \(\hat{\varphi}\) direction not the \(\varphi\) itself, \(u_{\varphi}=rd\varphi\). The strain tensor in cylinder coordinate is expressed in the form [11] \[\begin{split}\epsilon_{rr}&=\frac{\partial u_{r}}{ \partial r},\;\epsilon_{\varphi\varphi}=\frac{1}{r}\frac{\partial u_{\varphi} }{\partial\varphi}+\frac{u_{r}}{r},\;\epsilon_{zz}=\frac{\partial u_{z}}{ \partial z},\\ \epsilon_{\varphi z}&=\frac{1}{2}\left(\frac{1}{r} \frac{\partial u_{z}}{\partial\varphi}+\frac{\partial u_{\varphi}}{\partial z }\right),\;\epsilon_{rz}=\frac{1}{2}\left(\frac{\partial u_{r}}{\partial z}+ \frac{\partial u_{z}}{\partial r}\right),\\ \epsilon_{r\varphi}&=\frac{1}{2}\left(\frac{\partial u _{\varphi}}{\partial r}-\frac{u_{\varphi}}{r}+\frac{1}{r}\frac{\partial u_{r} }{\partial\varphi}\right).\end{split} \tag{1}\] It is easy to show that according to the coordinate transformation, the relation \(\epsilon_{rr}+\epsilon_{\varphi\varphi}+\epsilon_{zz}=\epsilon_{xx}+\epsilon_ {yy}+\epsilon_{zz}\) holds, meaning the volume change, as it should, does not depends on the choice of coordination. Beyond this, the thermoelastic free energy [11] \[\begin{split} F(T)&=F_{0}(T)+\frac{1}{2}K_{T}\left( \epsilon_{i}^{i}\right)^{2}+\mu\sum_{ij}\left(\epsilon_{ij}-\frac{1}{3} \epsilon_{i}^{i}\delta_{ij}\right)^{2}\\ &-K_{T}\alpha\left(T-T_{0}\right)\epsilon_{i}^{i},\end{split} \tag{2}\] and elastic tensor relation for isotropic material \[\sigma_{ij}=K_{T}\epsilon_{i}^{i}\delta_{ij}+2\mu(\epsilon_{ij}-\frac{1}{3} \epsilon_{i}^{i}\delta_{ij}),\quad\epsilon_{i}^{i}=\sum_{i}\epsilon_{ii}, \tag{3}\] also hold true in formalism for any orthogonal coordinates [11]. In order to effective describe the characteristic deformation of the 3D elastic body we establish a concept of neutral surface. Regarding to the bending of thin plate, one side is compressed (the concave side) while the opposite is extended (convex side). Between these two sides, there is a surface which has neither extension nor compression, i.e. \(\epsilon_{i}^{i}=0\), and is referred as the _neutral surface_. Mount the undeformed neutral surface onto the \(z=0\) plane and based on the small deformation assumption, the displacement on the neutral surface is \(u_{r}^{0}=0,\,u_{\varphi}^{0}=0,\,u_{z}^{0}=\zeta(r,\varphi,t)\) with \(\zeta\ll h\). Due to the small deformation, the internal stress on \(z\)-th surface should be much smaller than the stress along the longitudinal direction, \(\sigma_{iz}=0\), which leads to the hypotheses inside the bulk volume [12; 13] \[\epsilon_{rz}=0,\quad\epsilon_{\varphi z}=0,\quad\sigma_{zz}=0. \tag{4}\] With the assumed neutral surface hypotheses, the displacement inside the plate can be expressed by the function of \(\zeta\) that \[u_{r}=-z\frac{\partial\zeta}{\partial r},\quad u_{\varphi}=-\frac{z}{r}\frac {\partial\zeta}{\partial\varphi},\quad u_{z}=\zeta. 
\tag{5}\] and the remaining strain components are given by \[\begin{split}\epsilon_{rr}&=-z\frac{\partial^{2} \zeta}{\partial r^{2}},\quad\quad\quad\quad\quad\epsilon_{\varphi\varphi}=-z \left(\frac{1}{r}\frac{\partial\zeta}{\partial r}+\frac{1}{r^{2}}\frac{ \partial^{2}\zeta}{\partial\varphi^{2}}\right),\\ \epsilon_{r\varphi}&=-z\frac{\partial}{\partial r} \left(\frac{1}{r}\frac{\partial\zeta}{\partial\varphi}\right),\quad\epsilon_{ zz}=\frac{z\sigma}{1-\sigma}\left(\frac{\partial^{2}\zeta}{\partial r^{2}}+ \frac{1}{r}\frac{\partial\zeta}{\partial r}+\frac{1}{r^{2}}\frac{\partial^{2} \zeta}{\partial\varphi^{2}}\right).\end{split} \tag{6}\] Define the Laplace operator on the plane \[\Delta=\frac{\partial^{2}}{\partial r^{2}}+\frac{1}{r}\frac{\partial}{\partial r} +\frac{1}{r^{2}}\frac{\partial^{2}}{\partial\varphi^{2}}, \tag{7}\] then \(\epsilon_{rr}+\epsilon_{\varphi\varphi}=-z\Delta\zeta\) and \(\epsilon_{zz}=\frac{z\sigma}{1-\sigma}\Delta\zeta\). For the case of axial symmetric plate it is reasonable to assume \(\zeta=\zeta(r,t)\) which does not depends on the polar angle \(\varphi\), then the strain can be even simplified into \[\begin{split}\epsilon_{rr}&=-z\frac{\partial^{2} \zeta}{\partial r^{2}},\quad\epsilon_{\varphi\varphi}=-\frac{z}{r}\frac{ \partial\zeta}{\partial r},\\ \epsilon_{zz}&=\frac{z\sigma}{1-\sigma}\Delta\zeta, \quad\Delta=\frac{\partial^{2}}{\partial r^{2}}+\frac{1}{r}\frac{\partial}{ \partial r},\end{split} \tag{8}\] and other components equals to zero. Substituting the strain tensor into the thermoelastic free energy (Eq. 2) one can derive its expression as the function of \(\zeta\) \[F_{\text{el}}=\int_{0}^{2\pi}d\varphi\int_{0}^{R}rdr\frac{Yh^{3}}{12(1-\sigma ^{2})}\left[(1+\sigma)\frac{\alpha}{3}\,I_{T}\Delta\zeta+\frac{1}{2}\left({ \zeta^{\prime\prime}}^{2}+\frac{1}{r^{2}}{\zeta^{\prime}}^{2}+\frac{2\sigma }{r}{\zeta^{\prime}}{\zeta^{\prime\prime}}\right)\right], \tag{9}\] where the thermal inertia \[I_{T}(r)=\frac{12}{h^{3}}\int_{-h/2}^{h/2}z\,\theta(r,z)\,dz, \tag{10}\] in which \(\theta=T-T_{0}\) is the small differences between the temperature within the plate \(T\) and the environment temperature \(T_{0}\). The internal force exerted on to the volume element of unit surface is \(f_{\zeta}=-\delta F_{\text{el}}/\delta\zeta\) and the equation of motion for the vibration of the circular plate is \[\rho h\frac{\partial^{2}\zeta}{\partial t^{2}}+\frac{Yh^{3}}{12(1-\sigma^{2}) }\left[\Delta\Delta\zeta+(1+\sigma)\alpha/3\,\Delta I_{T}\right]=0. \tag{11}\] As for the dynamics of temperature field the heat diffusion equation is a rephrase of energy conservation, that is the heat absorption equals to the energy flows \(T\frac{\partial\beta}{\partial t}=-\nabla\cdot\mathbf{q}=\kappa\Delta T\) with \(\mathbf{q}=-\kappa\nabla T\) is the thermal flux and \(\kappa\) is the heat conduction coefficient [14]. From the thermoelastic coupling we understand that the heat absorption leads to not only increase of particle motion but also the volume expansion, \(dS=dS_{0}(T)+K_{T}\alpha\epsilon_{i}^{i}\). Applying the relation \(\partial S_{0}/\partial T=\rho C_{V}/T\), we have \(\rho C_{V}\partial T\big{/}\partial t=\kappa\Delta T-K_{T}\alpha T_{0}\partial \epsilon_{i}^{i}/\partial t\). The equation of motion for describing the dynamics of small temperature differences within the plate has the general form \[\kappa\Delta\theta+\kappa\frac{\partial^{2}\theta}{\partial z^{2}}=\rho C_{V} \frac{\partial\theta}{\partial t}+K\alpha T_{0}\frac{\partial\epsilon_{i}^{i} }{\partial t}. 
\tag{12}\] As from Ref. [15] we make an approximation that the temperature gradient is small in the longitudinal direction compared to the vertical direction, \(\Delta\theta\ll\partial^{2}\theta/\partial z^{2}\). Combing the strain components from Eq. 8 the governing equation for the dynamics of temperature field in thin plate then becomes \[\kappa\frac{\partial^{2}\theta}{\partial z^{2}}=\rho C_{V}\frac{\partial\theta }{\partial t}-zK_{T}\alpha T_{0}\frac{1-2\sigma}{1-\sigma}\frac{\partial \Delta\zeta}{\partial t}. \tag{13}\] Inserting the ansatz solution \(\zeta=\zeta_{0}e^{i\omega t}\) and \(\theta=\theta_{0}e^{i\omega t}\) into the Eq. 13 we have the equation for temperature field which can be solved by the boundary condition that there is no thermal conduction on the top and bottom surface, \[\frac{\partial\theta_{0}}{\partial z}=0\quad\text{at}\;z=\pm\frac{h}{2}. \tag{14}\] The solved temperature profile across the plate is given by \[\theta_{0}(r,z)=\frac{K_{T}\alpha T_{0}}{\rho C_{V}}\frac{1-2\sigma}{1-\sigma} \left[z-\frac{\sin{(mz)}}{m\cos{(mh/2)}}\right]\Delta\zeta_{0}, \tag{15}\] with the wave vector \[m=\sqrt{-\frac{i\omega\rho C_{V}}{\kappa}}=(1-i)\sqrt{\frac{\omega\rho C_{V}}{ 2\kappa}}. \tag{16}\] Applying this temperature profile into the moment of inertia (Eq. 10) and the elastic equation of motion (Eq. 11) becomes an eigen-equation \[\begin{split}\rho h\omega^{2}\zeta_{0}&=\frac{Yh^{ 3}}{12(1-\sigma^{2})}[1+\Delta_{Y}(1+f(\omega))]\Delta\Delta\zeta_{0}\\ &=\frac{Y_{\omega}h^{3}}{12(1-\sigma^{2})}\Delta\Delta\zeta_{0}, \end{split} \tag{17}\] with the modified Young's modulus \(Y_{\omega}=[1+\Delta_{Y}(1+f(\omega))]\) is frequency-dependent and the adiabatic degree \(f(\omega)\) ranges from \(-1\) to \(0\) for low and high vibrating frequency identifying the isothermal and adiabatic extremes, \[f(\omega)=\frac{24}{m^{3}h^{3}}\left[\frac{mh}{2}-\tan{\left(\frac{mh}{2} \right)}\right]. \tag{18}\] The quantity \(\Delta_{Y}\) which is a measure of _thermal relaxation strength_ acquires the from \[\Delta_{Y}=\frac{1+\sigma}{1-\sigma}\frac{Y\alpha^{2}T_{0}}{\rho C_{V}}. \tag{19}\] Letting \(12(1-\sigma^{2})\rho\omega^{2}/Y_{\omega}h^{2}=q^{4}\), then the Eq. 17 becomes \(\Delta\Delta\zeta_{0}=q^{4}\zeta_{0}\) and can be solved by \(\Delta\zeta_{0}=q^{2}\zeta_{0}\) with \(\zeta_{0}=AJ_{0}(qr)+BY_{0}(qr)+CI_{0}(qr)+DK_{0}(qr)\). Here \((J_{0},Y_{0},I_{0},K_{0})\) are the first and second Bessel functions of the zero-th order respectively. Due to the finite value of \(\zeta_{0}\) at \(r=0\), the \(B=D=0\) and \(\zeta_{0}=AJ_{0}(qr)+CI_{0}(qr)\) with the coefficient \((A,C)\) to be defined by the boundary condition. For the case of clamped plate, the boundary condition (\(a\) is plate radius) has the form \(\zeta_{0}\big{|}_{r=a}=0,\ \partial\zeta_{0}\big{/}\partial r\big{|}_{r=a}=0\), which can be satisfied by \((q_{n}a)^{2}\equiv\mathcal{C}_{n}=\{10.21,39.38,89.10,\cdots\}\). The complex eigenfrequency then reads \[\omega=\omega_{0}\sqrt{1+\Delta_{Y}(1+f(\omega_{0}))}, \tag{20}\] with the unperturbed eigenfrequency for the \(n\)-th vibration mode is \[\omega_{0}=q_{n}^{2}h\sqrt{\frac{Y}{12\rho(1-\sigma^{2})}}=\mathcal{C}_{n} \frac{h}{a^{2}}\sqrt{\frac{Y}{12\rho(1-\sigma^{2})}}. \tag{21}\] Due to the complex value of frequency \(\omega\), the time dependency \(e^{i\omega t}\) of physical quantity decays along with the oscillation. Assuming \(\omega=\omega_{0}(1+i\eta)\) then the displacement decays as \(\zeta(t)\sim e^{i\omega t}e^{-\eta\omega_{0}t}\). 
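As a quick numerical illustration of Eq. 21, the unperturbed eigenfrequencies of the clamped plate can be evaluated with the mode constants \(\mathcal{C}_{n}\) quoted above. The material and geometry values below are the FePS\({}_{3}\) parameters used later in Sec. IV, with the radius taken from the 10 \(\mu\)m diameter of Fig. 1; this is the pure plate limit (no tension), a sketch rather than the full membrane-plate model used later.

```python
import numpy as np

# Mode constants (q_n a)^2 for the clamped circular plate, as listed above.
C_n = np.array([10.21, 39.38, 89.10])

# FePS3 material and geometry values quoted in Sec. IV [9]; a = d/2.
Y = 103e9            # Young's modulus, Pa
sigma_p = 0.304      # Poisson ratio
rho = 3375.0         # mass density, kg m^-3
h = 45e-9            # flake thickness, m
a = 5.0e-6           # plate radius, m

# Unperturbed eigenfrequencies of Eq. 21 (plate limit, no in-plane tension).
omega_0 = C_n * (h / a**2) * np.sqrt(Y / (12.0 * rho * (1.0 - sigma_p**2)))
f_plate = omega_0 / (2.0 * np.pi)    # Hz
```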
The damping for this oscillating system is captured by the damping factor \(Q^{-1}\) which is defined to be the ratio of energy loss per radian to the energy stored in the oscillator. Because the oscillating energy is quadratic to the displacement field so we have \(E(t)\sim e^{-2\eta\omega_{0}t}\) leading to the fractional energy loss per radian is \(1-e^{-2\eta}\approx 2\eta\). Thus the system damping for elastic oscillator is qualified by the \(Q^{-1}=2|\text{Im}(\omega)/\text{Re}(\omega)|\). Shortening the parameter \(mh\) within the function \(f(\omega)\) into a single variable \(\xi\)[15] \[\xi=h\sqrt{\frac{\omega_{0}\rho C_{V}}{2\kappa}}, \tag{22}\] the thermoelastic damping \(Q^{-1}\) can be derived as \[Q^{-1}=\Delta_{Y}\left(\frac{6}{\xi^{2}}-\frac{6}{\xi^{3}}\frac{\sinh\xi+\sin \xi}{\cosh\xi+\cos\xi}\right)=\frac{1+\sigma}{1-\sigma}\frac{Y\alpha^{2}T_{0}} {\rho C_{V}}\left(\frac{6}{\xi^{2}}-\frac{6}{\xi^{3}}\frac{\sinh\xi+\sin\xi} {\cosh\xi+\cos\xi}\right). \tag{23}\] Since the thermoelastic variables such as \(\alpha\), \(\kappa\) and \(C_{V}\) are temperature dependent, it is easy to understand the damping factor \(Q^{-1}\) also changes with \(T_{0}\) and it will show anomaly in the present of second order phase transition with which the specific heat \(C_{V}\) has observed discontinuity. For convenience, in the following we will replace the environment temperature \(T_{0}\) by the symbol \(T\) with consensus. ## III Thermal observables for elastic plate hybrid with magnetic phase transition In this section we study the thermal observables for the elastic plate hybrid with magnetism for a wild range of temperature across the phase transition. To this aim we start with deriving the heat capacity and thermal conductivity due to the bosons. Then we shows the incorporation of magnetoelastic coupling into the effective thermoelastic free energy and derive the effective expansion coefficient \(\tilde{\alpha}\) and damping factor \(Q^{-1}\) for the thermal-magnetic-elastic vibrating system. In general, below the phase transition the material's heat capacity \(C=dQ\big{/}dT\) comes from the thermal excitation of the bosons, which are quasi-particles mainly the phonons for ordinary insulators and also include magnons for FM and AFM materials. If the temperature is homogeneous then the Bose-Einstein density of excited bosons is uniformly distributed across the material. However, the existence of temperature field leads to the excess number of quasi-particles staying out of equilibrium and then transport according to the temperature gradient. If the environment temperature is close to the range of magnetic phase transition, the coherence of precession between the neighbouring spins breaks down and an additional contribution to the specific heat should be taken into account. The decaying of magnetization \(\mathbf{M}\) as the heating procedure leads to an accompanying decrease of the effective exchange field \(H_{E}\) and anisotropy field \(H_{A}\) in magnon's dispersion equation \[\omega_{\mathbf{k}}=\gamma\mu_{0}\sqrt{H_{A}^{2}+2H_{E}H_{A}+H_{E}^{2}(1-\psi_{\bm {k}}^{2})}, \tag{24}\] in which \(\psi_{\mathbf{k}}\) is the structure factor defined by \(\psi_{\mathbf{k}}=(1/z)\sum_{\mathbf{\delta}}e^{i\mathbf{k}\cdot\mathbf{\delta}}\) and \(\mathbf{\delta}\) is the vector connecting the \(z\) nearest neighbouring spins of opposite orientations. This energy renormalization [16; 17] should also be incorporated into the calculation of magnon's specific heat and thermal conductivity. 
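The thermoelastic damping machinery of Eqs. 19, 22, and 23 can be collected into a compact numerical form; a minimal sketch follows. Here \(C_{V}\) is taken per unit mass so that \(\rho C_{V}\) is the volumetric heat capacity, and the numbers in the example call are placeholders to be replaced by the temperature-dependent quantities derived in the rest of this section.

```python
import numpy as np

def relaxation_strength(Y, sigma_p, alpha, T0, rho, C_V):
    """Thermal relaxation strength Delta_Y of Eq. 19; C_V is the specific
    heat per unit mass so that rho * C_V is the volumetric heat capacity."""
    return (1.0 + sigma_p) / (1.0 - sigma_p) * Y * alpha**2 * T0 / (rho * C_V)

def q_inverse(omega_0, h, rho, C_V, kappa, Delta_Y):
    """Thermoelastic damping of Eq. 23, with xi defined as in Eq. 22."""
    xi = h * np.sqrt(omega_0 * rho * C_V / (2.0 * kappa))
    bracket = 6.0 / xi**2 - (6.0 / xi**3) * (np.sinh(xi) + np.sin(xi)) / (np.cosh(xi) + np.cos(xi))
    return Delta_Y * bracket

# Placeholder call: alpha, C_V, and kappa should be the T-dependent values.
dY = relaxation_strength(Y=103e9, sigma_p=0.304, alpha=1e-5, T0=100.0, rho=3375.0, C_V=300.0)
Qinv = q_inverse(omega_0=2 * np.pi * 40e6, h=45e-9, rho=3375.0, C_V=300.0, kappa=0.05, Delta_Y=dY)
```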
Mathematically the heat capacity due to the bosons is \[C_{V}=\frac{1}{V}\frac{\partial}{\partial T}\sum_{\mathbf{k}}\hbar\omega_{\mathbf{k}} \bar{n}_{\mathbf{k}},\quad\bar{n}_{\mathbf{k}}=\frac{1}{e^{\beta\hbar\omega_{\mathbf{k}}}- 1}, \tag{25}\] where \(\bar{n}_{\mathbf{k}}\) is the Bose-Einstein's equilibrium amount of bosons of energy \(\hbar\omega_{\mathbf{k}}\). The thermal conductivity is defined as the coefficient for heat flux due to the temperature gradient, \(\mathbf{q}=-\kappa\nabla T\). From kinetic transfer theory this thermal flux can be calculated by \[\begin{split}\mathbf{q}&=-\frac{1}{V}\sum_{\mathbf{k}}\hbar \omega_{\mathbf{k}}\mathbf{v}_{\mathbf{k}}(\tau_{\mathbf{k}}\mathbf{v}_{\mathbf{k}}\cdot\nabla\bar{n}_{ \mathbf{k}})\\ &=-\frac{1}{V}\frac{\partial}{\partial T}\sum_{\mathbf{k}}\hbar \omega_{\mathbf{k}}\bar{n}_{\mathbf{k}}\tau_{\mathbf{k}}(\nabla T\cdot\mathbf{v}_{\mathbf{k}})\mathbf{ v}_{\mathbf{k}},\end{split} \tag{26}\] in which an isotropic \(\kappa\) can be extracted if the particle velocity \(\mathbf{v}_{k}\) is homogeneous to each direction. However, if the particle velocity has directional bias then \(\kappa\) depends on the orientation and thermal transfer shows anisotropy. In the simplest case, if the particle's lifetime \(\tau_{k}=\tau_{0}\) and velocity \(\mathbf{v}_{k}=\bar{v}\) does not depends on wavevector \(\mathbf{k}\) we see that the thermal flux can be simplified as \(\mathbf{q}=-\frac{\bar{v}^{2}\tau_{0}}{V}\frac{\partial}{\partial T}\sum_{\mathbf{k}} \hbar\omega_{k}\bar{n}_{k}\nabla T\equiv-C\bar{v}^{2}\tau_{0}\cdot\nabla T\) leading to the simple form \(\kappa=C\bar{v}^{2}\tau_{0}\). Once we know the dispersion relation \(\omega_{k}\) and the lifetime \(\tau_{k}\) for the mobile quasi-particles we can determine specific heat \(C_{V}\) and thermal conductivity \(\kappa\) at least in a numerical way. The elastic specific heat and thermal conductivity can be derived from the statistics of low lying phonon modes (acoustic modes) based on the general equations of 25 and 26 with the sound wave dispersion relation \(\omega_{\mathbf{k}}=\bar{v}\mathbf{k}\), and \(\bar{v}\) is the Debye averaged acoustic velocity \[\begin{split} C_{\rm db}(T)&=\frac{\hbar^{2}}{2\pi} \frac{3}{k_{B}T^{2}}\int_{0}^{k_{\rm db}}dk\frac{k\omega_{k}^{2}e^{\beta\hbar \omega_{k}}}{(e^{\beta\hbar\omega_{k}}-1)^{2}},\\ \kappa_{\rm db}(T)&=\frac{\hbar^{2}}{4\pi}\frac{3 \bar{v}^{2}}{k_{B}T^{2}}\int_{0}^{k_{\rm db}}dk\frac{\tau_{k}k\omega_{k}^{2}e^ {\beta\hbar\omega_{k}}}{(e^{\beta\hbar\omega_{k}}-1)^{2}}.\end{split} \tag{27}\] The elastic zone boundary can be defined by the Debye temperature as \(k_{\rm db}=k_{B}T_{\rm db}/\hbar\bar{v}\). Note that here we have assumed the elastic lattice is of 2D while the vibration is still 3 dimensional. It can be adjusted easily to the longitudinal or shear polarization only by replacing the factor 3 into 1 or 2 respectively. ### Specific heat and thermal conduction due to the magnon excitations Specific to the 2D AFM material, we set the external field \(H_{0}=0\) and according to Eq. 24 the dispersion relation depends on the direction of wave vector \(\mathbf{k}\). Contrary to detailed treatment as in the Refs. [7], in this work we simplify the detailed 2D lattice structure and make the homogeneous assumption (\(a=b\)) such that we rephrase the \(\psi_{\mathbf{k}}\) of Eq. 
24 into \(\psi_{k}=\cos{(\pi k/2k_{m})}\) being isotropic with \(k\in[0,k_{m}]\) is limited to the first Brillouin zone and \(k_{m}\) is defined from the spherical energy boundary assumption [16; 18], \[\sum_{\mathbf{k}}=\frac{V}{(2\pi)^{2}}\int d\mathbf{k}=\frac{Na^{2}}{(2\pi)^{2}}\int_ {0}^{k_{m}}2\pi kdk=N, \tag{28}\] such that \(k_{m}a/2=\sqrt{\pi}\). Thus, the dispersion relation for AFM magnon becomes \[\hbar\omega_{k}=\gamma\mu_{0}H_{E}\sqrt{\sin^{2}{(\pi k/2k_{m})}+\eta^{2}+2 \eta}, \tag{29}\] with \(\eta=H_{A}/H_{E}\) is the ratio of anisotropy field to the exchange field. For some AFM material such as the RbMnF\({}_{3}\) which has very small anisotropy \(H_{A}=4.5\) Oe, the exchange field is as large as \(H_{E}=830\) kOe leading to \(\eta\approx 0\) and it is considered as a typical 3D Heisenberg antiferromagnet [19; 20]. For other materials such as the FeF and the FePS\({}_{3}\) used in our experiment the magnetic anisotropy is strong and comparable to the exchange field, resulting in \(\eta\gtrapprox 1\) which makes them a quasi Ising system [19; 21]. As environment temperature goes up, the spontaneous magnetization \(M(T)\) decays because the thermal magnon excitation [16; 22] and also the decoherence between neighbouring spins for the \(T\lessapprox T_{N}\). Since \(M=-g\mu_{B}NS\), the effective spin magnitude \(S(T)\) decays which results in the decreasing of \(H_{E}=2Sz|J|/\mu_{0}\gamma\) and \(H_{A}=2SA/\mu_{0}\gamma\) in the dispersion relation. As a consequence, the temperature dependence for the \(\omega(T)\) should be taken into account in deriving the magnon specific heat and thermal conductivity. For simple treatment one can apply the molecular field approximation (mean field theory) in which the magnetization \(M(T)=M_{0}B(x)\) with \(B(x)\) is the Brillouin function and \(x=\mu_{0}n_{w}M(T)g\mu_{B}S_{0}/k_{B}T\) is the normalized energy [23]. Although this mean field approach does not provide the correct magnetization around phase transition, it leads to good results of magnon spectra at temperatures \(T<0.8T_{N}\)[16; 17]. With the derived dispersion relation, the heat capacity due to thermal magnons excitation in the 2D AFM model is \[\begin{split} C_{\rm mag}(T)&=\frac{\hbar^{2}}{2 \pi}\frac{1}{k_{B}T^{2}}\int_{0}^{k_{m}}dk\frac{k\,\omega_{k}^{2}\cdot e^{\beta \hbar\omega_{k}}}{(e^{\beta\hbar\omega_{k}}-1)^{2}}\\ &=\frac{\hbar^{2}k_{m}^{2}}{2\pi k_{B}}\frac{1}{T^{2}}\int_{0}^{1} dq\frac{q\,\omega_{q}^{2}\cdot e^{\beta\hbar\omega_{q}}}{(e^{\beta\hbar\omega_{q}}-1)^ {2}},\end{split} \tag{30}\] where the explicit temperature dependence on \(\omega_{k}(T)\) has been suppressed and we replace \(k\) with the normalized wave vector \(q=k/k_{m}\) ranging from 0 to 1. As a comparison, we plot the \(C_{\rm mag}(T)\) derived from the 2-D integral of \((q_{x},\,q_{y})\) of the dispersion relation \(\hbar\omega_{\mathbf{q}}=\gamma\mu_{0}H_{E}\sqrt{(1-\psi_{\mathbf{q}}^{2})+\eta^{2}+2\eta}\) with \(\psi_{\mathbf{q}}=\cos{(q_{x}\pi/2)}\cos{(q_{y}\pi/2)}\). Showing in Fig. 2(a) we see the complete 2-D integral and the simplified one result in almost exactly the same curve which validates that we can indeed ignore the direction of \((q_{x},\,q_{y})\) and shorten the \(\psi_{\mathbf{q}}\) into 1-D integral on \(q\) with the \(\psi_{q}=\cos{(q\pi/2)}\). For the magnon's thermal conductivity, it is defined from the heat flux (Eq. 
26) that \[\mathbf{q}=\frac{-1}{(2\pi)^{2}}\int\,d\mathbf{k}\frac{1}{k_{B}T^{2}}\frac{\tau_{k}\,( \hbar\omega_{k})^{2}\,e^{\beta\hbar\omega_{k}}}{(e^{\beta\hbar\omega_{k}}-1)^ {2}}(\nabla T\cdot\mathbf{v}_{k})\mathbf{v}_{k}. \tag{31}\] Using the 2-D dispersion relation we derive the velocity of magnons to be \[\begin{split}(v_{x},v_{y})&=\frac{\gamma\mu_{0}H_{E}}{2 \hbar k_{m}}\frac{\pi\psi_{k}}{\sqrt{(1-\psi_{k}^{2})+\eta^{2}+2\eta}}\times\\ &\left(\sin\frac{k_{x}\pi}{2k_{m}}\cos\frac{k_{y}\pi}{2k_{m}},\, \cos\frac{k_{x}\pi}{2k_{m}}\sin\frac{k_{y}\pi}{2k_{m}}\right),\end{split} \tag{32}\] with which the integral can be performed as \[\int(\mathbf{\nabla}\mathbf{T}\cdot\mathbf{v}_{k})\mathbf{v}_{k}\,d\mathbf{k}=\mathbf{\nabla}\mathbf{T}\, \frac{1}{2}\int(v_{x}^{2}+v_{y}^{2})\,d\mathbf{k}, \tag{33}\] where we have used the fact that \(\int v_{x}^{2}d\mathbf{k}=\int v_{y}^{2}\mathbf{k}\) due to the symmetry consideration. From the heat flux expression we extract the thermal conductivity and written into the form of 2-D integral, \[\begin{split}\kappa_{\text{mag}}&=\left(\frac{ \gamma\mu_{0}H_{E}}{8}\right)^{2}\frac{1}{k_{B}T^{2}}\int\frac{\tau_{k}\,e^{ \beta\hbar\omega_{q}}}{(e^{\beta\hbar\omega_{q}}-1)^{2}}\left[\left(\frac{ \gamma\mu_{0}H_{E}}{\hbar}\right)^{2}-\left(\omega_{q}^{2}-\omega_{0}^{2} \right)\right](1-\cos q_{x}\pi\cdot\cos q_{y}\pi)d\mathbf{q},\\ &=\left(\frac{\gamma\mu_{0}H_{E}}{8}\right)^{2}\frac{\pi}{k_{B} T^{2}}\int_{0}^{1}\frac{\tau_{k}\,\omega_{q}^{2}e^{\beta\hbar\omega_{q}}}{(e^{ \beta\hbar\omega_{q}}-1)^{2}}\frac{q\sin^{2}q\pi}{\sin^{2}q\pi/2+\eta^{2}+2 \eta}dq,\end{split} \tag{34}\] where the second equality is reached by the homogeneous lattice assumption that the integral can be simplified into the 1-D and \(\hbar\omega_{q}=\gamma\mu_{0}H_{E}\sqrt{\sin^{2}q\pi/2+\eta^{2}+2\eta}\). Showing in Fig. 2(b) we notice again the difference between complete 2-D integral and the simplified one is small enough for the case of \(\kappa_{\text{mag}}\), and in the following we shall use the 1-D integral of \(q\) for the thermal observables. At low temperatures the heat capacity and thermal conductivity share the same growing curve due to the fact that \(\kappa\approx Cvl=Cv^{2}\tau\) and \(v,\tau\) are almost constant for small \(T\). At large enough temperatures the exchange fields \(H_{E}(T)\) decays in step with the decreasing of \(M(T)\) which leads to the softening of magnons and after phase transition there is no existence of magnons. Thus we see the drop of \(C_{\text{mag}}\) and \(\kappa_{\text{mag}}\) for \(T>T_{N}\). Additionally the particle's lifetime (or its inverse \(\tau^{-1}=\eta\) the relaxation rate) plays an important role in their transport properties. In general, the relaxation rate for various particles, either bosons or fermions, comes from several origins [24] that \(\eta=\eta_{\text{bd}}+\eta_{\text{pt}}+\eta_{\text{nlnsc}}\), with \(\eta_{\text{bd}}\) is the boundary deflection by material edges, \(\eta_{\text{pt}}\) is the scattering with the point defects, and \(\eta_{\text{nlnsc}}\) stands for the non-linear scattering among particles themselves. Usually \(\eta_{\text{bd}}+\eta_{\text{pt}}=\eta_{0}\equiv\tau_{0}^{-1}\) is a constant which does not depend on wavevector \(k\) and temperature \(T\). The non-linear scattering has several origins for different particles but it is generally proportional to \(T\) for the 3-particle scattering and \(T^{2}\) for the 4-particle scattering process [25; 26]. 
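Before specializing the lifetimes, the simplified 1-D form of Eqs. 29 and 30 can be evaluated numerically. The sketch below keeps the exchange and anisotropy fields constant (i.e. it ignores the temperature renormalization of \(H_{E}\) and \(H_{A}\) discussed above), assumes a \(g\mu_{B}\mu_{0}H\) convention with \(g=2\) for the magnon energy scale, and uses a placeholder in-plane lattice constant; it is an illustration of the integral, not the fitted calculation.

```python
import numpy as np
from scipy.integrate import quad

hbar = 1.054571817e-34       # J s
k_B = 1.380649e-23           # J / K
mu_B = 9.2740100783e-24      # J / T

# Illustrative parameters (effective fields as in Sec. IV; lattice constant is a placeholder).
B_E, B_A = 69.0, 138.0       # mu_0 H_E and mu_0 H_A in Tesla
eta = B_A / B_E
E_E = 2.0 * mu_B * B_E       # magnon energy scale, assuming a g*mu_B*B convention with g = 2
a_lat = 6.0e-10              # in-plane lattice constant, m (placeholder)
k_m = 2.0 * np.sqrt(np.pi) / a_lat   # zone boundary from Eq. 28: k_m * a / 2 = sqrt(pi)

def magnon_energy(q):
    """Simplified isotropic AFM dispersion of Eq. 29 with q = k / k_m in [0, 1]."""
    return E_E * np.sqrt(np.sin(np.pi * q / 2.0)**2 + eta**2 + 2.0 * eta)

def c_mag(T):
    """Magnon specific heat per unit area from the 1-D integral of Eq. 30 (J K^-1 m^-2)."""
    def integrand(q):
        E = magnon_energy(q)
        x = E / (k_B * T)
        return q * (E / hbar)**2 * np.exp(x) / np.expm1(x)**2
    val, _ = quad(integrand, 0.0, 1.0)
    return hbar**2 * k_m**2 / (2.0 * np.pi * k_B * T**2) * val

c_50K = c_mag(50.0)          # e.g. the magnon specific heat at 50 K
```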
Therefore \(\eta_{k}=\eta_{0}(1+b_{k}T+c_{k}T^{2})\) and the coefficients can be calculated by studying the detailed process. However, in this work of membranes setup both the phonon and magnon's lifetime are limited by the defect and boundary scattering [27]. Therefore we shall ignore the non-linear scattering between quasi-particles and claim the lifetime \(\tau=\tau_{0}\) is a constant which does not depends on the wave vector nor the temperature. ### Specific heat due to the break of spin coherence around phase transition As the environment temperature close to the phase transition regime the magnetic specific heat is dominated by energy absorption for the breaking of spin coherence and due to the nature of second order phase transition the anomaly of \(C_{M}\) near \(T_{N}\) should be expected [23]. The derivation for anomaly of \(C_{M}\) depends on the detailed lattice structure. In this chapter we focus on the material FePS\({}_{3}\) which is an Ising-type 2D antiferromagnet of the honeycomb (hexagon) lattice [6; 7; 8; 28]. According to the references [29; 30; 31], the partition function for honeycomb lattice reads \[\frac{1}{N}\log Z(T)=\log 2+\frac{1}{16\pi^{2}}\int_{0}^{2\pi}\int_{0}^{2\pi} \,d\theta_{1}d\theta_{2}\log\big{[}\cosh^{3}2K+1-\sinh^{2}2K\cdot P_{\mathbf{ \theta}}\big{]}, \tag{35}\] Figure 2: (a) The magnon’s specific heat and (b) thermal conductivity derived from the complete 2-D integral and the simplified 1-D integral respectively. Here we assumed the lifetime for magnon is approximately 1.8 ps and does not depends on the modes for simplicity. The results indicate the difference between these two integral strategy is small and we can use the simplified version for further calculations. where \(K=J^{\prime}/k_{B}T\equiv\beta J^{\prime}\) is the normalized temperature in which \(J^{\prime}\) is the effective coupling energy from the exchange Hamiltonian \(H=-2J\sum\mathbf{S}_{i}\cdot\mathbf{S}_{j}\equiv-J^{\prime}\sum\mathbf{\hat{S}}_{i}\cdot\bm {\hat{S}}_{j}\), thus \(J^{\prime}=2JS^{2}\)[30]. The integrand parameter is \(P_{\mathbf{\theta}}=\cos\theta_{1}+\cos\theta_{2}+\cos\left(\theta_{1}+\theta_{2}\right)\)[31]. The critical point for honeycomb lattice is reached as \(\sinh 2K_{c}=\sqrt{3}\) and the Neel temperature is \[T_{N}=\frac{2J^{\prime}}{k_{B}\log\left(2+\sqrt{3}\right)}. \tag{36}\] Thus one can derive the effective coupling energy \(J^{\prime}\) based on the measured Neel temperature. Following the procedures of differentiating \(E_{\text{Is}}=-\frac{d\log Z}{d\beta}\) and \(C_{\text{Is}}=\frac{dE_{\text{Is}}}{dI^{\star}}\), we have the specific heat due to the breaking of spin coherence reads \[\frac{1}{Nk_{B}}C_{\text{Is}}(T)=\frac{K^{2}}{16\pi^{2}}\int_{0 }^{2\pi}\int_{0}^{2\pi}d\theta_{1}d\theta_{2}\left\{\phantom{ should be extended to the one including the magnetic contribution \(\tilde{\alpha}=\alpha_{E}+\alpha_{M}\). The magnetic Gruneisen relation \(\alpha_{M}=\beta_{T}\rho\gamma_{M}C_{M}\) is almost similar to the elastic counterpart (\(\alpha_{E}=\beta_{T}\gamma_{E}\rho C_{V}\)) meaning the thermal and magnetic properties both originate from the variation of spin coherence and it is the magnetic Gruneisen parameter makes them a connection. 
Therefore the overall thermal expansion coefficient for the hybrid system can be written into the form \[\tilde{\alpha} =\beta_{T}\rho\gamma_{E}C_{E}+\beta_{T}\rho\gamma_{M}C_{M}=\beta _{T}\rho\left(\gamma_{E}C_{E}+\gamma_{M}C_{M}\right)\] \[=\beta_{T}\rho\tilde{\gamma}C_{V}, \tag{44}\] which maintains the Gruneisen relation formalism but with \(C_{V}=C_{E}+C_{M}\) is the total specific heat combining the elastic and magnetic ones and with the effective Gruneisen parameter defined as \[\tilde{\gamma}=\frac{\gamma_{E}C_{E}+\gamma_{M}C_{M}}{C_{E}+C_{M}}. \tag{45}\] Although the elastic and magnetic Gruneisen parameters are both almost independent of temperature [35; 37], the effective Gruneisen parameter usually presents a peak at phase transition \(T_{N}\) (as shown in Fig. 4). This phenomenon originates from the anomaly of magnetic specific heat near phase transition rendering the \(\tilde{\gamma}\approx\gamma_{E}\) for \(T\) far away from \(T_{N}\) and \(\tilde{\gamma}\approx\gamma_{M}\) for the \(T\) close to \(T_{N}\). Usually the \(\gamma_{M}\) is several times larger than the elastic \(\gamma_{E}\) and it can be theoretically predicted based on detailed study of magnetic structure [36]. In this work, however, we shall simplify the analysis by assuming a phenomenological factor \(\nu=\gamma_{M}\big{/}\gamma_{E}\) which can be further determined by fitting the theoretical prediction of the thermal observables such as the \(\tilde{\alpha}\) and \(Q^{-1}\) to the measured values. In this way the part of thermal expansion mediated by magnetostriction can be effectively absorbed into the non-magnetic formalism simply by replacing \(\alpha_{E}\) with \(\tilde{\alpha}\). Together, the specific heat and thermal conductivity in the elastic and thermal dynamics equation (Eq. 11 and Eq. 13) should also be replaced by the total specific heat \(C_{V}=C_{E}+C_{M}\) and total thermal conductivity \(\kappa=\kappa_{E}+\kappa_{M}\)[32]. The overall damping coefficient \(Q^{-1}\) for the elastic and magnetic hybrid plate has the form (Eq. 23) \[Q^{-1}=\frac{1+\sigma}{1-\sigma}\frac{Y\tilde{\alpha}^{2}T}{\rho C_{V}}\left( \frac{6}{\xi^{2}}-\frac{6}{\xi^{3}}\frac{\sinh\xi+\sin\xi}{\cosh\xi+\cos\xi} \right),\quad\xi=h\sqrt{\frac{\omega_{0}\rho C_{V}}{2\kappa}}, \tag{46}\] with the \(\tilde{\alpha}\), \(C_{V}\), and \(\kappa\) are thermal observables which can be measured and predicted based on the theory developed in this chapter. ## IV Model validation through the thermal observables measured for the 2D AFM material FePS3 Here we validate the theory developed in this chapter by calculating the linear thermal expansion coefficient \(\alpha_{L}\) and damping factor \(Q^{-1}\) of the Ising-type 2D antiferromagnetic material FePS\({}_{3}\) whose phase transition temperature is about \(T_{N}=114K\)[9]. In the published paper (Ref. [9]), Siskins and etc. have measured the vibration frequency of the base model \(f_{0}\) for the membrane-plate of FePS\({}_{3}\) in the setup of Fig. 1. According to Ref. [38] the resonance frequency of the round resonator in the membrane-plate regime can be approximated by \[f_{0}=\sqrt{f_{\rm membrane}^{2}+f_{\rm plate}^{2}}\,, \tag{47}\] in which the plate frequency is \(f_{\rm plate}=\omega_{0}\big{/}2\pi\) according to the Eq. 21 and the membranes fundamental frequency is \[f_{\rm membranes}=\frac{2.4048}{2\pi a}\sqrt{\frac{N}{\rho h}}, \tag{48}\] with \(N=N_{0}+Yh\epsilon_{r}^{\rm th}\big{/}(1-\sigma)\) is the in-plane tension along radial direction. 
\(N_{0}\) is the initial tension introduced by fabrication and can be further tuned by the Figure 4: Temperature dependence for the effective Grüneisen parameter \(\tilde{\gamma}\) derived from Eq. 45. The elastic parameter is calculated to be to be \(\gamma_{E}=1.798\)[9] and the ratio is chosen to be \(\gamma_{M}/\gamma_{E}=\nu=4\). The \(\tilde{\gamma}\) start from \(\gamma_{E}\) because the \(C_{M}\approx 0\) for small temperatures. external gate voltage \(V_{G}\). The second part comes from the thermal expansion of the membranes which becomes a solely factor for the temperature dependency of the resonance frequency \(f_{0}\) if we assume the plate frequency \(f_{\rm plate}\) is independent to the environmental temperature because the Young's modulus \(Y\) and Poisson coefficient \(\sigma\) are almost independent to a small range of \(T\) varying from \(0\) to \(200\,K\) in the Siskins' experiment. The thermal strain is relate to the linear expansion coefficient of the resonator and silicon substrate by the relation \(d\epsilon_{r}^{\rm th}/dT=-(\alpha_{L}-\alpha_{\rm si})\). As a consequence, by measuring the temperature dependency of \(f_{0}(T)\) one can derive the thermal expansion coefficient of FePS\({}_{3}\) such that \[\alpha_{L}=\alpha_{\rm si}-\left(\frac{2\pi a}{2.4048}\right)^{2}\frac{\rho(1 -\sigma)}{Y}\frac{d(f_{0}^{2})}{dT}. \tag{49}\] The experimental measurement are presented in Fig. 5 and one indeed observes the \(\alpha_{L}\) anomaly around the phase transition. From the theoretical point of view, the linear expansion coefficient is one-third of the volume expansion coefficient developed in the previous section of the hybrid system, namely \(\alpha_{L}=\tilde{\alpha}/3\) based on Eq. 44. In order to derive the theoretical prediction of \(\alpha_{L}\), one needs to calculate the specific heat of the elastic and magnetic parts. Firstly, for the magnetic specific of the Ising origin (Eq. 37), the effective coupling energy \(J^{\prime}\) is derived from the measured Neel temperature \(T_{N}=114K\) and according to Eq. 36 we have \(J^{\prime}=6.48\,{\rm meV}\). Therefore the nearest neighbour spin-to-spin coupling energy in the Hamiltonian \(H=-2J\sum\mathbf{S}_{i}\cdot\mathbf{S}_{j}\) has the value \(J=J^{\prime}/2S^{2}=0.81\,{\rm meV}\) since the atomic spin for FePS\({}_{3}\) is \(S=2\). One sees that the derived \(J\) is very close to the first-nearest neighbour interaction (shown in Fig. 6) \(J_{1}=2J\approx 1.5\,{\rm meV}\) measured in the neutron scattering experiment [7; 8]. Using this derived \(J^{\prime}\) we plot the \(C_{\rm ls}\) in Fig. 3(b). Secondly, for the magnetic specific of the magnon origin (Eq. 30), it is necessary to figure out the exchange and anisotropy field on the sublattices in order to apply the dispersion relation in Eq. 29. However, according to the magnetostriction effect the inter-atomic interaction are modulated by the strain and varies with the membranes thickness [28]. Here we simplify the analysis by selecting the effective field as \(\mu_{0}H_{E}=69\,{\rm Tesla}\) and \(\mu_{0}H_{A}=138\,{\rm Tesla}\) in order to best fit the derived \(C_{M}(T)\) and \(\alpha_{L}(T)\) with the measured data. 
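Putting Eqs. 44 and 45 together with \(\alpha_{L}=\tilde{\alpha}/3\), the prediction pipeline can be sketched as below. The specific-heat inputs are placeholders standing in for the \(C_{E}(T)\) and \(C_{M}(T)\) curves of Fig. 7 at one chosen temperature, while \(\gamma_{E}\), \(\nu\), \(\beta_{T}\), and the molar density are the values quoted in this section; the molar specific heat must be paired with the molar density, as noted in the text.

```python
def effective_gruneisen(gamma_E, nu, C_E, C_M):
    """Effective Grueneisen parameter of Eq. 45, with gamma_M = nu * gamma_E."""
    return (gamma_E * C_E + nu * gamma_E * C_M) / (C_E + C_M)

def linear_expansion(beta_T, rho_molar, gamma_eff, C_V_molar):
    """alpha_L = alpha_tilde / 3, with alpha_tilde from the Grueneisen relation of Eq. 44."""
    return beta_T * rho_molar * gamma_eff * C_V_molar / 3.0

# Values quoted for FePS3 in this section; C_E and C_M are placeholders for the
# temperature-dependent curves of Fig. 7 at a single temperature.
gamma_E = 1.798
nu = 4.0
beta_T = 1.14e-11            # isothermal compressibility, Pa^-1
rho_molar = 18443.0          # molar density, mol m^-3
C_E, C_M = 60.0, 15.0        # illustrative molar specific heats, J mol^-1 K^-1

g_eff = effective_gruneisen(gamma_E, nu, C_E, C_M)
alpha_L = linear_expansion(beta_T, rho_molar, g_eff, C_E + C_M)
```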
According to the relation \(H_{E}=2|J|zS/\mu_{0}\gamma\), the effective interaction between sublattices then becomes \(J_{\rm sub}\approx-1\,{\rm meV}\) and anisotropy is \(A\approx 6\,{\rm meV}\) which are close to the measured data whose values take \(J_{2}=-0.04\,{\rm meV}\), \(J_{3}=-0.96\,{\rm meV}\), and \(A=3.78\,{\rm meV}\) as quoted from Refs. [7; 8]. The calculated \(C_{\rm mag}\) is shown in Fig. 2(a) and the total magnetic specific heat \(C_{M}\) is shown in Fig. 7(a). Obtained from first-principle calculation, the elastic parameters of FePS\({}_{3}\) are \(Y=103\,{\rm GPa}\), \(\sigma=0.304\), \(\rho=3375\,{\rm kg\,m^{-3}}\) and \(\bar{v}=3823\,{\rm m\,s^{-1}}\)[9]. According to the Ref. [10], the elastic specific heat for FePS\({}_{3}\) is a mixing of Debye and Einstein parts with the Debye temperature \(T_{\rm db}=236\,{\rm K}\) and Einstein temperature \(T_{\rm ei}=523\,{\rm K}\). The suggested combination ratio is \(0.54\) and the elastic specific heat \(C_{E}=(1-0.54)C_{\rm db}+0.54C_{\rm ei}\) can be derived from Eq. 27. In Fig. 7(b) we present the calculated \(C_{E}\) as doted blue line and the total specific heat \(C_{V}=C_{E}+C_{M}\) as solid red line. Our theoretical predictions fit well the measured data shown in Fig. 8 and therefore validate the choice of parameters and the applicability of our model. Furthermore, using these parameters we get the elastic Gruneisen factor \(\gamma_{E}=\frac{3}{2}\left(\frac{1+\sigma}{2-3\sigma}\right)=1.798\) and the compressibility \(\beta_{T}=1.14\times 10^{-11}\,{\rm Pa^{-1}}\). By assuming the ratio Figure 5: (a) Solid-blue line: the measured fundamental resonator frequency \(f_{0}\) as function of temperature for the FePS\({}_{3}\) plate-membranes. Solid-red line: the derivative of \(f_{0}^{2}\) to \(T\). (b) Derived linear thermal expansion coefficient of FePS\({}_{3}\) plate-membranes according to Eq. 49. Quoted from the Fig.2 in Ref. [9]. Figure 6: Schematic of the magnetic lattice for FePS\({}_{3}\) quoted from Ref. [8]. White dots mean the spin pointing out of the page and the black dots mean the spins pointing into the page. \(J_{1},J_{2},J_{3}\) are the first-, second-, and third nearest neighbour interaction for the Hamiltonian \(H=-\sum_{i,j}J_{i,j}\mathbf{S}_{i}\cdot\mathbf{S}_{j}\)[7]. The magnon dispersion relation with the effective exchange field is calculated based on the sub-lattice structure indicated by the red and blue rhombus. Total spin of magnetic Fe atom is \(S=2\) and the coordination number for sublattice is \(z=2\). \(\nu=\gamma_{M}/\gamma_{E}=4\) and applying the derived specific heats we calculate and plot the effective Gruneisen parameter \(\tilde{\gamma}\) as function of temperature in Fig. 4. It is then straight-forwards to derive the overall linear expansion coefficient for the hybrid system \(\alpha_{L}=\tilde{\alpha}/3\) based on equation 44. Bear in mind that if one uses molar specific heat from the Fig. 7(b), the density should also chosen to be the molar density which is \(\rho=18443\,\mathrm{mol}\,\mathrm{m}^{-3}\) for FePS\({}_{3}\). Showing in Fig. 9 the theoretical prediction for \(\alpha_{L}\) fits well the measured data which consolidates the scheme of merging the magnetoelastic coupling into the non-magnetic equation of motions for the hybrid system. In order to calculate and plot the damping coefficient \(Q^{-1}\) according to Eq. 
46, one still needs to know the temperature dependence of the thermal conductivity \(\kappa\), especially in hybrid materials whose thermal conduction has several different origins. For FePS\({}_{3}\) we have \(\kappa=\kappa_{\mathrm{ph}}+\kappa_{\mathrm{mag}}\), and we can ignore the scattering between phonons and magnons because the magnon energies in an antiferromagnet are usually in the THz range while the phonon energies of interest are of order several GHz, which means the coupling between these two quasi-particles is small. As stated in the previous section, the particle lifetime is limited by boundary scattering and can be treated as a constant \(\tau=\tau_{0}\). The \(\kappa_{\mathrm{mag}}\) can be derived according to Eq. 34 together with the material constants and the fitting parameter \(\tau_{0,\mathrm{mag}}\approx 3.8\,\mathrm{ps}\)[39]. As for the phonon contribution, we simplify the analysis by utilizing the Debye averaged sound velocity and apply the fitting parameter \(\tau_{0,\mathrm{ph}}\approx 0.8\,\mathrm{ps}\), such that \(\kappa_{\mathrm{ph}}=C_{E}\bar{v}^{2}\tau_{0,\mathrm{ph}}\). The total thermal conductivity is plotted in Fig. 10(a), and we see it is much smaller than the measured value for the bulk FePS\({}_{3}\) compound, which has \(\kappa\approx 1\,\mathrm{W}/\mathrm{mK}\) at room temperature [5]. This is due to the membrane geometry, whose thickness of only \(h=45\,\mathrm{nm}\) limits the mobility of phonons and thus yields a small thermal conductivity. The transverse thermal time constant \(\tau_{z}=h^{2}\rho C_{V}/\pi\kappa\), which measures the time for establishing temperature equilibrium across the plate, is also plotted, and it is close to the measurement of Šiškins et al. With the parameter \(\xi=\pi\sqrt{f_{0}\,\tau_{z}}\) and based on the previously derived expansion coefficient \(\tilde{\alpha}\) and total specific heat \(C_{V}\), we derive the damping coefficient \(Q^{-1}\) and show it in Fig. 11. The agreement between the theoretical prediction and the experimental data is good, and the drop of thermal transfer after the phase transition can be ascribed to the depletion of magnons as thermal carriers.

Figure 7: (a) Magnetic specific heat \(C_{M}=C_{\mathrm{Is}}+C_{\mathrm{mag}}\) is the sum of the 2D Ising statistics and the magnon contribution. (b) Solid red: total specific heat \(C_{V}=C_{E}+C_{M}\) of FePS\({}_{3}\). It shows an anomaly around the phase transition because of the divergence of the magnetic \(C_{M}\). Dotted blue: the elastic specific heat \(C_{E}=(1-0.54)C_{\mathrm{db}}+0.54C_{\mathrm{ei}}\) according to Ref. [10]. We point out that there are 5 mol of atoms per 1 mol of molecules for the FePS\({}_{3}\) compound.

Figure 8: Measured specific heat of FePS\({}_{3}\) quoted from Takano's paper [10]. (a) The experimental data and Takano's prediction for \(C_{M}\). In his calculation, the magnetic specific heat instantly decays to zero, which does not fit the measurements, whereas our curve fits better. (b) The experimental data for the total specific heat. Note that here the temperature ranges from 0 to 300 K while in our plot the temperature stops at 200 K.

Figure 9: Solid red: theoretically predicted linear expansion coefficient \(\alpha_{L}=\tilde{\alpha}/3\) based on Eq. 44 with the derived specific heat from Fig. 7 and the effective Grüneisen parameter \(\tilde{\gamma}\) from Fig. 4. Solid blue: experimentally derived \(\alpha_{L}\) from Eq. 49(b).

## V Summary and outlook In conclusion, we studied the magnetoelastic effect on the thermal transfer within a thin AFM plate for a wide range of temperatures across the magnetic phase transition. Specifically, we developed a theory merging the exchange magnetoelastic interaction into the thermoelastic free energy and further predicted the temperature dependence of observables such as the specific heat \(C_{V}\), the linear expansion coefficient \(\tilde{\alpha}\), and the damping factor \(Q^{-1}\) for the quasi-2D Ising AFM material FePS\({}_{3}\). Compared to the experimentally measured data, our theoretical predictions agree very well, especially for the specific heat and linear expansion coefficient. As for the transport-related property, the theoretical plot of \(Q^{-1}(T)\) shows an overall trend consistent with the measured curve, but it still has room for improvement. This is because in this work we have simplified the magnon and phonon velocities \(\mathbf{v}_{k}\) to be homogeneous and utilized an isotropic thermal conductivity for the analysis. For a quasi-2D material these assumptions may not be sufficient, and one can improve these transport properties by studying the detailed lattice structure [8]. It may also be helpful to look for a double-peak effect [40] in \(\kappa(T)\), which could explain the secondary surge of \(Q^{-1}\) for \(T>T_{N}\). However, our theoretical treatment builds a general scheme to study the thermal observables of the magnetic-elastic-thermal integrated system. The key is generalizing the Grüneisen relation by incorporating the various contributions and arriving at an effective Grüneisen coefficient \(\tilde{\gamma}\) (Eq. 45). This quantity essentially describes the variation of the internal energy with respect to the volume change, and its temperature dependence represents the changing _weight_ of each component in the internal energy of the hybrid system. Therefore the scheme developed here can be extended to include other contributors, such as electrons in spintronic and spin-caloritronic devices.
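As a quick numerical cross-check of the material parameters quoted above, the following short Python sketch (ours, not part of the original analysis; the molar mass of FePS\({}_{3}\) is our own input, and \(\beta_{T}\) is assumed to follow the standard isotropic-elasticity relation \(\beta_{T}=3(1-2\sigma)/Y\)) reproduces the elastic Grüneisen factor, the compressibility and the molar density, and defines the transverse thermal time constant \(\tau_{z}\) as a function of the temperature-dependent \(C_{V}\) and \(\kappa\).

```python
import math

# Elastic parameters of FePS3 quoted in the text (Ref. [9]).
Y     = 103e9     # Young's modulus [Pa]
sigma = 0.304     # Poisson ratio
rho   = 3375.0    # mass density [kg m^-3]
M_mol = 0.183     # molar mass of FePS3 [kg mol^-1] (Fe + P + 3 S), our own input

# Elastic Grueneisen factor, as quoted in the text.
gamma_E = 1.5 * (1 + sigma) / (2 - 3 * sigma)

# Isothermal compressibility from isotropic elasticity, beta_T = 1/K = 3(1 - 2 sigma)/Y.
beta_T = 3 * (1 - 2 * sigma) / Y

# Molar density to be used when molar specific heats enter alpha_L.
rho_molar = rho / M_mol

def tau_z(h, C_V, kappa, rho=rho):
    """Transverse thermal time constant tau_z = h^2 rho C_V / (pi kappa).
    C_V is the specific heat per unit mass; both C_V and kappa are temperature dependent."""
    return h**2 * rho * C_V / (math.pi * kappa)

print(f"gamma_E   = {gamma_E:.3f}")             # ~1.798
print(f"beta_T    = {beta_T:.3e} Pa^-1")        # ~1.14e-11
print(f"rho_molar = {rho_molar:.0f} mol m^-3")  # ~18443
```

The three printed values agree with the numbers quoted in the text, which supports the internal consistency of the chosen parameters.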
```
We develop a theoretical scheme for incorporating the magnetoelastic effect into the thermoelastic dynamics of thin plates of 2D antiferromagnetic materials. As a result, the elastic Gr\"uneisen relation is extended to an effective version that includes the magnetic contribution to the variation of the internal energy with volume. Based on the specific heats and thermal conductivities of elastic and magnetic origin, we predict the effective Gr\"uneisen parameter, the thermal expansion coefficient, and the damping coefficient over a wide temperature range across the phase transition. The analytical model is validated by applying it to the case of FePS3 thin-flake resonators, and the theoretical predictions agree well with the reported experimental data.
```
2303.00008
Stellar halo substructure generated by bar resonances
Using data from the Gaia satellite's Radial Velocity Spectrometer Data Release 3 (RVS, DR3), we find a new and robust feature in the phase space distribution of halo stars. It is a prominent ridge at constant energy and with angular momentum $L_z>0$. We run test particle simulations of a stellar halo-like distribution of particles in a realistic Milky Way potential with a rotating bar. We observe similar structures generated in the simulations from the trapping of particles in resonances with the bar, particularly at the corotation resonance. Many of the orbits trapped at the resonances are halo-like, with large vertical excursions from the disc. The location of the observed structure in energy space is consistent with a bar pattern speed in the range $\Omega_\mathrm{b}\approx35-40$ km s$^{-1}$ kpc$^{-1}$. Overall, the effect of the resonances is to give the inner stellar halo a mild, net spin in the direction of the bar's rotation. As the distribution of the angular momentum becomes asymmetric, a population of stars with positive mean $L_z$ and low vertical action is created. The variation of the average rotational velocity of the simulated stellar halo with radius is similar to the behaviour of metal-poor stars in data from the APOGEE survey. Though the effects of bar resonances have long been known in the Galactic disc, this is strong evidence that the bar can drive changes even in the diffuse and extended stellar halo through its resonances.
Adam M. Dillamore, Vasily Belokurov, N. Wyn Evans, Elliot Y. Davies
2023-02-28T19:00:00
http://arxiv.org/abs/2303.00008v2
# A stellar halo walks into a bar: creation of substructure from a smooth distribution function ###### Abstract Using data from the _Gaia_ satellite's Radial Velocity Spectrometer Data Release 3 (RVS, DR3), we find a new and robust feature in the phase space distribution of halo stars. It is a prominent ridge at energies \(E\approx-1.4\times 10^{5}\) km\({}^{2}\) s\({}^{-2}\) and with angular momentum \(L_{z}>0\). We run test particle simulations of a stellar halo-like distribution of particles in a realistic Milky Way potential with a rotating bar. We observe similar structures generated in the simulations from the trapping of particles in resonances with the bar, particularly at the corotation resonance. Many of the orbits trapped at the resonances are halo-like, with large vertical excursions from the disc. The location of the observed structure in energy space is consistent with a bar pattern speed in the range \(\Omega_{\rm b}\approx 35-40\) km s\({}^{-1}\) kpc\({}^{-1}\). Overall, the effect of the resonances is to give the inner stellar halo a mild, net spin in the direction of the bar's rotation. As the distribution of the angular momentum becomes asymmetric, a population of stars with positive mean \(L_{z}\) and low vertical action is created. The variation of the average rotational velocity of the simulated stellar halo with radius is similar to the behaviour of metal-poor stars in data from the APOGEE survey. Though the effects of bar resonances have long been known in the Galactic disc, this is strong evidence that the bar can drive changes even in the diffuse and extended stellar halo through its resonances. keywords: Galaxy: kinematics and dynamics - Galaxy: halo - Galaxy: structure ## 1 Introduction The first person to entertain the idea that the Milky Way is a barred galaxy seems to have been de Vaucouleurs (1964), based on kinematic evidence of deviations from circular motions in the 21-cm line profiles near the Galactic centre. Curiously, though, de Vaucouleurs (1964) has the orientation of the bar wrong, with the near side at negative Galactic longitudes, rather than positive. The first clear evidence that the Milky Way is barred with the near side at positive longitudes seems to have come from infrared photometry. Both integrated 2.4 \(\mu\)m emission (Blitz and Spergel, 1991) and the distribution of IRAS Miras (Whitelock, 1992) clearly showed the inner Galaxy is brighter at positive longitudes. Work on the gas kinematics (Binney et al., 1991), the starcounts of red clump giants (Stanek et al., 1994) and especially the infrared photometric maps obtained by the COBE satellite (Weiland et al., 1994) added to the growing evidence of a barred Milky Way, which became a consensus by the mid 1990s. Many of the parameters of the bar in the Milky Way have been constrained in recent years. The pattern speed of the bar \(\Omega_{\rm b}\) controls its length. The stellar orbits that support the bar cannot exist much beyond corotation (Contopoulos, 1980). Older studies matching the gas flows suggested a fast and short bar with \(\Omega_{\rm b}\) in the range 50 to 60 kms\({}^{-1}\) kpc\({}^{-1}\) (Fux, 1999; Bissantz et al., 2003). As the quality and quantity of the gas data have improved, more recent works have determined lower values, consistent with a slower and longer bar (Sormani et al., 2015; Li et al., 2022). The pattern speed of the bar is now thought to lie in the range 35 to 40 kms\({}^{-1}\) kpc\({}^{-1}\).
This is consistent with results from stellar kinematics - for example, use of the projected continuity equation or Tremaine and Weinberg (1984) method by Sanders et al. (2019), or modelling of multiple bulge stellar populations using spectroscopic data (Portail et al., 2017). The bar angle - that is the angle between the line joining the Sun and Galactic Centre and the major axis of the bar - is more uncertain. Studies reaching to longitudes \(|\ell|>10^{\circ}\) often find bar angles \(\approx 45^{\circ}\)(e.g., Gonzalez-Fernandez et al., 2012). In studies confined to the inner parts, bar angles of \(20^{\circ}\)-\(35^{\circ}\) are typical (e.g., Stanek et al., 1997; Wegg et al., 2015; Simion et al., 2017). It has been suggested that the Galaxy contains two bars, with the central bar not aligned with the long bar, though it is unclear whether such a configuration is stable. The change in the orientation on moving outwards is therefore not well understood, though there are surely additional structures in the inner Galaxy aside from the main bar. Resonant perturbation theory (Lynden-Bell and Kalnajs, 1972; Lynden-Bell, 1973) was introduced into galactic dynamics to understand the trapping of stellar orbits by bar-like perturbations. For nearly circular orbits in a disc, the effects of the bar are most significant at the corotation and Lindblad resonances (e.g., Shu, 1991). Kalnajs (1991) first suggested that this effect may be responsible for creation of the Hyades and Sirius streams in the disc. The closed orbits inside and outside the outer Lindblad Resonance are elongated perpendicular and parallel to the bar in this picture. Subsequently, Dehnen (1998) used _HIPPARCOS_ data to demonstrate the existence of abundant substructure or moving groups in the disc, which he ascribed partly to resonance effects. The _Gaia_ data releases have revealed abundant ridges, undulations and streams in velocity or action space, sculpted by the Galactic bar (e.g., Fragkoudi et al., 2019; Khoperskov et al., 2020; Trick et al., 2021; Trick, 2022; Wheeler et al., 2022). The idea that the Galactic bar can create substructure in the disc is well-established. The disc orbits are nearly circular and confined largely to the Galactic plane, so it is easy to see how a rotating bisymmetric disturbance can couple to the orbits. By contrast, in the halo, stars are moving on eccentric orbits and can reach heights of many kiloparsecs above the Galactic plane. While less well studied, numerical experiments suggest that these stellar orbits may still be susceptible to trapping by bar resonances (Moreno et al., 2015). In addition, the effects of bars on the dark halo has been the subject of much debate over the last decades (e.g., Ceverino and Klypin, 2007; Debattista and Sellwood, 1998, 2000; Weinberg and Katz, 2002). The slowing of bars by dynamical friction of the dark matter halo can change the central parts from cusps to cores. This induces profound changes in the orbits of the dark matter particles in the halo. Recent data from the third data release (DR3) of _Gaia_(Gaia Collaboration et al., 2016, 2022) has revealed previously undetected substructure in the stellar halo of the Milky Way. By plotting Galactocentric radial velocity \(v_{r}\) against Galactocentric radius \(r\), Belokurov et al. (2022) showed that there are multiple 'chevron'-shaped over-densities in this radial phase space. 
These bear a close resemblance to structures which result from the phase mixing of debris from a merging satellite on an eccentric or radial orbit (Fillmore and Goldreich, 1984; Dong-Paez et al., 2022; Davies et al., 2023). Belokurov et al. (2022) showed that they are present at both low and high metallicity, and are still visible when only stars with [Fe/H] \(>-0.7\) are included. However, stars originating from the Milky Way's accreted satellites have lower metallicity than this. The _Gaia_ Sausage-Enceladus (Belokurov et al., 2018; Helmi et al., 2018) was one of the most massive mergers in the Milky Way's history, and its comparatively high metallicity debris dominates the stellar halo in the solar neighbourhood Naidu et al. (2020). However, only a very small proportion of its stars have [Fe/H] \(>-0.7\)(Naidu et al., 2020; Feuillet et al., 2021). This challenges the assumption that the radial phase space structures are due to accreted material. There are already indications that bar-driven features in the stellar halo are possible. For example, motivated by the shortness of the Ophiuchus Stream, Hattori et al. (2016) used numerical simulations to show that its properties may have been influenced by its interaction with the bar, despite the Stream stars lying at heights of \(\approx 5\) kpc above the Galactic plane. Schuster et al. (2019) examined the halo moving groups G18-39 and G21-22 as possible resonant structures generated by the bar - albeit with a pattern speed 45-55 kms\({}^{-1}\)kpc\({}^{-1}\) which is rather high nowadays. These mildly retrograde moving groups were discovered in Silva et al. (2012), who suggested that they may be debris from the unusual and retrograde globular cluster \(\omega\) Centauri. Myeong et al. (2018) identified the resonance generating the Hercules Stream in the solar neighbourhood as present even in stars with metallicities as low as [Fe/H] \(\approx\) -2.9 and so normally associated with thick disc and halo. Very recently, the interaction of a rotating bar with radial phase space crowns has been studied by Davies et al. (2023). They showed that with realistic values of the pattern speed, the bar is capable of blurring and destroying much of the phase space structure resulting from a satellite merger. If a bar can destroy, then it is natural to ask whether it can also create substructure. In this work, we examine the creation of substructure from a smooth stellar halo via resonances. We identify a prominent ridge in energy and angular momentum space (a proxy for action space) and show that such a feature is a natural outcome of bar-driven resonances in the stellar halo. The prominence and narrowness of this ridge provides a novel method of measuring the pattern speed of the bar. The paper is arranged as follows. In Section 2, we summarize the dynamics of orbits in rotating potentials and introduce the principal bar resonances. We use data from _Gaia_ DR3 in Section 3 to reveal structures in energy-angular momentum and radial phase space. In Section 4, we describe our simulations of the stellar halo and bar, and compare them to the data. Finally we present our conclusions in Section 5. ## 2 Dynamics in a rotating potential Consider a steady non-axisymmetric potential rotating with pattern speed (angular frequency) \(\Omega_{\rm b}\) about the \(z\)-axis. 
Although the energy \(E\) and \(z\) component of angular momentum \(L_{z}\) of a particle can vary, a linear combination of the two known as the Jacobi integral is conserved (Binney and Tremaine, 2008). This is defined as \[H_{\rm J}=E-\Omega_{\rm b}L_{z}. \tag{1}\] The Jacobi integral is the energy in the rotating frame. As the potential is steady in this frame, it follows from time-invariance that the Jacobi integral is conserved. Hence in the \(E\) versus \(L_{z}\) plane, stars are constrained to move along straight lines of gradient \(\Omega_{\rm b}\), provided the pattern speed remains steady. In galaxies, most orbits are regular and not chaotic. Action-angle coordinates are a set of canonical coordinates useful in such nearly integrable systems (e.g. Arnold, 1978). Each star is described by three actions \(J_{i}\) which are constant and three angles \(\theta_{i}\) which increase linearly with frequencies \(\Omega_{i}\). As an example, a slightly eccentric and inclined orbit in a weakly non-axisymmetric potential can be approximated as motion on an epicycle around a guiding centre. This centre follows a circular orbit in the \(z=0\) plane with azimuthal frequency \(\Omega_{\phi}\), set by the rotation curve of the potential. The particle oscillates in the (cylindrical polar) \(R\)-direction with the epicyclic frequency \(\Omega_{R}\) and in the \(z\)-direction with the vertical frequency \(\Omega_{z}\). Action-angle coordinates can be computed for general orbits in axisymmetric galactic potentials using modern stellar dynamical packages like Agama (Vasiliev, 2019). The locations of the principal resonances in space are in general hard to establish. Only for orbits confined to the disc plane are matters reasonably straightforward. In the frame corotating with the bar at frequency equal to the pattern speed \(\Omega_{\rm b}\), the mean azimuthal frequency of a star is \(\Omega_{\phi}-\Omega_{\rm b}\). We define the ratio of frequencies in this frame \(r_{\Omega}\equiv(\Omega_{\phi}-\Omega_{\rm b})/\Omega_{R}\). An orbit is resonant with the bar if \(r_{\Omega}=\pm n/m\), where \(n\) and \(m\) are integers. The most important of these are the corotation resonance \((\Omega_{\phi}=\Omega_{\rm b}\), \(r_{\Omega}=0)\) and the Lindblad resonances \((r_{\Omega}=\pm 1/m)\) (Binney and Tremaine, 2008). For orbits that explore the stellar halo, the location of the resonances can change with height above or below the Galactic plane. Why are the resonant orbits so important? Suppose a bar-like disturbance rotating at angular frequency \(\Omega_{\rm b}\) is applied to the disc. On each traverse, the resonant stars meet the crests and troughs of the perturbation potential at the same spots in their orbits and this causes secular changes in the orbital elements (e.g. Lynden-Bell, 1973; Lynden-Bell and Kalnajs, 1972; Collett et al., 1997; Molloy et al., 2015). The non-resonant stars feel only periodic fluctuations that average to zero in the long term. As the strength of the bar's perturbation increases, stars near the locus of exact resonance are captured into libration around the parent periodic orbit. So, the neighbourhoods of the resonances are the regions of a galaxy where a bar can produce long-lived changes in the stellar populations. ## 3 Data ### _Gaia_ DR3 RVS sample We use the sample of stars from _Gaia_ DR3 (Gaia Collaboration et al., 2016, 2022b) described by Belokurov et al.
(2022) which includes line-of-sight velocity measurements from the Radial Velocity Spectrograph (RVS, Katz et al., 2022). We use distances calculated by Bailer-Jones et al. (2021). This sample has selection cuts on parallax error (\(\varpi/\sigma_{\varpi}>10\)) and heliocentric distance (\(D<15\) kpc). All sources within \(1.5^{\circ}\) of known globular clusters closer than 5 kpc have been removed (Belokurov et al., 2022). This leaves \(\sim 25\) million sources out of the \(\sim 34\) million with line-of-sight velocity measurements. For Section 3.3 we further limit our sample to stars with metallicity ([Fe/H]) values calculated by Belokurov et al. (2022) (see that work for a detailed description). This provides us with a final sample of 22.5 million sources, all with 6D phase space and [Fe/H] measurements. We transform the positions and velocities into a Galactocentric left-handed coordinate system in which the Sun's position and velocity are \((x_{\odot},y_{\odot},z_{\odot})=(8,0,0)\) kpc and \((v_{\odot x},v_{\odot y},v_{\odot z})=(-9.3,251.5,8.59)\) km s\({}^{-1}\) (as reported by Gaia Collaboration et al., 2022a). ### Energy-angular momentum space The plot of energy \(E\) versus \(z\)-component of the angular momentum \(L_{z}\) is a good approximation to action space \((J_{r},J_{\phi})\). This is because energy is a proxy for radial action \(J_{r}\), whilst \(L_{z}\) is the azimuthal action \(J_{\phi}\) in nearly axisymmetric potentials. To calculate estimates of the energies \(E\) of the stars in our sample, we use a modified version of the potential \(\tt{MWPotential2014}\) described in Bovy (2015). Similarly to Belokurov et al. (2022), we set the virial mass of the Navarro-Frenk-White dark matter halo to \(1\times 10^{12}M_{\odot}\), the virial radius to 260 kpc and the concentration to 18.8, which gives a circular velocity at the Sun's radius of about 235 km s\({}^{-1}\) (c.f. Bland-Hawthorn & Gerhard, 2016). The distribution of \(E\) versus \(L_{z}\) is plotted in the top panel of Fig. 1. The left-handed coordinate system means that stars in the disc have \(L_{z}>0\). The second panel shows the same distribution after unsharp masking has been applied in \(E\) to reveal details of the distribution more clearly. Alternating overdensities (black) and underdensities (white) are seen most prominently at \(L_{z}>0\) with roughly constant energy. To correct for selection effects arising from the RVS and parallax error cuts, we apply a weighting to the data points. This is based on the ratio of our sample size to the total sample size of stars with photo-geometric distance measurements, as a function of \(r\) - see Appendix A for full details. The weighted distributions are shown in the third and fourth panels of Fig. 1. The weighting has mostly removed the two lowest energy ridges, indicating that these resulted from selection effects. However, the highest energy ridge at \(E\approx-1.4\times 10^{5}\) km\({}^{2}\) s\({}^{-2}\) remains, suggesting that this is a genuine feature of the distribution. It can also be seen in the dissection of the stellar halo in action space by Myeong et al. (2018), albeit with a much smaller dataset. Figure 1: **Top panel:** distribution of energy \(E\) vs angular momentum \(L_{z}\), where prograde orbits have \(L_{z}>0\). The red dashed lines mark the angular momentum cut used in Fig. 2. The highly populated disc can be seen on the right-hand side of the distribution. 
**2nd panel:** as above, but with unsharp filtering applied; a background smoothed with a Gaussian kernel in \(E\) has been subtracted off. We use 100 pixels along the \(E\)-axis, and the Gaussian kernel has a standard deviation of 2 pixels. Black (white) pixels denote overdensities (underdensities). This reveals several ridges at roughly constant energy with mostly \(L_{z}>0\). **Bottom two panels:** As above, but all data points are weighted based on the number of stars in the RVS sample as a function of radius. This largely removes two of the ridges (at \(E\approx-1.5\) and \(-1.6\times 10^{5}\) km\({}^{2}\) s\({}^{-2}\)), which were due to selection effects in the sample. The highest energy ridge (at \(E\approx-1.4\times 10^{5}\) km\({}^{2}\) s\({}^{-2}\)) remains. ### Radial phase space To isolate the region of \(E\)-\(L_{z}\) space where the ridge is most prominent, we apply an angular momentum cut to include only stars with \(0.25\times 10^{3}<L_{z}\) [kpc km s\({}^{-1}\)] \(<1.0\times 10^{3}\). We have checked that the results are not sensitive to the exact choice of \(L_{z}\) cut. We calculate the Galactocentric spherical radii and radial velocities \((r,v_{r})\) for each particle. The \((r,v_{r})\) distributions are shown in Fig. 2 for stars with [Fe/H] \(<-0.7\) (top row) and [Fe/H] \(>-0.7\) (bottom row). While the low metallicity sample contains a mix of accreted and _in situ_ stars (Belokurov & Kravtsov, 2022; Conroy et al., 2022; Myeong et al., 2022), [Fe/H] \(>-0.7\) is dominated by _in situ_ stars (Belokurov et al., 2022). We use the weighting described above in the left-hand column, and in the right-hand column we normalise the histograms so that each column of pixels has the same total count. The chevron-shaped ridges reported by Belokurov et al. (2022) are visible, particularly for [Fe/H] \(<-0.7\). For both metallicity cuts the most prominent feature is a roughly triangular overdensity reaching a maximum radius of \(\sim 7\) kpc. We find that its position and size is highly dependent on the choice of \(L_{z}\) cut used, and it coincides with the sloping low energy overdensity at the edge of the \(E\)-\(L_{z}\) distribution in Fig. 1. This therefore consists mostly of stars on close to circular orbits in the disc. Its shape depends on the \(L_{z}\) cut because the energy of disc orbits is \(L_{z}\)-dependent. More interestingly, there is a second fainter chevron that reaches \(\approx 10.5\) kpc, corresponding to 'Chevron 1' described by Belokurov et al. (2022). This is a sharp edge in the distribution, outside of which the density of stars is lower. This is visible for both metallicity cuts, albeit more faintly at [Fe/H] \(>\) -0.7 (also see the bottom-left panel of Fig. 8 in Belokurov et al., 2022). We have found that the position of this chevron is independent of the exact \(L_{z}\) cut used. This suggests that the orbital energies of its stars have little \(L_{z}\) dependence (by contrast to the disc). We therefore postulate that it is a manifestation of the horizontal overdensity in Fig. 1 at \(E\approx-1.4\times 10^{5}\) km\({}^{2}\) s\({}^{-2}\). To test this we plot the phase space positions of particles with energies very close to this in the middle column of Fig. 2. These particles form a chevron with its outer edge closely lining up with the edge of Chevron 1. This strongly suggests that the energy ridge and chevron are indeed different representations of the same entity. 
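The unsharp masking applied in Fig. 1 amounts to a column-wise background subtraction in energy. A minimal sketch of this step is given below (our own illustration, not the authors' code; the mock data and the number of \(L_{z}\) bins are placeholder choices, while the 100 energy pixels and the 2-pixel Gaussian kernel follow the figure caption).

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def unsharp_mask_ELz(E, Lz, weights=None, n_E=100, n_Lz=100, sigma_pix=2.0):
    """Column-wise unsharp masking of the E-Lz plane.

    A 2D histogram is built, then a background smoothed with a Gaussian kernel
    along the energy axis is subtracted, leaving over- and underdensities.
    """
    H, E_edges, Lz_edges = np.histogram2d(E, Lz, bins=(n_E, n_Lz), weights=weights)
    background = gaussian_filter1d(H, sigma=sigma_pix, axis=0)  # smooth in E only
    return H - background, E_edges, Lz_edges

# Example with mock data (stand-ins for the RVS sample, not real measurements):
rng = np.random.default_rng(0)
E  = rng.normal(-1.5e5, 2e4, 100_000)     # km^2 s^-2
Lz = rng.normal(0.0, 1.0e3, 100_000)      # kpc km s^-1
residual, E_edges, Lz_edges = unsharp_mask_ELz(E, Lz)
```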
In summary, this is a feature at fixed energy, composed of particles on mostly prograde orbits with a range of \(L_{z}\). The radial phase space shows that they have apocentres of up to \(\approx 10\) kpc. Belokurov et al. (2022) suggested that this and other chevrons arose from the phase-mixing of the debris from a massive satellite which merged with the Milky Way, possibly the GSE. However, the presence of Chevron 1 at high [Fe/H] challenges this hypothesis, since very few GSE stars have such high metallicities (Belokurov et al., 2022).

Figure 2: Radial phase space of the DR3 RVS sample, with an \(L_{z}\) cut applied. The top (bottom) row shows low (high) metallicity stars, with the threshold set at [Fe/H] \(=-0.7\). The high metallicity sample is dominated by _in situ_ stars. In the middle column, we plot the positions of particles with energy \(E\approx-1.4\times 10^{5}\) km\({}^{2}\) s\({}^{-2}\), which corresponds to the horizontal ridge in \(E\)-\(L_{z}\) space (Fig. 1). In the right-hand column, we column-normalise the histograms (i.e. each column of pixels has the same total count). In the top row, several chevron-shaped overdensities are visible, the most prominent of which (peaking at \(r\approx 10.5\) kpc) is also visible at high metallicity. As shown by the middle column, this corresponds to the energy of the horizontal overdensity in Fig. 1. The dark region (at \(r\lesssim 7\) kpc) is due to the disc, and moves depending on the \(L_{z}\) cut.

## 4 Simulations We now investigate whether such a feature with the above properties can instead be produced by a rotating bar via resonant trapping. We run test particle simulations of a stellar halo population of particles using the galactic dynamics package Agama (Vasiliev, 2019). We initialise this population from a steady-state distribution function in the axisymmetric Milky Way potential described in Section 3.2, before introducing a rotating bar-like perturbation which smoothly increases in strength. ### Barred potential To represent a bar, we use the Dehnen (2000) model generalised to 3 dimensions by Monari et al. (2016). This consists of a quadrupole perturbation to the potential, written in cylindrical coordinates \((R,\phi,z)\) in the form \[\Phi_{\rm b}=A_{\rm b}(t)\cos(2[\phi-\phi_{\rm b}(t)])\ \left(\frac{R}{r} \right)^{2}\times\begin{cases}(r/R_{\rm b})^{3}-2&r<R_{\rm b}\\ -(R_{\rm b}/r)^{3}&r\geq R_{\rm b},\end{cases} \tag{2}\] where \(r^{2}=R^{2}+z^{2}\). We increase the amplitude of the perturbation \(A_{\rm b}(t)\) smoothly between times \(t_{0}\) and \(t_{1}\) according to \[A_{\rm b}(t) =A_{f}\left(\frac{3}{16}\xi^{5}-\frac{5}{8}\xi^{3}+\frac{15}{16} \xi+\frac{1}{2}\right), \tag{3}\] \[\xi \equiv\frac{t-t_{0}}{t_{1}-t_{0}}-1. \tag{4}\] For \(t<t_{0}\) and \(t>t_{1}\), \(A_{\rm b}\) equals 0 and \(A_{f}\) respectively. Following Dehnen (2000), we parameterise the bar perturbation amplitude with the bar strength \[\alpha\equiv 3\frac{A_{f}}{v_{0}^{2}}\left(\frac{R_{\rm b}}{R_{0}}\right)^{3}, \tag{5}\] where \(v_{0}\) is the circular velocity at \(R_{0}\equiv 8\) kpc. We rotate this potential at constant pattern speed \(\Omega_{\rm b}\). We compare simulations with pattern speeds of \(\Omega_{\rm b}=\{35,40,45\}\) km s\({}^{-1}\) kpc\({}^{-1}\), but focus on the \(\Omega_{\rm b}=40\) km s\({}^{-1}\) kpc\({}^{-1}\) model for most of this section. This is close to many recent estimates of the bar's pattern speed (e.g. Wang et al., 2013; Portail et al., 2017; Sanders et al., 2019; Binney, 2020).
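A minimal, self-contained sketch of the bar perturbation of Eqs. (2)-(5) is given below (our own illustration rather than the authors' Agama set-up; consistent units are assumed for \(\Omega_{\rm b}\) and \(t\), and the default \(t_{0}\), \(t_{1}\) and \(R_{\rm b}\) follow the simulation set-up described in the next paragraph).

```python
import numpy as np

def A_b(t, A_f, t0=2.0, t1=4.0):
    """Smooth growth of the bar amplitude, Eqs. (3)-(4); t, t0, t1 in the same time unit."""
    if t <= t0:
        return 0.0
    if t >= t1:
        return A_f
    xi = (t - t0) / (t1 - t0) - 1.0
    return A_f * (3/16 * xi**5 - 5/8 * xi**3 + 15/16 * xi + 0.5)

def Phi_bar(R, phi, z, t, A_f, Omega_b, R_b=2.0, phi_b0=0.0):
    """Quadrupole bar perturbation of Eq. (2) in cylindrical coordinates.

    The bar angle advances as phi_b(t) = phi_b0 + Omega_b * t, so Omega_b must be
    expressed in radians per unit of t. Assumes r = sqrt(R^2 + z^2) > 0.
    """
    r = np.sqrt(R**2 + z**2)
    phi_b = phi_b0 + Omega_b * t
    radial = np.where(r < R_b, (r / R_b)**3 - 2.0, -(R_b / r)**3)
    return A_b(t, A_f) * np.cos(2.0 * (phi - phi_b)) * (R / r)**2 * radial

def A_f_from_alpha(alpha, v0, R_b=2.0, R0=8.0):
    """Invert Eq. (5): amplitude A_f corresponding to the dimensionless bar strength alpha."""
    return alpha * v0**2 / 3.0 * (R0 / R_b)**3
```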
We set \(\alpha=0.01\) following Dehnen (2000), \(t_{0}=2\) Gyr and \(t_{1}=4\) Gyr, so the bar grows over a period of 2 Gyr. We run the simulation between \(t=0\) and \(t_{f}=8\) Gyr. This is roughly the duration of the period of the Milky Way's history since the GSE merger, over which no major satellite has been accreted (e.g. Belokurov et al., 2018; Kruijssen et al., 2020). We set the bar radius to \(R_{\rm b}=2\) kpc. While this is likely smaller than the Milky Way's bar at present (e.g. Hammersley et al., 1994; Wegg et al., 2015; Lucey et al., 2022), it may be more realistic for the period over which it was formed (Rosas-Guevara et al., 2022). ### Distribution function We generate a halo-like population of stars using the double power law distribution function implemented in Agama by Vasiliev (2019), which is a generalisation of ones introduced by Posti et al. (2015) and Williams and Evans (2015): \[f(\mathbf{J}) =\frac{M}{(2\pi J_{0})^{3}}\left[1+\left(\frac{J_{0}}{h(\mathbf{J})} \right)^{q}\right]^{\Gamma/q}\left[1+\left(\frac{h(\mathbf{J})}{J_{0}}\right)^{q }\right]^{-B/q}, \tag{6}\] \[h(\mathbf{J}) \equiv J_{r}+|J_{\phi}|+J_{z}. \tag{7}\] Here, \(J_{0}\) is the characteristic total action \(|\mathbf{J}|\) of orbits near the break radius of the double-power law profile (Posti et al., 2015). We wish to choose values of \(J_{0}\) and the power law indices \(\Gamma\) and \(B\) to roughly reproduce the density profile of the Milky Way's stellar halo. Multiple studies have found that the density profile can be fitted by a double power law with inner slope \(\gamma\sim 2.5\) and outer slope \(\beta\sim 4.5\), with a break radius at \(r\approx 25\) kpc (e.g. Watkins et al., 2009; Deason et al., 2011; Faccioli et al., 2014; Pila-Diez et al., 2015). We find that choosing \(J_{0}=3500\) kpc km s\({}^{-1}\) and \((\Gamma,B)=(2.4,4.6)\) roughly recovers the values for the Milky Way. We set the steepness of the break to \(\eta=10\), which results in the density power law slope changing between radii of \(\sim 10-40\) kpc.

Figure 3: **Top two panels:** Distributions of \(E\) and \(L_{z}\) of the star particles at the beginning and end of the simulation. The colour map uses the same logarithmic normalisation for each panel, and as in Fig. 1 the red dashed lines mark the \(L_{z}\) cut. After the growth of the bar, ridge-like overdensities at certain energies emerge from the initially smooth distribution. These are at \(L_{z}>0\) (i.e. orbiting in the same direction as the bar’s rotation). **Bottom panel:** Difference between the \(t=0\) and later distributions. Black (white) pixels mark where the density has increased (decreased). This emphasises that particles have moved away from \(L_{z}\approx 0\) and \(L_{z}<0\) into the ridges at \(L_{z}>0\). The red diagonal streaks mark the paths taken by a selection of particles, and have gradients close to the pattern speed.
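For concreteness, a direct evaluation of the distribution function of Eqs. (6)-(7) with the quoted parameters can be sketched as follows (our own illustration; in the paper this DF is sampled with Agama rather than evaluated by hand, and we assume that the break steepness \(\eta=10\) plays the role of the exponent \(q\) in Eq. (6)).

```python
import numpy as np

def f_halo(J_r, J_phi, J_z, M=1.0, J0=3500.0, Gamma=2.4, B=4.6, q=10.0):
    """Double power-law action-space DF of Eqs. (6)-(7).

    Actions are in kpc km/s. The overall mass normalisation M and the mapping of
    the break steepness onto q are assumptions of this sketch. Assumes h > 0.
    """
    h = J_r + np.abs(J_phi) + J_z                      # Eq. (7)
    norm = M / (2.0 * np.pi * J0)**3
    inner = (1.0 + (J0 / h)**q)**(Gamma / q)           # steepens the inner profile
    outer = (1.0 + (h / J0)**q)**(-B / q)              # truncates the outer profile
    return norm * inner * outer

# Example: DF value for a mildly eccentric halo-like orbit
print(f_halo(J_r=500.0, J_phi=300.0, J_z=200.0))
```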
These stripes resemble those seen in the data in Fig. 1, as they occupy a similar region of \(E\)-\(L_{z}\) space. Since the initial distribution function is symmetric in \(L_{z}\), the fact that these simulated ridges occur only at \(L_{z}>0\) implies that they must be related to the rotation of the bar. The bottom panel of Fig. 3 shows how the star particles move through \(E\)-\(L_{z}\) space. The red streaks mark the paths taken by a selection of stars, where the motion is towards the top-right. This results in a depletion of stars at \(L_{z}\leq 0\) (white pixels) and an enhancement in their density in the ridges at \(L_{z}>0\). This behaviour strongly resembles that expected for a steady but rotating non-axisymmetric potential, where stars are constrained to move along lines of gradient \(\Omega_{\rm b}\) in the \(E\) vs \(L_{z}\) plane due to conservation of the Jacobi integral. We see similar behaviour here despite the fact that the bar perturbation is time-dependent. We have checked that the gradient of these streaks is indeed approximately \(\Omega_{\rm b}\). We show the radial phase space of the simulations in Fig. 4, where we have applied the same radius and angular momentum cuts as on the data in Fig. 2 (\(4<r\) [kpc] \(<12,0.25\times 10^{3}<L_{z}\) [kpc km s\({}^{-1}\)] \(<1.0\times 10^{3}\)). The top and bottom rows show the start and end of the simulation, at \(t=0\) and \(8\) Gyr respectively. The three columns use different normalisations. The initial distribution function has a smooth radial phase space, with no substructure visible. However, by the end of the simulation a wedge-shaped overdense region emerges, with its tip at \(r\approx 10\) kpc. This bears a strong resemblance to the clearest feature visible in the data in Fig. 2. To assess how the change in \(L_{z}\) depends on the amplitude of vertical motion of the stars, we calculate the vertical actions \(J_{z}\) using the Stäckel fudge approximation (Binney, 2012) in an axisymmetrised potential (without the bar). The distributions of \(J_{z}\) vs \(L_{z}\) are shown in Fig. 5 at the beginning and end of the simulation. The bottom panel shows the ratio of the two distributions (i.e. the difference in log density). As expected from Fig. 3, at the end of the simulation the average \(L_{z}\) is positive, as stars move away from around \(L_{z}=0\). Fig. 5 shows that this has \(J_{z}\) dependence, with particles at low \(J_{z}\) experiencing greater increases in \(L_{z}\) on average. This is likely to be in part due to the fact that stars at low \(J_{z}\) tend to have lower energy and hence interact more strongly with the bar.

Figure 4: Radial phase space of the simulated stars, with the same \(L_{z}\) cut as used on the data in Fig. 2. The top and bottom rows show the start and end of the simulation respectively. From left to right, the columns are unnormalised, normalised by column, and normalised by both row and column. The distribution is initially smooth, with no structure visible. However, at the end of the simulation after the growth of the bar, a wedge-shaped overdensity with its tip at \(\sim 10\) kpc is visible. This is revealed more clearly in the right-hand column, and bears a close resemblance to the structure seen in the data in Fig. 2.

Fig. 5 demonstrates that the bar is capable of creating a population of stars with low \(J_{z}\) and positive mean \(L_{z}\) from an initially non-rotating distribution. We note that this bears some similarity to the population of ultra metal-poor stars discussed by Sestito et al.
(2019). A significant fraction of these stars are on prograde orbits with \(J_{z}\lesssim 100\) kpc km s\({}^{-1}\), comparable to the distribution in the middle panel of Fig. 5. Our results suggest that the Milky Way's bar could be responsible for creating the observed distribution of these stars and thus alleviate the need to invoke an early emergence of a prehistoric metal-poor disc (see e.g. Sestito et al., 2019; Di Matteo et al., 2020; Martini et al., 2022). We compute the spherically averaged median azimuthal velocity \(v_{\phi}\) as a function of cylindrical radius \(R\). This is plotted in Fig. 6 at three snapshots. The distribution changes from having no net spin to having a positive median \(v_{\phi}\), with larger values at small radii. This is expected from Figs. 3 and 5, which show that by \(t=8\) Gyr there is an excess of star particles at \(L_{z}>0\), particularly at low energies. In Fig. 6 we also plot data from data release 17 (DR17) of the APOGEE survey (Majewski et al., 2017; Abdurro'uf et al., 2022) for comparison. We follow closely the steps outlined in Belokurov & Kravtsov (2022) to select unproblematic red giant stars with small chemical and kinematic uncertainties. In addition, we apply [Al/Fe]\(<0.1\) and [Fe/H]\(<-1.2\) cuts to limit our sample to predominantly accreted halo stars. These APOGEE low-metallicity data points qualitatively follow a similar trend to the simulation, with a decreasing median \(v_{\phi}\) with increasing \(R\).

Figure 5: Vertical action \(J_{z}\) vs angular momentum \(L_{z}\) at the beginning and end of the simulation. The bottom panel shows the ratio of the two histograms, where black (white) pixels show where the density has increased (decreased). This indicates that stars at low \(J_{z}\) are more likely to be pushed to high \(L_{z}\) by the bar.

Figure 6: Median azimuthal velocity \(\langle v_{\phi}\rangle\) as a function of cylindrical radius \(R\) in each snapshot of the simulations (coloured lines) compared to observed values for low [Fe/H] stars from APOGEE DR17 (black points). While initially the halo has no net spin, after the bar grows the mean azimuthal velocity is positive, particularly inside 8 kpc. The observed values show a similar trend, albeit with a higher median \(v_{\phi}\).

Figure 7: **Top panel:** initial (\(x\)-axis) and final (\(y\)-axis) azimuthal frequencies \(\Omega_{\phi}\) of simulated stars. These are calculated by integrating orbits from the initial and final phase space positions in a potential of constant pattern speed. The red points show stars with final energy close to \(E=-1.5\times 10^{5}\) km\({}^{2}\) s\({}^{-2}\), the approximate energy of the most prominent overdensity in Fig. 3. **Bottom panel:** as above, but instead showing radial frequencies \(\Omega_{R}\). In both cases horizontal stripes indicate that the final frequencies are highly clustered about certain values. These include \(\Omega_{\phi}=40\) km s\({}^{-1}\) kpc\({}^{-1}\), the bar pattern speed. Most of the red points lie close to this frequency, showing that this corresponds to the prominent energy overdensity in Fig. 3.

We calculate the instantaneous radial and azimuthal frequencies of the star particles as follows. We integrate their orbits from their phase space positions in each snapshot, using the corresponding potential with constant pattern speed. The frequencies are calculated from the radial and azimuthal motion averaged over several orbital periods. In Fig.
7, we plot the final versus initial values of \(\Omega_{\phi}\) and \(\Omega_{R}\) for the simulated star particles with the \(L_{z}\) and \(r\) cuts used for Fig. 4. While many particles' frequencies do not change between the start and end of the simulation, narrow horizontal stripes in the panels of Fig. 7 indicate that the final frequency distributions are tightly clustered around particular values. This includes \(\Omega_{\phi}=40\,\mathrm{km\,s^{-1}\,kpc^{-1}}\), which is equal to the pattern speed of the bar. Hence these particles are trapped around the corotation resonance. The red points show particles with energies close to that of the most prominent overdensity in Fig. 3. Many of these are located at or very close to the corotation resonance, strongly suggesting that the peaks in the energy and frequency distributions are due to bar resonances. To check this we compute the ratio of azimuthal and radial frequencies in the frame corotating with the bar, \(r_{\Omega}\equiv(\Omega_{\phi}-\Omega_{\mathrm{b}})/\Omega_{R}\). The distributions of this quantity are plotted in Fig. 8 at three different times. This confirms that the frequencies are clustered around resonances with the bar. While the initial distribution is relatively smooth, the intermediate and final distributions have multiple sharp peaks located at particular values of \(r_{\Omega}\). This includes the corotation resonance \(r_{\Omega}=0\) and the outer Lindblad resonance \(r_{\Omega}=-0.5\), with the corotation peak being particularly strong. The distributions at 4 and 8 Gyr are very similar, showing that the trapping takes place during the growth phase of the bar (\(2<t/\mathrm{Gyr}<4\)) and that the resonant peaks are largely unchanged while the bar is steadily rotating with constant strength. We show a selection of orbits near three of the resonances in Fig. 9. The top row shows the orbits in the plane of the disc in the frame corotating with the bar (whose major axis lies along the \(X\)-axis). Stars near the corotation resonance (right-hand column) do not cross the bar's major axis, instead orbiting on one side only. The bottom row shows the vertical vs radial behaviour of the orbits. While some resonant orbits remain close to the disc (\(Z=0\) plane), others have large vertical excursions, reaching over 5 kpc above and below the disc. Hence we see that stars on halo-like orbits can contribute to the resonant peaks seen in Fig. 8. Note that the blue orbit in the middle column also has a resonance between its \(R\) and \(Z\) motion. ### Comparison with data We wish to estimate the approximate pattern speed of the bar which can best reproduce the observed structures in Fig. 1. For this purpose, we run simulations with constant pattern speeds \(\Omega_{\mathrm{b}}=\{35,40,45\}\) km s\({}^{-1}\) kpc\({}^{-1}\) with the same background potential and initial distribution function. We calculate the energy \(E\) and angular momentum \(L_{z}\) of the particles, and apply unsharp masking to the \(E\)-\(L_{z}\) distributions similar to that used in Fig. 1. The filtered \(E\)-\(L_{z}\) distributions for the data and three simulations are shown in Fig. 10 with the approximate energies of the observed ridges marked by red dashed lines. As the pattern speed \(\Omega_{\mathrm{b}}\) is increased, the frequencies of the resonances increase. This results in a decrease of the energies of the corresponding overdensities in \(E\)-\(L_{z}\) space.
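The sense of this trend can be seen with a back-of-the-envelope estimate (ours, not the paper's): for a flat rotation curve with circular velocity \(v_{c}\approx 235\) km s\({}^{-1}\), used here as a stand-in for the adopted potential, the corotation radius is \(R_{\rm CR}=v_{c}/\Omega_{\rm b}\), so a larger pattern speed places corotation deeper in the potential and therefore at a lower orbital energy.

```python
import numpy as np

v_c = 235.0   # km/s, circular velocity of the adopted potential near the Sun
R0  = 8.0     # kpc

def corotation_energy(Omega_b, v_c=v_c, R0=R0):
    """Energy of a circular orbit at corotation for a flat rotation curve.

    A logarithmic potential Phi = v_c^2 ln(R/R0) is used as a stand-in for the
    paper's modified MWPotential2014, so only energy differences are meaningful.
    """
    R_cr = v_c / Omega_b                                 # kpc, Omega_b in km/s/kpc
    return 0.5 * v_c**2 + v_c**2 * np.log(R_cr / R0)     # km^2 s^-2, up to a constant

for Om in (35.0, 40.0, 45.0):
    dE = corotation_energy(Om) - corotation_energy(35.0)
    print(f"Omega_b = {Om:4.0f} km/s/kpc -> R_CR = {v_c/Om:4.2f} kpc, "
          f"E_CR - E_CR(35) = {dE:8.0f} km^2/s^2")
```

Only the energy differences between pattern speeds are meaningful in this sketch, since the zero-point of the logarithmic potential is arbitrary.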
Of the three simulations, the closest match is at a pattern speed of \(\Omega_{\mathrm{b}}=35\) km s\({}^{-1}\) kpc\({}^{-1}\), at which the energy of the corotation ridge is \(-1.4\times 10^{5}\) km\({}^{2}\) s\({}^{-2}\). The \(\Omega_{\mathrm{b}}=40\) km s\({}^{-1}\) kpc\({}^{-1}\) model is also a reasonably good fit, while at 45 km s\({}^{-1}\) kpc\({}^{-1}\) the energy of the corotation ridge is too low to match the data. We therefore conclude that pattern speeds in the range \(35-40\) km s\({}^{-1}\) kpc\({}^{-1}\) produce good matches between the energies of the observed and simulated ridges in \(E\)-\(L_{z}\) space. This is in agreement with most recent studies that constrain the pattern speed. Binney (2020) found that \(\Omega_{\rm b}\approx 36\) km s\({}^{-1}\) kpc\({}^{-1}\) was favoured, while Portail et al. (2017), Wang et al. (2013) and Sanders et al. (2019) preferred values in the range \(39-41\) km s\({}^{-1}\) kpc\({}^{-1}\).

Figure 8: Distributions of \(r_{\Omega}\), the ratio of azimuthal to radial frequencies in the frame rotating at the pattern speed \(\Omega_{\mathrm{b}}\), at three snapshots. The distributions at \(t=4\) and 8 Gyr have clear resonant peaks at \(r_{\Omega}=0\) (corresponding to corotation) and \(r_{\Omega}=-0.5\) (the outer Lindblad resonance). The \(r_{\Omega}=-0.25\) or ultraharmonic resonance is also marked, although the peak is weaker here.

Figure 9: Orbits close to each of the three resonances marked above, plotted in the frame corotating with the bar. The Galactic disc lies in the \(X-Y\) plane, with the bar’s major axis along the \(X\)-axis. The bottom row shows the height above the disc \(Z\) vs cylindrical radius \(R\). The two colours show two separate orbits at each resonance. These demonstrate that trapped orbits can be either disc-like (e.g. orange, middle column) or halo-like, with large vertical excursions (e.g. orange, left-hand column).

## 5 Conclusions Bars are important drivers of morphological changes in galaxies. The Milky Way is no exception. Its massive bar extends to Galactocentric distances of \(\approx 5\) kpc and contains \(\approx 30-40\) per cent of the total stellar mass in the Galaxy. This hefty structure has long been known to drive evolution in the disc through its resonances. These are the locations where a combination of the stars' natural frequencies is equal to the forcing frequency or pattern speed of the bar. At the resonances, secular changes in the orbits occur, causing long-term effects in the stellar populations. Data from the _Gaia_ satellite has reinvigorated the study of substructure in the disc. Many of the notches, ridges and ripples in phase space have been interpreted as the imprints of resonances with the bar (e.g., Khoperskov et al., 2020; Trick et al., 2021; Trick, 2022), though the effects of spirality driven by the bar are also possible (Hunt et al., 2018). The Hercules and Sirius streams in the solar neighbourhood were previously associated with the outer Lindblad resonance of the bar (Kalnajs, 1991; Dehnen, 1998). This requires the bar to have a high pattern speed. Nowadays, the corotation resonance of a slow bar with pattern speed \(\Omega_{\rm b}\approx 40\) km s\({}^{-1}\) kpc\({}^{-1}\) is seen as the more likely explanation (e.g. Monari et al., 2019). There have also been hints that the bar may affect both streams (Hattori et al., 2016) and stellar populations in the halo (e.g., Moreno et al., 2015; Myeong et al., 2018; Schuster et al., 2019).
In this paper, we have identified a distinctive feature in the phase space distribution of halo stars. It is a prominent ridge at energies \(E\approx-1.4\times 10^{5}\) km\({}^{2}\) s\({}^{-2}\) and with \(L_{z}>0\). This is apparent when data from the _Gaia_ Data Release 3 (DR3) Radial Velocity Spectrometer (RVS) sample is plotted in energy versus angular momentum (\(E\)-\(L_{z}\)) space. It was also seen by Myeong et al. (2018), albeit with a much smaller dataset. Further ridges are also present before weighting, but they are likely artefacts caused by the selection function. The prominent ridge corresponds to a chevron-shaped overdensity in radial velocity versus radius (\(v_{r}\)-\(r\)) space previously reported by Belokurov et al. (2022). It resembles those produced by phase mixing the debris of a merged satellite galaxy (e.g. Fillmore & Goldreich, 1984; Sanderson & Helmi, 2013; Dong-Paez et al., 2022; Belokurov et al., 2022). However, the structure persists at both high and low metallicity ([Fe/H] \(>-0.7\) and \(<-0.7\)), suggesting that the chevron is not composed of stars accreted from _Gaia_ Sausage-Enceladus or elsewhere. It is natural to seek a dynamical solution for its provenance, as stars of all metallicities are affected. To understand its origin, we run test particle simulations of a stellar halo population of particles under the influence of a rotating bar. After generating a steady-state distribution in an axisymmetric potential, we smoothly increase the bar strength over a period of 2 Gyr. The bar remains steady for the final 4 Gyr of its life. The resultant distribution of stars in \(E\)-\(L_{z}\) and \(v_{r}\)-\(r\) space shows features similar to the data. Particles move from around \(L_{z}\approx 0\) into ridges at \(L_{z}>0\), with both \(E\) and \(L_{z}\) increasing, so as to approximately conserve the Jacobi integral. When plotted in radial phase space, the ridges appear as chevrons. At the end of the simulation, the azimuthal and radial frequencies \(\Omega_{\phi}\) and \(\Omega_{R}\) are strongly clustered about certain values. These correspond to resonances with the bar, especially the corotation resonance and outer Lindblad resonance. It is the corotation resonance that is responsible for the most prominent ridge in \(E\)-\(L_{z}\) space. We plot the orbits of particles close to these resonances. Some are disc-like, but some are halo-like with large vertical excursions from the Galactic plane. The bar spins up the inner stellar halo. We show this by calculating the spherically averaged median azimuthal velocity \(v_{\phi}\) as a function of cylindrical radius \(R\) in our simulations. The average \(v_{\phi}\) at \(R\lesssim 10\) kpc is increased, so the simulated stellar halo gains a net spin. The observational data on the metal-poor star subsample from APOGEE shows qualitatively the same trend as the simulations, with the median \(v_{\phi}\) decreasing from \(\approx 70\) to \(10\) km s\({}^{-1}\) between Galactocentric radii \(R\) of 3 and 9 kpc. The distribution of the angular momentum becomes asymmetric. This asymmetry is more pronounced for stars with low \(J_{z}\). As a result, a population of stars on seemingly disc-like orbits is forged from the initially non-rotating and smooth halo distribution. These orbits are similar to those previously reported for metal-poor, very metal-poor and extremely metal-poor stars in studies advocating a disc-like state for the high-redshift Galaxy (see e.g. Sestito et al., 2019; Di Matteo et al., 2020; Mardini et al., 2022).

Figure 10: \(E\)-\(L_{z}\) distributions of the data (left-hand panel) and simulations with three different pattern speeds (right three panels). As in the bottom panel of Fig. 1, the data have been weighted and unsharp filtering has been applied to each distribution in energy. Black (white) pixels denote overdensities (underdensities). The red dashed line marks the approximate energy of the horizontal ridge in the data at \(L_{z}>0\). As the pattern speed is increased, the horizontal ridges in the simulations decrease in energy (and increase in frequency). The most prominent one (due to the corotation resonance) roughly aligns with the ridge in the data at a pattern speed of \(\Omega_{\rm b}\approx 35-40\) km s\({}^{-1}\) kpc\({}^{-1}\).

If, following our results, the necessity for a pre-historic disc is removed, a different picture emerges. Before spinning up into a coherently rotating disc at metallicities \(-1.5<\)[Fe/H]\(<-1\), the early Milky Way was instead a kinematically hot and messy place in line with the analysis by Belokurov & Kravtsov (2022). This pre-disc Milky Way population, dubbed _Aurora_, is strongly centrally concentrated and thus lies predominantly within the Solar radius and has a modest net spin (see Belokurov & Kravtsov, 2022). Similar conclusions as to the state of the high-redshift Galaxy are reached in recent observational (Conroy et al., 2022; Rix et al., 2022; Myeong et al., 2022) and theoretical (Gurvich et al., 2023; Hopkins et al., 2023) studies. We repeat our simulation with different bar pattern speeds in the range 35-45 km s\({}^{-1}\) kpc\({}^{-1}\). As the pattern speed of the bar is increased, the frequencies of the resonances increase. This decreases the energy of the horizontal ridges in \(E\) vs \(L_{z}\) space, as well as the radial extent of the corresponding chevron over-densities in \(v_{r}\) vs \(r\) space. We find that, with our choice of potential, a pattern speed of \(\Omega_{\rm b}\approx 35-40\) km s\({}^{-1}\) kpc\({}^{-1}\) is required to match the energy of the most prominent ridge in \(E\)-\(L_{z}\) space. This is consistent with most recent estimates of the Milky Way bar's pattern speed, most of which are \(39-41\) km s\({}^{-1}\) kpc\({}^{-1}\) (e.g. Wang et al., 2013; Portail et al., 2017; Sanders et al., 2019; Binney, 2020). While our results suggest that 35 km s\({}^{-1}\) kpc\({}^{-1}\) may be slightly favoured over 40 km s\({}^{-1}\) kpc\({}^{-1}\), a more detailed study is required to include consideration of uncertainties in the potential. There are a number of avenues for further exploration. First, this offers up a new tool to study the Galactic bar. Its formation epoch and its evolution will imprint themselves on the stellar halo as well as the disc. If the pattern speed of the bar is slowing because of dynamical friction exerted by the dark halo, this may induce detectable features in the ridges of the stellar halo populations. Secondly, the Lindblad and other resonances may also be the sites of depletions and enhancements in the action space of the stellar halo, though these have not yet been unambiguously associated with structures in the data. Thirdly, the bar may also create resonant features in the dark halo. This may be important if the substructures coincide with the solar neighbourhood, as this may enhance the signal expected in direct detection experiments in Earth-borne laboratories. We are actively pursuing these investigations at the moment and will report results shortly.
## Acknowledgements We are grateful to the Cambridge Streams group and Hans-Walter Rix for helpful discussions during this study. AMD and EYD thank the Science and Technology Facilities Council (STFC) for PhD studentships. This work has made use of data from the European Space Agency (ESA) mission _Gaia_ ([https://www.cosmos.esa.int/gaia](https://www.cosmos.esa.int/gaia)), processed by the _Gaia_ Data Processing and Analysis Consortium (DPAC, [https://www.cosmos.esa.int/web/gaia/dpac/consortium](https://www.cosmos.esa.int/web/gaia/dpac/consortium)). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the _Gaia_ Multilateral Agreement. This research made use of Astropy,1 a community-developed core Python package for Astronomy (Astropy Collaboration et al., 2013, 2018). This work was funded by UKRI grant 2604986. For the purpose of open access, the author has applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising. Footnote 1: [http://www.astropy.org](http://www.astropy.org) ## Data Availability This study uses publicly available _Gaia_ data.
Using data from the Gaia satellite's Radial Velocity Spectrometer Data Release 3 (RVS, DR3), we find a new and robust feature in the phase space distribution of halo stars. It is a ridge at constant energy and with angular momentum $L_z>0$. We run test particle simulations of a stellar halo-like distribution of particles in a realistic Milky Way potential with a rotating bar. In the simulations, similar structures are generated by the trapping of particles in resonances with the bar, particularly at the corotation resonance. Many of the orbits trapped at the resonances are halo-like, with large vertical excursions from the disc. The location of the observed structure in energy space is consistent with a bar pattern speed in the range $\Omega_\mathrm{b}\approx35-40$ km s$^{-1}$ kpc$^{-1}$. Overall,
2309.15338
Counterintuitive patterns on angles and distances between lattice points in high dimensional hypercubes
Let $\mathcal{S}$ be a finite set of integer points in $\mathbb{R}^d$, which we assume has many symmetries, and let $P\in\mathbb{R}^d$ be a fixed point. We calculate the distances from $P$ to the points in $\mathcal{S}$ and compare the results. In some of the most common cases, we find that they lead to unexpected conclusions if the dimension is sufficiently large. For example, if $\mathcal{S}$ is the set of vertices of a hypercube in $\mathbb{R}^d$ and $P$ is any point inside, then almost all triangles $PAB$ with $A,B\in\mathcal{S}$ are almost equilateral. Or, if $P$ is close to the center of the cube, then almost all triangles $PAB$ with $A\in \mathcal{S}$ and $B$ anywhere in the hypercube are almost right triangles.
Jack Anderson, Cristian Cobeli, Alexandru Zaharescu
2023-09-27T00:59:14
http://arxiv.org/abs/2309.15338v1
Counterintuitive patterns on angles and distances between lattice points in high dimensional hypercubes ###### Abstract. Let \(\mathcal{S}\) be a finite set of integer points in \(\mathbb{R}^{d}\), which we assume has many symmetries, and let \(P\in\mathbb{R}^{d}\) be a fixed point. We calculate the distances from \(P\) to the points in \(\mathcal{S}\) and compare the results. In some of the most common cases, we find that they lead to unexpected conclusions if the dimension is sufficiently large. For example, if \(\mathcal{S}\) is the set of vertices of a hypercube in \(\mathbb{R}^{d}\) and \(P\) is any point inside, then almost all triangles \(PAB\) with \(A,B\in\mathcal{S}\) are almost equilateral. Or, if \(P\) is close to the center of the cube, then almost all triangles \(PAB\) with \(A\in\mathcal{S}\) and \(B\) anywhere in the hypercube are almost right triangles. Key words and phrases:Hypercubes, lattice points, Euclidean distance 2020 Mathematics Subject Classification: 11B99; 11K99, 11P21, 51M20, 52Bxx ## 1. Introduction Recent developments in network communications [12, 17] or artificial intelligence [6] have shed new light on studies of graphs and special models based on sets explored in combinatorial geometry or related to lattice points in multidimensional spaces [1, 3, 7, 10, 11, 14, 15]. Our object in this article is to present a few results related to the fact that, in high dimensional hypercubes, a random pick of lattice points has essentially zero chance of finding points that are at an 'exceptional distance' from each other as the dimension goes to infinity. (Here, an _exceptional distance_ is any one that is different from the average.) Let \(\mathcal{S}\subset\mathbb{R}^{d}\), \(d\geq 1\), be a finite set and let \(\boldsymbol{a}=(a_{1},\ldots,a_{d})\in\mathbb{R}^{d}\). If we look from a distant point \(\boldsymbol{a}\) to the points in \(\mathcal{S}\), we find that they are all at about the same distance, and this common distance approaches a certain value as our point of view moves farther away from \(\mathcal{S}\). On the contrary, if our point of view is close to \(\mathcal{S}\), even in \(\mathcal{S}\) or in its convex envelope, we see a variety of distances ranging from zero to the diameter of \(\mathcal{S}\). But what we observe is largely influenced by the size of the space and its dimension. Our goal here is to highlight some counterintuitive phenomena, some of them related to those that have been studied for the set of lattice points visible from each other in a hypercube [3]. In order to illustrate two of these phenomena, let us note that if we randomly pick triangles with vertices at the lattice points of a cube that is large enough, the likelihood of encountering a significant number of special triangles is low. For instance, in Figure 1, we can see two types of selections, each with a distinct feature, in dimensions 2 and 3. The first type of choice requires the triangles to have a common vertex, while the second requires that two of the triangle's vertices be chosen randomly from the cube's vertices, with the third vertex remaining unrestricted. Then we may wonder: what are the odds of getting similar triangles in the first case or non-degenerate isosceles triangles in the second case? Certainly, the questions may appear uninteresting, as the odds are so small in both situations.
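As a rough illustration of how small these odds are in low dimensions, the following Monte Carlo sketch (our own, with arbitrary sample sizes) estimates, for the second type of selection, the probability that a triangle with two vertices at corners of \([0,N]^{d}\) and a free lattice point is exactly isosceles.

```python
import random

def isosceles_fraction(d, N=100, trials=100_000, seed=1):
    """Estimate the probability that a random triangle with two vertices at
    corners of [0, N]^d and one free lattice point is (exactly) isosceles.

    Squared distances are integers, so the equality test is exact. Degenerate
    (collinear) triangles are not filtered out here; parameters are illustrative.
    """
    rng = random.Random(seed)
    corner  = lambda: tuple(rng.choice((0, N)) for _ in range(d))
    lattice = lambda: tuple(rng.randint(0, N) for _ in range(d))
    sq = lambda p, q: sum((a - b) ** 2 for a, b in zip(p, q))
    hits = 0
    for _ in range(trials):
        A, B, P = corner(), corner(), lattice()
        if A == B:                      # skip coincident corner picks
            continue
        dAB, dAP, dBP = sq(A, B), sq(A, P), sq(B, P)
        if dAP == dBP or dAP == dAB or dBP == dAB:
            hits += 1
    return hits / trials

for d in (2, 3):
    print(d, isosceles_fraction(d))
```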
Furthermore, as the size of the cube and the dimension increase, the variety of these triangles increases immensely, and any attempt to find the special ones at random seems completely in vain. Despite appearances, the situation is exactly the opposite. Thus, Theorem 1 shows that, if the dimension of the hypercube becomes large enough, then almost all triangles that have two vertices at the corners of the hypercube and the third at a lattice point inside are almost isosceles. And on the same note, if both the size of the hypercube and the dimension become sufficiently large, then Theorem 3 shows that almost all triangles with vertices anywhere on the lattice points of the hypercube, which have a certain common vertex, not only are nearly isosceles but also have a particular shape, almost all of them being almost similar. To make things precise, let \(N\geq 1\) be an integer and let \(\mathcal{W}=\mathcal{W}(d,N)\) be the maximal hypercube of lattice points from \([0,N]^{d}\). Since we are interested both in the discrete case and in the limit process, the phenomenon is well covered if we choose \(\mathcal{S}\subseteq\mathcal{W}\). We measure the distance between points \(\boldsymbol{v}^{\prime},\boldsymbol{v}^{\prime\prime}\in\mathbb{R}^{d}\) with the Euclidean distance \[\mathfrak{d}(\boldsymbol{v}^{\prime},\boldsymbol{v}^{\prime\prime})=\big{(}(v_{1}^{\prime\prime}-v_{1}^{\prime})^{2}+\cdots+(v_{d}^{\prime\prime}-v_{d}^{\prime})^{2}\big{)}^{1/2}\] and, to compare with each other sizes from different dimensions, we use the _normalized distance_: \[\mathfrak{d}_{d}(\boldsymbol{v}^{\prime},\boldsymbol{v}^{\prime\prime})=\frac{1}{\sqrt{d}N}\big{(}(v_{1}^{\prime\prime}-v_{1}^{\prime})^{2}+\cdots+(v_{d}^{\prime\prime}-v_{d}^{\prime})^{2}\big{)}^{1/2}.\] Then the normalized distance between two opposite vertices, the farthest away points in \(\mathcal{W}\), is \(\mathfrak{d}_{d}\left((0,\ldots,0),(N,\ldots,N)\right)=1\). In direct contrast, besides '_the thickest_' hyperplane \(\mathcal{W}\), we also consider '_the thinnest_' one, that of dimension zero, the set of vertices of \([0,N]^{d}\), which we denote by \(\mathcal{V}=\mathcal{V}(d,N)\). For orientation in \(\mathcal{W}\) or around it, a useful reference for a point \(\boldsymbol{a}\) turns out to be the distance from \(\boldsymbol{a}\) to the center of the cube, \(\boldsymbol{c}\). That is why we denote \(r_{\boldsymbol{a}}:=\mathfrak{d}_{d}(\boldsymbol{a},\boldsymbol{c})\). From an arbitrary point \(\boldsymbol{a}\), in Sections 2 and 5, we find exact formulas for the average distances to points in \(\mathcal{V}\) or in \(\mathcal{W}\), respectively. Also, we calculate the second moments about these averages in both cases. These are the main tools that allow us to derive striking properties shared by most pairs or triples of points in the hypercube in high dimensions. We mention that similar procedures were used recently in different settings. For example, in a continuum case, in order to provide a framework for studying multifractal geometry, the authors of [2] and [16] study the average distance and the asymptotic behavior of higher moments of self-similar measures on self-similar subsets of \(\mathbb{R}\), and on graph-directed self-similar subsets of \(\mathbb{R}\). Corresponding characteristic properties of lattice points that are visible from each other were observed in [3]. Figure 1. Random triangles, in 2D and 3D, with vertices of integer coordinates in \([0,100]\). In each image, the triangles are chosen in such a way that they meet one of the following conditions: A. All triangles have a common vertex, initially chosen randomly but later fixed. B. All triangles have two vertices randomly chosen from the vertices of the cube, while the third vertex is free. 
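As a small sanity check on these definitions (an illustrative sketch, not part of the paper; the values of \(d\) and \(N\) are arbitrary), the normalized distance can be computed directly, confirming that two opposite vertices of \([0,N]^{d}\) are at normalized distance \(1\), while each vertex is at normalized distance \(1/2\) from the center.

```python
import math

def dist(v, w):
    """Euclidean distance between two points of R^d."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(v, w)))

def ndist(v, w, N):
    """Normalized distance d_d(v, w) = dist(v, w) / (sqrt(d) * N)."""
    return dist(v, w) / (math.sqrt(len(v)) * N)

d, N = 10, 7
origin, opposite, center = (0,) * d, (N,) * d, (N / 2,) * d
print(ndist(origin, opposite, N))   # 1.0 : two opposite vertices
print(ndist(origin, center, N))     # 0.5 : a vertex and the center
```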
Averages of relative distances from points in geometric figures were also the object of study in the articles [5, 8, 9, 12, 13]. To exemplify our results, regarding, say, the vertices of the hypercube, one may ask what the expected distance from them to a fixed arbitrary point \(\mathbf{a}\) is, and what the probability is that such a distance is close to the average. In Section 3, we show that, for any fixed point \(\mathbf{a}\in\mathcal{W}\), almost all vertices are at a normalized distance from \(\mathbf{a}\) that is close to \(\sqrt{1/4+r_{\mathbf{a}}^{2}}\), so long as the dimension \(d\) is sufficiently large. As a consequence, it follows that almost all triangles formed from \(\mathbf{a}\) and two vertices of the hypercube will be nearly isosceles, since the distances from \(\mathbf{a}\) to each of the two vertices will both be close to the same value. **Theorem 1**.: _For all \(\varepsilon>0\), there exists an integer \(d_{\varepsilon}\) such that, for all integers \(d\geq d_{\varepsilon}\), \(N\geq 1\), and any point \(\mathbf{a}\in\mathcal{W}\), the proportion of triangles \((\mathbf{a},\mathbf{v_{1}},\mathbf{v_{2}})\) such that_ \[|\mathfrak{d}_{d}(\mathbf{a},\mathbf{v_{1}})-\mathfrak{d}_{d}(\mathbf{a},\mathbf{v_{2}})|\leq\varepsilon,\] _where \(\mathbf{v_{1}},\mathbf{v_{2}}\in\mathcal{V}\), is greater than or equal to \(1-\varepsilon\)._ Another consequence arises from noticing that, for any vertex \(\mathbf{v}\in\mathcal{V}\), the square of the normalized distance from the center of the cube to \(\mathbf{v}\) is \(1/4\). As a result, for almost all vertices \(\mathbf{v}\), the square of the distance from \(\mathbf{a}\) to \(\mathbf{v}\) is almost the sum of the squares of the distances from \(\mathbf{c}\) to \(\mathbf{a}\) and from \(\mathbf{c}\) to \(\mathbf{v}\). Therefore, it is natural to ponder if \((\mathbf{a},\mathbf{c},\mathbf{v})\) may be close to a right triangle, and in fact this is the case so long as \(\mathbf{a}\) is not too near to \(\mathbf{c}\). **Theorem 2**.: _For all \(\varepsilon>0\), there exists an integer \(d_{\varepsilon}\), and a function \(f(d)\leq 1/2\), such that for all integers \(d\geq d_{\varepsilon}\), \(N\geq 1\), and any point \(\mathbf{a}\in\mathcal{W}\) with \(\mathfrak{d}_{d}(\mathbf{a},\mathbf{c})\geq f(d)\) (where \(\mathbf{c}\) is the center of the hypercube), the proportion of triangles \((\mathbf{a},\mathbf{c},\mathbf{v})\) with \(\mathbf{v}\in\mathcal{V}\) and whose angle \(\theta_{\mathbf{c}}(\mathbf{v})\) at \(\mathbf{c}\) satisfies_ \[|\cos\theta_{\mathbf{c}}(\mathbf{v})|\leq\varepsilon,\] _is greater than or equal to \(1-\varepsilon\)._ Precise estimates and effective versions of Theorems 1 and 2 are proved in Section 4. In the second part of our manuscript, starting with Section 5, we turn our focus to looking at distances from a fixed point \(\mathbf{a}\) to any integer point \(\mathbf{w}\) in the cube. 
We similarly find that almost all points \(\mathbf{w}\in\mathcal{W}\) are at a normalized distance from \(\mathbf{a}\) which is close to \(\sqrt{1/12+1/(6N)+\mathfrak{d}_{d}^{2}(\mathbf{a},\mathbf{c})}\), provided that the dimension \(d\) is sufficiently large. Furthermore, we will also show that almost all pairs of points in the cube are at a relative distance close to \(\sqrt{1/6+1/(3N)}\). As a consequence, we find that almost all triangles with one vertex at \(\mathbf{a}\) and the other two anywhere in \(\mathcal{W}\) are nearly identical. We summarise this fact in the following theorem, which, in explicit and effective form, we prove in Section 6. **Theorem 3**.: _For any \(\varepsilon>0\), there exist positive integers \(d_{\varepsilon}\), \(N_{\varepsilon}\), such that, for all integers \(d\geq d_{\varepsilon}\), \(N\geq N_{\varepsilon}\), and any point \(\mathbf{a}\in\mathcal{W}\), the proportion of triangles \((\mathbf{a},\mathbf{w}_{1},\mathbf{w}_{2})\), with \(\mathbf{w}_{1},\mathbf{w}_{2}\in\mathcal{W}\), in which_ \[\left|\mathfrak{d}_{d}(\mathbf{w}_{1},\mathbf{w}_{2})-\frac{1}{\sqrt{6}}\right|\leq\varepsilon,\text{ and }\left|\mathfrak{d}_{d}(\mathbf{a},\mathbf{w}_{j})-\sqrt{\frac{1}{12}+r_{\mathbf{a}}^{2}}\right|\leq\varepsilon,\text{ for }j=1,2\] _is greater than or equal to \(1-\varepsilon\). (Here, \(r_{\mathbf{a}}=\mathfrak{d}_{d}(\mathbf{a},\mathbf{c})\) denotes the normalized distance from \(\mathbf{a}\) to the center of \(\mathcal{W}\).)_ For a probabilistic description of some natural expectations in high dimensional hypercubes we refer the reader to [3, Section 8]. It is a quick approach to the subject, although, there, the discussion is done in a continuum and the positions of both the observer and the viewed point are variable, while in this paper, most of the time the observer has a fixed position. ## 2. Distances between any fixed point and the vertices of the hypercube For any \(\boldsymbol{a}=(a_{1},\ldots,a_{d})\in\mathcal{W}\), in the following we denote \(\|\boldsymbol{a}\|^{2}:=a_{1}^{2}+\cdots+a_{d}^{2}\) and \(\boldsymbol{|a|}:=a_{1}+\cdots+a_{d}\). Let \(\mathcal{V}\) denote the set of all vertices of \([0,N]^{d}\). This cube has \(2^{d}\) vertices and each of them has components equal to \(0\) or \(N\). Notice that if \(\mathcal{V}\) is seen as a subset of the set of lattice points \(\mathcal{W}\), then no two vertices in \(\mathcal{V}\) are visible from each other, since there are always other points of integer coordinates in \([0,N]^{d}\) that lie between them, provided that \(N\geq 2\). The set of points in \(\mathcal{W}\) that are visible from each other was the object of study in [3]. ### The average \(A_{\boldsymbol{a},\mathcal{V}}(d,N)\) Let \(\boldsymbol{a}=(a_{1},\ldots,a_{d})\in\mathbb{R}^{d}\) be fixed and let \(A_{\boldsymbol{a},\mathcal{V}}(d,N)\) denote the average of the squares of the distances from \(\boldsymbol{a}\) to all vertices \(\boldsymbol{v}\in\mathcal{V}\). 
We have \[A_{\boldsymbol{a},\mathcal{V}}(d,N) =\frac{1}{\#\mathcal{V}}\sum_{\boldsymbol{v}\in\mathcal{V}}\mathfrak{d}^{2}(\boldsymbol{v},\boldsymbol{a})\] \[=\frac{1}{2^{d}}\sum_{\boldsymbol{v}\in\mathcal{V}}\left((v_{1}-a_{1})^{2}+\cdots+(v_{d}-a_{d})^{2}\right)\] \[=\frac{1}{2^{d}}\sum_{\boldsymbol{v}\in\mathcal{V}}\sum_{j=1}^{d}v_{j}^{2}-\frac{1}{2^{d-1}}\sum_{\boldsymbol{v}\in\mathcal{V}}\sum_{j=1}^{d}v_{j}a_{j}+\frac{1}{2^{d}}\sum_{\boldsymbol{v}\in\mathcal{V}}\sum_{j=1}^{d}a_{j}^{2}.\] For any fixed \(j\), there are \(2^{d-1}\) vertices \(\boldsymbol{v}\in\mathcal{V}\) with the \(j\)-th component equal to \(0\), while the remaining ones have the \(j\)-th component equal to \(N\). Then \[A_{\boldsymbol{a},\mathcal{V}}(d,N) =\frac{1}{2^{d}}\sum_{j=1}^{d}2^{d-1}N^{2}-\frac{1}{2^{d-1}}\sum_{j=1}^{d}a_{j}2^{d-1}N+\frac{1}{2^{d}}\,\|\boldsymbol{a}\|^{2}\,2^{d}\] \[=\frac{1}{2}dN^{2}-\boldsymbol{|a|}N+\|\boldsymbol{a}\|^{2}\,.\] We state the result in the next lemma. **Lemma 1**.: _Let \(\mathcal{V}\) be the set of vertices of the hypercube \([0,N]^{d}\), where \(N\geq 1\) and \(d\geq 1\) are integers. Let \(\boldsymbol{a}=(a_{1},\ldots,a_{d})\in\mathbb{R}^{d}\) be fixed. Then, the average of all the squares of distances from \(\boldsymbol{a}\) to points in \(\mathcal{V}\) is_ \[A_{\boldsymbol{a},\mathcal{V}}(d,N)=\frac{1}{\#\mathcal{V}}\sum_{\boldsymbol{v}\in\mathcal{V}}\mathfrak{d}^{2}(\boldsymbol{v},\boldsymbol{a})=\frac{1}{2}dN^{2}-\boldsymbol{|a|}N+\|\boldsymbol{a}\|^{2}\,. \tag{1}\] In particular, Lemma 1 says that the root mean square of the distances from the origin to the vertices of \([0,N]^{d}\) equals \(\sqrt{dN^{2}/2}\), which is the same as saying that the root mean square of the normalized distances is \(1/\sqrt{2}\). ### The second moment about the average distances to the vertices Starting with the definition of the second moment, which we denote by \(\mathfrak{M}_{2;\mathbf{a},\mathcal{V}}(d,N)\), we rearrange the terms in its defining summation to aggregate the average and make use of Lemma 1. Thus, writing shortly \(A_{\mathbf{a},\mathcal{V}}\) instead of \(A_{\mathbf{a},\mathcal{V}}(d,N)\), we have: \[\begin{split}\mathfrak{M}_{2;\mathbf{a},\mathcal{V}}(d,N)&:=\frac{1}{\#\mathcal{V}}\sum_{\mathbf{v}\in\mathcal{V}}\left(\mathfrak{d}^{2}(\mathbf{v},\mathbf{a})-A_{\mathbf{a},\mathcal{V}}\right)^{2}\\ &=\frac{1}{2^{d}}\sum_{\mathbf{v}\in\mathcal{V}}\left(\mathfrak{d}^{4}(\mathbf{v},\mathbf{a})-2\,\mathfrak{d}^{2}(\mathbf{v},\mathbf{a})A_{\mathbf{a},\mathcal{V}}+A_{\mathbf{a},\mathcal{V}}^{2}\right)\\ &=\frac{1}{2^{d}}\left(\sum_{\mathbf{v}\in\mathcal{V}}\mathfrak{d}^{4}(\mathbf{v},\mathbf{a})-2A_{\mathbf{a},\mathcal{V}}\sum_{\mathbf{v}\in\mathcal{V}}\mathfrak{d}^{2}(\mathbf{v},\mathbf{a})+\sum_{\mathbf{v}\in\mathcal{V}}A_{\mathbf{a},\mathcal{V}}^{2}\right)\\ &=\frac{1}{2^{d}}\cdot\Sigma_{\mathbf{a},\mathcal{V}}-A_{\mathbf{a},\mathcal{V}}^{2}.\end{split} \tag{2}\] To find the sum denoted by \(\Sigma_{\mathbf{a},\mathcal{V}}\) in (2), we write it explicitly: \[\Sigma_{\mathbf{a},\mathcal{V}}:=\sum_{\mathbf{v}\in\mathcal{V}}\mathfrak{d}^{4}(\mathbf{v},\mathbf{a})=\sum_{\mathbf{v}\in\mathcal{V}}\sum_{m=1}^{d}\sum_{n=1}^{d}h(v_{m},v_{n},a_{m},a_{n}), \tag{3}\] where \(h(v_{m},v_{n},a_{m},a_{n})=(v_{m}-a_{m})^{2}(v_{n}-a_{n})^{2}\) is the sum of the following nine monomials: \[\begin{split} h(v_{m},v_{n},a_{m},a_{n})=& v_{m}^{2}v_{n}^{2}-2v_{m}^{2}v_{n}a_{n}+v_{m}^{2}a_{n}^{2}\\ &-2v_{m}a_{m}v_{n}^{2}+4v_{m}a_{m}v_{n}a_{n}-2v_{m}a_{m}a_{n}^{2}\\ &+a_{m}^{2}v_{n}^{2}-2a_{m}^{2}v_{n}a_{n}+a_{m}^{2}a_{n}^{2}. 
\end{split} \tag{4}\] Next we take into account the contribution of each monomial in (4) to the corresponding sum in (2). For this we separate the group of the \(d\) diagonal terms (those with \(m=n\)) from the group of the \(d^{2}-d\) off-diagonal terms, and then count the number of vertices with the non-zero components at the right place. We have: \[\begin{split} S_{1}(\mathbf{a},\mathcal{V})&=\sum_{ \mathbf{v}\in\mathcal{V}}\sum_{m=1}^{d}\sum_{n=1}^{d}v_{m}^{2}v_{n}^{2}=N^{4} \left(d2^{d-1}+(d^{2}-d)2^{d-2}\right);\\ S_{2}(\mathbf{a},\mathcal{V})&=\sum_{\mathbf{v}\in\mathcal{ V}}\sum_{m=1}^{d}\sum_{n=1}^{d}v_{m}^{2}v_{n}a_{n}=N^{3}\left(2^{d-1}|\,\mathbf{a} |+(d-1)2^{d-2}|\,\mathbf{a}|\right);\\ S_{3}(\mathbf{a},\mathcal{V})&=\sum_{\mathbf{v}\in\mathcal{ V}}\sum_{m=1}^{d}\sum_{n=1}^{d}v_{m}^{2}a_{n}^{2}=N^{2}d2^{d-1}\,\|\mathbf{a} \|^{2}\,;\end{split} \tag{5}\] then \[\begin{split} S_{4}(\boldsymbol{a},\mathcal{V})&=S_{2}( \boldsymbol{a},\mathcal{V})=\sum_{\boldsymbol{v}\in\mathcal{V}}\sum_{m=1}^{d} \sum_{n=1}^{d}v_{m}a_{m}v_{n}^{2}\\ &=N^{3}\left(2^{d-1}\boldsymbol{|}\,\boldsymbol{a}\boldsymbol{| }+(d-1)2^{d-2}\boldsymbol{|}\,\boldsymbol{a}\boldsymbol{|}\right);\\ S_{5}(\boldsymbol{a},\mathcal{V})&=\sum_{\boldsymbol {v}\in\mathcal{V}}\sum_{m=1}^{d}\sum_{n=1}^{d}v_{m}a_{m}v_{n}a_{n}=N^{2}\left(2 ^{d-1}\,\|\boldsymbol{a}\|^{2}+2^{d-2}(\boldsymbol{|}\,\boldsymbol{a}\boldsymbol {|}^{2}-\|\boldsymbol{a}\|^{2})\right);\\ S_{6}(\boldsymbol{a},\mathcal{V})&=\sum_{\boldsymbol {v}\in\mathcal{V}}\sum_{m=1}^{d}\sum_{n=1}^{d}v_{m}a_{m}a_{n}^{2}=N2^{d-1} \boldsymbol{|}\,\boldsymbol{a}\,\|\,\|\boldsymbol{a}\|^{2}\,;\end{split} \tag{6}\] and \[\begin{split} S_{7}(\boldsymbol{a},\mathcal{V})&=S_{ 3}(\boldsymbol{a},\mathcal{V})=\sum_{\boldsymbol{v}\in\mathcal{V}}\sum_{m=1}^ {d}\sum_{n=1}^{d}a_{m}^{2}v_{n}^{2}=N^{2}d2^{d-1}\,\|\boldsymbol{a}\|^{2}\,; \\ S_{8}(\boldsymbol{a},\mathcal{V})&=S_{6}( \boldsymbol{a},\mathcal{V})=\sum_{\boldsymbol{v}\in\mathcal{V}}\sum_{m=1}^{d} \sum_{n=1}^{d}a_{m}^{2}v_{n}a_{n}=N2^{d-1}\boldsymbol{|}\,\boldsymbol{a}\,\| \,\|\boldsymbol{a}\|^{2}\,;\\ S_{9}(\boldsymbol{a},\mathcal{V})&=\sum_{\boldsymbol {v}\in\mathcal{V}}\sum_{m=1}^{d}\sum_{n=1}^{d}a_{m}^{2}a_{n}^{2}=2^{d}\,\| \boldsymbol{a}\|^{4}\,.\end{split} \tag{7}\] On adding the sums in (5), (6) and (7) as many times as indicated by the appearances of their defining monomials in (4), we find that the sum \(\Sigma_{\boldsymbol{a},\mathcal{V}}\) from (3) is equal to \[\begin{split}\Sigma_{\boldsymbol{a},\mathcal{V}}&= \left(S_{1}(\boldsymbol{a},\mathcal{V})-2S_{2}(\boldsymbol{a},\mathcal{V})+S_ {3}(\boldsymbol{a},\mathcal{V})\right)\\ &\qquad\qquad\qquad-\left(2S_{4}(\boldsymbol{a},\mathcal{V})-4S_ {5}(\boldsymbol{a},\mathcal{V})+2S_{6}(\boldsymbol{a},\mathcal{V})\right)\\ &\qquad\qquad\qquad\qquad\qquad+\left(S_{7}(\boldsymbol{a}, \mathcal{V})-2S_{8}(\boldsymbol{a},\mathcal{V})+S_{9}(\boldsymbol{a}, \mathcal{V})\right)\\ &=2^{d-2}\left((d^{2}+d)N^{4}-4(d+1)\boldsymbol{|}\,\boldsymbol{a }\boldsymbol{|}N^{3}-8\boldsymbol{|}\,\boldsymbol{a}\boldsymbol{|}\,\| \boldsymbol{a}\|^{2}\,N\right.\\ &\qquad\qquad\qquad\qquad\qquad\left.+4\big{(}\boldsymbol{|}\, \boldsymbol{a}\boldsymbol{|}^{2}+(d+1)\,\|\boldsymbol{a}\|^{2}\,\big{)}N^{2}+4 \,\|\boldsymbol{a}\|^{4}\right).\end{split} \tag{8}\] Then, inserting the results from (8) and (1) in formula (2), we arrive at a closed form expression for \(\mathfrak{M}_{2;\boldsymbol{a},\mathcal{V}}(d,N)\), which we state in the next lemma. 
**Lemma 2**.: _Let \(d,N\geq 1\) be integers and let \(\mathcal{V}\) be the set of vertices of the hypercube \([0,N]^{d}\). Then, the second moment about the mean \(A_{\boldsymbol{a},\mathcal{V}}(d,N)\) equals_ \[\mathfrak{M}_{2;\boldsymbol{a},\mathcal{V}}(d,N)=\frac{1}{\#\mathcal{V}}\sum_{\boldsymbol{v}\in\mathcal{V}}\left(\mathfrak{d}^{2}(\boldsymbol{v},\boldsymbol{a})-A_{\boldsymbol{a},\mathcal{V}}(d,N)\right)^{2}=\frac{1}{4}dN^{4}-\boldsymbol{|}\,\boldsymbol{a}\boldsymbol{|}N^{3}+\|\boldsymbol{a}\|^{2}\,N^{2}.\] ## 3. The average of the squares and the square root of the average Since the normalized second moment \(\mathfrak{M}_{2;\boldsymbol{a},\mathcal{V}}(d,N)/d^{2}N^{4}=o(1)\) as \(d\to\infty\), it follows that for any fixed \(\boldsymbol{a}\in\mathcal{W}\), almost all normalized distances from \(\boldsymbol{a}\) to the vertices of \(\mathcal{W}\) are close to \(\sqrt{A_{\boldsymbol{a},\mathcal{V}}(d,N)/dN^{2}}\). This is the object of the following theorem. **Theorem 4**.: _Let \(B_{\mathbf{a},\mathcal{V}}:=A_{\mathbf{a},\mathcal{V}}(d,N)/dN^{2}\) denote the average of the squares of the normalized distances from \(\mathbf{a}\) to the vertices of \([0,N]^{d}\). Let \(\eta\in(0,1/2)\) be fixed. Then, for any integers \(d\geq 2\), \(N\geq 1\), and any point \(\mathbf{a}\in\mathcal{W}\), we have_ \[\frac{1}{\#\mathcal{V}}\#\left\{\mathbf{v}\in\mathcal{V}:\mathfrak{d}_{d}(\mathbf{a},\mathbf{v})\in\left[\sqrt{B_{\mathbf{a},\mathcal{V}}}-\frac{1}{d^{\eta}},\ \sqrt{B_{\mathbf{a},\mathcal{V}}}+\frac{1}{d^{\eta}}\right]\right\}\geq 1-\frac{1}{d^{1-2\eta}}\,.\] Proof.: Let \(\eta,d,N,\mathbf{a}\) be as in the statement of the theorem. Since \[-\boldsymbol{|}\,\mathbf{a}\boldsymbol{|}N+\|\mathbf{a}\|^{2}\leq 0,\] from Lemma 2 we find that \[\frac{\mathfrak{M}_{2;\mathbf{a},\mathcal{V}}(d,N)}{d^{2}N^{4}}\leq\frac{1}{4d}\,. \tag{9}\] On the other hand, for any parameters \(b,T>0\), \[\frac{\mathfrak{M}_{2;\mathbf{a},\mathcal{V}}(d,N)}{d^{2}N^{4}} =\frac{1}{\#\mathcal{V}}\times\sum_{\mathbf{v}\in\mathcal{V}}\left(\mathfrak{d}_{d}^{2}(\mathbf{a},\mathbf{v})-B_{\mathbf{a},\mathcal{V}}\right)^{2} \tag{10}\] \[\geq\frac{1}{\#\mathcal{V}}\times\sum_{\begin{subarray}{c}\mathbf{v}\in\mathcal{V}\\ \mid\mathfrak{d}_{d}^{2}(\mathbf{a},\mathbf{v})-B_{\mathbf{a},\mathcal{V}}\mid\geq\frac{1}{bT}\end{subarray}}\left(\mathfrak{d}_{d}^{2}(\mathbf{a},\mathbf{v})-B_{\mathbf{a},\mathcal{V}}\right)^{2}\] \[\geq\frac{1}{\#\mathcal{V}}\times\sum_{\begin{subarray}{c}\mathbf{v}\in\mathcal{V}\\ \mid\mathfrak{d}_{d}^{2}(\mathbf{a},\mathbf{v})-B_{\mathbf{a},\mathcal{V}}\mid\geq\frac{1}{bT}\end{subarray}}\frac{1}{b^{2}T^{2}}\,.\] Then, on combining (9) and (10), we see that \[\frac{1}{\#\mathcal{V}}\#\left\{\mathbf{v}\in\mathcal{V}\colon\left|\mathfrak{d}_{d}^{2}(\mathbf{a},\mathbf{v})-B_{\mathbf{a},\mathcal{V}}\right|\geq\frac{1}{bT}\right\}\leq\frac{b^{2}T^{2}}{4d}. \tag{11}\] Now, by Lemma 1 and the definition of \(B_{\mathbf{a},\mathcal{V}}\) in the hypothesis, we find that \[\sqrt{B_{\mathbf{a},\mathcal{V}}}=\sqrt{\frac{1}{2}+\frac{1}{dN^{2}}\left(\|\mathbf{a}\|^{2}-N\,\boldsymbol{|}\,\mathbf{a}\boldsymbol{|}\right)}\geq\sqrt{\frac{1}{2}-\frac{1}{4}}=\frac{1}{2}\,. 
\tag{12}\] (Here we have taken into account the fact that the minimum of \(\|\mathbf{a}\|^{2}-N\,\boldsymbol{|}\,\mathbf{a}\boldsymbol{|}\) is attained in the middle of the hypercube, which is a consequence of the fact that, independently in each of the \(d\) coordinates, the minimum of \(x\mapsto x^{2}-Nx\) is reached for \(x=N/2\).) Then, using inequality (12) and the fact that \(\mathfrak{d}_{d}(\mathbf{a},\mathbf{v})\geq 0\), it follows that \[\left|\mathfrak{d}_{d}^{2}(\mathbf{a},\mathbf{v})-B_{\mathbf{a},\mathcal{V}}\right| =\left|\mathfrak{d}_{d}(\mathbf{a},\mathbf{v})-\sqrt{B_{\mathbf{a},\mathcal{V}}}\right|\left(\mathfrak{d}_{d}(\mathbf{a},\mathbf{v})+\sqrt{B_{\mathbf{a},\mathcal{V}}}\right)\] \[\geq\frac{1}{2}\left|\mathfrak{d}_{d}(\mathbf{a},\mathbf{v})-\sqrt{B_{\mathbf{a},\mathcal{V}}}\right|.\] Therefore, we can tighten the restriction in the set on the left-hand side of inequality (11) by taking \(b=2\), and, as a consequence, we find that \[\frac{1}{\#\mathcal{V}}\#\left\{\mathbf{v}\in\mathcal{V}\colon\left|\mathfrak{d}_{d}(\mathbf{a},\mathbf{v})-\sqrt{B_{\mathbf{a},\mathcal{V}}}\right|\geq\frac{1}{T}\right\}\leq\frac{T^{2}}{d}.\] Finally, we take \(T=d^{\eta}\) and then see that this completes the proof of Theorem 4. ## 4. Triangles involving the Vertices of the Hypercube In this section we analyze the set of triangles in which at least one vertex is a corner of the hypercube. We count them to see how many of them are close to or away from the average. ### Triangles formed from any fixed point and two vertices of the cube From Theorem 4, it will follow that almost all pairs of distinct vertices \((\boldsymbol{v_{1}},\boldsymbol{v_{2}})\in\mathcal{V}^{2}\) are each at a distance close to \(\sqrt{B_{\boldsymbol{a},\mathcal{V}}}\) from \(\boldsymbol{a}\). As a result, one can say that almost all triangles formed by \(\boldsymbol{a}\) and two vertices are 'almost isosceles'. If we denote by \(\mathcal{T}_{\boldsymbol{a},\mathcal{V}^{2}}\subset\mathcal{V}^{2}\) the set of all pairs of vertices \((\boldsymbol{v_{1}},\boldsymbol{v_{2}})\), which form together with \(\boldsymbol{a}\) a non-degenerate triangle (that is, triangles with distinct vertices), then \[\begin{split}\frac{1}{\#\mathcal{T}_{\boldsymbol{a},\mathcal{V}^{2}}}&\#\left\{(\boldsymbol{v_{1}},\boldsymbol{v_{2}})\in\mathcal{T}_{\boldsymbol{a},\mathcal{V}^{2}}\colon\,\mathfrak{d}_{d}(\boldsymbol{a},\boldsymbol{v_{1}}),\mathfrak{d}_{d}(\boldsymbol{a},\boldsymbol{v_{2}})\in\left[\sqrt{B_{\boldsymbol{a},\mathcal{V}}}-\frac{1}{d^{\eta}},\sqrt{B_{\boldsymbol{a},\mathcal{V}}}+\frac{1}{d^{\eta}}\right]\right\}\\ &\geq\frac{1}{\#\mathcal{V}}\left(\#\left\{\boldsymbol{v_{1}}\in\mathcal{V}\colon\,\mathfrak{d}_{d}(\boldsymbol{a},\boldsymbol{v_{1}})\in\left[\sqrt{B_{\boldsymbol{a},\mathcal{V}}}-\frac{1}{d^{\eta}},\sqrt{B_{\boldsymbol{a},\mathcal{V}}}+\frac{1}{d^{\eta}}\right]\right\}-1\right)\\ &\quad\times\frac{1}{\#\mathcal{V}}\left(\#\left\{\boldsymbol{v_{2}}\in\mathcal{V}\colon\,\mathfrak{d}_{d}(\boldsymbol{a},\boldsymbol{v_{2}})\in\left[\sqrt{B_{\boldsymbol{a},\mathcal{V}}}-\frac{1}{d^{\eta}},\sqrt{B_{\boldsymbol{a},\mathcal{V}}}+\frac{1}{d^{\eta}}\right]\right\}-2\right),\end{split} \tag{13}\] where we subtract \(1\) and \(2\), respectively, from the two terms in the right-hand side of (13) to account for the possibilities that \(\boldsymbol{a},\boldsymbol{v_{1}},\boldsymbol{v_{2}}\) form a degenerate triangle. 
From Theorem 4, we see that the right-hand side of (13) is bounded below by \[\begin{split}\left(1-\frac{1}{d^{1-2\eta}}-\frac{1}{2^{d}}\right)&\left(1-\frac{1}{d^{1-2\eta}}-\frac{2}{2^{d}}\right)\\ &=1-\frac{2}{d^{1-2\eta}}+\frac{1}{d^{2-4\eta}}-\frac{3}{2^{d}}+\frac{3}{2^{d}d^{1-2\eta}}+\frac{2}{2^{2d}}\\ &\geq 1-\frac{2}{d^{1-2\eta}},\end{split}\] for \(d\geq 8\), since in that range \(1/d^{2-4\eta}-3/2^{d}\geq 0\). We now arrive at the following theorem on isosceles triangles. **Theorem 5**.: _Let \(\boldsymbol{a}\in\mathcal{W}\) be fixed and let \(\mathcal{T}_{\boldsymbol{a},\mathcal{V}^{2}}\) denote the set of triangles with distinct vertices \(\boldsymbol{a}\) and \(\boldsymbol{v}_{1},\boldsymbol{v}_{2}\in\mathcal{V}\). Let \(\eta\in(0,1/2)\) be fixed. Then, for any integers \(d\geq 8\) and \(N\geq 1\), we have_ \[\frac{1}{\#\mathcal{T}_{\boldsymbol{a},\mathcal{V}^{2}}}\#\left\{(\boldsymbol{v_{1}},\boldsymbol{v_{2}})\in\mathcal{T}_{\boldsymbol{a},\mathcal{V}^{2}}\colon\,|\mathfrak{d}_{d}(\boldsymbol{a},\boldsymbol{v_{1}})-\mathfrak{d}_{d}(\boldsymbol{a},\boldsymbol{v_{2}})|\leq\frac{2}{d^{\eta}}\right\}\geq 1-\frac{2}{d^{1-2\eta}}.\] Seeing that almost every such triangle is almost isosceles, we may wonder if any of these triangles can be equilateral, or perhaps a right triangle. Let \(\boldsymbol{c}=\left(\frac{N}{2},\ldots,\frac{N}{2}\right)\) be the center of the hypercube. Notice that \(\boldsymbol{c}\) may or may not belong to \(\mathcal{W}\), but in any case, if \(N\) is odd, then the distance from \(\boldsymbol{c}\) to the nearest point in \(\mathcal{W}\) is not greater than \(\sqrt{d}/2\). This is the same as saying that the normalized distance from \(\boldsymbol{c}\) to \(\mathcal{W}\) is at most \(1/(2N)\), so we may reason with a point with integer coordinates that is close to \(\boldsymbol{c}\) instead of \(\boldsymbol{c}\) itself. For simplicity we may assume here that \(N\) is even, but this is not necessary, since in fact, in the proofs of Theorems 4 and 5, we did not make use of the fact that the coordinates of \(\boldsymbol{a}\) are integers. Note that all vertices from \(\mathcal{V}\) are equally far from the center and \[\mathfrak{d}_{d}(\boldsymbol{v},\boldsymbol{c})=\frac{1}{\sqrt{d}N}\Big{(}\sum_{1\leq j\leq d}\left(N/2\right)^{2}\Big{)}^{1/2}=\frac{1}{2},\text{ for }\boldsymbol{v}\in\mathcal{V}, \tag{14}\] while for arbitrary \(\boldsymbol{a}=(a_{1},\ldots,a_{d})\), the normalized distance to \(\boldsymbol{c}\) is \[\mathfrak{d}_{d}(\boldsymbol{a},\boldsymbol{c})=\frac{1}{\sqrt{d}N}\Big{(}\sum_{1\leq j\leq d}\left(a_{j}-N/2\right)^{2}\Big{)}^{1/2}=\frac{1}{\sqrt{d}N}\left(\frac{dN^{2}}{4}-\boldsymbol{\mid}\boldsymbol{a}\boldsymbol{\mid}N+\|\boldsymbol{a}\|^{2}\right)^{1/2}. \tag{15}\] Now let us point out the following two observations. _Remark 1_.: (1) Taking \(\boldsymbol{a}\) in \(\mathcal{V}\), Theorem 4 tells us that the normalized distance between almost any two vertices in \(\mathcal{V}\) is close to \(1/\sqrt{2}\). (2) By Lemma 1, the normalized average of the squares of distances from \(\boldsymbol{a}\) to vertices in \(\mathcal{V}\) is \(B_{\boldsymbol{a},\mathcal{V}}=A_{\boldsymbol{a},\mathcal{V}}/(dN^{2})=1/2-\boldsymbol{|\boldsymbol{a}|}/(dN)+\left\|\boldsymbol{a}\right\|^{2}/(dN^{2})\). 
Then, by (14) and (15) this can further be expressed as \[B_{\boldsymbol{a},\mathcal{V}}=\frac{1}{4}+\left(\frac{1}{4}-\frac{\boldsymbol{|\boldsymbol{a}|}}{dN}+\frac{\left\|\boldsymbol{a}\right\|^{2}}{dN^{2}}\right)=\mathfrak{d}_{d}^{2}(\boldsymbol{v},\boldsymbol{c})+\mathfrak{d}_{d}^{2}(\boldsymbol{a},\boldsymbol{c}),\text{ for any }\boldsymbol{v}\in\mathcal{V}. \tag{16}\] In particular, (16) shows that the average \(B_{\boldsymbol{a},\mathcal{V}}\) depends only on the normalized distance \(\mathfrak{d}_{d}(\boldsymbol{a},\boldsymbol{c})\), which we shall further denote by \(r_{\boldsymbol{a}}\). On combining Theorem 5, (14), and the observations from Remark 1, we see that almost all triangles in \(\mathcal{T}_{\boldsymbol{a},\mathcal{V}^{2}}\) have one side length close to \(1/\sqrt{2}\), while the other two are both close to \(\sqrt{1/4+r_{\boldsymbol{a}}^{2}}\). Since, by normalization, we know that \(0\leq r_{\boldsymbol{a}}\leq 1/2\), it follows that \[\frac{1}{2}\leq\sqrt{\frac{1}{4}+r_{\boldsymbol{a}}^{2}}\leq\frac{1}{\sqrt{2}}. \tag{17}\] In particular, if \(r_{\boldsymbol{a}}=1/2\), which occurs when \(\boldsymbol{a}\) is a vertex, we see that almost all triangles have each of their side lengths close to \(1/\sqrt{2}\). In other words, almost all triangles formed by three vertices of the hypercube are 'almost equilateral'. On the other hand, if \(r_{\boldsymbol{a}}=0\), which occurs when \(\boldsymbol{a}=\boldsymbol{c}\) is at the center of the hypercube, we see that almost all triangles have side lengths close to \(1/\sqrt{2}\), \(1/2\), and \(1/2\), respectively, that is, they are almost isosceles with an almost right angle at \(\boldsymbol{c}\). Making this more explicit, we argue similarly as in (13) to find the proportion of non-degenerate triangles \((\boldsymbol{a},\boldsymbol{v_{1}},\boldsymbol{v_{2}})\) such that \[\begin{split}\mathfrak{d}_{d}(\boldsymbol{a},\boldsymbol{v_{1}}),\mathfrak{d}_{d}(\boldsymbol{a},\boldsymbol{v_{2}})&\in\left[\sqrt{B_{\boldsymbol{a},\mathcal{V}}}-\frac{1}{d^{\eta}},\sqrt{B_{\boldsymbol{a},\mathcal{V}}}+\frac{1}{d^{\eta}}\right],\text{ and }\\ \mathfrak{d}_{d}(\boldsymbol{v_{1}},\boldsymbol{v_{2}})&\in\left[\frac{1}{\sqrt{2}}-\frac{1}{d^{\eta}},\frac{1}{\sqrt{2}}+\frac{1}{d^{\eta}}\right].\end{split} \tag{18}\] Firstly, for any \(0<\eta<1/2\), from Theorem 4, we know that for any vertex \(\boldsymbol{v}\in\mathcal{V}\), the proportion of vertices \(\boldsymbol{v_{1}}\in\mathcal{V}\) such that \[\mathfrak{d}_{d}(\boldsymbol{a},\boldsymbol{v_{1}})\not\in\left[\sqrt{B_{\boldsymbol{a},\mathcal{V}}}-\frac{1}{d^{\eta}},\sqrt{B_{\boldsymbol{a},\mathcal{V}}}+\frac{1}{d^{\eta}}\right]\text{ or }\mathfrak{d}_{d}(\boldsymbol{v_{1}},\boldsymbol{v})\not\in\left[\frac{1}{\sqrt{2}}-\frac{1}{d^{\eta}},\frac{1}{\sqrt{2}}+\frac{1}{d^{\eta}}\right],\] is bounded above by \[\frac{1}{d^{1-2\eta}}+\frac{1}{d^{1-2\eta}}=\frac{2}{d^{1-2\eta}}.\] Therefore, where \(\boldsymbol{v}\in\mathcal{V}\) can be taken to be any vertex, the proportion of non-degenerate triangles formed by distinct vertices \((\boldsymbol{v_{1}},\boldsymbol{v_{2}},\boldsymbol{a})\), which satisfy conditions in (18), is bounded below by \[\frac{1}{\#\mathcal{V}}\bigg{(}\#\left\{\mathbf{v_{1}}\in\mathcal{V}\colon\,\mathfrak{d}_{d}(\mathbf{a},\mathbf{v_{1}})\in\left[\sqrt{B_{\mathbf{a},\mathcal{V}}}-\frac{1}{d^{\eta}},\sqrt{B_{\mathbf{a},\mathcal{V}}}+\frac{1}{d^{\eta}}\right]\right\}-1\bigg{)}\] \[\times \frac{1}{\#\mathcal{V}}\bigg{(}\#\bigg{\{}\mathbf{v_{2}}\in\mathcal{V} 
\colon\,\mathfrak{d}_{d}(\mathbf{a},\mathbf{v_{2}})\in\left[\sqrt{B_{\mathbf{a},\mathcal{V}}}-\frac{1}{d^{\eta}},\sqrt{B_{\mathbf{a},\mathcal{V}}}+\frac{1}{d^{\eta}}\right],\text{ and }\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\mathfrak{d}_{d}(\mathbf{v},\mathbf{v_{2}})\in\left[\frac{1}{\sqrt{2}}-\frac{1}{d^{\eta}},\frac{1}{\sqrt{2}}+\frac{1}{d^{\eta}}\right]\bigg{\}}-2\bigg{)}\] \[\geq\left(1-\frac{1}{d^{1-2\eta}}-\frac{1}{2^{d}}\right)\left(1-\frac{2}{d^{1-2\eta}}-\frac{2}{2^{d}}\right)\] \[\geq 1-\frac{3}{d^{1-2\eta}},\] for \(d\geq 6\). As a consequence, we now have the following theorem. **Theorem 6**.: _Let \(\mathcal{T}_{\mathbf{a},\mathcal{V}^{2}}\) be the set of triangles with distinct vertices \(\mathbf{a}\) and \(\mathbf{v}_{1},\mathbf{v}_{2}\in\mathcal{V}\). Let \(\eta\in(0,1/2)\) be fixed. Then, for any integers \(d\geq 6\), \(N\geq 1\), and any point \(\mathbf{a}\in\mathcal{W}\), we have_ \[\frac{1}{\#\mathcal{T}_{\mathbf{a},\mathcal{V}^{2}}}\#\left\{(\mathbf{v_{1}},\mathbf{v_{2}})\in\mathcal{T}_{\mathbf{a},\mathcal{V}^{2}}:\mathbf{a},\mathbf{v_{1}},\mathbf{v_{2}}\text{ satisfy (18)}\right\}\geq 1-\frac{3}{d^{1-2\eta}},\] _where, for any vertex \(\mathbf{v}\in\mathcal{V}\), \(\theta_{\mathbf{v}}\) is the angle between the lines going from \(\mathbf{c}\) to \(\mathbf{a}\) and \(\mathbf{v}\) respectively._ In plain words, Theorem 7 says that as long as \(\mathbf{a}\) is not too close to the center of the cube, almost all triangles formed by \(\mathbf{a}\), \(\mathbf{c}\), and a vertex of the cube are almost right triangles. ## 5. The spacings between a fixed point and the lattice points in the hypercube In this section we first calculate the mean distance from a fixed point to all the lattice points in \(\mathcal{W}\). Afterwards, we use the result to find the second moment about the mean of these distances. This is the opposite extreme, in terms of dimension, of the problem dealt with before. Here, the whole hypercube of lattice points plays the previous role of the vertices. ### The average \(A_{\mathbf{a},\mathcal{W}}(d,N)\) Let \(\mathbf{a}=(a_{1},\ldots,a_{d})\in\mathbb{R}^{d}\) be fixed and denote \[A_{\mathbf{a},\mathcal{W}}(d,N):=\frac{1}{\#\mathcal{W}}\sum_{\mathbf{v}\in\mathcal{W}}\mathfrak{d}^{2}(\mathbf{a},\mathbf{v})\,.\] Using the definitions and rearranging the terms, we find that \[\begin{split} A_{\mathbf{a},\mathcal{W}}(d,N)&=\frac{1}{\#\mathcal{W}}\sum_{\mathbf{v}\in\mathcal{W}}\sum_{j=1}^{d}(v_{j}-a_{j})^{2}\\ &=\frac{1}{\#\mathcal{W}}\sum_{\mathbf{v}\in\mathcal{W}}\sum_{j=1}^{d}v_{j}^{2}-\frac{2}{\#\mathcal{W}}\sum_{\mathbf{v}\in\mathcal{W}}\sum_{j=1}^{d}a_{j}v_{j}+\|\mathbf{a}\|^{2}\,.\end{split} \tag{19}\] Here, changing the order of summation, the sum of the squares is \[\sum_{j=1}^{d}\sum_{\mathbf{v}\in\mathcal{W}}v_{j}^{2}=d\sum_{\mathbf{v}\in\mathcal{W}}v_{1}^{2}=d(N+1)^{d-1}\sum_{v=0}^{N}v^{2}=\frac{dN(N+1)^{d}(2N+1)}{6}. \tag{20}\] In the same way, the mixed sum in (19) can be written as \[\sum_{j=1}^{d}\sum_{\mathbf{v}\in\mathcal{W}}a_{j}v_{j}=\boldsymbol{|}\,\mathbf{a}\boldsymbol{|}(N+1)^{d-1}\sum_{v=0}^{N}v=\boldsymbol{|}\,\mathbf{a}\boldsymbol{|}\frac{N(N+1)^{d}}{2}. \tag{21}\] On inserting the results (20) and (21) in (19) we find a closed form expression for \(A_{\mathbf{a},\mathcal{W}}(d,N)\), which we state in the next lemma. **Lemma 3**.: _Let \(d,N\geq 1\) be integers, and let \(\mathbf{a}\in\mathbb{R}^{d}\) be fixed. Let \(\mathcal{W}\) be the set of lattice points in \([0,N]^{d}\). 
Then, the average of all squares of distances from \(\mathbf{a}\) to points in \(\mathcal{W}\) is_ \[A_{\mathbf{a},\mathcal{W}}(d,N)=\frac{1}{\#\mathcal{W}}\sum_{\mathbf{v}\in\mathcal{W} }\mathfrak{d}^{2}(\mathbf{a},\mathbf{v})=\frac{dN(2N+1)}{6}-\left|\!\left|\mathbf{a} \right|\!\right|N+\left|\!\left|\mathbf{a}\right|\!\right|^{2}. \tag{22}\] ### The second moment about the mean The second moment about the mean for the whole hypercube, denoted by \(A_{\mathbf{a},\mathcal{W}}=A_{\mathbf{a},\mathcal{W}}(d,N)\), is defined by \[\mathfrak{M}_{2;\mathbf{a},\mathcal{W}}(d,N):=\frac{1}{\#\mathcal{W}}\sum_{\mathbf{ v}\in\mathcal{W}}\left(\mathfrak{d}^{2}(\mathbf{v},\mathbf{a})-A_{\mathbf{a},\mathcal{W}} \right)^{2}.\] Rearranging the terms on the summation, we may rewrite \(\mathfrak{M}_{2;\mathbf{a},\mathcal{W}}\) as \[\begin{split}\mathfrak{M}_{2;\mathbf{a},\mathcal{W}}(d,N)& =\tfrac{1}{(N+1)^{d}}\sum_{\mathbf{v}\in\mathcal{W}}\big{(}\mathfrak{ 0}^{4}(\mathbf{v},\mathbf{a})-2\,\mathfrak{0}^{2}(\mathbf{v},\mathbf{a})A_{\mathbf{a},\mathcal{W} }+A_{\mathbf{a},\mathcal{W}}^{2}\big{)}\\ &=\tfrac{1}{(N+1)^{d}}\bigg{(}\sum_{\mathbf{v}\in\mathcal{W}} \mathfrak{0}^{4}(\mathbf{v},\mathbf{a})-2A_{\mathbf{a},\mathcal{W}}\sum_{\mathbf{v}\in\mathcal{ W}}\mathfrak{0}^{2}(\mathbf{v},\mathbf{a})+\sum_{\mathbf{v}\in\mathcal{W}}A_{\mathbf{a}, \mathcal{W}}^{2}\bigg{)}\\ &=\tfrac{1}{(N+1)^{d}}\cdot\Sigma_{\mathbf{a},\mathcal{W}}-A_{\mathbf{a},\mathcal{W}}^{2}.\end{split} \tag{23}\] Here the terms collected in \(\Sigma_{\mathbf{a},\mathcal{W}}\) are the analogs of that from relation (3), so that their sum is \[\Sigma_{\mathbf{a},\mathcal{W}}=\sum_{\mathbf{v}\in\mathcal{W}}\mathfrak{0}^{4}(\mathbf{v},\mathbf{a})=\sum_{\mathbf{v}\in\mathcal{W}}\sum_{m=1}^{d}\sum_{n=1}^{d}h(v_{m},v_{n},a_{m},a_{n}), \tag{24}\] where \(h(v_{m},v_{n},a_{m},a_{n})=(v_{m}-a_{m})^{2}(v_{n}-a_{n})^{2}\) is the same sum of nine monomials from (4). Next we calculate the contribution of each of the nine type of terms to the total sum. In the process, we change the order of summation and take care if the terms are on the diagonal (that is, if \(m=n\)) or not. We denote by \(T_{k}(N)\) the sum of the first \(N\)\(k\)-powers of positive integers, that is, \(T_{k}(N)=1^{k}+2^{k}+\cdots+N^{k}\). 
Thus, we obtain: \[\begin{split} S_{1}(\mathbf{a},\mathcal{W})&=\sum_{\bm {v}\in\mathcal{W}}\sum_{m=1}^{d}\sum_{n=1}^{d}v_{m}^{2}v_{n}^{2}\\ &=d(N+1)^{d-1}T_{4}(N)+(d^{2}-d)(N+1)^{d-2}T_{2}^{2}(N);\\ S_{2}(\mathbf{a},\mathcal{W})&=\sum_{\mathbf{v}\in\mathcal{W }}\sum_{m=1}^{d}\sum_{n=1}^{d}v_{m}^{2}v_{n}a_{n}\\ &=\big{|}\,\mathbf{a}\big{|}(N+1)^{d-1}T_{3}(N)+\big{|}\,\mathbf{a}\big{|} (d-1)(N+1)^{d-2}T_{1}(N)T_{2}(N);\\ S_{3}(\mathbf{a},\mathcal{W})&=\sum_{\mathbf{v}\in\mathcal{W }}\sum_{m=1}^{d}\sum_{n=1}^{d}v_{m}^{2}a_{n}^{2}=\|\mathbf{a}\|^{2}\,d(N+1)^{d-1} T_{2}(N);\end{split} \tag{25}\] then \[\begin{split} S_{4}(\mathbf{a},\mathcal{W})&=S_{2}(\bm {a},\mathcal{W})=\sum_{\mathbf{v}\in\mathcal{W}}\sum_{m=1}^{d}\sum_{n=1}^{d}v_{m}a _{m}v_{n}^{2}\\ &=\big{|}\,\mathbf{a}\big{|}(N+1)^{d-1}T_{3}(N)+\big{|}\,\mathbf{a} \big{|}(d-1)(N+1)^{d-2}T_{1}(N)T_{2}(N);\\ S_{5}(\mathbf{a},\mathcal{W})&=\sum_{\mathbf{v}\in\mathcal{W }}\sum_{m=1}^{d}\sum_{n=1}^{d}v_{m}a_{m}v_{n}a_{n}\\ &=\|\mathbf{a}\|^{2}\,(N+1)^{d-1}T_{2}(N)+\left(\big{|}\,\mathbf{a} \big{|}^{2}-\|\mathbf{a}\|^{2}\right)(N+1)^{d-2}T_{1}^{2}(N);\\ S_{6}(\mathbf{a},\mathcal{W})&=\sum_{\mathbf{v}\in\mathcal{W }}\sum_{m=1}^{d}\sum_{n=1}^{d}v_{m}a_{m}a_{n}^{2}=\big{|}\,\mathbf{a}\big{|}\,\| \mathbf{a}\|^{2}\,(N+1)^{d-1}T_{1}(N);\end{split} \tag{26}\] and \[S_{7}(\boldsymbol{a},\mathcal{W}) =S_{3}(\boldsymbol{a},\mathcal{W})=\sum_{\boldsymbol{v}\in \mathcal{W}}\sum_{m=1}^{d}\sum_{n=1}^{d}a_{m}^{2}v_{n}^{2}=\left\|\boldsymbol{a }\right\|^{2}d(N+1)^{d-1}T_{2}(N); \tag{27}\] \[S_{8}(\boldsymbol{a},\mathcal{W}) =S_{6}(\boldsymbol{a},\mathcal{W})=\sum_{\boldsymbol{v}\in \mathcal{W}}\sum_{m=1}^{d}\sum_{n=1}^{d}a_{m}^{2}v_{n}a_{n}=\left|\boldsymbol{a }\right|\left\|\boldsymbol{a}\right\|^{2}(N+1)^{d-1}T_{1}(N);\] \[S_{9}(\boldsymbol{a},\mathcal{W}) =\sum_{\boldsymbol{v}\in\mathcal{W}}\sum_{m=1}^{d}\sum_{n=1}^{d}a_ {m}^{2}a_{n}^{2}=\left\|\boldsymbol{a}\right\|^{4}(N+1)^{d}.\] On combining (25), (26), (27), and (4), we find that \(\Sigma_{\boldsymbol{a},\mathcal{W}}\) from (24) equals \[\Sigma_{\boldsymbol{a},\mathcal{W}}= \big{(}S_{1}(\boldsymbol{a},\mathcal{W})-2S_{2}(\boldsymbol{a}, \mathcal{W})+S_{3}(\boldsymbol{a},\mathcal{W})\big{)} \tag{28}\] \[\qquad\qquad\qquad-\big{(}2S_{4}(\boldsymbol{a},\mathcal{W})-4S_ {5}(\boldsymbol{a},\mathcal{W})+2S_{6}(\boldsymbol{a},\mathcal{W})\big{)}\] \[\qquad+\big{(}S_{7}(\boldsymbol{a},\mathcal{W})-2S_{8}( \boldsymbol{a},\mathcal{W})+S_{9}(\boldsymbol{a},\mathcal{W})\big{)}\] \[= (N+1)^{d}\bigg{(}\Big{(}\frac{1}{9}d^{2}-\frac{4}{45}d\Big{)}\,N^ {4}+\Big{(}\frac{1}{9}d^{2}+\frac{17}{90}d+\Big{(}-\frac{2}{3}d-\frac{1}{3} \Big{)}\left|\boldsymbol{a}\right|\Big{)}\,N^{3}\] \[\qquad\qquad\qquad+\Big{(}\frac{1}{36}d^{2}+\frac{1}{180}d+ \Big{(}-\frac{1}{3}d-\frac{2}{3}+\left|\boldsymbol{a}\right|\Big{)}\left| \boldsymbol{a}\right|\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad+ \Big{(}\frac{2}{3}d+\frac{1}{3}\Big{)}\left\|\boldsymbol{a}\right\|^{2}\Big{)} \,N^{2}\] \[\qquad\qquad\qquad+\Big{(}-\frac{1}{30}d+\Big{(}\frac{1}{3}d+ \frac{2}{3}-2\left|\boldsymbol{a}\right|\Big{)}\left\|a\right\|^{2}\Big{)}\,N +\left\|\boldsymbol{a}\right\|^{4}\bigg{)}.\] Finally, inserting the results from (28) and (22) into (23), we obtain the needed formula for \(\mathfrak{M}_{2;\boldsymbol{a},\mathcal{W}}(d,N)\). **Lemma 4**.: _Let \(d,N\geq 1\) be integers, and let \(A_{\boldsymbol{a},\mathcal{W}}(d,N)\) be the average distance from a fixed point \(\boldsymbol{a}\) to the points in the hypercube \(\mathcal{W}\). 
Then, the second moment about \(A_{\boldsymbol{a},\mathcal{W}}(d,N)\) is_ \[\mathfrak{M}_{2;\boldsymbol{a},\mathcal{W}}(d,N) =\frac{1}{(N+1)^{d}}\sum_{\boldsymbol{v}\in\mathcal{W}}\big{(} \mathfrak{d}^{4}(\boldsymbol{v},\boldsymbol{a})-2\,\mathfrak{d}^{2}( \boldsymbol{v},\boldsymbol{a})A_{\boldsymbol{a},\mathcal{W}}+A_{\boldsymbol{a },\mathcal{W}}^{2}\big{)}\] \[=\frac{4}{45}dN^{4}+\Big{(}\frac{17}{90}d-\frac{1}{3}\left| \boldsymbol{a}\right|\Big{)}\,N^{3}\] \[\quad+\Big{(}\frac{1}{180}d-\frac{2}{3}\left|\boldsymbol{a} \right|+\frac{1}{3}\left\|\boldsymbol{a}\right\|^{2}\Big{)}\,N^{2}+\Big{(}- \frac{1}{30}d+\frac{2}{3}\left\|a\right\|^{2}\Big{)}\,N.\] ## 6. The chance to find points in \(\mathcal{W}\) that are at some uncommon spacing from each other The formulas for the average and the second moment obtained in Section 5 allows us to estimate the chance to find points in \(\mathcal{W}\) that are situated at uncommon (away from the average) spacing from each other. It turns out that, as the dimension \(d\) gets larger, the probability to select at random two points from \(\mathcal{W}\) that are closer or farther away from the average is smaller and smaller, reducing to zero as \(d\) tends to infinity. Following the same argument used in the proof of Theorem 4, we obtain the following result, which shows that for any fixed \(\boldsymbol{a}\in\mathbb{R}^{d}\), almost all normalized distances from \(\boldsymbol{a}\) to the points in \(\mathcal{W}\) are close to \(\sqrt{A_{\boldsymbol{a},\mathcal{W}}/dN^{2}}\). **Theorem 8**.: _Let \(\eta\in(0,1/2)\) be fixed. Let \(B_{\mathbf{a},\mathcal{W}}=A_{\mathbf{a},\mathcal{W}}/dN^{2}\) denote the normalized average of the square distance from \(\mathbf{a}\) to points in \(\mathcal{W}\). Then, for any integers \(d\geq 2\), \(N\geq 1\), and any point \(\mathbf{a}\in\mathbb{R}^{d}\), we have_ \[\frac{1}{\#\mathcal{W}}\#\left\{\mathbf{v}\in\mathcal{W}:\mathfrak{d}_{d}(\mathbf{a}, \mathbf{v})\in\left[\sqrt{B_{\mathbf{a},\mathcal{W}}}-\frac{1}{d^{\eta}},\sqrt{B_{\bm {a},\mathcal{W}}}+\frac{1}{d^{\eta}}\right]\right\}\geq 1-\frac{51}{15}\frac{1}{d ^{1-2\eta}}.\] We can continue our quest by looking for triplets of points in \(\mathcal{W}\). In the same way as we proceeded in Section 4, we see that almost all pairs of distinct points \((\mathbf{v_{1}},\mathbf{v_{2}})\in\mathcal{W}^{2}\) have components situated at a distance close to \(\sqrt{B_{\mathbf{a},\mathcal{W}}}\) from \(\mathbf{a}\). This means that almost all triangles formed by \(\mathbf{a}\) and two other points in the cube are 'almost isosceles'. We can make the argument explicit, as we did in Theorem 5 for vertices, to find the following analogous result. **Theorem 9**.: _Let \(\mathcal{T}_{\mathbf{a},\mathcal{W}^{2}}\subset\mathcal{W}^{2}\) be the set of all pairs of integer points \((\mathbf{v_{1}},\mathbf{v_{2}})\) which form a non-degenerate triangle together with \(\mathbf{a}\). Let \(\eta\in(0,1/2)\) be fixed. Then, for any integers \(d\geq 2\), \(N\geq 1\), and any point \(\mathbf{a}\in\mathcal{W}\), we have_ \[\frac{1}{\#\mathcal{T}_{\mathbf{a},\mathcal{W}^{2}}}\#\left\{(\mathbf{v_{1}},\mathbf{v_{2 }})\in\mathcal{T}_{\mathbf{a},\mathcal{W}^{2}}:\,|\mathfrak{d}_{d}(\mathbf{a},\mathbf{v_{1 }})-\mathfrak{d}_{d}(\mathbf{a},\mathbf{v_{2}})|\leq\frac{2}{d^{\eta}}\right\}\geq 1- \frac{102}{15}\frac{1}{d^{1-2\eta}}.\] The sides of these triangles can be found explicitly. 
To see this, we first use Lemma 3 to express the normalized average solely on the distance from the center of the cube to \(\mathbf{a}\). Thus, using (15), we have \[B_{\mathbf{a},\mathcal{W}} =\frac{1}{3}+\frac{1}{6N}-\frac{|\mathbf{a}|}{dN}+\frac{\|\mathbf{a}\|^{2 }}{dN^{2}}\] \[=\frac{1}{12}+\frac{1}{6N}+\left(\frac{1}{4}-\frac{|\mathbf{a}|}{dN}+ \frac{\|\mathbf{a}\|^{2}}{dN^{2}}\right)\] \[=\frac{1}{12}+\frac{1}{6N}+\mathfrak{d}_{d}^{2}(\mathbf{a},\mathbf{c}).\] Here, employing Theorem 8, the first thing of which we make note, is that for almost all points \(\mathbf{v}\in\mathcal{W}\), the square distance \(\mathfrak{d}_{d}^{2}(\mathbf{c},\mathbf{v})\) is close to \(1/12+1/(6N)\). It also follows that, for almost all pairs of points \(\mathbf{v_{1}},\mathbf{v_{2}}\in\mathcal{W}\), their mutual distance \(\mathfrak{d}_{d}(\mathbf{v_{1}},\mathbf{v_{2}})\) is close to \(\sqrt{B_{\mathbf{v_{1}},\mathcal{W}}}\), which is itself close to \(\sqrt{1/6+1/(3N)}\). Therefore, with our earlier notation \(r_{\mathbf{a}}:=\mathfrak{d}_{d}(\mathbf{a},\mathbf{c})\), we find that almost all triangles \((\mathbf{a},\mathbf{v_{1}},\mathbf{v_{2}})\) have side lengths close to \(\sqrt{1/6+1/(3N)}\), \(\sqrt{1/12+1/(6N)+r_{\mathbf{a}}^{2}}\), and \(\sqrt{1/12+1/(6N)+r_{\mathbf{a}}^{2}}\). If \(r_{\mathbf{a}}=0\), which occurs when \(\mathbf{a}\) is the center of the cube, we see that almost all triangles \((\mathbf{a},\mathbf{v_{1}},\mathbf{v_{2}})\) are almost right triangles. On the other hand, if \(r_{\mathbf{a}}\) is close to \(\sqrt{1/12+1/(6N)}\), then almost all triangles \((\mathbf{a},\mathbf{v_{1}},\mathbf{v_{2}})\) are almost equilateral. In order to make this remarks explicit, we first use the analogue of (11) in the proof of Theorem 8 to see that \[\frac{1}{\#\mathcal{W}}\#\left\{\mathbf{v_{1}}\in\mathcal{W}:\left|\mathfrak{d}_{ d}^{2}(\mathbf{c},\mathbf{v_{1}})-\left(\frac{1}{12}+\frac{1}{6N}\right)\right|\geq \frac{1}{2\sqrt{6}d^{\eta}}\right\}\leq\frac{102}{15}\frac{1}{d^{1-2\eta}}.\] Furthermore, if \(\mathbf{v_{1}}\) is a fixed point such that \[\mathfrak{d}_{d}^{2}(\mathbf{c},\mathbf{v_{1}})\in\left[\frac{1}{12}+\frac{1}{6N}- \frac{1}{2\sqrt{6}d^{\eta}},\ \frac{1}{12}+\frac{1}{6N}+\frac{1}{2\sqrt{6}d^{\eta}}\right],\] then, \[\frac{1}{\#\mathcal{W}}\#\left\{\mathbf{v_{2}}\in\mathcal{W}:\left| \mathfrak{d}_{d}(\mathbf{v_{1}},\mathbf{v_{2}})-\sqrt{\frac{1}{6}+\frac{1}{3N}}\right| \geq\frac{1}{d^{\eta}}\right\}\] \[\leq\frac{1}{\#\mathcal{W}}\#\left\{\mathbf{v_{2}}\in\mathcal{W}: \left|\mathfrak{d}_{d}^{2}(\mathbf{v_{1}},\mathbf{v_{2}})-\left(\frac{1}{6}+\frac{1}{3 N}\right)\right|\geq\frac{1}{\sqrt{6}d^{\eta}}\right\}\] \[\leq\frac{1}{\#\mathcal{W}}\#\left\{\mathbf{v_{2}}\in\mathcal{W}: \left|\mathfrak{d}_{d}^{2}(\mathbf{v_{1}},\mathbf{v_{2}})-B_{\mathbf{v_{1}},\mathcal{W}} \right|\geq\frac{1}{2\sqrt{6}d^{\eta}}\right\}\] \[\leq\frac{102}{15}\cdot\frac{1}{d^{1-2\eta}}.\] Then, we can argue just as we did in the proof of Theorem 6 to find the proportion of non-degenerate triangles \((\mathbf{a},\mathbf{v_{1}},\mathbf{v_{2}})\) such that \[\mathfrak{d}_{d}(\mathbf{a},\mathbf{v_{1}}),\mathfrak{d}_{d}(\mathbf{a},\mathbf{v _{2}}) \in\left[\sqrt{B_{\mathbf{a},\mathcal{W}}}-\frac{1}{d^{\eta}},\ \sqrt{B_{\mathbf{a},\mathcal{W}}}+\frac{1}{d^{\eta}}\right],\ \text{and} \tag{29}\] \[\mathfrak{d}_{d}(\mathbf{v_{1}},\mathbf{v_{2}}) \in\left[\sqrt{\frac{1}{6}+\frac{1}{3N}}-\frac{1}{d^{\eta}},\ \sqrt{\frac{1}{6}+\frac{1}{3N}}+\frac{1}{d^{\eta}}\right],\] and arrive at the following result. 
**Theorem 10**.: _Let \(\mathcal{T}_{\mathbf{a},\mathcal{W}^{2}}\subset\mathcal{W}^{2}\) be the set of all pairs of integer points \((\mathbf{v_{1}},\mathbf{v_{2}})\), which together with \(\mathbf{a}\), form a non-degenerate triangle. Fix \(\eta\in(0,1/2)\). Then, for any integers \(d\geq 2\), \(N\geq 1\), and any point \(\mathbf{a}\in\mathcal{W}\), we have_ \[\frac{1}{\#\mathcal{T}_{\mathbf{a},\mathcal{W}^{2}}}\#\left\{(\mathbf{v_{1}},\mathbf{v_{2 }})\in\mathcal{T}_{\mathbf{a},\mathcal{W}^{2}}:\mathbf{a},\mathbf{v_{1}},\mathbf{v_{2}}\text{ satisfy \eqref{eq:def}}\right\}\geq 1-\frac{102}{5}\frac{1}{d^{1-2\eta}}.\]
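To close, here is a minimal Monte Carlo sketch (not part of the paper; the choices \(d=2000\), \(N=10\), the seed, and the sample size are arbitrary) suggesting how tightly the distances in Theorems 8–10 concentrate around the predicted values \(\sqrt{1/12+1/(6N)+r_{\mathbf{a}}^{2}}\) and \(\sqrt{1/6+1/(3N)}\) already in a few thousand dimensions.

```python
import math, random

def ndist(v, w, N):
    """Normalized Euclidean distance d_d(v, w) = dist(v, w) / (sqrt(d) * N)."""
    d = len(v)
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(v, w))) / (math.sqrt(d) * N)

def check(d=2000, N=10, trials=2000, seed=1):
    rng = random.Random(seed)
    a = tuple(rng.randint(0, N) for _ in range(d))        # a fixed point a in W
    c = (N / 2,) * d                                       # center of the cube
    r_a = ndist(a, c, N)
    side_aw = math.sqrt(1 / 12 + 1 / (6 * N) + r_a ** 2)   # predicted d_d(a, w_j)
    side_ww = math.sqrt(1 / 6 + 1 / (3 * N))               # predicted d_d(w_1, w_2)
    err_aw = err_ww = 0.0
    for _ in range(trials):
        w1 = tuple(rng.randint(0, N) for _ in range(d))
        w2 = tuple(rng.randint(0, N) for _ in range(d))
        err_aw = max(err_aw, abs(ndist(a, w1, N) - side_aw),
                     abs(ndist(a, w2, N) - side_aw))
        err_ww = max(err_ww, abs(ndist(w1, w2, N) - side_ww))
    return err_aw, err_ww

# Even the worst deviations over a few thousand random pairs are already small.
print(check())
```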
Let $\mathcal{S}$ be a finite set of integer points in $\mathbb{R}^d$; we assume that this set has many symmetries. Let $P \in \mathbb{R}^d$ be a fixed point. We compute the distances from $P$ to the points of $\mathcal{S}$ and compare the results. In some of the most common cases, if the dimension is sufficiently large, the comparison leads to unexpected conclusions. For example, if $\mathcal{S}$ is the set of vertices of a hypercube in $\mathbb{R}^d$ and $P$ is a point inside it, then almost all triangles $PAB$ with $A,B \in \mathcal{S}$ are almost equilateral. Or, if $P$ is near the center of the cube, then almost all triangles $PAB$ with $A \in \mathcal{S}$ and $B$ anywhere in the hypercube are almost right triangles.
2310.00130
Lower bound for KVol on the minimal stratum of translation surfaces
We are interested in the algebraic intersection of closed curves of a given length on translation surfaces. Namely, we study the quantity KVol which measures how many times can two closed curves of a given length intersect. In this paper, we construct families of translation surfaces in each connected component of the minimal stratum $\mathcal{H}(2g-2)$ of the moduli space of translation surfaces of genus $g \geq 2$ such that KVol is arbitrarily close to the genus of the surface, which is conjectured to be the infimum of KVol on $\mathcal{H}(2g-2)$.
Julien Boulanger
2023-09-29T20:44:02
http://arxiv.org/abs/2310.00130v1
# Lower bound for KVol on the minimal stratum of translation surfaces ###### Abstract In this paper we are interested in the algebraic intersection of closed curves of a given length on translation surfaces. We study the quantity KVol, defined in [6] and studied in [6, 7, 2, 3], and we construct families of translation surfaces in each connected component of the minimal stratum \(\mathcal{H}(2g-2)\) of the moduli space of translation surfaces of genus \(g\geq 2\) such that KVol is arbitrarily close to the genus of the surface, which is conjectured to be the infimum of KVol on \(\mathcal{H}(2g-2)\). ## 1 Introduction Given a closed oriented surface \(X\), the algebraic intersection \(\operatorname{Int}(\cdot,\cdot)\) defines a symplectic bilinear form on the first homology group \(H_{1}(X,\mathbb{R})\). When \(X\) is endowed with a Riemannian metric, we can define the quantity \[\operatorname{KVol}(X):=\operatorname{Vol}(X)\sup_{\alpha,\beta}\frac{\operatorname{Int}(\alpha,\beta)}{l_{g}(\alpha)l_{g}(\beta)}\] where the supremum ranges over all piecewise smooth closed curves \(\alpha\) and \(\beta\) in \(X\). Here, \(\operatorname{Vol}(X)\) denotes the Riemannian volume, and \(l_{g}(\alpha)\) (resp. \(l_{g}(\beta)\)) denotes the length of \(\alpha\) (resp. \(\beta\)) with respect to the metric. The study of KVol originates in the work of Massart [10] and Massart-Muetzel [12]. In fact, KVol is also well defined if the Riemannian metric has isolated singularities, and it has been studied recently specifically in the case of translation surfaces (see [6], [7], [2], [3]) for which one could hope to get explicit computations of KVol. Although it is easy to make KVol go to infinity by pinching a non-separating curve, it cannot be made arbitrarily small: Massart and Muetzel [12] showed that for any closed oriented surface \(X\) with a Riemannian metric, we have \[\operatorname{KVol}(X)\geq 1,\] with equality if and only if \(X\) is a torus and the metric is flat. In light of this result, it is interesting to wonder which are the Riemannian (resp. hyperbolic, flat) surfaces of fixed genus \(g\) having small KVol. This question turns out to be difficult to answer. In [12], KVol is studied as a function over the moduli space of hyperbolic surfaces of fixed genus: the authors provide asymptotic bounds when the systolic length goes to zero. In [7] Cheboui, Kessi and Massart extend the study of KVol to the moduli space of translation surfaces of genus 2 having a single singularity. Namely, they investigate the following quantity: \[\mathcal{K}(\mathcal{H}(2)):=\inf_{X\in\mathcal{H}(2)}\mathrm{KVol}(X)\] In particular, they conjecture that \(\mathcal{K}(\mathcal{H}(2))=2\) and show that \(\mathcal{K}(\mathcal{H}(2))\leq 2\) by exhibiting a family of (square-tiled) translation surfaces \(L(n+1,n+1)\) having KVol converging to 2 as \(n\) goes to infinity. In this note, we tackle the same question in any genus \(g\geq 2\). More precisely, we conjecture that: \[\mathcal{K}(\mathcal{H}(2g-2)):=\inf_{X\in\mathcal{H}(2g-2)}\mathrm{KVol}(X)=g\] and we construct surfaces in \(\mathcal{H}(2g-2)\) having their KVol arbitrarily close to \(g\), showing: **Theorem 1**.: _For all \(g\geq 2\),_ \[\mathcal{K}(\mathcal{H}(2g-2))\leq g.\] It has to be remarked that translation surfaces with a single singularity are very specific surfaces and that the infimum of KVol over all Riemannian surfaces of genus \(g\) does not grow linearly with the genus, as is expected in the case of \(\mathcal{H}(2g-2)\). 
In particular, as suggested by Sabourau to the author, a construction of [5] gives a surface \(X_{g}\) for each genus \(g\geq 1\) such that \[\mathrm{KVol}(X)\leq C\frac{g}{\log(g+1)^{2}}\] for a given constant \(C>0\). This bound can be obtained using Theorem 1.5 of [12], which compares KVol and the systolic volume, and the fact that the (homological) systolic volume of the surfaces constructed in [5] grows as \(C^{\prime}\frac{g}{\log(g+1)^{2}}\). However, in the case of translation surfaces having a single singularity, it is not possible to construct similar surfaces, as Boissy and Geninska [1] (and independently Judge and Parlier [8]) showed that in this setting the systolic volume has a linear bound in the genus. This is the reason why we expect the infimum of KVol over \(\mathcal{H}(2g-2)\) to grow linearly with \(g\). _Remark 1.0.1_.: Concerning the lower bound on \(\mathrm{KVol}\), Theorem 1.1 of [4] gives directly that for any constant \(A>0\), there exists \(c_{A}>0\) such that for any Riemannian surface \(X\) of genus \(g\) with \(\mathrm{SysVol}(X)<A\), we have: \[c_{A}\frac{g}{\log(g+1)^{2}}\leq\mathrm{KVol}(X).\] It would be interesting to know whether the same inequality holds with a universal constant \(c>0\) which does not depend on \(A\). It should be noted that such a result has recently been shown for hyperbolic surfaces in the case where the algebraic intersection is replaced by the geometric intersection, see [13]. The proof in this latter case relies on the existence of a short figure eight geodesic. Connected components of \(\mathcal{H}(2g-2)\). With Theorem 1 in mind, it is interesting to wonder whether the bound \(g\) can be achieved in any connected component of \(\mathcal{H}(2g-2)\). Kontsevich and Zorich [9] classified the connected components of any stratum of translation surfaces, and showed in particular that for any \(g\geq 4\), \(\mathcal{H}(2g-2)\) has three connected components: the hyperelliptic component \(\mathcal{H}^{hyp}(2g-2)\), and two other connected components \(\mathcal{H}^{even}(2g-2)\) and \(\mathcal{H}^{odd}(2g-2)\) distinguished by the spin invariant. In genus \(2\), the only connected component is hyperelliptic, while in genus \(3\) there are two connected components: odd spin and hyperelliptic. It turns out that the family of surfaces we construct in Section \(2\) belongs to odd spin for any \(g\geq 2\). In Section \(3\) we give a family of hyperelliptic surfaces \(H^{n}_{g}\) and even spin surfaces \(M^{n}_{g}\) such that both \(\mathrm{KVol}(H^{n}_{g})\) and \(\mathrm{KVol}(M^{n}_{g})\) converge to \(g\) as \(n\) goes to infinity. In particular, we show: **Theorem 2**.: * \(\mathcal{K}(\mathcal{H}^{hyp}(2g-2))\leq g\) _for any_ \(g\geq 2\)_._ * \(\mathcal{K}(\mathcal{H}^{odd}(2g-2))\leq g\) _for any_ \(g\geq 3\)_._ * \(\mathcal{K}(\mathcal{H}^{even}(2g-2))\leq g\) _for any_ \(g\geq 4\)_._ We assume familiarity with the geometry of translation surfaces, and encourage the reader to check out the surveys [15], [14] and [11]. Acknowledgments. I would like to thank E. Lanneau, D. Massart and S. Sabourau for useful and enlightening discussions related to the work presented here. ## 2 Proof of Theorem 1 In this section, we prove Theorem 1 by exhibiting a family of surfaces \(L^{n}_{g}\) for \(g,n\geq 2\), (having odd spin parity and) such that \(L^{n}_{g}\) has genus \(g\) for each \(n\geq 2\) and \[\lim_{n\to\infty}\mathrm{KVol}(L^{n}_{g})=g.\] ### Construction of the surface \(L^{n}_{g}\). 
Given \(g\geq 2\) and \(n\geq 2\), define \(L^{n}_{g}\) as the \((g(n+1)-1)\)-square translation surface of genus \(g\) with a single conical point which forms a staircase with steps of length and height \(n\), as in Figure 1. Figure 1: The surface \(L^{4}_{3}\) on the left, and \(L^{3}_{4}\) on the right. The identifications are such that each horizontal (resp. vertical) rectangle is a cylinder. A basis of the homology. Let \(e_{1},\cdots,e_{g}\) (resp. \(f_{1},\cdots,f_{g}\)) be the horizontal (resp. vertical) saddle connections (see Figure 2), seen as homology classes. Notice that for odd \(i\), \(e_{i}\) can be represented by a closed geodesic which does not pass through the singularity. We will refer to such homology classes as _non singular_ homology classes. This is also the case of \(f_{j}\) for even \(j\). On the contrary, for even \(i\) (resp. odd \(j\)), the class \(e_{i}\) (resp. \(f_{j}\)) will be called _singular_ as it can only be represented by closed geodesics passing through the singularity. The intersection matrix of the \(e_{i}\) and \(f_{j}\) is given by the following table: \begin{tabular}{|c||c c c c c c c|} \hline Int\((e_{i},f_{j})\) & \(e_{1}\) & \(e_{2}\) & \(e_{3}\) & \(e_{4}\) & \(e_{5}\) & \(\cdots\) & \(e_{g}\) \\ \hline \hline \(f_{1}\) & 1 & -1 & 0 & 0 & 0 & \(\cdots\) & 0 \\ \(f_{2}\) & 0 & 1 & 0 & 0 & 0 & \(\cdots\) & 0 \\ \(f_{3}\) & 0 & -1 & 1 & -1 & 0 & \(\cdots\) & 0 \\ \(f_{4}\) & 0 & 0 & 0 & 1 & 0 & \(\cdots\) & 0 \\ \(f_{5}\) & 0 & 0 & 0 & -1 & 1 & \(\cdots\) & 0 \\ \(\cdots\) & & & & & & \(\cdots\) & \(\cdots\) \\ \(f_{g}\) & 0 & 0 & 0 & 0 & 0 & \(\cdots\) & 1 \\ \hline \end{tabular} To see this, notice that for odd \(i\), the fact that \(e_{i}\) can be represented by a non-singular closed curve gives \(\text{Int}(e_{i},f_{j})=\delta_{i,j}\). The same holds for \(f_{j}\) for even \(j\). Next, given \(i\) even, the homology class \(e_{i-1}+e_{i}+e_{i+1}\) corresponds to a non-singular curve in \(L_{g}^{n}\) which intersects \(f_{j}\) if and only if \(j=i\). In particular, \[\operatorname{Int}(e_{i-1},f_{j})+\operatorname{Int}(e_{i},f_{j})+\operatorname{Int}(e_{i+1},f_{j})=\delta_{i,j}\] But the fact that both \(i-1\) and \(i+1\) are odd gives \(\operatorname{Int}(e_{i-1},f_{j})=\delta_{i-1,j}\) and \(\operatorname{Int}(e_{i+1},f_{j})=\delta_{i+1,j}\), so that \[\operatorname{Int}(e_{i},f_{j})=\delta_{i,j}-\delta_{i-1,j}-\delta_{i+1,j}\] Further, as \(\operatorname{Int}(e_{i-1}+e_{i}+e_{i+1},e_{i})=0\) for even \(i\) (resp. \(\operatorname{Int}(f_{j-1}+f_{j}+f_{j+1},f_{j})=0\) for odd \(j\)), the same argument gives that the \(e_{i}\)'s (resp. the \(f_{j}\)'s) do not intersect each other. As a concluding remark, notice that closed geodesics representing \(e_{1}\) and \(f_{1}\) intersect once and have respective lengths \(1\) and \(n\), and in particular: \[\operatorname{KVol}(L_{g}^{n})\geq\operatorname{Vol}(L_{g}^{n})\cdot\frac{\operatorname{Int}(e_{1},f_{1})}{l(e_{1})l(f_{1})}=(g(n+1)-1)\cdot\frac{1}{n}.\] Computation of the spin. As explained in [9, Section 3], it is easy to compute the spin parity of an abelian differential \(\omega\) given a symplectic basis of the first homology group \((a_{i},b_{i})_{1\leq i\leq g}\) represented by smooth curves, and we have: \[\varphi(\omega)=\sum_{i=1}^{g}\Omega(a_{i})\Omega(b_{i})\mod 2.\] where \(\Omega(a_{i})=ind_{a_{i}}+1\) and \(2\pi\cdot ind_{a_{i}}\) is the total change of angle between the tangent vector to the curve and the horizontal foliation. 
Further, for any \(a,b\in H^{1}(X,\omega)\), we have: \[\Omega(a+b)=\Omega(a)+\Omega(b)+\text{Int}(a,b). \tag{1}\] In the case of \(L_{g}^{n}\), we use the basis \((a_{i},b_{i})_{1\leq i\leq g}\) defined by: \[a_{i}=\left\{\begin{array}{cc}e_{i}&\text{ if $i$ is even}\\ e_{i-1}+e_{i}+e_{i+1}&\text{ if $i$ is odd}\end{array}\right.\qquad\qquad\text{ and }\qquad\qquad b_{i}=f_{i}.\] The index of each \(a_{i}\) is \(0\) as well as the index of each \(b_{i}\) for even \(i\) because they correspond to non-singular homology classes. Further, using (1) we show that \(\Omega(b_{1})=0\), as well as \(\Omega(b_{g})=0\) if \(g\) is odd, while \(\Omega(b_{i})=1\) for odd \(i\), \(1<i<g\). In particular, we deduce that the spin structure of \(L_{g}^{n}\) has odd parity. Further, it should be remarked that although \(L_{2}^{n}\) is hyperelliptic for any \(n\geq 2\), \(L_{g}^{n}\) is not hyperelliptic if \(g\geq 3\). This is because an hyperelliptic involution would have to fix each cylinder, and hence must act as an involution of \(R_{1}\cup C_{1}\) (with the notations of Figure 3) so that it must act as an involution on \(C_{1}\), but it must also act as an involution of \(C_{1}\cup R_{2}\cup C_{2}\), which is then impossible. A useful model for the surfaces \(L_{g}^{n}\).Let us finish this section by giving another model for \(L_{g}^{n}\), which, although less intuitive at first sight, turns out to be helpful for the study of the intersections of saddle connections on \(L_{g}^{n}\). This model is obtained from a cut and paste procedure which is described in Figure 3 in the example of \(L_{4}^{3}\). The main idea is to glue together all the squares at the corners of \(L_{g}^{n}\) to form a _core staircase_ to which are attached the _long rectangles_. A general picture is given in Figure 4. ### An upper bound on \(KVol(L_{g}^{n})\) In this section we provide estimates for KVol on the surface \(L_{g}^{n}\). Recall from [12, Section 3] that the supremum in the definition of KVol can be taken over pairs of simple closed geodesics. In the case of translation surfaces, closed geodesics are homologous to unions of saddle connections. Since saddle connections are closed curves on \(L_{g}^{n}\) (which has a single singularity), we have: \[\text{KVol}(L_{g}^{n})=\sup_{\alpha,\beta\text{ saddle connections}}\frac{ \text{Int}(\alpha,\beta)}{l(\alpha)l(\beta)}\] In this setting, we show: **Theorem 2.2.1**.: _For any pair of saddle connections \(\alpha,\beta\) on \(L_{g}^{n}\), we have_ \[\frac{\text{Int}(\alpha,\beta)}{l(\alpha)l(\beta)}\leq\frac{1}{n}(\frac{n+1}{ n})^{2}+\frac{6}{n^{2}} \tag{2}\] From this result, we deduce directly that \[\text{KVol}(L_{g}^{n})\leq(g(n+1)-1)(\frac{(n+1)^{2}}{n^{3}}+\frac{6}{n^{2}}).\] Further, as remarked in Section 2, \[\frac{g(n+1)-1}{n}\leq KVol(L_{g}^{n}),\] so that, in particular, \(\text{KVol}(L_{g}^{n})\longrightarrow g\) as \(n\) goes to infinity, proving Theorem 1. Proof of Theorem 2.2.1.: Let \(\alpha\) and \(\beta\) be two saddle connections on \(L_{g}^{n}\). We decompose the homolgy class of \(\alpha\) (resp. \(\beta\)) in the basis \((e_{1},\cdots,e_{g},f_{1},\cdots,f_{g})\) of the homology. 
The first case we deal with is as follows: **Lemma 2.2.2**.: _For any saddle connection \(\alpha\) in \(L_{g,n}\) being in homology an integer combination of the \(e_{i}\), \(i\) odd, and the \(f_{j}\), \(j\) even, and any saddle connection \(\beta\), we have_ \[l(\beta)\geq n|\text{Int}(\alpha,\beta)|\] _In particular_ \[\frac{\text{Int}(\alpha,\beta)}{l(\alpha)l(\beta)}\leq\frac{1}{n}.\] Proof of Lemma 2.2.2.: As seen in the table of the intersections, the non-singular \(e_{i}\) or \(f_{j}\) do not intersect each other, and in particular do not intersect \(\alpha\). It Figure 3: \(L_{4}^{3}\) and its alternative model. follows that if we decompose \(\beta\) in the basis of the homology \((e_{1},f_{1},\cdots,e_{g},f_{g})\), the intersection \(\mathrm{Int}(\alpha,\beta)\) will be at most the number of singular \(e_{i}\) and \(f_{j}\) in the decomposition. But each singular \(e_{i}\) or \(f_{j}\) in the decomposition of \(\beta\) corresponds to a trip through a long rectangle \(R_{i}\) and accounts for a length at least \(n\), so that: \[l(\beta)\geq n|\mathrm{Int}(\alpha,\beta)|.\] Given that \(l(\alpha)\geq 1\), we get \[\frac{\mathrm{Int}(\alpha,\beta)}{l(\alpha)l(\beta)}\leq\frac{1}{n}.\] In particular, we deduce from Lemma 2.2.2 that Equation (2) holds if either \(\alpha\) or \(\beta\) is an integer combination of the non-singular \(e_{i}\) and \(f_{j}\) only. In the rest of the proof, we will assume that neither \(\alpha\) nor \(\beta\) correspond to such saddle connections. In the alternative model for \(L^{n}_{g}\), this says exactly that \(\alpha\) and \(\beta\) have to cross a long rectangle \(R_{i}\). Figure 4: The alternative model for \(L^{n}_{g}\) is made of a core staircase to which are attached the long rectangles \(R_{i}\). The curve \(E_{i}\) (resp. \(F_{j}\)) represents the homology class \(e_{i}\) (resp. \(f_{j}\)). In particular, we can decompose the saddle connections \(\alpha\) and \(\beta\) by cutting them each time they enters or leaves a rectangle \(R_{i}\) (lengthwise). This gives a decomposition into smaller (non-closed) segments \(\alpha=\alpha_{1}\cup\cdots\cup\alpha_{k}\) (resp. \(\beta=\beta_{1}\cup\cdots\cup\beta_{l}\)) alternating between: 1. long segments (of length at least \(n\)) inside a long rectangle \(R_{i}\), 2. short segments which stay inside the core staircase of Figure 4. By convention, we will include the endpoints in the short segments, apart from the singularities (the possible singular intersection will be counted separately). Since long segments and short segments are alternating, there are at least \(\max(\lfloor k/2\rfloor,1)\) long segments and there are at most \(\lceil k/2\rceil\) short segments for \(\alpha\). Notice that: * Long segments and short segments do not lie in the same part of the surface, hence they cannot intersect. * Any two short segments \(\alpha_{i}\) and \(\beta_{j}\) intersect at most once, as no side of the core staircase is identified to another side of the core staircase. * Concerning the intersection of long segments, we have: **Lemma 2.2.3**.: _Given two long segments \(\alpha_{i}\) and \(\beta_{j}\) in the same rectangle \(R\), we have_ \[\frac{\#\alpha_{i}\cap\beta_{j}}{l(\alpha_{i})l(\beta_{j})}\leq\frac{1}{n}( \frac{n+1}{n})^{2}+\frac{1}{n^{2}}\] _where \(\#\alpha_{i}\cap\beta_{j}\) denotes the cardinal of the set of intersection points._ Proof.: The proof of this Lemma is similar to the proof of Proposition 2.5 in [7]. We first identify the sides of each long rectangle \(R\) to form a torus \(T\). 
Then, for each long segment \(\alpha_{i}\) (resp. \(\beta_{j}\)) contained in the long rectangle \(R\), we construct a closed curve \(\tilde{\alpha}_{i}\) (resp. \(\tilde{\beta}_{j}\)) on the corresponding torus \(T\). This construction can be done by adding to \(\alpha_{i}\) (resp. \(\beta_{j}\)) a small portion of curve of length at most one, and removes at most one intersection, so that 1. \(\operatorname{Int}(\tilde{\alpha}_{i},\tilde{\beta}_{j})\geq\#\alpha_{i}\cap \beta_{j}-1\). 2. \(l(\tilde{\alpha}_{i})\leq l(\alpha_{i})+1\) and \(l(\tilde{\beta}_{j})\leq l(\beta_{j})+1\). Moreover, \(l(\alpha_{i})\geq n\) and \(l(\beta_{j})\geq n\) so that: 1. \(l(\alpha_{i})+1\leq l(\alpha_{i})(1+1/n)\) (and the same holds for \(\beta_{j}\)), 2. \(1\leq\frac{l(\alpha_{i})l(\beta_{j})}{n^{2}}\). Now, since \(\text{KVol}(T)=1\) on the flat torus \(T\), and given that the rectangle \(R\) has area \(n\) (and so does the torus \(T\)), we get: \[\frac{\text{Int}(\tilde{\alpha}_{i},\tilde{\beta}_{j})}{l(\tilde{\alpha}_{i})l( \tilde{\beta}_{j})}\leq\frac{1}{n}.\] In particular \[\#\alpha_{i}\cap\beta_{j} \leq\text{Int}(\tilde{\alpha}_{i},\tilde{\beta}_{j})+1 \text{by (a)}\] \[\leq\frac{1}{n}(l(\tilde{\alpha}_{i})l(\tilde{\beta}_{j}))+1\] \[\leq\frac{1}{n}(l(\alpha_{i})+1)(l(\beta_{j})+1)+1 \text{by (b)}\] \[\leq\frac{1}{n}l(\alpha_{i})(1+\frac{1}{n})l(\beta_{j})(1+\frac{ 1}{n})+\frac{l(\alpha_{i})l(\beta_{j})}{n^{2}} \text{by (c) and (d)}.\] This gives Theorem 2.2.1. **End of the proof.** Counting all intersections, we have: \[\text{Int}(\alpha,\beta)\leq(\sum_{i,j}\#\alpha_{i}\cap\beta_{j})+1\] where the added intersection accounts for the possible singular intersection. Using the preceeding estimates, we have: \[\text{Int}(\alpha,\beta) \leq\left(\sum_{\alpha_{i},\beta_{j}\text{ long segments}}\# \alpha_{i}\cap\beta_{j}\right)+\left(\sum_{\alpha_{i},\beta_{j}\text{ short segments}}\#\alpha_{i}\cap\beta_{j}\right)+1\] \[\leq\left(\sum_{\alpha_{i},\beta_{j}\text{ long segments}}( \frac{1}{n}(\frac{n+1}{n})^{2}+\frac{1}{n^{2}})l(\alpha_{i})l(\beta_{j}) \right)+\left(\sum_{\alpha_{i},\beta_{j}\text{ short segments}}1\right)+1\] \[\leq(\frac{1}{n}(\frac{n+1}{n})^{2}+\frac{1}{n^{2}})\left(\sum_{ \alpha_{i},\beta_{j}\text{ long segments}}l(\alpha_{i})l(\beta_{j})\right)+ \lceil\frac{k}{2}\rceil\lceil\frac{l}{2}\rceil+1\] \[\leq(\frac{1}{n}(\frac{n+1}{n})^{2}+\frac{1}{n^{2}})l(\alpha)l( \beta)+\lceil\frac{k}{2}\rceil\lceil\frac{l}{2}\rceil+1\] Now, since there are at least \(\max(\lfloor k/2\rfloor,1)\) long segments of \(\alpha\), each long segment having length at least \(n\), we get \(l(\alpha)\geq\max(\lfloor k/2\rfloor,1)n\), so that \(\frac{k-1}{2}\leq\frac{l(\alpha)}{n}\), and \[\lceil\frac{k}{2}\rceil\leq\frac{k+1}{2}\leq\frac{l(\alpha)}{n}+1\leq\frac{2l (\alpha)}{n}\] where the last inequality comes from \(l(\alpha)\geq n\). Similarly, we have \[\lceil\frac{l}{2}\rceil\leq\frac{l+1}{2}\leq\frac{2l(\beta)}{n}\] so that \[\operatorname{Int}(\alpha,\beta) \leq(\frac{1}{n}(\frac{n+1}{n})^{2}+\frac{1}{n^{2}})l(\alpha)l( \beta)+\frac{4}{n^{2}}l(\alpha)l(\beta)+1\] \[\leq(\frac{1}{n}(\frac{n+1}{n})^{2}+\frac{5}{n^{2}})l(\alpha)l( \beta)+\frac{l(\alpha)l(\beta)}{n^{2}}\] again using that \(l(\alpha)\geq n\) and \(l(\beta)\geq n\). 
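The two bounds obtained above can be made concrete with a short numerical check. The sketch below (added for illustration; it is not part of the argument) evaluates the lower bound \((g(n+1)-1)/n\) and the upper bound \((g(n+1)-1)((n+1)^{2}/n^{3}+6/n^{2})\) for a few values of \(g\) and increasing \(n\), showing how they squeeze \(\mathrm{KVol}(L^{n}_{g})\) towards \(g\).

```python
# Illustrative check of the bounds around KVol(L_g^n) established above:
#   (g(n+1) - 1)/n  <=  KVol(L_g^n)  <=  (g(n+1) - 1) * ((n+1)^2/n^3 + 6/n^2).
def lower_bound(g, n):
    return (g * (n + 1) - 1) / n

def upper_bound(g, n):
    return (g * (n + 1) - 1) * ((n + 1) ** 2 / n ** 3 + 6 / n ** 2)

for g in (2, 3, 10):
    for n in (10, 100, 10000):
        print(f"g={g:2d}  n={n:6d}  "
              f"{lower_bound(g, n):.4f} <= KVol(L_g^n) <= {upper_bound(g, n):.4f}")
# Both bounds tend to g as n grows, which is exactly the content of Theorem 1.
```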
## 3 Even spin and hyperelliptic families We conclude this paper by giving a hyperelliptic family of surfaces \(H^{n}_{g}\) for \(g\geq 3\) and an even spin family of surfaces \(M^{n}_{g}\) (for \(g\geq 4\)) such that for fixed \(g\), \(\operatorname{KVol}(H^{n}_{g})\) and \(\operatorname{KVol}(M^{n}_{g})\) converge to \(g\) as \(n\) goes to infinity. The proof is in fact similar to the case of \(L^{n}_{g}\), as each surface can be decomposed into core polygons (giving rise to short segments) and long rectangles (giving rise to long segments). These two families of surfaces have the property that each edge of a core polygon is glued to an edge of a long rectangle, which allows to generalize Lemma 2.2.3. Further, the curves staying in the core polygons do not intersect each other and the conclusion of Lemma 2.2.2 can be generalized to these families of surfaces. This allows to give bounds for \(\operatorname{KVol}(H^{n}_{g})\) and \(\operatorname{KVol}(M^{n}_{g})\) which are easily shown to converge to \(g\) as \(g\) is fixed and \(n\) goes to infinity. ### The family \(H^{n}_{g}\) A convenient way to construct a family of hyperelliptic surfaces is to copy the staircase model of the double regular \((2g+1)\)-gon. However, we need each _long rectangle_ to have area \(n\). One way to do this is to set the lengths of the horizontal and vertical curves \(e_{i}\) and \(f_{j}\) drawn in the left of Figure 5 as \[l(e_{i})=n^{\frac{g-i-1}{g-1}}\text{ and }l(f_{j})=n^{\frac{i-1}{g-1}}.\] Next, we distinguish the core polygons \(C_{i}\) and the big rectangles \(R_{i}\) and proceed with the proof as in the case of \(L^{n}_{g}\). Notice that the \(e_{i}\)'s (resp. the \(f_{j}\)'s) are pairwise non-intersecting and that the intersection of the \(e_{i}\) and the \(f_{j}\) is given by the following table: \begin{tabular}{|c||c c c c c c|} \hline \(\operatorname{Int}(e_{i},f_{j})\) & \(e_{1}\) & \(e_{2}\) & \(e_{3}\) & \(e_{4}\) & \(e_{5}\) & \(\cdots\) \\ \hline \hline \(f_{1}\) & 1 & 0 & 0 & 0 & 0 & \(\cdots\) \\ \(f_{2}\) & -1 & 1 & 0 & 0 & 0 & \(\cdots\) \\ \(f_{3}\) & 1 & -1 & 1 & 0 & 0 & \(\cdots\) \\ \(f_{4}\) & -1 & 1 & -1 & 1 & 0 & \(\cdots\) \\ \(f_{5}\) & 1 & -1 & 1 & -1 & 1 & \(\cdots\) \\ \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) \\ \hline \end{tabular} This allows to get an adapted version of Lemma 2.2.2: **Lemma 3.1.1**.: _The closed saddle connections \(\gamma\) contained in the core polygons correspond to the homology classes \(e_{i}\), \(f_{i-1}\) and \(e_{i}+f_{i-1}\) for \(2\leq i\leq g\). For any such saddle connection \(\gamma\) and any other saddle connection \(g\), we have:_ \[\frac{\text{Int}(\gamma,g)}{l(\gamma)l(g)}\leq\frac{1}{n}.\] Further, similarly to Lemma 2.2.3, we have: **Lemma 3.1.2**.: _For any two saddle connections \(\alpha\) and \(\beta\) which are not contained in the core polygons \(C_{i}\), we have_ \[\frac{\text{Int}(\alpha,\beta)}{l(\alpha)l(\beta)}\leq\frac{1}{n}(1+n^{-\frac {1}{g}})^{2}+\frac{6}{n^{2}}.\] Using that the area of \(H_{g}^{n}\) is \(gn+(g-1)n^{\frac{g-1}{g}}\), we get that \[g+(g-1)n^{\frac{-1}{g}}\leq KVol(H_{g}^{n})\leq(g+(g-1)n^{\frac{-1}{g}})((1+n^ {-\frac{1}{g}})^{2}+\frac{6}{n}),\] where the lower bound comes from the fact that \(e_{1}\) and \(f_{1}\) are intersecting once, and \(l(e_{1})l(f_{1})=n\). Hence, for fixed \(g\), \(\text{KVol}(H_{g}^{n})\) goes to \(g\) as \(n\) goes to infinity. Figure 5: On the left, a combinatorial model for \(H_{g}^{n}\). 
On the right, the example of \(H_{3}^{n}\). ### The family \(M_{g}^{n}\) Similarly to \(L_{g}^{n}\) and \(H_{g}^{n}\), it is possible to construct an even spin family of translation surfaces \(M_{g}^{n}\) such that, for any fixed \(g\geq 4\), \(\mathrm{KVol}(M_{g}^{n})\) goes to \(g\) as \(n\) goes to infinity. For example, construct each \(M_{g}^{n}\) from \(H_{3}^{n}\) by adding steps as in \(L_{g}^{n}\), see Figure 6. As we have seen in the case of \(L_{g}^{n}\), the operation of adding steps does not change the parity of the spin structure. In particular, the parity of the spin structure of \(M_{g}^{n}\) is the same as that of \(H_{3}^{n}\), that is, even. Further, similarly to \(L_{g}^{n}\), the surface \(M_{g}^{n}\) is not hyperelliptic, as a hyperelliptic involution would have to fix each cylinder, and hence act as an involution of \(C_{2}\), \(R_{3}\) and \(C_{3}\) but also of \(C_{2}\cup R_{3}\cup C_{3}\), which is impossible. In the case of \(M_{g}^{n}\), an argument similar to the previous cases shows: \[\frac{gn+2\sqrt{n}+(g-3)}{n}\leq\mathrm{KVol}(M_{g}^{n})\leq\frac{gn+2\sqrt{n}+(g-3)}{n}((1+\frac{1}{\sqrt{n}})^{2}+\frac{6}{n}).\]
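As with \(L^{n}_{g}\), the bounds quoted for these two families can be checked numerically. The sketch below (illustration only, not part of the proof) evaluates the stated lower and upper bounds for \(\mathrm{KVol}(H^{n}_{g})\) and \(\mathrm{KVol}(M^{n}_{g})\) and shows both pairs converging to \(g\) for fixed \(g\) as \(n\) grows.

```python
# Bounds for KVol(H_g^n) and KVol(M_g^n) as stated above, for fixed g and large n.
import math

def h_bounds(g, n):
    base = g + (g - 1) * n ** (-1.0 / g)
    return base, base * ((1 + n ** (-1.0 / g)) ** 2 + 6.0 / n)

def m_bounds(g, n):
    base = (g * n + 2 * math.sqrt(n) + (g - 3)) / n
    return base, base * ((1 + 1 / math.sqrt(n)) ** 2 + 6.0 / n)

for n in (10, 10**4, 10**8):
    print("n =", n, " H_5^n bounds:", h_bounds(5, n), " M_5^n bounds:", m_bounds(5, n))
# Each pair of bounds approaches g = 5, consistent with Theorem 2.
```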
閉曲線同士の代数的交差数に関する問題、すなわち、与えられた長さの閉曲線がどの程度交差し得るかを測る量 KVol に興味がある。本論文では、種数 $g \geq 2$ の平行移動面(translation surface)のモジュライ空間の最小ストラタム $\mathcal{H}(2g-2)$ の各連結成分において、KVol が面の種数 $g$ にいくらでも近づくような平行移動面の族を構成する。これは、$\mathcal{H}(2g-2)$ 上での KVol の下限が $g$ であると予想されることを示唆するものである。
2309.14026
Magneto-optical trap performance for high-bandwidth applications
We study the dynamics of a magneto-optical trap (MOT) operating at high-bandwidth. We find the absolute importance of high recapture efficiency between cycles to maintain a practical atom number. We develop a simple model accounting for MOT trapping forces and pressure induced collisions and validate with experimental data using $\mathrm{{}^{87}Rb}$. This is then applied to quantum sensing predicting a shot noise limited sensitivity of $\mathrm{10^{-7}g/\sqrt{Hz}}$ for a gravimeter at 100 Hz operation. The results are useful for understanding MOT operation at high-bandwidth, particularly in the context of developing mobile high-bandwidth quantum inertial sensors targeting dynamic environments and navigation applications.
Benjamin Adams, Sachin Kinge, Kai Bongs, Yu-Hung Lien
2023-09-25T10:43:26
http://arxiv.org/abs/2309.14026v1
# Magneto-optical trap performance for high-bandwidth applications ###### Abstract We study the dynamics of a magneto-optical trap (MOT) operating at high-bandwidth. We find the absolute importance of high recapture efficiency between cycles to maintain a practical atom number. We develop a simple model accounting for MOT trapping forces and pressure induced collisions and validate with experimental data using \({}^{87}\)Rb. This is then applied to quantum sensing predicting a shot noise limited sensitivity of \(1\times 10^{-7}\frac{\mathrm{g}}{\sqrt{\mathrm{Hz}}}\) for a gravimeter at 100 Hz operation. The results are useful for understanding MOT operation at high-bandwidth, particularly in the context of developing mobile high-bandwidth quantum inertial sensors targeting dynamic environments and navigation applications. ## I Introduction The magneto-optical trap (MOT) has been the workhorse of cold atomic and molecular physics since its first demonstration [1; 2]. It efficiently cools and traps target species to a sub-millikelvin temperature and is indispensable to the generation of quantum gases, i.e. BEC and degenerate Fermi gas [3; 4]. The exploration of these fields has resulted in numerous applications in fundamental research and increasingly real-world scenarios such as metrology [5], sensing [6], quantum simulation [7; 8], quantum information processing [9; 10] and so on. Despite the remarkable progress in cold atom physics over the past few decades, most experiments are still conducted in laboratory settings due to the optical, radiofrequency and vacuum requirements for generating and manipulating cold atoms. However, the potential of cold atom technology has been increasingly recognised with efforts made to move experiments out of the laboratory for real-world benefits. Notably, this trend is evident in the area of quantum gravity sensing, with various demonstrator systems performing trials in different application environments [11; 12; 13; 14]. Promising application areas include geophysics, space, civil engineering and oil and mineral prospecting. The potential of the technology is based on its inherent and unparalleled sensitivity, along with the capability of providing drift-free measurements compared to classical approaches. Inertial navigation presents another promising application area for this technology. However, its practical implementation is hindered by the low sampling rate or bandwidth of quantum sensors making them less suited to highly dynamic environments. This limitation primarily arises from the time required for atomic sample preparation, which mainly involves loading the atomic trap, also known as the MOT loading time. As a result, bandwidth is typically limited to roughly 1 Hz. To increase bandwidth, there are various approaches available. One such method is to perform interleaved measurements, starting the next measurement while the previous one is still underway. This approach has demonstrated sampling rates of 3.75 Hz with a measurement time of 801 ms, but it relies on a long drop distance, resulting in a large form factor [15]. While sensitive, this implementation competes with the goal of creating small, robust, deployable devices and does not significantly increase bandwidth. Another approach involves using sequential measurements with a considerably reduced cycle time. This method has the potential to increase measurement bandwidth while minimising dead time due to replenishing trapped atoms between cycles. 
This approach trades bandwidth for reduced sensitivity and system demands. However, achieving 100 Hz operation restricts the cycle time to 10 ms, leaving only a few milliseconds for loading. Consequently, this approach utilises a short drop distance to maintain a high atom number. This smaller displacement ensures that most atoms can be recaptured between cycles, leading to a significant bandwidth increase. Alternatively, one could consider a short loading time with a long measurement time and adopt a 2D MOT or Zeeman slower to enhance the loading rate [16; 17]. However, this approach will also conflict with the desire for simpler, compact deployable systems. Quantum sensing is not widely explored at high-bandwidth although some atom interferometry has been performed, achieving sensitivities at the \(\sim\)pg/\(\sqrt{\mathrm{Hz}}\) level [18; 19; 20; 21]. This raises the question of how MOT dynamics and bandwidth are fundamentally connected and the implications for quantum sensing. In this paper, we explore high-bandwidth MOT dynamics in detail, making connections between MOT theory and experimental observations. We build a simple model and validate with experimental data before discussing the critical nature of efficient recapture; optimum parameters and limitations of the mechanism are also explored. The results are then applied to quantum sensing exploring the sensitivity performance limits of a high-bandwidth atom interferometer. This work highlights the utility of simple MOT physics in predicting the feasibility of MOT generation for a given bandwidth, duty cycle, trap size and other cloud properties. Study is performed with the \({}^{87}\)Rb\(\,\)D\({}_{2}\,(5^{2}\)S\({}_{1/2}\to 5^{2}\)P\({}_{3/2})\) transition. However, general findings apply to a broader range of cold atom experiments targeting higher bandwidth operation. ## II Model To simulate MOT dynamics we adopt the low-intensity theory of optical molasses for a two level atom in 1D illustrated in Fig. 1 [22]. This framework can be extended to obtain an expression for the MOT restoring force: \(\delta\) corresponds to the detuning from resonance, the \(\pm\) subscript accounts for the different detunings of the right and left directed beams, s denotes the saturation parameter and \(\Gamma\) is the natural linewidth of the transition. This force is numerically integrated to simulate atomic trajectories. Fig. 2 demonstrates the MOT restoring force acting on individual \({}^{87}\)Rb atoms with different initial velocities. This work concerns the \({}^{87}\)Rb D\({}_{2}\) (\(5^{2}\)S\({}_{1/2}\to 5^{2}\)P\({}_{3/2}\)) transition for which \(\Gamma=2\pi\times 6.065(9)\) MHz and \(\lambda=780.241\) nm. \[\mathrm{F}_{\mathrm{MOT}}=\hbar\mathrm{k}\frac{\Gamma}{2}\Bigg{[}\frac{\mathrm{ s}}{1+\mathrm{s}+(\frac{2\delta_{\wedge}}{\Gamma})^{2}}-\frac{\mathrm{s}}{1+ \mathrm{s}+(\frac{2\delta}{\Gamma})^{2}}\Bigg{]}, \tag{1}\] ## III Dynamics ### Intensity dependence For modelling purposes, a simulation cycle is split into two distinct regimes, drop and recapture. For lower bandwidth applications, requirements on MOT loading time are less stringent and so after dropping atoms, loading from background vapour is standard. The timescale for this is pressure dependent but typically takes a few 100 ms. Consequently, efficient recapture of atoms between cycles is essential for high-bandwidth operation. 
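As a concrete illustration of how trajectories like those in Figs. 2 and 3 can be generated, the sketch below integrates Eq. (1) for a single \({}^{87}\)Rb atom in 1D. The explicit form of the beam detunings \(\delta_{\pm}\) is not written out above, so the standard Doppler plus Zeeman form \(\delta_{\pm}=\delta\mp(\mathrm{k}v+\mu^{\prime}\mathrm{A}z/\hbar)\) is assumed here, together with illustrative values for the \({}^{87}\)Rb mass and the effective Zeeman coefficient; \(\Delta=-3\) is read as a detuning of \(-3\Gamma\). Sweeping the saturation parameter s in this sketch should reproduce the qualitative behaviour of Fig. 3, where low intensity leaves the restoring force too weak to return the atom within a few milliseconds.

```python
# Minimal 1D sketch of the model above: Eq. (1) integrated for one 87Rb atom.
# Assumed (not stated in the text): delta_pm = delta -/+ (k*v + mu'*A*z/hbar),
# mu'/hbar ~ 2*pi*1.4 MHz/G, and the 87Rb atomic mass given below.
import numpy as np

hbar = 1.0545718e-34               # J s
m_rb = 1.44316e-25                 # kg, 87Rb mass (assumed value)
Gamma = 2 * np.pi * 6.065e6        # rad/s, D2 natural linewidth
k = 2 * np.pi / 780.241e-9         # rad/m
mu_eff = 2 * np.pi * 1.4e6 / 1e-4  # rad/(s T), assumed effective Zeeman coefficient

def f_mot(z, v, s=3.0, delta=-3.0 * Gamma, A=0.16):
    """Eq. (1), with the field gradient A = 16 G/cm expressed as 0.16 T/m."""
    shift = k * v + mu_eff * A * z
    d_plus, d_minus = delta - shift, delta + shift
    rate = lambda d: s / (1.0 + s + (2.0 * d / Gamma) ** 2)
    return hbar * k * (Gamma / 2.0) * (rate(d_plus) - rate(d_minus))

def trajectory(v0, z0=0.0, t_end=5e-3, dt=1e-7, **kwargs):
    """Euler integration of a single-atom trajectory, as in Fig. 2."""
    z, v = z0, v0
    for _ in range(int(t_end / dt)):
        v += f_mot(z, v, **kwargs) / m_rb * dt
        z += v * dt
    return z, v

for v0 in (0.1, 0.2, 0.5):
    z, v = trajectory(v0)
    print(f"v0 = {v0:.1f} m/s -> after 5 ms: z = {1e3 * z:+.3f} mm, v = {v:+.3f} m/s")
```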
The recapture efficiency will not be 100% but the atom number does not decay to zero as atoms are loaded from the background vapour during recapture. There are two main mechanisms inhibiting recapture; the finite MOT restoring time and collisions between atoms in the MOT and the background vapour. We start by considering the finite restoration time. During freefall atoms move primarily along the vertical and so trajectories are modelled in 1D. For high-bandwidth applications the drop time (T\({}_{\mathrm{drop}}\)) will be \(\sim 5\) ms leading to an atom falling 0.13 mm. Given a typical trap radius of \(\sim 5\) mm, an atom will not fall far from the trap centre. However, despite this short distance, the recapture time is still finite limited by the restoring force towards the MOT centre. Fig. 3 shows a numerical simulation of single atom trajectories over multiple cycles, highlighting that for insufficient power the restoring force is too weak and the atom will not be recaptured. This can be seen in the loss of periodicity for the s = 1 trajectory. Therefore, to maximise bandwidth in experiments, an intensity significantly above the saturation intensity is required to minimise recapture time. Figure 1: Two counter-propagating laser beams of frequency \(\omega\) less than the atomic resonance frequency \(\omega_{0}\) illustrating 1D optical molasses. Atom propagates with velocity \(v_{z}\) towards the rightmost beam. Figure 3: Single atom trajectories in a 100 Hz \({}^{87}\)Rb MOT for variable intensity. s = 1 (blue solid), 3 (yellow dotted), 5 (green dash-dot) and 10 (red dashed). \(\Delta=-3\), duty cycle = 0.75, A = 16 G/cm. The white and grey regions correspond to the drop and recapture phases respectively. Figure 2: Numerical simulation of single atom trajectories for \({}^{87}\)Rb atoms with variable initial velocities illustrating under-damped motion occurring for s = 1, \(\Delta=-3\), A = 16 G/cm. Initial velocity v\({}_{0}\) (ms\({}^{-1}\)): 0.5 (green dashed), 0.2 (orange dash-dotted), 0.1 (blue solid). ### Temperature dependence To extend this, the dynamics of an atomic cloud are explored by simulating a 1000 atoms with numerical trajectories similar to those in Fig. 3. The atomic positions and velocities are normally distributed with \(\sigma_{\rm MOT}\) and \(\sigma_{\rm x}\) respectively. \(\sigma_{\rm MOT}\) is the cloud radius and \(\sigma_{\rm v}=\sqrt{\rm k_{B}T_{\rm MOT}/m_{\rm atom}}\) is the cloud's velocity spread where, \(\rm T_{\rm MOT}\) is the cloud temperature and \(\rm m_{\rm atom}\) is the mass of a single atom. To quantify capture, an atom is considered trapped if its final position is \(\rm|x_{f}|<0.1\) mm from the trap centre and its final speed is \(\rm|v_{f}|<\sigma_{v\,Doppler}\), where \(\sigma_{v\,Doppler}\) is the Doppler velocity. For cooling on the \({}^{87}\)Rb D\({}_{2}\) line, the Doppler cooling limit, \(\rm T_{D}=140\) uK, giving \(\sigma_{v\,Doppler}=0.12\,ms^{-1}\)[22]. The fraction of atoms satisfying the capture criteria at the end of the cycle is the restored fraction, \(\rm P_{\rm restored}\). Unless stated, we fix our bandwidth at \(\rm 100\,Hz\) giving a cycle length of 10 ms. Increasing duty cycle increases the drop time and reduces the recapture time. When the recapture time is \(<3\) ms, there is insufficient time to restore atoms to the MOT centre and the recapture efficiency declines. The restored fraction tends to a finite value for short recapture times (\(\sim 0.05\)). 
This results from the spatial extent of the MOT with respect to the capture region. For short recapture times, a fraction of atoms have not yet left the capture criteria region and are therefore counted as recaptured. Furthermore, our simple model applies a Gaussian intensity profile across the 1D trap, so for higher temperatures and longer drop times atoms move further away from the central, most intense region and experience weaker restoring forces. In general, low temperature is critical for cold-atom experiments, and our simulations highlight why it also aids recapture and bandwidth. ### Pressure dependence During an operational cycle, atoms in the cloud can also be lost through collisions with atoms in the background vapour. The probability of this not occurring for an atom during a cycle is given by \(\rm P_{\rm no\,collision}\) in Eq. (2). \(\tau\) is the mean free collision time and \(\rm T_{\rm cycle}\) is the time for a complete cycle (drop and recapture), as atoms can be lost from background collisions throughout an entire cycle. \[\rm P_{\rm no\,collision}=e^{-\frac{T_{\rm cycle}}{\tau}}. \tag{2}\] For recapture times \(>3\) ms, restoration losses are typically negligible (\(\rm P_{\rm restored}=1\)) and so Eq. (2) effectively represents the recaptured atom fraction for a single shot. Unless stated, we use MOT parameters of: \(\rm s=3\), \(\Delta=-3\), \(\rm A=14\,G/cm\), \(\rm T_{\rm MOT}=300\) uK, \(\sigma_{\rm MOT}=0.5\) mm, \(4\sigma_{\rm r}=20\) mm (\(1/\rm e^{2}\)) diameter, vapour pressure \(=2.9\times 10^{-7}\) mbar, \(\rm R=4.5\times 10^{9}\,s^{-1}\), \(\rm L=16.0\,s^{-1}\), \(\sigma_{0}=1\times 10^{-13}\,cm^{2}\), \(\rm C_{v}=21\,ms^{-1}\). \(\sigma_{\rm r}\) defines the trap size, \(\rm C_{v}\) is the capture velocity, and R and L define the MOT loading and loss rates respectively. A defines the trap field gradient and \(\sigma_{0}\) defines the collision cross section. More explicit details on these parameters will be given in the subsequent section. Fig. 5 shows the results of computing \(\rm P_{\rm no\,collision}\) and the mean free time over the \(10^{-9}-10^{-6}\) mbar range. For pressures approaching \(10^{-6}\) mbar, the collision timescale is comparable to the cycle time, reducing the recaptured fraction significantly. Note, the modelling only considers background collisions with \({}^{87}\)Rb atoms and assumes the absence of other species. Figure 4: Simulating restored atom fraction for a cloud of \({}^{87}\)Rb atoms in a 100 Hz MOT for variable duty cycle and cloud temperature. \(\rm T_{\rm MOT}\): 10 μK (blue solid), 100 μK (orange dashed), 1000 μK (green dash-dot). Figure 5: \(\rm P_{\rm no\,collision}\) (red solid) and mean free time (blue dashed) for variable pressure for \(\rm T_{\rm cycle}=10\,ms\)[23]. ## IV Atom number ### MOT loading The rate of change of atoms in the MOT is given by the balance between loading and loss of atoms; integrating this gives the number of atoms after loading for a period of time t in Eq. (3a). R and L are the loading and loss rates of the MOT and are given by Eqs. (3b) and (3c) respectively. \(\mathrm{A_{s}}\) is the trap surface area (\(4\pi\sigma_{\mathrm{r}}^{2}\)); the capture velocity \(\mathrm{C_{v}}\) is assumed to be \(21\,\mathrm{ms^{-1}}\) (see Appendix A for details). \(\mathrm{n_{b}}\) is the number density of particles in the background vapour, \(\sigma_{0}\) is the collision cross section and \(\mathrm{v_{th}}\) is the average thermal velocity of the background gas.
The number density of the particles is calculated from the ideal gas equation \(\mathrm{n_{b}}=\frac{\mathrm{P}}{\mathrm{k_{B}T}}\), with the vapour pressure obtained from the model in [24]. \[\mathrm{N(t)}=\frac{\mathrm{R}}{\mathrm{L}}(1-\mathrm{e^{-Lt}}). \tag{3a}\] \[\mathrm{R}=\frac{2\mathrm{A_{s}}\mathrm{C_{v}^{4}}\mathrm{n_{b}}}{\pi^{2}\mathrm{v_{th}^{3}}}. \tag{3b}\] \[\mathrm{L}=\frac{1}{\tau}=\mathrm{n_{b}}\sigma_{0}\mathrm{v_{th}}. \tag{3c}\] The rate equation sometimes includes an additional loss for inelastic collisions between atoms in the MOT. This changes the loss rate to \(\mathrm{L}\rightarrow\mathrm{L}+\beta\bar{\mathrm{n}}\), where \(\bar{\mathrm{n}}\) is the mean cloud density and \(\beta\) is a constant characterising this mechanism. This implies that two-body collisions can be neglected if \(\beta\bar{\mathrm{n}}\ll\mathrm{L}\). \(\beta\sim 1\times 10^{-11}\,\mathrm{cm^{3}s^{-1}}\) has been reported for a laser detuning of \(\delta=-\Gamma\) and an intensity of \(\mathrm{s}\approx 10\), which are fairly typical operating parameters [25]. Assuming a MOT of around \(10^{8}\) atoms with a radius of \(1\,\mathrm{mm}\) gives a number density of \(\bar{\mathrm{n}}\sim 1\times 10^{10}\,\mathrm{cm^{-3}}\). For typical pressures \(\mathrm{L}\sim 1-10\,\mathrm{s^{-1}}\), which is 1-2 orders of magnitude higher than the two-body loss term. This justifies neglecting this term in our simulations. For 100 Hz operation the MOT loading time is only a few ms. Even for relatively high pressures in the low \(10^{-7}\) mbar range the loading rate is a few \(10^{9}\,\mathrm{s^{-1}}\). This means at most \(\sim 10^{7}\) atoms can be loaded from the background vapour after a few ms; a small fraction of the steady state population reached in the experimental data in Fig. 6. This highlights how efficient recapture of atoms between cycles is essential for high-bandwidth operation. In this regime MOT composition is recapture dominated with a small contribution from background loading. Consider a high-bandwidth MOT containing \(10^{7}\) atoms with a recapture period of \(\sim 1\,\mathrm{ms}\). Assuming recapture is \(90\%\) efficient with a MOT loading rate of \(\mathrm{R}\sim 10^{9}\,\mathrm{s^{-1}}\), the atom number will remain steady. By considering losses from the finite restoration time and collisions independently, an iterative equation is formed describing the shot-to-shot atom number. \[\mathrm{N_{i+1}}=\mathrm{N_{i}P_{no\,collision}P_{restored}}+\frac{\mathrm{R}}{\mathrm{L}}(1-\mathrm{e^{-LT_{reload}}}). \tag{4}\] \(\mathrm{N_{i}}\) denotes the atom number in the \(\mathrm{i^{th}}\) cycle. The first term describes the contribution from recaptured atoms, with \(\mathrm{P_{no\,collision}P_{restored}}\) representing the constant shot-to-shot recapture fraction. The second term describes background loading and is the MOT loading equation with terms as defined in Eq. (3a). The time for loading and recapture is given by \(\mathrm{T_{reload}}\). Iterating until \(\mathrm{N_{i+1}}=\mathrm{N_{i}}\) gives the operational steady state atom number for the MOT. For higher pressure the loading rate is larger, so more atoms are loaded from the background but fewer atoms are recaptured due to more background collisions, and vice versa for lower pressure. Steady state corresponds to the point at which the number of atoms lost due to inefficient recapture is perfectly balanced by the atoms loaded from the background vapour. In Fig.
7 the behaviour of a traditional MOT is simulated and contrasted with a high-bandwidth MOT with a duty cycle of 0.65. In this configuration there are about \(20\%\) the number of atoms when compared with a MOT fully loaded from background vapour. Even with our relatively high pressure, without recapture it would take \(10\mathrm{x}\) longer to load this many atoms. This limits bandwidth to at most \(30\) Hz showing the importance of recapture in maximising bandwidth. Figure 7: Traditional non-dynamic MOT loading (solid), \(100\) Hz high-bandwidth MOT loading simulation at a duty cycle of 0.65 (dashed). Figure 6: Experimental MOT loading data. The following parameters are extracted, \(\mathrm{R}=4.5\times 10^{9}\,\mathrm{s^{-1}}\), \(\mathrm{L}=16.0\,\mathrm{s^{-1}}\) and a \({}^{87}\)Rb vapour pressure of \(2.9\times 10^{-7}\) mbar. Duty cycle A key parameter determining MOT operation is the duty cycle describing the useful fraction of the experimental cycle. In this context it denotes the free-fall time. The remaining portion constitutes time for recapturing and loading atoms back into the trap for the next cycle. Optimising duty cycle is important for experimental applications as increasing measurement time will compromise time available for reloading atoms into the MOT. Naturally, some balance must be achieved within a cycle. To investigate this we vary the parameter experimentally and compare with our simple dynamics model. Fig. 8 presents data at 100 Hz bandwidth, as drop time tends to 0 ms the atom number tends towards the value in Fig. 7 for non-dynamic MOT operation. For increasing drop times up to 6 ms the atom number decays gradually as less cycle time is devoted to reloading. In this regime, the recapture efficiency stays constant as the restoration force is sufficient to recapture atoms for reloading time \(>\) 3.5 ms (P\({}_{\text{restored}}=1\)). The imperfect recapture efficiency comes from the pressure induced collisions with the background vapour, P\({}_{\text{no\,collision}}=85\%\) at 100 Hz. For drop times \(>6.5\) ms the recapture mechanism fails and the atom number declines dramatically with a good fit between model and experimental data. This fit is slightly poorer at 50 Hz but still quite reasonable. Given the 1D model used, further discrepancies might be connected to the 3D nature of the light field, magnetic field and polarisation profiles. To validate our collision model we perform duty cycle scans with fixed cycle times of 2.5, 5, 10 and 20 ms. Using this data we extract the P\({}_{\text{no\,collision}}\) value as drop time tends to 0 ms and plot against Eq. (2) for our operating pressure of \(2.9\times 10^{-7}\) mbar. Fig. 10 presents this data showing a strong fit validating our collision model. To further highlight the importance of recapture we simulate longer drop times with a short reloading time. To model this, the reloading time is fixed, the drop time is incremented and the steady state atom number is computed. After falling \(2\sigma_{\text{r}}=10\) mm, an atom will fall out of the trap centre in \(\sim 45\) ms as reflected in the decline in Fig. 11. For drop times \(\ll 45\) ms the dynamics are recapture dominated as atoms do not fall out of the trapping region. For drop times \(>45\) ms the MOT is no longer in the trapping region and so recapture is not viable. Consequently, the MOT consists entirely of atoms loaded from the background vapour. 
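For reference, a compact sketch of the steady-state calculation behind Eqs. (2)-(4) is given below. It takes \(\rm P_{restored}=1\) (valid when the reloading time exceeds roughly 3.5 ms, as noted above) and uses the loading and loss rates extracted in Fig. 6; the full simulation additionally includes the restored-fraction model, so exact numbers in the figures will differ slightly.

```python
# Sketch of the shot-to-shot iteration of Eq. (4) with P_no_collision from Eq. (2).
# P_restored is set to 1 here (reloading time > ~3.5 ms); R and L are the values
# extracted from the experimental loading curve in Fig. 6.
import math

R = 4.5e9      # s^-1, loading rate
L = 16.0       # s^-1, loss rate, L = 1/tau

def steady_state_atoms(t_cycle, t_reload, p_restored=1.0, tol=1.0):
    p_no_collision = math.exp(-t_cycle * L)          # Eq. (2), with tau = 1/L
    n = 0.0
    while True:
        n_next = n * p_no_collision * p_restored \
                 + (R / L) * (1.0 - math.exp(-L * t_reload))   # Eq. (4)
        if abs(n_next - n) < tol:
            return n_next
        n = n_next

# 100 Hz operation with a duty cycle of 0.65: 6.5 ms drop, 3.5 ms reload.
print(f"high-bandwidth MOT: {steady_state_atoms(10e-3, 3.5e-3):.2e} atoms")
print(f"fully loaded static MOT (R/L): {R / L:.2e} atoms")
```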
For longer loading times the drop off is less pronounced highlighting the need for a significant increase in reloading time when leaving the recapture dominated regime. Our model is further validated by calculating and measuring the reloading time for a steady state MOT of \(10^{8}\) atoms. As anticipated, the recapture efficiency experiences a decline to zero at 45 ms of drop time. For small drop times the loading time required tends to the MOT restoration time for a \({}^{87}\)Rb atom (\(\sim 3\) ms) in this regime. When recapture fails, the time required is determined entirely by background loading and is given by \(\frac{1\times 10^{8}}{4.5\times 10^{8}}\sim 25\) ms. For lower pressures (\(\sim 10^{-8}\) mbar) this time will be significantly longer due to the reduced loading rate. Overall, a good fit is observed between the model and experiment. For experiments care is required to ensure sufficient loading time such that recapture is not compromised. Equally, excess time should be avoided to promote measurement bandwidth. To optimise this in different systems analysis similar to Fig. 8 could be performed by increasing the duty cycle until a sharp drop off in atomic signal is observed. This reflects the point at which the recapture mechanism fails determining the necessary trap loading time. Figure 8: Steady state atom number (red solid) and recapture efficiency P\({}_{\text{no\,collision}}\)P\({}_{\text{restored}}\) (blue dashed) for a 100 Hz MOT for variable duty cycle. Experimental data points are scattered. Figure 9: Steady state atom number (red solid) and recapture efficiency P\({}_{\text{no\,collision}}\)P\({}_{\text{restored}}\) (blue dashed) for a 50 Hz MOT for variable duty cycle. Experimental data points are scattered. ## V Discussion ### Application to Quantum Sensing Having validated our simple model for the high-bandwidth MOT we will now apply this to optimise an application. Atom interferometry (AI) was developed in the early 1990s and offers exceptional sensitivity to rotations and accelerations [26]. The technique underpins quantum sensing which shows huge promise for applications in inertial navigation [27; 21]. To explore this we predict the sensitivity performance limit of an atom interferometer operating at 100 Hz. Sensitivity is given by \(\frac{\delta\phi}{\phi}\), where \(\delta\phi\) denotes phase noise and \(\phi\) is the phase signal accumulated over the interrogation period. The noise on a single measurement \(\delta\phi_{\mathrm{s}}\) is limited by quantum projection noise \(\mathrm{N_{Q}}=\sqrt{\mathrm{N_{AI}}}\) and \(\delta\phi_{\mathrm{s}}=\eta\delta\phi_{\mathrm{Q}}=\eta\frac{\mathrm{N_{Q}}} {\mathrm{N_{AI}}}=\frac{\eta}{\sqrt{\mathrm{N_{AI}}}}\), where \(\mathrm{N_{AI}}\) denotes the number of atoms participating in the interferometer with \(\eta\geq 1\) accounting for excessive detection noise. The operating bandwidth is given by \(\mathrm{F}=\frac{1}{(\mathrm{T_{i}}+\mathrm{T_{P}})}\) where \(\mathrm{T_{i}}=\mathrm{T_{drop}}\) is the interrogation (drop) time and \(\mathrm{T_{P}}\) is the sensor preparation time incorporating reloading, cooling and detection. Using these definitions sensitivity can be expressed as in Eq. (5). \[\mathrm{S}=\frac{4\eta}{\mathrm{k_{e}g}\sqrt{\mathrm{N_{AI}}}\sqrt{\mathrm{F} \mathrm{T_{i}^{2}}}}\approx 2.5\times 10^{-8}\frac{\eta}{\sqrt{\mathrm{N_{AI}}}} \frac{\sqrt{\mathrm{F^{3}}}}{(1-\mathrm{FT_{p}})^{2}}. 
\tag{5}\] For optimal sensitivity the duty cycle requires optimisation to balance the recapture and interrogation periods. Assuming a certain bandwidth, duty cycle and shot noise limited detection the only unknown in Eq. (5) is atoms participating in the interferometer, n. To acquire this the recapture simulation is run for the chosen duty cycle and MOT parameters to obtain the recapture efficiency. The atom number is then computed using Eq. (4). A conservative 1% of atoms are assumed to complete the interferometer, \(\mathrm{N_{AI}=0.01\,\mathrm{N_{MOT}}}\). To account for sub-Doppler cooling, state preparation and launching, a 3 ms preparation time is allocated within the cycle time. We also adopt a cloud temperature of 10 uK following sub-Doppler cooling. Fig. 13 shows the sensitivity simulation at 100 Hz operation for variable duty cycle. For lower duty cycles there are more atoms but the sensitivity improvement from increased interrogation time dominates over the reduced atoms. For reloading times \(<\) 2 ms the capture processes are inhibited and the atom number falls to zero diminishing sensitivity. Fig. 13 suggests a performance limit of \(1\times 10^{-7}\frac{\mathrm{s}}{\sqrt{\mathrm{Hz}}}\) at 100 Hz operation. Given the finite recapture time it is interesting to consider optimal sensitivity for variable bandwidth. To explore this the simulation in Fig. 12 is reprocessed. By adding the drop and reloading time together and including an additional 3 ms of preparation time a certain cycle time and therefore bandwidth is defined. For this bandwidth \(10^{8}\) atoms are generated and so sensitivity can be computed with Eq. (5). Figure 11: Steady state atom number for variable drop time with a fixed loading time: 4.0 ms (blue solid), 10 ms (orange dashed) 50 ms (green dash-dot). Figure 12: Time to load \(10^{8}\) atoms for variable drop time (red solid), recapture efficiency \(\mathrm{P_{no\,collision}P_{restored}}\) (blue dashed). Experimental data points are scattered. Figure 10: Pressure induced collision model, theoretical model (line), experimental data (points). For increasing bandwidth the optimal duty cycle decreases gradually as the necessary reloading time represents a larger fraction of the cycle, see Fig. 14. At a certain bandwidth the cycle time is insufficient to interrogate, recapture and prepare atoms. For short drop time around 2 ms is required to recapture atoms and so with an additional preparation time of 3 ms the limiting bandwidth is \(\frac{1}{\rm 5ms}\simeq 200\) Hz. Given the performance limits it is worth summarising the advantages, disadvantages and future prospects of the high-bandwidth approach for quantum sensing. Quantum sensors offer low bias and high-stability enabling long term inertial navigation measurements not currently feasibly with classical sensors. High-bandwidth quantum sensors would therefore be attractive for navigation where measurement rates \(>\) 100 Hz are needed for operation on mobile platforms. As highlighted bandwidth and sensitivity present a compromise although the reduced free-falling distance at high-bandwidth makes the approach compelling for miniaturisation developing devices more robust to challenging environments [20]. The \(\sim\)\(\rm\mu g/\sqrt{Hz}\) sensitivity offered at high-bandwidth would be useful for inertial navigation with techniques such as large-momentum transfer potentially offering a route to clawing back sacrificed sensitivity [28]. 
Even presently ship-borne measurements have demonstrated sensitivities at the \(\sim\)\(\rm\mu g\) level [13]. Moreover, hybrid methods have been implemented to increase bandwidth using a quantum sensor to correct a classical device [29]. Further developments could offer potential for absolute positioning on a metre scale independent of environment without satellite navigation. Moreover, high-bandwidth operation would also be desirable for faster civil engineering surveys providing feedback on the condition of water pipes and identifying voids and mine shafts. ## VI Conclusions We show that a simple model simulating atomic trajectories and loss mechanisms performs rather well in explaining experimental MOT dynamics across a range of bandwidths. Traditionally bandwidth is not a primary concern and so traps are loaded to capacity with no concern for recapturing atoms limiting bandwidths to around 1 Hz. In this work we explore the full bandwidth range. At low bandwidth recapture efficiency tends to 0 due to background collisions and atoms falling outside of the trapping region. At high-bandwidth the finite MOT restoring force is critical limiting the recapture time to a few ms for \({}^{87}\)Rb and imposing a maximum bandwidth for MOT generation. We observe that the model provides a good fit to experimental data across a range of bandwidths accounting for pressure, temperature and spatial considerations of the trap. The model is then applied to quantum sensing projecting a performance limit of \(1\times 10^{-7}\,\rm g/\sqrt{Hz}\) at 100 Hz. This is computed by optimising duty cycle for a given bandwidth. Based on this it is deemed beneficial to devote cycle time to interrogation provided recapture is not compromised significantly. In summary, this work shows the power of a simple MOT physics model in predicting the feasibility of MOT generation for a given bandwidth, duty cycle and other trap and cloud properties. More generally, the ubiquitous nature of the MOT means this work could be applied to a broad range of experiments using different atomic species particularly for those targeting higher bandwidth operation. ## Acknowledgments We thank the support of the UK National Quantum Technologies Programme (NQTP) (EP/T001046/1), Defense Science and Technology Laboratory (Dstl) (DSTLXR1000141929) and Toyota Motor Europe. Figure 14: Sensitivity projection for variable bandwidth based on simulation in Fig. 12. For each bandwidth the cycle consists of an additional 3 ms of preparation. Figure 13: Optimising sensitivity by optimising balance between recapture and interrogation time, sensitivity (red solid), participating atoms (blue dashed). The optimised cycle consists of a 5 ms interrogation, 2 ms recapture and a set 3 ms of additional preparation (cooling, state preparation, launching). AI parameters: F = 100 Hz, \(\eta\) = 1, \(\rm N_{AI}=0.01\,\rm N_{MOT}\).
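As a quick numerical cross-check of the quoted performance limit (a sketch, not taken from the paper's own code): evaluating Eq. (5) with \(\eta=1\), \(\rm N_{AI}=10^{6}\) (1% of a \(10^{8}\)-atom MOT) and \(\rm T_{p}=5\) ms (the 2 ms recapture plus 3 ms preparation used in Fig. 13) recovers the stated \(1\times 10^{-7}\,\rm g/\sqrt{Hz}\) at \(\rm F=100\) Hz.

```python
# Evaluate Eq. (5) at the operating point described in Fig. 13.
import math

def sensitivity(F, N_AI, T_p, eta=1.0):
    """Shot-noise-limited sensitivity in g/sqrt(Hz), per Eq. (5)."""
    return 2.5e-8 * eta / math.sqrt(N_AI) * math.sqrt(F ** 3) / (1.0 - F * T_p) ** 2

print(f"{sensitivity(F=100.0, N_AI=1e6, T_p=5e-3):.1e} g/sqrt(Hz)")  # -> 1.0e-07
```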
MOTの高帯域動作における動的特性を研究しています。サイクルごとの高い再捕獲効率が、実用的な原子の数を維持するために非常に重要であることがわかりました。MOTトラップ力と圧力誘発の衝突を単純なモデルで説明し、$\mathrm{{}^{87}Rb}$を用いた実験データで検証しました。これらを利用して量子計測を行うと、100Hz動作における重力計のショットノイズ制限された感度が$\mathrm{10^{-7}g/\sqrt{Hz}}$と予測されました。この結果は、高帯域動作におけるMOT動作の理解に役立ち、特に、動的な環境とナビゲーションアプリケーションに向けて、移動式高帯域動作量子慣性センサーの開発に役立ちます。
2309.13749
A Minkowski type inequality for manifolds with positive spectrum
The classical Minkowski inequality implies that the volume of a bounded convex domain is controlled from above by the integral of the mean curvature of its boundary. In this note, we establish an analogous inequality without the convexity assumption for all bounded smooth domains in a complete manifold with its bottom spectrum being suitably large relative to its Ricci curvature lower bound. An immediate implication is the nonexistence of embedded compact minimal hypersurfaces in such manifolds. This nonexistence issue is also considered for steady and expanding Ricci solitons.
Ovidiu Munteanu, Jiaping Wang
2023-09-24T20:39:13
http://arxiv.org/abs/2309.13749v1
# A Minkowski type inequality for manifolds with positive spectrum ###### Abstract. The classical Minkowski inequality implies that the volume of a bounded convex domain is controlled from above by the integral of the mean curvature of its boundary. In this note, we establish an analogous inequality without the convexity assumption for all bounded smooth domains in a complete manifold with its bottom spectrum being suitably large relative to its Ricci curvature lower bound. An immediate implication is the nonexistence of embedded compact minimal hypersurfaces in such manifolds. This nonexistence issue is also considered for steady and expanding Ricci solitons. ## 1. Introduction On a complete Riemannian manifold \((M,g),\) the Laplacian \(\Delta\) is a self-adjoint operator according to [15]. So the spectrum \(\sigma(M)\) of \(M,\) defined as the spectrum \(\sigma(-\Delta)\) of \(-\Delta,\) is a closed subset of \([0,\infty).\) The bottom spectrum is given by \[\lambda_{1}(M):=\min\{\lambda\in\sigma(M)\}.\] Alternatively, it is characterized as the best constant for the Poincare inequality \[\lambda_{1}\,\int_{M}\phi^{2}\leq\int_{M}\left|\nabla\phi\right|^{2}\] for all compactly supported smooth functions \(\phi\) on \(M.\) A result by McKean [26] says that \(\lambda_{1}(M)\geq\frac{(n-1)^{2}}{4}\) for an \(n\)-dimensional, simply connected, complete manifold \(M^{n}\) with sectional curvature \(K\leq-1.\) The famous Sullivan-Patterson theory [31, 33] computes the bottom spectrum for the quotient space \(\mathbb{H}^{n}/\Gamma\) of the \(n\)-dimensional real hyperbolic space \(\mathbb{H}^{n},\) where \(\Gamma\) is a discrete, finitely generated, group of isometries of \(\mathbb{H}^{n}.\) Namely, \(\lambda_{1}(\mathbb{H}^{n}/\Gamma)=\frac{(n-1)^{2}}{4}\) if \(d_{\Gamma}\leq\frac{n-1}{2}\) and \(\lambda_{1}(\mathbb{H}^{n}/\Gamma)=d_{\Gamma}\left(n-1-d_{\Gamma}\right)\) when \(d_{\Gamma}\geq\frac{n-1}{2},\) where \(d_{\Gamma}\) is the Hausdorff dimension of the limit set of \(\Gamma,\) that is, those points \(\theta\) in the ideal boundary at infinity \(S_{\infty}(\mathbb{H}^{n})\) of \(\mathbb{H}^{n}\) such that \(\theta=\lim_{i\to\infty}\gamma_{i}(x)\) for some \(x\in\mathbb{H}^{n}\) and a sequence of \(\gamma_{i}\in\Gamma.\) Another notable result, due to Brooks [4], is that \(\lambda_{1}(M)>0\) for a covering space \(M\) of a compact manifold \(N\) if and only if the covering group is nonamenable. Finally, we mention a result of Lee [21]. Recall that a Riemannian manifold \((M,g)\) is conformally compact if topologically it is the interior of a compact manifold \(\overline{M}\) with boundary \(N\) and its metric \(g=\rho^{-2}\,g_{\overline{M}}\) for some metric \(g_{\overline{M}}\) on \(\overline{M}\) and smooth function \(\rho\) on \(\overline{M}\) with \(\rho=0\) on \(N\) and \(d\rho\neq 0\) on \(N.\) Note that different pairs of \(\rho\) and \(g_{\overline{M}}\) induce the same conformal class on \(N.\) **Theorem 1.1** (Lee).: _Let \((M^{n},g)\) be a conformally compact Einstein manifold with its Ricci curvature normalized to be \(-(n-1).\) If its boundary \(N\) with the induced conformal metric has nonnegative scalar curvature, then \(\lambda_{1}(M)=\frac{(n-1)^{2}}{4}.\)_ A different proof of the result was given by X. Wang [34]. Concerning the upper bound of the bottom spectrum, we have the following classical result due to Cheng [9]. 
**Theorem 1.2** (Cheng).: _Let \(M^{n}\) be a complete Riemannian manifold with \(\mathrm{Ric}\geq-(n-1)\kappa\) for some nonnegative constant \(\kappa.\) Then_ \[\lambda_{1}(M)\leq\lambda_{1}\left(\mathbb{H}^{n}(-\kappa)\right)=\frac{(n-1) ^{2}}{4}\,\kappa.\] The rigidity issue has been studied by Li and the second author in [23, 24]. **Theorem 1.3** (Li-Wang).: _Suppose \((M^{n},g)\) is complete, \(n\geq 3,\) with \(\lambda_{1}\geq\frac{(n-1)^{2}}{4}\,\kappa\) and \(\mathrm{Ric}\geq-(n-1)\kappa.\) Then either \(M\) is connected at infinity or \(M^{n}=\mathbb{R}\times N^{n-1}\) for some compact \(N\) with \(g=dt^{2}+e^{2\sqrt{\kappa}\,t}\,g_{N}\) for \(n\geq 3\) or \(g=dt^{2}+\cosh^{2}(\sqrt{\kappa}\,t)\,g_{N}\) when \(n=3.\)_ Note that as \(\kappa\) goes to \(0,\) the result recovers a weak version of the famous Cheeger-Gromoll [5] splitting theorem for complete manifolds with nonnegative Ricci curvature. Our main purpose here is to establish the following Minkowski type inequality for complete manifolds with positive bottom spectrum. **Theorem 1.4**.: _Let \((M^{n},g)\) be a complete Riemannian manifold of dimension \(n\geq 5\) with \(\mathrm{Ric}\geq-\left(n-1\right)\) and_ \[\lambda_{1}\left(M\right)\geq\left(\frac{n-2}{n-1}\right)^{2}\left(2n-3\right).\] _Then for any compact smooth domain \(\Omega\subset M,\)_ \[\frac{2}{3}\sqrt{n}\ \lambda_{1}\left(M\right)\mathrm{Vol}\left(\Omega\right) \leq\int_{\partial\Omega}|H|^{\frac{2n-3}{n-1}}\,,\] _where \(H\) is the mean curvature of \(\partial\Omega.\)_ The result seems to be new even for the hyperbolic space \(\mathbb{H}^{n}.\) We remark that it is necessary to assume \(\lambda_{1}(M)>n-2.\) Indeed, for \(M^{n}=\mathbb{R}\times N^{n-1}\) with \(g=dt^{2}+\cosh^{2}(t)\,g_{N},\)\(\lambda_{1}(M)=n-2\) and \(\mathrm{Ric}\geq-(n-1)\) when \(\mathrm{Ric}_{\mathrm{N}}\geq-(n-2).\) Yet, the domain \(\Omega\) given by \(\left\{0<t<\varepsilon\right\}\) violates the inequality when \(\varepsilon\) is small. Certainly, this example also shows that the result can not hold for \(n=3.\) However, it remains unclear what to expect for \(n=4.\) One may wish to compare the result to the classical Minkowski inequality [27] for the Euclidean space \(\mathbb{R}^{n}\) and that for the hyperbolic space \(\mathbb{H}^{n}\)[16]. The advantage here is that no convexity is assumed for the domains. **Theorem 1.5** (Minkowski).: _If \(\Omega\subset\mathbb{R}^{n},\)\(n\geq 3,\) is a convex domain with smooth boundary \(\Sigma\) and \(\mathrm{H}\) is the mean curvature of \(\Sigma\) with respect to the outward unit normal, then there exists a sharp constant \(c(n)\) so that_ \[\mathrm{Vol}\left(\Omega\right)\leq c(n)\,\left(\int_{\Sigma}H\right)^{\frac{ n}{n-2}}.\] _Equality holds if and only if \(\Omega\) is a ball._ The convexity can be relaxed to mean convex and star shaped by the work of Guan-Li [17, 18], where they produced a different proof using a new mean curvature flow. In fact, their proof yields the more general Alexandrov-Fenchel inequalities of quermassintegrals and extends to other space forms as well. For more related results, we refer to [2, 3, 7, 13, 19]. An immediate consequence of our result is the nonexistence of compact minimal hypersurfaces. **Corollary 1.6**.: _Let \(\left(M^{n},g\right)\) be a complete Riemannian manifold of dimension \(n\geq 5\) with \(\mathrm{Ric}\geq-\left(n-1\right)\) and_ \[\lambda_{1}\left(M\right)=\frac{\left(n-1\right)^{2}}{4}.\] _Then \(M\) has no embedded compact minimal hypersurface. 
In particular, this holds for a conformally compact Einstein manifold with its boundary having nonnegative scalar curvature._ Note that the result is not true for \(n=3\). Indeed, for \(M^{3}=\mathbb{R}\times N^{2}\) with \(g=dt^{2}+\cosh^{2}(t)\,g_{N},\)\(\lambda_{1}(M)=1\) and \(\mathrm{Ric}\geq-2\) when \(\mathrm{Ric}_{\mathrm{N}}\geq-1.\) Yet, the hypersurface given by \(\{t=0\}\) is totally geodesic. The corollary follows from Theorem 1.4 by verifying that \(\Sigma\) must enclose a bounded domain \(\Omega\) in \(M.\) Indeed, observe that \(M\) must be connected at infinity as otherwise by Theorem 1.3, \[M=\mathbb{R}\times N,\ \ ds_{M}^{2}=dt^{2}+e^{2t}\,ds_{N}.\] Since \(f(t,y)=t\) on \(M\) is convex, by maximum principle, \(\Sigma\) must be one of the level sets of \(f.\) However, each level set has constant mean curvature \(n-1,\) which is a contradiction. The same argument shows that every double cover of \(M\) is connected at infinity as well. One then concludes from a result by Carron and Pedon [6] that the integral homology \(H_{n-1}(M,\mathbb{Z})=0.\) It then follows that \(\Sigma\) must enclose a bounded domain \(\Omega\) in \(M.\) We now quickly sketch the proof of Theorem 1.4. First, there exists \(v>0\) such that \(\Delta v=-\lambda_{1}(M)\,v\). Consider the function \(h=\ln v,\) for which \(\Delta h=-\lambda_{1}(M)-|\nabla h|^{2}\). Then \[\lambda_{1}\left(M\right)\mathrm{Vol}\left(\Omega\right) \leq \int_{\Omega}\left(\lambda_{1}\left(M\right)+\left|\nabla h\right| ^{2}\right)\] \[= -\int_{\Omega}\Delta h\] \[= \int_{\partial\Omega}h_{\nu},\] where \(\nu\) is the inward unit normal to \(\partial\Omega.\) The proof is then reduced to estimating \(\int_{\partial\Omega}h_{\nu}.\) To do so we consider the harmonic function \(u\) on \(M\setminus\Omega\) obtained as \(u=\lim_{R\rightarrow\infty}u_{R},\) where \(\Delta u_{R}=0\) on \(\left(M\setminus\Omega\right)\cap B_{p}(R)\) with \(u_{R}=1\) on \(\partial\Omega\) and \(u_{R}=0\) on \(\partial B_{p}(R).\) The upshot is to show \[c(n)\,\int_{\partial\Omega}h_{\nu}\leq\int_{\partial\Omega}\left(\left|\nabla u \right|^{\alpha}\right)_{\nu}-\int_{\partial\Omega}\left(u^{\beta}\right)_{ \nu}\left|\nabla u\right|^{\alpha},\] where \(\alpha=\frac{n-2}{n-1}\) and \(\beta=\frac{n-2}{3n-5}.\) For that, we drew inspiration from the monotonicity formulas for the Green's function on manifolds with nonnegative Ricci curvature [11, 12, 1], as well as on \(3\)-dimensional manifolds with scalar curvature bounded below [28]. In the process, the following generalized Poincare inequality also comes into play. **Proposition 1.7**.: _Let \(\left(M,g\right)\) be a complete manifold with \(\mathrm{Ric}\geq-\left(n-1\right)\) and \(\lambda_{1}\left(M\right)>0.\) Let \(K\subset M\) be an open subset with (possibly noncompact) boundary \(\partial K.\) Then the Poincare inequality_ \[\lambda_{1}\left(M\right)\int_{K}\phi^{2}\leq\int_{K}\left|\nabla\phi\right|^{ 2}-\int_{\partial K}h_{\nu}\,\phi^{2}\] _holds for any Lipschitz function \(\phi\) with compact support in \(\overline{K},\) where \(\nu\) is the outward unit normal to \(\partial K.\)_ Concerning the nonexistence of compact minimal hypersurfaces, we also extend our consideration to Ricci solitons. **Theorem 1.8**.: _Let \(\left(M^{n},g,f\right)\) be a steady Ricci soliton. 
If there exists a smooth compact embedded minimal hypersurface \(\Sigma\) in \(M,\) then \(\left(M,g\right)\) splits isometrically as a direct product \(\mathbb{R}\times\Sigma.\)_ A similar result is also established for expanding Ricci solitons. Recall that a gradient Ricci soliton is a manifold \(\left(M,g\right)\) such that there exists a smooth function \(f\) satisfying \[\mathrm{Ric}_{f}=\mathrm{Ric}+\mathrm{Hess}\left(f\right)=\lambda g\] for some constant \(\lambda\in\mathbb{R}.\) Solitons are classified as shrinking, steady or expanding, according to \(\lambda>0,\ \lambda=0\) or \(\lambda<0,\) respectively. The function \(f\) is called the potential. Customarily, the constant \(\lambda\) is assumed to be \(1/2,\)\(0,\) or \(-1/2\) by scaling. Obviously, Ricci solitons are natural generalizations of Einstein manifolds. More significantly, they are the self similar solutions to the Ricci flows, and play a crucial role in the study of singularities of the flows [10]. As pointed out in [29], an important feature of a steady Ricci soliton to us is that its bottom spectrum with respect to the weighted Laplacian \(\Delta_{f},\) defined by \(\Delta_{f}u=\Delta u-\left\langle\nabla u,\nabla f\right\rangle\), always achieves the maximum value \(\frac{1}{4}\) among all weighted manifolds \(\left(M,g,e^{-f}dv\right)\) with \(\mathrm{Ric}_{f}\geq 0\) and \(\left|\nabla f\right|\leq 1.\) The paper is arranged as follows. In Section 2, we prove our main result Theorem 1.4. In Section 3, we consider Ricci solitons and prove Theorem 1.8. ## 2. Minkowski inequality In this section, we prove Theorem 1.4. First, we make some general consideration. Throughout this section, unless otherwise noted, \(\left(M^{n},g\right)\) is assumed to be an \(n\)-dimensional complete manifold with positive spectrum \(\lambda_{1}\left(M\right)>0\) and its Ricci curvature \(\mathrm{Ric}\geq-\left(n-1\right).\) It is well-known [14] that there exists \(v>0\) such that \[\Delta v=-\lambda_{1}\left(M\right)v. \tag{2.1}\] Hence, \[h=\ln v \tag{2.2}\] satisfies \[\Delta h=-\lambda_{1}\left(M\right)-\left|\nabla h\right|^{2}. \tag{2.3}\] Also, by [35] (or see Chapter 6 in [22]), positive solutions of (2.1) satisfy the gradient estimate \[\left|\nabla h\right|\leq\frac{n-1}{2}+\sqrt{\frac{\left(n-1\right)^{2}}{4}- \lambda_{1}\left(M\right)}. \tag{2.4}\] The following generalized Poincare inequality will be of use later. **Proposition 2.1**.: _Let \(\left(M,g\right)\) be a complete manifold with \(\mathrm{Ric}\geq-\left(n-1\right)\) and \(\lambda_{1}\left(M\right)>0.\) Let \(K\subset M\) be an open subset with boundary \(\partial K\). 
Then_ \[\lambda_{1}\left(M\right)\int_{K}\phi^{2}\leq\int_{K}\left|\nabla\phi\right|^{ 2}-\int_{\partial K}\left\langle\nabla h,\nu\right\rangle\phi^{2}\] _holds for any Lipschitz function \(\phi\) with compact support in \(\overline{K},\) where \(\nu\) is the outward unit normal to the boundary \(\partial K.\) In particular,_ \[\lambda_{1}\left(M\right)\int_{K}\phi^{2}\leq\int_{K}\left|\nabla\phi\right|^ {2}+A\int_{\partial K}\phi^{2},\] _where_ \[A=\frac{n-1}{2}+\sqrt{\frac{\left(n-1\right)^{2}}{4}-\lambda_{1}\left(M\right)}.\] Proof.: According to (2.3), for any Lipschitz function \(\phi\) with compact support in \(\overline{K}\) we have \[\lambda_{1}\left(M\right)\int_{K}\phi^{2} = \int_{K}\left(-\Delta h-\left|\nabla h\right|^{2}\right)\phi^{2}\] \[= \int_{K}\left(\left\langle\nabla h,\nabla\phi^{2}\right\rangle- \left|\nabla h\right|^{2}\phi^{2}\right)\] \[-\int_{\partial K}h_{\nu}\phi^{2}.\] Observe that \[2\phi\left\langle\nabla h,\nabla\phi\right\rangle\leq\left|\nabla h\right|^{ 2}\phi^{2}+\left|\nabla\phi\right|^{2}.\] Therefore, \[\lambda_{1}\left(M\right)\int_{K}\phi^{2}\leq\int_{K}\left|\nabla\phi\right|^ {2}-\int_{\partial K}\left\langle\nabla h,\nu\right\rangle\phi^{2}.\] This proves the result. We will apply this Poincare inequality on the sublevel sets of the harmonic function \(u\) constructed below. Given a compact domain \(\Omega\subset M,\) according to [24], an unbounded component of \(M\setminus\Omega\) is parabolic if it has finite volume, and nonparabolic if it has infinite volume. Let \(E_{1},..,E_{k}\) be all the infinite volume connected components of \(M\setminus\Omega.\) Denote with \(E=E_{1}\cup\cdots\cup E_{k}\) and \[D = M\setminus E\] \[\Sigma = \partial D=\partial E. \tag{2.5}\] Alternatively, \(D\) is the union of \(\Omega\) with all the finite volume components of \(M\setminus\Omega.\) Consider the following function \(u_{i}\) with respect to a sequence \(R_{i}\rightarrow\infty.\) \[\Delta u_{i} = 0\text{\ \ on }B_{p}\left(R_{i}\right)\setminus D\] \[u_{i} = 1\text{\ \ on }\partial D\] \[u_{i} = 0\text{\ on }\partial B_{p}\left(R_{i}\right)\cap E. \tag{2.6}\] As \(\lambda_{1}\left(M\right)>0,\) from [23], the sequence \(\left\{u_{i}\right\}_{i=1}^{\infty}\) converges to a positive nonconstant harmonic function \(u:M\setminus D\rightarrow\left[0,1\right]\) such that \(u=1\) on \(\partial D.\) The strong maximum principle implies that \(\left|\nabla u\right|>0\) on \(\Sigma=\partial D\). Moreover, by [24] \[\int_{M\setminus\left(D\cup B_{p}\left(R\right)\right)}u^{2}\leq C\,e^{-2 \sqrt{\lambda_{1}\left(M\right)}R} \tag{2.7}\] for all \(R>0\) large enough. As \(D\) is the union of \(\Omega\) together with all the finite volume components of \(M\setminus\Omega,\) by [24] the following volume estimate holds. \[\operatorname{Vol}\left(D\setminus B_{p}\left(R\right)\right)\leq C\,e^{-2 \sqrt{\lambda_{1}\left(M\right)}R}. \tag{2.8}\] We denote with \[L\left(\alpha,\beta\right) = \left\{x\in M\setminus D:\alpha<u\left(x\right)<\beta\right\}\] \[\ell\left(t\right) = \left\{x\in M\setminus D:u\left(x\right)=t\right\}.\] Note that these sets may be noncompact in general. However, (2.7) implies that \[\operatorname{Vol}\left(L\left(\alpha,\beta\right)\right)\leq\frac{1}{\alpha^ {2}}\int_{M\setminus D}u^{2}<\infty. \tag{2.9}\] According to [25], \[\zeta=\int_{\ell\left(t\right)}\left|\nabla u\right| \tag{2.10}\] is a constant independent of \(t\in\left[0,1\right]\). 
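For intuition, the constancy of this flux can be seen from a formal application of the divergence theorem (a sketch only, assuming \(t_{1}<t_{2}\) are regular values of \(u\) and that boundary terms at infinity may be neglected; the general case is treated in [25]):

\[0=\int_{L\left(t_{1},t_{2}\right)}\Delta u=\int_{\ell\left(t_{2}\right)}\left|\nabla u\right|-\int_{\ell\left(t_{1}\right)}\left|\nabla u\right|,\]

since the outward unit normal of \(L\left(t_{1},t_{2}\right)\) is \(\nabla u/\left|\nabla u\right|\) along \(\ell\left(t_{2}\right)\) and \(-\nabla u/\left|\nabla u\right|\) along \(\ell\left(t_{1}\right)\).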
Hence, for any function \(F,\) by the co-area formula, \[\int_{L\left(\alpha,\beta\right)}\left|\nabla u\right|^{2}F\left(u\right)= \zeta\int_{\alpha}^{\beta}F\left(t\right)dt. \tag{2.11}\] The gradient estimate for positive harmonic functions states that \[\left|\nabla u\right|\leq C\,u\text{\ \ on }M\setminus D, \tag{2.12}\] where the constant \(C\) depends only on the dimension \(n\) and the maximum of the mean curvature \(\max_{\Sigma}\left|H_{\Sigma}\right|.\) Recall the Bochner formula \[\frac{1}{2}\Delta\left|\nabla u\right|^{2}\geq\left|\nabla^{2}u\right|^{2}- \left(n-1\right)\left|\nabla u\right|^{2} \tag{2.13}\] and the improved Kato inequality \[\left|\nabla^{2}u\right|^{2}\geq\frac{n}{n-1}\left|\nabla\left|\nabla u\right| \right|^{2}. \tag{2.14}\] We begin with the following preliminary estimates. **Lemma 2.2**.: _Let \(\left(M^{n},g\right)\) be a complete manifold with \(\mathrm{Ric}\geq-\left(n-1\right)\) and \(\lambda_{1}\left(M\right)>0.\) There exists a constant \(C>0\) such that for all \(0<t<1,\)_ \[\int_{L\left(t,1\right)}\left(u+\left|\nabla\left|\nabla u\right|\right|^{2} \left|\nabla u\right|^{-1}\right)\leq C\left(1-\ln t\right) \tag{2.15}\] _and_ \[\int_{L\left(\frac{1}{2}t,t\right)}\left(u+\left|\nabla\left|\nabla u\right| \right|^{2}\left|\nabla u\right|^{-1}\right)\leq C. \tag{2.16}\] Proof.: We first prove (2.15). Let \(\psi\) and \(\chi\) be the cut-off functions \[\psi\left(x\right)=\left\{\begin{array}{cc}1&\text{on }B_{p}\left(R\right) \\ R+1-r\left(x\right)&\text{on }B_{p}\left(R+1\right)\setminus B_{p}\left(R\right) \\ 0&\text{on }M\setminus B_{p}\left(R+1\right)\end{array}\right.\] and \[\chi\left(x\right)=\left\{\begin{array}{cc}1&\text{on }L\left(t,1\right) \\ \frac{\ln u\left(x\right)-\ln\left(\frac{1}{2}t\right)}{\ln 2}&\text{on }L\left(\frac{1}{2}t,t \right)\\ 0&\text{otherwise}\end{array}\right..\] We extend \(u=1\) on \(D\), and let \(\phi=u^{\frac{1}{2}}\chi\psi\) in \[\lambda_{1}\left(M\right)\int_{M}\phi^{2}\leq\int_{M}\left|\nabla\phi\right|^ {2}\] to obtain \[\lambda_{1}\left(M\right)\int_{M}u\chi^{2}\psi^{2} \leq 2\int_{M}\left|\nabla u^{\frac{1}{2}}\right|^{2}\chi^{2}\psi^{2}+ 2\int_{M}u\left|\nabla\left(\chi\psi\right)\right|^{2}\] \[\leq \frac{1}{2}\int_{M}\left|\nabla u\right|^{2}u^{-1}\chi^{2}\psi^{ 2}+4\int_{M}u\left|\nabla\chi\right|^{2}\psi^{2}\] \[+4\int_{M}u\left|\nabla\psi\right|^{2}\chi^{2}.\] By (2.11) we immediately see that \[\int_{M}\left|\nabla u\right|^{2}u^{-1}\chi^{2}\psi^{2}\leq\int_{L\left(\frac {1}{2}t,1\right)}\left|\nabla u\right|^{2}u^{-1}=\zeta\ln\frac{2}{t}\] and that \[\int_{M}u\left|\nabla\chi\right|^{2}\psi^{2}\leq\frac{1}{\left(\ln 2\right)^{ 2}}\int_{L\left(\frac{1}{2}t,t\right)}\left|\nabla u\right|^{2}u^{-1}=\frac{ \zeta}{\ln 2}. \tag{2.17}\] Finally, by (2.7) and (2.8) we have \[\int_{M}u\left|\nabla\psi\right|^{2}\chi^{2}\leq\frac{2}{t}\int_{M\setminus B _{p}\left(R\right)}u^{2}\leq\frac{C}{t}e^{-2\sqrt{\lambda_{1}\left(M\right)}R}. \tag{2.18}\] This proves that \[\int_{L\left(t,1\right)\cap B_{p}\left(R\right)}u\leq C\left(1-\ln t\right)+ \frac{C}{t}e^{-2\sqrt{\lambda_{1}\left(M\right)}R}\] for all \(R\geq 1.\) Making \(R\rightarrow\infty\) implies that \[\int_{L\left(t,1\right)}u\leq C\left(1-\ln t\right) \tag{2.19}\] for all \(0<t<1.\) By (2.13) and (2.14) we have that \[\Delta\left|\nabla u\right|\geq\frac{1}{n-1}\left|\nabla\left|\nabla u\right| \right|^{2}\left|\nabla u\right|^{-1}-\left(n-1\right)\left|\nabla u\right|\] on \(M\setminus D\). 
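This last inequality follows from a short pointwise computation (a sketch, valid wherever \(\nabla u\neq 0\), using the identity \(\frac{1}{2}\Delta\left|\nabla u\right|^{2}=\left|\nabla u\right|\Delta\left|\nabla u\right|+\left|\nabla\left|\nabla u\right|\right|^{2}\)):

\[\left|\nabla u\right|\Delta\left|\nabla u\right|=\frac{1}{2}\Delta\left|\nabla u\right|^{2}-\left|\nabla\left|\nabla u\right|\right|^{2}\geq\left|\nabla^{2}u\right|^{2}-\left|\nabla\left|\nabla u\right|\right|^{2}-\left(n-1\right)\left|\nabla u\right|^{2}\geq\frac{1}{n-1}\left|\nabla\left|\nabla u\right|\right|^{2}-\left(n-1\right)\left|\nabla u\right|^{2},\]

where the first inequality is (2.13) and the second is (2.14); dividing by \(\left|\nabla u\right|\) gives the stated bound.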
It then follows that \[\frac{1}{n-1}\int_{M\setminus D}\left|\nabla\left|\nabla u\right|\right|^{2}\left|\nabla u\right|^{-1}\chi^{2}\psi^{2} \leq \int_{M\setminus D}\chi^{2}\psi^{2}\Delta\left|\nabla u\right|\] \[+\left(n-1\right)\int_{M\setminus D}\left|\nabla u\right|\chi^{2}\psi^{2}.\] Note that by the gradient estimate (2.12) and (2.19) we have \[\int_{M\setminus D}\left|\nabla u\right|\chi^{2}\psi^{2} \leq C\int_{L\left(\frac{1}{2}t,1\right)}u\] \[\leq C\left(1-\ln t\right).\] Moreover, integrating by parts implies that \[\int_{M\setminus D}\chi^{2}\psi^{2}\Delta\left|\nabla u\right| = -\int_{M\setminus D}\left\langle\nabla\left(\chi^{2}\psi^{2}\right),\nabla\left|\nabla u\right|\right\rangle+\int_{\ell\left(1\right)}\left|\nabla u\right|_{\nu}\] \[\leq \frac{1}{2\left(n-1\right)}\int_{M\setminus D}\left|\nabla\left|\nabla u\right|\right|^{2}\left|\nabla u\right|^{-1}\chi^{2}\psi^{2}\] \[+2\left(n-1\right)\int_{M\setminus D}\left|\nabla u\right|\left|\nabla\left(\chi\psi\right)\right|^{2}+\int_{\ell\left(1\right)}\left|\nabla u\right|_{\nu}\] \[\leq \frac{1}{2\left(n-1\right)}\int_{M\setminus D}\left|\nabla\left|\nabla u\right|\right|^{2}\left|\nabla u\right|^{-1}\chi^{2}\psi^{2}\] \[+C\left(1-\ln t\right)+\frac{C}{t}e^{-2\sqrt{\lambda_{1}\left(M\right)}R},\] where we have used (2.12), (2.17) and (2.18) to obtain the last line. Plugging this estimate in (2.20) and making \(R\rightarrow\infty\) we obtain that \[\int_{L\left(t,1\right)}\left|\nabla\left|\nabla u\right|\right|^{2}\left|\nabla u\right|^{-1}\leq C\left(1-\ln t\right)\] as claimed. The second estimate (2.16) follows verbatim from the preceding argument by modifying the function \(\chi\) to \[\chi\left(x\right)=\left\{\begin{array}{cl}1&\text{on }L\left(\frac{1}{2}t,t\right)\\ \frac{\ln u\left(x\right)-\ln\left(\frac{1}{4}t\right)}{\ln 2}&\text{on }L\left(\frac{1}{4}t,\frac{1}{2}t\right)\\ \frac{\ln\left(2t\right)-\ln u}{\ln 2}&\text{on }L\left(t,2t\right)\\ 0&\text{otherwise}\end{array}\right..\] We are ready to prove the main result of this section. **Theorem 2.3**.: _Let \(\left(M^{n},g\right)\) be a complete Riemannian manifold of dimension \(n\geq 5\) with \(\operatorname{Ric}\geq-\left(n-1\right)\) and_ \[\lambda_{1}\left(M\right)\geq\left(\frac{n-2}{n-1}\right)^{2}\left(2n-3\right).\] _Then for any compact smooth domain \(\Omega\subset M,\)_ \[\frac{2}{3}\sqrt{n}\ \lambda_{1}\left(M\right)\operatorname{Vol}\left(\Omega\right)\leq\int_{\partial\Omega}\left|H\right|^{\frac{2n-3}{n-1}},\] _where \(H\) is the mean curvature of \(\partial\Omega.\)_ Proof.: As in (2.5) we let \(D\) be the union of \(\Omega\) with all the finite volume components of \(M\setminus\Omega.\) Define the harmonic function \(u\) on \(M\setminus D\) as the limit of a subsequence of \(\left\{u_{i}\right\}_{i=1}^{\infty}\) from (2.6). Let \(\psi\) and \(\chi\) be the cut-off functions \[\psi\left(x\right)=\left\{\begin{array}{cl}1&\text{on }B_{p}\left(R\right)\\ R+1-r\left(x\right)&\text{on }B_{p}\left(R+1\right)\setminus B_{p}\left(R\right)\\ 0&\text{on }M\setminus B_{p}\left(R+1\right)\end{array}\right.\] and \[\chi\left(x\right)=\left\{\begin{array}{cl}1&\text{on }L\left(t,1\right)\\ \frac{\ln u\left(x\right)-\ln\left(\frac{1}{2}t\right)}{\ln 2}&\text{on }L\left(\frac{1}{2}t,t\right)\\ 0&\text{otherwise}\end{array}\right..\] The Bochner formula (2.13) and the inequality (2.14) imply the inequality (cf.
[23]) \[\Delta\left|\nabla u\right|^{\alpha}\geq-\left(n-2\right)\left|\nabla u\right| ^{\alpha}, \tag{2.21}\] where \[\alpha=\frac{n-2}{n-1}. \tag{2.22}\] For \[\beta>\frac{1}{n-1} \tag{2.23}\] to be specified later, we multiply (2.21) by \(u^{\beta}\chi^{2}\psi^{2}\) and integrate it over \(M\setminus D\) to obtain that \[-\int_{M\setminus D}\left(\Delta\left|\nabla u\right|^{\alpha}\right)u^{ \beta}\chi^{2}\psi^{2}\leq\left(n-2\right)\int_{M\setminus D}\left|\nabla u \right|^{\alpha}u^{\beta}\chi^{2}\psi^{2}. \tag{2.24}\] Integrating by parts one sees that the left side of (2.24) becomes \[-\int_{M\setminus D}\left(\Delta\left|\nabla u\right|^{\alpha} \right)u^{\beta}\chi^{2}\psi^{2} = \int_{M\setminus D}\left\langle\nabla\left|\nabla u\right|^{ \alpha},\nabla u^{\beta}\right\rangle\chi^{2}\psi^{2}\] \[+\int_{M\setminus D}\left\langle\nabla\left|\nabla u\right|^{ \alpha},\nabla\chi^{2}\right\rangle u^{\beta}\psi^{2}\] \[+\int_{M\setminus D}\left\langle\nabla\left|\nabla u\right|^{ \alpha},\nabla\psi^{2}\right\rangle u^{\beta}\chi^{2}\] \[-\int_{\ell\left(1\right)}\left(\left|\nabla u\right|^{\alpha} \right)_{\nu},\] where \(\nu=\frac{\nabla u}{\left|\nabla u\right|}\) is the inward unit normal to \(\partial D=\ell\left(1\right).\) By Lemma 2.2, (2.12), and (2.7) we have that \[\left|\int_{M\setminus D}\left\langle\nabla\left|\nabla u\right|^{ \alpha},\nabla\psi^{2}\right\rangle u^{\beta}\chi^{2}\right| \leq 2\alpha\int_{L\left(\frac{1}{2}t,1\right)\setminus B_{p}(R)} \left|\nabla\left|\nabla u\right|\right|\left|\nabla u\right|^{\alpha-1}u^{\beta}\] \[\leq Ce^{-\sqrt{\lambda_{1}(M)}R}\int_{L\left(\frac{1}{2}t,1\right)} \left|\nabla\left|\nabla u\right|\right|^{2}\left|\nabla u\right|^{-1}\] \[+Ce^{\sqrt{\lambda_{1}(M)}R}\int_{L\left(\frac{1}{2}t,1\right) \setminus B_{p}(R)}u^{2(\alpha+\beta)-1}\] \[\leq Ce^{-\sqrt{\lambda_{1}(M)}R}\left(1-\ln t\right)\] \[+\frac{C}{t}e^{\sqrt{\lambda_{1}(M)}R}\int_{L\left(\frac{1}{2}t,1 \right)\setminus B_{p}(R)}u^{2}\] \[\leq \frac{C}{t}e^{-\sqrt{\lambda_{1}(M)}R}.\] Moreover, Lemma 2.2 also implies that \[\left|\int_{M\setminus D}\left\langle\nabla\left|\nabla u\right| ^{\alpha},\nabla\chi^{2}\right\rangle u^{\beta}\psi^{2}\right| \leq C\int_{L\left(\frac{1}{2}t,t\right)}\left|\nabla\left|\nabla u \right|\right|\left|\nabla u\right|^{\alpha}u^{\beta-1}\] \[\leq Ct^{\beta-\frac{1}{n-1}}\int_{L\left(\frac{1}{2}t,t\right)} \left|\nabla\left|\nabla u\right|\right|^{2}\left|\nabla u\right|^{-1}\] \[+\frac{C}{t^{\beta-\frac{1}{n-1}}}\int_{L\left(\frac{1}{2}t,t \right)}\left|\nabla u\right|^{1+2\alpha}u^{2\beta-2}\] \[\leq Ct^{\beta-\frac{1}{n-1}}+\frac{C}{t^{\beta-\frac{1}{n-1}}}\int_ {L\left(\frac{1}{2}t,t\right)}\left|\nabla u\right|^{2}u^{2(\alpha+\beta)-3},\] where in the last line we have applied (2.12). 
On the other hand, by (2.22), (2.23), and (2.11) we get \[\int_{L\left(\frac{1}{2}t,t\right)}\left|\nabla u\right|^{2}u^{2( \alpha+\beta)-3} = \zeta\int_{\frac{1}{2}t}^{t}r^{2\beta-\frac{n+1}{n-1}}dr\] \[\leq \frac{1}{2\left(\beta-\frac{1}{n-1}\right)}t^{2\left(\beta-\frac {1}{n-1}\right)}\zeta.\] In conclusion, \[\left|\int_{M\setminus D}\left\langle\nabla\left|\nabla u\right|^{\alpha}, \nabla\chi^{2}\right\rangle u^{\beta}\psi^{2}\right|\leq Ct^{\beta-\frac{1}{n- 1}}.\] Hence, this proves that \[-\int_{M\setminus D}\left(\Delta\left|\nabla u\right|^{\alpha} \right)u^{\beta}\chi^{2}\psi^{2} \geq \int_{M\setminus D}\left\langle\nabla\left|\nabla u\right|^{ \alpha},\nabla u^{\beta}\right\rangle\chi^{2}\psi^{2}\] \[-\int_{\ell(1)}\left(\left|\nabla u\right|^{\alpha}\right)_{\nu}\] \[-\frac{C}{t}e^{-\sqrt{\lambda_{1}(M)}R}-Ct^{\beta-\frac{1}{n-1}}.\] We now estimate the first term on the right hand side. Integration by parts implies that \[\int_{M\setminus D}\left\langle\nabla\left|\nabla u\right|^{\alpha}, \nabla u^{\beta}\right\rangle\chi^{2}\psi^{2} = -\int_{M\setminus D}\left|\nabla u\right|^{\alpha}\left(\Delta u ^{\beta}\right)\chi^{2}\psi^{2}\] \[-\int_{M\setminus D}\left|\nabla u\right|^{\alpha}\left\langle \nabla u^{\beta},\nabla\left(\chi^{2}\psi^{2}\right)\right\rangle\] \[+\int_{\ell(1)}\left(u^{\beta}\right)_{\nu}\left|\nabla u\right| ^{\alpha}.\] By (2.7) and (2.12) we have that \[\int_{M\setminus D}\left|\nabla u\right|^{\alpha}\left|\left\langle \nabla u^{\beta},\nabla\psi^{2}\right\rangle\chi^{2}\right| \leq C\int_{L\left(\frac{1}{2}t,1\right)\setminus B_{p}(R)}u^{ \alpha+\beta}\chi^{2}\] \[\leq \frac{C}{t^{2-(\alpha+\beta)}}\int_{L\left(\frac{1}{2}t,1\right) \setminus B_{p}(R)}u^{2}\] \[\leq \frac{C}{t}e^{-2\sqrt{\lambda_{1}(M)}R}.\] Moreover, (2.11) and (2.12) imply that \[\int_{M\setminus D}\left|\nabla u\right|^{\alpha}\left\langle \nabla u^{\beta},\nabla\chi^{2}\right\rangle\psi^{2} \leq C\int_{L\left(\frac{1}{2}t,t\right)}\left|\nabla u\right|^{ \alpha+2}u^{\beta-2}\] \[\leq C\int_{L\left(\frac{1}{2}t,t\right)}\left|\nabla u\right|^{2}u^{ \beta-\frac{n}{n-1}}\] \[\leq \frac{C}{\beta-\frac{1}{n-1}}t^{\left(\beta-\frac{1}{n-1}\right)}\zeta.\] Plugging these estimates in (2.25) yields that \[-\int_{M\setminus D}\left(\Delta\left|\nabla u\right|^{\alpha} \right)u^{\beta}\chi^{2}\psi^{2} \geq \beta\left(1-\beta\right)\int_{M\setminus D}\left|\nabla u \right|^{\alpha+2}u^{\beta-2}\chi^{2}\psi^{2}\] \[-\int_{\ell(1)}\left(\left|\nabla u\right|^{\alpha}\right)_{\nu}+ \int_{\ell(1)}\left(u^{\beta}\right)_{\nu}\left|\nabla u\right|^{\alpha}\] \[-\frac{C}{t}e^{-\sqrt{\lambda_{1}(M)}R}-Ct^{\beta-\frac{1}{n-1}}.\] Therefore (2.24) becomes that \[\left(n-2\right)\int_{M\setminus D}\left|\nabla u\right|^{\alpha }u^{\beta}\chi^{2}\psi^{2} \geq \beta\left(1-\beta\right)\int_{M\setminus D}\left|\nabla u \right|^{\alpha+2}u^{\beta-2}\chi^{2}\psi^{2}\] \[-\int_{\ell(1)}\left(\left|\nabla u\right|^{\alpha}\right)_{\nu}+ \int_{\ell(1)}\left(u^{\beta}\right)_{\nu}\left|\nabla u\right|^{\alpha}\] \[-\frac{C}{t}e^{-\sqrt{\lambda_{1}(M)}R}-Ct^{\beta-\frac{1}{n-1}}.\] We now estimate the left hand side. By Young's inequality we have that \[\left|\nabla u\right|^{\alpha}\leq\frac{\alpha}{\alpha+2}A^{-2}\left|\nabla u \right|^{\alpha+2}u^{-2}+\frac{2}{\alpha+2}A^{\alpha}u^{\alpha},\] where \(A>0\) is a constant to be specified later. 
Hence, we obtain \[\int_{M\setminus D}\left|\nabla u\right|^{\alpha}u^{\beta}\chi^{2} \psi^{2} \leq \frac{\alpha}{\alpha+2}A^{-2}\int_{M\setminus D}\left|\nabla u \right|^{\alpha+2}u^{\beta-2}\chi^{2}\psi^{2}\] \[+\frac{2}{\alpha+2}A^{\alpha}\int_{M\setminus D}u^{\alpha+\beta} \chi^{2}\psi^{2}.\] Plugging this into (2.27) yields \[\Lambda_{1}\int_{M\setminus D}\left|\nabla u\right|^{\alpha+2}u^ {\beta-2}\chi^{2}\psi^{2} \leq \Lambda_{2}\int_{M\setminus D}u^{\alpha+\beta}\chi^{2}\psi^{2}\] \[+\int_{\ell(1)}\left(\left|\nabla u\right|^{\alpha}\right)_{\nu}- \int_{\ell(1)}\left(u^{\beta}\right)_{\nu}\left|\nabla u\right|^{\alpha}\] \[+\frac{C}{t}e^{-\sqrt{\lambda_{1}(M)}R}+Ct^{\beta-\frac{1}{n-1}},\] where \[\Lambda_{1} = \beta\left(1-\beta\right)-\frac{\alpha\left(n-2\right)}{\alpha+2 }A^{-2}\] \[\Lambda_{2} = \frac{2\left(n-2\right)}{\alpha+2}A^{\alpha}.\] We apply Proposition 2.1 for \(K=M\setminus D\) and \(\phi=u^{\frac{\alpha+\beta}{2}}\chi\psi\). As \(\partial K=\ell(1)\), and \(\nu=\frac{\nabla u}{\left|\nabla u\right|}\) is the outward unit normal to \(K\), we obtain that \[\lambda_{1}\left(M\right)\int_{M\setminus D}u^{\alpha+\beta}\chi^{2}\psi^{2} \leq\int_{M\setminus D}\left|\nabla\left(u^{\frac{\alpha+\beta}{2}}\chi\psi \right)\right|^{2}-\int_{\ell(1)}h_{\nu}.\] Note that \[\int_{M\setminus D}\left|\nabla\left(u^{\frac{\alpha+\beta}{2}} \chi\psi\right)\right|^{2} = \frac{\left(\alpha+\beta\right)^{2}}{4}\int_{M\setminus D} \left|\nabla u\right|^{2}u^{\alpha+\beta-2}\chi^{2}\psi^{2}\] \[+\int_{M\setminus D}u^{\alpha+\beta}\left|\nabla\left(\chi\psi \right)\right|^{2}\] \[+\frac{1}{2}\int_{M\setminus D}\left\langle\nabla u^{\alpha+ \beta},\nabla\left(\chi\psi\right)^{2}\right\rangle.\] By (2.7), (2.22), and (2.23) we conclude that \[\int_{M\setminus D}u^{\alpha+\beta}\left|\nabla\psi\right|^{2} \chi^{2} \leq \int_{L\left(\frac{1}{2}t,1\right)\setminus B_{p}(R)}u^{\alpha+ \beta}\chi^{2}\] \[\leq \frac{2}{t}\int_{L\left(\frac{1}{2}t,1\right)\setminus B_{p}(R)} u^{2}\chi^{2}\] \[\leq \frac{C}{t}e^{-2\sqrt{\lambda_{1}(M)}R}.\] Similarly, by additionally using (2.12) we get that \[\frac{1}{2}\int_{M\setminus D}\left\langle\nabla u^{\alpha+\beta},\nabla \psi^{2}\right\rangle\chi^{2}\leq\frac{C}{t}e^{-2\sqrt{\lambda_{1}(M)}R}.\] By (2.11), (2.22), and (2.23) we have \[\int_{M}u^{\alpha+\beta}\left|\nabla\chi\right|^{2}\psi^{2} \leq C\int_{L\left(\frac{1}{2}t,t\right)}\left|\nabla u\right|^{2}u^{ \beta-\frac{n}{n-1}}\] \[\leq \frac{C}{\beta-\frac{1}{n-1}}t^{\beta-\frac{1}{n-1}}.\] Similarly, \[\int_{M}\left\langle\nabla u^{\alpha+\beta},\nabla\chi^{2}\right\rangle\psi^{ 2}\leq\frac{C}{\beta-\frac{1}{n-1}}t^{\beta-\frac{1}{n-1}}.\] Combining all these estimates we arrive at \[\lambda_{1}\left(M\right)\int_{M\setminus D}u^{\alpha+\beta}\chi^ {2}\psi^{2}\] \[\leq \frac{\left(\alpha+\beta\right)^{2}}{4}\int_{M\setminus D}\left| \nabla u\right|^{2}u^{\alpha+\beta-2}\chi^{2}\psi^{2}\] \[-\int_{\ell(1)}h_{\nu}+\frac{C}{t}e^{-2\sqrt{\lambda_{1}\left(M \right)}R}+Ct^{\beta-\frac{1}{n-1}}.\] By Young's inequality, \[\left|\nabla u\right|^{2}\leq\frac{2}{\alpha+2}B^{-\alpha}\left|\nabla u \right|^{\alpha+2}u^{-\alpha}+\frac{\alpha}{\alpha+2}B^{2}u^{2}\] for a constant \(B>0\) to be specified later. 
This yields \[\int_{M\setminus D}\left|\nabla u\right|^{2}u^{\alpha+\beta-2} \chi^{2}\psi^{2} \leq \frac{2}{\alpha+2}B^{-\alpha}\int_{M\setminus D}\left|\nabla u \right|^{\alpha+2}u^{\beta-2}\chi^{2}\psi^{2}\] \[+\frac{\alpha}{\alpha+2}B^{2}\int_{M\setminus D}u^{\alpha+\beta} \chi^{2}\psi^{2}.\] Plugging this into (2.29), one concludes that \[\left(\lambda_{1}\left(M\right)-\frac{\alpha}{\alpha+2}\frac{ \left(\alpha+\beta\right)^{2}}{4}B^{2}\right)\int_{M\setminus D}u^{\alpha+ \beta}\chi^{2}\psi^{2}\] \[\leq \frac{\left(\alpha+\beta\right)^{2}}{2\left(\alpha+2\right)}B^{- \alpha}\int_{M\setminus D}\left|\nabla u\right|^{\alpha+2}u^{\beta-2}\chi^{2} \psi^{2}\] \[-\int_{\ell(1)}h_{\nu}+\frac{C}{t}e^{-2\sqrt{\lambda_{1}\left(M \right)}R}+Ct^{\beta-\frac{1}{n-1}}.\] By the assumption, \[\lambda_{1}\left(M\right)\geq\frac{\left(n-1\right)^{2}\delta^{2}}{4}\] with \[\delta=\frac{2\left(n-2\right)}{\left(n-1\right)^{2}}\sqrt{2n-3}. \tag{2.30}\] Therefore, \[\left(\frac{\left(n-1\right)^{2}\delta^{2}}{4}-\frac{\alpha}{\alpha+2} \frac{\left(\alpha+\beta\right)^{2}}{4}B^{2}\right)\int_{M\backslash D}u^{ \alpha+\beta}\chi^{2}\psi^{2}\] \[\leq \frac{\left(\alpha+\beta\right)^{2}}{2\left(\alpha+2\right)}B^{- \alpha}\int_{M\backslash D}|\nabla u|^{\alpha+2}\,u^{\beta-2}\chi^{2}\psi^{2}\] \[-\int_{\ell(1)}h_{\nu}+\frac{C}{t}e^{-2\sqrt{\lambda_{1}\left(M \right)}R}+Ct^{\beta-\frac{1}{n-1}}.\] We optimize this inequality by choosing \[B=\frac{\left(n-1\right)\delta}{\alpha+\beta}\] and obtain that \[\int_{M\backslash D}u^{\alpha+\beta}\chi^{2}\psi^{2} \leq \left(\frac{\alpha+\beta}{\left(n-1\right)\delta}\right)^{\alpha +2}\int_{M\backslash D}|\nabla u|^{\alpha+2}\,u^{\beta-2}\chi^{2}\psi^{2}\] \[-\frac{2\left(\alpha+2\right)}{\left(n-1\right)^{2}\delta^{2}} \int_{\ell(1)}h_{\nu}\] \[+\frac{C}{t}e^{-2\sqrt{\lambda_{1}\left(M\right)}R}+Ct^{\beta- \frac{1}{n-1}}.\] Plugging this into (2.28), we conclude that \[\Lambda\int_{M\backslash D}|\nabla u|^{\alpha+2}\,u^{\beta-2}\chi ^{2}\psi^{2} \leq -\frac{2\left(\alpha+2\right)}{\left(n-1\right)^{2}\delta^{2}} \Lambda_{2}\int_{l(1)}h_{\nu}\] \[+\int_{\ell(1)}\left(|\nabla u|^{\alpha}\right)_{\nu}-\int_{\ell (1)}\left(u^{\beta}\right)_{\nu}|\nabla u|^{\alpha}\] \[+\frac{C}{t}e^{-\sqrt{\lambda_{1}\left(M\right)}R}+Ct^{\beta- \frac{1}{n-1}},\] where \[\Lambda_{2}=\frac{2\left(n-2\right)}{\alpha+2}A^{\alpha}\] and \[\Lambda = \Lambda_{1}-\left(\frac{\alpha+\beta}{\left(n-1\right)\delta} \right)^{\alpha+2}\Lambda_{2}\] \[= \beta\left(1-\beta\right)-\frac{\alpha\left(n-2\right)}{\alpha+2 }A^{-2}\] \[-\left(\frac{\alpha+\beta}{\left(n-1\right)\delta}\right)^{\alpha +2}\frac{2\left(n-2\right)}{\alpha+2}A^{\alpha}.\] We optimize \(\Lambda\) by choosing \[A=\frac{\left(n-1\right)\delta}{\alpha+\beta}\] and obtain that \[\Lambda=\beta\left(1-\beta\right)-\frac{\left(n-2\right)\left(\alpha+\beta \right)^{2}}{\left(n-1\right)^{2}\delta^{2}}. \tag{2.32}\] Hence, (2.31) becomes \[\Lambda\int_{M\setminus D}\left|\nabla u\right|^{\alpha+2}u^{\beta-2} \chi^{2}\psi^{2} \leq -\frac{4\left(n-2\right)}{\left(\alpha+\beta\right)^{\alpha}\left( \left(n-1\right)\delta\right)^{2-\alpha}}\int_{\ell\left(1\right)}h_{\nu}\] \[+\int_{\ell\left(1\right)}\left(\left|\nabla u\right|^{\alpha} \right)_{\nu}-\int_{\ell\left(1\right)}\left(u^{\beta}\right)_{\nu}\left| \nabla u\right|^{\alpha}\] \[+\frac{C}{t}e^{-\sqrt{\lambda_{1}\left(M\right)}R}+Ct^{\beta- \frac{1}{n-1}}. 
\tag{2.33}\] Recall that \[\alpha = \frac{n-2}{n-1}\] \[\delta^{2} = \frac{4\left(n-2\right)^{2}}{\left(n-1\right)^{4}}\left(2n-3 \right),\] as specified in (2.22) and (2.30). We let \[\beta=\frac{n-2}{3n-5}. \tag{2.34}\] Note that for any \(n\geq 5\) we have \(\beta>\frac{1}{n-1},\) as required by (2.23). Furthermore, it follows that \(\Lambda=0\) by direct calculation. Consequently, (2.33) reduces to \[0 \leq -\frac{4\left(n-2\right)}{\left(\alpha+\beta\right)^{\alpha}\left( \left(n-1\right)\delta\right)^{2-\alpha}}\int_{\ell\left(1\right)}h_{\nu}\] \[+\int_{\ell\left(1\right)}\left(\left|\nabla u\right|^{\alpha} \right)_{\nu}-\int_{\ell\left(1\right)}\left(u^{\beta}\right)_{\nu}\left| \nabla u\right|^{\alpha}\] \[+\frac{C}{t}e^{-\sqrt{\lambda_{1}\left(M\right)}R}+Ct^{\beta- \frac{1}{n-1}}.\] However, by (2.3), \[\lambda_{1}\left(M\right)\operatorname{Vol}\left(D\cap B_{p}\left( R\right)\right) \leq \lambda_{1}\left(M\right)\int_{D}\psi^{2}\] \[= -\int_{D}\left(\Delta h+\left|\nabla h\right|^{2}\right)\psi^{2}\] \[= \int_{\ell\left(1\right)}h_{\nu}+\int_{D}\left\langle\nabla h, \nabla\psi^{2}\right\rangle\] \[-\int_{D}\left|\nabla h\right|^{2}\psi^{2}\] \[\leq \int_{\ell\left(1\right)}h_{\nu}+\int_{D}\left|\nabla\psi\right| ^{2},\] where \(\nu=\frac{\nabla u}{\left|\nabla u\right|}\) is the inward unit normal to \(\partial D=\ell\left(1\right).\) According to (2.8) it follows that \[\lambda_{1}\left(M\right)\operatorname{Vol}\left(D\cap B_{p}\left(R\right) \right)\leq\int_{\ell\left(1\right)}h_{\nu}+Ce^{-2\sqrt{\lambda_{1}\left(M \right)}R}.\] Making \(R\rightarrow\infty\) yields \[\lambda_{1}\left(M\right)\operatorname{Vol}\left(D\right)\leq\int_{\ell\left( 1\right)}h_{\nu}.\] As \(\Omega\subset D\), we conclude from (2.35) that \[\frac{4\left(n-2\right)\lambda_{1}\left(M\right)}{\left(\alpha+ \beta\right)^{\alpha}\left(\left(n-1\right)\delta\right)^{2-\alpha}}\text{Vol} \left(\Omega\right)\] \[\leq \int_{\ell\left(1\right)}\left(\left|\nabla u\right|^{\alpha} \right)_{\nu}-\int_{\ell\left(1\right)}\left(u^{\beta}\right)_{\nu}\left| \nabla u\right|^{\alpha}\] \[+\frac{C}{t}e^{-\sqrt{\lambda_{1}\left(M\right)}R}+Ct^{\beta- \frac{1}{n-1}}. 
\tag{2.36}\] Letting \(R\rightarrow\infty\) first and then \(t\to 0\) in (2.36) we arrive at \[\frac{4\left(n-2\right)\lambda_{1}\left(M\right)}{\left(\alpha+\beta\right)^{ \alpha}\left(\left(n-1\right)\delta\right)^{2-\alpha}}\text{Vol}\left(\Omega \right)\leq\int_{\ell\left(1\right)}\left(\left|\nabla u\right|^{\alpha} \right)_{\nu}-\int_{\ell\left(1\right)}\left(u^{\beta}\right)_{\nu}\left| \nabla u\right|^{\alpha}.\] Note that the mean curvature of \[\Sigma=\ell\left(1\right)=\partial\left(M\setminus D\right)\] satisfies \[H_{\Sigma}=-\frac{\left\langle\nabla\left|\nabla u\right|,\nabla u\right\rangle }{\left|\nabla u\right|^{2}}.\] Hence, \[\left(\left|\nabla u\right|^{\alpha}\right)_{\nu}=-\alpha H_{\Sigma}\left| \nabla u\right|^{\alpha}.\] This proves that \[\frac{4\left(n-2\right)\lambda_{1}\left(M\right)}{\left(\alpha+\beta\right)^{ \alpha}\left(\left(n-1\right)\delta\right)^{2-\alpha}}\text{Vol}\left(\Omega \right)\leq-\alpha\int_{\Sigma}H_{\Sigma}\left|\nabla u\right|^{\alpha}-\beta \int_{\Sigma}\left|\nabla u\right|^{\alpha+1}.\] From Young's inequality that \[\alpha\left|H_{\Sigma}\right|\left|\nabla u\right|^{\alpha}\leq\beta\left| \nabla u\right|^{\alpha+1}+\frac{\alpha}{\alpha+1}\left(\frac{\alpha^{2}}{ \left(\alpha+1\right)\beta}\right)^{\alpha}\left|H_{\Sigma}\right|^{\alpha+1},\] we conclude \[\Gamma\,\text{Vol}\left(\Omega\right)\leq\int_{\Sigma}\left|H_{\Sigma}\right| ^{\alpha+1}\leq\int_{\partial\Omega}\left|H\right|^{\alpha+1}, \tag{2.37}\] where \[\Gamma=\frac{4\left(n-2\right)}{\left(\alpha+\beta\right)^{\alpha}\left(\left( n-1\right)\delta\right)^{2-\alpha}}\frac{\alpha+1}{\alpha}\left(\frac{\left( \alpha+1\right)\beta}{\alpha^{2}}\right)^{\alpha}\lambda_{1}\left(M\right)\] and \[\alpha = \frac{n-2}{n-1},\ \ \ \ \beta=\frac{n-2}{3n-5},\] \[\delta = \frac{2\left(n-2\right)}{\left(n-1\right)^{2}}\sqrt{2n-3}.\] For \(n\geq 5\) we have that \[\delta < 1,\] \[\alpha+\beta < \frac{4}{3},\] \[\frac{\left(\alpha+1\right)\beta}{\alpha^{2}} > \frac{1}{2},\] \[\left(n-1\right)^{2-\alpha} < \sqrt{2}\left(n-1\right).\] Therefore, \[\Gamma>\frac{2}{3}\sqrt{n}\;\lambda_{1}\left(M\right).\] In conclusion, by (2.37) we have \[\frac{2}{3}\sqrt{n}\;\lambda_{1}\left(M\right)\operatorname{Vol}\left(\Omega \right)\leq\int_{\partial\Omega}\left|H\right|^{\frac{2n-3}{n-1}}.\] ## 3. Splitting of Ricci solitons In this section, we address the issue of nonexistence of compact minimal hypersurfaces in Ricci solitons. We begin with the case of steady solitons. Let \(\left(M,g,f\right)\) be a gradient steady Ricci soliton. Then the potential \(f\) satisfies the soliton equation \[\operatorname{Ric}+\operatorname{Hess}\left(f\right)=0.\] It is known [20] that \(f\) may be normalized so that \[S+\left|\nabla f\right|^{2}=1,\] where \(S\) is the scalar curvature. It is also known [8] that \(S>0\) unless \(\left(M,g\right)\) is Ricci flat. **Theorem 3.1**.: _Let \(\left(M^{n},g,f\right)\) be a steady Ricci soliton. Assume that there exists a smooth compact embedded minimal hypersurface \(\Sigma\) in \(M.\) Then \(\left(M,g\right)\) splits isometrically as a direct product \(\mathbb{R}\times\Sigma.\)_ Proof.: By the splitting theorem in [29] we may assume that \(M\) and its double covers all have one end. Hence, according to Proposition 5.2 in [6], the integral homology \[H_{n-1}\left(M,\mathbb{Z}\right)=\left\{0\right\}.\] In particular, \(\Sigma\) bounds a compact domain \(D\) in \(M.\) In [29] it was proved that \(\Delta_{f}\) has positive spectrum. Consequently, \(M\) is \(f\)-nonparabolic. 
This implies that there exists \(w>0\) on \(M\setminus D\) such that \[\Delta_{f}w = 0\;\text{ on }M\setminus D\] \[w = 1\text{ on }\Sigma\] \[\inf_{M\setminus D}w = 0.\] Moreover, \[\int_{M\setminus D}\left|\nabla w\right|^{2}e^{-f}<\infty. \tag{3.1}\] The Bochner formula implies that \[\frac{1}{2}\Delta_{f}\left|\nabla w\right|^{2}\geq\left|\nabla\left|\nabla w\right| \right|^{2}. \tag{3.2}\] We now prove, similar to Proposition 2.1, that \[0\leq\int_{M\setminus D}\left|\nabla\phi\right|^{2}e^{-f}-\int_{\Sigma}f_{ \nu}\phi^{2}e^{-f} \tag{3.3}\] for any smooth function \(\phi\) on \(M\setminus D\) that vanishes at infinity, where \(\nu=\frac{\nabla w}{\left|\nabla w\right|}\) is the outward unit normal to \(\partial\left(M\setminus D\right)=\Sigma=\left\{w=1\right\}.\) Note that \[\Delta_{f}\left(f\right) = \Delta f-\left|\nabla f\right|^{2}\] \[= -S-\left|\nabla f\right|^{2}\] \[= -1.\] Therefore, \[\int_{M\setminus D}\phi^{2}e^{-f} = -\int_{M\setminus D}\left(\Delta_{f}\left(f\right)\right)\phi^{2 }e^{-f}\] \[= \int_{M\setminus D}\left\langle\nabla\phi^{2},\nabla f\right\rangle e ^{-f}-\int_{\Sigma}f_{\nu}\phi^{2}e^{-f}\] \[\leq \int_{M\setminus D}\phi^{2}e^{-f}+\int_{M\setminus D}\left| \nabla\phi\right|^{2}e^{-f}\] \[-\int_{\Sigma}f_{\nu}\phi^{2}e^{-f}.\] This proves (3.3). Let \(\psi\) be a smooth function on \(M\setminus D\) with \(\psi=1\) on \(\Sigma\) and \(\psi=0\) outside a sufficiently large ball \(B_{p}(R).\) Setting \(\phi=\left|\nabla w\right|\psi\) in (3.3) we get that \[0 \leq \int_{M\setminus D}\left|\nabla\left(\left|\nabla w\right|\psi \right)\right|^{2}e^{-f}-\int_{\Sigma}f_{\nu}\left|\nabla w\right|^{2}e^{-f}\] \[= \int_{M\setminus D}\left|\nabla\left|\nabla w\right|\right|^{2} \psi^{2}e^{-f}+\frac{1}{2}\int_{M\setminus D}\left\langle\nabla\left|\nabla w \right|^{2},\nabla\psi^{2}\right\rangle e^{-f}\] \[+\int_{M\setminus D}\left|\nabla\psi\right|^{2}\left|\nabla w \right|^{2}e^{-f}-\int_{\Sigma}f_{\nu}\left|\nabla w\right|^{2}e^{-f}.\] By (3.2), this yields \[0 \leq \frac{1}{2}\int_{M\setminus D}\left(\Delta_{f}\left|\nabla w \right|^{2}\right)\psi^{2}+\frac{1}{2}\int_{M\setminus D}\left\langle\nabla \left|\nabla w\right|^{2},\nabla\psi^{2}\right\rangle e^{-f}\] \[+\int_{M\setminus D}\left|\nabla\psi\right|^{2}\left|\nabla w \right|^{2}e^{-f}-\int_{\Sigma}f_{\nu}\left|\nabla w\right|^{2}e^{-f}\] \[= \frac{1}{2}\int_{\Sigma}\left(\left|\nabla w\right|^{2}\right)_{ \nu}e^{-f}+\int_{M\setminus D}\left|\nabla\psi\right|^{2}\left|\nabla w \right|^{2}e^{-f}\] \[-\int_{\Sigma}f_{\nu}\left|\nabla w\right|^{2}e^{-f}.\] However, as \(\nu=\frac{\nabla w}{\left|\nabla w\right|}\) and \(H_{\Sigma}=0\), we see that \[\frac{1}{2}\left(\left|\nabla w\right|^{2}\right)_{\nu} = \left\langle\nabla\left|\nabla w\right|,\nabla w\right\rangle\] \[= \left(\Delta w\right)\left|\nabla w\right|\] \[= \left\langle\nabla f,\nabla w\right\rangle\left|\nabla w\right|\] \[= f_{\nu}\left|\nabla w\right|^{2}.\] Hence, (3.4) becomes an equality by letting \(R\rightarrow\infty\), which in turn forces (3.2) to be an equality. This implies the splitting of the manifold as a direct product \(\mathbb{R}\times\Sigma\). We refer to [29] for the details. An analogous result for expanding Ricci solitons holds true as well. 
Recall that an expanding Ricci soliton \(\left(M,g,f\right)\) satisfies the equation \[\mathrm{Ric}+\mathrm{Hess}\left(f\right)=-\frac{1}{2}g.\] We may normalize \(f\) (see [20]) such that \[S+\left|\nabla f\right|^{2}=-f.\] Moreover, the scalar curvature \(S\geq-\frac{n}{2}\) on \(M\) by [32]. **Theorem 3.2**.: _Let \(\left(M^{n},g,f\right)\) be an expanding gradient Ricci soliton with \(S\geq-\frac{n-1}{2}\) on \(M.\) Assume that there exists an embedded compact minimal hypersurface \(\Sigma\) in \(M.\) Then \(M\) splits isometrically as a direct product \(\mathbb{R}\times\Sigma.\)_ Proof.: Recall that by [30] such an expanding Ricci soliton must have one end or it splits as a direct product. Hence, by [6] we may assume as before that the integral homology \[H_{n-1}\left(M,\mathbb{Z}\right)=\left\{0\right\}\] and that \(\Sigma\) bounds a compact domain \(D\) in \(M.\) As \(\Delta_{f}\) has positive spectrum [30], \(M\) is \(f\)-nonparabolic. In particular, there exists function \(w>0\) on \(M\setminus D\) such that \[\Delta_{f}w = 0\text{ on }M\setminus D\] \[w = 1\text{ on }\Sigma\] \[\inf_{M\setminus D}w = 0.\] Moreover, \[\int_{M\setminus D}\left|\nabla w\right|^{2}e^{-f}<\infty.\] We now prove, similar to Proposition 2.1, that \[\frac{1}{2}\int_{M\setminus D}\phi^{2}e^{-f}\leq\int_{M\setminus D}\left| \nabla\phi\right|^{2}e^{-f}-\int_{\Sigma}f_{\nu}\phi^{2}e^{-f} \tag{3.5}\] for any smooth function \(\phi\) on \(M\setminus D\) that vanishes near infinity, where \(\nu=\frac{\nabla w}{\left|\nabla w\right|}\) is the unit normal to \(\Sigma=\left\{w=1\right\}.\) Direct calculation gives \[\Delta_{f}\left(f\right) = \Delta f-\left|\nabla f\right|^{2}\] \[= -\frac{n}{2}-S-\left|\nabla f\right|^{2}\] \[\leq -\frac{1}{2}-\left|\nabla f\right|^{2}.\] So we have \[\frac{1}{2}\int_{M\setminus D}\phi^{2}e^{-f} \leq -\int_{M\setminus D}\left(\Delta_{f}\left(f\right)\right)\phi^{2}e^ {-f}-\int_{M\setminus D}\left|\nabla f\right|^{2}\phi^{2}e^{-f}\] \[= \int_{M\setminus D}\left\langle\nabla\phi^{2},\nabla f\right\rangle e ^{-f}-\int_{\Sigma}f_{\nu}\phi^{2}e^{-f}-\int_{M\setminus D}\left|\nabla f \right|^{2}\phi^{2}e^{-f}\] \[\leq \int_{M\setminus D}\left|\nabla\phi\right|^{2}e^{-f}-\int_{ \Sigma}f_{\nu}\phi^{2}e^{-f}.\] This proves (3.5). We apply (3.5) to \(\phi=\left|\nabla w\right|\psi\), where \(\psi\) is a cut-off function as in Theorem 3.1 that \(\psi=1\) on \(\Sigma\) and that \(\psi=0\) outside the geodesic ball \(B_{p}(R)\) when \(R\) is large. 
It follows that \[\frac{1}{2}\int_{M\setminus D}\left|\nabla w\right|^{2}\psi^{2}e ^{-f} \leq \int_{M\setminus D}\left|\nabla\left|\nabla w\right|\right|^{2} \psi^{2}e^{-f}\] \[+\int_{M\setminus D}\left|\nabla w\right|^{2}\left|\nabla\psi \right|^{2}e^{-f}\] \[+\frac{1}{2}\int_{M\setminus D}\left\langle\nabla\left|\nabla w \right|^{2},\nabla\psi^{2}\right\rangle e^{-f}\] \[-\int_{\Sigma}f_{\nu}|\nabla w|^{2}e^{-f}.\] Recall the Bochner formula \[\frac{1}{2}\Delta_{f}\left|\nabla w\right|^{2}\geq\left|\nabla\left|\nabla w \right|\right|^{2}-\frac{1}{2}\left|\nabla w\right|^{2}\ \ \text{on}\ M\setminus D.\] Plugging into (3.6) yields that \[\frac{1}{2}\int_{M\setminus D}\left|\nabla w\right|^{2}\psi^{2}e ^{-f} \leq \frac{1}{2}\int_{M\setminus D}\left(\Delta_{f}\left|\nabla w \right|^{2}\right)\psi^{2}e^{-f}+\frac{1}{2}\int_{M\setminus D}\left|\nabla w \right|^{2}\psi^{2}e^{-f}\] \[+\int_{M\setminus D}\left|\nabla w\right|^{2}\left|\nabla\psi \right|^{2}e^{-f}+\frac{1}{2}\int_{M\setminus D}\left\langle\nabla\left| \nabla w\right|^{2},\nabla\psi^{2}\right\rangle e^{-f}\] \[-\int_{\Sigma}f_{\nu}\left|\nabla w\right|^{2}e^{-f}\] \[= \frac{1}{2}\int_{\Sigma}\left(\left|\nabla w\right|^{2}\right)_{ \nu}e^{-f}+\frac{1}{2}\int_{M\setminus D}\left|\nabla w\right|^{2}\psi^{2}e ^{-f}\] \[+\int_{M\setminus D}\left|\nabla w\right|^{2}\left|\nabla\psi \right|^{2}e^{-f}-\int_{\Sigma}f_{\nu}\left|\nabla w\right|^{2}e^{-f}.\] In conclusion, \[0 \leq \frac{1}{2}\int_{\Sigma}\left(\left|\nabla w\right|^{2}\right)_{ \nu}e^{-f}-\int_{\Sigma}f_{\nu}\left|\nabla w\right|^{2}e^{-f}\] \[+\int_{M\setminus D}\left|\nabla w\right|^{2}\left|\nabla\psi \right|^{2}e^{-f}.\] Making \(R\to\infty\), we get \[0\leq\frac{1}{2}\int_{\Sigma}\left(\left|\nabla w\right|^{2}\right)_{\nu}e^{ -f}-\int_{\Sigma}f_{\nu}\left|\nabla w\right|^{2}e^{-f}.\] Since \(\nu=\frac{\nabla w}{\left|\nabla w\right|},\) it follows that \[0\leq\int_{\Sigma}\left\langle\nabla\left|\nabla w\right|,\nabla w\right\rangle e ^{-f}-\int_{\Sigma}\left\langle\nabla f,\nabla w\right\rangle\left|\nabla w \right|e^{-f}.\] However, as \(w\) is \(f\)-harmonic, \[\left\langle\nabla\left|\nabla w\right|,\nabla w\right\rangle-\left\langle \nabla f,\nabla w\right\rangle\left|\nabla w\right|=-H_{\Sigma}\left|\nabla w \right|^{2}.\] Since \(\Sigma\) is minimal, this again shows the above inequality must be equality, which in turn forces the Bochner formula itself is also an equality. This suffices to conclude that \(M=\mathbb{R}\times\Sigma.\) One may refer to [30] for details. **Acknowledgment:** We wish to thank Pengfei Guan for his interest and comments. The first author was partially supported by the NSF grant DMS-1811845 and by a Simons Foundation grant.
The classical Minkowski inequality gives an upper bound on the volume of a bounded convex domain in terms of an integral of the mean curvature of its boundary. In this note, an analogue of this inequality is established, with no convexity assumption, for bounded smooth domains in a complete manifold whose bottom spectrum is sufficiently large relative to the lower bound of its Ricci curvature. The inequality implies the nonexistence of embedded compact minimal hypersurfaces in such a manifold. The nonexistence issue is also addressed for steady and expanding Ricci solitons.
2309.04235
Quasi-integrability and nonlinear resonances in cold atoms under modulation
Quantum dynamics of a collection of atoms subjected to phase modulation has been carefully revisited. We present an exact analysis of the evolution of a two-level system (represented by a spinor) under the action of a time-dependent matrix Hamiltonian. The dynamics is shown to evolve on two coupled potential energy surfaces, one of them binding and the other of scattering type. The dynamics is shown to be quasi-integrable, with nonlinear resonances. The bounded dynamics, with intermittent scattering at random moments, presents a scenario reminiscent of Anderson and dynamical localization. We believe that a careful analytical investigation of a multi-component system which is classically non-integrable is relevant to many other fields, including quantum computation with multi-qubit systems.
Rahul Gupta, Manan Jain, Sudhir R. Jain
2023-09-08T09:42:25
http://arxiv.org/abs/2309.04235v1
# Quasi-integrability and nonlinear resonances in cold atoms under modulation ###### Abstract Quantum dynamics of a collection of atoms subjected to phase modulation has been carefully revisited. We present an exact analysis of the evolution of a two-level system (represented by a spinor) under the action of a time-dependent matrix Hamiltonian. The dynamics is shown to evolve on two coupled potential energy surfaces, one of them binding and the other of scattering type. The dynamics is shown to be quasi-integrable, with nonlinear resonances. The bounded dynamics, with intermittent scattering at random moments, presents a scenario reminiscent of Anderson and dynamical localization. We believe that a careful analytical investigation of a multi-component system which is classically non-integrable is relevant to many other fields, including quantum computation with multi-qubit systems. ## 1 Introduction Developments in the fields of ultracold atoms and quantum physics over the past few decades have established these fields as a broad and well-recognized arena for the exploration of subjects such as quantum chaos [1], Feshbach resonances [2, 4, 3, 5, 6, 7, 8, 9, 10, 11, 12] and ultracold atomic mixtures [13, 14, 15, 16, 17], atom interferometry [18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32], atomic clocks [33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43], quantum diffraction [44, 45] and quantum thermodynamics [46, 47, 48, 49]. This is due to the rich internal structures, longer de Broglie wavelengths and tunable long-range interactions possessed by ultracold atoms. Furthermore, research in this low-temperature regime has also been extended to molecules [50, 51]. Apart from these recent developments, there has been a sustained effort to realize parallels between atomic and condensed matter physics [52]. One of the ideas pursued with great interest is the localization of states in disordered systems, pioneered by Anderson [53]. Owing to a commonsense analogy between disorder and chaos, a connection was brought out between the localization of wavefunctions of classically chaotic systems and disordered lattices of infinite [54] and finite [55] extent. Even in matter waves, the phenomenon of localization has been experimentally demonstrated [56]. Many years ago, an experiment carried out by the group led by Raizen [1] demonstrated the dynamical analog of Anderson localization in a system of cold atoms. In this experiment, about one hundred thousand \({}^{23}\)Na atoms were trapped in a spherical volume of 300 \(\mu\)m at a temperature of 17 \(\mu\)K. At the end of the preparation step, the trap was turned off and a modulated standing light field was switched on for 10 \(\mu\)s. The Hamiltonian describing the interaction of the sodium atom with the light field is given by [62] \[H_{0}=H_{\rm el}+\frac{p^{2}}{2m}+eF\cos\{k_{L}[x-\Delta L\sin\omega t]\}\cos \omega_{L}t. \tag{1}\] Here, \(H_{\rm el}\) contains the interaction of the valence electrons with the atom. The last term denotes the electric dipole interaction of the electromagnetic field with an electron. The laser frequency and wavenumber are denoted by \(\omega_{L}\) and \(k_{L}\), respectively, and \(\omega\) is the modulation frequency. Standing waves are generated by directing two counter-propagating laser beams into the trap, and the modulation is achieved by passing one beam through an electro-optical phase modulator.
The beam is made to strike a mirror in a cavity of length \(\Delta L\) which is moving with the modulation frequency, \(\omega\). The laser frequency was chosen close to the D\({}_{2}\) line of sodium. The electronic Hamiltonian can be reduced to a two-level system containing a ground state \(\psi^{-}|g\rangle\) and an excited state \(\psi^{+}|e\rangle\). \[\psi=\left(\begin{array}{c}\psi^{+}\\ \psi^{-}\end{array}\right)=\psi^{+}|e\rangle+\psi^{-}|g\rangle \tag{2}\] Taking the energy average of the two states as zero energy, the matrix elements of \(H_{\rm el}\) and \(eF\) together give \[H_{\rm el}+eF=\begin{pmatrix}\hbar\omega_{0}/2&\hbar\Omega\\ \hbar\Omega&-\hbar\omega_{0}/2\end{pmatrix}=\frac{\hbar\omega_{0}}{2}\sigma_{z}+\hbar\Omega\sigma_{x} \tag{3}\] where the transition frequency is denoted by \(\omega_{0}\), \(\Omega\) denotes the Rabi frequency, and the \(\sigma\)'s are the Pauli matrices. Thus, \(H_{0}\) may be written as \[H_{0}=\frac{p^{2}}{2m}{\bf I}+\frac{\hbar\omega_{0}}{2}\sigma_{z}+\hbar\Omega\cos\{k_{L}[x-\Delta L\sin\omega t]\}\cos(\omega_{L}t)\sigma_{x}, \tag{4}\] where \({\bf I}\) denotes an identity matrix. After we present the general Hamiltonian below, in §2 we present the Hamiltonian under the Rotating Wave Approximation. Within this approximation, the case of adiabatic perturbation is considered for small and for large detuning. In §3, the exact solution for this matrix Hamiltonian is given. The method transforms the dynamics under the matrix Hamiltonian to dynamics on potential energy surfaces. Classical dynamics reveals the presence of nonlinear resonances in §4. The classical system obeys the Kolmogorov-Arnold-Moser (KAM) theorem [57], and hence is quasi-integrable [58]. In the related context of the quantum Rabi model, a discussion of integrability [60] and symmetries [61] has been presented relatively recently. Special solutions are discussed as they have been used to analyze experiments carried out by different groups. For each case discussed at the quantum mechanical level, we also present classical phase space pictures and show that this atomic system provides a very interesting and deep instance of the association of quasi-integrability and dynamical localization. The phase space pictures exhibit certain misleading features in the approximated Hamiltonian, compared to the exact Hamiltonian obtained by systematic expansion in powers of \(\hbar\). _General Hamiltonian_ We now transform to a frame which is rotating with \(\omega_{L}\) about the \(z\)-axis in spin space: \[\psi_{\rm rot}=\exp\left(i\omega_{L}\sigma_{z}t/2\right)\psi. \tag{5}\] Substituting \(\psi\) in the Schrodinger equation, \(i\hbar\partial\psi/\partial t=H_{0}\psi\), we obtain the Hamiltonian governing the rotated wavefunction: \[H_{\rm rot}=\frac{p^{2}}{2m}{\bf I}+\frac{\hbar(\omega_{0}-\omega_{L})}{2}\sigma_{z}+\hbar\Omega\cos\{k_{L}[x-\Delta L\sin\omega t]\}\cos(\omega_{L}t)\,e^{i\omega_{L}\sigma_{z}t/2}\sigma_{x}e^{-i\omega_{L}\sigma_{z}t/2}. \tag{6}\] Using the standard identity, \(e^{i\omega_{L}\sigma_{z}t/2}\sigma_{x}e^{-i\omega_{L}\sigma_{z}t/2}=\sigma_{x}\cos\omega_{L}t-\sigma_{y}\sin\omega_{L}t\), we have the transformed Hamiltonian: \[H_{\rm rot}=\frac{p^{2}}{2m}{\bf I}+\frac{\hbar(\omega_{0}-\omega_{L})}{2}\sigma_{z}+\frac{\hbar\Omega}{2}\cos\{k_{L}[x-\Delta L\sin\omega t]\}\,[\sigma_{x}(1+\cos 2\omega_{L}t)-\sigma_{y}\sin 2\omega_{L}t].
\tag{7}\] This is the general Hamiltonian for the physical situation described above where there are terms oscillating with twice the \(\omega_{L}\). ## 2 Rotating Wave Approximation The Schrodinger equation for \(H_{\rm rot}\) is usually solved under the Rotating Wave Approximation (RWA) [62, 65]. Here the terms oscillating with frequency \(2\omega_{L}\) are neglected. This leads to a simplified Hamiltonian, \[H_{rot}^{\rm RWA}=\frac{p^{2}}{2m}{\bf I}+\hbar\Omega_{\rm eff}(\sigma_{z}\cos \alpha+\sigma_{x}\sin\alpha) \tag{8}\] where \[\Omega_{\rm eff}=\frac{1}{2}[(\omega_{0}-\omega_{L})^{2}+\Omega^{ 2}\cos^{2}\{k_{L}(x-\Delta L\sin\omega t)]\}]^{1/2},\] \[\tan\alpha=\frac{\Omega\cos[k_{L}(x-\Delta L\sin\omega t)]}{ \omega_{0}-\omega_{L}}. \tag{9}\] Let us rotate the state of this Hamiltonian further in the spin space by an angle \((-\alpha/2)\) about the \(y\)-axis, to obtain a new state, \(\psi^{\prime}=\psi^{\prime+}|e\rangle+\psi^{\prime-}|g\rangle=\exp(i\alpha \sigma_{y}/2)\psi_{rot}\) \[\psi^{\prime}=\left(\begin{array}{c}\cos(\alpha/2)e^{i\omega_{L}t/2}\psi^{+}+ \sin(\alpha/2)e^{-i\omega_{L}t/2}\psi^{-}\\ -\sin(\alpha/2)e^{i\omega_{L}t/2}\psi^{+}+\cos(\alpha/2)e^{-i\omega_{L}t/2} \psi^{-}\end{array}\right) \tag{10}\] in which the second term is diagonal. Consequently, the equation satisfied by \(\psi^{\prime}\) is \[i\hbar\frac{\partial\psi^{\prime}}{\partial t}=-\frac{\hbar}{2}\frac{\partial \alpha}{\partial t}\sigma_{y}\psi^{\prime}+e^{i\alpha\sigma_{y}/2}H_{\rm rot }^{\rm RWA}e^{-i\alpha\sigma_{y}/2}\psi^{\prime}=H_{\rm eff}^{\rm RWA}\psi^{ \prime}. \tag{11}\] But this will transform the kinetic term as [66]: \[e^{i\alpha\sigma_{y}/2}p^{2}{\bf I}e^{-i\alpha\sigma_{y}/2}\psi ^{\prime}=\left(p{\bf I}-\hbar{\bf A}\right)^{2}\psi^{\prime}={\bf\Pi}^{2} \psi^{\prime} \tag{12}\] \[{\bf A}=\frac{\sigma_{y}}{2}\frac{\partial\alpha}{\partial x}= \frac{-k_{L}\delta_{L}\Omega\sin[k_{L}(x-\Delta L\sin\omega t)]\sigma_{y}}{2 \left({\delta_{L}}^{2}+\Omega^{2}\cos^{2}[k_{L}(x-\Delta L\sin\omega t)]\right)} \tag{13}\] where \({\bf I}\) is an identity matrix. Now we can employ the well-known identity: \[e^{i\alpha(\hat{n}.\vec{\sigma})}\vec{\sigma}e^{-i\alpha(\hat{n}.\vec{\sigma} )}=\vec{\sigma}\cos 2\alpha+\hat{n}\times\vec{\sigma}\sin 2\alpha+\hat{n}( \hat{n}.\vec{\sigma})(1-\cos 2\alpha). \tag{14}\] While the "potential" part of the Hamiltonian becomes diagonal with these unitary transformations, the kinetic term modifies to \((p{\bf I}-\hbar{\bf A})^{2}\). This has terms of order 1, \(\hbar\), and \(\hbar^{2}\) - thus, a semiclassical expansion (and not a perturbative expansion) appears naturally. Moreover, since \({\bf A}\) has non-zero diagonal matrix elements, there is a possibility of a geometric phase appearing in the state of the atoms as the system evolves. This is indeed due to the cavity modulation. Dimensionally, \(\hbar{\bf A}/e\) is a magnetic vector potential. \(H_{\rm eff}^{\rm RWA}\) can be written as: \[H_{\rm eff}^{\rm RWA} =\frac{{\bf\Pi}^{2}}{2m}+\hbar\Omega_{\rm eff}\sigma_{z}-\frac{ \hbar}{2}\frac{\partial\alpha}{\partial t}\sigma_{y}, \tag{15}\] \[=\left[\frac{p^{2}}{2m}-\frac{\hbar^{2}}{4}\left(\frac{\partial \alpha}{\partial x}\right)^{2}\right]{\bf I}+\hbar\Omega_{\rm eff}\sigma_{z}+ \left(-\frac{\hbar}{2}\frac{\partial\alpha}{\partial t}-\hbar\frac{\partial \alpha}{\partial x}p+\frac{i\hbar^{2}}{2}\frac{\partial\alpha}{\partial x} \right)\sigma_{y}. \tag{16}\] Except for terms of order O(\(\hbar^{2}\)), each of the terms can make a significant contribution. 
At this point, one of the possible simplifications occurs if \(\alpha\) is slowly varying with time. This leads us to consider applying the adiabatic approximation, which we discuss now. ### Adiabatic variation We may neglect the term \(\hbar\sigma_{y}d\alpha/dt\). But note that in this case: \[\hbar\sigma_{y}\frac{d\alpha}{dt}=\hbar\frac{\partial\alpha}{\partial x}p \sigma_{y}+\hbar\frac{\partial\alpha}{\partial t}\sigma_{y}\to 0. \tag{17}\] The adiabatic Hamiltonian is: \[H_{\rm ad}^{\rm RWA}=\left[\frac{p^{2}}{2m}-\frac{\hbar^{2}}{4}\left(\frac{ \partial\alpha}{\partial x}\right)^{2}\right]{\bf I}+\hbar\Omega_{\rm eff} \sigma_{z}+\left(\frac{\hbar}{2}\frac{\partial\alpha}{\partial t}+\frac{i \hbar^{2}}{2}\frac{\partial\alpha}{\partial x}\right)\sigma_{y}. \tag{18}\] It matters a lot if the detuning is small or large. This is because \[\frac{\partial\alpha}{\partial x}=-\frac{k_{L}\frac{\delta_{L}}{\Omega}\sin[k_{L}(x -\Delta L\sin\omega t)]}{\left(\frac{\delta_{L}}{\Omega}\right)^{2}+\cos^{2}[k _{L}(x-\Delta L\sin\omega t)]};\qquad\frac{\partial\alpha}{\partial t}=\frac{ \omega\frac{\delta_{L}}{\Omega}\sin[k_{L}(x-\Delta L\sin\omega t)]\cos\omega t} {\left(\frac{\delta_{L}}{\Omega}\right)^{2}+\cos^{2}[k_{L}(x-\Delta L\sin \omega t)]}. \tag{19}\] So either for small or large detuning, \[\delta_{L}\ll\Omega\quad\text{or}\quad\delta_{L}\gg\Omega\quad\Rightarrow \quad\frac{\partial\alpha}{\partial t},\frac{\partial\alpha}{\partial x}\to 0. \tag{20}\] #### 2.1.1 Small detuning Here, \(\omega_{0}\sim\omega_{L}\), thus \(\tan\alpha\to\infty\) or \(\alpha\sim\pi/2\). Considering (20) and keeping the terms up O(\(\hbar\)), the adiabatic Hamiltonian further simplifies to \[H^{\text{RWA}}_{\text{ad,s}}=\frac{p^{2}}{2m}\mathbf{I}+\hbar\Omega_{\text{ eff}}\sigma_{z}. \tag{21}\] Exploiting the smallness of detuning, we may expand binomially to obtain \[H^{\text{RWA},\pm}_{\text{ad,s}} =\frac{p^{2}}{2m}\pm\frac{\hbar\Omega}{2}\cos[k_{L}(x-\Delta L\sin \omega t)]\left[1+\frac{(\omega_{0}-\omega_{L})^{2}}{2\Omega^{2}\cos^{2}[k_{L}( x-\Delta L\sin\omega t)]}\right]\] \[+\mathcal{O}\left(\left(\frac{\omega_{0}-\omega_{L}}{\Omega} \right)^{3}\right). \tag{22}\] These provide the two potential energy surfaces on which the two-level system evolves, connected by tunneling. This can be seen by the fact that the intersection of the two curves occurs when \(\Omega_{\text{eff}}\) is zero, leading to \[x =\Delta L\,\sin\omega t+\frac{\pi}{2k_{L}}+i\log\left(\sqrt{1- \frac{\delta_{L}^{2}}{2\Omega^{2}}}-\frac{\delta_{L}}{\sqrt{2}\Omega}\right)\] \[\simeq\Delta L\,\sin\omega t+\frac{\pi}{2k_{L}}-i\frac{\sqrt{2} \delta_{L}}{2\Omega} \tag{23}\] for small detuning. The binding part of the potential in (22) supports eigenvalues. However, since the Hamiltonian is periodic in time, the eigenvalues are quasienergies. Owing to the imaginary part, these are more precisely "quasienergy resonances". #### 2.1.2 Large detuning We consider the case where we have RWA and adiabatic approximation but \(\delta_{L}\gg\Omega\). Then we have the Hamiltonian, \[H^{\text{RWA}}_{\text{ad,l}}=\begin{pmatrix}p^{2}/2m+\hbar\Omega_{\text{eff}}& 0\\ 0&p^{2}/2m-\hbar\Omega_{\text{eff}}\end{pmatrix}. \tag{24}\] This can be decomposed into two Hamiltonians: \[H^{\rm RWA,\pm}_{\rm ad,l}=\frac{p^{2}}{2m}\pm\frac{\hbar\delta_{L}}{2}\left[1+ \frac{\Omega^{2}}{2\delta_{L}^{2}}\cos^{2}[k_{L}(x-\Delta L\sin\omega t)]\right]+ \mathcal{O}\left(\left(\frac{\Omega}{\omega_{0}-\omega_{L}}\right)^{3}\right). 
\tag{25}\] The potential energy curves intersect when \[x(t)=\left(n+\frac{1}{2}\right)\frac{\pi}{k_{L}}+\Delta L\,\sin\omega t. \tag{26}\] Here the intersection points are real where the real part is the same as for small detuning. The potential energy curves support sharp quasienergies. ## 3 Exact solution We now return to the (7) and lift all the approximations considered in the last Section. The Hamiltonian is written as \[H_{\rm rot}=\frac{p^{2}}{2m}\mathbf{I}+\begin{pmatrix}a&b\\ b*&-a\end{pmatrix}\equiv\frac{p^{2}}{2m}\mathbf{I}+\mathcal{M} \tag{27}\] where \(a=\hbar(\omega_{0}-\omega_{L})/2\), \(b=b_{1}+ib_{2}\) with \[b_{1} =\frac{\hbar\Omega}{2}\cos[k_{L}(x-\Delta L\sin\omega t)](1+\cos 2 \omega_{L}t),\] \[b_{2} =\frac{\hbar\Omega}{2}\cos[k_{L}(x-\Delta L\sin\omega t)]\sin 2 \omega_{L}t. \tag{28}\] The matrix, denoted by \(\mathcal{M}\) in (27) can be diagonalized by a matrix \(\mathcal{S}\) to get the diagonal matrix, \(\mathcal{J}\). The matrices are \[\mathcal{S}=\begin{pmatrix}\frac{(a-\sqrt{a^{2}+b_{1}^{2}+b_{2}^{2}})(b_{1}+ib_ {2})}{b_{1}^{2}+b_{2}^{2}}&\frac{(a+\sqrt{a^{2}+b_{1}^{2}+b_{2}^{2}})(b_{1}+ib _{2})}{b_{1}^{2}+b_{2}^{2}}\\ 1&1\end{pmatrix} \tag{29}\] and \[\mathcal{J}=\begin{pmatrix}-\sqrt{a^{2}+b_{1}^{2}+b_{2}^{2}}&0\\ 0&\sqrt{a^{2}+b_{1}^{2}+b_{2}^{2}}\end{pmatrix}. \tag{30}\] Define \(\psi_{1}=\mathcal{S}^{-1}\psi_{\rm rot}\) with \(i\hbar\partial\psi_{\rm rot}/\partial t=\mathcal{H}\psi_{\rm rot}\). The equation for the time evolution of \(\psi_{1}\) is \[i\hbar\frac{\partial\psi_{1}}{\partial t}=-i\hbar\mathcal{S}^{-1}\frac{ \partial\mathcal{S}}{\partial t}\psi_{1}+\mathcal{S}^{-1}\frac{p^{2}}{2m} \mathbf{I}\mathcal{S}\psi_{1}+\mathcal{J}\psi_{1}. \tag{31}\] Now, \(\mathcal{S}^{-1}p^{2}\mathcal{S}=(\mathcal{S}^{-1}p\mathcal{S})^{2}=(p-i \hbar\mathcal{S}^{-1}\partial\mathcal{S}/\partial x)^{2}\). Here we again have a vector potential which is an artificial gauge field. The Hamiltonian is thus written as an expansion [66, 67], \[H=H_{0}+\hbar H_{1}+\hbar^{2}H_{2} \tag{32}\] with \(H_{0}\) has a simple form: \[H_{0}=\frac{p^{2}}{2m}\mathbf{I}+\begin{pmatrix}-\sqrt{a^{2}+b_{1}^{2}+b_{2}^{2 }}&0\\ 0&\sqrt{a^{2}+b_{1}^{2}+b_{2}^{2}}.\end{pmatrix} \tag{33}\] Writing \(\psi_{1}=(\psi_{1}^{(+)}\quad\psi_{1}^{(-)})^{T}\) with the superscript, \(T\) denoting the transpose, we have written the state with two components. The classical Hamiltonians corresponding to the states, \(\psi_{1}^{(\pm)}\) are \[H_{0}^{(\pm)}=\frac{p^{2}}{2m}\pm\frac{\hbar(\omega_{0}-\omega_{L})}{2}\left( 1+\frac{4\Omega^{2}}{(\omega_{0}-\omega_{L})^{2}}\cos^{2}[k_{L}(x-\Delta L\sin \omega t)]\cos^{2}\omega_{L}t\right)^{1/2}. \tag{34}\] Usually, \(\psi_{1}^{(+)}\) is subjected to a binding potential and \(\psi_{1}^{(-)}\) is evolving on a scattering potential. There are two potential energy surfaces, \(\pm\sqrt{a^{2}+b_{1}^{2}+b_{2}^{2}}\) on which the full two-component wavefunction, \(\psi_{1}\) evolves. The potential energy surfaces meet at the solution of \[a^{2}+b_{1}^{2}+b_{2}^{2}=0. \tag{35}\] The solution is \[x =\Delta L\sin\omega t+\frac{1}{k_{L}}\cos^{-1}\left[\pm i\,\frac{ (\omega_{0}-\omega_{L})}{2\Omega}\sec(\omega_{L}t)\right]\] \[=\Delta L\sin\omega t+\frac{\pi}{2k_{L}}+i\,\frac{1}{k_{L}}\log \left[1\mp\frac{\delta_{L}}{2\Omega}\sec(\omega_{L}t)+\frac{\delta_{L}^{2}}{ 8\Omega^{2}}\sec^{2}(\omega_{L}t)\right]. 
\tag{36}\] For small detuning (\(\delta_{L}\ll\Omega\)), the potential curves intersect at \[x=\Delta L\sin\omega t+\frac{\pi}{2k_{L}}\mp i\,\frac{\delta_{L}}{2\Omega}\sec(\omega_{L}t)\pm i\,\frac{\delta_{L}^{3}}{48\Omega^{3}}\sec^{3}(\omega_{L}t). \tag{37}\] Figure 1: Potential Energy Surface (PES) at (a) large detuning (\(\delta_{L}\gg\Omega\)) and (b) small detuning (\(\delta_{L}\ll\Omega\)). At large detuning the gap shrinks, allowing a larger region of space for the crossing of the PES. The complex values at which the potential energy surfaces cross imply tunneling of atoms. Tunneling across these surfaces, where the underlying dynamics is nonlinear, is accompanied by very interesting related phenomena such as resonance-assisted tunneling [63], which has recently been realized experimentally [64]. Figs. 1(a) and (b) show these crossings in the complex position plane. We note that the crossing gap at the null imaginary position plane vanishes as one approaches resonance (at small detuning) and remains wide open at large detuning. In (34), for large detuning, \(\Omega^{2}/(\omega_{0}-\omega_{L})^{2}\ll 1\), a Taylor expansion immediately yields \[H_{0,l}^{(\pm)}=\frac{p^{2}}{2m}\pm\frac{\hbar(\omega_{0}-\omega_{L})}{2}\left(1+\frac{2\Omega^{2}}{(\omega_{0}-\omega_{L})^{2}}\cos^{2}[k_{L}(x-\Delta L\sin\omega t)]\cos^{2}\omega_{L}t\right). \tag{38}\] Among the two Hamiltonians, \(H_{0,l}^{(-)}\) is binding; the second term in the Taylor expansion of \(\cos[k_{L}(x-\Delta L\sin\omega t)]\), together with the overall negative sign, makes it roughly parabolic, at least for small arguments. For the same reason, \(H_{0,l}^{(+)}\) is a scattering potential. Figure 2: Comparison of Poincare sections for Hamiltonians under different approximations for the case of large detuning for the same set of parameters used in Fig. 3. (a) Shows the un-approximated case corresponding to the exact solution. (b) Shows the application of binomial approximation to the exact solution. (c) Corresponds to the RWA+Adiabatic approximation and (d) corresponds to the RWA+adiabatic+Binomial approximation. Initial conditions and number of evolution steps are kept the same for all cases here. The differences in the Poincare sections for the various cases can be seen in Fig. 2. We found that the three-island ring, present in both the un-approximated case and the RWA+adiabatic case, vanishes if we make a binomial approximation, implying that the origin of this resonance lies purely in the higher-order terms of (38) and (22). We also note that the chaos is more apparent in the binomial case but less severe in all other cases. We now study the classical mechanics of these Hamiltonians. ## 4 Quasi-integrability In this Section, we study the classical dynamics of the Hamiltonians obtained above under different approximations. We begin with the exact Hamiltonian, namely (32), and consider only \(H_{0}^{(-)}\) in (34). We make the following transformations to convert it to a dimensionless form almost identical to that of [65]: \[t\rightarrow\frac{t}{\omega}\,\qquad x\rightarrow\frac{x}{2k_{L}}\,\qquad p\rightarrow\frac{M\omega p}{2k_{L}}\,\qquad H_{0}^{-}\rightarrow\frac{M\omega^{2}H_{0}^{-}}{4k_{L}^{2}}\] \[\lambda=2k_{L}\Delta L\,\qquad\gamma=\frac{\omega_{L}}{\omega}\,\qquad\eta=\left(\frac{\Omega}{\delta_{L}}\right)^{2}\,\qquad K=\frac{\hbar k_{L}^{2}\Omega^{2}}{2M\omega^{2}\delta_{L}} \tag{39}\] where \(\eta\) is the strength of the Rabi resonance and \(\delta_{L}=\omega_{0}-\omega_{L}\) is the laser detuning. 
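To make the rescaling (39) and the stroboscopic (Poincare-section) sampling used for Figs. 2 and 3 concrete, the following is a minimal numerical sketch, not the code behind the figures. The exact dimensionless Hamiltonian produced by this rescaling is written next; the sketch instead integrates the simpler large-detuning RWA+adiabatic form (25), which under (39) reduces, up to a constant, to \(H=p^{2}/2-K\cos(x-\lambda\sin t)\). The parameter values, integrator settings, and helper names (`rhs`, `poincare_section`) are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Dimensionless parameters of Eq. (39); numerical values are placeholders,
# not those used in the figures.  gamma and eta enter only the exact
# dimensionless Hamiltonian, not the RWA form integrated here.
lam, gamma, eta, K = 1.5, 10.0, 0.05, 0.36

def rhs(t, y):
    """Hamilton's equations for H = p^2/2 - K cos(x - lam sin t)."""
    x, p = y
    return [p, -K * np.sin(x - lam * np.sin(t))]

def poincare_section(x0, p0, n_periods=500):
    """Stroboscopic map: record (x, p) once per modulation period T = 2*pi."""
    T = 2.0 * np.pi
    pts = np.empty((n_periods, 2))
    y = [x0, p0]
    for k in range(n_periods):
        sol = solve_ivp(rhs, (k * T, (k + 1) * T), y, rtol=1e-9, atol=1e-9)
        y = sol.y[:, -1]
        pts[k] = y
    return pts

# A cloud of initial conditions loaded in one island, as in Fig. 3(a).
rng = np.random.default_rng(0)
cloud = [poincare_section(x0, p0, n_periods=50)
         for x0, p0 in zip(rng.normal(np.pi, 0.05, 20),
                           rng.normal(0.0, 0.02, 20))]
```

Overlaying such stroboscopic points for many initial conditions reproduces island structures of the kind compared in Fig. 2; swapping `rhs` for the exact dimensionless Hamiltonian restores the \(\gamma\)- and \(\eta\)-dependence.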
The simplified Hamiltonian yields: \[H_{0}^{-}=\frac{p^{2}}{2}-\frac{4K}{\eta}\left[1+2\eta(1+\cos(x-\lambda\sin t ))\cos^{2}\gamma t\right]^{\frac{1}{2}} \tag{40}\] Now, using the same transformations (39), we write the Hamiltonians for large detuning, neglecting the constant terms: \[H_{0,l}^{-}\simeq\frac{p^{2}}{2}-4K\cos(x-\lambda\sin t)\cos^{2 }\gamma t, \tag{41}\] \[H_{\rm ad,l}^{\rm RWA,-}\simeq\frac{p^{2}}{2}-K\cos(x-\lambda \sin t). \tag{42}\] This clearly implies a drastic change in the equation since if \(\gamma\gg 1\), thus even if we use \(\langle\cos^{2}\gamma t\rangle=1/2\), the second term contributes double compared to the contribution coming from the usual case with adiabatic and RWA approximation. In order to understand the underlying phase space structure, we initialize 1000 ultracold atoms (blue dots) in one of the island in the Poincare section taken in steps of modulation time period \(T\) as shown in Fig. 3 (top) and look at its stroboscopic evolution in multiples of the modulation time period. We find that after each modulation period, atoms move from one island to another lying around the same larger elliptic-like orbit (Fig. 3 (middle)). Similarly, we find that the number of islands is equal to (or twice if \(n\) is even) the number of modulation periods \(n\) for the marked islands in Fig. 3 (bottom). In other words, these islands satisfies \(T_{\rm orbit}=nT\) or \(\Omega_{\rm orbit}/\omega=1/n\). To study the origin of these patterns in resonance structures, we write the dimensionless Hamiltonian (42) in action-angle variables. Let us write one of the RWA Hamiltonians as a perturbed harmonic oscillator: \[H_{0,\mathrm{l}}^{\mathrm{RWA},-} =\frac{p^{2}}{2}+\frac{Kx^{2}}{2}-\left(K\cos(x-\lambda\sin t)+ \frac{Kx^{2}}{2}\right) \tag{43}\] \[=H_{\mathrm{h.o.}}+\epsilon\Delta H. \tag{44}\] where \(\epsilon\) is introduced for book-keeping (eventually, we shall put \(\epsilon=1\)). Employing the oscillator action-angle variables, \((J,\theta)\), with \(x=\sqrt{\frac{J}{\pi\Omega}}\sin(\theta)\) and \(p=\sqrt{\frac{J\Omega}{\pi}}\cos(\theta)\) with \(K=\Omega^{2}\), the Hamiltonians are: \[H_{h.o.} =\frac{\Omega J}{2\pi} \tag{45}\] \[\Delta H =-\Omega^{2}\cos\left(\sqrt{\frac{J}{\pi\Omega}}\sin\theta- \lambda\sin t\right)-\frac{J\Omega}{2\pi}\sin^{2}\theta. \tag{46}\] We use the classical time-dependent perturbation theory [57] to calculate the associated action of this Hamiltonian up to first order in perturbation. For this, we transform the action variables in a way that the new Hamiltonian \(\bar{H}\) is only a function of the new action variable \(\bar{J}\) alone. We obtain \[\langle\Delta H\rangle =\frac{1}{2\pi}\int_{0}^{2\pi}dt\frac{1}{2\pi}\int_{0}^{2\pi}d \theta\Delta H(J,\theta,t)\] \[=-\Omega^{2}J_{0}\left(\sqrt{\frac{\bar{J}}{\Omega\pi}}\right)J_ {0}(\lambda)-\frac{\bar{J}\Omega}{4\pi} \tag{47}\] \[\bar{H}(\bar{J}) =\frac{\Omega\bar{J}}{2\pi}-\epsilon\Omega^{2}J_{0}\left(\sqrt{ \frac{\bar{J}}{\Omega\pi}}\right)J_{0}(\lambda)-\epsilon\frac{\bar{J}\Omega}{ 4\pi} \tag{48}\] Figure 3: Poincaré Sections taken in steps of modulation period using the same parameter as in [1]. (a) 1000 ultracold atoms (purple dots) are loaded in one of the islands of stability in the Poincare section taken in steps of the driving period T. (b) stroboscopic evolution of the ultracold atoms reveals that they evolve with a period 4T. (c) Similarly, loading on different islands of stability shows the existence of 3T, 11T/3, 4T and 5T periods predominantly. 
where \(J_{0}(.)\) is the cylindrical Bessel function of order zero. The new frequency is \[\Omega^{\prime}(\bar{J})=2\pi\frac{\partial\bar{H}}{\partial\bar{J}}=\Omega(1- \epsilon/2)-2\epsilon\pi\Omega^{2}J_{0}^{\prime}\left(\sqrt{\frac{\bar{J}}{ \Omega\pi}}\right)J_{0}(\lambda) \tag{49}\] where prime on the Bessel function denotes a derivative with respect to its argument. We subtract this \(\epsilon(\Delta H)\) from \(\epsilon\Delta H\) to obtain the oscillating part \(\epsilon\{\Delta H\}\). For calculating the integral, we expand the potential term using Jacobi-Anger expansion [59]\(e^{iz\sin\theta}=\sum_{n=-\infty}^{+\infty}J_{n}(z)e^{in\theta}\): \[\{\Delta H\} =-\sum_{n,m=-\infty}^{\infty}\Omega^{2}J_{n}\left(\sqrt{\frac{ \bar{J}}{\Omega\pi}}\right)J_{m}(\lambda)\cos(n\bar{\theta}-mt)+\frac{\bar{J} \Omega}{4\pi}\cos 2\bar{\theta} \tag{50}\] \[\equiv\sum_{n,m=-\infty}^{\infty}\Delta H_{n,m}(\bar{J},\bar{ \theta},t)+\frac{\bar{J}\Omega}{4\pi}\cos 2\bar{\theta} \tag{51}\] where both \(n,m\) are non-zero. The change in action \(\epsilon\Delta S\) can be calculated as \[\epsilon\Delta S =-\int^{t}dt\epsilon\{\Delta H\} \tag{52}\] \[=\sum_{n,m=-\infty}^{\infty}\epsilon\Delta S_{n,m}(\bar{J},\bar{ \theta},t)+\frac{\epsilon\bar{J}\Omega}{8\pi\bar{\Omega}(\bar{J})}\sin 2\bar{\theta} \tag{53}\] where \[\epsilon\Delta S_{n,m}=\frac{-\epsilon\Omega^{2}}{n\bar{\Omega}(\bar{J})-m}J_ {n}\left(\sqrt{\frac{\bar{J}}{\Omega\pi}}\right)J_{m}(\lambda)\sin(n\bar{ \theta}-mt) \tag{54}\] Consequent to the above, \[\bar{J}=J-\epsilon\frac{\partial\Delta S}{\partial\theta}(J,\theta,t)\ ;\ \bar{ \theta}=\theta+\epsilon\frac{\partial\Delta S}{\partial J}(J,\theta,t). \tag{55}\] The new action-angle variables can be calculated up to first order as \[\bar{J} =J+\epsilon\frac{n\Omega^{2}}{n\bar{\Omega}(J)-m}J_{n}\left( \sqrt{\frac{J}{\Omega\pi}}\right)J_{m}(\lambda)\cos(n\theta-mt)-\epsilon\frac {J\Omega}{4\pi}\cos 2\theta, \tag{56}\] \[\bar{\theta} =\theta+\epsilon\frac{-\Omega^{2}}{n\bar{\Omega}(J)-m}J_{n}^{ \prime}\left(\sqrt{\frac{J}{\Omega\pi}}\right)J_{m}(\lambda)\sin(n\theta-mt)+ \frac{\epsilon\Omega}{8\pi\bar{\Omega}(\bar{J})}\sin 2\theta. \tag{57}\] Thus we have obtained the action with resonant denominators which leads to resonant condition \[n\bar{\Omega}(\bar{J})=m\omega \tag{58}\] where \(\omega\) is the modulation frequency and \(\bar{\Omega}(\bar{J})\) is the frequency of the orbit, \(\omega\) is obtained when we substitute actual time, \(t\) in place of dimensionless time from (39). This explains the observed pattern in Fig. 3 : the orbital periods are integral multiples of the modulation period at the resonance. The strength of \((\mathrm{n},\mathrm{m})^{\mathrm{th}}\) resonance is determined by the product of two Bessel functions \(J_{n}(\sqrt{J/\Omega\pi})\) and \(J_{m}(\lambda)\). Using the first-order correction in the frequency \(\Omega(J)\), we plot it as a function of \(J\) in Fig. 4. We see that only the 1:3 resonance is allowed under first-order correction. This means that all other resonances in Fig. 3 must originate from the higher-order perturbation terms in correction for \(\bar{\Omega}\) and \(\bar{J}\). That explains the dominance of primary islands in (n,m)=(3,1) resonance and the presence of secondary islands in other resonances. For the expression without binomial approximation (42) where in Fig.2 we saw (3,1) resonance to be dominantly present, but without binomial approximation (25), this resonance is suppressed and doesn't appear. 
This can lead to significant corrections for both the quantum and classical equations despite being in the large detuning limit. Similarly, very high-order resonances are enhanced by the binomial approximation, as the chaotic regime can be seen to be enhanced around the edges in this case. ## 5 Dynamical localization Let us imagine that we prepare the initial state of the atoms as a localized wavepacket. As the system evolves, the wavepacket spreads. The wavefunction of the two-state system is shown to evolve, in all versions of the description, on a pair of potential energy surfaces. The form of these potentials readily supports bounded dynamics on one of the potentials. The complex intersection points provide paths for tunneling. The succession of these two dynamical features leads to localization of the wavepacket. The physics of this is nothing but the well-known argument by Mott [68] and Anderson [53], adapted in recent times in quantum chaos [54, 55]. Figure 4: Only those resonances whose frequency ratio \(\Omega(J):\omega\) (\(\omega=1\) here) intersects the \(\Omega(J)\) curve are allowed. ## 6 Conclusions The matrix Hamiltonian driving a two-level atom has been unitarily transformed to a series of Hamiltonians arranged in powers of the Planck constant - which is the precise meaning of a semiclassical expansion. A successive application of these transformations brings out an effective Hamiltonian to any desired level of accuracy. In principle, one could perform computations to all orders of \(\hbar\). The system is shown to tunnel between two potential energy surfaces, and the underlying dynamics is quasi-integrable in the KAM sense. The analysis has been carried out in the past by employing physically appealing and rather standard approximations. We recapitulated these and then provided an exact solution, where by "exact" we mean in the sense described in the preceding paragraph. We have seen that a matrix Hamiltonian acts on a spinor eigenstate and that, at different orders of the Planck constant, there are different potential energy surfaces on which the system evolves. If one makes a binomial approximation in the Hamiltonian to treat the system, the detailed features in the Poincare surfaces of section differ. The approximated analysis has a certain appeal insofar as tunneling between islands is seen clearly. However, to establish the existence of the islands and of tunneling, we show that the onset of islands of stability can already be seen from first-order perturbation theory. The analysis reveals a vector potential that is related to an artificial gauge field. We believe that knowing its form could be useful for experiments with cold atoms and in the developing fields of Hamiltonian engineering, quantum sensing, and quantum interference. We have not developed these aspects here. As referred to in the Introduction, our results add to the discussion of integrability in matrix models for atomic systems, in particular to the work on the quantum Rabi model [60]. In the future, by adding nonlinear terms to incorporate interactions that allow control of atomic states, these works could be useful for critical quantum metrology [69]. Control of the states of multi-qubit systems [70] and their protection [71] belongs to the present theme in a rather compelling manner. **Acknowledgements** We thank Sandeep Joshi for several helpful discussions. RG acknowledges the fellowship support received from CSIR-HRDG.
We revisit the effect of phase modulation on an assembly of atoms in quantum mechanics and carry out an exact analysis. We describe the evolution of a two-level system (represented by a spinor) under a time-dependent matrix Hamiltonian. The dynamics proceeds on two coupled potential energy surfaces, one of which is binding while the other is of the scattering type. The motion is quasi-integrable and possesses nonlinear resonances. The bounded motion, interrupted by scattering at random instants, presents a scenario analogous to Anderson and dynamical localization. We believe that the exact analysis of this multi-component system, despite being classically non-integrable, is also relevant to other fields, including multi-qubit systems for quantum computation.
2309.04553
Extremum seeking control of quantum gates
To be useful for quantum computation, gate operations must be maintained at high fidelities over long periods of time. In addition to decoherence, slow drifts in control hardware lead to inaccurate gates, causing the quality of operation of as-built quantum computers to vary over time. Here, we demonstrate a data-driven approach to stabilized control, combining extremum-seeking control (ESC) with direct randomized benchmarking (DRB) to stabilize two-qubit gates under unknown control parameter fluctuations. As a case study, we consider these control strategies in the context of a trapped ion quantum computer using physically-realistic simulation. We then experimentally demonstrate this control strategy on a state-of-the-art, commercial trapped-ion quantum computer.
Erfan Abbasgholinejad, Haoqin Deng, John Gamble, J. Nathan Kutz, Erik Nielsen, Neal Pisenti, Ningzhi Xie
2023-09-08T18:56:07
http://arxiv.org/abs/2309.04553v1
# Extremum seeking control of quantum gates ###### Abstract To be useful for quantum computation, gate operations must be maintained at high fidelities over long periods of time. In addition to decoherence, slow drifts in control hardware leads to inaccurate gates, causing the quality of operation of as-built quantum computers to vary over time. Here, we demonstrate a _data-driven_ approach to stabilized control, combining extremum-seeking control (ESC) with direct randomized benchmarking (DRB) to stabilize two-qubit gates under unknown control parameter fluctuations. As a case study, we consider these control strategies in the context of a trapped ion quantum computer using physically-realistic simulation. We then experimentally demonstrate this control strategy on a state-of-the-art, commercial trapped-ion quantum computer. ## I Introduction In order to realize the promise of quantum computing, researchers are investigating a wide variety of hardware technology platforms. In all of these systems, the as-built processor suffers imperfections that impact performance. Some of these problems occur due to quantum effects intrinsic to the qubits, such as decoherence. However, in many platforms, imperfections in the _classical_ control dominate the error budget, entering as either coherent errors (_e.g._, over-rotating due to an imperfectly calibrated control gain) or as incoherent errors (_e.g._, due to stochastic noise coupling through the control fields to the qubit). Eliminating these technical errors is thus imperative for reliable quantum computation. Increasing the performance of quantum operations via control stabilization has a long history, spanning the multitude of physical architectures currently being pursued. Open-loop strategies have been employed since Nuclear Magnetic Resonance systems to design robust gate pulses [1], and these techniques have been adapted to superconductors [2], trapped ions [3], and spin qubits [4], among others. In addition, closed-loop control strategies have been investigated for tuning QPU performance [5, 6, 7]. As qubit performance improves, this optimization becomes more challenging - the models required for most closed- and open-loop strategies need to be high enough fidelity to resolve small deviations from ideal behavior. Models, especially those which capture the impact of experimental control parameters on qubit gate performance, are not perfect. Hence, model-based approaches can become dominated by out-of-model errors, or else require prohibitively large model training times. The alternative is to use a model-free approach, but this also comes with downsides. In particular, model-free approaches need to construct gradients of the local objective function landscape empirically and with high precision. This procedure is intensive in the number of required experiments, and often does not scale well in the number of control parameters. Here, we employ Extremum Seeking Control (ESC) [8], a widely-used model-free control strategy that is both simple and has favorable scaling with control space dimensionality. We combine ESC with direct randomized benchmarking (DRB) [9] and demonstrate a model-free closed-loop control paradigm. As a case-study, we consider a trapped ion system with three physical control parameters, demonstrating that ESC works in both simulation and experiment. 
In practice, the ESC strategy could be deployed alongside efficient model-based feedback strategies as an error polishing step to eliminate residual out-of-model contributions to gate infidelity, instead of as a standalone feedback controller. This work is organized as follows. In Sec. II, we define the control problem and discuss how ESC works. We then present a case-study in Sec. III, simulating ESC on a trapped-ion quantum computer with realistic parameter drift. Sec. IV shows a demonstration of this method in experiment, and Sec. V offers concluding remarks and future directions. Fig. 1: Block diagram of the closed-loop control employed in this work. The quantum system (either a simulator or a physical system), takes as input control parameters and time varying physical parameters. We use quantum characterization (DRB in our case) to learn an objective function and feed this to the controller. The controller then determines a correction to the control parameters, feeding this back to the quantum system. ## II Control strategy Our general control strategy is shown in Fig. 1. The objective function studied in this work is the gate fidelity, which is not directly observable but can be measured indirectly via standard quantum characterization methods. We use direct randomized benchmarking (DRB) for this purpose. We use an ESC controller [8] to stabilize the control parameters against physical drifts, shown schematically in Fig. 2, by optimizing the DRB objective function \(\hat{F}\). ESC starts with a set of control parameters which are assumed to be near the optimal point, noted as \(\alpha_{0},\,\beta_{0}\) in Fig. 2. Small sinusoidal perturbations \(\tilde{\alpha},\,\tilde{\beta}\) with distinct frequency and phase are added to the starting control parameters: \[\tilde{\alpha}_{i} =A_{\alpha}\sin\left(\omega_{\alpha}t_{i}+\phi_{\alpha}\right), \tag{1}\] \[\tilde{\beta}_{i} =A_{\beta}\sin\left(\omega_{\beta}t_{i}+\phi_{\beta}\right), \tag{2}\] where \(t_{i}\) ranges uniformly from 0 to 1, and \(N_{t}\) denotes the number of points. The perturbations in control parameters will result in a deliberate disturbance of the objective function, \(\tilde{F}\), which is separated from slow drift via a high pass filter. This filtered signal is demodulated by integrating against each control parameter perturbation signal to obtain \(\xi\), which indicates the local derivative of the corresponding control parameter. These local derivatives are multiplied by gain factors \(g\) to obtain corrections for the control parameters, \(\Delta\alpha,\,\Delta\beta\): \[\begin{array}{ll}\xi_{\alpha}=\sum_{i}^{N_{t}}\tilde{\alpha}_{i}\cdot \tilde{F}_{i}/N_{t},&\Delta\alpha=\xi_{\alpha}g_{\alpha}\\ \xi_{\beta}=\sum_{i}^{N_{t}}\tilde{\beta}_{i}\cdot\tilde{F}_{i}/N_{t},&\Delta \beta=\xi_{\beta}g_{\beta}\end{array}. \tag{3}\] This process proceeds iteratively so that the control parameters converge to an optimal point where the objective function is maximized. The hyperparameters for ESC are given by \(N_{t}\) and \(\{A,\,g,\,\omega,\,\phi\}\) for each control parameter, and can be tuned to increase convergence speed and maintain robustness against noise in the objective function \(\hat{F}\). ## III Simulation results ### _Physical model_ As a case study, we consider a simple model of a trapped-ion quantum processor. Here, the qubit is defined by the two internal spin hyperfine levels of the atom. 
The energy eigenstates of these two levels form the computational basis of the qubit, \(\{\ket{0},\ket{1}\}\), and a two-photon Raman transition is used to coherently manipulate the qubit. When the frequency difference between the two Raman laser beams is near-resonant with the qubit frequency splitting, the qubit experiences Rabi dynamics [10]. Individual parameters of the Hamiltonian can be controlled by adjusting the phase, frequency, and amplitude of RF control signals in the apparatus. The native single-qubit gate is the rotation \(R(\theta,\phi)\) with rotation angle \(\theta\) about axis \(\phi\). The angle \(\theta=\Omega t\) is determined by the rotation time \(t\) multiplied by the Rabi rate \(\Omega\), which is proportional to the product of electric field amplitudes of the two Raman beams. The phase \(\phi\) is determined by the relative phase difference of the two Raman beams. The native two-qubit gate is the Molmer-Sorensen (MS) gate, \(MS(\chi,\phi_{1},\phi_{2})\)[11]. It implements a two-qubit rotation about \(\sigma_{\phi_{1}}\otimes\sigma_{\phi_{2}}\) by detuning the two-photon Raman transition near the red and blue motional sidebands of the ion chain. The gate parameters \(\phi_{1},\,\phi_{2}\) are the rotation axes of the two qubits, and \(\chi\) is the rotation angle. We introduce physical hardware drift in the model via latent perturbations in the phase and amplitude of the Raman beams. To do this, we parameterize the physical state \(P\) for each qubit in terms of \(g2e\), the gain-to-electric-field proportionality, and \(\psi^{2q}\), the phase offset that accounts for differences between the single- and two-qubit rotation axes. \(g2e\) represents the conversion of the RF signal gain to electric field amplitude at the ion (controlling the Rabi rate), and aggregates several physical effects. The drifting physical parameters are compensated by adjusting the RF control parameters that affect the beam alignment, Fig. 3: Schematic illustration of the experimental apparatus. Pairs of steerable beams can address each qubit, where the displacement, electric field amplitude, phase, and frequency of each beam can be controlled by synthesized RF signals sent to acousto-optic devices in the apparatus. **(inset)** The electric field amplitude of each beam evaluated at the ion position is directly proportional to the Rabi rate \(\Omega\), while the phase and frequency (not shown) correspond to other parameters appearing in the atomic physics Hamiltonian. The calibration controller must stabilize each of the Hamiltonian parameters by adjusting the RF control signals under unknown, noisy drift of the physical hardware. Fig. 2: Control diagram for ESC. As explained in the main text, ESC functions by frequency multiplexing deliberate control perturbations to approximate the local derivative of an objective function. Unlike more complex optimization techniques, ESC is extremely simple to implement and analyze (either in software or hardware). amplitude, and phase, as shown schematically in Fig. 3. We control the RF amplitude (gain) \(g\) of one of the control signals for a given beam pair, and the phase difference between the two RF signals for a given beam pair \(\psi\), which translates to an optical phase difference between the two beams. Mathematically, we have a map from our physical and control parameters to our single-qubit gate \(\{P(g2e,\psi^{2q}),C(g,\psi)\}\to R(\theta,\phi)\), defined by \(\theta\varpropto g\cdot g2e\) and \(\phi=\psi\). 
Similarly, for an MS gate we have two sets of physical and control parameters (one for each qubit) given by \(\{P_{1},C_{1}\}\) and \(\{P_{2},C_{2}\}\). The mapping \(\{P_{1},C_{1}\},\{P_{2},C_{2}\}\to MS(X,\phi_{1},\phi_{2})\) is given by \(\chi\varpropto(g_{1}\cdot g2e_{1})\cdot(g_{2}\cdot g2e_{2})\) and \(\phi_{i}=\psi_{i}+\psi_{i}^{2q}\). We note that in an MS gate, many other parameters of the system can contribute to gate infidelity. For example, drifts in the motional mode frequencies can result in incomplete decoupling of the modes at the end of the gate. This particular error can be mitigated by open-loop pulse shape design [3, 12]. The focus of this work is on the coherent gate control described above, but it can be readily combined with these other gate design approaches. We synthesize drifting physical parameters for a pair of qubits to simulate the real physical system based on historical data from the experiment. The evolution of the physical parameters over time are considered to be sinusoidal functions with randomly drifting amplitudes and frequencies \(a(t)=A(t)\sin\left(\omega(t)\cdot t\right)\), with \(\Delta A/\Delta t=\tau_{a}\) and \(\Delta\omega/\Delta t=\tau_{\omega}\), where \(\tau_{a}\sim\mathcal{N}(0,\sigma_{a})\) and \(\tau_{\omega}\sim\mathcal{N}(0,\sigma_{\omega})\) ### _Stabilized gate operation_ We simulate extremum-seeking control of an MS gate under the realistic noise trajectories described above. First, we decide the hyperparameters in the ESC control, which are the amplitude, frequency, and phase of the sinusoidal perturbation and the gain factor for the variables (control parameters in the trapped ion quantum system) in the ESC control. These parameters are summarized in Table I. Subsequently, we define the hyperparameters in the DRB, which are the number of circuits per DRB depth and the number of shots per circuit. These two hyperparameters, along with the number of sampling points in ESC perturbation signal, the number of ESC iterations per calibration, and the calibration interval, decide the runtime cost and the effectiveness of the ESC control. We perform a grid search over these hyperparameters, computing the infidelity suppression as a function of the runtime cost given a noise trajectory sampled as described in the previous section. The results are shown in Fig. 4, and Table II shows the hyperparameters for the highlighted points along with the estimated runtime. For these simulations, we fix an RB design consisting of depths [1, 32, 128], a two-qubit gate fraction of 0.75, and use a gate error model consisting of systematic offsets for the relevant gate parameters as determined by the noise trajectory. We pick a candidate set of hyperparameters (set #1) that balances the infidelity suppression with reasonable runtime cost, and in Fig. 5 we plot the simulated evolution of the gate fidelity under ESC over 15 hours. We actively control three parameters: \(\psi_{1}\), \(\psi_{2}\), and \(g_{1}g_{2}\), stabilizing the internal gate parameters \(\phi_{1},\phi_{2}\), and \(\chi\). A large improvement over the Fig. 4: The runtime and 2Q gate error rate suppression ratio for different hyperparameter sets in the grid search. Table II lists the specific hyperparameter values for the three highlighted points. Fig. 5: Simulated evolution of the 2Q gate error rate and the control parameters under ESC control using hyperparameter set 1 from Table II. uncontrolled fidelity can be seen, as the ESC controller tracks the underlying parameter drift. 
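For reference, the following is a minimal sketch of the synthetic drift model described above, in which each physical parameter follows \(a(t)=A(t)\sin(\omega(t)\cdot t)\) with amplitude and frequency performing slow random walks. The step size, variances, seeds, and the function name `drift_trajectory` are illustrative assumptions rather than the values fitted to the historical data.

```python
import numpy as np

def drift_trajectory(n_steps, dt, a0, w0, sigma_a, sigma_w, seed=0):
    """Synthesize a drifting parameter a(t) = A(t) * sin(w(t) * t), where the
    per-step increments of A and w are drawn from N(0, sigma) and scaled by dt."""
    rng = np.random.default_rng(seed)
    t = np.arange(n_steps) * dt
    amp = a0 + np.cumsum(rng.normal(0.0, sigma_a, n_steps) * dt)
    freq = w0 + np.cumsum(rng.normal(0.0, sigma_w, n_steps) * dt)
    return t, amp * np.sin(freq * t)

# Example: independent drift traces for g2e and psi_2q on one qubit.
t, d_g2e = drift_trajectory(10_000, dt=1.0, a0=0.01, w0=1e-3,
                            sigma_a=1e-5, sigma_w=1e-7, seed=1)
_, d_psi2q = drift_trajectory(10_000, dt=1.0, a0=0.02, w0=2e-3,
                              sigma_a=1e-5, sigma_w=1e-7, seed=2)
```

Traces of this kind can model the time evolution of \(g2e\) and \(\psi^{2q}\) for each qubit, producing the uncontrolled drift that the ESC loop must compensate.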
In general, higher infidelity suppression can be achieved with larger runtime overheads, and this tradeoff must be evaluated in the context of the experiment. ## IV Experiment We validate the ESC algorithm experimentally using an IonQ trapped-ion quantum computer. As described in Ref. [13], a linear chain of \({}^{171}\)Yb\({}^{+}\) ions is trapped on a surface-electrode trap, with the qubit defined between the two \(|m_{F}=0\rangle\) hyperfine levels of the ground state. Dynamically steerable pairs of tightly focused 355 nm laser beams drive a two-photon Raman transition to implement single- and two-qubit gate operations. Phase, frequency, and amplitude control of the Raman laser beams is achieved via programmable RF signals sent to AOMs inserted along the beam path. This provides direct control over the Hamiltonian-level parameters that describe the unitary evolution of each quantum gate. The two-qubit (MS) gate used in this demonstration is produced by a robust, amplitude-modulated pulse that decouples the entangled qubit states from the motional states of the ion chain at the end of the gate operation [14]. The amplitude-modulated envelope is designed offline, but it is parameterized by an overall scale factor (which sets \(\chi\)), a detuning from the qubit transition, and phase offsets of the red- and blue-sideband (RSB, BSB) tones used to generate the MS Hamiltonian. Specifically, our ESC controller modifies a pulse envelope scale factor on each qubit via \(g_{1},\,g_{2}\), and the RSB/BSB phase offsets via \(\psi_{1},\,\psi_{2}\). In Fig. 6, we show experimental results over eight iterations of the ESC algorithm. Before the first iteration, we displace the two spin phase controls and observe the controller drive these back to their optimal values. There is an additional drift in the \(\chi\) parameter over the course of the data run, which the ESC algorithm correctly tracks by adjusting \(g_{1}g_{2}\). As an out-of-loop probe of the controller performance, we perform a "reference" DRB experiment to probe the gate error as the three control parameters are updated by ESC. For this demonstration, the ESC algorithm uses a DRB design with 4 circuits per RB depth, 100 shots per circuit, and \(N_{t}=25\), sampled at RB depths of \([1,50]\) and a two-qubit gate fraction of 0.75. This choice balances certain classical overheads in the experiment with noise in evaluating the RB objective function. The total runtime of each ESC iteration is \(\approx 9.5\) minutes. ## V Conclusions The extremum-seeking control strategy demonstrated in this work provides another tool for calibrating quantum systems with unknown drift. It does not rely on detailed models of the physical hardware, instead operating with a generic cost function (DRB) that is applicable to any quantum computing hardware. The ESC procedure enables several control parameters to be extracted simultaneously under drift and noise, making this an efficient strategy for optimizing gate performance in a model-free way. While the absolute execution time is longer than what can be achieved using detailed microscopic models, this strategy may still find utility in polishing gate errors after model-based calibration routines have been applied. Future work might involve optimizing the ESC algorithm over different objective functions, or comparing ESC to other strategies such as Nelder-Mead, Bayesian optimization, or simultaneous perturbation stochastic approximation. 
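To make the update rule of Sec. II concrete, the following is a minimal sketch of a single ESC iteration for two control parameters, following Eqs. (1)-(3). It is a generic re-implementation, not the controller used in the experiment: the objective function is a stand-in for a DRB fidelity estimate, simple mean subtraction is used as a crude high-pass filter, and the hyperparameter values are illustrative.

```python
import numpy as np

def esc_iteration(alpha0, beta0, objective, n_t=25,
                  A=(0.01, 0.01), omega=(2 * np.pi * 3, 2 * np.pi * 5),
                  phi=(0.0, np.pi / 2), gain=(2e3, 2e3)):
    """One extremum-seeking iteration for two control parameters, Eqs. (1)-(3)."""
    t = np.arange(n_t) / n_t                            # t_i uniform on [0, 1)
    d_alpha = A[0] * np.sin(omega[0] * t + phi[0])      # Eq. (1)
    d_beta = A[1] * np.sin(omega[1] * t + phi[1])       # Eq. (2)

    # Noisy objective (e.g. a DRB fidelity estimate) at each perturbed setting.
    F = np.array([objective(alpha0 + da, beta0 + db)
                  for da, db in zip(d_alpha, d_beta)])
    F_tilde = F - F.mean()       # crude high-pass filter removing the slow part

    # Demodulate against each dither to estimate local derivatives, Eq. (3).
    xi_alpha = np.sum(d_alpha * F_tilde) / n_t
    xi_beta = np.sum(d_beta * F_tilde) / n_t
    return alpha0 + gain[0] * xi_alpha, beta0 + gain[1] * xi_beta

# Toy objective peaked at (0.3, -0.2), standing in for the DRB estimate.
toy = lambda a, b: 1.0 - (a - 0.3) ** 2 - (b + 0.2) ** 2
a, b = 0.0, 0.0
for _ in range(50):
    a, b = esc_iteration(a, b, toy)   # drives (a, b) toward the optimum
```

In the simulations and experiment, a third control, the envelope gain \(g_{1}g_{2}\), is dithered and updated in the same way, with the gains, dither amplitudes, and frequencies tuned as summarized in Table I.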
## VI Acknowledgements We gratefully acknowledge support for this project from the Advancing Quantum-Enabled Technologies (AQET) traineeship program supported by NSF Award 202154 and QuantumX at the University of Washington. Fig. 6: Experimental demonstration of the ESC algorithm described in Sec. II, controlling the envelope gain \(g_{1}g_{2}\propto\chi\), and the \(\phi_{1},\,\phi_{2}\) parameters of the MS unitary via \(\psi_{1},\,\psi_{2}\). For each iteration of the algorithm, we perform a “reference” DRB experiment (upper panel) using the parameters computed by the ESC controller (lower panel). At the first iteration of the control, we introduce an artificial displacement of the spin phase parameters \(\psi_{1},\,\psi_{2}\) from their nominal values, and see the controller correctly eliminate this offset. Additionally, the controller tracks a real underlying drift in the \(\chi\) parameter over the course of the data run.
To be useful for quantum computation, gate operations must maintain high fidelities over long periods of time. In addition to decoherence, slow drifts in the control hardware produce inaccurate gates, so the operating quality of as-built quantum computers varies over time. Here we present a data-driven approach to stabilized control, combining extremum-seeking control (ESC) with direct randomized benchmarking (DRB) to stabilize two-qubit gates under unknown control-parameter fluctuations. As a case study, this control strategy is examined in the context of a trapped-ion quantum computer using physically realistic simulation. We then confirm the strategy experimentally on a state-of-the-art commercial trapped-ion quantum computer.
2309.14740
Fraction Constraint in Partial Wave Analysis
To resolve the non-convex optimization problem in partial wave analysis, this paper introduces a novel approach that incorporates fraction constraints into the likelihood function. This method offers significant improvements in both the efficiency of pole searching and the reliability of resonance selection within partial wave analysis.
Xiang Dong, Chu-Cheng Pan, Yu-Chang Sun, Ao-Yan Cheng, Ao-Bo Wang, Hao Cai, Kai Zhu
2023-09-26T08:08:18
http://arxiv.org/abs/2309.14740v1
# Fraction Constraint in Partial Wave Analysis ###### Abstract To resolve the non-convex optimization problem in partial wave analysis, this paper introduces a novel approach that incorporates fraction constraints into the likelihood function. This method offers significant improvements in both the efficiency of pole searching and the reliability of resonance selection within partial wave analysis. ## 1 Introduction Partial wave analysis (PWA) is a powerful technique used in particle physics to study the angular distributions of particles produced in scattering or decay processes [1]. By decomposing the final-state wave functions into a sum of partial waves with different angular momentum quantum numbers, PWA allows people to extract valuable information about the underlying dynamics of the interaction[2, 3]. This method enables people to identify and study resonances, determine their properties such as masses and widths, and understand the contributing amplitudes and phase shifts. PWA is particularly useful in experiments involving complex final states or multiple particles, where it helps disentangle the different contributions and extract meaningful physical observables. PWA is widely employed in experiments involving hadron colliders, electron-positron colliders, and other facilities, making it an essential tool for studying the fundamental building blocks of matter and the forces that govern their interaction. However, PWA usually suffers from non-convex optimization problems. Non-convexity arises due to the complex nature of the underlying physics and the presence of multiple resonances, therefore numerous undetermined parameters in a fitting model [4]. Unlike convex optimization problems that have a unique global minimum, non-convex optimization problems have multiple local minima. This makes finding the best fit parameters challenging, as traditional optimization algorithms can get trapped in local minima and fail to find the global or near-global minimum. The non-convex nature of the problem introduces uncertainties and can lead to biased or inaccurate results. Overcoming these challenges requires the development and application of specialized non-convex optimization techniques that can effectively explore the parameter space and find the best fit solutions. In this paper, we propose to mitigate the non-convex optimization problem in PWA by modifying the likelihood function with an additional penalty term. This term is related to a sum of all resonance state fractions. After introduce the definition of the additional penalty term, we perform two simplified PWAs, one is without the penalty term but the other one with, on a toy Monte Carlo (MC) sample. General features are obtained for the proposed PWA method, and compared with the conventional one. Then we discuss how to obtain a crucial parameter in the penalty term by a scanning method, that is more practical in a real measurement than the previously pedagogical one. Meanwhile, we show the proposed method is helpful to select reasonable contributions of resonances. A short summary then ends this paper. ## 2 Fraction Constraints to the Partial Wave Analysis As mentioned in the introduction, there are usually many undetermined parameters in a PWA, so the fitting is essentially a non-convex optimization problem, that will result in a non-global minimum point, sometimes as an unreasonable result. 
To resolve this problem, we propose to add a penalty term to the traditional logarithm of the likelihood, \(-\ln L\), to construct a new target function \(\tilde{M}\): \[\tilde{M}=-\ln L+\lambda(\mathbf{SF}-\overline{\mathbf{SF}})^{2}\, \tag{1}\] where \(\mathbf{SF}\) is the sum of the fractions of total events as \(\mathbf{SF}=\sum_{k}\mathbf{F}_{k}\) and \(\overline{\mathbf{SF}}\) is its expected value, where \(k\) is the index of the amplitude, and \(\lambda\) is the strict-factor. The determination of \(\overline{\mathbf{SF}}\) and \(\lambda\) are based on the situations that will be discussed later. Explicitly, the fraction of each channel is defined as: \[\mathbf{F}_{k}=\frac{1}{N}\sum_{i=1}^{N}\frac{\left|c_{k}M_{k}\left(\zeta_{i} \right)\right|^{2}}{\left|\sum_{k}c_{k}e^{i\phi_{k}}M_{k}\left(\zeta_{i}\right) \right|^{2}}\, \tag{2}\] where \(N\) is the number of events, \(M_{k}\) are the (normalized) amplitude with respect to \(\zeta_{i}\) representing both physical and nuisance parameters that may dynamically depend on the \(i\)th event, \(c_{k}\) and \(\phi_{k}\) are the magnitude and phase of each amplitude. By introducing this additional term, we restrict the feasible region and transform the original optimization problem into a "constrained non-convex optimization", that potentially is more tractable. Here, \(\overline{\mathbf{SF}}\) is the expected value of \(\mathbf{SF}\). Since \(\mathbf{SF}\) represents only the contribution from non-interference effect, the value of \(\mathbf{SF}\) is usually not 100%. When constructive interference dominates between resonance states, \(\mathbf{SF}\) will be less than 100%; when destructive interference dominates between resonance states, \(\mathbf{SF}\) will be greater than 100%. But no matter the interference is constructive or destructive, we expect the \(\mathbf{SF}\) based on a reasonable physical solution will not extremely deviate from 100%. Obviously, when \(\lambda\) is close to zero, \(\tilde{M}\) will be reduced to \(-\ln L\); but when \(\lambda\) is large enough, \(\mathbf{SF}\) will be restricted to \(\overline{\mathbf{SF}}\), i.e., the interference effect is under control, then the parameter space will be deduced, and the convexity is improved. ## 3 Partial Wave Analysis without or with Fraction Constraints For demonstration, an MC sample containing 10,000 events have been generated based on a PWA model that describes the process \(\psi(2S)\rightarrow\phi K^{+}K^{-}\)[5] with various intermediate resonances decaying into \(K^{+}K^{-}\). For convenience, this PWA model is denoted as \(R_{0}\) and the MC sample is denoted as \(S_{0}\). In \(R_{0}\), resonances such as \(f_{0}(980)\)[6, 7], \(f_{2}(1270)\)[8], \(f_{2}^{\prime}(1525)\)[9], \(f_{0}(1710)\)[10], \(f_{2}(2150)\)[11], and \(f_{2}(2340)\)[12] are included with description according to the corresponding references, respectively. Their masses, widths, and relevant fractions are shown in Table 1. In the \(R_{0}\) model, covariant tensors are applied to describe the partial wave amplitudes. It should be noted that Table 1 lists the fractions of each resonance, and the sum of the fractions yields a \(\mathbf{SF}\) value of approximately 115%. The Dalitz plot corresponding to the generated events is shown in Fig. 1, and the distribution on the \(K^{+}K^{-}\) invariant mass spectrum is shown in Fig. 2. The existence of both narrow and broad resonances makes \(R_{0}\) not a naive model. 
It should be noted that this MC sample is just designed for studying the PWA method, but does not intend to simulate the three-body decay \(\psi(2S)\rightarrow\phi K^{+}K^{-}\) in the real world. Firstly, we fit the MC sample \(S_{0}\) with the \(R_{0}\) model 300 times by using the target function \(-\ln L\). Figure 3 shows the obtained logarithm of the likelihood and the sum of the fractions. It is apparently \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline \(R_{0}\) & Name & \(F_{i}(\%)\) & Mass (GeV) & Width (GeV) \\ \hline 1 & \(f_{0}(980)\) & 39.5 & 0.979 & 0.107 \\ 2 & \(f_{2}(2340)\) & 37.1 & 2.548 & 0.324 \\ 3 & \(f_{2}^{\prime}(1525)\) & 24.7 & 1.522 & 0.089 \\ 4 & \(f_{0}(1710)\) & 8.30 & 1.676 & 0.163 \\ 5 & \(f_{2}(1270)\) & 3.16 & 1.290 & 0.196 \\ 6 & \(f_{2}(2150)\) & 2.22 & 2.162 & 0.159 \\ \hline & \(\mathbf{SF}\) & 115.0 & & \\ \hline \end{tabular} \end{table} Table 1: Resonances incorporated in PWA model \(R_{0}\), and their corresponding parameters. Figure 1: The Dalitz plot from the MC sample \(S_{0}\) generated by the \(R_{0}\) model. Figure 2: The \(K^{+}K^{-}\) invariant mass spectrum for the MC sample \(S_{0}\) generated by the \(R_{0}\) model. that even the fitting PWA model is perfectly matched to the data-producing model, there is still a large probability that the fitting results deviate significantly from the true values, while good fit results, in which the global minimum is found, always provide correct **SF** values. The red box of Fig. 3 represents a region enclosing good fits. The number of points in it is 41, that accounts for only about 14% of the total fitting times. The unreliability of the fitting results is the so called non-convex problem, that is caused by the complexity of the PWA, resulting in various local minima of the likelihood function in the parameter space. One way to avoid this problem and find the global minima is by re-fitting data in huge number of times, with varied initial parameters, and this is a critical reason for the low efficiency of the PWA. Secondly, we redo the fits again by replacing the target function from \(-\ln L\) to \(\tilde{M}\). Usually, the expected sum of fractions \(\overline{\mathbf{SF}}\) can be determined by a scanning method that will be described in Sec. 4 along with the resonance selection. Here, we just adopt the result and set it to 120%, and set the strict-factor \(\lambda=10^{-2}\) by practical experience. The results of 300 fits are shown in Fig. 4. There are 46 points in the red box of Fig. 4, which is slightly higher than the number in Fig. 3. It can be seen that the penalty term limits the range of **SF** as expected and increases the probability of the fitting result reaching the global optimum. Although it needs more computation source to calculate the penalty term **SF**, against one's intuition, the whole fitting time required by \(\tilde{M}\) is less than that of \(-\ln L\). This timing reduction is mainly caused by the less tempts to find a minimal in a reduced parameter space. To investigate the impact on computation time, a time analysis is performed to obtain the results in Fig. 3 and Fig. 4. The costumed time is shown in Fig. 5. From it, the average fitting time for \(\tilde{M}\) is approximately 500 s, while the average fitting time for \(-\ln L\) is around 750 s. A significant speed-up is found. This result is obtained in our own testing environment, and factors such as the PWA program, fitting method, and hardware platform can affect the results. 
However, just like the role of penalty terms in the field of deep learning, the inclusion of penalty terms in this context serves to prevent large, ineffective attempts during the fitting process. These penalty terms provide additional gradients (on the boundaries of the parameter space) that are independent of the program, software, and hardware platforms used. To check the feasibility of the new PWA method, the fitting results corresponding to the global optimal points, without or with the penalty, are listed in Table 2 and Table 3 for comparison. It can be seen that the two fitting results, including both mean values and statistical uncertainties, are consistent with each other. To test the fit stability of the PWA with the additional penalty term, we have generated 300 sets of samples using the same \(R_{0}\) model only with various random number seeds, and performed fitting on each set. Figure 6 shows the distribution of the sum of fractions. A fit with a Gaussian function gives the result is \(1.13\pm 0.02\), that is consistent with the input value 1.14 considering the uncertainty. Figure 3: The distribution of likelihood values and **SF** values of the fitting results corresponding to the resonance combination \(R_{0}\). The red vertical line represents the true value of **SF**, and the red box contains the points of good fits. Figure 4: The likelihood value and SF value distribution of the resonance state combination \(R_{0}\) corresponding to the fitting result when \(\mathbf{SF}=120\%\) and \(\lambda=10^{-2}\). The red vertical line represents the true value of \(\mathbf{SF}\), and the red box contains the points of good fits. Figure 5: Compare the fitting time used by \(-\ln L\) and \(\tilde{M}\). \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline \(R_{0}\) & Name & \(F_{i}(\%)\) & Mass (GeV) & Width (GeV) \\ \hline 1 & \(f_{0}(980)\) & \(39.2\pm 1.5\) & \(1.015\pm 0.043\) & \(0.102\pm 0.030\) \\ 2 & \(f_{2}(2340)\) & \(37.5\pm 1.6\) & \(2.571\pm 0.015\) & \(0.281\pm 0.017\) \\ 3 & \(f_{2}^{\prime}(1525)\) & \(23.5\pm 1.0\) & \(1.523\pm 0.002\) & \(0.084\pm 0.003\) \\ 4 & \(f_{0}(1710)\) & \(8.7\pm 0.9\) & \(1.671\pm 0.005\) & \(0.159\pm 0.010\) \\ 5 & \(f_{2}(1270)\) & \(2.7\pm 0.6\) & \(1.288\pm 0.013\) & \(0.181\pm 0.027\) \\ 6 & \(f_{2}(2150)\) & \(2.5\pm 0.6\) & \(2.152\pm 0.012\) & \(0.170\pm 0.026\) \\ \hline & \(\mathbf{SF}\) & \(114.0\) & & \\ \hline \end{tabular} \end{table} Table 2: Fitting results of the PWA model \(R_{0}\) with \(-\ln L\). \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline \(R_{0}\) & Name & \(F_{i}(\%)\) & Mass (GeV) & Width (GeV) \\ \hline 1 & \(f_{0}(980)\) & \(39.3\pm 1.6\) & \(1.017\pm 0.039\) & \(0.101\pm 0.035\) \\ 2 & \(f_{2}(2340)\) & \(37.5\pm 1.8\) & \(2.571\pm 0.016\) & \(0.282\pm 0.018\) \\ 3 & \(f_{2}^{{}^{\prime}}(1525)\) & \(23.6\pm 1.0\) & \(1.523\pm 0.002\) & \(0.084\pm 0.003\) \\ 4 & \(f_{0}(1710)\) & \(8.7\pm 1.0\) & \(1.671\pm 0.005\) & \(0.159\pm 0.010\) \\ 5 & \(f_{2}(1270)\) & \(2.7\pm 0.6\) & \(1.288\pm 0.014\) & \(0.182\pm 0.026\) \\ 6 & \(f_{2}(2150)\) & \(2.5\pm 0.6\) & \(2.152\pm 0.012\) & \(0.170\pm 0.027\) \\ \hline & **SF** & 114.3 & & \\ \hline \end{tabular} \end{table} Table 3: Fitting results of the PWA model \(R_{0}\) with \(\dot{M}\). Figure 6: The distribution of the sum of fractions in 300 test MC samples that are generated with the model \(R_{0}\). The red curve represents the Gaussian function utilized in the fit. 
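As an illustration of how the constrained objective of Eqs. (1) and (2) is assembled, a minimal sketch follows. The normalized amplitudes `M` and the unconstrained \(-\ln L\) value are placeholders to be supplied by whatever PWA framework is in use; only the fraction sum and the penalty follow the definitions above, with \(\overline{\mathbf{SF}}=120\%\) and \(\lambda=10^{-2}\) as quoted in the text.

```python
import numpy as np

def fractions(c, phi, M):
    """Eq. (2): F_k for couplings c_k * exp(i phi_k) and normalized amplitudes
    M[k, i] evaluated on N events (k: amplitude index, i: event index)."""
    coeff = c * np.exp(1j * phi)                             # shape (K,)
    total = np.abs(np.einsum('k,ki->i', coeff, M)) ** 2      # |sum_k c_k e^{i phi_k} M_k|^2
    per_k = np.abs(coeff[:, None] * M) ** 2                  # |c_k M_k|^2
    return np.mean(per_k / total, axis=1)                    # F_k, shape (K,)

def penalized_nll(neg_log_L, c, phi, M, sf_bar=1.20, lam=1e-2):
    """Eq. (1): tilde{M} = -ln L + lambda * (SF - SF_bar)^2."""
    sf = fractions(c, phi, M).sum()
    return neg_log_L + lam * (sf - sf_bar) ** 2
```

In a fit, `penalized_nll` replaces \(-\ln L\) as the quantity handed to the minimizer; setting `lam` to zero recovers the unconstrained fit, while a large value such as \(10^{2}\) pins \(\mathbf{SF}\) to \(\overline{\mathbf{SF}}\), as used in the scanning procedure of the next section.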
## 4 Fraction Constraint Scanning and Resonant State Selection In the last section, both PWAs are performed with a perfect model, that is, exactly the one used in generating the MC sample. However, in a real PWA, to determine which resonances should be included is an important and difficult issue to be addressed [13]. Typically, this is done by comparing the likelihood values of different combinations of resonances and calculating corresponding significance. But how to determine a baseline, that is crucial for the significance calculation, is a frequently debated question in PWA. Furthermore, whether to include a resonance or not should be beyond the sole goodness of a fit. In addition to considering the significance of a resonance, more information, such as the branching fraction, physical rules conservation, complexity of a PWA model, etc., need to be considered. Some researchers have already borrowed some mature theories from information theory, such as AIC and BIC [14], to balance the model complexity and goodness of a fit. Similar to AIC and BIC, the fraction constraint method, proposed by us, try to control the model complexity by introducing the penalty term. Using \(\tilde{M}\), we can quickly obtain the best fit results for different PWA models with various resonance combinations, when the strict-factor \(\lambda\) is set to be a somewhat large value, such as \(10^{2}\). Based on this advantage, the value of \(\overline{\mathbf{SF}}\) is obtained by scanning in a series of fits, and the results are shown in Fig. 7. Here \(R_{-1}\) represents the PWA model subtracting resonance \(f_{2}(1270)\) from \(R_{0}\), and \(R_{-2}\) represents subtracting resonance \(f_{2}(2150)\); while \(R_{+1}\) represents adding resonance \(f_{0}(1370)\)[15], \(R_{+2}\) represents adding resonance \(f_{2}(2010)\)[16]. From Fig. 7, it can be seen that there is a large gap between \(R_{-1}\) ((\(R_{-2}\)) and \(R_{0}\). The difference in the y-axis, i.e., the logarithm of the likelihood, indicates the models with subtracting resonances is not complex enough to describe the data, compared with the \(R_{0}\). But the gap between \(R_{+1}\) (\((R_{+2})\) and \(R_{0}\) is very small, indicating that the parameters of models with additional resonances are overpopulated. Therefore, \(R_{0}\) is the best PWA model to describe the data. So the scan method can help to select a reasonable set of resonances in a PWA model. And from the scan curve the best \(\mathbf{SF}\) can be determined from the minimum, that should be considered as the expected value of \(\mathbf{SF}\). ## 5 Summary This article proposes the use of \(\tilde{M}\) instead of \(-\ln L\) in PWA by evaluating the likelihood value as a function of fraction constraints, thereby improving analysis efficiency. An analysis conducted on the MC sample demonstrates the reliability of the fitted center values and statistical uncertainties based on the new method. Additionally, the relationship between the likelihood value of the fitting results and the \(\mathbf{SF}\) value provides a fresh perspective on addressing the resonance selection issue. By constraining the \(\mathbf{SF}\) values, redundant resonances can be effectively reduced, thereby mitigating the Figure 7: \(\mathbf{SF}\) scanning curves. The blue, green, yellow, red, and purple lines represent the PWA models \(R_{-2}\), \(R_{-1}\), \(R_{0}\), \(R_{+1}\), \(R_{+2}\), respectively. overestimation of systematic uncertainties resulting from the selection of resonance states. 
While the use of \(\tilde{M}\) instead of \(-\ln L\) does not offer a definitive solution to the increasingly complex nature of PWA driven by expanding data volumes, in practice it has been shown to improve fitting efficiency and to reduce the ambiguity involved in selecting resonance states.
To address the non-convex optimization problem encountered in partial wave analysis, this paper proposes a novel method that introduces fraction constraints into the likelihood function. The method substantially improves both the efficiency of pole searching and the reliability of resonance selection in partial wave analysis.
2310.20586
Harmonization-enriched domain adaptation with light fine-tuning for multiple sclerosis lesion segmentation
Deep learning algorithms utilizing magnetic resonance (MR) images have demonstrated cutting-edge proficiency in autonomously segmenting multiple sclerosis (MS) lesions. Despite their achievements, these algorithms may struggle to extend their performance across various sites or scanners, leading to domain generalization errors. While few-shot or one-shot domain adaptation emerges as a potential solution to mitigate generalization errors, its efficacy might be hindered by the scarcity of labeled data in the target domain. This paper seeks to tackle this challenge by integrating one-shot adaptation data with harmonized training data that incorporates labels. Our approach involves synthesizing new training data with a contrast akin to that of the test domain, a process we refer to as "contrast harmonization" in MRI. Our experiments illustrate that the amalgamation of one-shot adaptation data with harmonized training data surpasses the performance of utilizing either data source in isolation. Notably, domain adaptation using exclusively harmonized training data achieved comparable or even superior performance compared to one-shot adaptation. Moreover, all adaptations required only minimal fine-tuning, ranging from 2 to 5 epochs for convergence.
Jinwei Zhang, Lianrui Zuo, Blake E. Dewey, Samuel W. Remedios, Savannah P. Hays, Dzung L. Pham, Jerry L. Prince, Aaron Carass
2023-10-31T16:23:37
http://arxiv.org/abs/2310.20586v1
Harmonization-enriched domain adaptation with light fine-tuning for multiple sclerosis lesion segmentation ###### Abstract Deep learning algorithms using magnetic resonance (MR) images have demonstrated state-of-the-art performance in the automated segmentation of multiple sclerosis (MS) lesions. Despite their success, these algorithms may fail to generalize across sites or scanners, leading to domain generalization errors. Few-shot or one-shot domain adaptation is an option to reduce the generalization error using limited labeled data from the target domain. However, this approach may not yield satisfactory performance due to the limited data available for adaptation. In this paper, we aim to address this issue by integrating one-shot adaptation data with harmonized training data that includes labels. Our method synthesizes new training data with a contrast similar to that of the test domain, through a process referred to as "contrast harmonization" in MRI. Our experiments show that combining one-shot adaptation data with harmonized training data outperformed the use of either one of the data sources alone. Domain adaptation using only harmonized training data achieved comparable or even better performance compared to one-shot adaptation. In addition, all adaptations only required light fine-tuning of 2 to 5 epochs for convergence. Multiple Sclerosis, Lesion Segmentation, Domain Adaptation, Synthesis-based Harmonization Further author information: (Send correspondence to Jinwei Zhang) Jinwei Zhang: E-mail: [email protected] ## 1 Introduction Multiple sclerosis (MS) is a central nervous system disorder characterized by inflammatory demyelination and axonal and neuronal degeneration [1]. T2-weighted (T2w) magnetic resonance imaging (MRI) using the fluid-attenuated inversion recovery (FLAIR) pulse sequence is routinely used for clinical diagnosis of MS lesions because it provides high lesion-to-brain contrast while simultaneously suppressing hyperintense cerebrospinal fluid (CSF) signals, which can cause partial-volume artifacts in T2w images [2]. Extensive manual editing is required for accurate delineation of MS lesions, though the task can be quite subjective. Therefore, automatic detection and segmentation of MS lesions is desired for better efficiency and reproducibility. State-of-the-art methods [3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14] employ deep learning (DL) to automate MS lesion segmentation using multi-contrast MRI scans, including FLAIR. However, these algorithms frequently face challenges in achieving consistent performance across different MRI scanners and imaging protocols. This has led to increased research interest in domain adaptation techniques, such as one-shot domain adaptation [5], spatially adaptive sub-networks [6], domain-invariant latent feature learning [7, 8, 9], contrast-adaptive generative modeling [10], and domain randomness via synthesis [11], to name just a few. An alternative to domain adaptation is image harmonization, which reduces inter-site variation to aid downstream comparisons and analysis of images across imaging sites, scanners, and over time [15, 16, 17]. Synthesis-based multi-contrast MRI harmonization has shown remarkable progress in recent years [18, 19, 20, 21]. In this paper, we use HACA3 [22], a new synthesis-based multi-site MRI harmonization approach, to enhance one-shot and even "zero-shot" domain adaptation performance for MS lesion segmentation. 
## 2 Method

**Dataset.** We use the training data from the ISBI longitudinal dataset consisting of five people with MS (PwMS) [23] and an in-house dataset including ten PwMS; both datasets have corresponding manual delineations. We first pre-trained a segmentation network using four of the five training subjects from the ISBI dataset, with the remaining subject being used for validation. We then applied the pre-trained segmentation network to our in-house dataset, which comes from a different domain than the training and validation data.

**Network.** We implemented a modified 3D UNet following the design choices of nnUNet [24]. For the convolutional building block in the UNet, we chose the "Conv+InstanceNorm+ReLU" configuration with 2 blocks for each level of the encoding or decoding path of the UNet and 4 downsampling/upsampling operations. The numbers of channels in each convolutional block at all levels along the encoding path were 32, 64, 128, 256, and 512. Three-dimensional patches were cropped from skull-stripped [25] and white matter intensity normalized [26] T1-weighted (T1w) and FLAIR images, and these patches were concatenated along the channel dimension to be used as input. We generate the binary prediction of segmentation by thresholding the sigmoid of the output of the UNet's last convolutional layer.

**Training.** A batch size of two was used and the 3D patch size was set to \(112\times 112\times 112\) to fully utilize GPU memory during backpropagation. Heavy augmentations were employed on the fly, including random cropping, axis permutation, intensity shifts, as well as affine and elastic deformations. The loss function was the mean of the Dice similarity coefficient (DSC) and binary cross-entropy. The Adam optimizer was used with an initial learning rate of \(10^{-4}\) for 100 epochs, where each epoch involved the application of eight random augmentations to every training subject.

**Harmonization-based domain adaptation.** For domain adaptation after training, all T1w and FLAIR images from the ISBI training dataset were transformed to match the contrast of our in-house test dataset using the synthesis-based multi-site harmonization of HACA3 [22]. HACA3 was trained on diverse MR datasets acquired from 21 sites, which included white matter lesion multi-contrast images with varying field strengths, scanner platforms, and acquisition protocols. An example of such image harmonization is shown in Fig. 1, where the FLAIR "source data" from "Site-01" in the ISBI training set was harmonized to match the contrast of "Site-02" of our in-house test set. Consistent gray and white matter contrast was observed between "synthetic/harmonized data" and "real data" for "Site-02". After harmonization, three domain adaptation strategies were evaluated:

* One-Shot Strategy: Fine-tune (FT) the pre-trained network with only one of the ten subjects in the test domain and evaluate on the remaining nine test subjects.
* Zero-Shot Strategy: FT with only the harmonized ISBI data and evaluate on the ten test subjects.
* Harmonization-enriched One-Shot Strategy: FT with a combination of all harmonized ISBI training data and one of the test domain subjects, and evaluate on the remaining nine test subjects.

Figure 1: **Synthesis-based harmonization of one FLAIR axial slice.** (a) Source FLAIR slice on Site-01 (ISBI public training set). (b) Harmonized FLAIR slice from Site-01 to Site-02 (in-house test set) using HACA3 [22]. (c) Real FLAIR slice on Site-02.
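For illustration, the training objective above (the mean of a soft Dice term and binary cross-entropy, with the final mask obtained by thresholding the sigmoid of the last layer's output) could be written roughly as in the following sketch; the tensor shapes, smoothing constant, and 0.5 threshold are assumptions, and this is not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def dice_bce_loss(logits, target, smooth=1e-5):
    """Mean of a soft Dice term and binary cross-entropy, as described above.

    logits: raw output of the network's last convolutional layer, e.g. (B, 1, D, H, W)
    target: binary lesion mask of the same shape
    """
    target = target.float()
    prob = torch.sigmoid(logits)
    intersection = (prob * target).sum()
    dice = (2.0 * intersection + smooth) / (prob.sum() + target.sum() + smooth)
    dice_loss = 1.0 - dice
    bce_loss = F.binary_cross_entropy_with_logits(logits, target)
    return 0.5 * (dice_loss + bce_loss)

def predict_mask(logits, threshold=0.5):
    """Binary prediction by thresholding the sigmoid of the last layer's output."""
    return (torch.sigmoid(logits) > threshold).float()
```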
For the One-Shot and Harmonization-enriched One-Shot strategies, ten-fold cross-validation (CV) was employed to evaluate on all ten test subjects from the in-house data, where each of the ten subjects in the test domain was included in the training set for one fold. All FTs were conducted for 20 epochs, and the models were tested after every epoch. For comparison, two-fold CV FT with 4/1/5 training/validation/test subjects per fold was also employed for 100 epochs. This CV served as a "normal" performance estimation, assuming enough labeled data in new domains was provided for network adaptation.

**Evaluation metrics.** The DSC, lesion-wise F1 score (L-F1), and Pearson's correlation coefficient of the lesion volumes between the ground truth and the prediction (VC) were utilized as the segmentation performance evaluation metrics in the experiment.

## 3 Results

**Quantitative comparison.** Figure 2 shows DSC, L-F1, and VC scores of the three domain adaptation strategies after each FT epoch. First, the performance of all three strategies converged within just 2 to 5 epochs, exhibiting noticeable improvement over the pre-trained results (dashed purple lines) in terms of DSC and L-F1. However, a noticeable degradation in VC was observed for normal CV (dashed red line) and the One-Shot strategy (solid orange line), which was not observed for the Zero-Shot (solid blue line) and Harmonization-enriched One-Shot (solid green line) strategies. Second, the Harmonization-enriched One-Shot strategy consistently outperformed the other two strategies in terms of DSC and L-F1, and performed similarly well with the Zero-Shot strategy for VC. Notably, the Harmonization-enriched One-Shot strategy achieved a DSC score of above 0.6 after convergence, approaching inter-rater consistency [27]. Third, the Zero-Shot strategy outperformed the One-Shot strategy in terms of DSC and VC, while these two strategies performed similarly for L-F1.

**Qualitative comparison.** Figure 3 shows segmentation predictions with the corresponding ground-truth label and axial FLAIR slice (left column). The pre-trained prediction (Fig. 3a) exhibited false negative predictions throughout the entire image, which was addressed by the target site CV (Fig. 3b). False positive lesions (indicated by red arrows) were observed in the One-Shot strategy after 2 (Fig. 3c-1) and 20 (Fig. 3c-2) fine-tuning epochs, which were not observed in the Zero-Shot strategy (Fig. 3d) or the Harmonization-enriched One-Shot strategy (Fig. 3e) after 2 epochs (Fig. 3d-1 and Fig. 3e-1, respectively) or 20 epochs (Fig. 3d-2 and Fig. 3e-2) of fine-tuning. False negative lesions (indicated by yellow arrows) missed by the Zero-Shot strategy (Fig. 3d) were still captured by the One-Shot and Harmonization-enriched One-Shot strategies (Fig. 3c and Fig. 3e, respectively). No significant differences were observed between the 2 epoch (Fig. 3d-1 or Fig. 3e-1) and 20 epoch (Fig. 3d-2 or Fig. 3e-2) fine-tuning results of the Zero-Shot or Harmonization-enriched One-Shot strategies.

## 4 New Work to Be Presented

We will present how to use synthesis-based harmonization to boost one-shot domain adaptation performance for MS lesion segmentation. This work has not been submitted or presented elsewhere before.

## 5 Discussion and Conclusion

Domain adaptation through network fine-tuning has been successfully applied to different imaging problems in MRI, including under-sampled k-space reconstruction [28], biophysical inversion [29] with uncertainty quantification [30], and contrast translation [31].
In this work, we demonstrate the feasibility of leveraging synthesis-based MRI harmonization to enhance domain adaptation performance in MS lesion segmentation. Our experiments show that our Zero-Shot domain adaptation, utilizing solely public data synthesized to the target contrast, yields performance comparable or superior to a One-Shot strategy on the target domain. More notably, the combination of One-Shot and Zero-Shot adaptation, which we coin Harmonization-enriched One-Shot domain adaptation, achieved DSC results approaching inter-rater performance. Additionally, light fine-tuning of only 2 to 5 epochs was enough for an adequate adaptation of the pre-trained network.

###### Acknowledgements. This material is partially supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1746891 (Remedios). This work also received support from National Multiple Sclerosis Society RG-1907-34570 (Pham), CDMRP W81XWH2010912 (Prince), and the Department of Defense in the Center for Neuroscience and Regenerative Medicine. The opinions and assertions expressed herein are those of the authors and do not reflect the official policy or position of the Uniformed Services University of the Health Sciences or the Department of Defense.
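For reference, the three evaluation metrics used above (DSC, lesion-wise F1, and lesion-volume correlation) can be computed roughly as in the sketch below; the lesion-matching rule (any voxel of overlap between connected components) is one common operationalization and may differ from the exact criterion used in this work.

```python
import numpy as np
from scipy import ndimage

def dice(pred, truth):
    """Dice similarity coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

def lesion_f1(pred, truth):
    """Lesion-wise F1: lesions are connected components; a lesion counts as
    matched when it overlaps the other mask by at least one voxel (assumed rule)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    pred_cc, n_pred = ndimage.label(pred)
    truth_cc, n_truth = ndimage.label(truth)
    tp = sum(1 for i in range(1, n_truth + 1) if pred[truth_cc == i].any())
    fp = sum(1 for j in range(1, n_pred + 1) if not truth[pred_cc == j].any())
    fn = n_truth - tp
    return 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 1.0

def volume_correlation(preds, truths):
    """Pearson correlation of predicted vs. ground-truth lesion volumes over subjects."""
    v_pred = [p.sum() for p in preds]
    v_true = [t.sum() for t in truths]
    return np.corrcoef(v_pred, v_true)[0, 1]
```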
Deep learning algorithms using magnetic resonance (MR) images have demonstrated state-of-the-art performance in the automated segmentation of multiple sclerosis (MS) lesions. Despite this success, these algorithms may fail to generalize across sites or scanners, leading to domain generalization errors. Few-shot or one-shot domain adaptation can reduce this generalization error using limited labeled data from the target domain, but the scarcity of such data may prevent satisfactory performance. In this paper, we address this issue by integrating one-shot adaptation data with harmonized, labeled training data. Our approach synthesizes new training data with a contrast similar to that of the test domain, a process referred to as "contrast harmonization" in MRI. Our experiments show that combining one-shot adaptation data with harmonized training data outperforms using either data source alone, that adaptation with harmonized training data only achieves comparable or better performance than one-shot adaptation, and that all adaptations require only light fine-tuning of 2 to 5 epochs to converge.
2303.18121
BERTino: an Italian DistilBERT model
The recent introduction of Transformers language representation models allowed great improvements in many natural language processing (NLP) tasks. However, if on one hand the performances achieved by this kind of architectures are surprising, on the other their usability is limited by the high number of parameters which constitute their network, resulting in high computational and memory demands. In this work we present BERTino, a DistilBERT model which proposes to be the first lightweight alternative to the BERT architecture specific for the Italian language. We evaluated BERTino on the Italian ISDT, Italian ParTUT, Italian WikiNER and multiclass classification tasks, obtaining F1 scores comparable to those obtained by a BERTBASE with a remarkable improvement in training and inference speed.
Matteo Muffo, Enrico Bertino
2023-03-31T15:07:40
http://arxiv.org/abs/2303.18121v1
# BERTino: an Italian DistilBERT model ###### Abstract **English.1** The recent introduction of Transformers language representation models allowed great improvements in many natural language processing (NLP) tasks. However, if on one hand the performances achieved by this kind of architectures are surprising, on the other their usability is limited by the high number of parameters which constitute their network, resulting in high computational and memory demands. In this work we present BERTino, a DistilBERT model which proposes to be the first lightweight alternative to the BERT architecture specific for the Italian language. We evaluated BERTino on the Italian ISDT, Italian ParTUT, Italian WikiNER and multiclass classification tasks, obtaining F1 scores comparable to those obtained by a \(BERT_{BASE}\) with a remarkable improvement in training and inference speed. Footnote 1: Copyright ©2020 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). **Italiano.** La recente introduzione dei Transformers come modelli di rappresentazione del linguaggio naturale ha pernesso grandi avanzamenti sullo stato dell'arte in molte applicazioni di Natural Language Processing (NLP). Tuttavia, se da una parte i risultati raggiunti da queste architetture sono sorprendenti, dall'altra la loro fruibilita e limitata dall'elevato numero di parametri che costituiscono la loro architettura, con conseguenti elevate esigenze computazionali e di memoria. In questo lavoro presentiamo BERTino, un modello DistilBERT che e la prima alternativa _leggera_ all'architettura BERT specifica per la lingua italiana. Abbiamo valutato BERTino sui task ISDT tialiano, ParTUT italiano, WikiNER italiano e classificazione multiclasse, ottenendo punteggi F1 paragonabili a quelli ottenuti da un modello \(BERT_{BASE}\) con un notevole miglioramento nella velocita di addestramento e inferenza. ## 1 Introduction In recent years the introduction of Transformers language models allowed great improvements in many natural language processing (NLP) tasks. Among Transformer language models, BERT Devlin et al. (2018) affirmed itself as an high-performing and flexible alternative, being able to transfer knowledge from general tasks to downstream ones thanks to the pretraining-finetuning approach. The context-dependent text representations provided by this model demonstrated to be a richer source of information when compared to static textual embeddings such as Word2Vec Mikolov et al. (2013), GloVe Pennington et al. (2014), FastText Bojanowski et al. (2016) or Sent2Vec Pagliardini et al. (2018). However, despite the substantial improvements brought by BERT in the NLP field, the high number of parameters that constitute its network makes its usage prohibitive in resource-limited devices, both at training and inference time, and with a non-negligible environmental impact. To address the aforementioned problem, recent research proposes several approaches to reduce the size of the BERT network, such as DistilBERT Sanh et al. (2019), MobileBERT Sun et al. (2020) or pruning Gordon et al. (2020); McCarley et al. (2019). The experiments conducted in Virtanen et al. (2019), de Vries et al. (2019) and Martin et al. (2020) demonstrate that monolingual BERT models outperform the same multilingual BERT architecture Devlin et al. (2018), justifying the effort for pre-training Transformer models required for specific languages. 
In this work we present **BERTino**, a DistilBERT model pre-trained on a large Italian corpus. This model proposes to be the first general-domain, lightweight alternative to BERT specific for the Italian language. We evaluate BERTino on two Part Of Speech tagging tasks, Italian ISDT Bosco et al. (2000) and Italian ParTUT Sanguinetti and Bosco (2015), on the Italian WikiNER Nothman et al. (2012) Named Entity Recognition task and on a multi-class sentence classification task. Comparing the scores obtained by BERTino, its teacher model and GilBERTo, BERTino obtains performances comparable to the other two architectures while considerably decreasing the fine-tuning and evaluation time. In Section 2 we discuss the related works with a focus on DistilBERT, in Section 3 we describe the corpus and the pre-training, followed by the results in Section 4.

## 2 Related work

In this section we give a brief outline of the inner workings of Transformers, then we overview some lightweight alternatives to BERT. The introduction of Transformer blocks Vaswani et al. (2017) in language representation models is a keystone in recent NLP. The attention mechanism adopted by the Transformer encoder allows it to provide contextualized representations of words, which proved to be a richer source of information than static word embeddings. The attention mechanism processes all words in an input sentence simultaneously, allowing parallelization of computations. This is a non-negligible improvement with respect to models like ELMo Peters et al. (2018), which aim to provide contextualized text representations using a bidirectional LSTM network, processing each word sequentially. Among language models that adopt the Transformer technology, BERT Devlin et al. (2018) affirmed itself as a flexible and powerful alternative, being able to establish new state-of-the-art results for 11 NLP tasks at the time of publication. In its base version, this model adopts a hidden size of 768 and is composed of 12 layers (Transformer blocks), each of these involving 12 attention heads, for a total of 110 million parameters. As outlined in Section 1, the high number of parameters constituting BERT's network can be prohibitive for deployment in resource-limited devices, and the computational effort is not negligible. For this reason, great effort has been devoted by researchers to proposing smaller but valid alternatives to the base version of BERT. Gordon et al. (2020) study how weight pruning affects the performances of BERT, concluding that a low level of pruning (30-40% of weights) marginally affects the natural language understanding capabilities of the network. McCarley et al. (2019) conduct a similar study on BERT weight pruning, but applied to the Question Answering downstream task specifically. Sanh et al. (2019) propose DistilBERT, a smaller BERT architecture which is trained using the knowledge distillation technique of Hinton et al. (2015). Since the model that we propose relies on this training technique, we give a brief description of knowledge distillation in Section 2.1. DistilBERT leverages the inductive biases learned by larger models during pre-training using a triple loss combining language modeling, distillation and cosine-distance losses. The DistilBERT architecture counts 40% fewer parameters but is able to retain 97% of the natural language understanding performances of the teacher model, while being 60% faster. Sun et al.
(2020) propose MobileBERT, a compressed BERT model which aims to reduce the hidden size instead of the depth of the network. As DistilBERT, MobileBERT uses knowledge distillation during pre-training but adopts a \(BERT_{LARGE}\) model with inverted bottleneck as teacher. ### Knowledge distillation Knowledge distillation Hinton et al. (2015) is a training technique that leverages the outputs of a big network (called _teacher_) to train a smaller network (the _student_). In general, in the context of supervised learning, a classifier is trained in such a way that the output probability distribution that it provides is as similar as possible to the one-hot vector representing the gold label, by minimizing the cross-entropy loss between the two. By receiving a one-hot vector as learning signal, a model evaluated on the training set will provide an output distribution with a near-one value in cor respondence of the right class, and all near-zero values for other classes. Some of the near-zero probabilities, however, are larger than the others and are the result of the generalization capabilities of the model. The idea of knowledge distillation is to substitute the usual one-hot vector representing gold labels with the output distribution of the teacher model in the computation of the cross-entropy loss, in order to leverage the information contained in the near-zero values of the teacher's output distribution. Formally, the knowledge distillation loss is computed as: \[\mathcal{L}_{KD}=\sum_{i}t_{i}*\log(s_{i}) \tag{1}\] with \(t_{i}\) being the output distribution of the teacher model relative to the \(i^{th}\) observation, and \(s_{i}\) being the output distribution of the student model relative to the \(i^{th}\) observation. ## 3 BERTTino As outlined in section 1, we propose in this work BERTTino, a DistilBERT model pre-trained on a general-domain Italian corpus. As for BERT-like architectures, BERTTino is task-agnostic and can be fine-tuned for every downstream task. In this section we will report details relative to the pre-training that we conducted. ### Corpus The corpus that we used to pre-train BERTino is the union of PAISA (Lyding et al., 2014) and ItWaC (Baroni et al., 2009), two general-domain Italian corpora scraped from the web. While the former is made up of short sentences, the latter includes a considerable amount of long sentences. Since our model can receive input sequences of at most 512 tokens, as for BERT architectures, we decided to apply a pre-processing scheme to the ItWaC corpus. We split the sentences with more than 400 words into sub-sentences, using fixed points to create chunks that keep the semantic sense of a sentence. In this way, most of the long sentences contained in ItWaC are split into sub-sentences containing less than 512 tokens. A certain number of the final sentences still contain more than 512 tokens and they will be useful for training the parameters relative to the last entries of the network. The PAISA corpus counts 7.5 million sentences and 223.5 million words. The ItWaC corpus counts 6.5 million sentences and 1.6 billion words after preprocessing. Our final corpus counts 14 million sentences and 1.9 billion words for a total of 12GB of text. ### Pre-training **Teacher model** The teacher model that we selected to perform knowledge distillation during the pre-training of BERTTino is _dbmdz/bert-basetialian-xcl-uncased_, made by _Bavarian State Library2_. 
We chose this model because it is the Italian \(BERT_{BASE}\) model trained on the biggest corpus (81 GB of text), up to our knowledge. Following Sanh et al. (2019), we initialized the weights of our student model by taking one layer out of two from the teacher model. Footnote 2: [https://github.com/dbmdz/berts](https://github.com/dbmdz/berts) **Loss function** We report the loss function used to pre-train BERTino: \[\mathcal{L}=0.45\mathcal{L}_{KD}+0.45\mathcal{L}_{MLM}+0.1\mathcal{L}_{COS} \tag{2}\] with \(\mathcal{L}_{KD}\) being the knowledge distillation loss as described in equation 1, \(\mathcal{L}_{MLM}\) being the masked language modeling loss and \(\mathcal{L}_{COS}\) being the cosine embedding loss. Sanh et al. (2019) describe the cosine embedding loss useful to "align the directions of the student and teacher hidden states vectors". When choosing the weights of the three loss functions, we wanted our model to learn from the teacher and by itself in an equal way, so we set the same weights for both \(\mathcal{L}_{KD}\) and \(\mathcal{L}_{MLM}\). Moreover, we considered the alignment of student and teacher hidden states vectors marginal for our objective, setting \(\mathcal{L}_{COS}\) as 10% of the total loss. **Architecture** The architecture of BERTTino is the same as in DistilBERT. Our model adopts an hidden size of 768 and is composed of 6 layers (Transformer blocks), each of which involving 12 attention heads. In this way BERTTino's network results to have half the layers present in the \(BERT_{BASE}\) architecture. **Training details** To pre-train BERTTino we used a batch size of 6 and an initial learning rate of \(5\times 10^{-4}\), adopting Adam (Kingma and Ba, 2014) as optimization algorithm. We chose 6 as batch size due to the limited computational resources available. Results described in section 4 demonstrate that the small batch size that we adopted is sufficient to obtain a valid pre-trained model. We trained our model on 4 Tesla K80 GPUs for 3 epochs, requiring 45 days of computation in total. For some aspects of the training, we relied on the Huggingface Transformers repository (Wolf et al., 2019). ## 4 Results We tested the performances of BERTino on benchmark datasets: the Italian ISDT (Bosco et al., 2000) and Italian ParTUT (Sanguinetti and Bosco, 2015) Part Of Speech tagging tasks, and the Italian WikiNER (Nothman et al., 2012) Named Entity Recognition task. To complete the evaluation of the model, we also tested it on a multi-class sentence classification task. In particular, we focused on intent detection, a task specific to the context of Dialogue Systems, creating a novel italian dataset which is freely available at our repository3. The dataset that we propose collects 2786 real-world questions (2228 for training and 558 for testing) submitted to a digital conversational agent. The total number of classes in the dataset is 139. Footnote 3: [https://github.com/indigo-ai/BERTino](https://github.com/indigo-ai/BERTino) For the first two tasks mentioned, we fine-tuned our model on the training set for 4 epochs with a batch size of 32 and a learning rate of \(5\times 10^{-5}\), for the NER task we performed 5-fold splitting of the dataset and fine-tuned BERTino for 2 epochs per fold with a batch size of 32 and a learning rate of \(5\times 10^{-5}\), while for the multi-class classification task we fine-tuned our model for 14 epochs on the training set with a batch size of 32 and a learning rate of \(5\times 10^{-5}\). 
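As a concrete illustration of the pre-training objective, the sketch below combines the distillation term of Eq. 1 (written in the standard negative cross-entropy form) with the masked-language-modeling and cosine-embedding terms of Eq. 2; the tensor names and shapes are assumptions, and this is not the training code actually used for BERTino.

```python
import torch
import torch.nn.functional as F

def bertino_pretraining_loss(student_logits, teacher_logits, mlm_labels,
                             student_hidden, teacher_hidden,
                             w_kd=0.45, w_mlm=0.45, w_cos=0.10):
    """Weighted sum of distillation, masked-LM and cosine-embedding losses (Eq. 2).

    student_logits, teacher_logits: (batch, seq_len, vocab) token predictions
    mlm_labels: (batch, seq_len) gold token ids, -100 at unmasked positions
    student_hidden, teacher_hidden: (batch, seq_len, hidden) last hidden states
    """
    # Eq. 1 (up to sign convention): cross-entropy between the teacher's soft
    # output distribution and the student's predicted distribution.
    t = F.softmax(teacher_logits, dim=-1)
    s_log = F.log_softmax(student_logits, dim=-1)
    l_kd = -(t * s_log).sum(dim=-1).mean()

    # Standard masked language modeling loss on the student's own predictions.
    l_mlm = F.cross_entropy(student_logits.view(-1, student_logits.size(-1)),
                            mlm_labels.view(-1), ignore_index=-100)

    # Cosine embedding loss aligning the directions of student and teacher
    # hidden-state vectors.
    target = torch.ones(student_hidden.size(0) * student_hidden.size(1),
                        device=student_hidden.device)
    l_cos = F.cosine_embedding_loss(student_hidden.flatten(0, 1),
                                    teacher_hidden.flatten(0, 1), target)

    return w_kd * l_kd + w_mlm * l_mlm + w_cos * l_cos
```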
To compare the results obtained, we fine-tuned the teacher model and a GilBERTo model4 on the same tasks with the same hyper-parameters. Tables 1, 2, 3 and 4 collect the F1 scores gathered in these experiments together with fine-tuning and evaluation time. All the scores reported represent the average computed over three different runs. Results show that the teacher model slightly outperforms BERTino, with an increase of the F1 score of 0,29%, 5,15%, 1,37% and 1,88% over the tasks analysed. However BERTino results to be a sensibly faster network with respect to the teacher model and GilBERTo, taking almost half of the time to perform both fine-tuning and evaluation. We can conclude from the last observation that BERTino is able to retain most of the natural language understanding capabilities of the teacher model, even with a much smaller architecture. Footnote 4: Available at [https://github.com/idb-ita/GilBERTo](https://github.com/idb-ita/GilBERTo) ## 5 Conclusions In this work we presented BERTino, a DistilBERT model which aims to be the first lightweight alternative to BERT specific for the Italian language. Our model has been trained on a general-domain corpus and can then be finetuned with good performances on a wide range of tasks like its larger counterparts. BERTino showed comparable performances with respect to both the teacher model and GilBERTo in the Italian ISDT, Italian ParTUT, Italian WikiNER and multi-class sentence classification tasks while taking almost half of the time to fine-tune, demonstrating to be a valid lightweight alternative to \(BERT_{BASE}\) models for the Italian language. \begin{table} \begin{tabular}{|l|c|c|c|} \hline \multicolumn{4}{|c|}{Italian ISDT} \\ \hline Model & F1 score & Fine-tuning time & Evaluation time \\ \hline BERTino & 0,9800 & 9’10” & 3” \\ Teacher model & 0,9829 & 16’32” & 6” \\ GilBERTo & 0,9804 & 18’11” & 5” \\ \hline \end{tabular} \end{table} Table 1: F1 scores obtained by BERTino and the teacher model in the Italian ISDT task. \begin{table} \begin{tabular}{|l|c|c|c|} \hline \multicolumn{4}{|c|}{Italian PariTUT} \\ \hline Model & F1 score & Fine-tuning time & Evaluation time \\ \hline BERTino & 0,9039 & 38’3” & 3’2” \\ Teacher model & 0,9176 & 67’2” & 5’21” \\ GilBERTo & 0,9136 & 66’33” & 5’9” \\ \hline \end{tabular} \end{table} Table 3: F1 scores obtained by BERTino and the teacher model in the Italian WikiNER task. The results reported are the average of the scores obtained in each of the 5 folds. \begin{table} \begin{tabular}{|l|c|c|c|} \hline \multicolumn{4}{|c|}{Multi-class sentence classification} \\ \hline Model & F1 score & Fine-tuning time & Evaluation time \\ \hline BERTino & 0,7766 & 5’4” & 6” \\ Teacher model & 0,7954 & 9’48” & 10” \\ GilBERTo & 0,7381 & 10’0” & 10” \\ \hline \end{tabular} \end{table} Table 4: F1 scores obtained by BERTino and the teacher model in the multi-class sentence classification task. \begin{table} \begin{tabular}{|l|c|c|c|} \hline \multicolumn{4}{|c|}{Italian PariTUT} \\ \hline Model & F1 score & Fine-tuning time & Evaluation time \\ \hline BERTino & 0,9193 & 1’19” & 1” \\ Teacher model & 0,9708 & 2’19” & 1” \\ GilBERTo & 0,9621 & 2’21” & 1” \\ \hline \end{tabular} \end{table} Table 2: F1 scores obtained by BERTino and the teacher model in the Italian ParTUT task.
The recent introduction of Transformer language representation models has enabled great improvements in many natural language processing (NLP) tasks. However, while the performance achieved by this kind of architecture is surprising, its usability is limited by the large number of parameters that make up the network, resulting in high computational and memory demands. In this work we present BERTino, a DistilBERT model that proposes to be the first lightweight alternative to the BERT architecture specific for the Italian language. We evaluated BERTino on the Italian ISDT, Italian ParTUT, Italian WikiNER and multi-class classification tasks, obtaining F1 scores comparable to those obtained by a BERTBASE model with a remarkable improvement in training and inference speed.
2309.14998
An Ensemble Model for Distorted Images in Real Scenarios
Image acquisition conditions and environments can significantly affect high-level tasks in computer vision, and the performance of most computer vision algorithms will be limited when trained on distortion-free datasets. Even with updates in hardware such as sensors and deep learning methods, it will still not work in the face of variable conditions in real-world applications. In this paper, we apply the object detector YOLOv7 to detect distorted images from the dataset CDCOCO. Through carefully designed optimizations including data enhancement, detection box ensemble, denoiser ensemble, super-resolution models, and transfer learning, our model achieves excellent performance on the CDCOCO test set. Our denoising detection model can denoise and repair distorted images, making the model useful in a variety of real-world scenarios and environments.
Boyuan Ji, Jianchang Huang, Wenzhuo Huang, Shuke He
2023-09-26T15:12:55
http://arxiv.org/abs/2309.14998v1
# An Ensemble Model for Distorted Images in Real Scenarios ###### Abstract Image acquisition conditions and environments can significantly affect high-level tasks in computer vision, and the performance of most computer vision algorithms will be limited when trained on distortion-free datasets. Even with updates in hardware such as sensors and deep learning methods, it will still not work in the face of variable conditions in real-world applications. In this paper, we apply the object detector YOLOv7 to detect distorted images from the dataset CDCOCO. Through carefully designed optimizations including data enhancement, detection box ensemble, denoiser ensemble, super-resolution models, and transfer learning, our model achieves excellent performance on the CDCOCO test set. Our denoising detection model can denoise and repair distorted images, making the model useful in a variety of real-world scenarios and environments. Boyuan Ji, Jianchang Huang, Wenzhuo Huang, Shuke He Zhejiang University of Science and Technology Edge Intelligence Security Lab, School of Big Data Science Hangzhou, Zhejiang, China Object Detection, Image Restoration, Ensemble Learning ## 1 Introduction CNN-based methods have become prevailed in object detection [1, 2] and image classification [3, 4, 5]. They not only have achieved promising performance on benchmark datasets [6, 7] but also have been deployed in the real-world applications such as autonomous driving [8] Due to the domain shift in input images [9], general object detection models trained by high-quality images often fail to achieve satisfactory results in detecting distorted images (e.g., foggy and blur, etc.). The distorted images will influence the bounding box which is the key to detecting objects. The distortions not only blind the human eye, but also fraud detector to make the confidence of the anchor reduce. To tackle this challenging problem, Huang, Le, and Jaw [10] employed two subnetworks to jointly learn visibility enhancement and object detection, where the impact of image degradation is reduced by sharing the feature extraction layers. However, it is hard to tune the parameters to balance the weights between detection and restoration during training. Another approach is to eliminate the effects of image distortion by pre-processing the input image (e.g., image defogging, image de-rain, image enhancement, etc.). In this paper, our approach is also constructed according to this paradigm. We apply YOLOv7 [11] and ensemble model to address the problem of object detection under uncontrolled acquisition environment, which are mainstream approaches in object detection tasks. Because these models have difficulty identifying object locations and classes where the image is affected by specific environments and noise, the model is improved by adding a denoiser to the model to increase its resistance to specific noise and reduce the noisy image to a normal image to make the model better at detecting images. Considering that the dataset contains several types and intensities of noise, multiple denoisers are integrated and trained to ensure that the model can remove factors from the real scene that interfere with the captured images. Figure 1: The distorted images denoised by Restormer and detected by our model. Text on detection box represents class of CD-COCO. The number represents the confidence of predicted class. Experimental results demonstrate that our model achieves excellent performance on the test set with more than 56.2% mAP(0.5:0.95). 
## 2 Basic Algorithm The challenge focuses on real scenarios where object detection is difficult due to different acquisition conditions. To address this challenge, we increase the detection accuracy of the detector by removing specific interference information and revealing more potential information to the image adaptive detection model, where the image input to the object detector is denoised so that the noise is attenuated. The complete denoiser includes a real noise denoiser, a dynamic blur denoiser, and a super-resolution model. In this paper, in terms of detectors, we used YOLOv7 as our base detection model. Real noise denoiser In general training and detection process, real noise denoiser is often used to counteract the noise that interferes with the input image of the model, such as KBNet [12], PNGAN [13], HINet [14], Restormer [15], etc. For the denoiser integration model, we chose NAFNet [16], Restormer, and KBNet three denoisers for removing real noise. YOLOv7 is a series of mainstream object detection algorithms developed from the paper. The largest model YOLOv7e6 makes the best performance which makes it particularly suitable for detecting images with noises. Restormer is an image restoration transformer model that is computationally efficient to handle high- resolution images. It makes key designs for the core components of the transformer block for improved feature aggregation and transformation. KBNet and NAFNet are networks for image restoration. KBA is the key design that adopts the learnable kernel bases to model the local image patterns and fuse kernel bases linearly to aggregate spatial information efficiently. After the detection model outputs multiple bounding boxes, we fuse the similar bounding boxes by WBF [17] to increase the confidence of bounding boxes and IOU. Fig. 4 shows that our method improves the performance of the detector. ## 3 Implementation ### pre-processing The original dataset obtains 93,811 images of 80 classes on the training set. And the validation set has 4,926 images. The Figure 4: The figure shows the performance of different denoiser and super-resolution models. The real denoising and motion deblurring can remove the noise of the image and increase the confidence of the bounding box. Figure 3: After denoised, the pixel value distribution becomes sharper, reducing the effects of blurring. Figure 2: The figure shows the performance of the purifier. The red box magnifies the detail of the image to make the denoising effect more visible. test set contains 23,527 images. They added dedicated distortions type at specific levels according to the scene context of images. It was correlated to the scene type and context. The distortion severity level will change with the object type and position for local distortions or atmospheric distortions. During the data processing, we apply the ensemble model and data augmentation. Other related parameters are showed on Tab. 1 ### Training Procedure YOLOv7: The model YOLOv7e6 has been pre-trained by the COCO dataset. Then we train the model on the distorted training set. We choose Adam as the optimizer with momentum=0.999 and weight decay is 0.0005. The learning rate is set at 0.001. It will be updated by cosine cyclical learning rate strategy. The box loss gain, classification loss gain, and object loss gain is 0.05, 0.3 and 0.7. ### Testing During the testing phase, we first apply the ensemble model module. The test examples are denoised by Restormer, NAFNet, and KBNet. 
The multiple denoised images will be achieved. Then we use a motion deblurring model to eliminate the motion blur noise in the images. Because most denoising models cause low-resolution images. The Real-esrgan [19] can restore low-resolution images with unknown and complex degradations. Lastly, we apply the super-resolution to increase visual performance and improve image quality. From Fig. 2, we can know that the images distorted by severe environments can not be detected by the human eye. The pixel value distribution is smooth when the images are distorted by motion blur. The histogram can measure the effectiveness of our denoising algorithm, as the distribution of pixel values in an image will usually change after denoising. A good denoising algorithm should be able to retain as much detail as possible in the image while reducing the effect of noise. Therefore, if the distribution of pixel values in the denoised image is more concentrated, the denoising is more effective. \begin{table} \begin{tabular}{c c} \hline Model & settings \\ \hline \multirow{2}{*}{Restormer [15]} & MDTA attention head = [1,2,4,8] \\ & GDFN channel expansion factor = 2.66 \\ \hline \multirow{4}{*}{KBNet [12]} & Encoder block = {2,2,2,2} \\ & Decoder block = {4,2,2,2} \\ & Iteration = 300k \\ & Kernel base number = 32 \\ \hline \multirow{4}{*}{NAFNet [11]} & Optimizer = Adam \\ & Number of GMAC = 16 \\ \cline{1-1} & Iteration = 200k \\ \cline{1-1} & Width and number of blocks = 32,36 \\ \cline{1-1} & Input size = \(256\times 256\) \\ \hline \multirow{4}{*}{Real-ESRGAN [19]} & Optimizer = Adam \\ & Weight of feature map = {0.1,0.1,1,1,1} \\ \cline{1-1} & Iteration = 400k \\ \cline{1-1} & Input size = \(256\times 256\) \\ \hline \end{tabular} \end{table} Table 1: Different denoiser and super-resolution model setting and special modules Figure 5: The figure shows the entire implementation procedure. We pretrain multiple denoise models on SIDD [18] and combine them as the purifier. The detection model is trained on augmented training dataset. We apply Weighted Boxes Fusion(WBF) to ensemble the prediction of multiple denoised images. ### Model Ensemble Not only in the denoiser we use the ensemble model but also in the bounding box selection of detector we use WBF as our ensemble strategy. Different from the common model, we put multiple denoised images into the detector, generating a huge number of bounding boxes. WBF method uses confidence scores of all proposed bounding boxes in the iterative algorithm that constructs the averaged boxes. After ensemble, our detector can predict a more accurate anchor on the object with higher confidence. The weights of the multiple images will be related to the performance of the corresponding denoiser on the SIDD. The entire procedure of our implementation is illustrated in Fig. 5. ## 4 Ablation Studies The main results of the detection models YOLOv7e6 on the test set are summarized in Tab. 2. The components of the purifier are the most important module in our method. Most denoiser are benefit to the model's performance. Among them, the super-resolution increases the mAP of dataset, performing effectively. It suggests that the ensemble model is necessary for the detection task in the severe environment. In Tab. 3, it shows that the model sequence also affects image rebuild and the performance of the detector. The RD represents the real denoiser. The MD represents the motion deblur model. The RE represents the Real-ESRGAN. 
The real denoiser and motion deblur model will make disturbed images low resolution, so if the Real-ESRGAN is the first model it can not make images high resolution. It will decrease the confidence of the bounding box of the detector. ## 5 Final Result Final result of our ensemble denoiser and detection model on ICIP 2023 challange of Object detection under uncontrolled acquisition environment and scene context constraints is presented on Tab. 4. ## 6 Conclusion We investigate an integrated denoising object detector for challenge dataset containing multiple noises. The denoiser is an important component in our detection process. We form an integrated denoising model by fusing multiple denoisers trained by WBF. With the integrated denoising model, the challenge dataset containing multiple noises is not only effectively removed from multiple noises, but also the distribution of its pixel values becomes clearer. Our detector is based on YOLOv7e6e, and the model is optimized by integration training, which enables the detection model to predict objections with higher confidence. Our model achieves excellent performance on the Challenge dataset with an mAP(0.5:0.95) of over 56.2%. \begin{table} \begin{tabular}{c c c c c} \hline \hline Model & Precision & Recall & mAP (0.5) & mAP (0.5:0.95) \\ \hline YOLOv7e6 & 0.770 & 0.682 & 0.751 & 0.562 \\ \hline \hline \end{tabular} \end{table} Table 4: The final result of YOLOv7e6 \begin{table} \begin{tabular}{c c c c c c c c c c c c} \hline \hline Aug. & \begin{tabular}{c} Real \\ Denoise \\ \end{tabular} & \begin{tabular}{c} Motion \\ Deblur \\ \end{tabular} & \begin{tabular}{c} Real \\ ESRGAN \\ \end{tabular} & \begin{tabular}{c} Large \\ size \\ \end{tabular} & \begin{tabular}{c} KBNet \\ size \\ \end{tabular} & \begin{tabular}{c} NAFNet \\ \end{tabular} & WBF & Precision & Recall & mAP (0.5) & mAP (0.5:0.95) \\ \hline & & & & & & & & 0.785 & 0.635 & 0.706 & 0.511 \\ & & & & & & & & 0.752 & 0.654 & 0.698 & 0.513 \\ & & & & & & & & 0.745 & 0.620 & 0.678 & 0.494 \\ & & & & & & & & 0.759 & 0.643 & 0.700 & 0.517 \\ & & & & & & & & 0.777 & 0.635 & 0.709 & 0.510 \\ & & & & & & & & 0.787 & 0.660 & 0.731 & 0.538 \\ & & & & & & & & 0.774 & 0.659 & 0.731 & 0.539 \\ & & & & & & & & 0.773 & 0.669 & 0.736 & 0.545 \\ & & & & & & & & 0.770 & 0.671 & 0.741 & 0.550 \\ \hline \hline \end{tabular} \end{table} Table 2: Experimental results of YOLOv7e6 on test set. Ticked item represents that corresponding method is applied during the procedure. Aug. represents data augmentation. \begin{table} \begin{tabular}{c c c c c} \hline \hline Model & Precision & Recall & mAP (0.5) & mAP (0.5:0.95) \\ \hline YOLOv7e6 & 0.770 & 0.682 & 0.751 & 0.562 \\ \hline \hline \end{tabular} \end{table} Table 3: Different model sequence will cause different denoised images and offset the bounding box.
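For illustration, the detection-box ensembling step could be implemented with the open-source reference implementation of Weighted Boxes Fusion [17] roughly as sketched below; the package name, threshold values, and weighting scheme are assumptions rather than the authors' exact configuration.

```python
from ensemble_boxes import weighted_boxes_fusion  # open-source WBF implementation

def fuse_detections(per_image_predictions, denoiser_weights,
                    iou_thr=0.55, skip_box_thr=0.05):
    """Fuse detector predictions obtained on several denoised versions of one image.

    per_image_predictions: one entry per denoiser, each a tuple (boxes, scores, labels),
        where boxes are [x1, y1, x2, y2] normalized to [0, 1].
    denoiser_weights: one weight per denoiser, e.g. proportional to its SIDD score.
    """
    boxes_list = [p[0] for p in per_image_predictions]
    scores_list = [p[1] for p in per_image_predictions]
    labels_list = [p[2] for p in per_image_predictions]

    boxes, scores, labels = weighted_boxes_fusion(
        boxes_list, scores_list, labels_list,
        weights=denoiser_weights, iou_thr=iou_thr, skip_box_thr=skip_box_thr)
    return boxes, scores, labels
```

In this way, predictions obtained on the Restormer-, NAFNet-, and KBNet-denoised versions of an image are merged into a single set of boxes, with higher weights given to the denoisers that perform better on SIDD.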
Image acquisition conditions and environments can significantly affect high-level tasks in computer vision, and the performance of most computer vision algorithms is limited when they are trained on distortion-free datasets. Even with updates in hardware such as sensors and in deep learning methods, they still cannot cope with the variable conditions of real-world applications. In this paper, we apply the object detector YOLOv7 to detect objects in distorted images from the CDCOCO dataset. Through carefully designed optimizations, including data enhancement, detection-box ensembling, denoiser ensembling, super-resolution models, and transfer learning, our model achieves excellent performance on the CDCOCO test set. Our denoising detection model can denoise and repair distorted images, making it useful in a variety of real-world scenarios and environments.
2309.11638
A survey on the semantics of sequential patterns with negation
A sequential pattern with negation, or negative sequential pattern, takes the form of a sequential pattern for which the negation symbol may be used in front of some of the pattern's itemsets. Intuitively, such a pattern occurs in a sequence if negated itemsets are absent in the sequence. Recent work has shown that different semantics can be attributed to these pattern forms, and that state-of-the-art algorithms do not extract the same sets of patterns. This raises the important question of the interpretability of sequential pattern with negation. In this study, our focus is on exploring how potential users perceive negation in sequential patterns. Our aim is to determine whether specific semantics are more "intuitive" than others and whether these align with the semantics employed by one or more state-of-the-art algorithms. To achieve this, we designed a questionnaire to reveal the semantics' intuition of each user. This article presents both the design of the questionnaire and an in-depth analysis of the 124 responses obtained. The outcomes indicate that two of the semantics are predominantly intuitive; however, neither of them aligns with the semantics of the primary state-of-the-art algorithms. As a result, we provide recommendations to account for this disparity in the conclusions drawn.
Thomas Guyet
2023-09-20T21:03:18
http://arxiv.org/abs/2309.11638v1
# A survey on the semantics of sequential patterns with negation # A survey on the semantics of sequential patterns with negation Thomas Guyet\({}^{1}\) \({}^{1}\) Inria - Centre de Lyon, AlstroSight [email protected] **Abstract** A sequential pattern with negation, or negative sequential pattern [10], takes the form of a sequential pattern for which the negation symbol (\(\neg\)) may be used in front of some of the pattern's itemsets. Intuitively, such a pattern occurs in a sequence if negated itemsets are _absent_ in the sequence. Recent work [3] has shown that different semantics can be attributed to these pattern forms, and that state-of-the-art algorithms do not extract the same sets of patterns. This raises the important question of the interpretability of sequential pattern with negation. In this study, our focus is on exploring how potential users perceive negation in sequential patterns. Our aim is to determine whether specific semantics are more "intuitive" than others and whether these align with the semantics employed by one or more state-of-the-art algorithms. To achieve this, we designed a questionnaire to reveal the semantics' intuition of each user. This article presents both the design of the questionnaire and an in-depth analysis of the 124 responses obtained. The outcomes indicate that two of the semantics are predominantly intuitive; however, neither of them aligns with the semantics of the primary state-of-the-art algorithms. As a result, we provide recommendations to account for this disparity in the conclusions drawn. Keywordspattern mining, sequential patterns, negation, interpretation, survey ## 1 Introduction Sequential pattern extraction is a classic class of data mining methods. Its objective is to extract subsequences (patterns) that frequently appear from a large dataset of sequences. A pattern is considered frequent when it appears in at least \(\sigma\) sequences, where \(\sigma\) is user-defined. For instance, consider the pattern \(\langle e\ (ca)\ d\rangle\), which indicates that "item \(e\) is followed by the itemset \(ca\) and then by item \(d\) simultaneously". In the table below, this pattern appears in 4 sequences (\(\boldsymbol{p_{0}}\), \(\boldsymbol{p_{2}}\), \(\boldsymbol{p_{3}}\), and \(\boldsymbol{p_{4}}\)). These frequent patterns can be efficiently enumerated thanks to the anti-monotonicity property of the support measure (i.e., the number of occurrences of a pattern). Intuitively, the support of a pattern decreases with the pattern's size. This property, utilized by most algorithms in the \begin{table} \begin{tabular}{l l} \hline \(id\) & _Sequence_ \\ \hline \(\boldsymbol{p_{0}}\) & \(\langle e\ (caf)\ d\ b\ e\ d\rangle\) \\ \(\boldsymbol{p_{1}}\) & \(\langle c\ a\ d\ b\ e\ d\rangle\) \\ \(\boldsymbol{p_{2}}\) & \(\langle e\ (ca)\ d\rangle\) \\ \(\boldsymbol{p_{3}}\) & \(\langle d\ e\ (ca)\ b\ d\ b\ e\ f\rangle\) \\ \(\boldsymbol{p_{4}}\) & \(\langle c\ e\ b\ (fac)\ d\ e\ c\rangle\) \\ \hline \end{tabular} \end{table} Table 1: Example of a dataset containing five sequences over an alphabet of six items \(\Sigma=\{a,b,c,d,e,f\}\). literature, prevents enumerating patterns that are larger than those known a priori not to be frequent. This trick ensures the complete exploration of the search space while maintaining algorithm efficient. Several studies [5, 7] have expanded the domain of sequential patterns by incorporating information about the absence of item occurrences. 
Such patterns are termed "sequential patterns _with negation_" or "_negative sequential patterns_". Sequential patterns with negation take the form of sequential patterns in which a negation symbol, \(\neg\), may precede certain items. The negation symbol indicates that the specified item must be absent from a sequence for the pattern to be considered to occur. Intuitively, the pattern \(\langle a\neg b\ c\rangle\) is recognized in a sequence if the latter contains an \(a\) followed by a \(c\), and \(b\) is not present between the occurrences of \(a\) and \(c\).

We advocate for a broader use of sequential patterns with negation in the process of mining datasets of sequences. This type of pattern holds particular significance for data analysts, as it has the potential to unveil meaningful insights from the absence of events. For instance, in the context of health, the non-administration of a certain drug (\(d\)) might trigger an illness (\(i\)). When analyzing a database using a conventional sequential pattern mining algorithm, frequent patterns might indicate an illness occurrence without other co-occurring events. However, in the conventional semantics of sequential patterns, the absence of other events related to illness cannot be concluded from this pattern. Sequential patterns with negation, such as \(\langle\neg d\ i\rangle\), bring to light the frequent co-occurrence of drug absence and the occurrence of an illness.

In this study, we would like to highlight possible interpretability issues of sequential patterns with negation. Indeed, Besnard and Guyet [3] have demonstrated the existence of multiple semantics for these patterns; as a consequence, there is a risk of misinterpretation of extracted patterns in case the user and the algorithms do not share the same semantics. This concern is not solely theoretical; it manifests practically since the two state-of-the-art algorithms, eNSP [5] and NegPSpan [7], do not have the same semantics for the negation symbol [3]. As a result, the patterns extracted by each of these algorithms need to be interpreted differently by the user. Considering that a user does not necessarily seek to understand the intricacies of these patterns, we believe that the designers of pattern mining algorithms have to take care of possible misinterpretations of the outputs of their algorithms. Therefore, it is crucial to identify any possible disparity between the semantics used in an algorithm and the one that is perceived "intuitively" by users. In this article, we therefore investigate three questions:

1. Is there an "intuitive" semantics for patterns with negation?
2. Does the "intuitive" semantics correspond to those actually employed by any of the algorithms?
3. What recommendations can be made regarding the use of patterns with negations?

To address these questions, our methodology involved designing a questionnaire to uncover the intuitive semantics of potential users of pattern mining algorithms. The details of the methodology of this survey are described in Section 3. Section 5 presents the questions posed to users and makes explicit the potential alternative interpretations. The collected results from 124 participants are presented and analyzed in Section 6. We begin by introducing a brief overview of state-of-the-art algorithms for extracting sequential patterns with negation.
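Before turning to the state of the art, the notion of occurrence used above can be made concrete. The following sketch checks whether a (positive) sequential pattern occurs in a sequence of itemsets and counts its support over a dataset such as Table 1; it only illustrates the standard containment test and is not one of the mining algorithms discussed below.

```python
def occurs(pattern, sequence):
    """True if each itemset of `pattern` is included, in order, in an itemset of `sequence`."""
    pos = 0
    for itemset in pattern:
        while pos < len(sequence) and not set(itemset) <= set(sequence[pos]):
            pos += 1
        if pos == len(sequence):
            return False
        pos += 1  # the next pattern itemset must be matched strictly later
    return True

def support(pattern, dataset):
    """Number of sequences of the dataset in which the pattern occurs."""
    return sum(occurs(pattern, seq) for seq in dataset)

# The five sequences of Table 1, itemsets written as tuples of items.
dataset = [
    [("e",), ("c", "a", "f"), ("d",), ("b",), ("e",), ("d",)],                  # p0
    [("c",), ("a",), ("d",), ("b",), ("e",), ("d",)],                            # p1
    [("e",), ("c", "a"), ("d",)],                                                # p2
    [("d",), ("e",), ("c", "a"), ("b",), ("d",), ("b",), ("e",), ("f",)],        # p3
    [("c",), ("e",), ("b",), ("f", "a", "c"), ("d",), ("e",), ("c",)],           # p4
]
pattern = [("e",), ("c", "a"), ("d",)]   # <e (ca) d>
print(support(pattern, dataset))          # 4: the pattern occurs in p0, p2, p3 and p4
```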
## 2 State-of-the-art in sequential pattern extraction with negations The first endeavor in negative pattern extraction was presented by Savasere et al. [9] in the context of itemset mining. Initial efforts toward sequential patterns with negation were made by Wu et al. [12] for association rules. Over time, several recent approaches have emerged to capitalize on advancements in pattern extraction techniques. The eNSP algorithm extracts negative patterns by leveraging set operations between sets of sequences matched by frequent sequential patterns [5]. This approach circumvents the direct enumeration of patterns with negation that leads to efficient algorithms. Since then, numerous alternatives to this algorithm have been proposed, focusing on item utility [13], repetitions [6], multiple support constraints [14], and more. Nonetheless, these methods do not rely on an antimonotonicity property of the support measure and they do not guarantee to extract all frequent patterns. An alternative to eNSP is NegPSpan[7], which employs a distinct pattern semantics to harness the antimonotonicity property. This enables efficient and complete extraction following conventional pattern mining principles. The completeness of the mining process makes the approach more reliable as it guarantees to the user to not miss interesting patterns. And the implementation benefits from decades of pattern mining research to maintain the efficiency. More recently, Wang et al. [11] introduced VM-NSP, an algorithm utilizing a vertical representation to enhance efficiency. For a comprehensive overview of recent developments in mining sequential pattern with negation, interested readers can refer to the work of Wang et al. [10]. In the initial stages, early approaches were compared without employing uniform pattern semantics. However, the recognition of distinct semantics has contributed to the clarification of the domain [3]. Specifically, eight semantics of patterns with negations have been delineated. These eight variations stem from different interpretations of the notion of non-inclusion, occurrence, and inclusion relation. These notions, detailed in Section 5, have informed the design of our questionnaire. ## 3 Survey on the Perception of Sequential Patterns with Negations The survey aims to identify the most intuitive semantics of sequential patterns with negation. The questionnaire is organized into three parts: 1. Evaluation of background knowledge in the domains of pattern mining and logic. In this part, participants are asked whether they are familiar with the concepts of pattern mining and whether they are computer scientists, logicians, or researchers. This information helps characterize potential biases within the participant group. 2. Verification of the understanding of sequential patterns (without negation) and the scope of negations. The general framework for the semantics of negative sequential patterns [3] makes assumptions about the definition of a classical sequential pattern and the intuitive scope of the negation. Two questions assess whether participants adhere to these definitions. Correct answers are required for inclusion in the survey analysis. 3. Identification of the intuitive semantics of sequential patterns with negation. This third part constitutes the core of the questionnaire. Participants are asked to determine which sequences they believe contain a given pattern (see example in Figure 1). The questions have been designed to unveil the semantics assumed by each participant. 
Thus, each participant is assigned one of the eight possible semantics. We refer to this questionnaire as revealing the intuitive semantics of participants, as they are not explicitly asked to state their preferred interpretations, but their interpretations are indirectly inferred from their answers. The questionnaire was distributed between December 2021 to March 2023. We used research mailing lists and non-research channels to collect responses from both experts and non-experts. The questionnaire is accessible via a standard web browser1. The questionnaire begins with explanations of sequential pattern concepts. It is designed to accommodate users with varying levels of mathematical comprehension by offering two versions: one employing letter notations and the other employing colored symbols. Figure 1 depicts the two alternative format to presenting a question. Footnote 1: [http://people.irisa.fr/Thomas.Guyet/negativepatterns/Survey/index.php](http://people.irisa.fr/Thomas.Guyet/negativepatterns/Survey/index.php) The questionnaire is entirely anonymous, and the collected data only include dates and answers to the questions. ## 4 General Framework We now introduce the syntax of sequential patterns with negation which restricts the general definition of sequential patterns with negation to the ones introduced by Besnard and Guyet [3]. In the following, let \([n]=1,\ldots,n\) denote the set of the first \(n\) integers, and let \(\mathcal{I}\) denote a set of items (alphabet). A subset \(A=a_{1},a_{2},\ldots,a_{m}\subseteq\mathcal{I}\) is called an _itemset_. A _sequence_\(\boldsymbol{s}\) is of the form \(\boldsymbol{s}=\langle s_{1},s_{2},\ldots,s_{n}\rangle\), where \(s_{i}\) is an itemset. **Definition 1** (Sequential pattern with negation).: _A sequential pattern with negation \(\boldsymbol{p}=\langle p_{1},\neg q_{1},\neg p_{2},\neg q_{2},\ldots,p_{n-1}, \neg q_{n-1},p_{n}\rangle\) is such that \(p_{i}\in 2^{\mathcal{I}}\setminus\emptyset\) for all \(i\in[n]\) and \(q_{i}\in 2^{\mathcal{I}}\) for all \(i\in[n-1]\). \(\boldsymbol{p}^{+}=\langle p_{1},p_{2},\ldots,p_{n}\rangle\) denotes the positive part of \(\boldsymbol{p}\)._ The semantics of patterns relies on the containment relation, which specifies how to determine whether a pattern occurs (is contained) or not in a sequence. This relation utilizes the notion of occurrence of a (positive) sequential pattern in a sequence, formally defined as follows: **Definition 2** (Occurrence of a sequential pattern).: _Let \(\boldsymbol{s}=\langle s_{1},s_{2},\ldots,s_{n}\rangle\) be a sequence and \(\boldsymbol{p}=\langle p_{1},p_{2},\ldots,p_{m}\rangle\) be a sequential pattern, \(\boldsymbol{e}=(e_{i})_{i\in[m]}\in[n]^{m}\) is an occurrence of the pattern \(\boldsymbol{p}\) in the sequence \(\boldsymbol{s}\) if \(p_{i}\subseteq s_{e_{i}}\) for all \(i\in[m]\) and \(e_{i}<e_{i+1}\) for all \(i\in[m-1]\)._ The understanding of this definition (explained at the beginning of the questionnaire) is verified through the following question. **Question 1** (Occurrence of a sequential pattern).: _Let \(\boldsymbol{p}=\langle(ca)\ d\ e\rangle\) be a sequential pattern, indicate in which sequences of Table 1 the pattern \(\boldsymbol{p}\) occurs._ The expected answers to this question are the sequences \(\boldsymbol{p_{0}}\), \(\boldsymbol{p_{3}}\), and possibly \(\boldsymbol{p_{4}}\). Sequence \(\boldsymbol{p_{0}}\) allows us to verify the understanding that \((ca)\) appears in \((caf)\) as per our definitions. 
Sequence \(\boldsymbol{p_{1}}\) verifies that all the elements of \((ca)\) appear together (and not just a subset). Sequence \(\boldsymbol{p_{2}}\) allows us to verify the understanding of the importance of the occurrence order in the sequence. Sequence \(\boldsymbol{p_{3}}\) lets us verify the understanding of the notion of a _gap_: it is possible to have itemsets in the middle of an occurrence (e.g., the occurrence of \(b\) between \(d\) and \(e\)). Lastly, the final sequence presents an itemset whose items are not ordered. If \(\boldsymbol{p_{4}}\) is not deemed to contain \(\boldsymbol{p}\), it would indicate a user's sensitivity to the order within an itemset (which is classically not the case).

Likewise, the semantics of sequential patterns with negation are based on a containment relation. A pattern with negation, \(\boldsymbol{p}\), is contained in a sequence \(\boldsymbol{s}\) if \(\boldsymbol{s}\) contains a subsequence \(\boldsymbol{s}^{\prime}\) such that each positive set of \(\boldsymbol{p}\) (denoted as \(p_{i}\)) is included in an itemset of \(\boldsymbol{s}^{\prime}\) (in proper order), and all the negation constraints expressed by \(\neg q_{i}\) are also satisfied. The negation constraint on \(q_{i}\) then applies to the subsequence of \(\boldsymbol{s}^{\prime}\) located between the occurrence of the positive itemset preceding \(\neg q_{i}\) in \(\boldsymbol{p}\) and the occurrence of the positive itemset following \(\neg q_{i}\) in \(\boldsymbol{p}\). This definition determines the scope of the negation, which is specific to the framework we are working in. Ensuring that users share this definition is paramount. The following question allows us to confirm this understanding.

**Question 2** (Scope of the negation).: _Let \(\boldsymbol{p}=\langle c\ \neg d\ e\rangle\) be a pattern with negation, indicate the sequences of the table below in which, according to you, \(\boldsymbol{p}\) occurs._

Figure 1: Illustration of the two versions of the questionnaire: on the left, the classical view employing mathematical notations; on the right, the version employing colored shapes tailored for non-expert users. The use of colors and shapes provides redundancy while also catering to color-blind individuals.

\begin{tabular}{l l} \hline \(id\) & _Sequence_ \\ \hline \(\mathbf{s_{0}}\) & \(\langle f\ f\ c\ b\ d\ a\ e\rangle\) \\ \(\mathbf{s_{1}}\) & \(\langle f\ c\ b\ f\ a\ e\rangle\) \\ \(\mathbf{s_{2}}\) & \(\langle b\ f\ c\ b\ a\rangle\) \\ \(\mathbf{s_{3}}\) & \(\langle b\ c\ b\ e\ d\rangle\) \\ \(\mathbf{s_{4}}\) & \(\langle f\ a\ c\ e\ b\rangle\) \\ \hline \end{tabular}

In this question, it seems reasonable to consider that \(\mathbf{p}\) occurs in \(\mathbf{s_{1}}\), \(\mathbf{s_{3}}\) (since \(d\) is outside the assumed scope of the negation), and \(\mathbf{s_{4}}\). Participants who do not tick \(\mathbf{s_{4}}\) likely interpret the constraint \(\neg d\) as requiring the occurrence of an element other than \(d\) between \(c\) and \(e\) (which is not consistent with the definitions proposed above). If \(\mathbf{s_{0}}\) is deemed to contain \(\mathbf{p}\), it is likely that the constraint \(\neg d\) is understood as applying only to the position immediately following \(c\), which is not a situation considered in our framework.

## 5 Questions on the semantics of negations

In this section, we present the questions of the third part of the questionnaire and explain the different interpretations revealed by the answers given by the participants. There are three questions.
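Before turning to these questions, the basic containment relations introduced above can be made concrete. The following sketch is purely illustrative (it is not taken from the paper, and the sequences are toy examples rather than the actual survey tables); it implements Definition 2 and, for single negated items, the scope of negation described before Question 2. For simplicity it accepts a sequence as soon as one embedding satisfies the constraints; Question 5 below probes precisely this choice.

```python
def occurs(pattern, sequence):
    """Definition 2: `pattern` and `sequence` are lists of itemsets (sets).
    True iff there is a strictly increasing index tuple e_1 < ... < e_m
    with pattern[i] a subset of sequence[e_i] for every i."""
    def search(i, start):
        if i == len(pattern):
            return True
        return any(pattern[i] <= sequence[j] and search(i + 1, j + 1)
                   for j in range(start, len(sequence)))
    return search(0, 0)


def occurs_with_item_negations(pos, neg, sequence):
    """Containment for a pattern p_1 ¬q_1 p_2 ... p_n where each q_i is a single
    negated item (or None): an embedding of the positive part is accepted only if
    the forbidden item does not appear strictly between the two surrounding
    positive occurrences -- the scope of negation assumed by the framework."""
    def search(i, start, prev):
        if i == len(pos):
            return True
        for j in range(start, len(sequence)):
            if pos[i] <= sequence[j]:
                gap_ok = (i == 0 or neg[i - 1] is None or
                          all(neg[i - 1] not in s for s in sequence[prev + 1:j]))
                if gap_ok and search(i + 1, j + 1, j):
                    return True
        return False
    return search(0, 0, -1)


# Question 1 style check: p = <(ca) d e> occurs in a toy sequence containing (caf).
print(occurs([{"c", "a"}, {"d"}, {"e"}],
             [{"c", "a", "f"}, {"b"}, {"d"}, {"b"}, {"e"}]))          # True

# Question 2 style checks for p = <c ¬d e> on toy sequences.
print(occurs_with_item_negations([{"c"}, {"e"}], ["d"],
                                 [{"c"}, {"d"}, {"e"}]))              # False
print(occurs_with_item_negations([{"c"}, {"e"}], ["d"],
                                 [{"c"}, {"b"}, {"e"}]))              # True
```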
Each question is dedicated to one dimension of the semantics of negative sequential patterns, and together they cover all the dimensions identified in [3].

### Itemset non-inclusion

**Question 3**.: _Let \(\mathbf{p}=\langle d\ \neg(af)\ b\rangle\) be a sequential pattern with negation, indicate the sequences of the table below in which, according to you, \(\mathbf{p}\) occurs._

\begin{tabular}{l l} \hline \(id\) & _Sequence_ \\ \hline \(\mathbf{i_{0}}\) & \(\langle e\ e\ d\ a\ b\ e\rangle\) \\ \(\mathbf{i_{1}}\) & \(\langle d\ (af)b\ c\rangle\) \\ \(\mathbf{i_{2}}\) & \(\langle e\ d\ (fc)\ b\rangle\) \\ \(\mathbf{i_{3}}\) & \(\langle e\ c\ d\ (ec)\ b\rangle\) \\ \(\mathbf{i_{4}}\) & \(\langle d\ (fa)\ b\ e\rangle\) \\ \hline \end{tabular}

This question is designed to unveil the interpretation of the inclusion relation between itemsets. Each sequence in the table contains the positive part of the pattern, \(\mathbf{p}^{+}=\langle d\ b\rangle\), with only one itemset between the occurrences of \(d\) and \(b\). These sequences thus probe the non-inclusion of the \((af)\) itemset in \(a\), \((af)\), \((fc)\), \((ec)\), or \((fa)\). If a participant ticks the sequences \(\mathbf{i_{0}}\), \(\mathbf{i_{2}}\), and \(\mathbf{i_{3}}\), we can deduce that, for them, the negation is violated only when all the elements of the itemset \((af)\) appear together: the presence of just some of them does not invalidate an occurrence. This is referred to as "partial non-inclusion". On the other hand, if only sequence \(\mathbf{i_{3}}\) is ticked, it suggests that the participant considers that the presence of any single item of \((af)\) suffices to violate the negation, i.e., the negation holds only when all its items are absent. This is referred to as "total non-inclusion". Additionally, sequence \(\mathbf{i_{4}}\) is included to examine whether the order of items in the itemset matters to participants and whether their response aligns with their answer to sequence \(\mathbf{p_{4}}\) in Question 1. More formally, this question discriminates between two choices of non-inclusion between two itemsets, \(P\) and \(I\):

* Partial non-inclusion: \(P\not\subseteq_{G}I\Leftrightarrow\exists e\in P,\ e\notin I\)
* Total non-inclusion: \(P\not\subseteq_{D}I\Leftrightarrow\forall e\in P,\ e\notin I\)

Partial non-inclusion means that \(P\setminus I\) is non-empty, while total non-inclusion means that \(P\) and \(I\) are disjoint. In the following, the symbol \(\not\subseteq_{*}\) denotes a relation of non-inclusion between itemsets, either \(\not\subseteq_{G}\) or \(\not\subseteq_{D}\).

### Embedding of a pattern with negation

**Question 4** (Embedding of a pattern with negation).: _Let \(\mathbf{p}=\langle f\ \neg(ea)\ d\rangle\) be a sequential pattern with negation, indicate the sequences from the table below in which, according to you, \(\mathbf{p}\) occurs._

The form of the pattern \(\mathbf{p}=\langle f\ \neg(ea)\ d\rangle\) mirrors that of the previous question, differing by a permutation of letters. Each sequence in the table contains the positive part of \(\mathbf{p}\), i.e. \(\langle f\ d\rangle\). The primary difference is that there are multiple itemsets between the occurrences of \(f\) and \(d\). Participants must decide which itemset(s) of the sequence to compare with the negated itemsets of the pattern. First and foremost, we expect participants to deduce that \(\mathbf{p}\) occurs in \(\mathbf{e_{3}}\) (there is clearly neither \(e\) nor \(a\) here) but that \(\mathbf{p}\) does not occur in \(\mathbf{e_{2}}\) (the itemset \((ea)\) is found in the scope of the negation). The sequence that unveils the participant's semantics is \(\mathbf{e_{1}}\).
Notably, this sequence comprises both elements of the negated itemset (\(e\) and \(a\)), but in two separate itemsets of the sequence. The participant who ticks it (i.e. he/she considers that \(\mathbf{p}\) occurs in \(\mathbf{e_{1}}\)) uses the notion of "soft-embedding": \(e\) and \(a\) would have to appear together in a single itemset for the negation to take effect (as is the case in \(\mathbf{e_{2}}\)). The participant who does not tick it considers that the negation constraint applies across the entire set of itemsets within the negation's scope. This interpretation is termed _strict-embedding_. Furthermore, \(\mathbf{e_{0}}\) unveils the notion of non-inclusion discussed earlier: in the case of partial non-inclusion, \(\mathbf{p}\) occurs in \(\mathbf{e_{0}}\), but not if we consider a total non-inclusion. Thus, this sequence serves to assess the consistency of responses.

Two interpretations have been distinguished: strict- and soft-embeddings. They can be formally defined as follows. Let \(\mathbf{s}=\langle s_{1},\ldots,s_{n}\rangle\) be a sequence and \(\mathbf{p}=\langle p_{1}\ \neg q_{1}\ p_{2}\ \ldots\ p_{m-1}\ \neg q_{m-1}\ p_{m}\rangle\) be a pattern with negation.

We say that \(\mathbf{e}=(e_{i})_{i\in[m]}\in[n]^{m}\) is a soft-embedding of \(\mathbf{p}\) in the sequence \(\mathbf{s}\) iff:

* \(p_{i}\subseteq s_{e_{i}}\) for all \(i\in[m]\)
* \(q_{i}\not\subseteq_{*}s_{j},\ \forall j\in[e_{i}+1,e_{i+1}-1]\) for all \(i\in[m-1]\)

We say that \(\mathbf{e}=(e_{i})_{i\in[m]}\in[n]^{m}\) is a strict-embedding of \(\mathbf{p}\) in the sequence \(\mathbf{s}\) iff:

* \(p_{i}\subseteq s_{e_{i}}\) for all \(i\in[m]\)
* \(q_{i}\not\subseteq_{*}\bigcup_{j\in[e_{i}+1,e_{i+1}-1]}s_{j}\) for all \(i\in[m-1]\)

Intuitively, the soft-embedding considers the non-inclusion of \(q_{i}\) for each of the itemsets within the positional range \([e_{i}+1,e_{i+1}-1]\), while the strict-embedding considers the non-inclusion across the union of the itemsets at those same positions. The interval corresponds to the itemsets of the sequence that lie strictly between the occurrences of the itemsets surrounding \(q_{i}\).

### Multiple occurrences

**Question 5** (Multiple occurrences of a pattern with negation).: _Let \(\mathbf{p}=\langle b\ \neg e\ f\rangle\) be a negative sequential pattern, indicate the sequences of the table below in which, according to you, \(\mathbf{p}\) occurs._

\begin{tabular}{l l} \hline _id_ & _Sequence_ \\ \hline \(\mathbf{o_{0}}\) & \(\langle b\ a\ f\ d\ b\ d\ f\rangle\) \\ \(\mathbf{o_{1}}\) & \(\langle b\ a\ f\ d\ e\ b\ d\ f\rangle\) \\ \(\mathbf{o_{2}}\) & \(\langle d\ b\ e\ c\ a\ d\ f\ b\ d\ e\ f\rangle\) \\ \(\mathbf{o_{3}}\) & \(\langle b\ a\ f\ b\ a\ e\ f\rangle\) \\ \hline \end{tabular}

In this question, each sequence contains multiple occurrences of the positive part of the pattern, \(\mathbf{p}^{+}=\langle b\ f\rangle\). Notably, there are even non-nested occurrences of \(\langle b\ f\rangle\) in each sequence to underscore this. Given that the negation constraint pertains only to the item \(e\), whatever the choices of non-inclusion and embedding interpretations, the question centers on the interpretation of these multiple occurrences. Two alternative interpretations are anticipated:

* The first interpretation considers that once an occurrence of the positive part, \(\langle b\ f\rangle\), fulfills the negation constraint, the sequence contains the pattern. This is termed a "weak occurrence". Ticking sequences \(\boldsymbol{o_{0}}\), \(\boldsymbol{o_{1}}\), and \(\boldsymbol{o_{3}}\) indicates alignment with this interpretation.
* The second interpretation holds that if any occurrence of the positive part fails to satisfy the negation constraint, the sequence does not contain the pattern. This is termed a "strong non-occurrence". In Question 5, participants subscribing to this view solely ticked \(\boldsymbol{o_{0}}\), as all other sequences possess at least one occurrence of \(\langle b\ f\rangle\) with an interstitial \(e\). However, sequence \(\boldsymbol{o_{1}}\) might pose a challenge for those with this interpretation. It contains two minimal occurrences [8] of \(\langle b\ f\rangle\) that meet the negation constraint, alongside an occurrence involving the first \(b\) and the last \(f\) which does not satisfy the negation constraint.2 This subtlety may be difficult to detect for those unfamiliar with sequences. Hence, it is advisable to assess this interpretation solely on the basis of \(\boldsymbol{o_{3}}\) being left unticked. When a participant nevertheless ticks \(\boldsymbol{o_{1}}\), we credit him/her with a specific attention to minimal occurrences.

Footnote 2: In sequence mining, a minimal occurrence [8] is an occurrence of a pattern whose extent within the sequence does not contain another occurrence of the same pattern. For instance, in the sequence \(\langle b\ b\ f\rangle\), the occurrence of \(\langle b\ f\rangle\) that uses the second \(b\) is minimal, but the one that uses the first \(b\) is not.

Finally, the three dimensions of interpretation for negation combine to establish eight distinct semantics, each characterized by its containment relations as studied in [3]. The three questions above were strategically crafted to individually delve into each of the three dimensions underlying the semantics of sequential patterns with negation. Notably, this approach illustrates how the question design facilitates the assignment of a specific semantics to a participant based on their provided responses.

## 6 Analysis of the questionnaire answers

By the conclusion of the survey period, we had amassed 124 fully completed questionnaires. Participants' self-assessed expertise in pattern mining is distributed as follows: 40 novices (level 0), 54 with knowledge of data science (level 1), and 27 who identified themselves as familiar with pattern mining (level 2). In terms of background, 79 participants identified themselves as computer scientists, 82 as researchers, and 23 as logicians. The average number of attempts made to comprehend the notion of pattern occurrence was \(1.27\pm 0.49\), with attempts ranging from 1 to 5. Notably, 102 participants answered correctly on their initial attempt. It is worth noting that among the participants with knowledge of data analysis (out of 24), 6 required more than one attempt to arrive at the correct answer.

The objective of the questionnaire analysis is to identify clusters of individuals who selected the same answers, i.e., who have the same intuitive semantics of sequential patterns with negation. This process unfolds in two stages:

1. Initially, we analyze the results question by question, focusing individually on each dimension of the semantics of negative sequential patterns.
2. The analysis is then complemented by a global analysis of semantics.

In the preceding section, we determined the expected responses for each question. We propose the utilization of formal concept analysis (FCA) to achieve a comprehensive overview of the outcomes. FCA is a data analysis technique that identifies concepts within a dataset.
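As a minimal illustration of this idea, the sketch below (entirely made-up toy answers, written for illustration; the study itself used the GALACTIC tool on the real data) encodes ticked answers as a formal context and enumerates its closed concepts naively:

```python
from itertools import combinations

# Toy formal context: participants (objects) x ticked sequences (attributes).
# The answers are invented solely to illustrate the construction.
context = {
    "participant_1": {"i0", "i2", "i3"},
    "participant_2": {"i3"},
    "participant_3": {"i0", "i2", "i3", "i4"},
    "participant_4": {"i0", "i2", "i3"},
}
attributes = {"i0", "i1", "i2", "i3", "i4"}


def extent(attr_set):
    """All objects possessing every attribute of `attr_set`."""
    return {o for o, atts in context.items() if attr_set <= atts}


def intent(obj_set):
    """All attributes shared by every object of `obj_set`."""
    shared = set(attributes)
    for o in obj_set:
        shared &= context[o]
    return shared


# A formal concept is a pair (extent, intent) closed in both directions;
# closing every subset of attributes reaches all of them (naive but complete).
concepts = set()
for r in range(len(attributes) + 1):
    for subset in combinations(sorted(attributes), r):
        ext = extent(set(subset))
        concepts.add((frozenset(ext), frozenset(intent(ext))))

for ext, itt in sorted(concepts, key=lambda c: -len(c[0])):
    print(sorted(ext), "<->", sorted(itt))
```

Each printed pair corresponds to a node of a concept lattice of the kind shown in the figures below.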
Each concept is defined by its intention, which represents the set of selected answers, and its extension, which enumerates all individuals who selected those answers. These extracted concepts are "closed", meaning that their extension is maximal for their intention, and vice versa. FCA empowers us to succinctly represent the answers in a concept lattice. Through this lattice, we visualize all subgroups of individuals who provided identical answers. FCA has previously found application in questionnaire analysis [1]. For our practical implementation, we employed the GALACTIC tool [4] to construct our lattices.

### Analysis of each dimension of semantics

In this section, we analyze the responses to questions 2 to 5. It should be noted that participants are required to answer Question 1 correctly to proceed with the questionnaire. As a result, the analysis of answers to this question may not be significant. First, we focus on the answers to the question regarding the scope of negations. Subsequently, we delve into the analysis of the three dimensions of the semantics of patterns with negation: the non-inclusion of itemsets, embeddings, and multiple occurrences. Tables 2 to 5 provide a synthetic account of each of the interpretations. Furthermore, Figures 2 to 4 depict the concept lattices obtained for each of these questions to give a more global picture of the responses.

Regarding the scope of the negations, 101 participants provided answers that corresponded with the expected understanding of negation scope (see Table 2). It is interesting to note that 9 people who selected \(\mathbf{s_{1}}\) and \(\mathbf{s_{3}}\) did not select \(\mathbf{s_{4}}\). This discrepancy suggests that, for them, negating an itemset means negating the event itself, rather than negating the presence of the event.3 The remaining marginal differences (14 people) are assumed to be omissions or errors. As their grasp of the scope of negation might differ, these individuals were excluded from further results analysis, ensuring the interpretability of the responses. Therefore, the subsequent analysis is based on 110 completed answers.

Footnote 3: NB: In the following questions, all sequences have at least one “neutral” event where an itemset with negation is expected.

\begin{table} \begin{tabular}{l c c} \hline **Scope** & **Count** & **Percentage** \\ \hline Conform & 101 & 81.4\% \\ Conform except \(\mathbf{s_{4}}\) & 9 & 7.3\% \\ Alternative & 14 & 11.3\% \\ \hline \end{tabular} \end{table} Table 2: Results on the question of the scope of negation.

Regarding the non-inclusion of itemsets (Table 3 and Figure 2), we can observe that the majority of participants (100) selected the response triple \(\mathbf{i_{0}}\), \(\mathbf{i_{2}}\), and \(\mathbf{i_{3}}\), aligning with the interpretation of partial non-inclusion (concept $8 in Figure 2). Only 3 people considered the total non-inclusion interpretation. An interesting observation pertains to the 22 participants who considered that the sequence \(\mathbf{i_{4}}\) contains the pattern. They believe that \((fa)\) is not incompatible with \((af)\). These participants spanned varying levels of expertise: 8, 11, and 3, respectively, for levels 0, 1, and 2. Unsurprisingly, people knowledgeable in pattern mining (level 2) are, in proportion, less represented among people who are inclined to differentiate between \((fa)\) and \((af)\).

\begin{table} \begin{tabular}{l c c} \hline \hline **Interpretation** & **Count** & **Percentage** \\ \hline Partial non-inclusion & 100 & 90.9\% \\ Total non-inclusion & 3 & 2.7\% \\ Other & 7 & 6.4\% \\ \hline \hline \end{tabular} \end{table} Table 3: Responses to the question of non-inclusions (number and percentage).

Figure 2: Concepts extracted from the answers to Question 3: non-inclusion of an itemset. Each concept is illustrated by a box containing different elements: the generators on an orange background (representing possible answers to the questions), and the prototypes on a green background. The size of the extension is indicated with a #. Each concept indicates the intention as a set of ticked sequences (refer to the tables presented in the examples). In the responses to the questions, i0 indicates that the participant ticked the sequence \(\mathbf{i_{0}}\), and ni1 (prefixed with n) indicates that the participant _did not_ tick the sequence \(\mathbf{i_{1}}\).

Moving on to the analysis of the embeddings (Table 4 and Figure 3), the sequence \(\mathbf{e_{1}}\) allows us to distinguish the participants' intuition. For Table 4, we also ensure that the answers are correct for \(\mathbf{e_{2}}\) and \(\mathbf{e_{3}}\); otherwise, we categorize the answer as "other". Once again, we observe a pronounced trend in the results. 97 participants subscribed to the soft-embedding interpretation (Concept $7 in Figure 3). Concept $3 corresponds to individuals who did not select \(\mathbf{e_{1}}\), indicative of a strict-embedding interpretation.

\begin{table} \begin{tabular}{l c c} \hline \hline **Interpretation** & **Count** & **Percentage** \\ \hline Soft-embedding & 97 & 88.2\% \\ Strict-embedding & 7 & 6.3\% \\ Other & 6 & 5.5\% \\ \hline \hline \end{tabular} \end{table} Table 4: Responses to the question of embeddings.

Figure 3: Concepts extracted from responses to Question 4 relating to embeddings (see Figure 2 for legend details).

Lastly, regarding the analysis of multiple occurrences (Table 5 and Figure 4), two balanced groups of participants emerge. 75 participants exclusively identified the three sequences corresponding to the notion of a weak occurrence. They are represented by Concept $3 of Figure 4. On the other hand, 31 participants exclusively selected the sequence \(\mathbf{o_{0}}\) (Concept $1). The latter group preferred the interpretation of a strong occurrence. Among these 31 participants, 15 did not select the \(\mathbf{o_{1}}\) sequence, while 16 did (Concept $2). The latter group tends to align more with the notion of minimal occurrence.

### Global semantics analysis

Questions 3 to 5 assign to each participant an interpretation of each of the three dimensions that constitute the semantics of a pattern with negation, following the framework of Besnard and
These individuals are categorized in the intermediate concepts (prototypes $5, $6, and $10; generator $15; and concepts $3, $9, and $13). One noteworthy observation is that a significant proportion of participants can be attributed a semantic, suggesting that the same individuals likely provided "alternative" answers to different questions. This outcome reinforces the reliability of the collected answers. Furthermore, the figure highlights the main finding of this study: there are two primary intuitively used semantics: * the first is partial non-inclusion, with soft-embedding and strong-occurrences, accounting for 23.9% of participants, and * the second is partial non-inclusion, with soft-embedding and weak-occurrences, accounting for 69.8% The representation of the other semantics is marginal. Additionally, we sought to compare the populations defined by their choice of semantics by analyzing their responses to profile questions. To do this, we conducted a statistical test to compare the distributions of expertise levels using Student's t-test. The results show no significant difference between the groups. In conclusion, we find that the intuition of a semantics is not inherently linked to a particular expertise in computer science or data science. Figure 4: Concepts extracted from responses to Question 5 relating to multiple occurrences (see Figure 2 for legend details). \begin{table} \begin{tabular}{l c c} \hline **Interpretation** & **Count** & **Percentage** \\ \hline Weak occurrence & 75 & 69.2\% \\ Strong occurrence & 31 & 28.2\% \\ Other & 4 & 3.6\% \\ \hline \end{tabular} \end{table} Table 5: Responses to the question of multiple occurrences. ## 7 Preferred semantics vs state-of-the-art algorithms As a preliminary summary, the analyses reveal the absence of a single shared semantic among participants, but rather the presence of two dominant semantics. These results prompt a comparison with the choices made by two prominent algorithms in the field: * eNSP employs total non-inclusion, with soft-embedding and strong-occurrences * NegPSpan employs total non-inclusion, with soft-embedding and weak-occurrences Firstly, neither of the algorithms aligns with the participants' intuitive understanding, as both rely on total non-inclusion of itemsets, whereas partial non-inclusion appears to be the most intuitive. One possible explanation for this algorithmic choice is that partial non-inclusion is anti-monotonic, while the total non-inclusion is monotonic. The latter is less straightforward to exploit algorithmically. Therefore, the most intuitive semantics may not be the most suitable from an algorithmic perspective. In practice, this raises concerns about potential misinterpretation of patterns extracted by these state-of-the-art algorithms. Without explicitly defining their semantics, the results of this study indicate that the patterns will be interpreted differently from the intended interpretation used for their extraction. This poses a significant challenge for the practical use of these algorithms. In light of these findings, several recommendations emerge: 1. **Singleton-only negations**: Consider limiting negations to singletons only. This adjustment would make partial and total non-inclusions equivalent, potentially reducing confusion and aligning better with participants' intuition. 2. **Algorithmic Adaptations**: Develop alternative algorithms tailored to the partial non-inclusion semantics. 
While these adaptations are algorithmically feasible, their computational performance should be rigorously compared to existing algorithms to assess their efficiency and competitiveness. Given that NegPSpan adheres more closely with the intuition of a larger number of participants, consider favoring the extension and utilization of the NegPSpan algorithm. 3. **Distinct Syntaxes**: Promote the adoption of distinct syntaxes for each semantic interpretation. This approach can help differentiate and avoid confusion between different interpretations. This recommendation serves as a practical solution to address the challenges faced by the pattern mining community regarding sequential patterns with negations. Figure 5: Concepts extracted from the attributions made for each dimension. While preferred semantics have been identified through our survey, we recognize that all semantics might have their uses depending on the data context. Resolving this challenge might involve designing algorithms capable of extracting various types of negative sequential patterns. This avenue has been explored in [2] using a declarative pattern mining framework, although scalability to large datasets remains a limitation. ## 8 Discussion In this part, we discuss the methodology employed for conducting the survey. However, it's important to acknowledge several limitations associated with our approach. Firstly, the survey encompassed only a limited number of questions that enabled a precise profiling of participants. Consequently, our understanding of whether the surveyed population accurately represents potential users of pattern mining algorithms remains constrained. Additionally, the questionnaire was primarily disseminated through academic channels, which may introduce bias in the responses. A second limitation of the questionnaire is the lack of redundancy in the questions. Each dimension of the semantics of patterns with negation is addressed by only one question. This approach may be prone to errors. We chose to have a shorter questionnaire without repeating questions in order to prevent from early abandon of the participant and to maximize the number of complete answers. This was effectively the case because 100% of the answers were complete. Furthermore, redundant question might be prone to inconsistent answers that would lead to discard them. Then, we designed the questionnaire to separate the different dimensions as much as possible to avoid ambiguity in the analysis of results. The third limitation pertains to the relatively modest number of collected responses. Acquiring 124 completed questionnaires spanned several months, and an increased number of participants would have necessitated alternative dissemination strategies. Nonetheless, considering the nature of the questions and the results, we deemed this sample size to be sufficient for statistically significant analysis. Notably, the substantial disparities observed in the outcomes substantiate the validity of our findings. The quality of the collected responses is buttressed by two questions: a preliminary eliminatory question and a second question on the scope of negation, which were used to filter out participants who could bias the results. The very low number of such participants indicates that the response set is of good quality, suggesting that participants answered the questions conscientiously. 
Another potential bias of this questionnaire is the presentation of basic notions of sequential patterns, which may have influenced certain responses over others. It is noteworthy that the questions on non-inclusion and embedding exhibited low diversity of responses. We expected a more varied perception of the notion of non-inclusion of itemsets, but this diversity was not reflected in the participant panel. Considering the diversity observed in the responses to the multiple occurrence question, we believe that if there had been significant heterogeneity in the previous questions, it would have emerged in the questionnaire responses.

Among the presentation biases, the use of symbols (rather than letters) in the questionnaire format was reported as interesting by some participants. Using letters assumes an order on the items that does not exist. In practice, we observed that only 22.6% of participants were sensitive to the item order. The use of geometric symbols better captures the idea of a set without order. Unfortunately, we did not collect information on which graphic mode each participant used, so we cannot test this hypothesis.

Lastly, the questionnaire closely aligns with the analysis framework proposed by Besnard and Guyet [3], which makes specific assumptions about the syntax and semantics of patterns with negation. Two crucial assumptions revolve around insensitivity to item order within an itemset and the scope of negation. The latter assumption saw 11.3% of participants responding differently than anticipated. As we excluded these individuals from the analysis, it does not affect the conclusions, but it raises questions about the "intuition" held by these people. Further in-depth interviews could shed light on this matter. A third hypothesis pertains to the syntax of patterns with negation. A more comprehensive study could explore more extensive syntaxes, such as allowing consecutive negations or negations at the beginning or end of a pattern. While such possibilities are inherent in some state-of-the-art pattern extraction algorithms, they were not explored in this study.

## 9 Conclusion

This paper delves into the semantics of sequential patterns with negation from the perspective of potential users of algorithms that extract such patterns. Prior research has highlighted the inherent ambiguity in the notations employed for these patterns [3]. Our primary objective was to determine whether the patterns extracted by state-of-the-art algorithms could potentially lead to misinterpretation by users. To address this question, we conducted a survey targeting potential users with diverse profiles. The goal of the survey was to understand which of the identified semantics were preferred by or intuitive for the users.

Analysis of the questionnaire responses, which involved 124 participants, revealed that two semantics dominate within the panel. A first important result is that there is no universally shared intuitive semantics among the participants. The second significant outcome underscores the discrepancy between users' intuitive semantics and the semantics used in state-of-the-art pattern extraction algorithms with negation, such as eNSP and NegPSpan. As partial non-inclusion arises only when a negation involves sets of items (e.g., \(\neg(ab)\)), patterns incorporating this form of constraint warrant special attention to ensure optimal user comprehension.
Furthermore, the substantial majority preference (approximately 69% of participants) is for weak occurrences, aligning with the choice made by the NegPSpan algorithm. This semantics also exhibits antimonotonicity properties when negations are restricted to singletons. Based on these findings, we offer the following recommendations for sequential pattern extraction methods with negation:

* Limit the use of itemset negation and prioritize item negation instead,
* Alternatively, explore the extension of the NegPSpan algorithm, as its inclusion relation semantics aligns with the majority intuition,
* Promote the use of specific syntaxes for each semantics in order to avoid confusion.
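To make these recommendations concrete, the following sketch (illustrative code written for this summary, with helper names of our own choosing; it is not part of the original paper) spells out the containment test corresponding to the two dominant intuitive semantics identified by the survey, namely partial non-inclusion with soft-embedding, in its weak- and strong-occurrence variants:

```python
def positive_embeddings(pos, sequence):
    """All index tuples e_1 < ... < e_n with pos[i] a subset of sequence[e_i]."""
    def search(i, start, acc):
        if i == len(pos):
            yield tuple(acc)
            return
        for j in range(start, len(sequence)):
            if pos[i] <= sequence[j]:
                yield from search(i + 1, j + 1, acc + [j])
    yield from search(0, 0, [])


def negation_satisfied(neg, emb, sequence):
    """Partial non-inclusion + soft-embedding: no single itemset of a gap may
    contain the whole negated itemset neg[i] (an empty set means no constraint)."""
    for i, q in enumerate(neg):
        gap = sequence[emb[i] + 1: emb[i + 1]]
        if q and any(q <= s for s in gap):
            return False
    return True


def contains(pos, neg, sequence, occurrence="weak"):
    embeddings = list(positive_embeddings(pos, sequence))
    checks = [negation_satisfied(neg, e, sequence) for e in embeddings]
    if occurrence == "weak":                  # one valid embedding is enough
        return any(checks)
    return bool(checks) and all(checks)       # "strong": no embedding may violate


# Question 5 example: p = <b ¬e f> on o3 = <b a f b a e f>
pos, neg = [{"b"}, {"f"}], [{"e"}]
o3 = [{"b"}, {"a"}, {"f"}, {"b"}, {"a"}, {"e"}, {"f"}]
print(contains(pos, neg, o3, "weak"))    # True: the first <b f> avoids e
print(contains(pos, neg, o3, "strong"))  # False: the spanning <b f> contains e
```

Replacing the per-itemset test by a test against the union of each gap, or requiring every item of the negated set to be absent, gives the strict-embedding and total non-inclusion variants; together these choices generate the eight semantics discussed in the paper.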
Sequential patterns with negation, or negative sequential patterns, are expressed as sequential patterns in which a negation symbol can be inserted in front of some of the pattern itemsets. Intuitively, such a pattern occurs when the negated itemsets are absent from the sequence. Recent work has associated different semantics with this form of pattern and has shown that state-of-the-art algorithms do not extract the same sets of patterns. This raises an important question about the interpretability of sequential patterns with negation. This study focuses on investigating how potential users perceive negation in sequential patterns. Our goal is to determine whether certain semantics are more "intuitive" than others, and whether these semantics match the ones used by state-of-the-art algorithms. To this end, we designed a questionnaire on users' interpretation of such patterns. In this paper, we present the design of the questionnaire and 1
2309.09077
On the Balmer spectrum of the Morel-Voevodsky category
We introduce the Morava-isotropic stable homotopy category and, more generally, the stable homotopy category of an extension $E/k$. These "local" versions of the Morel-Voevodsky stable ${\Bbb{A}}^1$-homotopy category $SH(k)$ are analogues of local motivic categories introduced in [22], but with a substantially more general notion of "isotropy". This permits to construct the, so-called, isotropic Morava points of the Balmer spectrum $\operatorname{Spc}(SH^c(k))$ of (the compact part of) the Morel-Voevodsky category. These analogues of topological Morava points are parametrized by the choice of Morava K-theory and a $K(p,m)$-equivalence class of extensions $E/k$. This provides a large supply of new points, and substantially improves our understanding of the spectrum. An interesting new feature is that the specialization among isotropic points behaves differently than in topology.
Peng Du, Alexander Vishik
2023-09-16T18:56:29
http://arxiv.org/abs/2309.09077v3
# On the Balmer spectrum of the Morel-Voevodsky category

###### Abstract

We introduce the _Morava-isotropic stable homotopy category_ and, more generally, the _stable homotopy category of an extension_ \(E/k\). These "local" versions of the Morel-Voevodsky stable \(\mathbb{A}^{1}\)-homotopy category \(\mathrm{SH}(k)\) are analogues of local motivic categories introduced in [22], but with a substantially more general notion of "isotropy". This permits to construct the, so-called, _isotropic Morava points_ of the Balmer spectrum \(\mathrm{Spc}(\mathrm{SH}^{c}(k))\) of (the compact part of) the Morel-Voevodsky category. These analogues of topological Morava points are parametrized by the choice of Morava K-theory and a \(K(p,m)\)-equivalence class of extensions \(E/k\). This provides a large supply of new points, and substantially improves our understanding of the spectrum. An interesting new feature is that the specialization among isotropic points behaves differently than in topology.

## 1 Introduction

The spectrum of a commutative ring \(A\) permits one to look at the category of \(A\)-modules geometrically. The Balmer spectrum \(\mathrm{Spc}(\mathcal{D})\) - [4] plays a similar role for a \(\otimes\)-triangulated category \(\mathcal{D}\). This is a ringed topological space whose points are _prime_ \(\otimes\) - \(\triangle\)-ed ideals of \(\mathcal{D}\). One of the most important and useful \(\otimes\)-triangulated categories is the stable homotopy category \(\mathrm{SH}\) in topology. The Balmer spectrum (of the compact part) of it was essentially computed by the famous _Nilpotence Theorem_ of Devinatz-Hopkins-Smith [7], though in a different language; see [5, Corollary 9.5] for the modern description. It consists of a generic point \(\mathfrak{a}_{0}\) given by the prime ideal of torsion compact objects of \(\mathrm{SH}\) (in other words, by objects killed by \(\wedge\,\mathrm{H}\,\mathbb{Q}\)), which specializes into the chains of ordered points parametrized by prime numbers \(p\). Points on the \(p\)-chain are numbered by natural numbers \(1\leqslant m\leqslant\infty\) (and infinity), and the \(\mathfrak{a}_{p,n}\)-point is a specialization of the \(\mathfrak{a}_{p,m}\)-one if and only if \(n\geqslant m\). The prime ideal \(\mathfrak{a}_{p,m}\) consists of objects annihilated by \(\wedge K(p,m)\), where \(K(p,m)\) is the Morava K-theory. In particular, \(\mathfrak{a}_{p,\infty}\)-points are closed and the respective ideal consists of objects annihilated by \(\wedge\,\mathrm{H}\,\mathbb{F}_{p}\). The Balmer spectrum \(\mathrm{Spc}(D(Ab)^{c})\) of (the compact part of) the "topological motivic category" \(D(Ab)\) is isomorphic to \(\mathrm{Spec}(\mathbb{Z})\); it maps injectively into \(\mathrm{Spc}(\mathrm{SH}^{c})\) via the (singular complex) "motivic" functor \(M:\mathrm{SH}^{c}\to D(Ab)^{c}\), covering the generic \(\mathfrak{a}_{0}\) and closed \(\mathfrak{a}_{p,\infty}\) points. So, in a sense, the "genuinely homotopic" life is happening in between these two extremes - over the Morava points \(\mathfrak{a}_{p,m}\), where \(m\) is finite.

In the algebro-geometric context, the analogue of \(\mathrm{SH}\) is the _stable \(\mathbb{A}^{1}\)-homotopy category_ \(\mathrm{SH}(k)\) of Morel-Voevodsky - [14]. In the current paper we study the Balmer spectrum of (the compact part of) it. The category \(\mathrm{SH}(k)\) is substantially more complicated than its topological counterpart. This is already apparent from the comparison of the "atomic objects" (points) in topology and algebraic geometry.
While in topology there is only one kind of point, in algebraic geometry there are many types of them, corresponding to various extensions \(E/k\). Below we will see that this is reflected in the structure of the Balmer spectra of the respective categories. Balmer has shown that, in a rather general situation, the spectrum of a \(\otimes\) - \(\triangle\)-ed category surjects to the Zariski spectrum of the endomorphism ring of the unit object. In particular, \(\mathrm{Spc}(\mathrm{SH}^{c}(k))\) surjects to \(\mathrm{Spec}(\mathrm{GW}(k))\) - [5, Corollary 10.1], where \(GW(k)\) is the _Grothendieck-Witt ring_ of quadratic forms (which coincides with \(\mathrm{End}_{\mathrm{SH}(k)}(\mathbb{1})\) by the result of Morel - [13]). This was refined by Heller-Ormsby [8], who showed that \(\operatorname{Spc}(\operatorname{SH}^{c}(k))\) surjects also to the spectrum of the _Milnor-Witt K-theory_ of \(k\) (the latter spectrum is bigger than that of \(GW(k)\)). There is also the topological realisation functor (we assume everywhere that the characteristic of our field is zero) \(\operatorname{SH}(k)\to\operatorname{SH}\), which gives a copy of \(\operatorname{Spc}(\operatorname{SH})\) inside our Balmer spectrum. But this is far from everything. Our results, in particular, show that the discrepancy between the two spectra is dramatic, even in terms of their cardinality.

Our approach to \(\operatorname{SH}(k)\) is based on the idea of _isotropic realisations_. Such realisations and, more generally, the _local versions_ of the category corresponding to field extensions \(E/k\), in the case of the Voevodsky category of motives \(\operatorname{DM}(k)\), were constructed in [21] and [22]. In the motivic case, one needs to choose a prime number \(p\) and then annihilate the motives of \(p\)-anisotropic varieties over \(k\), that is, varieties which have no (closed) points of degree prime to \(p\) (the idea of such localisation is due to Bachmann - [2]). We get the _isotropic motivic category_ \(\operatorname{DM}(k/k;\mathbb{F}_{p})\) together with the natural functor \(\operatorname{DM}(k)\to\operatorname{DM}(k/k;\mathbb{F}_{p})\). One may combine this with the extension of fields and get functors with values in \(\operatorname{DM}(E/E;\mathbb{F}_{p})\), for every extension \(E/k\). For a general field \(E\), the isotropic motivic category \(\operatorname{DM}(E/E;\mathbb{F}_{p})\) will not be particularly simple (for example, for an algebraically closed field, it will coincide with the global category \(\operatorname{DM}(E)\)). But there is an important class of _flexible fields_ - [22, Definition 1.1] for which this category is handy. These are purely transcendental extensions of infinite transcendence degree of some other fields. Any field \(E\) can be embedded into its _flexible closure_ \(\widetilde{E}:=E(\mathbb{P}^{\infty})\), and so, for any extension \(E/k\), we get the _isotropic realisation_ functor \(\psi_{p,E}:\operatorname{DM}(k)\to\operatorname{DM}(\widetilde{E}/\widetilde{E};\mathbb{F}_{p})\) with values in a much simpler category. It was shown in [24, Theorem 5.13] that the kernels \(\mathfrak{a}_{p,E}\) of these functors (on the compact part) are prime \(\otimes\) - \(\triangle\)-ed ideals and so, provide _isotropic points_ of the Balmer spectrum \(\operatorname{Spc}(\operatorname{DM}(k)^{c})\).
Two such points \(\mathfrak{a}_{p,E}\) and \(\mathfrak{a}_{q,F}\) coincide if and only if \(p=q\) and extensions \(E/k\) and \(F/k\) are \(p\)_-equivalent_ - [22, Definition 2.2], and there is a huge number of such \(p\)-equivalence classes of extensions, in general - see [24, Example 5.14]. In the case of \(\operatorname{SH}(k)\), we need to use a more general version of _isotropy_. In [24, Definition 2.1], the notion of \(A\)-anisotropic varieties for any oriented cohomology theory \(A^{*}\) was introduced - see Definition 2.1. In the case of the Chow groups \(\operatorname{CH}^{*}/p\) modulo \(p\) it coincides with the \(p\)-anisotropy considered above. The natural theories to consider are the Morava K-theories \(K(p,m)^{*}\) and related theories \(P(m)^{*}\). These _small_ algebro-geometric analogues of the respective topological theories are _free theories_ in the sense of Levine-Morel [11] on the category \(\operatorname{\mathbf{Sm}}_{k}\) of smooth varieties, obtained from the \(BP^{*}\)-theory by change of coefficients: \(K(p,m)^{*}(X):=BP^{*}(X)\otimes_{BP}K(p,m)\) and \(P(m)^{*}(X):=BP^{*}(X)\otimes_{BP}P(m)\), where \(BP=\mathbb{Z}_{(p)}[v_{1},v_{2},\ldots]\) with generators \(v_{r}\) of dimension \(p^{r}-1\), \(K(p,m)=\mathbb{F}_{p}[v_{m},v_{m}^{-1}]\) (with the map sending other generators to zero) and \(P(m)=BP/I(m)\), where \(I(m)=(v_{0},v_{1},\ldots,v_{m-1})\) is the _invariant ideal_ of Landweber [9]. We construct the \(K(p,m)\)_-isotropic_ version \(\operatorname{SH}_{(p,m)}(k/k)\) of the Morel-Voevodsky category, for any choice of the Morava K-theory \(K(p,m)\). To start with, (similar to the motivic case) we annihilate the \(\Sigma_{\mathrm{p}^{1}}^{\infty}\)-spectra of all \(K(p,m)\)-anisotropic varieties over \(k\). And then, in the resulting category \(\widehat{\operatorname{SH}}_{(p,m)}(k/k)\), we annihilate all compact objects on whose \(MGL\)-motives \(v_{m}\) is nilpotent - see Definition 3.1. Alternatively, \(\operatorname{SH}_{(p,m)}(k/k)\) is the Verdier localisation of \(\operatorname{SH}(k)\) by the category generated by the compact objects whose \(MGL\)-motives belong to the localising \(\otimes\)-ideal generated by \(MGL\)-motives of \(K(p,m)\)-anisotropic varieties and \(\mathbb{1}^{MGL}/v_{m}\) (where we assume \(v_{\infty}=1\)). In the case \(m=\infty\) and non-formally real fields, the category \(\operatorname{SH}_{(p,\infty)}(k/k)\) is obtained in one step as in the motivic situation and is closely related to \(\operatorname{DM}(k/k;\mathbb{F}_{p})\). It coincides with the category considered by Tanania in [18] and [19]. We have a natural projection from "global" to isotropic category from which, as above, we obtain the _isotropic realisations_ \[\psi_{(p,m),E}:\operatorname{SH}(k)\to\operatorname{SH}_{(p,m)}(\widetilde{E}/ \widetilde{E}),\] for all extensions \(E/k\). These realisations take values in isotropic motivic categories over flexible fields. Below we will show that the zero ideal of such a category is prime. Moreover, it is expected that the Balmer spectrum of it is a singleton. One may introduce a certain \(K(p,m)\)-equivalence relation \(\stackrel{{(p,m)}}{{\sim}}\) on the set of extensions - see Section 3, and, as appears, it is sufficient to use one representative \(E\) from each equivalence class. Denote as \(\mathfrak{a}_{(p,m),E}\) the kernel of \(\psi_{(p,m),E}\) restricted to the compact part \(\mathrm{SH}^{c}(k)\). It is a \(\otimes\,\)-\(\,\triangle\)-ed ideal. Our main result is Theorem 7.6: **Theorem 1.1**: 1. 
\(\mathfrak{a}_{(p,m),E}\) _are prime and so, points of the Balmer spectrum_ \(\mathrm{Spc}(\mathrm{SH}^{c}(k))\)_._
2. \(\mathfrak{a}_{(p,m),E}=\mathfrak{a}_{(q,n),F}\) _if and only if_ \(p=q\)_,_ \(m=n\) _and_ \(E/k\stackrel{{(p,m)}}{{\sim}}F/k\)_._
3. \(\mathfrak{a}_{(p,\infty),E}\) _is the image of the point_ \(\mathfrak{a}_{p,E}\) _of_ _[_24_, Theorem 5.13]_ _under the natural map of spectra_ \(\mathrm{Spc}(\mathrm{DM}^{c}(k))\to\mathrm{Spc}(\mathrm{SH}^{c}(k))\) _induced by the motivic functor_ \(M:\mathrm{SH}^{c}(k)\to\mathrm{DM}^{c}(k)\)_._

We obtain a large supply of _isotropic points_ of the Balmer spectrum \(\mathrm{Spc}(\mathrm{SH}^{c}(k))\). Note that, for a general field, the number of \((p,m)\)-equivalence classes of extensions is large. For example, over the field of real numbers \(\mathbb{R}\), for every topological Morava point \(\mathfrak{a}_{2,m}\), \(1\leqslant m\leqslant\infty\), we have \(2^{2^{\aleph_{0}}}\) isotropic points \(\mathfrak{a}_{(2,m),E}\) - see Example 8.9. In particular, the cardinality of \(\mathrm{Spc}(\mathrm{SH}^{c}(\mathbb{R}))\) is equal to the cardinality of the set of all subsets of this category.

To show that the zero ideal of the Morava-isotropic category \(\mathrm{SH}^{c}_{(p,m)}(k/k)\), over a flexible field \(k\), is prime, we study the category \(\widehat{\mathrm{SH}}^{c}_{(p,m)}(k/k)\) which has a natural projection \(\widehat{M}^{MGL}\) to the \(MGL\)-motivic version \(\widehat{\mathrm{MGL}}^{c}_{(p,m)}\,\)-_mod_ - see (2). The latter category has a natural _weight structure_ in the sense of Bondarko [6] whose heart can be identified with the category of _isotropic \(P(m)[\overline{x}]^{*}\)-Chow motives_ (the same generators \(\overline{x}\) as in \(\Omega^{*}_{(p)}=BP^{*}[\overline{x}]\)). Here comes the crucial step. It was shown in [24, Theorem 5.1] that the category of _isotropic \(P(m)^{*}\)-Chow motives_ (over a flexible field) is equivalent to the category of _numerical \(P(m)^{*}\)-Chow motives_. This identifies our heart with the numerical category \(Chow_{Num}^{P(m)[\overline{x}]}(k)\) - see Proposition 4.3. The weight filtration assigns to any compact object \(Y\) of \(\widehat{\mathrm{SH}}^{c}_{(p,m)}(k/k)\) some choice \(t(Y)\) of its _weight complex_ - an object of \(K^{b}(Chow_{Num}^{P(m)[\overline{x}]}(k))\). The radicals of the \(\mathbb{L}\)-annihilators of \(Y\) and \(t(Y)\) coincide, and if \(Y\) is the \(\widehat{MGL}\)-motive of some \(U\in\widehat{\mathrm{SH}}^{c}_{(p,m)}(k/k)\), then the action of the Landweber-Novikov operations ensures that this radical is the invariant ideal \(I(p,r)\) of Landweber, for some \(r\geqslant m\) - see Proposition 5.3. The \(P(m)^{*}\)-theory naturally projects to the Morava K-theory \(K(p,m)^{*}\), which induces the functor \(K^{b}(Chow_{Num}^{P(m)[\overline{x}]}(k))\to K^{b}(Chow_{Num}^{K(p,m)}(k))\). But the category of numerical Morava-motives is semi-simple. As a result, we obtain a well-defined functor \(t_{K(p,m)}:\widehat{\mathrm{MGL}}^{c}_{(p,m)}\,\)-_mod_ \(\to K^{b}(Chow_{Num}^{K(p,m)}(k))\). Moreover, we prove that \(\widehat{M}^{MGL}(U)\), for \(U\in\widehat{\mathrm{SH}}^{c}_{(p,m)}(k/k)\), belongs to the kernel of it if and only if it is annihilated by some power of \(v_{m}\) - see Proposition 7.4. It follows from the results of Sosnilo [17] and Aoki [1] that \(t_{K(p,m)}\) is a \(\otimes\,\)-\(\,\triangle\)-ed functor.
Finally, the zero ideal in \(K^{b}(Chow_{Num}^{K(p,m)}(k))\) is prime, since the respective \(K(p,m)\)-Chow category is semi-simple and the ring \(K(p,m)\) is an integral domain. This proves part (1) of the above Theorem. To prove part (2), in analogy with [22, Definition 2.3], we introduce the _local_ versions \(\mathrm{SH}^{c}_{(p,m)}(E/k)\) of the Morel-Voevodsky category, parametrized by finitely generated extensions \(E/k\) (and the choice of Morava K-theory) - see Definition 8.1 and then prove that the local-to-isotropic functor is conservative on the image of \(\mathrm{SH}^{c}(k)\), if \(k\) is flexible - Proposition 8.3. This identifies the \(\stackrel{{(p,m)}}{{\sim}}\)-equivalence class of the point \(\mathfrak{a}_{(p,m),E}\) - Proposition 8.5. The prime number \(p\) is its' characteristic. Finally, as in topology, the number \(m\) is determined with the help of certain "test spaces" \(X_{p,n}\) of \(p\)_-type_ exactly \(n\). Here we use the construction of Mitchell [12] adapted to the algebro-geometric situation - [25]. The equivalence relation \(\stackrel{{(p,m)}}{{\sim}}\) is _coarser_ than \(\stackrel{{(p,n)}}{{\sim}}\), for \(m\leqslant n\), so we get the natural surjection \[\mathcal{P}_{(p,m)}\twoheadleftarrow\mathcal{P}_{(p,n)}\] between the respective sets of Morava-isotropic points. Nevertheless, the expected analogue of the topological specialisation \(\mathfrak{q}_{p,m}\succ\mathfrak{q}_{p,n}\) doesn't hold for isotropic points. The obstruction is given, for example, by the _norm-varieties_ of Rost - see Remark 8.7. The article is organized as follows. In Section 2, we recall the notion of a _small_ oriented cohomology theory, introduce (generalized) _isotropic equivalence_, _flexible fields_ and cite the results from [24] we need. In Section 3, we define _Morava-isotropic stable homotopy categories_ and _isotropic realisations_. In Section 4 we introduce the needed _weight structure_, while Section 5 is devoted to the Landweber-Novikov operations. In Section 6, we make a closer look at _numerical Morava-Chow motivic category_ and introduce _Morava weight cohomology_. In Section 7, we formulate our Main Theorem and prove parts (1) and (3) of it. Finally, in Section 8 we introduce the _local stable homotopy categories_ and prove part (2) of the Main Theorem. **Acknowledgements:** We are very grateful to Tom Bachmann, Paul Balmer, Martin Gallauer, John Greenlees, Daniel Isaksen and Beren Sanders for very helpful discussions and stimulating questions. The support of the EPSRC standard grant EP/T012625/1 is gratefully acknowledged. ## 2 Oriented cohomology theories and isotropic equivalence Everywhere in this article the ground field \(k\) will be any field of characteristic zero. Let \(A^{*}:\mathbf{Sm}_{k}\to Rings\) be an oriented cohomology theory with localisation (on the category of smooth varieties over \(k\)) in the sense of [20, Definition 2.1] (which is the standard axioms of Levine-Morel [11, Definition 1.1.2] plus the excision axiom \((EXCI)\)). We will also call such theories _small_ in contrast to _large_ theories represented by spectra in \(\mathrm{SH}(k)\). The _algebraic cobordism_\(\Omega^{*}\) of Levine-Morel [11] is the universal oriented cohomology theory. That is, for any oriented theory \(A^{*}\), there is a unique morphism of theories (a map respecting pull-backs and push-forwards) \(\theta_{A}:\Omega^{*}\to A^{*}\) - see [11, Theorem 1.2.6]. 
By the result of Levine [10], the theory \(\Omega^{*}\) coincides with the _pure part_ \(MGL^{2*,*}\) of the \(MGL\)-theory of Voevodsky. Among oriented cohomology theories, there are _free theories_ in the sense of Levine-Morel. Such theories \(A^{*}\) are obtained from algebraic cobordism by change of coefficients: \(A^{*}=\Omega^{*}\otimes_{\mathbb{L}}A\) and are in 1-to-1 correspondence with the _formal group laws_ \(\mathbb{L}\to A\). These are exactly the _theories of rational type_ in the sense of [20]. Recall from [24, Definition 2.1] that for any (small) oriented cohomology theory \(A^{*}\), one can introduce the notion of \(A\)-anisotropic varieties.

**Definition 2.1**: _Let \(X\stackrel{{\pi}}{{\to}}\mathrm{Spec}(k)\) be a smooth projective variety over \(k\). Then \(X\) is called \(A\)-anisotropic, if the push-forward map \(\pi_{*}:A_{*}(X)\to A_{*}(\mathrm{Spec}(k))\) is zero._

Such (non-empty) varieties exist only if \(A^{*}\) is \(n\)-torsion, for some \(n\in\mathbb{N}\) - see [24, Remark 2.4(1)]. In the case \(A^{*}=\mathrm{CH}^{*}/n\) of Chow groups modulo \(n\), we get the usual notion of \(n\)-anisotropy. If \(X\) is a smooth projective variety and \(x\in A^{*}(X)\), we say that \(x\) is \(A\)_-anisotropic_, if it is in the image of the push-forward map from some \(A\)-anisotropic variety. The \(A^{*}\)-anisotropic classes form an ideal stable under \(A^{*}\)-correspondences - see [24, Proposition 2.6]. Following [24, Definition 2.7] we can define:

**Definition 2.2**: _For \(X/k\) - smooth projective, define:_

\[A^{*}_{iso}(X):=A^{*}(X)/(A-\text{anisotropic classes}).\]

Applying the construction from [22, Example 4.1], this extends to the oriented cohomology theory \(A^{*}_{iso}\) on \({\bf Sm}_{k}\) (in the sense of [20, Definition 2.1]) - the _isotropic version_ of the theory \(A^{*}\). By the projection formula, any _anisotropic_ class is _numerically trivial_ in the sense of the pairing

\[\langle\,,\rangle:A^{*}(X)\times A^{*}(X)\to A\]

given by \(\langle x,y\rangle:=\pi_{*}(x\cdot y)\in A=A^{*}({\rm Spec}(k))\). Thus, the isotropic version of the theory surjects to the numerical one: \(A^{*}_{iso}\twoheadrightarrow A^{*}_{Num}\). More generally, if \(A^{*}\) and \(B^{*}\) are two oriented cohomology theories, we may consider the \(A\)_-isotropic version_ \(B^{*}_{A\mbox{-}\,iso}\) of the theory \(B^{*}\), defined for smooth projective varieties as:

\[B^{*}_{A\mbox{-}\,iso}(X):=B^{*}(X)/(A-\mbox{anisotropic classes}).\]

The respective category of Chow motives will be denoted \(Chow^{B}_{A\mbox{-}\,iso}(k)\). As in topology, the \(p\)-localisation of algebraic cobordism splits as a polynomial algebra over the \(BP^{*}\)-theory: \(\Omega^{*}_{\mathbb{Z}_{(p)}}=BP^{*}[x_{i}\,|\,i\neq p^{r}-1]=BP^{*}[\overline{x}]\), where the coefficient ring of the \(BP^{*}\)-theory is \(BP=\mathbb{Z}_{(p)}[v_{1},v_{2},\ldots]\) with \(\dim(v_{i})=p^{i}-1\). On \(\Omega^{*}\) and \(BP^{*}\) there is the action of the Landweber-Novikov operations and, by the results of Landweber [9], the ideals \(I(m):=(p,v_{1},\ldots,v_{m-1})\subset BP\), for any \(1\leqslant m\leqslant\infty\), are invariant under this action (actually, together with \(I(0):=(0)\), these are the only prime invariant ideals).
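For orientation, it may help to spell out (this merely restates the coefficient rings recalled in the introduction together with the definition of \(I(m)\) above) what the quotients by the invariant ideals look like:

\[BP/I(m)\;=\;\mathbb{Z}_{(p)}[v_{1},v_{2},\ldots]/(p,v_{1},\ldots,v_{m-1})\;\cong\;\mathbb{F}_{p}[v_{m},v_{m+1},\ldots]\;=\;P(m),\]

and further killing the \(v_{j}\) with \(j>m\) and inverting \(v_{m}\) yields \(K(p,m)=\mathbb{F}_{p}[v_{m},v_{m}^{-1}]\). The prime invariant ideals thus form the chain

\[I(0)=(0)\;\subset\;I(1)=(p)\;\subset\;I(2)=(p,v_{1})\;\subset\;\cdots\;\subset\;I(\infty)=(p,v_{1},v_{2},\ldots).\]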
For any prime \(p\) and any \(1\leqslant m\leqslant\infty\), we can introduce the free theories \(K(p,m)^{*}:=BP^{*}\otimes_{BP}K(p,m)\), where \(K(p,m)=\mathbb{F}_{p}[v_{m},v_{m}^{-1}]\) (the other variables are mapped to zero), for \(m<\infty\), and \(K(p,m)=\mathbb{F}_{p}\) (with the augmentation map), for \(m=\infty\). These are (small) _Morava K-theories_. We will also use the related free theories \(P(m)^{*}:=BP^{*}\otimes_{BP}P(m)\), where \(P(m)=BP/I(m)\). Note that \(K(p,\infty)^{*}=P(\infty)^{*}={\rm Ch}^{*}={\rm CH}^{*}/p\) is just Chow groups modulo \(p\). The introduced _small_ theories \(BP^{*}\), \(K(p,m)^{*}\), \(P(m)^{*}\) are _pure parts_ of the respective _large_ theories. The isotropy of varieties with respect to these theories may be determined from the invariant \(I(X):={\rm Im}(\pi_{*}:BP_{*}(X)\to BP)\). Being an invariant (and finitely-generated) ideal of \(BP\), its radical is either a prime invariant ideal \(I(n)\), for some \(0\leqslant n<\infty\), or the whole \(BP\). In [24] it was shown that:

**Proposition 2.3**: ([24, Proposition 4.15]) _The following conditions are equivalent:_

1. \(X\) _is_ \(P(m)\)_-anisotropic;_
2. \(X\) _is_ \(K(p,m)\)_-anisotropic;_
3. \(\sqrt{I(X)}=I(n)\)_, for some_ \(1\leqslant n\leqslant m\)_._

In our considerations, an important role will be played by a special type of fields called _flexible_ - [22, Definition 1.1]. These are purely transcendental extensions of infinite transcendence degree of some other fields. Any field \(k\) can be embedded into its _flexible closure_ \(\widetilde{k}=k(\mathbb{P}^{\infty})\). It appears that for the \(K(p,m)^{*}\)- and \(P(m)^{*}\)-theories over a flexible field, the isotropic versions coincide with the numerical ones. This fundamental fact, established in [24], plays the crucial role in our constructions below.

**Theorem 2.4**: ([24, Theorem 1.4]) _Let \(k\) be flexible. Then, for \(1\leqslant m\leqslant\infty\),_

\[P(m)^{*}_{iso}=P(m)^{*}_{Num}\quad\mbox{ and }\quad K(p,m)^{*}_{iso}=K(p,m)^{*}_{Num}.\]

## 3 Isotropic stable homotopy categories

Let \(k\) be any field of characteristic zero and \(\operatorname{SH}(k)\) be the stable \(\mathbb{A}^{1}\)-homotopy category of Morel-Voevodsky [14]. Let \(p\) be a prime, \(1\leqslant m\leqslant\infty\) be a natural number (or infinity), and \(K(p,m)^{*}\) be the (small) Morava K-theory introduced above. Let \(Q_{(p,m)}\) be the disjoint union of all \(K(p,m)\)-anisotropic smooth projective varieties over \(k\) (up to isomorphism). Let \(\tilde{C}(Q_{(p,m)})\) be the respective Cech simplicial scheme and \(\mathfrak{X}_{Q_{(p,m)}}\) be the \(\Sigma^{\infty}_{\mathbb{P}^{1}}\)-spectrum of it. It is a \(\wedge\)-projector. Let us denote as \(\Upsilon_{(p,m)}\) the complementary projector \(\operatorname{Cone}(\mathfrak{X}_{Q_{(p,m)}}\to\mathbb{1})\). Denote as \(\widehat{\operatorname{SH}}_{(p,m)}(k/k)\) the category \(\Upsilon_{(p,m)}\wedge\operatorname{SH}(k)\). It is naturally a full subcategory of \(\operatorname{SH}(k)\) equivalent to the Verdier localisation of \(\operatorname{SH}(k)\) by the localising subcategory generated by \(K(p,m)\)-anisotropic varieties. The natural projection \(pr:\operatorname{SH}(k)\to\widehat{\operatorname{SH}}_{(p,m)}(k/k)\) is left adjoint to the embedding, which respects direct sums. Thus, \(pr\) maps compact objects to compact ones. We have an adjoint pair: (1) where \(M^{MGL}\) is a \(\otimes\)-functor and \(B\) satisfies the projection formula.
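Concretely (a restatement of the definitions just given, in the form used in the arguments below), the two projectors fit, for every \(Y\in\operatorname{SH}(k)\), into a distinguished triangle

\[\mathfrak{X}_{Q_{(p,m)}}\wedge Y\longrightarrow Y\longrightarrow\Upsilon_{(p,m)}\wedge Y,\]

and \(\widehat{\operatorname{SH}}_{(p,m)}(k/k)=\Upsilon_{(p,m)}\wedge\operatorname{SH}(k)\) consists of the right-hand pieces.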
If we denote \(\Upsilon^{MGL}_{(p,m)}:=M^{MGL}(\Upsilon_{(p,m)})\) and \(\widehat{\operatorname{MGL}}_{(p,m)}\)-\(mod:=\Upsilon^{MGL}_{(p,m)}\otimes\operatorname{MGL}\)-\(mod\), then the pair (1) descends to the adjoint pair
\[\widehat{M}^{MGL}:\widehat{\operatorname{SH}}_{(p,m)}(k/k)\ \rightleftarrows\ \widehat{\operatorname{MGL}}_{(p,m)}\text{-}mod.\qquad\qquad(2)\]
Let \(\mathcal{A}\) be the localising subcategory of \(\widehat{\operatorname{SH}}_{(p,m)}(k/k)\) generated by all compact objects \(U\) such that: for \(m<\infty\), \(v_{m}\) is nilpotent on \(\widehat{M}^{MGL}(U)\); for \(m=\infty\), \(\widehat{M}^{MGL}(U)=0\). **Definition 3.1**: _The isotropic stable homotopy category \(\operatorname{SH}_{(p,m)}(k/k)\) is the Verdier localisation of the category \(\widehat{\operatorname{SH}}_{(p,m)}(k/k)\) with respect to \(\mathcal{A}\). It comes equipped with the natural functor \(\operatorname{SH}(k)\to\operatorname{SH}_{(p,m)}(k/k)\)._ **Remark 3.2**: _If \(m=\infty\) and \(k\) is not formally real, then by the result of Bachmann [3], the \(MGL\)-motivic functor \(M^{MGL}:\operatorname{SH}(k)\to\operatorname{MGL}\)-\(mod\) is conservative on \(\Upsilon_{(p,\infty)}\wedge\operatorname{SH}^{c}(k)\). Thus, \(\mathcal{A}=0\), in this case, and the category \(\operatorname{SH}_{(p,\infty)}(k/k)\) coincides with the category considered by Tanania in [18] and [19]._ Since the subcategory \(\mathcal{A}\) is generated by compact objects, our realisation restricts to compact parts: \(\operatorname{SH}^{c}(k)\to\operatorname{SH}^{c}_{(p,m)}(k/k)\) - see [15, Lemma 4.4.4]. **Proposition 3.3**: _Let \(1\leqslant m<\infty\). Then \(\operatorname{SH}_{(p,m)}(k/k)\) is the Verdier localisation of \(\operatorname{SH}(k)\) by the subcategory generated by compact objects \(U\) such that \(M^{MGL}(U)\) belongs to the localising \(\otimes\)-ideal of \(\operatorname{MGL}\)-\(mod\) generated by \(\mathbb{1}^{MGL}/v_{m}:=\operatorname{Cone}(\mathbb{1}^{MGL}(*)[2*]\stackrel{v_{m}}{\longrightarrow}\mathbb{1}^{MGL})\) and \(MGL\)-motives of \(K(p,m)\)-anisotropic varieties._ _Proof_: Let \(\mathcal{X}^{MGL}_{(p,m)}:=M^{MGL}(\mathfrak{X}_{Q_{(p,m)}})\) be the projector complementary to \(\Upsilon^{MGL}_{(p,m)}\). \((\rightarrow)\) Let \(U\in\operatorname{SH}^{c}(k)\) be a compact object, such that \(v_{m}\) is nilpotent on \(\Upsilon^{MGL}_{(p,m)}\otimes M^{MGL}(U)\). Then by [5, Theorem 2.15], this object belongs to the thick \(\otimes\)-\(\triangle\)-ed ideal of \(MGL\)-\(mod\) generated by \(\Upsilon^{MGL}_{(p,m)}\otimes\mathbb{1}^{MGL}/v_{m}\). But \(M^{MGL}(U)\) is an extension of \(\mathcal{X}^{MGL}_{(p,m)}\otimes M^{MGL}(U)\) and \(\Upsilon^{MGL}_{(p,m)}\otimes M^{MGL}(U)\), where \(\mathcal{X}^{MGL}_{(p,m)}\) belongs to the localising \(\otimes\)-ideal generated by \(MGL\)-motives of \(K(p,m)\)-anisotropic varieties. Hence, \(M^{MGL}(U)\) belongs to the specified ideal. \((\leftarrow)\) If \(U\in\operatorname{SH}^{c}(k)\) is a compact object such that \(M^{MGL}(U)\) belongs to the ideal described, then \(\Upsilon^{MGL}_{(p,m)}\otimes M^{MGL}(U)\) belongs to the localising \(\otimes\)-ideal of \(\widehat{\operatorname{MGL}}_{(p,m)}\)-\(mod=\Upsilon^{MGL}_{(p,m)}\otimes\operatorname{MGL}\)-\(mod\) generated by \(\Upsilon_{(p,m)}^{MGL}\otimes\mathbb{1}^{MGL}/v_{m}\). And this object is still compact. Hence, the action of \(v_{m}\) on it is nilpotent, again by [5, Theorem 2.15]. \(\Box\) We can introduce the partial \(K(p,m)\)-order on the set of field extensions of \(k\).
Let \(E=\mbox{colim}_{\alpha}\,E_{\alpha}\), and \(F=\mbox{colim}_{\beta}\,F_{\beta}\), for some finitely generated extensions \(E_{\alpha}=k(Q_{\alpha})\) and \(F_{\beta}=k(P_{\beta})\) with smooth models \(Q_{\alpha}\) and \(P_{\beta}\). We say that \(E/k\stackrel{{(p,m)}}{{\geq}}F/k\), if for any \(\beta\), there exists \(\alpha\) such that \((P_{\beta})_{E_{\alpha}}\) is \(K(p,m)\)-isotropic (\(\Leftrightarrow\)\(\pi_{*}:K(p,m)_{*}(Q_{\alpha}\times P_{\beta})\twoheadrightarrow K(p,m)_{*}(Q_{\alpha})\) is surjective). We say that two extensions are equivalent \(E/k\stackrel{{(p,m)}}{{\sim}}F/k\), if \(E/k\stackrel{{(p,m)}}{{\geq}}F/k\) and \(F/k\stackrel{{(p,m)}}{{\geq}}E/k\). Let \(\widetilde{E}\) be the _flexible closure_ of the field \(E\) (that is, \(\widetilde{E}=E(\mathbb{P}^{\infty})\)). Combining natural restrictions \(\mbox{SH}(k)\rightarrow\mbox{SH}(\widetilde{E})\) with projections to isotropic categories, we get a family of _isotropic realisations_ \[\psi_{(p,m),E}:\mbox{SH}(k)\rightarrow\mbox{SH}_{(p,m)}(\widetilde{E}/ \widetilde{E}),\] where \(p\) is a prime number, \(1\leqslant m\leqslant\infty\), and \(E/k\) runs over equivalence classes of the field extensions under the relation \(\stackrel{{(p,m)}}{{\sim}}\) above. ## 4 The weight structure on \(\mbox{MGL}\,\)- \(mod\) On \(\mbox{MGL}^{c}\,\)- \(mod\) we have the weight structure in the sense of Bondarko [6] with the _heart_\(\cal H\) consisting of the direct summands of \(\mbox{MGL}\)-motives of smooth projective varieties. That is, \({\cal H}=Chow^{\Omega}(k)\). A similar structure exists on \(\widehat{\mbox{MGL}}_{(p,m)}\,\)- \(mod\). **Proposition 4.1**: _On the compact part \(\widehat{\mbox{MGL}}^{c}_{(p,m)}\,\)- \(mod\) we have a bounded, non-degenerate weight structure whose heart \(\cal H\) is equivalent to the category \(Chow^{\Omega}_{K(p,m)\,\)- \(iso}(k)\) of \(K(p,m)\)-isotropic \(\Omega\)-Chow motives._ _Proof_: Our weight structure is determined by its' heart \(\cal H\) which consists of direct summands of \(\widehat{M}^{MGL}\)-motives of smooth projective varieties. The fact that \(\cal H\) indeed defines a weight structure follows from the following result. **Lemma 4.2**: _For \(X,Y\in SmProj/k\), with \(\dim(Y)=d\) and \(l\in\mathbb{Z}\), we have:_ \[(1) \mbox{Hom}_{\widehat{\mbox{MGL}}_{(p,m)}}(\widehat{M}^{MGL}(X)(l )[2l],\widehat{M}^{MGL}(Y))=\Omega^{d-l}(X\times Y)/(K(p,m)-\mbox{anisotropic classes});\] \[(2) \mbox{Hom}_{\widehat{\mbox{MGL}}_{(p,m)}}(\widehat{M}^{MGL}(X)(l )[2l],\widehat{M}^{MGL}(Y)[i])=0,\;\;\mbox{for}\;\;i>0.\] _Proof_: Due to duality for \(\mbox{MGL}\,\)- \(mod\), it is sufficient to treat the case \(X=\mbox{Spec}(k)\). The argument closely follows that of [22, Proposition 2.16]. For a smooth \(Q\), denote as \({\cal X}_{Q}\) the \(MGL\)-motive of the respective Cech simplicial scheme, and as \(\widetilde{\cal X}_{Q}\) the complementary projector \(\mbox{Cone}({\cal X}_{Q}\to T)\). Then \(\mbox{Hom}_{\widehat{\mbox{MGL}}_{(p,m)}}(T(l)[2l],\widehat{M}^{MGL}(Y))\) is given by \[\mbox{Hom}_{\mbox{MGL}}(\widetilde{\cal X}(l)[2l],M^{MGL}(Y)\otimes\widetilde {\cal X})=\mbox{Hom}_{\mbox{MGL}}(T(l)[2l],M^{MGL}(Y)\otimes\widetilde{\cal X}),\] where \(\widetilde{\cal X}=\widetilde{\cal X}_{Q_{(p,m)}}\) and \(Q_{(p,m)}\) is the disjoint union of all \(K(p,m)\)-anisotropic varieties over \(k\). Let \(({\cal X}_{Q})_{\geqslant 1}=\mbox{Cone}(M^{MGL}(Q)\rightarrow{\cal X}_{Q})\). This object is an extension of \(M^{MGL}(Q^{\times(i+1)})[i]\), for \(i\geqslant 1\). 
So, there are no Homs in \(\mbox{MGL}\,\)- \(mod\) from \(T(l)[2l]\) to this object (and any positive shift \([k]\) of it), since \(\operatorname{MGL}^{a,b}(Z)=0\), for \(a>2b\) and any smooth variety \(Z\). Because \(\widetilde{\mathcal{X}}_{Q}\) is an extension of \((\mathcal{X}_{Q})_{\geqslant 1}[1]\) by \(\operatorname{Cone}(M^{MGL}(Q)\to T)\), we get: \[\operatorname{Hom}_{\operatorname{MGL}}(T(l)[2l],M^{MGL}(Y)\otimes\widetilde{ \mathcal{X}}_{Q_{(p,m)}})=\operatorname{Coker}(\Omega_{l}(Y\times Q_{(p,m)}) \stackrel{{\pi_{*}}}{{\longrightarrow}}\Omega_{l}(Y)).\] In other words, our group is obtained from \(\Omega_{l}(Y)\) by moding-out \(K(p,m)\)_-anisotropic classes_. The vanishing of \(\operatorname{Hom}_{\widehat{\operatorname{MGL}}_{(p,m)}}(\widehat{M}^{MGL}( X)(l)[2l],\widehat{M}^{MGL}(Y)[i])\), for positive \(i\), follows from the same arguments. \(\square\) By [6, Theorem 4.3.2, Proposition 5.2.2], we have a unique bounded non-degenerate weight structure on \(\widehat{\operatorname{MGL}}_{(p,m)}\cdot mod\) whose heart \(\mathcal{H}\) is the category of \(K(p,m)\)_-isotropic \(\Omega\)-Chow-motives \(Chow^{\Omega}_{K(p,m)\,\text{-}\,iso}(k)\)_ which is the idempotent completion of the category of correspondences, where Homs are given by part (1) of Lemma 4.2. \(\square\) Recall, that \(\Omega^{*}\otimes_{\mathbb{Z}}\mathbb{Z}_{(p)}=BP^{*}[x_{i}|_{i\neq p^{s}-1}]= BP^{*}[\overline{x}]\). The results of [24] permit to identify, in the case of a flexible field, the heart of our weight structure with the category of certain numerical Chow motives, which is the crucial step of the article. **Proposition 4.3**: _Let \(k\) be flexible. Then on the compact part \(\widehat{\operatorname{MGL}}^{c}_{(p,m)}\cdot mod\) we have a bounded, non-degenerate weight structure whose heart \(\mathcal{H}\) is equivalent to the category \(Chow^{P(m)[\overline{x}]}_{Num}(k)\) of numerical \(P(m)[\overline{x}]\)-Chow motives._ _Proof_: If \(k\) is flexible, then there is a \(K(p,m)\)-anisotropic variety \(Q\) whose class in the Lazard ring is \(v_{m-1}\) (for example, one may use a norm-variety corresponding to some non-zero pure \(m\)-symbol modulo \(p\)). Then \(I(Q)=\operatorname{Image}(\Omega_{*}(Q)\stackrel{{\pi_{*}}}{{ \longrightarrow}}\mathbb{L})\) contains \(v_{i}\), for \(0\leqslant i<m\) (since this ideal is stable under Landweber-Novikov operations) - see Example 7.7. Thus, classes \(v_{i}\in\Omega^{*}(\operatorname{Spec}(k))\), for \(0\leqslant i<m\), are \(K(p,m)\)-anisotropic. In particular, \(p\in\mathbb{L}_{0}\) is \(K(p,m)\)-anisotropic. Since \(\Omega^{*}\otimes_{\mathbb{Z}}\mathbb{Z}_{(p)}=BP^{*}[\overline{x}]\) and \(BP^{*}/(v_{0},\dots,v_{m-1})=P(m)^{*}\), we get that, for any smooth projective variety \(X\), \[\Omega^{*}(X)/(K(p,m)\operatorname{-anis.\ classes})=P(m)[\overline{x}]^{*}(X) /(K(p,m)\operatorname{-anis.\ classes}).\] Since \(K(p,m)\)-anisotropic varieties are exactly \(P(m)\)-anisotropic ones, by Proposition 2.3, the latter group can be rewritten as \[P(m)[\overline{x}]^{*}(X)/(P(m)\operatorname{-anis.\ classes})=P(m)[\overline{x }]^{*}_{iso}(X).\] Thus, our heart is the category of isotropic \(P(m)[\overline{x}]\)-Chow motives. Finally, it follows from [24, Theorem 5.1] - see Theorem 2.4 that this category is equivalent to the category of numerical \(P(m)[\overline{x}]\)-Chow motives \(Chow^{P(m)[\overline{x}]}_{Num}(k)\). 
\(\square\)

## 5 Landweber-Novikov operations

Let \(\operatorname{MGL}=B(T)\) be the \(\operatorname{MGL}\)-spectrum. Then, for variables \(b_{i}\) of degree \((i)[2i]\) and any monomial \(\overline{b}^{\,\overline{r}}=\prod_{i}b_{i}^{r_{i}}\) in them of degree \((s)[2s]\), we have the Landweber-Novikov map in \(\operatorname{SH}(k)\):
\[S^{\overline{b}^{\overline{r}}}_{L-N}:\operatorname{MGL}\longrightarrow\Sigma^{2s,s}\operatorname{MGL},\]
which due to the adjoint pair (1) induces the Landweber-Novikov operation
\[S^{\overline{b}^{\overline{r}}}_{L-N}:MGL^{a,b}(U)\longrightarrow MGL^{a+2s,b+s}(U),\]
for any \(U\in\mathrm{SH}(k)\), functorially in \(U\). Taken together, these operations define the action of the Landweber-Novikov algebra \(MU_{*}MU\) on \(\mathrm{SH}(k)\). The natural functor \(\mathrm{SH}(k)\longrightarrow\widehat{\mathrm{SH}}_{(p,m)}(k/k)\) translates the above maps to maps
\[\widehat{S^{\overline{b}^{\overline{r}}}_{L-N}}:\widehat{\mathrm{MGL}}\longrightarrow\Sigma^{2s,s}\widehat{\mathrm{MGL}}\]
in \(\widehat{\mathrm{SH}}_{(p,m)}(k/k)\) which induce the action
\[\widehat{S^{\overline{b}^{\overline{r}}}_{L-N}}:\widehat{MGL}^{a,b}(U)\longrightarrow\widehat{MGL}^{a+2s,b+s}(U)\]
of the Landweber-Novikov algebra on \(\widehat{\mathrm{SH}}_{(p,m)}(k/k)\). These operations may be combined into a _Total_ one
\[\widehat{S}^{Tot}_{L-N}=\sum_{\overline{r}}\widehat{S^{\overline{b}^{\overline{r}}}_{L-N}}:\widehat{MGL}^{a,b}(U)\longrightarrow\bigoplus_{s\in\mathbb{Z}}\widehat{MGL}^{a+2s,b+s}(U)\]
which is multiplicative. If \(V\) is compact, the group \(\widehat{MGL}^{a,b}(U\otimes V^{\vee})\) may be identified with
\[\mathrm{Hom}_{\widehat{\mathrm{MGL}}_{(p,m)}}(\widehat{M}^{MGL}(U),\widehat{M}^{MGL}(V)(b)[a]).\]
In particular, for a compact \(U\), we obtain the action of the Landweber-Novikov algebra on
\[\mathbb{E}nd(\widehat{M}^{MGL}(U))=\oplus_{i\in\mathbb{Z}}\mathrm{Hom}_{\widehat{\mathrm{MGL}}_{(p,m)}}(\widehat{M}^{MGL}(U),\widehat{M}^{MGL}(U)(i)[2i]),\]
where the action of the Total Landweber-Novikov operation \(\widehat{S}^{Tot}_{L-N}\) is compatible with the composition of endomorphisms. **Definition 5.1**: _For \(U\in\widehat{\mathrm{SH}}^{c}_{(p,m)}(k/k)\), define \(J(U)\subset\mathbb{L}\) as \(Ann(\widehat{M}^{MGL}(U))\) (cf. [23, Definition 2.1])._ Since \(\widehat{S}^{Tot}_{L-N}\) is multiplicative and fixes the identity map of \(\widehat{M}^{MGL}(U)\), it follows that \(J(U)\) is an invariant ideal of the Lazard ring. Hence, so is its radical \(\sqrt{J(U)}\). **Proposition 5.2**: _The ideal \(\sqrt{J(U)}\) is finitely generated._ _Proof_: Let \(M\) be an arbitrary object of \(\widehat{\mathrm{MGL}}^{c}_{(p,m)}\)-\(mod\) and \(t(M)\) be any choice of the weight complex of \(M\) with respect to the weight structure from Proposition 4.3. It is a finite complex
\[\ldots\longrightarrow X_{i+1}\stackrel{d_{i}}{\longrightarrow}X_{i}\longrightarrow\ldots,\]
where the \(X_{i}\)s are numerical \(P(m)[\overline{x}]\)-Chow motives. The endomorphism ring \(\mathbb{E}nd_{K^{b}(\mathcal{H})}(t(M))\) is the cohomology at the middle term of the complex \((E,d)=E_{1}\to E_{0}\to E_{-1}\), where \(E_{j}=\oplus_{i}\mathbb{H}om_{Chow_{Num}^{P(m)[\overline{x}]}(k)}(X_{i},X_{i+j})\) (the sum is finite) and the differentials are induced by the \(d_{i}\)s. Here the \(E_{j}\)s are finitely presented \(\mathbb{L}\)-modules, and so, \((E,d)=(\hat{E},\hat{d})\otimes_{\mathbb{L}_{\leqslant r}}\mathbb{L}\), for some \(r\), where \(\mathbb{L}_{\leqslant r}=\mathbb{Z}[x_{1},\ldots,x_{r}]\).
Since the ring \(\mathbb{L}_{\leqslant r}\) is Noetherian, \(\mathbb{E}nd_{K^{b}(\mathcal{H})}(t(M))\) is a finitely presented \(\mathbb{L}\)-module. Hence, \(Ann(t(M))\) is a finitely generated ideal of \(\mathbb{L}\). Then, so is the radical \(\sqrt{Ann(t(M))}\) of it. But by [6, Theorem 3.3.1.II], \(\sqrt{Ann(M)}=\sqrt{Ann(t(M))}\). Thus, \(\sqrt{Ann(M)}\) is finitely generated for any compact object \(M\) of \(\widehat{\mathrm{MGL}}_{(p,m)}\) - \(mod\). It remains to apply it to \(M=\widehat{M}^{MGL}(U)\). \(\Box\) Thus, \(\sqrt{J(U)}\) is a finitely generated radical invariant ideal of \(\mathbb{L}\). Since, over a flexible field, the classes \(v_{i}\), \(0\leqslant i<m\) are \(K(p,m)\)-anisotropic, \(J(U)\) contains \(I(p,m)=(v_{0},\ldots,v_{m-1})\). From the results of Landweber [9, Theorem 2.7, Proposition 3.4] we obtain that our radical is an invariant prime ideal of Landweber. **Proposition 5.3**: _Let \(k\) be flexible. Then, for any compact \(U\in\widehat{\rm SH}_{(p,m)}(k/k)\), we have: \(\sqrt{J(U)}=I(p,r)\), for some \(r\geqslant m\)._ ## 6 Numerical Morava motives We have the natural ring homomorphism \(P(m)[\overline{x}]\to K(p,m)\) of coefficient rings (\(v_{m}\) is inverted, while \(v_{j}\), for \(j>m\) and all \(x_{i}\)s are mapped to zero). It gives the morphism of _free_ theories \(P(m)[\overline{x}]^{*}\to K(p,m)^{*}\) which, in turn, induces the \(\otimes-\bigtriangleup\)-functor \[K^{b}(Chow^{P(m)[\overline{x}]}_{Num}(k))\longrightarrow K^{b}(Chow^{K(p,m)} _{Num}(k))\] of homotopy categories of the respective numerical Chow motives. Since the coefficient ring \(K(p,m)=\mathbb{F}_{p}[v_{m},v_{m}^{-1}]\) is a _graded field_, the numerical Chow category \(Chow^{K(p,m)}_{Num}(k)\) is semi-simple. **Proposition 6.1**: _Any map \(f:X\to Y\) in \(Chow^{K(p,m)}_{Num}(k)\) can be written as \(id_{W}\oplus 0\), where \(X=W\oplus X^{\prime}\) and \(Y=W\oplus Y^{\prime}\)._ _Proof_: We closely follow the arguments of [24, Lemma 5.9] which treats the case of \(K(p,\infty)\). Below we will assume that \(m\) is finite. Any graded element of \(K(p,m)\) is either zero, or invertible. If \(f\neq 0\), then so is the adjoint map \(\overline{f}:T\to X^{\vee}\otimes Y\). Since our motives are numerical, this means that there exists \(\underline{g}:X^{\vee}\otimes Y\to T\), whose composition with \(\overline{f}\) gives an invertible map \(T\to T\). In other words, for the respective map \(g:Y\to X\), the traces of \(\alpha=g\circ f\) and \(\beta=f\circ g\) are non-zero. Let us show that some powers of \(\alpha\) and \(\beta\) are non-zero projectors. We may assume that \(X\) and \(Y\) are (numerical-\(K(p,m)\)) motives of varieties. Let \(\widetilde{\alpha}\) be the restriction of some lifting of \(\alpha\) from the (non-numerical) \(K(p,m)(X\times X)\) to the topological Morava K-theory \(K(p,m)_{Top}(X\times X)\), and similar for \(\beta\). As the degree pairing is defined on the level of the topological realisation, the traces of \(\widetilde{\alpha}\) and \(\widetilde{\beta}\) coincide with that of \(\alpha\) and \(\beta\), and so, are non-zero. But the topological Morava K-theory satisfies the Kunneth formula (in other words, a topological Morava motive of a smooth projective variety is a sum of Tate-motives). In particular, the pairing identifies \(K(p,m)_{Top}(X)^{\vee}=K(p,m)_{Top}(X)\) and \(K(p,m)_{Top}(X\times X)\) can be identified with \({\rm End}_{K(p,m)}(K(p,m)_{Top}(X))\). 
Consider the filtration \(K(p,m)_{Top}(X)=F_{0}\supset F_{1}\supset F_{2}\supset\ldots\), where \(F_{k}=\mathrm{image}(\widetilde{\alpha}^{k})\). Since \(K(p,m)_{Top}(X)\) is a graded \(K(p,m)\)-module of finite rank and \(K(p,m)\) is a graded field, there exists \(r\), such that the embeddings \(F_{k}\supset F_{k+1}\) are strict, for \(k<r\), and are equalities, for \(k\geqslant r\). Since the degree zero part \({\rm End}^{0}_{K(p,m)}(K(p,m)_{Top}(X))\) of the endomorphism ring is finite, there is \(N\), such that \(\widetilde{\alpha}^{N}\) is a projector. It must be a projection on \(F_{r}\). Since \(tr(\widetilde{\alpha}^{N+1})=tr(\widetilde{\alpha}|_{F_{r}})=tr(\widetilde{\alpha})\), it follows that \(\deg(\widetilde{\alpha}\cdot(\widetilde{\alpha}^{N})^{\vee})\neq 0\). Hence, \(\alpha^{\prime}=\alpha^{N}\) is a non-zero projector in \(Chow^{K(p,m)}_{Num}(k)\) (note that \(K(p,m)_{Num}\) is a sub-quotient of \(K(p,m)_{Top}\)). Then so is \(\beta^{\prime}=\beta^{2N}\), while \(f\) and \(g^{\prime}=g\circ\beta^{2N-1}\) identify \((X,\alpha^{\prime})\) with \((Y,\beta^{\prime})\). Denote this motive as \(W^{\prime}\). Then \(f=id_{W^{\prime}}\oplus f^{\prime}\), for a non-zero \(W^{\prime}\) and some \(f^{\prime}\). Applying the same arguments inductively to \(f^{\prime}\), \(f^{\prime\prime}\), etc. and using the fact that the endomorphism rings of \(X\) and \(Y\) are finite, we get the needed decomposition of \(f\). \(\Box\) As a corollary we get: **Proposition 6.2**: _Any object of \(K^{b}(Chow^{K(p,m)}_{Num}(k))\) can be presented by a complex with zero differentials. Such a presentation is unique (up to a canonical isomorphism)._ _Proof_: A given object of \(K^{b}(Chow_{Num}^{K(p,m)}(k))\) is represented by some finite complex \(X_{r}\to\ldots\to X_{s}\). Using Proposition 6.1, we may present \(d_{s}:X_{s+1}\to X_{s}\) as \(id_{W}\oplus 0\), for some direct summands \(W\) of \(X_{s+1}\) and \(X_{s}\). Modding out the homotopy trivial direct summand \(W\xrightarrow{id}W\) of our complex, we get an alternative representative of our object with the zero differential \(d_{s}\). Repeating this inductively for the complexes \(X_{r}\to\ldots\to X_{s+1}\), etc., we obtain a representative with zero differentials. The uniqueness of such a presentation is clear. \(\square\) The graded pieces of the canonical representative from Proposition 6.2 provide cohomology functors
\[H^{i}:K^{b}(Chow_{Num}^{K(p,m)}(k))\to Chow_{Num}^{K(p,m)}(k).\]
If \(\mathcal{D}\) is a triangulated category with the weight structure \(\omega\) and heart \(\mathcal{H}\), then the _weight complex_ \(t(X)\in K(\mathcal{H})\), for an object \(X\in\mathcal{D}\), is not uniquely defined, in general. But Bondarko has shown [6, Section 3] that it is well defined in some quotient-category. Namely, for \(U,V\in K(\mathcal{H})\) we may consider the subgroup of morphisms \(Z(U,V)\subset\operatorname{Hom}_{K(\mathcal{H})}(U,V)\) that can be presented as \((d_{i-1}^{V}\circ t_{i})\), for some \(t_{i}:U_{i}\to V_{i-1}\). Then \(Z\) is a two-sided ideal and one may define the quotient additive category \(K_{\mathfrak{w}}(\mathcal{H})=K(\mathcal{H})/Z\) - see [6, Definition 3.1.6] (note though, that this category is usually not triangulated). Then the assignment \(X\mapsto t(X)\) gives an additive functor \(\mathcal{D}\to K_{\mathfrak{w}}(\mathcal{H})\) - see [6, Theorem 3.2.2(II)].
In particular, we have this _weak weight complex functor_ in our situation:
\[t_{\mathfrak{w}}:\widehat{\operatorname{MGL}}^{c}_{(p,m)}\operatorname{-}mod\longrightarrow K^{b}_{\mathfrak{w}}(Chow_{Num}^{P(m)[\overline{x}]}(k)).\]
It is, actually, defined for any field \(k\), as we may always project from \(P(m)[\overline{x}]_{iso}\) to \(P(m)[\overline{x}]_{Num}\). We may combine it with the natural functor
\[K^{b}_{\mathfrak{w}}(Chow_{Num}^{P(m)[\overline{x}]}(k))\longrightarrow K^{b}_{\mathfrak{w}}(Chow_{Num}^{K(p,m)}(k)).\]
But it follows from Proposition 6.2 that the natural projection
\[K^{b}(Chow_{Num}^{K(p,m)}(k))\to K^{b}_{\mathfrak{w}}(Chow_{Num}^{K(p,m)}(k))\]
is an equivalence of categories. Thus, we get the _Morava weight complex functor_
\[t_{K(p,m)}:\widehat{\operatorname{MGL}}^{c}_{(p,m)}\operatorname{-}mod\longrightarrow K^{b}(Chow_{Num}^{K(p,m)}(k)).\]
Since the category \(\operatorname{MGL}\operatorname{-}mod\) has an \(\infty\)-categorical (stable) enhancement, by the result of Sosnilo [17, Corollary 3.5], we have the _strong weight complex functor_
\[t:\operatorname{MGL}^{c}\operatorname{-}mod\to K^{b}(Chow^{\Omega}(k)),\]
which restricts to the weak one when projected to \(K^{b}_{\mathfrak{w}}\). Since the category \(\widehat{\operatorname{MGL}}^{c}_{(p,m)}\operatorname{-}mod\) is obtained from \(\operatorname{MGL}\operatorname{-}mod\) by a \(\otimes\)-projector, which respects the weight structure, we get that \(t\) agrees with \(t_{K(p,m)}\) and so, the latter functor is triangulated. **Definition 6.3**: _Define the Morava weight cohomology_
\[H^{i}_{K(p,m)}:\widehat{\operatorname{MGL}}^{c}_{(p,m)}\operatorname{-}mod\longrightarrow Chow_{Num}^{K(p,m)}(k)\]
_as the composition \(H^{i}\circ t_{K(p,m)}\), for \(i\in\mathbb{Z}\)._ **Remark 6.4**: _Combining with the projection \({\rm MGL}^{c}\operatorname{-}mod\to\widehat{\rm MGL}^{c}_{(p,m)}\operatorname{-}mod\), we get the "global" version of the Morava weight cohomology:_
\[H^{i}_{K(p,m)}:{\rm MGL}^{c}\operatorname{-}mod\longrightarrow Chow^{K(p,m)}_{Num}(k).\]
\(\triangle\) By the result of Aoki [1, Corollary 4.5], the strong weight complex functor \(t\) respects \(\otimes\). Hence, the _Morava weight complex functor_ \(t_{K(p,m)}:\widehat{\rm MGL}^{c}_{(p,m)}\operatorname{-}mod\longrightarrow K^{b}(Chow^{K(p,m)}_{Num}(k))\) is tensor triangulated. Thus, we get the Kunneth formula for _Morava weight cohomology_: **Proposition 6.5**: (Kunneth formula) _The Morava weight cohomology satisfies:_
\[H^{k}_{K(p,m)}(X\otimes Y)=\bigoplus_{i+j=k}H^{i}_{K(p,m)}(X)\otimes H^{j}_{K(p,m)}(Y),\]
_for any compact \({\rm MGL}\)-modules \(X\) and \(Y\)._ Let \(A^{*}\) be an oriented cohomology theory whose coefficient ring \(A\) is an integral domain. Then \(\otimes\) has no zero-divisors in \(Chow^{A}_{Num}(k)\) on objects and morphisms - see [24, Proposition 5.2]. In particular, we get: **Proposition 6.6**: _The zero ideal \((0)\) of \(K^{b}(Chow^{K(p,m)}_{Num}(k))\) is a prime \(\otimes\)-\(\triangle\)-ed ideal._ _Proof_: Indeed, by Proposition 6.2, any object \(U\) of \(K^{b}(Chow^{K(p,m)}_{Num}(k))\) is isomorphic to the direct sum of its (shifted) weight cohomology \(H^{i}(U)[i]\in Chow^{K(p,m)}_{Num}(k)[i]\). But \(K(p,m)=\mathbb{F}_{p}[v_{m},v_{m}^{-1}]\) is an integral domain, thus \(Chow^{K(p,m)}_{Num}(k)\) has no \(\otimes\)-zero-divisors. Hence, neither does \(K^{b}(Chow^{K(p,m)}_{Num}(k))\).
\(\square\)

## 7 The Main theorem

Denote as \(U\mapsto\overline{U}\) the natural \(\otimes\)-\(\triangle\)-ed functor
\[K^{b}(Chow^{P(m)[\overline{x}]}_{Num}(k))\longrightarrow K^{b}(Chow^{K(p,m)}_{Num}(k)).\]
Let \(\mathfrak{b}\) be the kernel of it. It is a \(\otimes\)-triangulated ideal in \(K^{b}(Chow^{P(m)[\overline{x}]}_{Num}(k))\). **Proposition 7.1**: _Let \(U\in\mathfrak{b}\). Then \(Ann(U)\subset P(m)[\overline{x}]\) is non-zero._ _Proof_: If \(m=\infty\), then the above functor is an equivalence and there is nothing to prove. Below we will assume that \(m\) is finite. For any \(M_{1},M_{2}\in Chow^{P(m)[\overline{x}]}_{Num}(k)\), the graded group
\[\mathbb{H}om_{Chow^{P(m)[\overline{x}]}_{Num}(k)}(M_{1},M_{2})=\oplus_{l}{\rm Hom}_{Chow^{P(m)[\overline{x}]}_{Num}(k)}(M_{1},M_{2}(l)[2l])\]
is a finitely presented \(P(m)[\overline{x}]\)-module. Our object \(U\) is given by some finite complex \(\ldots\to N_{i}\stackrel{d_{i}}{\to}N_{i+1}\to\ldots\). The endomorphism ring \(\mathbb{E}nd(U)\) can be identified with the \(0\)-th cohomology of the (finite) complex
\[\ldots\stackrel{\partial}{\to}F_{1}\stackrel{\partial}{\to}F_{0}\stackrel{\partial}{\to}F_{-1}\stackrel{\partial}{\to}\ldots,\ \ {\rm where}\ \ F_{j}=\oplus_{i}\mathbb{H}om_{Chow^{P(m)[\overline{x}]}_{Num}(k)}(N_{i},N_{i+j})\]
are finitely presented \(P(m)[\overline{x}]\)-modules and the differentials are induced by the \(d_{i}\)s. Moreover, \((F,\partial)=(\hat{F},\hat{\partial})\otimes_{\hat{P}}P(m)[\overline{x}]\), where \(\hat{P}=\mathbb{F}_{p}[v_{m},\ldots,v_{n}][x_{i},i\leqslant l]\), for some \(n\) and \(l\), and \(P(m)[\overline{x}]\) is free over \(\hat{P}\). In particular, since \(\hat{P}\) is Noetherian, the cohomology modules \(H_{j}\) of our complex are finitely presented \(P(m)[\overline{x}]\)-modules. Note that, for a smooth projective variety \(X\), any \(K(p,m)\)-numerically trivial class, multiplied by some power of \(v_{m}\), can be lifted to a \(P(m)\)-numerically trivial class, by [24, Proposition 4.14]. Hence,
\[\mathbb{H}om_{Chow_{Num}^{K(p,m)}(k)}(\overline{M}_{1},\overline{M}_{2})=\mathbb{H}om_{Chow_{Num}^{P(m)[\overline{x}]}(k)}(M_{1},M_{2})\otimes_{P(m)[\overline{x}]}K(p,m). \tag{3}\]
Here \(\mathbb{H}om_{Chow_{Num}^{P(m)[\overline{x}]}(k)}(M_{1},M_{2})\) is given by the numerical \(P(m)[\overline{x}]\)-theory of a direct summand of some smooth projective variety. There is a natural filtration by codimension of support on this \(\mathbb{L}\)-module. The graded pieces of it possess the action of the Landweber-Novikov algebra \(MU_{*}MU\). Since the respective \(\mathbb{L}\)-modules are finitely presented, by the result of Landweber [9, Lemma 3.3], these are extensions of (finitely many) \(\mathbb{L}\)-modules \(\mathbb{L}/I(q,n)\), and so, our \(\mathbb{H}om\) is an extension of such modules. Moreover, since it is a \(P(m)\)-module, it should be an extension of modules of the type \(\mathbb{L}/I(p,n)=P(n)[\overline{x}]\), with \(n\geqslant m\). Let \(P\{m\}^{*}\) be the free theory with the coefficient ring \(P\{m\}=P(m)[v_{m}^{-1}]\). Note that the functor \(\otimes_{P(m)}P\{m\}\) is exact and it annihilates the modules \(P(n)\), for \(n>m\). Thus,
\[\mathbb{H}om_{Chow_{Num}^{P(m)[\overline{x}]}(k)}(M_{1},M_{2})\otimes_{P(m)}P\{m\}\]
is a free \(P\{m\}[\overline{x}]\)-module of finite rank. This shows that our complex \((F,\partial)\) is adjusted for the right exact functor \(\otimes_{P(m)[\overline{x}]}K(p,m)\). We need the following facts.
The first one is a variant of [25, Lem. 3.4]. **Lemma 7.2**: _Let \(L\) be a finitely-generated \(P(m)[\overline{x}]\)-module, such that \(L\otimes_{P(m)[\overline{x}]}K(p,m)=0\). Then \(Ann(L)\subset P(m)[\overline{x}]\) is non-zero. Moreover, \(Ann(L)\otimes_{P(m)[\overline{x}]}K(p,m)=K(p,m)\)._ _Proof_: Denote as \(I\subset P(m)[\overline{x}]\) the ideal generated by \(v_{r}\), \(r>m\) and all \(x_{i}\), \(i\neq p^{s}-1\). Then the proof is identical to that of [25, Lemma 3.4] (which works for any polynomial algebra). \(\Box\) **Lemma 7.3**: _Let \(L\) be a finitely-generated \(P(m)[\overline{x}]\)-module, such that \(L\otimes_{P(m)[\overline{x}]}K(p,m)=0\). Then_ \[\mathrm{Tor}_{i}^{P(m)[\overline{x}]}(L,K(p,m))=0,\ \ \mbox{for all}\ i.\] _Proof_: Here \(\mathrm{Tor}_{i}^{P(m)[\overline{x}]}(L,K(p,m))\) is a \(K(p,m)\)-module which is annihilated by every \(u\in Ann(L)\). Since \(Ann(L)\otimes_{P(m)[\overline{x}]}K(p,m)=K(p,m)\), this module is zero. \(\Box\) Let \(H_{j}\) be the \(j\)-cohomology of the complex \((F,\partial)\). These are finitely-presented \(P(m)[\overline{x}]\)-modules. Let us show by the increasing induction on \(l\) that \(\mathrm{Tor}_{i}^{P(m)[\overline{x}]}(H_{j},K(p,m))=0\), for all \(i\) and all \(j\leqslant l\). This is true for \(l<<0\), as our complex is finite. Suppose, it is true for \(j<l\). We have the spectral sequence: \[E_{2}^{p,q}=\mathrm{Tor}_{p}^{P(m)[\overline{x}]}(H_{q},K(p,m))\] converging to the cohomology of \((F,\partial)\otimes_{P(m)[\overline{x}]}K(p,m)\) which is zero, since \(\overline{U}=0\). Here \(d_{r}:E_{r}^{p,q}\to E_{r}^{p-r,q+r-1}\). By our condition, there are no differential to, or from the \(E^{0,l}\) term. Thus, \(H_{l}\otimes_{P(m)[\overline{x}]}K(p,m)=E_{2}^{0,l}=E_{\infty}^{0,l}=0\). By Lemma 7.3, \(\mathrm{Tor}_{i}^{P(m)[\overline{x}]}(H_{l},K(p,m))=0\), for any \(i\). The induction step and the statement are proven. In particular, \(H_{0}\otimes_{P(m)[\overline{x}]}K(p,m)=0\). Since \(H_{0}=\mathbb{E}nd(U)\), by Lemma 7.2, \(Ann(U)\subset P(m)[\overline{x}]\) is non-zero. \(\Box\) **Proposition 7.4**: _Let \(k\) be flexible and \(U\in\widehat{\rm SH}^{c}_{(p,m)}(k)\). Then the following conditions are equivalent:_ * \(Ann(\widehat{M}^{MGL}(U))\subset\mathbb{L}\) _contains some power of_ \(v_{m}\) _(where_ \(v_{\infty}=1\)_)._ * \(t_{K(p,m)}(\widehat{M}^{MGL}(U))=0\in K^{b}(Chow^{K(p,m)}_{Num}(k))\)_._ _Proof_: Let \(m<\infty\), \(M=\widehat{M}^{MGL}(U)\) and \(t(M)\) be any choice of the weight complex for the weight filtration from Proposition 4.3. By [6, Theorem 3.3.1.II], the radicals of the annihilators of \(M\) and \(t(M)\) in \(\mathbb{L}\) coincide. Thus, (1) \(\Rightarrow\)\(Ann(t(M))\subset\mathbb{L}\) contains some power of \(v_{m}\). Then the projection of \(t(M)\) to \(K^{b}(Chow^{K(p,m)}_{Num}(k))\) is zero, since \(v_{m}\) is inverted there. Conversely, if the projection of \(t(M)\) to \(K^{b}(Chow^{K(p,m)}_{Num}(k))\) is zero, then by Proposition 7.1, the annihilator of \(t(M)\) in \(P(m)[\overline{x}]\) is non-zero, which means that \(\sqrt{Ann(t(M))}\subset\mathbb{L}\) contains \(I(p,m)\) strictly. Since, by Proposition 5.3, \(\sqrt{Ann(t(M))}=\sqrt{Ann(M)}\) is a prime invariant ideal \(I(p,r)\), we have that \(r>m\) and so, this radical contains \(v_{m}\). Hence, \(Ann(M)\) contains some power of \(v_{m}\). For \(m=\infty\), it follows from the conservativity of the functor \(t_{K(p,\infty)}\). 
\(\square\) Combining this result with the properties of the _Morava weight cohomology_ functor, we get: **Proposition 7.5**: _Let \(k\) be flexible. Objects \(U\in\widehat{\rm SH}^{c}_{(p,m)}(k/k)\) for which \(\widehat{M}^{MGL}(U)\) is annihilated by some power of \(v_{m}\) (where \(v_{\infty}=1\)) form a prime \(\otimes\)- \(\triangle\)-ed ideal._ _Proof_: By Proposition 7.4, this ideal consists of objects \(U\) such that the Morava weight complex \(t_{K(p,m)}\) of \(\widehat{M}^{MGL}(U)\) is zero. By Proposition 6.6, the zero ideal (0) of \(K^{b}(Chow^{K(p,m)}_{Num}(k))\) is prime. Since the functors \(\widehat{M}^{MGL}\) and \(t_{K(p,m)}\) are tensor-triangulated, the preimage of this ideal is also prime. \(\square\) Now we can prove our Main Theorem. Consider _isotropic realisations_ \[\psi_{(p,m),E}:{\rm SH}^{c}(k)\to{\rm SH}^{c}_{(p,m)}(\widetilde{E}/ \widetilde{E}),\] where \(p\) is a prime number, \(1\leqslant m\leqslant\infty\), \(E/k\) runs over \(\stackrel{{(p,m)}}{{\sim}}\)-equivalence classes of field extensions, and \(\widetilde{E}\) is its _flexible closure_. **Theorem 7.6**: _Let \(\mathfrak{a}_{(p,m),E}:=\ker(\psi_{(p,m),E})\). Then_ * \(\mathfrak{a}_{(p,m),E}\) _are points of the Balmer spectrum_ \({\rm Spc}({\rm SH}^{c}(k))\) _of (the compact part) of Morel-Voevodsky category._ * \(\mathfrak{a}_{(p,m),E}=\mathfrak{a}_{(q,n),F}\) _if and only if_ \(p=q\)_,_ \(m=n\) _and_ \(E/k\stackrel{{(p,m)}}{{\sim}}F/k\)_._ * \(\mathfrak{a}_{(p,\infty),E}\) _is the image of the point_ \(\mathfrak{a}_{p,E}\) _of_ _[_24_, Theorem 5.13]_ _under the natural map of spectra_ \({\rm Spc}({\rm DM}^{c}(k))\to{\rm Spc}({\rm SH}^{c}(k))\) _induced by the motivic functor_ \(M:{\rm SH}^{c}(k)\to{\rm DM}^{c}(k)\)_._ _Proof_: We will prove only parts (1) and (3) here and will leave the proof of (2) to the next section. (1): The isotropic realisation \(\psi_{(p,m),E}\) is the composition \[{\rm SH}^{c}(k)\to\widehat{\rm SH}^{c}_{(p,m)}(\widetilde{E}/ \widetilde{E})\to{\rm SH}^{c}_{(p,m)}(\widetilde{E}/\widetilde{E}).\] The preimage of the zero ideal (0) of \({\rm SH}^{c}_{(p,m)}(\widetilde{E}/\widetilde{E})\) under the second map, by definition, consists of those objects \(U\in\widehat{\rm SH}^{c}_{(p,m)}(\widetilde{E}/\widetilde{E})\) whose \(\widehat{M}^{MGL}\)-motive is annihilated by some power of \(v_{m}\), for \(m<\infty\) respectively, is zero, for \(m=\infty\). By Proposition 7.5, this ideal is prime. As \(\mathfrak{a}_{(p,m),E}\) is the preimage of it under the first map, it is also prime and defines a point of the Balmer spectrum. (3) The ideal \(\mathfrak{a}_{(p,\infty),E}\) consists of \(U\in\operatorname{SH}^{c}(k)\) with \(t_{K(p,\infty)}(\widehat{M}^{MGL}(U_{\widetilde{E}}))=0\). But the latter is nothing else than the weight complex \(t(\widehat{M}(U_{\widetilde{E}}))\) of the (\(p\)-isotropic) motive of \(U\), where \(\widehat{M}:\operatorname{SH}_{(p,\infty)}(\widetilde{E}/\widetilde{E}) \to\operatorname{DM}(\widetilde{E}/\widetilde{E};\mathbb{F}_{p})\) is the (isotropic) motivic functor. Since the weight complex is conservative, this happens if and only if \(M(U)\) belongs to \(\mathfrak{a}_{p,E}\). Thus, \(\mathfrak{a}_{(p,\infty),E}\) is the image of \(\mathfrak{a}_{p,E}\) under the map induced by \(M\). \(\square\) **Example 7.7**: _Let \(\alpha=\{a_{1},\dots,a_{n}\}\neq 0\in K_{n}^{M}(k)/p\), and \(Q_{\alpha}\) be the norm-variety of Rost [16]. 
Then the invariant \(I_{BP}(Q_{\alpha})=\operatorname{Im}(BP_{*}(Q_{\alpha})\stackrel{\pi_{*}}{\rightarrow}BP)\) is \(I(n)=(v_{0},\dots,v_{n-1})\). Hence \(Q_{\alpha}\) is \(K(p,m)\)-isotropic, for \(m<n\), and \(K(p,m)\)-anisotropic, for \(m\geqslant n\), and this remains true over \(\widetilde{k}\) - see Proposition 2.3._ _In particular, \(\Sigma_{\mathbb{P}^{1}}^{\infty}Q_{\alpha}\in\mathfrak{a}_{(p,m),k}\), for \(m\geqslant n\). On the other hand, for any \(m<n\), there is \(l\) prime to \(p\), so that the map \(1^{MGL}(d)[2d]\stackrel{l\cdot v_{m}}{\longrightarrow}1^{MGL}\) factors through \(M^{MGL}(Q_{\alpha})\) (by the definition of \(I(Q_{\alpha})\)). Since \(\mathbb{1}\) doesn't vanish in \(K^{b}(Chow_{Num}^{K(p,m)}(\widetilde{k}))\) and \(l\cdot v_{m}\) is invertible there, we get that \(\Sigma_{\mathbb{P}^{1}}^{\infty}Q_{\alpha}\not\in\mathfrak{a}_{(p,m),k}\), for \(m<n\). \(\triangle\)_

## 8 Local stable homotopy categories

To prove part (2) of the Main Theorem 7.6 we will need to introduce, in analogy with [22, Definition 2.3], the local versions \(\operatorname{SH}_{(p,m)}(E/k)\) of \(\operatorname{SH}(k)\) corresponding to all possible finitely-generated extensions \(E/k\) of the base field, where isotropic categories correspond to trivial extensions. Let \(E/k\) be a finitely-generated extension and \(P/k\) be a smooth projective variety with \(k(P)=E\). Let \(\mathbf{Q}\) be the disjoint union of all smooth projective connected varieties \(Q\), such that \(k(Q)/k\stackrel{(p,m)}{\geqq}E/k\). Let \(\widetilde{C}(\mathbf{Q})\) be the respective Cech simplicial scheme, and \(\mathfrak{X}_{\mathbf{Q}}\) be the \(\Sigma_{\mathbb{P}^{1}}^{\infty}\)-spectrum of it. This is a \(\wedge\)-projector. Let \(\widetilde{\mathfrak{X}}_{\mathbf{Q}}\) be the complementary projector \(\operatorname{Cone}(\mathfrak{X}_{\mathbf{Q}}\to\mathbb{1})\). Similarly, let \(\mathfrak{X}_{P}\) be the \(\Sigma_{\mathbb{P}^{1}}^{\infty}\)-spectrum of the Cech simplicial scheme of \(P\) (note that it depends only on the extension \(E/k\), so it can be denoted \(\mathfrak{X}_{E/k}\)). Then \(\Upsilon_{(p,m),E/k}:=\widetilde{\mathfrak{X}}_{\mathbf{Q}}\wedge\mathfrak{X}_{P}\) is a \(\wedge\)-projector in \(\operatorname{SH}(k)\). Define the category \(\widehat{\operatorname{SH}}_{(p,m)}(E/k)\) as \(\Upsilon_{(p,m),E/k}\wedge\operatorname{SH}(k)\). This is naturally a full subcategory of \(\operatorname{SH}(k)\), which is a retract of it via the above projector. Note though, that the respective functors are not adjoint to each other, in general. We have an adjoint pair: (4) Denote \(\widehat{\operatorname{MGL}}_{(p,m),E/k}\)-\(mod:=\Upsilon_{(p,m),E/k}^{MGL}\otimes\operatorname{MGL}\)-\(mod\), where \(\Upsilon_{(p,m),E/k}^{MGL}:=M^{MGL}(\Upsilon_{(p,m),E/k})\), then we get an adjoint pair (5) Let \(\mathcal{A}\) be the localising subcategory of \(\widehat{\operatorname{SH}}_{(p,m)}(E/k)\) generated by objects \(\Upsilon_{(p,m),E/k}\wedge U\), where \(U\in\operatorname{SH}^{c}(k)\) and the \(\widehat{M}^{MGL}\)-motive of it belongs to the tensor localising ideal of \(\widehat{\operatorname{MGL}}_{(p,m),E/k}\)-\(mod\) generated by \(\widehat{\mathbb{1}}^{MGL}/v_{m}\), where \(v_{\infty}=1\). **Definition 8.1**: _The local stable homotopy category \(\operatorname{SH}_{(p,m)}(E/k)\) is the Verdier localisation of the category \(\widehat{\operatorname{SH}}_{(p,m)}(E/k)\) with respect to \(\mathcal{A}\).
It comes equipped with the natural realisation functor_
\[\phi_{(p,m),E}:\operatorname{SH}(k)\to\operatorname{SH}_{(p,m)}(E/k).\]
In the case of a trivial extension \(k/k\), we get the isotropic stable homotopy category \({\rm SH}_{(p,m)}(k/k)\). There is functoriality for the denominator of the extension \(E/k\). Namely, if \(L\to F\to E\) is a tower of finitely-generated field extensions and \(P/L\), \(Q/L\) are smooth projective varieties, such that \(L(P)=E\) and \(L(Q)/L\stackrel{(p,m)}{\geqq}E/L\), then \(Q\) stays \(K(p,m)\)-anisotropic over \(L(P)\) and so, over \(F(P)\). Thus, \(F(Q)/F\stackrel{(p,m)}{\geqq}F(P)/F\stackrel{(p,m)}{\sim}E/F\). Moreover, since there are rational maps in both directions between \(P/F\) and a smooth model of \(E/F\), we have \(\mathfrak{X}_{F(P)/F}=\mathfrak{X}_{E/F}\). Hence, we get a natural functor
\[{\rm SH}_{(p,m)}(E/L)\to{\rm SH}_{(p,m)}(E/F).\]
In particular, we obtain the local to isotropic functor \({\rm SH}_{(p,m)}(E/k)\to{\rm SH}_{(p,m)}(E/E)\). In order to investigate the conservativity of this functor, we will need the following fact (cf. [22, Proposition 2.7]). **Proposition 8.2**: _Let \(U\in{\rm SH}^{c}(k)\) vanish in \({\rm SH}_{(p,m)}(E/k)\). Then there exists a smooth variety \(Q/k\) of finite type, such that \(k(Q)/k\stackrel{(p,m)}{\geqq}E/k\) and \(\widetilde{\mathcal{X}}_{Q}\otimes\mathcal{X}_{P}\otimes M^{MGL}(U)\) belongs to the tensor localising ideal generated by \(1^{MGL}/v_{m}\), where \(v_{\infty}=1\)._ _Proof_: We know that \(\widetilde{\mathcal{X}}_{\mathbf{Q}}\otimes\mathcal{X}_{P}\otimes M^{MGL}(U)\) belongs to the tensor localising ideal \(\mathcal{I}\) generated by \(1^{MGL}/v_{m}\). Then \(\widetilde{\mathcal{X}}_{\mathbf{Q}}\otimes M^{MGL}(P)\otimes M^{MGL}(U)\) also belongs to the same ideal. Since \(M:=M^{MGL}(P)\otimes M^{MGL}(U)\) and the generator \(1^{MGL}/v_{m}\) of \(\mathcal{I}\) are compact, the map \(M\to\widetilde{\mathcal{X}}_{\mathbf{Q}}\otimes M\) factors through some compact object \(N\) of \(\mathcal{I}\), and so, is the composition
\[M\to N\to\widetilde{\mathcal{X}}_{Q}\otimes M\to\widetilde{\mathcal{X}}_{\mathbf{Q}}\otimes M,\]
for some variety \(Q\) of finite type as above. Moreover (by passing to a larger \(Q\), if necessary), we may assume that the map \(M\to\widetilde{\mathcal{X}}_{Q}\otimes M\) here is \((1^{MGL}\to\widetilde{\mathcal{X}}_{Q})\otimes id_{M}\). Tensoring with \(\widetilde{\mathcal{X}}_{Q}\), we see that the identity map of \(\widetilde{\mathcal{X}}_{Q}\otimes M\) factors through \(\widetilde{\mathcal{X}}_{Q}\otimes N\in\mathcal{I}\). Hence, \(\widetilde{\mathcal{X}}_{Q}\otimes M\) is a direct summand of \(\widetilde{\mathcal{X}}_{Q}\otimes N\) and belongs to \(\mathcal{I}\). Since \(\widetilde{\mathcal{X}}_{Q}\otimes M^{MGL}(P)\otimes M^{MGL}(U)\) belongs to \(\mathcal{I}\) and \(\mathcal{X}_{P}\) belongs to the tensor localising ideal generated by \(M^{MGL}(P)\), we get that \(\widetilde{\mathcal{X}}_{Q}\otimes\mathcal{X}_{P}\otimes M^{MGL}(U)\) belongs to \(\mathcal{I}\). \(\square\) In analogy with [22, Proposition 2.9] we have: **Proposition 8.3**: _Let \(E/k\) be a finitely-generated extension of a flexible field. Then the functor_
\[{\rm SH}_{(p,m)}(E/k)\longrightarrow{\rm SH}_{(p,m)}(E/E)\]
_is conservative on the image of \(\phi_{(p,m),E}\) on \({\rm SH}^{c}(k)\)._ _Proof_: We follow the proof of [22, Proposition 2.9]. Any finitely generated extension is a tower of a purely transcendental extension followed by a finite one.
We will first treat the purely transcendental part. **Lemma 8.3.1**: _Let \(E/L/k\) be a tower of finitely generated extensions where \(L/k\) is a purely transcendental extension of a flexible field. Then the functor_ \[{\rm SH}_{(p,m)}(E/k)\longrightarrow{\rm SH}_{(p,m)}(E/L)\] _is conservative on the image of \(\phi_{(p,m),E}\) on \({\rm SH}^{c}(k)\)._ _Proof_: Let \(L=k(\mathbb{A}^{n})\), and \(E=k(\overline{R})\) for some smooth variety \(\overline{R}/k\). Let \(\overline{U}\in Ob(\mathfrak{X}_{\overline{R}}\wedge\mathrm{SH}^{c}(k))\) be an object vanishing in \(\mathrm{SH}_{(p,m)}(E/L)\). Then, according to the Proposition 8.2, there exists a variety \(\overline{Q}/L\) of finite type such that \(L(\overline{Q})/L\stackrel{{(p,m)}}{{\geqq}}L(\overline{R})/L\) and \(\widetilde{\mathcal{X}}_{\overline{Q}}\otimes M^{MGL}(\overline{U_{L}})\) belongs to the tensor localising ideal generated by \(1^{MGL}/v_{m}\). The condition \(L(\overline{Q})/L\stackrel{{(p,m)}}{{\geqq}}L(\overline{R})/L\) means that we have a \(K(p,m)\)-\(L\)-correspondence \(\overline{\alpha}:\overline{Q}\leadsto\overline{R}_{L}\) of degree one, and there is no such correspondence in the opposite direction. Since \(k=k_{0}(\mathbb{P}^{\infty})\) is flexible, varieties \(\overline{R}\) and \(\overline{Q}\) are, actually, defined over \(F\) and \(M=F(\mathbb{A}^{n})\), respectively, where extensions \(k/F/k_{0}\) are purely transcendental and \(F/k_{0}\) is moreover finitely generated. By the same reason, we can assume that the object \(\overline{U}\) is defined over \(F\), while the correspondence \(\overline{\alpha}\) is defined over \(M\). So, there exist varieties \(R/F\), \(Q/M\), an object \(U\) of \(\mathfrak{X}_{R}\wedge\mathrm{SH}^{c}(F)\) and a degree one \(K(p,m)\)-\(M\)-correspondence \(\alpha:Q\leadsto R_{M}\) such that \(R|_{k}=\overline{R}\), \(Q|_{L}=\overline{Q}\), \(U|_{k}=\overline{U}\) and \(\alpha|_{L}=\overline{\alpha}\). Note, that we still have: \(M(Q)/M\stackrel{{(p,m)}}{{\geqq}}M(R)/M\) (since \(\alpha\) is defined over \(M\) and by functoriality). The extension \(M/F\) can be embedded into \(k/F\) making \(k/M\) purely transcendental. Let \(Q^{\prime}\) be a variety over \(k\) obtained from \(Q\) using this embedding. Then \(k(Q^{\prime})/k\stackrel{{(p,m)}}{{\geqq}}k(\overline{R})/k\) (since \(k/M\) is purely transcendental). Then \(\mathcal{X}_{Q}\otimes M^{MGL}(U_{M})\) belongs to the \(\otimes\)-localising ideal of \(\mathrm{MGL}(M)\,\mbox{-}\,mod\) generated by \(1^{MGL}/v_{m}\), since the map \(\mathrm{MGL}(M)\,\mbox{-}\,mod\to\mathrm{MGL}(L)\,\mbox{-}\,mod\) has a splitting (on the left) satisfying the projection formula, as the extension \(L/M\) is purely transcendental. Then \(\widetilde{\mathcal{X}}_{Q^{\prime}}\otimes M^{MGL}(\overline{U})\) belongs to a similar ideal of \(\mathrm{MGL}(k)\,\mbox{-}\,mod\). Thus, \(\overline{U}=0\) in \(\mathrm{SH}_{(p,m)}(E/k)\). \(\Box\) Using Lemma 8.3.1 our problem is reduced to the case of a finite extension. In this situation, the statement is true for an arbitrary field. **Lemma 8.3.2**: _Let \(E/L\) be a finite extension of fields. Then the functor_ \[\mathrm{SH}_{(p,m)}(E/L)\longrightarrow\mathrm{SH}_{(p,m)}(E/E)\] _is conservative._ _Proof_: Let \(E=L(P)\) for some smooth connected \(0\)-dimensional variety \(P\). Let \(U\in Ob(\mathrm{SH}^{c}(L))\) be an object vanishing in \(\mathrm{SH}_{(p,m)}(E/E)\). 
Then, for the disjoint union \(\overline{\mathbf{Q}}\) of all \(K(p,m)\)-anisotropic varieties over \(E\), we have: \(\widetilde{\mathcal{X}}_{\overline{\mathbf{Q}}}\otimes M^{MGL}(U_{E})\) belongs to the tensor localising ideal generated by \(1^{MGL}/v_{m}\). Consider a smooth \(L\)-variety \(\widehat{\mathbf{Q}}\) given by the composition \(\overline{\mathbf{Q}}\rightarrow\mathrm{Spec}(E)\rightarrow\mathrm{Spec}(L)\). We have a natural map \(\overline{\mathbf{Q}}\rightarrow\widehat{\mathbf{Q}}_{E}\), and so, \(\widetilde{\mathfrak{X}}_{\overline{\mathbf{Q}}}\wedge\widetilde{\mathfrak{X}}_{\widehat{\mathbf{Q}}_{E}}=\widetilde{\mathfrak{X}}_{\widehat{\mathbf{Q}}_{E}}\). Clearly, \(L(\widehat{\mathbf{Q}})/L\stackrel{(p,m)}{\geqq}L(P)/L\). Suppose, these are equivalent. Then in the commutative diagram the \(K(p,m)\)-push-forward along the total lower horizontal map is surjective. Here \(\mathrm{Spec}(E\otimes_{L}E)\) is a disjoint union of \(\mathrm{Spec}(F)\) for some fields \(F\). But since \([E:L]\) is finite, the degrees of both maps \(\mathrm{Spec}(E)\longleftarrow\mathrm{Spec}(F)\longrightarrow\mathrm{Spec}(E)\) coincide. In particular, the respective \(K(p,m)\)-push-forwards coincide as well. Hence, the \(K(p,m)\)-push-forward along the map \(\overline{\mathbf{Q}}\rightarrow\mathrm{Spec}(E)\) is surjective - a contradiction, as \(\overline{\mathbf{Q}}\) is \(K(p,m)\)-anisotropic. Hence, \(\widehat{\mathbf{Q}}_{E}\) is \(K(p,m)\)-anisotropic and so, there is a map \(\widehat{\mathbf{Q}}\rightarrow\mathbf{Q}\) (over \(L\)), where \(\mathbf{Q}\) is a disjoint union of all \(L\)-varieties \(Q\) with \(L(Q)/L\stackrel{(p,m)}{\geqq}L(P)/L\). In particular, \(\widetilde{\mathfrak{X}}_{\widehat{\mathbf{Q}}}\wedge\widetilde{\mathfrak{X}}_{\mathbf{Q}}=\widetilde{\mathfrak{X}}_{\mathbf{Q}}\). Since \(\widetilde{\mathfrak{X}}_{\overline{\mathbf{Q}}}\wedge\widetilde{\mathfrak{X}}_{\mathbf{Q}_{E}}=\widetilde{\mathfrak{X}}_{\mathbf{Q}_{E}}\), we know that \((\widetilde{\mathcal{X}}_{\mathbf{Q}}\otimes M^{MGL}(U))_{E}\) belongs to the tensor localising ideal generated by \(\mathbb{1}^{MGL}/v_{m}\). Then the same is true about \(M^{MGL}(P)\otimes\widetilde{\mathcal{X}}_{\mathbf{Q}}\otimes M^{MGL}(U)\), and so, also for \(\mathcal{X}_{P}\otimes\widetilde{\mathcal{X}}_{\mathbf{Q}}\otimes M^{MGL}(U)\). Thus, \(U=0\) in \(\mathrm{SH}_{(p,m)}(E/L)\). \(\Box\) This finishes the proof of Proposition 8.3. \(\Box\) Combining the restriction \(k\to\widetilde{k}\) with \(\phi_{(p,m),\widetilde{E}}\), we get the _local realisation_:
\[\varphi_{(p,m),E}:\mathrm{SH}(k)\to\mathrm{SH}_{(p,m)}(\widetilde{E}/\widetilde{k}).\]
From Proposition 8.3 we immediately get: **Corollary 8.4**:
\[\mathfrak{a}_{(p,m),E}=\ker(\varphi_{(p,m),E}^{c}).\]
_Proof_: Indeed, since \(\widetilde{k}\) is flexible, by Proposition 8.3,
\[\ker(\mathrm{SH}^{c}(\widetilde{k})\to\mathrm{SH}_{(p,m)}(\widetilde{E}/\widetilde{k}))=\ker(\mathrm{SH}^{c}(\widetilde{k})\to\mathrm{SH}_{(p,m)}(\widetilde{E}/\widetilde{E})).\]
Then their pre-images under \(\mathrm{SH}^{c}(k)\to\mathrm{SH}^{c}(\widetilde{k})\) coincide as well. \(\Box\) Now we can compare these ideals for \(K(p,m)\)-equivalent extensions (not necessarily finitely-generated). **Proposition 8.5**: _Let \(E/k\) and \(F/k\) be arbitrary field extensions. Then_
\[E/k\stackrel{(p,m)}{\sim}F/k\quad\text{ if and only if }\quad\mathfrak{a}_{(p,m),E}=\mathfrak{a}_{(p,m),F}.\]
_Proof_: Let \(E/k\stackrel{(p,m)}{\sim}F/k\) be two \(K(p,m)\)-equivalent field extensions.
Then \(\widetilde{E}/\widetilde{k}\stackrel{{(p,m)}}{{\sim}}\widetilde{ F}/\widetilde{k}\) as well. Let \(E=\mathrm{colim}_{\alpha}\,E_{\alpha}\) and \(F=\mathrm{colim}_{\beta}\,F_{\beta}\), where \(E_{\alpha}=k(R_{\alpha})\) and \(F_{\beta}=k(P_{\beta})\) are finitely-generated extensions with smooth models \(R_{\alpha}\) and \(P_{\beta}\). Let \(U\in\mathfrak{a}_{(p,m),E}\). Then, since \(M^{MGL}(U_{\widetilde{E}})\) is compact, by Proposition 3.3, it is a direct summand of some extension of \(MGL\)-motives of a finite number of \(K(p,m)\)-anisotropic varieties \(S\) over \(\widetilde{E}\) as well as finitely many objects of the type \(M^{MGL}(Y)\otimes\mathbbm{1}^{MGL}/v_{m}\), for some \(\widetilde{E}\)-varieties \(Y\). All the varieties and morphisms involved are defined over some finitely-generated extension \(\widetilde{E}_{\alpha}=\widetilde{k}(R_{\alpha})\). Since \(E/k\stackrel{{(p,m)}}{{\sim}}F/k\), the respective varieties \(S\) will stay \(K(p,m)\)-anisotropic over \(\widetilde{F}(R_{\alpha})\). Thus, \(U_{\widetilde{F}(R_{\alpha})}\in\mathfrak{a}_{(p,m),\widetilde{F}(R_{\alpha} )/\widetilde{F}(R_{\alpha})}\). By Proposition 8.3, \(U_{\widetilde{F}}\) vanishes when restricted to \(\mathrm{SH}_{(p,m)}(\widetilde{F}(R_{\alpha})/\widetilde{F})\). But since \(F/k\stackrel{{(p,m)}}{{\geq}}E/k\), the variety \(R_{\alpha}|_{\widetilde{F}}\) is \(K(p,m)\)-isotropic and so, \(\widetilde{F}(R_{\alpha})/\widetilde{F}\) and \(\widetilde{F}/\widetilde{F}\) are \(K(p,m)\)-equivalent finitely-generated field extensions. So, in order to show that \(U\in\mathfrak{a}_{(p,m),F}\), in other words, that \(U_{\widetilde{F}}\) vanishes when restricted to \(\mathrm{SH}_{(p,m)}(\widetilde{F}/\widetilde{F})\), it is sufficient to prove the finitely-generated case of our statement. Below we assume that \(\widetilde{E}=\widetilde{k}(R)\) and \(\widetilde{F}=\widetilde{k}(P)\) are finitely-generated and \(K(p,m)\)-equivalent (over \(\widetilde{k}\)). Then, the varieties \(\mathbf{Q}\) are the same, \(\Upsilon_{(p,m),\widetilde{E}/\widetilde{k}}=\mathfrak{X}_{R}\wedge\widetilde{ \mathfrak{X}}_{\mathbf{Q}}\) and \(\Upsilon_{(p,m),\widetilde{F}/\widetilde{k}}=\mathfrak{X}_{P}\wedge\widetilde{ \mathfrak{X}}_{\mathbf{Q}}\). We know that \(I(P_{\widetilde{E}})=\mathrm{Im}(\Omega_{*}(P_{\widetilde{E}})\stackrel{{ \pi_{*}}}{{\longrightarrow}}\Omega_{*}(\mathrm{Spec}(\widetilde{E})))\subset\mathbb{L}\) is a finitely generated ideal invariant under Landweber-Novikov operations. So, by the results of Landweber, the radical \(\sqrt{I(P_{\widetilde{E}})}\) of it is an intersection of finitely many ideals \(I(q,n)\) of Landweber. Since \(\widetilde{E}/\widetilde{k}\stackrel{{(p,m)}}{{\geq}}\widetilde{ F}/\widetilde{k}\), the variety \(P_{\widetilde{E}}\) is \(K(p,m)\)-isotropic, then by Proposition 2.3, this radical localised at \(p\) contains \(v_{m}\) and the \(p\)-component of it is \(I(p,n)\), for some \(n>m\), or the whole ring \(\mathbb{L}_{(p)}\). This shows that there is an \(\Omega^{*}\)-cycle on \(P\times R\) whose projection to \(R\) is \(l\cdot v_{m}^{r}\cdot 1_{R}\), for some \(r\) and some \(l\) relatively prime to \(p\). In other words, we have a commutative diagram where \(T=M^{MGL}(\operatorname{Spec}(\widetilde{k}))\) and horizontal maps are induced by projections to the point. 
Multiplied by \(M^{MGL}(R)\), the upper map acquires a splitting (given by the diagonal map) and we see that the map \(M^{MGL}(R)(*)[2*]\stackrel{l\cdot v_{m}^{r}}{\longrightarrow}M^{MGL}(R)\) factors through \(M^{MGL}(P\times R)\). Note that as \(\widetilde{k}\) is flexible, there are non-trivial extensions \(L/\widetilde{k}\) of degree \(p\) which stay anisotropic over \(\widetilde{E}\) and \(\widetilde{F}\). Thus, the identity map of \(\widetilde{\mathcal{X}}_{\mathbf{Q}}\) is killed by \(p\), and we may get rid of the prime to \(p\) number \(l\), if we tensor with \(\widetilde{\mathcal{X}}_{\mathbf{Q}}\). That is, the map
\[M^{MGL}(R)\otimes\widetilde{\mathcal{X}}_{\mathbf{Q}}(*)[2*]\stackrel{v_{m}^{r}}{\longrightarrow}M^{MGL}(R)\otimes\widetilde{\mathcal{X}}_{\mathbf{Q}}\]
factors through \(M^{MGL}(P\times R)\otimes\widetilde{\mathcal{X}}_{\mathbf{Q}}\). Since \(\widetilde{F}/\widetilde{k}\stackrel{(p,m)}{\geq}\widetilde{E}/\widetilde{k}\) as well, the map
\[M^{MGL}(P)\otimes\widetilde{\mathcal{X}}_{\mathbf{Q}}(*^{\prime})[2*^{\prime}]\stackrel{v_{m}^{s}}{\longrightarrow}M^{MGL}(P)\otimes\widetilde{\mathcal{X}}_{\mathbf{Q}}\]
factors through \(M^{MGL}(P\times R)\otimes\widetilde{\mathcal{X}}_{\mathbf{Q}}\), for some \(s\). Observe that the cones of these two maps belong to the tensor localising ideal generated by \(\mathbb{1}^{MGL}/v_{m}\). It follows that, for a given object \(U\in\operatorname{SH}^{c}(\widetilde{k})\), if \(M^{MGL}(U)\otimes\mathcal{X}_{R}\otimes\widetilde{\mathcal{X}}_{\mathbf{Q}}\) belongs to this ideal, then so does \(M^{MGL}(U)\otimes M^{MGL}(R)\otimes\widetilde{\mathcal{X}}_{\mathbf{Q}}\) and hence, \(M^{MGL}(U)\otimes M^{MGL}(P)\otimes\widetilde{\mathcal{X}}_{\mathbf{Q}}\) and \(M^{MGL}(U)\otimes\mathcal{X}_{P}\otimes\widetilde{\mathcal{X}}_{\mathbf{Q}}\). And vice-versa. Thus, \(\ker(\varphi_{(p,m),E}^{c})=\ker(\varphi_{(p,m),F}^{c})\). Conversely, if \(E/k\stackrel{(p,m)}{\not\sim}F/k\), then \(E=k(R)\), \(F=k(P)\), where (at least) one of the varieties \(R_{F}\), \(P_{E}\) is \(K(p,m)\)-anisotropic. By symmetry, we may assume the former. Then \(\Sigma_{\mathbb{P}^{1}}^{\infty}R_{+}\in\mathfrak{a}_{(p,m),F}\) (as \(R\) remains anisotropic over \(\widetilde{F}\)). On the other hand, a Tate-motive splits from \(M^{MGL}(R)\) over \(E\), and so, the image of this object in \(\widehat{\operatorname{MGL}}_{(p,m)}\)-\(mod\) is not annihilated by any power of \(v_{m}\). Hence, \(\Sigma_{\mathbb{P}^{1}}^{\infty}R_{+}\not\in\mathfrak{a}_{(p,m),E}\) and the ideals are different. \(\square\) Now we are ready to prove part (2) of the Main Theorem 7.6. (2): We have a natural map of spectra \(\rho:\operatorname{Spc}(\operatorname{SH}^{c}(k))\to\operatorname{Spec}(\operatorname{End}_{\operatorname{SH}(k)}(\mathbb{1}))\) - see [5, Corollary 5.6], where \(\operatorname{End}_{\operatorname{SH}(k)}(\mathbb{1})=\operatorname{GW}(k)\). The image of \(\mathfrak{a}_{(p,m),E}\) is the ideal consisting of those virtual quadratic forms \(\varphi\) such that \(\mathbb{1}/\varphi:=\operatorname{Cone}(\mathbb{1}\stackrel{\varphi}{\to}\mathbb{1})\) does not belong to \(\mathfrak{a}_{(p,m),E}\). Suppose, \(\varphi\) has dimension \(d\) divisible by \(p\).
Consider the projection of \(\mathbb{1}/\varphi\) via the composition \[\operatorname{SH}^{c}(k)\!\to\widehat{\operatorname{SH}}^{c}_{(p,m)}( \widetilde{E}/\widetilde{E})\stackrel{{\widehat{M}^{MGL}}}{{ \rightarrow}}\widehat{\operatorname{MGL}}^{c}_{(p,m)}\!\cdot\!mod(\widetilde{E} )\stackrel{{ t_{K(p,m)}}}{{\longrightarrow}}\!\!K^{b}(Chow^{K(p,m)}_{ Num}(\widetilde{E})).\] The category \(\widehat{\operatorname{MGL}}^{c}_{(p,m)}(\widetilde{E})\!\cdot\!mod\) is \(p\)-torsion, so \(\varphi\) projects to zero there. Hence, the projection of \(\mathbb{1}/\varphi\) to \(K^{b}(Chow^{K(p,m)}_{Num}(\widetilde{E}))\) is a direct sum of two Tate-motives, and so, is non-zero. On the other hand, for \(\varphi=\langle 1\rangle\) (the unit of \(\operatorname{GW}(k)\)), the object \(\mathbb{1}/\langle 1\rangle\) is zero already in \(SH^{c}(k)\). Hence, \(\rho(\mathfrak{a}_{(p,m),E})=\mathfrak{q}_{p}\) is the image of \((p)\) under the natural map \(\operatorname{Spec}(\mathbb{Z})\to\operatorname{Spec}(\operatorname{GW}(k))\) induced by the rank map \(rank:\operatorname{GW}(k)\to\mathbb{Z}\) mapping \(\varphi\) to \(\dim(\varphi)\). Hence, if \(\mathfrak{a}_{(p,m),E}=\mathfrak{a}_{(q,n),F}\), then \(p=q\). The invariant \(m\) of \(\mathfrak{a}_{(p,m),E}\) can be recovered with the help of certain test spaces. Namely, by [25, Theorem 3.1] (which is an adaptation to the algebro-geometric context of the construction of Mitchell [12]), for any field \(k\) of characteristic different from \(p\), there exists an object \(X_{p,n}\) in \(\mathrm{SH}^{c}(k)_{(p)}\), whose \(p\)-level is exactly \(n\), that is, the \(r\)-th Morava K-theory vanishes on \(X_{p,n}\), for \(r<n\), and is non-zero, for \(r\geqslant n\). Moreover, for any \(r<n\), \(M^{MGL}(X_{p,n})\) is killed by some power of \(v_{r}\). Such an object can be constructed as a direct summand of the suspension spectrum of some smooth variety, namely, of the quotient \(V\backslash GL(p^{n})\), where \(V=(\mathbb{Z}/p)^{n}\) acts by permutations on the basis, for odd \(p\) (respectively, of \(V\backslash GL(2^{n+1})\), where \(V=(\mathbb{Z}/2)^{n+1}\), for \(p=2\)). The important feature of \(X_{p,n}\) is that the properties of it are unaffected by field extensions. The number \(m\) is uniquely determined from the following result. **Proposition 8.6**: \(X_{p,n}\in\mathfrak{a}_{(p,m),E}\)_, for any \(n>m\), and \(X_{p,n}\not\in\mathfrak{a}_{(p,m),E}\), for any \(n\leqslant m\)._ _Proof_: Since \(M^{MGL}(X_{p,n})\) is killed by some power of \(v_{m}\) by [25, Theorem 3.2], \(X_{p,n}\in\mathfrak{a}_{(p,m),E}\), for any \(n>m\). On the other hand, if \(n\leqslant m\), then \(\widehat{M}^{MGL}(X_{p,n})\in\widehat{\mathrm{MGL}}_{(p,m)}(\widetilde{E})-mod\) is not killed by any power of \(v_{m}\), by [25, Corollary 3.6], and so, \(X_{p,n}\not\in\mathfrak{a}_{(p,m),E}\). \(\square\) It follows from Proposition 8.6 that if \(\mathfrak{a}_{(p,m),E}=\mathfrak{a}_{(p,n),F}\), then \(n=m\). Combining it with the Proposition 8.5, we complete the proof of part (2) of the Main Theorem. \(\square\) **Remark 8.7**: _Observe, that we don't have the classical specialisation relations among isotropic points. Indeed, while their topological counterparts \(\mathfrak{a}_{(p,m),Top}\) satisfy: \(\mathfrak{a}_{(p,n),Top}\subset\mathfrak{a}_{(p,m),Top}\), for \(n>m\), and so, the former point is a specialisation of the latter, this doesn't hold for isotropic points \(\mathfrak{a}_{(p,m),E}\). 
The obstruction comes (for example) from the (suspension spectra) of norm-varieties - see Example 7.7. Indeed, if \(n>m\) and \(\alpha\in K_{n}^{M}(k)/p\) is a non-zero pure symbol, then the suspension spectrum \(\Sigma_{\mathbb{P}^{1}}^{\infty}(Q_{\alpha})\) of the norm-variety \(Q_{\alpha}\) belongs to \(\mathfrak{a}_{(p,n),k}\), but not to \(\mathfrak{a}_{(p,m),k}\)._ _The reason for this is that the isotropic ideals are generated not only by \(v_{m}\)-nilpotent objects, as in topology, but also by \(K(p,m)\)-anisotropic ones. These, in a sense, counterbalance each other - compare the properties of torsion spaces \(X_{p,n}\) from Proposition 8.6 with those of \(\Sigma_{\mathbb{P}^{1}}^{\infty}(Q_{\alpha})\) from Example 7.7. \(\square\)_ **Remark 8.8**: _It is not enough to consider finitely-generated extensions only. The vast majority of \((p,m)\)-equivalence classes will not have any finitely-generated representatives - see [24, Example 5.14]. \(\square\)_ For \(m\leqslant n\), the \(K(p,m)\)-isotropic varieties are \(K(p,n)\)-isotropic (see Proposition 2.3), hence the equivalence relation \(\overset{(p,m)}{\sim}\) is _coarser_ than \(\overset{(p,n)}{\sim}\). Thus, the set of \(K(p,n)\)-isotropic points naturally surjects to the set of \(K(p,m)\)-isotropic ones. So, in the end, all these Morava-points \(\mathfrak{a}_{(p,m),E}\) form a "quotient" of motivic isotropic points \(\mathfrak{a}_{p,E}\). The following example shows that these quotients are still pretty large. **Example 8.9**: (cf. [24, Example 5.14]) _For an algebraically closed field \(k\), we have a single \(\overset{(p,m)}{\sim}\)-equivalence class of extensions (as all varieties have rational points and so, are \(K(p,m)\)-isotropic), and hence, a single isotropic point, for every \(p\) and \(m\)._ _For \(k=\mathbb{R}\), we have a unique \(\overset{(p,m)}{\sim}\)-equivalence class of extensions, for any odd \(p\). But for \(p=2\) the situation is completely different. Observe, that a curve of genus one is \(K(p,1)\)-isotropic if and only if it is \(K(p,\infty)\)-isotropic. This follows from Proposition 2.3 and the fact that the ideal \(I(X)\subset BP\) is generated by the images of the unit \(1_{X}\in BP(X)\) (which is zero, in our case) and the classes of all closed points. Thus, the extensions \(E_{J}\) constructed in [24, Example 5.14] are not \(K(2,m)\)-equivalent, for any \(1\leqslant m\leqslant\infty\). Hence, for any such \(m\), we get \(2^{2^{\aleph_{0}}}\) different \(K(2,m)\)-isotropic points \(\mathfrak{a}_{(2,m),E}\)._ **Remark 8.10**: _The isotropic points are different from the topological points. This can be seen by their action on \(\tau:1/p(-1)\to 1/p\). While the topological realisation inverts \(\tau\), the isotropic one maps it to zero. Thus, \({\rm Cone}(\tau)\in\mathfrak{a}_{(p,m),Top}\), while \({\rm Cone}(\tau)\not\in\mathfrak{a}_{(p,m),E}\), for any \(p,m\) and \(E\). The latter fact is obvious, since \(A:=t_{K(p,m)}(1/p)\in K^{b}(Chou_{Num}^{K(p,m)}(\widetilde{E}))\) is non-zero. More precisely, is a sum of two Tate-motives, which are exactly the Morava weight cohomology of it (Definition 6.3). Then no map \(f:A(-1)\to A\) may be invertible, since it shifts the weight grading. By Proposition 7.4, \({\rm Cone}(\tau)\not\in\mathfrak{a}_{(p,m),E}\). \(\triangle\)_
We introduce the Morava isotropic stable homotopy category and, more generally, the stable homotopy category of an extension $E/k$. These "local" versions of the Morel-Voevodsky stable ${\Bbb{A}}^1$-homotopy category $SH(k)$ are analogues of the local motivic categories introduced in [22], equipped with a more general notion of anisotropy, which makes it possible to construct the isotropic Morava points of the Balmer spectrum $\operatorname{Spc}(SH^c(k))$. These analogues of the topological Morava points are parametrized by the choice of a Morava K-theory and a $K(p,m)$-equivalence class of extensions $E/k$. This provides a large supply of new points and substantially improves our understanding of the spectrum.
2302.14432
Gap engineering and wave function symmetry in C and BN armchair nanoribbons
Many are the ways of engineering the band gap of nanoribbons including application of stress, electric field and functionalization of the edges. In this article, we investigate separately the effects of these methods on armchair graphene and boron nitride nanoribbons. By means of density functional theory calculations, we show that, despite their similar structure, the two materials respond in opposite ways to these stimuli. By treating them as perturbations of a heteroatomic ladder model based on the tight-binding formalism, we connect the two behaviours to the different symmetries of the top valence and bottom conduction wave functions. These results indicate that opposite and complementary strategies are preferable to engineer the gapwidth of armchair graphene and boron nitride nanoribbons.
Elisa Serrano Richaud, Sylvain Latil, Hakim Amara, Lorenzo Sponza
2023-02-28T09:21:28
http://arxiv.org/abs/2302.14432v2
# Impact of edge morphology and chemistry on nanoribbons' gapwidth ###### Abstract In this work, we scrutinise theoretically how the gap of C and BN armchair nanoribbons changes upon variations of the bond length between edge atoms and their distance from passivating species. Our DFT calculations indicate that the gap of C-based nanoribbons is more sensitive to the relaxation of the bonding length between edge atoms (morphology) whereas in BN-nanoribbons it is more sensitive to the distance between edge atoms and passivating hydrogens (chemical environment). To understand the origin of these two different behaviours, we solved a tight-binding ladder model numerically and at the first-order perturbation theory, demonstrating that the different dependence is due to the interference of the wavefunctions of the top valence and the bottom conduction states. ## I Introduction In recent decades, graphene and hexagonal boron nitride (BN) have attracted a great deal of interest because of their remarkable transport and optical properties [1; 2; 3; 4; 5]. A much explored way to modulate them is by adding extra confinement (as in 2D quantum dots, nanoribbons or nanotubes). The presence of confining edges endows them with novel size-dependent features dominated by the characteristics of the edge itself. This is why graphene and BN nanoribbons are often classified according to their edge shape, which can be zig-zag, armchair, fall in an intermediate chiral angle, or present structures that require a more general nomenclature [6]. In zig-zag nanoribbons, well localised edge-state are formed which confer antiferromagnetic properties to C-based zig-zag nanoribbons [6; 7; 8; 9; 10; 11; 12]. Instead, BN-based zig-zag nanoribbons have an indirect gap and display an intrinsic dipole moment [13; 14; 15; 16; 17; 18; 19]. At variance, both graphene [6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19] and BN [14; 15; 16; 17; 18] armchair nanoribbons (AGNR and ABNN), have no magnetic states and display a direct size-dependent gapwidth To take full advantage of this richness of properties, several methods have been explored including the application of external electromagnetic fields [9; 10; 14; 18; 27], strain [17; 24; 28] and edge engineering [17; 19; 21; 22; 23; 24; 25; 26; 29]. As a matter of fact, the edge characteristics are crucial for the performances of nanoribbons-based devices such as transistors, interconnects and logical devices [23; 29; 30; 31; 32; 33], photovoltaic applications [33; 34], or chemical sensing [35; 33]. Experimentally, edge engineering [34; 36; 37], chemical treatment [38] or selective passivation [29] have been demonstrated to have a significant impact on the device quality, precisely because of their action on the edges. Alterations of the electronic structure due to edge modifications can be divided into morphology effects (variation of the bondlengths) and chemistry effects (variation of the passivating species and their distance from the edges) [6; 26]. The sensitivity of AGNR and ABNNR gap to the passivation has been investigated by many authors [6; 17; 19; 21; 22; 23; 24; 25; 26; 29] who showed that its effect depends on the type of atoms involved, and/or on the number and position of the passivated sites. 
Most of these first-principle studies also include the structural relaxation of the edges, but the investigation stops at a stability level and the relation to the gapwidth is not explored. However, both effects seem to be decisive in determining the gap of nanoribbons and we deemed that the subject deserved a more focused study. In this article, we employ density functional theory (DFT) to study the evolution of the gap, the top valence (TV) and the bottom conduction (BC) states of AGNRs and ABNNRs as a function of the nanoribbon size upon variations of the distance between edge atoms and between these and the passivating species. Our objective is to compare the effect of morphological and chemical variations on the gapwidth and understand which of them is dominant and in which situation. 
We demonstrate that the response of the gapwidth to changes of the distance between edge atoms (morphology) or between edge atoms and passivating atoms (chemical environment) is opposite in the two materials and we rationalise this different behaviour by means of a tight-binding model which we solved both numerically and perturbatively. ## II Structural and computational details All nanoribbons studied in this article have armchair edges passivated with H atoms. They form an infinite periodic structure in the \(y\) direction and are confined along \(x\). The extension of the periodic cell along \(y\) is the cell parameter \(a\), while the width is expressed by the number \(N_{a}\) which indicates the number of dimers aligned along \(y\) inside the unitary cell (number of rows). To indicate a specific structure we will attach the index \(N_{a}\) after the label of the material, as in Figure 1, so for instance AGNR5 designates an armchair graphene nanoribbon of size \(N_{a}=5\). Density functional theory calculations were carried out within the generalized gradient approximation using the PBE [41] exchange correlation potential as implemented in the Quantum ESPRESSO [42] simulation package. Long-range van der Waals corrections were included via the DFT-D2 method [43]. To avoid interactions between consecutive cells, we included 15 A and 20 A of empty space in the \(z\) and \(x\) directions respectively. In electron density calculations and relaxation runs, the periodic axis was sampled with 20 k-points centered in \(\Gamma\) (corresponding to 11 irreducible k-points). This mesh was dense enough to converge total energies in the smallest nanoribbons. For density of states (DOS) calculations, a five times denser sampling was adopted for all systems and the resulting spectra have been broadened with a Gaussian distribution with a width of 0.02 eV. We used norm-conserving pseudopotentials [44] and set the kinetic energy cutoff at 80 Ry in both materials. It is worth stressing that using a large vertical empty space and a high energy cutoff is essential even in the relaxation runs in order to prevent nearly free-electron states from hanging below the \(p_{z}\) states hence jeopardizing the gap description. In fact, as already well known for free-standing layers [45; 46; 47; 48; 49] and nanotubes [50; 51; 52] in BN nanomaterials there is a competition at the bottom conduction between \(2p_{z}\) and \(3s\) states, whose right alignment requires a dedicated convergence study. If sometimes one can overlook this issue in BN layers, because the two competing states originate direct and indirect band gaps, this is not the case in ABNNRs where both states give rise to a direct gap at \(\Gamma\). In non-relaxed structures, all atoms occupy the sites of a regular honeycomb lattice with an inter-atomic distance of 1.42 A. Structural relaxation runs have been performed with the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm for all systems with the stopping criterion of all forces being lower than \(5\times 10^{-5}\) eV/A. We allowed variations of the cell parameter \(a\) and all atomic positions. As clarified in the following, we also run some calculations letting only specific atoms to move. In Figure 1 we report the relaxed structures of AGNR and ABNNR at \(N_{a}=5\) for sake of example, and we introduce some notable structural parameters. 
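To make the setup above easier to reproduce, the snippet below sketches how such a calculation could be driven from Python with ASE and Quantum ESPRESSO. It is a minimal illustration, not the authors' input: only the numerical settings quoted in the text (PBE with Grimme-D2 corrections, 80 Ry cutoff, 20 k-points along the periodic axis, a 5e-5 eV/A force threshold) come from the paper, while the pseudopotential file names, the ribbon-building call and the k-point axis are assumptions to be adapted.

```python
# Hedged sketch of the DFT setup described above (ASE driving Quantum ESPRESSO).
# Assumptions: pseudopotential file names, how graphene_nanoribbon's width
# argument maps to N_a, and which cell axis is the periodic one.
from ase.build import graphene_nanoribbon
from ase.calculators.espresso import Espresso

# H-passivated armchair ribbon; vacuum padding in the non-periodic directions.
ribbon = graphene_nanoribbon(4, 1, type='armchair', saturated=True, vacuum=10.0)

calc = Espresso(
    pseudopotentials={'C': 'C.pbe-nc.UPF', 'H': 'H.pbe-nc.UPF'},  # placeholder names
    input_data={
        'control':   {'calculation': 'relax',      # the paper also relaxes the cell ('vc-relax')
                      'forc_conv_thr': 2.0e-6},    # roughly 5e-5 eV/A converted to Ry/Bohr
        'system':    {'ecutwfc': 80.0,             # 80 Ry kinetic-energy cutoff
                      'vdw_corr': 'grimme-d2'},    # DFT-D2 van der Waals correction
        'electrons': {'conv_thr': 1.0e-8},
        'ions':      {'ion_dynamics': 'bfgs'},     # BFGS relaxation
    },
    kpts=(1, 1, 20),  # 20 k-points along the periodic direction; check the cell orientation
)
ribbon.calc = calc
print(ribbon.get_potential_energy())  # runs pw.x if it is available on the system
```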
In the AGNRs, the main modifications with respect to non-relaxed structures are a contraction of the distance between edge atoms \(d_{E}\) and between C and H \(d_{HC}\). In ABNNR, we observe a similar contraction of the B-N distance on the edges \(d_{E}\), and different contractions of the distances between H-B and H-N (\(d_{HB}\neq d_{HN}\)). We observed also that these modifications are basically independent on the size of the nanoribbon both qualitatively and quantitatively, so the structural parameters undergo minimal variations when comparing nanoribbons of different size. ## III Gap edge states ### Agnrs The electronic structure of AGNRs has been already studied in the past [6; 8; 9; 10; 11; 20; 21; 22; 23; 34]. Both non-relaxed and relaxed ribbons display a band gap at \(\Gamma\) of gapwidth \(\Delta_{N_{a}}\). Because of the 1D confinement, the gapwidth falls in one of the three families \(N_{a}=3m-1\), \(3m\) or \(3m+1\) (with \(m\in\mathbb{N}^{*}\)). Each family follows a different trend which asymptotically tends to zero for growing nanoribbon sizes and follows the general rule \(\Delta_{3m-1}<\Delta_{3m}<\Delta_{3m+1}\). This is depicted in Figure 2 where we plot the gapwidth of AGNRs versus \(N_{a}\) for both non-relaxed and relaxed structures (red dashed and solid blue curves). The effect of relaxation is to open the gap by about 0.1 eV in families \(N_{a}=3m+1\) and \(3m-1\), while in the \(N_{a}=3m\) the opening is observed only in small nanoribbons, while the gap closes in larger ones. Our results are in quantitative agreement with previous works both for relaxed [11; 12; 21; 25], and unrelaxed simulations [26]. To characterise better the gap states, we analyzed in more detail the nature of the TV and the BC states at \(\Gamma\) in the relaxed structures. In panels a) and b) of Figure 3, we report the band structure and the density of states (DOS) of the AGNR8, chosen as a representative example. For sake of comparison, in panel b) we also report the orbital-projected DOS and the DOS of an infinite graphene sheet with the same inter-atomic distance. The DOS around the gap (from -1.5 eV to 1.5 eV) displays neat van Hove singularities arranged more or less symmetrically with respect to the middle of the gap. As the inset of panel b) shows clearly, the states composing the gap are entirely of \(p_{z}\) character. They form a \(\pi\) bonding with nodes on the \(xy\) plane, as expected. Instead, the first empty \(\sigma\) state is found at 3 eV above the BC. To go deeper in the analysis of the gap-edge states, we look at the site-projected DOS. We integrated the bare data inside an interval of 0.1 eV encompassing the TV and BC (shaded bands in the inset of Figure 3b). The outcome of this analysis is summarised in Figure 3c), where the site-projected DOS of gap-edge states is reported as a function of the row index (note that the curves are plotted on the same \(y\) axis). At variance from what observed in zigzag nanoribbons [7], the gap states are not concentrated on the edge atoms, but rather delocalized throughout the full nanoribbon and present a modulation that nicely displays the characteristics of a static wave. This observation is confirmed by the wave-like modulation of the charge probability \(|\psi(\mathbf{r})|^{2}\) associated with the TV and BC states, reported aside panel c). 
The wavefunction plot shows also that there is no spill-out on the passivating hydrogens and that, with respect to the edge bondings \(d_{E}\), the TV and BC states display respectively a bonding and an antibonding character. ### Abnnrs The gapwidths of ABNNRs fall in the same three families with the same hierarchy [17; 18; 28]. This similarity with the graphene ribbons is actually quite general and can be understood from a simple tight-binding model (see section IV.2). The evolution of the ABNNRs gapwidth for sizes going from \(N_{a}\)=5 to 19 in the relaxed and non-relaxed configurations is presented in Figure 4 by the solid blue and the red dashed lines. The non-relaxed structures present a gap that monotonically tends to the \(N_{a}\rightarrow\infty\) limit in a way that is similar to non-passivated calculations [17]. We estimate this limit to be 3.885 eV from the weighted average of the curves extrapolated at \(1/N_{a}=0\) (cfr. inset of the Figure). This value is about 0.8 eV lower than the gapwidth of the isolated BN sheet (4.69 eV in PBE). All these aspects are consistent because, as it will become clearer later, in non-relaxed calculations the H atoms are too far away to saturate efficiently the dangling bonds located at the edges of the ribbon. As a consequence, these form edge states inside the gap that lower the gapwidth, similarly to what happens in non-passivated (bare) ribbons. As a result of the structural optimisation, the gapwidth of all families opens and tends to an asymptotic limit that is still about 0.1 eV lower than in the isolated monolayer, in agreement with similar calculations [14; 17]. This discrepancy is ascribed to a non-negligible edge contribution to the BC state, obviously absent in the isolated monolayer (cfr. the row-projected DOS analysis here below, and [14]). Finally, we note that the first empty \(\sigma\) state, i.e. the near free-electron state, is only 0.5 eV above the BC. Figure 3: Electronic structure of the relaxed AGNR8. a) Band structure. b) and Inset: Total density of states (thick black) and projected on \(p_{z}\) orbital character (red bullets) compared with the DOS of the graphene sheet (dashed blue). c) Row-projected DOS from the integration of the total DOS around the band-edge states (shaded areas of panel b) and charge density associated with the TV and BC states at \(\Gamma\). Figure 2: Energy gap of graphene nanoribbons as a function of the width \(N_{a}\). Relaxed calculations (blue solid line), unrelaxed (red dashed line) and tight-binding numerical solution (black dotted) with parameters indicated in Table 1. The three families are reported with different symbols. A blue arrow at \(N_{a}=8\) indicates the nanoribbon chosen for the analysis presented in Figure 3. Similarly to what was done before, in Figure 5 we report the band structure, the projected DOS and the row-resolved DOS of the TV and BC states of the representative ABNNR8 system. We verify that the TV and the BC states are formed essentially of N-centered and B-centered \(p_{z}\) orbitals respectively. The row-projected DOS of both TV and BC, reported in panel c), shows again a very nice static-wave-like modulation with nodes in rows 3 and 6, but at variance with the AGNR8 case, here the TV and BC states localize differently: while the TV states are delocalised on the entire nanoribbon as in the previous case, the BC states are clearly peaked at the edges. 
The visualization of the associated charge density confirms that the TV state is characterised by a wavefunction equally delocalised on all the N atoms except those on rows 3 and 6. Instead, the BC state presents a wavefunction more concentrated on the edge B atoms, with non-negligible tails touching the passivating H and the edge nitrogens, in contrast to the isolated monolayer. The comparative study of the TV and BC states of AGNRs and ABNNRs suggests that the gap of the two materials responds differently to modifications of the morphology and the passivation of the edges. To test this intuition, we have performed a detailed analysis by separating the two effects. ## IV Morphology vs chemistry of the edges ### Distinguishing the effects through selective relaxation in DFT Several investigations can be found in the literature on the effects of edge reconstruction on the gapwidth of AGNR and ABNNR [19; 21; 22; 23; 24; 25; 26; 6; 12]. However, a study that systematically compares the effects of passivation and edge morphology is absent. Here we monitor the gapwidth in the family \(N_{a}=3m-1\) by relaxing separately the H-X distances \(d_{HX}\) (\(X\) = C, B or N) and the C-C or B-N distance on the edges \(d_{E}\). We calculated the data for the other two families as well, but we do not report them because they show qualitatively the same behaviour. In Figure 6, a variation of \(d_{HX}\) is represented by a change in the line's type (color and dash), while a variation of \(d_{E}\) is represented by a change in the symbols (colour filled or empty). Let us examine first the case of AGNRs in panel a). We can start from a non-relaxed configuration where all atoms are equidistant, \(d_{HC}\)=\(d_{E}\)=1.42 A (empty bullets, red dashed line), and then reduce \(d_{HC}\) to its relaxed value of 1.08 A (empty bullets, blue solid line). We observe that there is basically no variation of the AGNRs' gapwidth. Instead, contracting the edge bonds from \(d_{E}\)=1.42 A to \(d_{E}\)=1.36 A opens the gap by around 0.15 eV, irrespective of the value of \(d_{HC}\). Figure 4: Energy gap of BN nanoribbons as a function of the size \(N_{a}\). Relaxed DFT (blue solid line), unrelaxed (red dashed line) and the numerical tight-binding solution (Table 1). The three families are reported with different symbols. Horizontal dashed lines indicate the gapwidth of the DFT hBN sheet (4.69 eV) and the asymptotic \(N_{a}=\infty\) limit (\(\sim\)3.885 eV). The blue arrow pointing at the ABNNR8 indicates the system analysed in Figure 5. Inset: extrapolation of non-relaxed calculations at \(1/N_{a}=0\). The red arrow in the inset indicates the \(N_{a}=\infty\) limit as the weighted average of the extrapolation of the three families. Figure 5: Electronic structure of the relaxed ABNNRs. a) band structure; b) total density of states (thick black) and projected on \(p_{z}\) orbital character (red and green dotted for B and N states) compared to the hBN sheet DOS (dashed blue). c) Row-projected DOS integrated around the band-edge states (shaded areas of panel b). Insets: charge density of the TV and BC states at \(\Gamma\). Consequently, we conclude that in AGNRs the variations of the gapwidth induced by the relaxation and reported in Figure 2 are essentially due to changes of the bond length \(d_{E}\) between C atoms at the edge. Interestingly, this gap opening is approximately independent of the width of the ribbon. Passing now to the study of ABNNRs (bottom panel), we observe an opposite behaviour. 
The gapwidth undergoes very small changes upon relaxation of \(d_{E}\), whereas the passage from the unrelaxed H-B and H-N distance (1.42 A) to the relaxed values clearly opens the gap by about 0.8 eV. To be more precise, by changing separately the two distances \(d_{HB}\) and \(d_{HN}\) (not shown), we found that it is the bonding between H and B that plays the major role in the opening of the gapwidth, indicating a dominant contribution from conduction states, consistent with the observations we drew from Figure 5. According to this analysis, the gapwidth of ABNNRs is more sensitive to the passivation than to the very morphology of the edge. Once again we notice that the gap opening is basically independent of \(N_{a}\). This clarifies why our non-relaxed DFT gapwidths look very similar to the non-passivated results of Topsakal and coworkers [17]. ### Unperturbed tight-binding model To investigate further the reasons for this different behaviour, we generalise a ladder tight-binding model, introduced initially for AGNRs, to the heteroatomic case. Changes in the edge passivation and morphology will subsequently be introduced through variations of the on-site and hopping parameters of the model, as suggested in [6; 12], and the modified Hamiltonian solved both numerically and perturbatively [12]. Following references [6; 7; 8; 10; 12; 16; 20], the gap of an armchair nanoribbon whose TV and BC states are formed of \(p_{z}\) orbitals can be described with a ladder tight-binding model such as the one reported in Figure 7. The Hamiltonian of the model reads: \[H^{0}=\sum_{j,\mu}\left(\epsilon_{\mu j}\ket{\Phi_{\mu j}}+\sum_{j^{\prime},\mu^{\prime}}t_{\mu\mu^{\prime}jj^{\prime}}\ket{\Phi_{\mu^{\prime}j^{\prime}}}\right)\bra{\Phi_{\mu j}}. \tag{1}\] The index \(j\in[1,N_{a}]\) labels the position of a dimer in the \(x\) coordinate (row coordinate), while \(\mu=1,2\) indicates the atomic site within the dimer (\(C_{1}\) or \(C_{2}\) in AGNRs and \(B\) or \(N\) in ABNNRs). The basis function \(\langle\mathbf{r}|\Phi_{\mu j}\rangle=\Phi_{\mu}(\mathbf{r}-\mathbf{r}_{j})\) is the \(p_{z}\) orbital of the atom \(\mu\) of the dimer placed at \(\mathbf{r}_{j}=\hat{x}(j-1)a\). For \(\mu=1\), \(\Phi_{\mu}(\mathbf{r}-\mathbf{r}_{j})\) is centered on the bottom rung if \(j\) is odd and on the upper rung if \(j\) is even, and the opposite for \(\mu=2\). At the unperturbed level, \(\epsilon_{\mu j}\) does not depend on the row index \(j\) and is equal to \(\epsilon\) for \(\mu=1\) and \(-\epsilon\) for \(\mu=2\), with \(\epsilon\geq 0\). In the first-neighbour approximation, the hopping term \(t_{\mu\mu^{\prime}jj^{\prime}}=t\in\mathbb{R}\) if \(\mu\neq\mu^{\prime}\) and \(j-1\leq j^{\prime}\leq j+1\), and vanishes otherwise. The unperturbed solutions of this model are: \[E^{0}_{n\pm}=\pm\sqrt{\epsilon^{2}+\tau_{n}^{2}}=\pm\mathcal{E}_{n}\,, \tag{2}\] where \(\tau_{n}=t\left[1+2\cos\left(\theta_{n}\right)\right]\), the discrete index \(n\) comes from the confinement in the \(x\) direction and \(\theta_{n}=n\pi/(N_{a}+1)\). The eigenfunctions associated with these states read \[\Psi_{n\pm}=\sum_{j=1}^{N_{a}}\sum_{\mu=1,2}\sin\left(j\theta_{n}\right)D_{\mu}^{n\pm}\Phi_{\mu}(\mathbf{r}-\mathbf{r}_{j}) \tag{3}\] with \[\begin{split} D_{1}^{n\pm}&=\sqrt{\frac{\mathcal{E}_{n}\pm\epsilon}{(N_{a}+1)\mathcal{E}_{n}}}\\ D_{2}^{n\pm}&=\pm\text{sgn}\left(\tau_{n}\right)\sqrt{\frac{\mathcal{E}_{n}\mp\epsilon}{(N_{a}+1)\mathcal{E}_{n}}}\end{split} \tag{4}\] where the function \(\text{sgn}\left(x\right)=1\) if \(x\geq 0\) and \(-1\) if \(x<0\). 
At this point, it is worth stressing two aspects. First, if one poses \(\tau_{n}=0\), then the Hamiltonian becomes diagonal and equivalent to that of a non-interacting system. Consistently, the coefficients \(D_{\mu}^{n\pm}\) become those of a pure system: \(D_{1}^{n+}=-D_{2}^{n-}=\sqrt{2/(N_{a}+1)}\) and \(D_{1}^{n-}=D_{2}^{n+}=0\). If instead one takes the homoatomic limit, i.e. \(\epsilon\to 0\), then the coefficients become a bonding and antibonding pair, with \(D_{1}^{n\pm}=1/\sqrt{N_{a}+1}\) and \(D_{2}^{n\pm}=\pm\mathrm{sgn}\left(\tau_{n}\right)/\sqrt{N_{a}+1}\). The last occupied state (TV) \(\ket{\tilde{n},-}\) and the first empty state (BC) \(\ket{\tilde{n},+}\) are found at the integer quantum number \(\tilde{n}\) that minimizes the quantity \(\mathcal{E}_{n}\), i.e. that minimizes \(|\tau_{n}|\). If \(N_{a}=3m\) or \(3m+1\) with \(m\in\mathbb{N}^{*}\), then \(\tilde{n}=2m+1\). Note that the interacting term \(\tau_{2m+1}\) changes sign in passing from one family to the other. Instead, if \(N_{a}=3m-1\), then the integer \(\tilde{n}=2m\) and \(\tau_{\tilde{n}}=0\). These considerations lead to the unperturbed gap of a heteroatomic system (\(\epsilon>0\)): \[\Delta_{N_{a}}^{0}=\left\{\begin{array}{ll}2\epsilon&\text{for $N_{a}=3m-1$}\\ 2\mathcal{E}_{2m+1}&\text{for the other values of $N_{a}$}\end{array}\right. \tag{5}\] and the eigenstates of the TV and BC of the \(N_{a}=3m-1\) family are pure states. The gap of a homoatomic system (\(\epsilon=0\)) reads: \[\Delta_{N_{a}}^{0}=\left\{\begin{array}{ll}0&\text{for $N_{a}=3m-1$}\\ 2|\tau_{2m+1}|&\text{for the other values of $N_{a}$}\end{array}\right. \tag{6}\] and the eigenstates of the TV and BC of the \(N_{a}=3m-1\) family are the bonding and antibonding combinations of \(C_{1}\) and \(C_{2}\). ### Distinguishing the effects through perturbation theory As in [6; 12], we now add to \(H^{0}\) a perturbation Hamiltonian \(\delta H\) which consists in adding \(\delta t\) to the hopping term connecting the atoms of the edge rows (\(j=1,N_{a}\)) and in changing their on-site energy by \(\delta\epsilon_{\mu}\). The hopping perturbation \(\delta t\) accounts for changes in \(d_{E}\), so it is more strongly related to the edge morphology, while the on-site one \(\delta\epsilon\) takes into account variations of \(d_{HX}\) and of the passivating species. The perturbative correction to the energy of the generic state \(|n\pm\rangle\) reads \[\langle n,\pm|\delta H|n,\pm\rangle=2\sin^{2}(\theta_{n})\left[(D_{1}^{n\pm})^{2}\delta\epsilon_{1}+(D_{2}^{n\pm})^{2}\delta\epsilon_{2}+2D_{1}^{n\pm}D_{2}^{n\pm}\delta t\right] \tag{7}\] In the heteroatomic case \(\epsilon>0\), the perturbative correction to the gap is always \(\delta\Delta=\langle\tilde{n},+|\delta H|\tilde{n},+\rangle-\langle\tilde{n},-|\delta H|\tilde{n},-\rangle\). Figure 6: Gapwidth of the \(N_{a}=3m-1\) family of a) AGNRs and b) ABNNRs. Full (empty) symbols stand for relaxed (non-relaxed) edge-atom bondings. Blue solid (red dashed) lines for relaxed (non-relaxed) passivating-to-edge-atom bondings. Figure 7: Scheme of the ladder model of width \(N_{a}=8\). The first-neighbours distance is \(a\), and the index \(j\) defines the position of a dimer. Atoms \(\mu=1\) are placed above \(\mu=2\) if \(j\) is even, below if \(j\) is odd. 
Using (7), the coefficients (4) or their appropriate limit, and remembering that \(\Delta_{N_{a}}^{0}=2\mathcal{E}_{\tilde{n}}\), the gap correction for the case \(\epsilon>0\) reads \[\delta\Delta=\left(\delta\epsilon_{1}-\delta\epsilon_{2}\right)/m \tag{8}\] for \(N_{a}=3m-1\); and \[\delta\Delta=\frac{8\sin^{2}\left(\theta_{2m+1}\right)}{(N_{a}+1)\Delta^{0}}\left[\epsilon\left(\delta\epsilon_{1}-\delta\epsilon_{2}\right)+2\tau_{2m+1}\delta t\right] \tag{9}\] for \(N_{a}=3m\) and \(N_{a}=3m+1\). Notice that, by construction, \(\tau_{2m+1}\) is the closest to zero among the accessible values, so the term \(2\tau_{2m+1}\delta t\) is always negligible. The result shows that in ABNNRs the variations of the gap are mostly due to the chemical environment of the edge atoms. This dependence comes ultimately from an interference between the TV and the BC wavefunctions. These two states are very close to pure states, so the mixed products \(D_{1}^{+}D_{2}^{+}\) and \(D_{1}^{-}D_{2}^{-}\) of equation (7) are systematically negligible, and they actually vanish in the family \(N_{a}=3m-1\), where the two states are perfectly pure. In the homoatomic case (\(\epsilon=0\)) the corrected gap can be obtained following the same approach as before, taking the appropriate limits of the coefficients (4). However, more attention must be paid when studying the family \(N_{a}=3m-1\). In fact, this case corresponds to the double limit \(\epsilon\to 0\) and \(\tau_{n}\to 0\). Even though the final eigenvalues do not depend on the order in which the two limits are taken, the eigenstates do, and therefore the perturbative corrections also depend on this choice. In DFT calculations and experiments, the system is well defined from the start, because one works either with ABNNRs or with AGNRs. So, for comparisons with DFT to make sense, the right order in which the limits must be taken is: first \(\epsilon\to 0\), followed by \(\tau_{n}\to 0\). Finally, one has to pay attention to another point: in the \(N_{a}=3m-1\) family, the TV and the BC states are degenerate and the unperturbed gap is 0. So there is no reason to define \(\delta\Delta=\langle\tilde{n},+|\delta H|\tilde{n},+\rangle-\langle\tilde{n},-|\delta H|\tilde{n},-\rangle\) rather than its opposite. However, the correction must be positive, so it must be defined as the modulus of the difference above. Putting all these things together, one gets for the homoatomic (\(\epsilon=0\)) case \[\delta\Delta=\left\{\begin{array}{ll}\frac{2}{m}|\delta t|&\text{for $N_{a}=3m-1$}\\ \mathrm{sgn}\left(\tau_{2m+1}\right)\frac{8\sin^{2}(\theta_{2m+1})}{(N_{a}+1)}\delta t&\text{otherwise}\end{array}\right. \tag{10}\] This result shows that in AGNRs most of the variation of the gap is accounted for by \(\delta t\), that is, by morphological changes of the bonding between edge atoms, and not by changes of their chemical environment. Once again this result can be understood from the symmetries of the TV and BC wavefunctions. In fact, when \(\epsilon=0\), the TV and BC states are perfect bonding and antibonding combinations at any \(N_{a}\), so their difference causes the terms in \((D_{\mu}^{n\pm})^{2}\) of equation (7) to always cancel out. This result, although in perfect agreement with [12], seems to be in blatant contradiction with results from 2H-passivated AGNRs [26], where the gap is found to be independent of the C-C edge distance. 
Actually, these systems present a hybridisation of the \(sp^{3}\) type and their gapwidth cannot be described by this model. ### Validation of the perturbative approach Besides the perturbative approach, we also solved the perturbed Hamiltonian \(H=H^{0}+\delta H\) numerically. For the unperturbed problem, we parametrized the model with values that fit the band structure of the isolated graphene and hBN monolayers. Instead, the perturbation parameters \(\delta\epsilon\) and \(\delta t\) have been adjusted to recover as closely as possible the DFT curves reported in Figures 2 and 4. The best parameters are reported in Table 1. Subsequently, we explored how the gap changes upon variations of the perturbative parameters \(\delta t\) and \(\delta\epsilon_{\mu}\) in the range [-1 eV, +1 eV] in the nanoribbons of width \(N_{a}\)=11, 12 and 13, i.e. one representative nanoribbon per family. Guided by physical intuition, we took \(\delta\epsilon_{1}=\delta\epsilon_{2}=\delta\epsilon\) in the case of AGNRs, and \(\delta\epsilon_{1}=-\delta\epsilon_{2}=\delta\epsilon\) in the case of ABNNRs. Globally, the numerical and the perturbative gapwidths are in very good agreement for both ABNNRs and AGNRs in the range explored, confirming our conclusions. In all cases, the numerical solution displays a quadratic trend with respect to \(\delta\epsilon\) which adds on top of the invariance (AGNR) or the linear dependence (ABNNR) predicted by the perturbative approach. The deviations between the two approaches are larger for this parameter than for \(\delta t\), with the largest deviations of the order of 0.2 eV in the \(N_{a}=3m\) and \(N_{a}=3m+1\) families of ABNNRs. Instead, the deviations for the parameter \(\delta t\) are in general very small and never larger than 0.1 eV. Note however that for extreme values of \(\delta t\), the numerical solution may undergo a band crossing in the top valence and the bottom conduction which would lead to a sudden closing of the gap, as is the case at \(\delta t=-0.9\) in AGNR13 and \(\delta t=0.9\) in AGNR12. This physics is not accessible in our first-order expansion and clearly sets the limit of applicability of the perturbative approach. ## V Conclusion We have calculated with DFT the gapwidth of graphene and boron nitride armchair nanoribbons (AGNRs and ABNNRs) for ribbon sizes going from \(N_{a}=5\) rows to \(N_{a}=19\) rows, both for relaxed and unrelaxed structures. We have relaxed selectively specific interatomic distances and reported how the gapwidth changes upon variations of the bondlength with passivating atoms (chemistry-driven changes) and between edge atoms (morphology-driven changes). Thanks to this selective relaxation, we showed that the variations of the gapwidth in AGNRs are morphology-driven, while in ABNNRs they are chemistry-driven. To understand why, we adopted and extended the tight-binding approach introduced by Son and coworkers [12] and we demonstrated that the interference between the wavefunctions of the top valence and the bottom conduction is at the origin of these two distinct responses. In the AGNR case, these states are basically a bonding and antibonding pair. As the two states are equally distributed on the atoms, the difference between BC and TV leads to a mutual cancellation of on-site changes, and only hopping terms survive. This explains the stronger dependence of the gapwidth on interatomic distances, and hence on the morphology of the edges rather than the chemical environment. 
At variance, in the ABNNR case, the TV and the BC states are basically pure states and the effective Hamiltonian is quasi non-interacting. As a result, the two states are mostly insensitive to variations in the hopping term and are instead strongly affected by on-site variations (chemical environment). Our results can help push further the research on nanoribbon-based devices, as they clarify the role played by edge engineering and selective passivation, and provide the tools to investigate more complex scenarios.
There are many ways of engineering the band gap of nanoribbons, including the application of stress, electric fields and functionalization of the edges. In this article, we investigate separately the effects of these methods on armchair graphene and boron nitride nanoribbons. By means of density functional theory calculations, we show that, despite their similar structure, the two materials respond in opposite ways to these stimuli. By treating them as perturbations of a heteroatomic ladder model based on the tight-binding formalism, we connect the two behaviours to the different symmetries of the top valence and bottom conduction wave functions. These results indicate that opposite and complementary strategies are preferable to engineer the gapwidth of armchair graphene and boron nitride nanoribbons.
2309.16157
Sampling Methods for Inner Product Sketching
Recently, Bessa et al. (PODS 2023) showed that sketches based on coordinated weighted sampling theoretically and empirically outperform popular linear sketching methods like Johnson-Lindenstrauss projection and CountSketch for the ubiquitous problem of inner product estimation. We further develop this finding by introducing and analyzing two alternative sampling-based methods. In contrast to the computationally expensive algorithm in Bessa et al., our methods run in linear time (to compute the sketch) and perform better in practice, significantly beating linear sketching on a variety of tasks. For example, they provide state-of-the-art results for estimating the correlation between columns in unjoined tables, a problem that we show how to reduce to inner product estimation in a black-box way. While based on known sampling techniques (threshold and priority sampling) we introduce significant new theoretical analysis to prove approximation guarantees for our methods.
Majid Daliri, Juliana Freire, Christopher Musco, Aécio Santos, Haoxiang Zhang
2023-09-28T04:13:52
http://arxiv.org/abs/2309.16157v4
# Sampling Methods for Inner Product Sketching ###### Abstract. Recently, Bessa et al. (PODS 2023) showed that sketches based on coordinated weighted sampling theoretically and empirically outperform popular linear sketching methods like Johnson-Lindenstrauss projection and CountSketch for the ubiquitous problem of inner product estimation. Despite decades of literature on such sampling methods, this observation seems to have been overlooked. We further develop the finding by presenting and analyzing two alternative sampling-based inner product sketching methods. In contrast to the computationally expensive algorithm in Bessa et al., our methods run in linear time (to compute the sketch) and perform better in practice, significantly beating linear sketching on a variety of tasks. For example, they provide state-of-the-art results for estimating the correlation between columns in unjoined tables, a problem that we show how to reduce to inner product estimation in a black-box way. While based on known sampling techniques (threshold and priority sampling), we introduce significant new theoretical analysis to prove approximation guarantees for our methods. ## 1. Introduction We study methods for approximating the inner product \(\langle\mathbf{a},\mathbf{b}\rangle=\sum_{i=1}^{n}\mathbf{a}_{i}\mathbf{b}_{i}\) between two length \(n\) vectors \(\mathbf{a}\) and \(\mathbf{b}\). In particular, we are interested in algorithms that independently compute compact _sketches_ \(\mathcal{S}(\mathbf{a})\) and \(\mathcal{S}(\mathbf{b})\) of \(\mathbf{a}\) and \(\mathbf{b}\), and approximate \(\langle\mathbf{a},\mathbf{b}\rangle\) using only the information in these sketches. \(\mathcal{S}(\mathbf{a})\) and \(\mathcal{S}(\mathbf{b})\) should take much less than \(n\) space to store, allowing them to be quickly retrieved from disk or transferred over a network. Additionally, both the sketching procedure \(\mathbf{a}\rightarrow\mathcal{S}(\mathbf{a})\) and the estimation procedure that returns an approximation to \(\langle\mathbf{a},\mathbf{b}\rangle\) should be computationally efficient. Sketching methods for the inner product have been studied for decades and find applications throughout data science and database systems. They can be used to quickly compute document similarity, to speed up the evaluation of machine learning models, and to estimate quantities like join size (Bessa et al., 2016; Datta et al., 2017). Recently, inner product sketching has found applications in scalable dataset search and augmentation, where sketches can be used to estimate correlations between columns in unjoined tables (Sandhi, 2018). ### Prior Work **Inner Product Estimation via Linear Sketching.** Until recently, all sketching algorithms with strong worst-case accuracy guarantees for approximating the inner product between arbitrary inputs were based on _linear sketching_. Such methods include Johnson-Lindenstrauss random projection (JL) (Bessa et al., 2016; Datta et al., 2017), the closely related AMS sketch (Datta et al., 2017), and the CountSketch algorithm (Datta et al., 2017). These methods are considered "linear" because the sketching operation \(\mathbf{a}\rightarrow\mathcal{S}(\mathbf{a})\) is a linear map, meaning that \(\mathcal{S}(\mathbf{a})=\Pi\mathbf{a}\) for a matrix \(\Pi\in\mathbb{R}^{m\times n}\). \(\Pi\) is typically chosen at random and its row count \(m\) is equal to the size of the sketch \(\mathcal{S}(\mathbf{a})\). 
To estimate the inner product between \(\mathbf{a}\) and \(\mathbf{b}\), the standard approach is to simply return \(\langle\mathcal{S}(\mathbf{a}),\mathcal{S}(\mathbf{b})\rangle=\langle\Pi\mathbf{a},\Pi\mathbf{b}\rangle\). For all common linear sketching methods (including those listed above), it can be shown (see e.g., (Datta et al., 2017)) that, if we choose the sketch size \(m=O\left(1/\epsilon^{2}\right)\), then with high probability: \[|\langle\mathcal{S}(\mathbf{a}),\mathcal{S}(\mathbf{b})\rangle-\langle\mathbf{a},\mathbf{b}\rangle|\leq\epsilon\|\mathbf{a}\|_{2}\|\mathbf{b}\|_{2}. \tag{1}\] Here \(\|\mathbf{x}\|_{2}=\sqrt{\sum_{i=1}^{n}\mathbf{x}_{i}^{2}}\) denotes the Euclidean norm of a vector \(\mathbf{x}\). **Better Accuracy via Weighted MinHash Sketch.** While (1) is a strong accuracy guarantee, it was recently improved by Bessa et al. (2016), who introduce a method based on the popular Weighted MinHash (WMH) algorithm (Han et al., 2017; Datta et al., 2017). Like unweighted MinHash sketches, and similar techniques such as conditional random sampling (Datta et al., 2017), Bessa et al.'s WMH sketch consists of a subsample of entries from \(\mathbf{a}\) and \(\mathbf{b}\) that can be used to approximate the inner product. Importantly, entries with higher absolute value are sampled and added to the sketch with higher probability, since they can contribute more to the inner product sum \(\langle\mathbf{a},\mathbf{b}\rangle=\sum_{i=1}^{n}\mathbf{a}_{i}\mathbf{b}_{i}\). Using sketches of size \(O(1/\epsilon^{2})\), WMH achieves accuracy: \[|\langle\mathcal{S}(\mathbf{a}),\mathcal{S}(\mathbf{b})\rangle-\langle\mathbf{a},\mathbf{b}\rangle|\leq\epsilon\max\,(\|\mathbf{a}_{\mathcal{I}}\|_{2}\|\mathbf{b}\|_{2},\|\mathbf{a}\|_{2}\|\mathbf{b}_{\mathcal{I}}\|_{2})\,. \tag{2}\] Here \(\mathcal{I}=\{i:\mathbf{a}[i]\neq 0\text{ and }\mathbf{b}[i]\neq 0\}\) is the set of all indices in the _intersection_ of the supports of \(\mathbf{a}\) and \(\mathbf{b}\), and \(\mathbf{a}_{\mathcal{I}}\) and \(\mathbf{b}_{\mathcal{I}}\) denote the vectors restricted to the indices in \(\mathcal{I}\).1 Since we always have \(\|\mathbf{a}_{\mathcal{I}}\|_{2}\leq\|\mathbf{a}\|_{2}\) and \(\|\mathbf{b}_{\mathcal{I}}\|_{2}\leq\|\mathbf{b}\|_{2}\), the error in (2) is always less than or equal to the error in (1) for the linear sketching methods. As confirmed by experiments in (Han et al., 2017), the improvement over linear sketching methods like JL and CountSketch can be significant in applications where \(\mathbf{a}\) and \(\mathbf{b}\) are sparse and their non-zero entries only overlap at a small fraction of indices, i.e., when \(|\mathcal{I}|\) is much smaller than the number of non-zeros in \(\mathbf{a}\) and \(\mathbf{b}\). This is commonly the case when inner product sketches are used for data discovery, either to estimate join-sizes or correlations between unjoined tables (Han et al., 2017; Wang et al., 2018; Wang et al., 2019). In these applications, overlap between non-zeros in \(\mathbf{a}\) and \(\mathbf{b}\) corresponds to overlap between the keys of the tables being joined, which is often small. **Limitations of WMH sketches.** While WMH provides better accuracy than linear sketching, it has important limitations. Notably, the method has high computational complexity, requiring \(O(Nm\log n)\) time to produce a sketch of size \(m\) from a length \(n\) vector \(\mathbf{a}\) with \(N\leq n\) non-zero entries. 
While this nearly matches the \(O(Nm)\) complexity of JL projection or AMS sketch (which require multiplying \(\mathbf{a}\) by a dense matrix), it is far slower than methods like CountSketch or the \(k\)-minimum values (KMV) sketch (Kurthy et al., 2017), which can be applied in \(O(N)\) or \(O(N\log m)\) time, respectively.2 It is possible to reduce the complexity of WMH to \(O(N+m\log m)\) using recent work (Han et al., 2018; Wang et al., 2019). However, as shown in Section 5, even these theoretically faster methods are orders of magnitude slower in practice than the simpler sketches introduced in our work. Footnote 2: The KMV sketch is not typically thought of as a sketch for estimating inner products between arbitrary vectors, but can be modified to do so. See (Han et al., 2017) for details. Beyond computational cost, another disadvantage of WMH is that it is complex, both to implement and analyze. For example, while a high probability bound is obtained in (Han et al., 2017), they are unable to directly analyze the variance of the method. This makes it difficult, for example, to use tools like Chebyshev's inequality or Gaussian approximation to compute confidence intervals for estimated inner products. Moreover, the WMH requires careful discretization of the vectors being sketched. This complexity leads to large constant factors in the results of (Han et al., 2017). While such factors do not impact the Big O claim that a sketch of size \(O(1/\epsilon^{2})\) achieves error guarantee (2), they matter a lot in practice. Practical accuracy of the method is also negatively impacted by the fact that it samples entries from a and \(\mathbf{b}\)_with replacement_, which can lead to unnecessary redundancy in the sketches \(\mathcal{S}(\mathbf{a})\) and \(\mathcal{S}(\mathbf{b})\). ### Our Contributions **Methods and Theory.** In this paper, we present and analyze two algorithms for inner product sketching that eliminate the limitations of WMH sketches, while maintaining the same strong theoretical guarantees. Both are based on existing methods for weighted sampling of vectors _without replacement_, but our choice of sampling probabilities, estimation procedure, and theoretical analysis are new, and tailored to the problem of inner product estimation. The first method we study is based on _Threshold Sampling_(Han et al., 2017; Wang et al., 2019). We show that, when used to sample vector entries with probability proportional to their squared value, this method produces inner product sketches that yield the same accuracy guarantee as WMH sketches. At the same time, the method is extremely simple to implement and can be applied to a vector with \(N\) non-zero entries in linear \(O(N)\) time. Moreover, unlike WMH, the analysis of the method is completely straightforward. Its only disadvantage is that Threshold Sampling produces sketches that _randomly vary_ in size. The user can specify a parameter \(m\) and is guaranteed that the sketch has size \(m\) in _expectation_, and will not exceed \(m+O(\sqrt{m})\) with high-probability. However, there is no hard bound. We address this drawback with an alternative method based on Priority Sampling, which has been widely studied in the sketching, streaming, and statistics literature (Han et al., 2017; Wang et al., 2019; Wang et al., 2019). Priority Sampling offers a hard sketch size bound and can be implemented in near-linear \(O(N\log m)\) time to produce a sketch of size \(m\). 
Priority Sampling is significantly more challenging to analyze than Threshold Sampling. However, by introducing a new estimation procedure and building on a recent analysis of Priority Sampling for a different problem (subset sum estimation) (Wang et al., 2019), we are able to show that it enjoys the same guarantees as WMH. Our analysis of Priority Sampling is the main theoretical contribution of this paper. **Experimental Results.** In addition to our theoretical analysis, we experimentally compare Threshold and Priority Sampling with linear sketching algorithms like JL random projections and CountSketch, as well as sampling-based sketches like \(k\)-Minimum values, MinHash, and WMH. We evaluate these on a variety of applications, including join size estimation and correlation estimation between unjoined tables. For the second problem, we introduce an approach to perform so-called "join-correlation estimation" (Wang et al., 2019) using _any_ inner product sketching method. This approach is outlined in Section 4, and we believe may be of independent interest. \begin{table} \begin{tabular}{|l|l|l|l|} \hline **Method** & \begin{tabular}{l} **High probability error guarantee** \\ **to few sketch of size \(m=O(1/\epsilon^{2})\)** \\ \end{tabular} & \begin{tabular}{l} **Time to compute sketch for length** \\ **\(n\) vector with \(N\) non-zero entries** \\ \end{tabular} & \begin{tabular}{l} **Strict bound on** \\ **sketch size?** \\ \end{tabular} \\ \hline JL Projection/AMS Sketch (Han et al., 2017; Wang et al., 2019) & \(\epsilon\cdot\|\mathbf{a}\|_{2}\|\mathbf{b}\|_{2}\) & \(O(Nm)\) & \(\bigvee\) \\ \hline CountSketch (Kurthy et al., 2017) & \(\epsilon\cdot\|\mathbf{a}\|_{2}\|\mathbf{b}\|_{2}\) & \(O(N)\) & \(\bigvee\) \\ \hline Weighted MinHash (WMH) (Han et al., 2017) & \(\epsilon\cdot\max\left(\|\mathbf{a}_{T}\|_{2}\|\mathbf{b}\|_{2},\|\mathbf{a}\|_{2 }\|\mathbf{b}_{T}\|_{2}\right)\) & \(O(Nm\log n)\) & \(\bigvee\) \\ \hline **Threshold Sampling** & \(\epsilon\cdot\max\left(\|\mathbf{a}_{T}\|_{2}\|\mathbf{b}\|_{2},\|\mathbf{a}\|_{2 }\|\mathbf{b}_{T}\|_{2}\right)\) & \(O(N)\) & \(\bigvee\) \\ \hline **Priority Sampling** & \(\epsilon\cdot\max\left(\|\mathbf{a}_{T}\|_{2}\|\mathbf{b}\|_{2},\|\mathbf{a}\|_{2 }\|\mathbf{b}_{T}\|_{2}\right)\) & \(O(N\log m)\) & \(\bigvee\) \\ \hline \end{tabular} \end{table} Table 1. Comparison of error guarantees and computational cost for sketching methods when used to estimate the inner product between vectors \(\mathbf{a}\) and \(\mathbf{b}\). Note that \(\epsilon\cdot\max\left(\|\mathbf{a}_{T}\|_{2}\|\mathbf{b}\|_{2},\|\mathbf{a}\|_{2 }\|\mathbf{b}_{T}\|_{2}\right)\) is always a better guarantee than \(\epsilon\cdot\|\mathbf{a}\|_{2}\|\mathbf{b}\|_{2}\), and often significantly so when \(\mathbf{a}\) and \(\mathbf{b}\) are sparse with limited overlap between non-zero entries. Our Threshold and Priority Sampling methods obtain this better bound while matching or nearly matching the fast runtime of the less accurate CountSketch method. Both our Threshold and Priority Sampling methods offer significantly better accuracy than the baselines, beating both linear sketches and WMH sketches. This indicates that, despite having identical worst-case accuracy guarantees, the hidden constant factors are smaller for our methods than for WMH. An optimized version of our sketches tailored to the application of join-correlation estimation outperforms the recently introduced Correlation Sketches method from (Wang et al., 2017), which is based on the KMV algorithm. 
We also test the run-time efficiency of Threshold and Priority Sampling for sketch construction. Even when WMH is implemented using the efficient DartMinHash algorithm (Dai et al., 2017), our methods are still faster by more than an order of magnitude. **Our Approach.** Our methods are based on existing algorithms for weighted sampling without replacement. As in (Bang et al., 2017), the idea is to collect samples from \(\mathbf{a}\) and \(\mathbf{b}\), to store these samples in the sketches \(\mathcal{S}(\mathbf{a})\) and \(\mathcal{S}(\mathbf{b})\), and then to estimate the inner product \(\sum_{i=1}^{n}\mathbf{a}_{i}\mathbf{b}_{i}\) using only a subset of terms in the sum. Specifically, our estimators will be of the form \(\sum_{j\in\mathcal{T}}w_{j}\cdot\mathbf{a}_{j}\mathbf{b}_{j}\), where \(\mathcal{T}\) is a subset of \(\{1,\ldots,n\}\) and \(\{w_{j},j\in\mathcal{T}\}\) are appropriately chosen positive weights. To compute this estimate, we need to store _both_\(\mathbf{a}_{j}\) in \(\mathcal{S}(\mathbf{a})\) and \(\mathbf{b}_{j}\) and \(\mathcal{S}(\mathbf{b})\). If \(\mathbf{a}\) and \(\mathbf{b}\) are sampled independently at random, the probability of obtaining matching indices in both sketches would be small, thus leading to a small number of usable samples, and a poor inner product estimate. To avoid this issue, our Threshold and Priority Sampling methods use shared random seeds to sample from the vectors in a _coordinated way_, which ensures that if entry \(\mathbf{a}_{j}\) is sampled from \(\mathbf{a}\), it is more likely that the corresponding \(\mathbf{b}_{j}\) is sampled from \(\mathbf{b}\). This idea is not new: coordinated variants of Threshold and Priority Sampling have been studied in prior work on different problems, as have coordinated variants of related methods like PPSWOR sampling (Pasascalis et al., 2017; Wang et al., 2017). What _is new_ is how we apply and analyze such methods for the problem of inner product estimation. Besides WMH (Bang et al., 2017), we are only aware of one prior paper that addresses the inner product estimation problem using coordinated sampling: the "End-Biased Sampling" algorithm of (Kang et al., 2017) can be viewed as a variant of Threshold Sampling where the \(i^{\text{th}}\) entry of \(\mathbf{a}\) is sampled with probability proportional to the magnitude \(|\mathbf{a}_{i}|\). We instead use the squared magnitude \(|\mathbf{a}_{i}|^{2}\). While variance bounds are shown in (Kang et al., 2017), due to this choice of sampling probability, they fall short of improving on results for linear sketches, i.e., on Eq. (1). Additionally, unlike our work, (Kang et al., 2017) does not address the issue of how to obtain a fixed-size sketch. We discuss End-Biased Sampling further in Section 5 and fully review related work in Section 6. 
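To make the coordinated-sampling idea concrete, here is a small self-contained numpy sketch of a Threshold-Sampling-style inner product estimator: entries are kept with probability proportional to their squared value, a shared uniform "hash" coordinates the two sketches, and shared indices are reweighted by the inverse of their joint inclusion probability. It is only an illustration of the approach described above; the exact sampling probabilities, estimator and implementation details in the paper may differ. A dense AMS/JL-style baseline of the same size is included for comparison.

```python
import numpy as np

def threshold_sketch(x, m, h):
    """Keep entry i with probability p_i = min(1, m * x_i^2 / ||x||_2^2),
    deciding inclusion with the shared uniform values h (coordination).
    The sketch stores (value, inclusion probability) per kept index."""
    norm_sq = float(np.sum(x ** 2))
    sketch = {}
    for i in np.flatnonzero(x):
        p = min(1.0, m * x[i] ** 2 / norm_sq)
        if h[i] <= p:
            sketch[i] = (x[i], p)
    return sketch

def estimate_inner_product(sa, sb):
    """With a shared hash, index i survives in both sketches with probability
    min(p_i^a, p_i^b); reweighting shared terms by its inverse gives an
    unbiased estimate of the inner product."""
    est = 0.0
    for i in sa.keys() & sb.keys():
        ai, pa = sa[i]
        bi, pb = sb[i]
        est += ai * bi / min(pa, pb)
    return est

rng = np.random.default_rng(0)
n, m = 10_000, 200
support = rng.random(n) < 0.10                       # sparse vectors with partly overlapping supports
z = rng.standard_normal(n)
a = np.where(support, z + 0.5 * rng.standard_normal(n), 0.0)
b = np.where(support & (rng.random(n) < 0.7), z + 0.5 * rng.standard_normal(n), 0.0)

h = rng.random(n)                                    # shared randomness -> coordinated sketches
sa, sb = threshold_sketch(a, m, h), threshold_sketch(b, m, h)

Pi = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)   # dense AMS/JL-style baseline
print("exact        :", round(float(a @ b), 2))
print("threshold    :", round(estimate_inner_product(sa, sb), 2), "(sketch sizes", len(sa), len(sb), ")")
print("JL projection:", round(float((Pi @ a) @ (Pi @ b)), 2))
```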
**Summary + Paper Roadmap.** In summary, our contributions are:

* An analysis of Threshold Sampling showing that it matches the improved error guarantee of WMH for inner product estimation (Table 1) while requiring only linear time to compute.
* A new estimation procedure and analysis for Priority Sampling, showing that it enjoys the same guarantee while providing a strict bound on the sketch size.
* A black-box reduction from join-correlation estimation to inner product estimation that can be combined with any inner product sketch.
* An experimental evaluation of both methods against linear sketching and sampling-based baselines on synthetic and real-world data.

Sections 2 and 3 analyze Threshold and Priority Sampling, respectively; Section 4 presents the join-correlation reduction; Section 5 contains our experiments; and Section 6 reviews additional related work.

## 2. Threshold Sampling

Our first method is based on Threshold Sampling. As detailed in Algorithm 1, the sketch \(\mathcal{S}(\mathbf{a})=\{K_{\mathbf{a}},V_{\mathbf{a}},\tau_{\mathbf{a}}\}\) stores every index \(i\) for which \(h(i)<m\cdot\mathbf{a}_{i}^{2}/\|\mathbf{a}\|_{2}^{2}\), where \(h:\{1,\ldots,n\}\rightarrow[0,1]\) is a uniformly random hash function derived from a shared random seed, along with the corresponding values and the scaling factor \(\tau_{\mathbf{a}}=m/\|\mathbf{a}\|_{2}^{2}\). Given sketches for \(\mathbf{a}\) and \(\mathbf{b}\), Algorithm 2 estimates the inner product as \(\sum_{i\in K_{\mathbf{a}}\cap K_{\mathbf{b}}}\mathbf{a}_{i}\mathbf{b}_{i}/\min(1,\tau_{\mathbf{a}}\mathbf{a}_{i}^{2},\tau_{\mathbf{b}}\mathbf{b}_{i}^{2})\). Our main result on Threshold Sampling is as follows:

**Theorem 1**: _For vectors \(\mathbf{a},\mathbf{b}\in\mathbb{R}^{n}\) and parameter \(m\), let \(\mathcal{S}(\mathbf{a})\) and \(\mathcal{S}(\mathbf{b})\) be sketches returned by Algorithm 1, and let \(W\) be the inner product estimate returned by Algorithm 2 applied to these sketches. We have that \(\mathbb{E}[W]=\langle\mathbf{a},\mathbf{b}\rangle\),_

\[\operatorname{Var}[W]\leq\frac{2}{m}\max\left(\|\mathbf{a}_{\mathcal{I}}\|_{2}^{2}\|\mathbf{b}\|_{2}^{2},\|\mathbf{a}\|_{2}^{2}\|\mathbf{b}_{\mathcal{I}}\|_{2}^{2}\right),\]

_and the expected number of index/value pairs stored satisfies \(\mathbb{E}[|K_{\mathbf{a}}|]\leq m\) and \(\mathbb{E}[|K_{\mathbf{b}}|]\leq m\)._

Under this scheme, entries are selected with probability proportional to their squared, norm-normalized magnitude, so entries with larger magnitude are sampled with higher probability. Note that this is in contrast to "End-Biased Sampling" (Steintein and Tschir) and CountSketch (Borde and McAllester, 2011). At the same time, as we show in Section 5, Threshold Sampling tends to perform better than WMH in practice. We believe there are a number of reasons for this, including the fact that Threshold Sampling selects vector entries without replacement, and the fact that the variance bound in Theorem 1 has a small constant factor of \(2\). We prove Theorem 1 below:

Proof of Theorem 1.: Let \(\mathcal{I}\) denote the set of all indices \(i\) for which \(\mathbf{a}_{i}\neq 0\) and \(\mathbf{b}_{i}\neq 0\). For any \(i\in\mathcal{I}\), let \(\mathbb{I}_{i}\) denote the indicator random variable for the event that \(i\) is included in _both_ \(K_{\mathbf{a}}\) and \(K_{\mathbf{b}}\). \(\mathbb{I}_{i}=1\) if this event occurs and \(0\) if it does not. Note that, for \(i\neq j\), \(\mathbb{I}_{i}\) is independent from \(\mathbb{I}_{j}\), since the hash values \(h(i)\) and \(h(j)\) are drawn uniformly and independently from \([0,1]\). Moreover, we claim that \(\mathbb{I}_{i}\) is equal to \(1\) with probability: \[p_{i}=\min\left(1,\frac{m\cdot\mathbf{a}_{i}^{2}}{\|\mathbf{a}\|_{2}^{2}},\frac{m\cdot\mathbf{b}_{i}^{2}}{\|\mathbf{b}\|_{2}^{2}}\right)=\min(1,\tau_{\mathbf{a}}\cdot\mathbf{a}_{i}^{2},\tau_{\mathbf{b}}\cdot\mathbf{b}_{i}^{2}). \tag{3}\] To see why this is the case, assume without loss of generality that \(\mathbf{a}_{i}^{2}/\|\mathbf{a}\|_{2}^{2}\leq\mathbf{b}_{i}^{2}/\|\mathbf{b}\|_{2}^{2}\). Then, by examining Line 3 of Algorithm 1, we can see that \(i\) is included in \(K_{\mathbf{a}}\) with probability \(\min\left(1,m\cdot\mathbf{a}_{i}^{2}/\|\mathbf{a}\|_{2}^{2}\right)\). Moreover, if \(i\) is included in \(K_{\mathbf{a}}\), it is _guaranteed_ to be included in \(K_{\mathbf{b}}\) since the threshold \(m\cdot\mathbf{b}_{i}^{2}/\|\mathbf{b}\|_{2}^{2}\) is at least as large as \(m\cdot\mathbf{a}_{i}^{2}/\|\mathbf{a}\|_{2}^{2}\). It follows that, in this case, \(p_{i}=\min\left(1,m\cdot\mathbf{a}_{i}^{2}/\|\mathbf{a}\|_{2}^{2}\right)\). The analysis is identical for the case \(\mathbf{b}_{i}^{2}/\|\mathbf{b}\|_{2}^{2}<\mathbf{a}_{i}^{2}/\|\mathbf{a}\|_{2}^{2}\), in which case \(p_{i}=\min\left(1,m\cdot\mathbf{b}_{i}^{2}/\|\mathbf{b}\|_{2}^{2}\right)\). Combining the two cases establishes (3). Let \(W\) be the estimate returned by Algorithm 2.
We can write \(W=\sum_{i\in\mathcal{I}}\mathbb{I}_{i}\cdot\frac{\mathbf{a}_{i}\mathbf{b}_{i}}{ p_{i}}\), and applying linearity of expectation, we have: \[\mathbb{E}[W]=\sum_{i\in\mathcal{I}}p_{i}\cdot\frac{\mathbf{a}_{i}\mathbf{b} _{i}}{p_{i}}=\sum_{i\in\mathcal{I}}\mathbf{a}_{i}\mathbf{b}_{i}=\langle \mathbf{a},\mathbf{b}\rangle. \tag{4}\] Next, since each term in the sum \(W=\sum_{i\in\mathcal{I}}\mathbb{I}_{i}\cdot\frac{\mathbf{a}_{i}\mathbf{b}_{i} }{p_{i}}\) is independent, \[\operatorname{Var}[W]=\sum_{i\in\mathcal{I}}\operatorname{Var}\left[\mathbb{I} _{i}\cdot\frac{\mathbf{a}_{i}\mathbf{b}_{i}}{p_{i}}\right]=\sum_{i\in\mathcal{ I}}\frac{(\mathbf{a}_{i}\mathbf{b}_{i})^{2}}{p_{i}^{2}}\operatorname{Var}[ \mathbb{I}_{i}].\] \(\operatorname{Var}[\mathbb{I}_{i}]=p_{i}-p_{i}^{2}\), which is \(0\) when \(p_{i}\) equals \(1\). If \(p_{i}\neq 1\), then we can bound \(\operatorname{Var}[\mathbb{I}_{i}]\leq p_{i}=m\cdot\min\left(\mathbf{a}_{i}^{2} /\|\mathbf{a}\|_{2}^{2},\mathbf{b}_{i}^{2}/\|\mathbf{b}\|_{2}^{2}\right)\). Plugging in, we are able to prove our desired variance bound: \[\operatorname{Var}[W] \leq\sum_{i\in\mathcal{I},p_{i}\neq 1}\frac{(\mathbf{a}_{i} \mathbf{b}_{i})^{2}}{p_{i}}\] \[=\sum_{i\in\mathcal{I},p_{i}\neq 1}\|\mathbf{a}\|_{2}^{2}\|\mathbf{b}\|_{ 2}^{2}\frac{(\mathbf{a}_{i}^{2}/\|\mathbf{a}\|_{2}^{2})(\mathbf{b}_{i}^{2}/\| \mathbf{b}\|_{2}^{2})}{m\cdot\min(\mathbf{a}_{i}^{2}/\|\mathbf{a}\|_{2}^{2}, \mathbf{b}_{i}^{2}/\|\mathbf{b}\|_{2}^{2})}\] \[=\sum_{i\in\mathcal{I},p_{i}\neq 1}\|\mathbf{a}\|_{2}^{2}\|\mathbf{b}\|_{ 2}^{2}\frac{\max(\mathbf{a}_{i}^{2}/\|\mathbf{a}\|_{2}^{2},\mathbf{b}_{i}^{2}/ \|\mathbf{b}\|_{2}^{2})}{m}\] \[\leq\frac{\|\mathbf{a}\|_{2}^{2}\|\mathbf{b}\|_{2}^{2}}{m}\sum_{i \in\mathcal{I}}\frac{\mathbf{a}_{i}^{2}}{\|\mathbf{a}\|_{2}^{2}}+\frac{ \mathbf{b}_{i}^{2}}{\|\mathbf{b}\|_{2}^{2}}\] \[=\frac{\|\mathbf{a}\|_{2}^{2}\|\mathbf{b}\|_{2}^{2}}{m}\left(\frac{ \|\mathbf{a}_{\mathcal{I}}\|_{2}^{2}}{\|\mathbf{a}\|_{2}^{2}}+\frac{\|\mathbf{b} _{\mathcal{I}}\|_{2}^{2}}{\|\mathbf{b}\|_{2}^{2}}\right)\] \[=\frac{1}{m}\left(\|\mathbf{a}_{\mathcal{I}}\|_{2}^{2}\|\mathbf{b} \|_{2}^{2}+\|\mathbf{a}\|_{2}^{2}\|\mathbf{b}_{\mathcal{I}}\|_{2}^{2}\right)\] \[\leq\frac{2}{m}\max(\|\mathbf{a}_{\mathcal{I}}\|_{2}^{2}\|\mathbf{b} \|_{2}^{2},\|\mathbf{a}\|_{2}^{2}\|\mathbf{b}_{\mathcal{I}}\|_{2}^{2}).\] Finally, we prove the claimed bound on the expected sketch size. We have that \(|K_{\mathbf{a}}|=\sum_{i=1}^{n}\mathbb{1}\left[i\in K_{\mathbf{a}}\right]\), where \(\mathbb{1}\left[i\in K_{\mathbf{a}}\right]\) is an indicator random variable that is \(1\) if \(i\) is included in \(K_{\mathbf{a}}\) and zero otherwise. By linearity of expectation, we have that: \[\mathbb{E}\left[|K_{\mathbf{a}}|\right]=\sum_{i=1}^{n}\mathbb{E}\left[\mathbb{1 }\left[i\in K_{\mathbf{a}}\right]\right]=\sum_{i=1}^{n}\min(1,m\cdot\mathbf{a}_ {i}^{2}/\|\mathbf{a}\|_{2}^{2})\leq m. \tag{5}\] An identical analysis shows that \(\mathbb{E}\left[|K_{\mathbf{b}}|\right]\leq m\), which completes the proof. In Appendix A.1, we further prove that \(|K_{\mathbf{a}}|\) and \(|K_{\mathbf{b}}|\) are less than \(m+O(\sqrt{m})\) with high probability. **Practical Implementation.** In Theorem 1, we show that the expected sketch size is _upper bounded_ by \(m\). As apparent from (5), it will be less than \(m\) whenever there are entries in the vector for which \(\mathbf{a}_{i}^{2}/\|\mathbf{a}\|_{2}^{2}>1/m\). This is not ideal: we would like a sketch whose size is as close to our budget \(m\) as possible. 
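For concreteness, the following is a minimal Python sketch of the basic Threshold Sampling procedure and estimator analyzed above, i.e., before the exact-expected-size adjustment described in the next paragraph; the function names and the `blake2b`-based stand-in for the hash function \(h\) are our own.

```python
import hashlib

def h(i: int, seed: int) -> float:
    """Uniform hash h(i) in [0, 1), shared across vectors through the seed."""
    d = hashlib.blake2b(f"{seed}:{i}".encode(), digest_size=8).digest()
    return int.from_bytes(d, "big") / 2**64

def threshold_sketch(vec: dict, m: int, seed: int):
    """Return (V, tau): sampled index->value pairs and the scaling tau = m / ||a||_2^2."""
    sq_norm = sum(v * v for v in vec.values())
    tau = m / sq_norm
    # Keep index i with probability min(1, m * a_i^2 / ||a||_2^2).
    V = {i: v for i, v in vec.items() if v != 0 and h(i, seed) < min(1.0, tau * v * v)}
    return V, tau

def estimate_inner_product(sketch_a, sketch_b):
    """Sum a_i * b_i / p_i over indices sampled in both sketches."""
    (V_a, tau_a), (V_b, tau_b) = sketch_a, sketch_b
    est = 0.0
    for i in V_a.keys() & V_b.keys():
        p_i = min(1.0, tau_a * V_a[i] ** 2, tau_b * V_b[i] ** 2)
        est += V_a[i] * V_b[i] / p_i
    return est

# Usage: both sketches must be built with the same seed so that sampling is coordinated.
a = {0: 2.0, 3: -1.0, 7: 0.5, 9: 4.0}
b = {0: 1.0, 3: 2.0, 8: -3.0, 9: 1.5}
print(estimate_inner_product(threshold_sketch(a, 3, seed=42), threshold_sketch(b, 3, seed=42)))
```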
Fortunately, Threshold Sampling is easily modified so that the _expected_ sketch size is _exactly_ \(m\). We simply use binary search to compute \(m^{\prime}\) such that \(\sum_{i=1}^{n}\min\left(1,m^{\prime}\cdot\mathbf{a}_{i}^{2}/\|\mathbf{a}\|_{2}^{2}\right)=m\). Then, we replace \(m\) in Lines 3 and 6 of Algorithm 1 with \(m^{\prime}\). Doing so ensures an expected sketch size of exactly \(m\), and does not increase our estimator's variance. Further details are provided in Appendix A.1, and we implement this variant of Threshold Sampling for our experiments in Section 5.

## 3. Priority Sampling

While a simple and effective method for inner product sketching, one limitation of Threshold Sampling is that the user cannot exactly control the size of the sketch \(\mathcal{S}(\mathbf{a})\). We address this issue by analyzing an alternative algorithm based on Priority Sampling, a technique introduced in computer science by (Zegedy, 2011), and studied in statistics under the name "Sequential Poisson Sampling" (Zegedy, 2012). **Sketching.** To motivate the method, observe, by rearranging Lines 3 and 4 in Algorithm 1, that Threshold Sampling selects all entries from \(\mathbf{a}\) for which \(h(i)/\mathbf{a}_{i}^{2}\) falls below a fixed "global threshold", \(\tau_{\mathbf{a}}=m/\|\mathbf{a}\|_{2}^{2}\). There will be roughly \(m\) such entries in expectation, but the exact number varies from run to run. Priority Sampling instead fixes the sketch size by keeping exactly the \(m\) entries with the smallest ranks \(R_{i}=h(i)/\mathbf{a}_{i}^{2}\), and storing the \((m+1)^{\text{st}}\) smallest rank as a data-dependent threshold \(\tau_{\mathbf{a}}\); the full procedure is given in Algorithm 3. Unlike in Threshold Sampling, \(\tau_{\mathbf{a}}\) is now itself random and depends on all entries of \(\mathbf{a}\), which correlates the sampling decisions and makes the method harder to analyze.

**Algorithm 3** Priority Sampling
**Input:** Length \(n\) vector \(\mathbf{a}\), random seed \(s\), target sketch size \(m\).
**Output:** Sketch \(\mathcal{S}(\mathbf{a})=\{K_{\mathbf{a}},V_{\mathbf{a}},\tau_{\mathbf{a}}\}\), where \(K_{\mathbf{a}}\) is a subset of indices from \(\{1,\ldots,n\}\) and \(V_{\mathbf{a}}\) contains \(\mathbf{a}_{i}\) for all \(i\in K_{\mathbf{a}}\).
```
1: Use random seed \(s\) to select a uniformly random hash function \(h:\{1,\ldots,n\}\rightarrow[0,1]\). Initialize \(K_{\mathbf{a}}\) and \(V_{\mathbf{a}}\) to be empty lists.
2: Compute rank \(R_{i}=h(i)/\mathbf{a}_{i}^{2}\) for all \(i\) such that \(\mathbf{a}_{i}\neq 0\).
3: Set \(\tau_{\mathbf{a}}\) equal to the \((m+1)^{\text{st}}\) smallest value \(R_{i}\), or set \(\tau_{\mathbf{a}}=\infty\) if \(\mathbf{a}\) has fewer than \(m+1\) non-zero values.
4: for \(i\) such that \(\mathbf{a}_{i}\neq 0\) do
5:   if \(R_{i}<\tau_{\mathbf{a}}\) then
6:     Append \(i\) to \(K_{\mathbf{a}}\), append \(\mathbf{a}_{i}\) to \(V_{\mathbf{a}}\).
7: return \(\mathcal{S}(\mathbf{a})=\{K_{\mathbf{a}},V_{\mathbf{a}},\tau_{\mathbf{a}}\}\)
```

**Theoretical Analysis.** Drawing inspiration from a new analysis of Priority Sampling for the subset sum problem [26], we are able to overcome these obstacles for inner product estimation as well. Our main theoretical result on Priority Sampling is as follows:

**Theorem 3**: _For vectors \(\mathbf{a},\mathbf{b}\in\mathbb{R}^{n}\) and sketch size \(m\), let \(\mathcal{S}(\mathbf{a})=\{K_{\mathbf{a}},V_{\mathbf{a}},\tau_{\mathbf{a}}\}\) and \(\mathcal{S}(\mathbf{b})=\{K_{\mathbf{b}},V_{\mathbf{b}},\tau_{\mathbf{b}}\}\) be sketches returned by Algorithm 3. Let \(W\) be the inner product estimate returned by Algorithm 2 applied to these sketches. We have that \(\mathbb{E}\left[W\right]=\langle\mathbf{a},\mathbf{b}\rangle\) and_

\[\operatorname{Var}\left[W\right]\leq\frac{2}{m-1}\max\left(\|\mathbf{a}_{\mathcal{I}}\|_{2}^{2}\|\mathbf{b}\|_{2}^{2},\|\mathbf{a}\|_{2}^{2}\|\mathbf{b}_{\mathcal{I}}\|_{2}^{2}\right)\]

_Moreover, let \(|K_{\mathbf{a}}|\) and \(|K_{\mathbf{b}}|\) be the number of index/value pairs stored in \(\mathcal{S}(\mathbf{a})\) and \(\mathcal{S}(\mathbf{b})\)._
We have \(|K_{\mathbf{a}}|\leq m\) and \(|K_{\mathbf{b}}|\leq m\), with equality in the typical case when \(\mathbf{a}\) and \(\mathbf{b}\) have at least \(m\) non-zero entries._ Theorem 3 almost exactly matches our Theorem 1 for Threshold Sampling, except that the leading constant on the variance is \(\frac{2}{m-1}\) instead of \(\frac{2}{m}\). Again, we can apply Chebyshev's inequality to conclude that if we set \(m=\frac{2/\delta}{\epsilon^{2}}+1\), then \(|W-\langle\mathbf{a},\mathbf{b}\rangle|\) is bounded by \(\epsilon\cdot\max\left(\|\mathbf{a}_{\mathcal{I}}\|_{2}\|\mathbf{b}\|_{2},\|\mathbf{a}\|_{2}\|\mathbf{b}_{\mathcal{I}}\|_{2}\right)\) with probability \(\geq 1-\delta\). The matching theoretical results align with experiments: as seen in Section 5, Priority Sampling performs almost identically to Threshold Sampling, albeit with the added benefit of a fixed sketch size bound. **Proof of Theorem 3.** We start by introducing additional notation. Let \(\mathcal{A}=\{i:\mathbf{a}_{i}\neq 0\}\) denote the set of indices where \(\mathbf{a}\) is non-zero and let \(\mathcal{B}=\{i:\mathbf{b}_{i}\neq 0\}\) denote the set of indices where \(\mathbf{b}\) is non-zero. Recall that \(\tau_{\mathbf{a}}\) as computed in Algorithm 3 is the \((m+1)^{\text{st}}\) smallest value of \(h(i)/\mathbf{a}_{i}^{2}\) over all \(i\in\mathcal{A}\). For any \(i\in\mathcal{A}\), let \(\tau_{\mathbf{a}}^{i}\) denote the \(m^{\text{th}}\) smallest of \(h(j)/\mathbf{a}_{j}^{2}\) over all \(j\in\mathcal{A}\setminus\{i\}\). If \(\mathcal{A}\setminus\{i\}\) has fewer than \(m\) values, define \(\tau_{\mathbf{a}}^{i}=\infty\). Define \(\tau_{\mathbf{b}}^{i}\) analogously for all \(i\in\mathcal{B}\). Let \(\mathcal{T}=K_{\mathbf{a}}\cap K_{\mathbf{b}}\) be as in Algorithm 2. Later on we will use the easily checked fact that, for all \(i\in\mathcal{T}\), \(\tau_{\mathbf{a}}^{i}=\tau_{\mathbf{a}}\) and \(\tau_{\mathbf{b}}^{i}=\tau_{\mathbf{b}}\). For all \(i\in\mathcal{A}\cap\mathcal{B}\), define \(\hat{w}_{i}\) as follows: \[\hat{w}_{i}=\begin{cases}\frac{\mathbf{a}_{i}\mathbf{b}_{i}}{\min(1,\mathbf{a}_{i}^{2}\tau_{\mathbf{a}},\mathbf{b}_{i}^{2}\tau_{\mathbf{b}})}&i\in\mathcal{T}\\ 0&i\notin\mathcal{T}\end{cases}\] The estimate \(W\) returned by Algorithm 2 can be rewritten as: \[W=\sum_{i\in\mathcal{A}\cap\mathcal{B}}\hat{w}_{i}. \tag{7}\] From (7), we can see that, to prove \(\mathbb{E}[W]=\langle\mathbf{a},\mathbf{b}\rangle=\sum_{i\in\mathcal{A}\cap\mathcal{B}}\mathbf{a}_{i}\mathbf{b}_{i}\), it suffices to prove that, for all \(i\in\mathcal{A}\cap\mathcal{B}\), \(\mathbb{E}[\hat{w}_{i}]=\mathbf{a}_{i}\mathbf{b}_{i}\). To establish this equality, first observe that for \(i\) to be in \(\mathcal{T}\), it must be that both \(h(i)/\mathbf{a}_{i}^{2}\) and \(h(i)/\mathbf{b}_{i}^{2}\) are among the \(m\) smallest ranks computed when sketching \(\mathbf{a}\) and \(\mathbf{b}\), respectively. In other words, it must be that \(h(i)/\mathbf{a}_{i}^{2}<\tau_{\mathbf{a}}^{i}\) and \(h(i)/\mathbf{b}_{i}^{2}<\tau_{\mathbf{b}}^{i}\).
So, \(\Pr\left[i\in\mathcal{T}\right]\) equals: \[\Pr\left[h(i)/\mathbf{a}_{i}^{2}<\tau_{\mathbf{a}}^{i}\text{ and }h(i)/\mathbf{b}_{i}^{2}<\tau_{\mathbf{b}}^{i}\right]=\min(1,\mathbf{a}_{i}^{2}\tau_{\mathbf{a}}^{i},\mathbf{b}_{i}^{2}\tau_{\mathbf{b}}^{i}).\] Combined with the fact discussed earlier that, conditioned on \(i\in\mathcal{T}\), \(\tau_{\mathbf{a}}=\tau_{\mathbf{a}}^{i}\) and \(\tau_{\mathbf{b}}=\tau_{\mathbf{b}}^{i}\), we have: \[\mathbb{E}[\hat{w}_{i}]=\frac{\mathbf{a}_{i}\mathbf{b}_{i}}{\min(1,\mathbf{a}_{i}^{2}\tau_{\mathbf{a}}^{i},\mathbf{b}_{i}^{2}\tau_{\mathbf{b}}^{i})}\Pr\left[i\in\mathcal{T}\right]=\frac{\mathbf{a}_{i}\mathbf{b}_{i}}{\min(1,\mathbf{a}_{i}^{2}\tau_{\mathbf{a}}^{i},\mathbf{b}_{i}^{2}\tau_{\mathbf{b}}^{i})}\min(1,\mathbf{a}_{i}^{2}\tau_{\mathbf{a}}^{i},\mathbf{b}_{i}^{2}\tau_{\mathbf{b}}^{i})=\mathbf{a}_{i}\mathbf{b}_{i}.\] As desired, \(\mathbb{E}[W]=\langle\mathbf{a},\mathbf{b}\rangle\) follows by linearity of expectation. Next, we turn our attention to bounding the variance of \(W\). As discussed, this is complicated by the fact that \(\hat{w}_{i}\) and \(\hat{w}_{j}\) are non-independent. However, it is possible to show that the random variables are _uncorrelated_, which will allow us to apply linearity of variance to the sum in (7). In particular, we show that: \[\mathbb{E}[\hat{w}_{i}\hat{w}_{j}]=\mathbb{E}[\hat{w}_{i}]\,\mathbb{E}[\hat{w}_{j}]. \tag{8}\] For any \(i,j\in\mathcal{A}\) define \(\tau_{\mathbf{a}}^{i,j}\) to equal the \((m-1)^{\text{st}}\) smallest of \(h(k)/\mathbf{a}_{k}^{2}\) over all \(k\in\mathcal{A}\setminus\{i,j\}\), or \(\infty\) if there are fewer than \(m-1\) values in \(\mathcal{A}\setminus\{i,j\}\). Define \(\tau_{\mathbf{b}}^{i,j}\) analogously for \(i,j\in\mathcal{B}\). As in our expression for \(\Pr\left[i\in\mathcal{T}\right]\) shown earlier, it can be seen that \(\Pr\left[i,j\in\mathcal{T}\right]=\min(1,\mathbf{a}_{i}^{2}\tau_{\mathbf{a}}^{i,j},\mathbf{b}_{i}^{2}\tau_{\mathbf{b}}^{i,j})\cdot\min(1,\mathbf{a}_{j}^{2}\tau_{\mathbf{a}}^{i,j},\mathbf{b}_{j}^{2}\tau_{\mathbf{b}}^{i,j})\), and a calculation analogous to the one used for \(\mathbb{E}[\hat{w}_{i}]\) then verifies (8). It follows that \(\operatorname{Var}[W]=\sum_{i\in\mathcal{A}\cap\mathcal{B}}\operatorname{Var}[\hat{w}_{i}]\). To bound each term, we use that \(\operatorname{Var}[\hat{w}_{i}]=\mathbb{E}[\hat{w}_{i}^{2}]-(\mathbf{a}_{i}\mathbf{b}_{i})^{2}\) and that, conditioned on \(\tau_{\mathbf{a}}^{i}=t\) and \(\tau_{\mathbf{b}}^{i}=t^{\prime}\), \(\mathbb{E}[\hat{w}_{i}^{2}]=(\mathbf{a}_{i}\mathbf{b}_{i})^{2}/\min(1,\mathbf{a}_{i}^{2}t,\mathbf{b}_{i}^{2}t^{\prime})\leq(\mathbf{a}_{i}\mathbf{b}_{i})^{2}\max\left(1,\tfrac{1}{\mathbf{a}_{i}^{2}t},\tfrac{1}{\mathbf{b}_{i}^{2}t^{\prime}}\right)\). Upper bounding the maximum by the sum, we then have: \[\operatorname{Var}\left[\hat{w}_{i}\right]\leq\mathbf{a}_{i}^{2}\mathbf{b}_{i}^{2}\int_{0}^{\infty}\!\!\int_{0}^{\infty}\left(\frac{1}{\mathbf{a}_{i}^{2}t}+\frac{1}{\mathbf{b}_{i}^{2}t^{\prime}}\right)\Pr\left[\tau_{\mathbf{a}}^{i}=t,\tau_{\mathbf{b}}^{i}=t^{\prime}\right]dt\,dt^{\prime}\leq\mathbf{a}_{i}^{2}\int_{0}^{\infty}\frac{1}{t^{\prime}}\Pr\left[\tau_{\mathbf{b}}^{i}=t^{\prime}\right]dt^{\prime}+\mathbf{b}_{i}^{2}\int_{0}^{\infty}\frac{1}{t}\Pr\left[\tau_{\mathbf{a}}^{i}=t\right]dt=\mathbf{a}_{i}^{2}\,\mathbb{E}\left[1/\tau_{\mathbf{b}}^{i}\right]+\mathbf{b}_{i}^{2}\,\mathbb{E}\left[1/\tau_{\mathbf{a}}^{i}\right].\] So, we have reduced the problem to bounding the expected inverses of \(\tau_{\mathbf{a}}^{i}\) and \(\tau_{\mathbf{b}}^{i}\). Doing so is not straightforward - these are complex random variables that depend on all entries in \(\mathbf{a}\) and \(\mathbf{b}\), respectively. However, it was recently shown in (Zhou et al., 2017) (Claim 5) that \(\mathbb{E}[1/\tau_{\mathbf{a}}^{i}]\leq\|\mathbf{a}\|_{2}^{2}/(m-1)\) and \(\mathbb{E}[1/\tau_{\mathbf{b}}^{i}]\leq\|\mathbf{b}\|_{2}^{2}/(m-1)\). Finally, we have:
\[\operatorname{Var}\left[W\right]=\sum_{i\in\mathcal{A}\cap\mathcal{B}}\operatorname{Var}\left[\hat{w}_{i}\right]\leq\sum_{i\in\mathcal{A}\cap\mathcal{B}}\mathbf{a}_{i}^{2}\,\mathbb{E}\left[1/\tau_{\mathbf{b}}^{i}\right]+\mathbf{b}_{i}^{2}\,\mathbb{E}\left[1/\tau_{\mathbf{a}}^{i}\right]\leq\sum_{i\in\mathcal{A}\cap\mathcal{B}}\mathbf{a}_{i}^{2}\frac{\|\mathbf{b}\|_{2}^{2}}{m-1}+\mathbf{b}_{i}^{2}\frac{\|\mathbf{a}\|_{2}^{2}}{m-1}=\frac{1}{m-1}\left(\|\mathbf{a}_{\mathcal{I}}\|_{2}^{2}\|\mathbf{b}\|_{2}^{2}+\|\mathbf{a}\|_{2}^{2}\|\mathbf{b}_{\mathcal{I}}\|_{2}^{2}\right).\] Noting that for any \(c,d\), \(c+d\leq 2\max(c,d)\) completes the proof.

## 4. Join-Correlation Estimation

In addition to our theoretical results, we perform an empirical evaluation of Threshold and Priority Sampling for inner product sketching. One of our main motivating applications is _join-correlation estimation_ (Wanderson, 1998; Dwork et al., 2010). This problem has previously been addressed using (unweighted) consistent sampling methods, like the KMV sketch (Dwork et al., 2010; Dwork et al., 2010). In this section, we show how it can be solved using _any_ inner product sketching algorithm in a black-box way, expanding the toolkit of methods that can be applied to the task. **Problem Statement.** The join-correlation problem consists of computing the Pearson correlation coefficient between two data columns that originally reside in different data tables. Specifically, we are interested in the correlation between values that would appear in the columns _after_ performing an (inner) join on the tables, i.e., values for which the same key appears in both tables. We call this quantity the _post-join correlation_, or simply the _join-correlation_. As a concrete illustration, consider the example tables in Figure 2(a). The goal of join-correlation estimation is to approximate the correlation \(\rho_{\mathbf{x},\mathbf{y}}\) between the vectors \(\mathbf{x}\) and \(\mathbf{y}\) from \(\mathcal{T}_{\mathbf{A}\bowtie\mathbf{B}}\). The join-correlation problem arises in dataset search applications, where the goal is to discover new data to augment a query dataset, e.g., to improve predictive models (Dwork et al., 2010; Dwork et al., 2010; Dwork et al., 2010). In such applications, we typically want to estimate join-correlation for columns in a query table and those in a large collection of other data tables. Accordingly, the brute-force approach that explicitly joins tables and computes the correlation between attributes is infeasible. Prior work proposes to use sketching as an efficient alternative. The idea is to pre-process (i.e., sketch) the collection of tables in advance, so that join-correlation between columns in any two tables \(\mathcal{T}_{\mathbf{A}}\) and \(\mathcal{T}_{\mathbf{B}}\) can be evaluated _without explicitly materializing the join \(A\bowtie B\)_. Specifically, Santos et al. (2010) propose an extension of KMV sketches that uniformly samples entries from each table, and then uses the join between the sketches to estimate correlation. Unfortunately, just like inner product estimation, this approach can suffer when \(\mathcal{T}_{\mathbf{A}}\) and \(\mathcal{T}_{\mathbf{B}}\) contain entries with widely varying magnitude: larger entries often contribute more to the correlation, but are not selected with higher probability by the KMV sketch. **Join-Correlation via Inner Product Sketching.** We show an alternative approach for attacking the join-correlation problem by reducing it to inner product estimation.
The reduction allows us to take advantage of sketches like WMH, and Threshold and Priority Sampling, which naturally make use of weighted sampling. Referring again to Figure 2(a), consider the vectors \(\mathbf{x}\) and \(\mathbf{y}\) from \(\mathcal{T}_{\mathbf{A}\bowtie\mathbf{B}}\). Let \(\overline{x}\) (resp. \(\overline{y}\)) denote the mean of \(\mathbf{x}\) (resp. \(\mathbf{y}\)), \(n\) denote the length of the vectors (number of rows in \(\mathcal{T}_{\mathbf{A}\bowtie\mathbf{B}}\)), \(\Sigma_{\mathbf{x}}\) (resp. \(\Sigma_{\mathbf{y}}\)) denote the summation of all values in \(\mathbf{x}\) (resp. \(\mathbf{y}\)), and \(\Sigma_{\mathbf{x}^{2}}\) (resp. \(\Sigma_{\mathbf{y}^{2}}\)) denote the summation of all squared values of \(\mathbf{x}\) (resp. \(\mathbf{y}\)). It can be verified that the correlation coefficient between \(\mathbf{x}\) and \(\mathbf{y}\) can be rewritten as: \[\rho_{\mathbf{x},\mathbf{y}}=\frac{\langle\mathbf{x}-\overline{x},\mathbf{y}-\overline{y}\rangle}{\|\mathbf{x}-\overline{x}\|_{2}\|\mathbf{y}-\overline{y}\|_{2}}=\frac{n\langle\mathbf{x},\mathbf{y}\rangle-\Sigma_{\mathbf{x}}\Sigma_{\mathbf{y}}}{\sqrt{n\Sigma_{\mathbf{x}^{2}}-\Sigma_{\mathbf{x}}^{2}}\sqrt{n\Sigma_{\mathbf{y}^{2}}-\Sigma_{\mathbf{y}}^{2}}}. \tag{9}\] Our observation is that all of the values in Eq. (9) can be computed using only inner product operations over vectors derived from tables \(\mathcal{T}_{\mathbf{A}}\) and \(\mathcal{T}_{\mathbf{B}}\) independently. The vectors are shown in Figure 2(b): vectors \(\mathbf{a}\) and \(\mathbf{b}\) contain the values, with \(\mathbf{a}_{i}\) (resp. \(\mathbf{b}_{i}\)) set to zero if key \(i\) was not present in table \(\mathcal{T}_{\mathbf{A}}\) (resp. table \(\mathcal{T}_{\mathbf{B}}\)). Vectors \(\mathbf{1}_{\mathbf{a}}\) and \(\mathbf{1}_{\mathbf{b}}\) are indicator vectors for the corresponding join keys in each table. Finally, \(\mathbf{a}^{2}\) and \(\mathbf{b}^{2}\) are equal to \(\mathbf{a}\) and \(\mathbf{b}\) with an entrywise square applied. Using these vectors, we can compute all components of the correlation formula as inner products: \[n=\langle\mathbf{1}_{\mathbf{a}},\mathbf{1}_{\mathbf{b}}\rangle,\qquad\Sigma_{\mathbf{x}}=\langle\mathbf{a},\mathbf{1}_{\mathbf{b}}\rangle,\qquad\Sigma_{\mathbf{y}}=\langle\mathbf{1}_{\mathbf{a}},\mathbf{b}\rangle,\] \[\langle\mathbf{x},\mathbf{y}\rangle=\langle\mathbf{a},\mathbf{b}\rangle,\qquad\Sigma_{\mathbf{x}^{2}}=\langle\mathbf{a}^{2},\mathbf{1}_{\mathbf{b}}\rangle,\qquad\Sigma_{\mathbf{y}^{2}}=\langle\mathbf{1}_{\mathbf{a}},\mathbf{b}^{2}\rangle. \tag{10}\] In particular, we can rewrite \(\rho_{\mathbf{x},\mathbf{y}}\) equivalently as: \[\frac{\langle\mathbf{a},\mathbf{b}\rangle\langle\mathbf{1}_{\mathbf{a}},\mathbf{1}_{\mathbf{b}}\rangle-\langle\mathbf{a},\mathbf{1}_{\mathbf{b}}\rangle\langle\mathbf{1}_{\mathbf{a}},\mathbf{b}\rangle}{\sqrt{\left(\langle\mathbf{1}_{\mathbf{a}},\mathbf{1}_{\mathbf{b}}\rangle\langle\mathbf{a}^{2},\mathbf{1}_{\mathbf{b}}\rangle-\langle\mathbf{a},\mathbf{1}_{\mathbf{b}}\rangle^{2}\right)\left(\langle\mathbf{1}_{\mathbf{a}},\mathbf{1}_{\mathbf{b}}\rangle\langle\mathbf{b}^{2},\mathbf{1}_{\mathbf{a}}\rangle-\langle\mathbf{b},\mathbf{1}_{\mathbf{a}}\rangle^{2}\right)}}. \tag{11}\]

Figure 2. **Join-correlation via inner product sketching.**

Given this formula, we can use any inner product sketching method to approximate join-correlation.
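As a concrete illustration, the following small Python function (ours) assembles the plug-in estimate of Eq. (11) from the six estimated inner products of Eq. (10), however they are obtained; the argument names are illustrative assumptions.

```python
import math

def join_correlation_from_inner_products(ip):
    """
    ip holds estimates of the six inner products in Eq. (10):
      'n'  = <1_a, 1_b>,   'sx'  = <a, 1_b>,    'sy'  = <1_a, b>,
      'xy' = <a, b>,       'sx2' = <a^2, 1_b>,  'sy2' = <1_a, b^2>.
    Returns the plug-in estimate of the post-join correlation (Eq. (11)).
    """
    num = ip['xy'] * ip['n'] - ip['sx'] * ip['sy']
    den_x = ip['n'] * ip['sx2'] - ip['sx'] ** 2
    den_y = ip['n'] * ip['sy2'] - ip['sy'] ** 2
    if den_x <= 0 or den_y <= 0:      # estimates are noisy; guard the square roots
        return 0.0
    return num / math.sqrt(den_x * den_y)
```

Because each inner product carries estimation error, the plug-in value can fall slightly outside \([-1,1]\) or have a non-positive denominator; the guard above simply returns a default in that case.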
In particular, given \(\mathcal{T}_{A}\), we compute three separate sketches, one for each of \(\mathbf{a}\), \(\mathbf{a}^{2}\), \(\mathbf{1}_{\mathbf{a}}\). When combined with sketches for \(\mathbf{b}\), \(\mathbf{b}^{2}\), \(\mathbf{1}_{\mathbf{b}}\), we can estimate all of the inner products in (10) separately, and combine to obtain an estimate for \(\rho_{\mathbf{x},\mathbf{y}}\). **Optimization for Sampling-Based Sketches.** In Section 5, we use the approach above to estimate correlation using linear sketching methods like CountSketch and JL. Given sketch size budget \(m\), we allocate \(m/3\) space to sketching each of the three vectors \(\mathbf{a}\), \(\mathbf{a}^{2}\), and \(\mathbf{1}_{\mathbf{a}}\). Our final join-correlation sketch is then the concatenation of the equally sized sketches \(\mathcal{S}(\mathbf{a})\), \(\mathcal{S}(\mathbf{a}^{2})\), and \(\mathcal{S}(\mathbf{1}_{\mathbf{a}})\). We take roughly the same approach for the sampling methods like Threshold and Priority Sampling. However, in a sampling-based sketch, if we select index \(i\) when sketching _any_ of the three vectors \(\mathbf{1}_{\mathbf{a}}\), \(\mathbf{a}\), and \(\mathbf{a}^{2}\), then we might as well use the index in estimating inner products involving _all_ three. In particular, by storing the single key/value pair \((i,\mathbf{a}_{i})\), we can compute the information \((i,1)\), \((i,\mathbf{a}_{i})\), and \((i,\mathbf{a}_{i}^{2})\) needed to estimate all inner products. We take advantage of this fact to squeeze additional information out of our sketches. Details of the resulting optimized approach are included in Appendix A.4. ## 5. Experiments **Baselines.** We assess the performance of our methods by comparing them to representative baselines, including both sampling and linear sketching methods for inner product estimation: * _Johnson-Lindenstrauss Projection (JL):_ For this _linear sketch_, we use a dense random matrix \(\Pi\) with scaled \(\pm 1\) entries, which is equivalent to the AMS sketch (Bach et al., 2016; Chen et al., 2017). * _CountSketch (CS):_ The classic _linear_ sketch introduced in (Grover and Leskovec, 2017). * _Weighted MinHash Sampling (MH-weighted):_ The method described in (Bach et al., 2016), which is the first sketch with tighter theoretical bounds than linear sketching for inner product estimation. * _MinHash Sampling (MH):_ Also described in (Bach et al., 2016), MH is similar to Weighted MinHash, but indices are sampled uniformly at random from \(\mathbf{a}\), not with probability proportional to \(\mathbf{a}_{i}^{2}\). * _Uniform Priority Sampling (PS-uniform):_ The same as our Priority Sampling method, but the rank of index \(i\) in Algorithm 3 is chosen without taking the squared magnitude \(\mathbf{a}_{i}^{2}\) into account, so indices are sampled uniformly. This method is equivalent to the KMV-based inner product sketch implemented in (Bach et al., 2016). * _Uniform Threshold Sampling (TS-uniform):_ The same as our Threshold Sampling method, but \(\mathbf{a}_{i}^{2}\) is not taken into account when computing \(\pi_{i}\), so indices are sampled uniformly. To distinguish from the uniform sampling versions, our proposed Threshold and Priority Sampling methods are called **TS-weighted** and **PS-weighted** in the remainder of the section. 
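For reference, here is a minimal Python sketch of the PS-weighted procedure (Priority Sampling, Algorithm 3) together with the shared estimator; the function names and the `blake2b`-based hash are our own illustrative choices rather than the paper's implementation.

```python
import hashlib

def h(i: int, seed: int) -> float:
    """Uniform hash h(i) in [0, 1), shared across vectors through the seed."""
    d = hashlib.blake2b(f"{seed}:{i}".encode(), digest_size=8).digest()
    return int.from_bytes(d, "big") / 2**64

def priority_sketch(vec: dict, m: int, seed: int):
    """Keep the m entries with smallest rank R_i = h(i)/a_i^2; store tau = (m+1)-st smallest rank."""
    ranks = sorted((h(i, seed) / (v * v), i) for i, v in vec.items() if v != 0)
    tau = ranks[m][0] if len(ranks) > m else float("inf")
    kept = {i: vec[i] for r, i in ranks[:m] if r < tau}
    return kept, tau

def estimate_inner_product(sketch_a, sketch_b):
    """Sum a_i*b_i / min(1, a_i^2*tau_a, b_i^2*tau_b) over indices kept in both sketches."""
    (V_a, tau_a), (V_b, tau_b) = sketch_a, sketch_b
    est = 0.0
    for i in V_a.keys() & V_b.keys():
        q_i = min(1.0, V_a[i] ** 2 * tau_a, V_b[i] ** 2 * tau_b)
        est += V_a[i] * V_b[i] / q_i
    return est

a = {0: 2.0, 3: -1.0, 7: 0.5, 9: 4.0}
b = {0: 1.0, 3: 2.0, 8: -3.0, 9: 1.5}
print(estimate_inner_product(priority_sketch(a, 3, seed=7), priority_sketch(b, 3, seed=7)))
```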
In addition to the baselines above, we implemented and performed initial experiments using the End-Biased sampling method from (Kumar et al., 2017), which is equivalent to Threshold Sampling (Algorithm 1), but with probability proportional to \(|\mathbf{a}_{i}|/\|\mathbf{a}\|_{1}\). More details on how to implement this method, as well as TS-uniform and PS-uniform, are included in Appendix A.2. As shown in Section 5.1, End-Biased sampling consistently performed slightly worse than our version of Threshold Sampling, which also enjoys stronger theoretical guarantees. So, we excluded End-Biased sampling from the majority of our experiments for conciseness and plot clarity. With regard to our baselines, we also note that there are many other versions of linear sketching, often focused on trading off between accuracy and computation time (Bach et al., 2016; Chen et al., 2017). We compare against JL/AMS and CountSketch because these methods represent two extremes: JL is slow to apply, but very accurate in practice, while CountSketch is less accurate, but runs in linear time (which is optimal). Our sampling methods improve on the accuracy of JL while matching the efficiency of CountSketch, so comparing against "intermediate" methods would have limited additional value. **Storage Size.** For linear sketches, we store the output of the matrix multiplication \(\Pi\mathbf{a}\) as 64-bit doubles. For sampling-based sketches, both samples (64-bit doubles) and hash values (32-bit ints) need to be stored. As a result, a sampling sketch with \(m\) samples takes \(1.5\times\) as much space as a linear sketch with \(m\) entries. In our experiments, _storage size_ denotes the total number of bits in the sketch divided by 64, i.e., the total number of doubles that the sketch equates to. Storage size is fixed for all methods except Threshold Sampling, for which we report the expected storage size. We note that there are variants of linear sketching that further compress \(\Pi\mathbf{a}\) by thresholding or rounding its entries, e.g., SimHash (Grover and Leskovec, 2017) and quantized JL methods (Zhu et al., 2018). While an interesting topic for future study, we do not evaluate these methods because quantization can be used to reduce the sketch size of _all methods_. For instance, for sampling-based sketches, we do not need to store full 64-bit doubles. Evaluating optimal quantization strategies is beyond the scope of this work. **Estimation Error.** To make it easier to compare across different datasets, when estimating inner products, we define the following error measure: the absolute difference between the ground truth inner product \(\langle\mathbf{a},\mathbf{b}\rangle\) and the estimate, scaled by \(1/(\|\mathbf{a}\|_{2}\|\mathbf{b}\|_{2})\). Given that most methods tested (except the uniform sampling methods) achieve an error guarantee at least as good as Eq. (1), this scaling roughly ensures that reported errors lie between \(0\) and \(1\). ### Assessing Estimation Accuracy **Synthetic Data.** We ran experiments on synthetic data to validate the performance of our methods in a controlled setting. To contrast the behavior of linear sketching and weighted sampling methods like MH-weighted, TS-weighted, and PS-weighted, we generate vector pairs \(\mathbf{a},\mathbf{b}\) with varying amounts of overlap, \(\mathcal{I}\), between their non-zero entries (\(1\%\) to \(100\%\)).
This allows us to verify our theoretical results: when \(|\mathcal{I}|\) is large, we expect linear sketching and sampling to perform similarly since the linear sketching error bound of \(\epsilon\|\mathbf{a}\|_{2}\|\mathbf{b}\|_{2}\) is closer to our bound of \(\epsilon\cdot\max(\|\mathbf{a}_{\mathcal{I}}\|_{2}\|\mathbf{b}\|_{2},\|\mathbf{a}\|_{2}\|\mathbf{b}_{\mathcal{I}}\|_{2})\). When \(|\mathcal{I}|\) is small, we expect a bigger difference. We generate \(100\) pairs of synthetic vectors, each with \(10,000\) entries, \(2,000\) of which are non-zero. The locations of non-zero entries are randomly selected with a specific overlap \(\mathcal{I}\), and their values are uniformly drawn from the interval \([-1,1]\). Then, \(10\%\) of the non-zero entries are chosen randomly as outliers. We include outliers to differentiate the performance of weighted sampling methods from their uniform counterparts (MH, TS-uniform, and PS-uniform). If all entries have similar magnitude, weighted sampling is essentially the same as uniform sampling. Outliers are chosen to be uniform random numbers between 4 and 6, which are fairly moderate values. For datasets with even larger outliers, we expect an even more pronounced difference between weighted and unweighted sampling. #### 5.1.1. Inner Product Estimation Figure 3 shows the scaled average difference between the actual and estimated inner product for the different techniques. The plot is consistent with our theoretical findings: TS-weighted and PS-weighted are more accurate than all other methods for all levels of overlap. They consistently outperform the prior state-of-the-art sampling sketch, MH-weighted. For very low overlap, even unweighted sampling methods (MH, TS-uniform, and PS-uniform) outperform linear sketches (JL, CS), but this advantage decreases as overlap increases. Note that when overlap is above 50%, the performance of linear sketching is comparable to MH-weighted. However, our proposed methods, TS-weighted and PS-weighted, continue to outperform linear sketching, even in this regime. #### 5.1.2. Binary Inner Product Estimation We also evaluate inner product estimation for binary \(\{0,1\}\) vectors, which is important in applications like join size estimation for tables with unique keys (Kumar et al., 2017). We use the same synthetic data as before, except that all non-zero entries are set to 1. Results are presented in Figure 4. Note that weighted sampling methods (WMH, TS-weighted, and PS-weighted) are not included because they are exactly equivalent to the unweighted methods for binary vectors. All of the sampling methods clearly outperform linear sketching, and the gap is most significant when the overlap is small, as predicted by our theoretical results. #### 5.1.3. Join-Correlation Estimation As discussed in Section 4, post-join correlation estimation can be cast as an inner product estimation problem involving three vectors derived from a data column, which we denote \(\mathbf{a}\), \(\mathbf{a}^{2}\), and \(\mathbf{1_{a}}\). We do not explicitly construct synthetic database columns but instead generate vectors \(\mathbf{a}\) and \(\mathbf{b}\) as before, and derive \(\mathbf{a}^{2}\), \(\mathbf{1_{a}}\), \(\mathbf{b}^{2}\), and \(\mathbf{1_{b}}\) based on them. We set the overlap between vector pairs to 10%, although results were similar across all levels of overlap. To control the amount of correlation between our vectors (which are generated randomly), we use a standard regression-based method for adjusting correlation (Zhu and Wang, 2017).
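The following Python snippet (ours) mirrors the synthetic data generation just described; the exact mechanism for placing the shared support and the outliers is an assumption, since only the sizes, value ranges, and outlier fraction are specified above.

```python
import numpy as np

def make_vector_pair(n=10_000, nnz=2_000, overlap=0.10, outlier_frac=0.10, rng=None):
    """Generate a pair of sparse vectors with nnz non-zeros each and a given support overlap."""
    rng = rng or np.random.default_rng(0)
    shared = int(overlap * nnz)
    support = rng.choice(n, size=2 * nnz - shared, replace=False)
    idx_a = support[:nnz]                 # first nnz indices
    idx_b = support[nnz - shared:]        # last nnz indices; exactly `shared` of them are common

    def fill(idx):
        vals = rng.uniform(-1, 1, size=idx.size)
        outliers = rng.choice(idx.size, size=int(outlier_frac * idx.size), replace=False)
        vals[outliers] = rng.uniform(4, 6, size=outliers.size)   # moderate outliers, as in the text
        vec = np.zeros(n)
        vec[idx] = vals
        return vec

    return fill(idx_a), fill(idx_b)

a, b = make_vector_pair(overlap=0.10)
print(float(a @ b))   # ground-truth inner product for evaluating sketch estimates
```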
For the linear sketching methods, we split the storage size evenly among the sketches for all three vectors and estimate correlation as discussed in Section 4. For the uniform sampling methods (MH, TS-uniform, and PS-uniform), we instead follow the approach from (Zhu and Wang, 2017), computing a single sketch for each of \(\mathbf{a}\) and \(\mathbf{b}\) and then estimating the empirical correlation of the sampled entries. For TS-weighted and PS-weighted, we use our new method described in Section 4. As Figure 6 shows, MH, TS-uniform, and PS-uniform perform well despite the lack of weighted sampling. This is consistent with observations in prior work on the PS-uniform (KMV) method (Zhu and Wang, 2017). Even without weighting, these sketches are able to benefit from the advantage of data sparsity, unlike linear sketches. Nonetheless, TS-weighted and PS-weighted outperform all other approaches in terms of accuracy vs. sketch size. We note that we use the optimized variants of these methods discussed in Section 4 and Appendix A.4. #### 5.1.4. Comparison to End-Biased Sampling As mentioned, we also considered adding End-Biased Sampling (Zhu and Wang, 2017) as a baseline. This method is equivalent to our Threshold Sampling method, although it samples vector entries with probability proportional to their magnitude, normalized by the vector \(\ell_{1}\) norm. We refer to this as \(\ell_{1}\)-norm sampling to highlight the difference between our methods, which sample based on _squared magnitude_ normalized by the vector \(\ell_{2}\) norm. A variant of Priority Sampling can also be implemented using \(\ell_{1}\) sampling. In initial experiments, we found that End-Biased Sampling performed similarly to, but never better than, Threshold Sampling. This can be seen in Figure 5, which used the same experimental setting as Figure 3.

Figure 4. Inner product estimation for synthetic binary data. Weighted sampling methods are excluded since they are equivalent to their unweighted counterparts for binary vectors. Our PS-uniform and TS-uniform methods outperform both linear sketches and MH for computing inner products.

Figure 3. Inner product estimation for real-valued synthetic data. The lines for PS-uniform and TS-uniform overlap, as do the lines for our PS-weighted and TS-weighted methods. As predicted by our theoretical results, PS-weighted and TS-weighted consistently outperform all other baselines.

### Assessing Runtime Performance

As discussed in Section 1, it is also important to consider the time required to compute inner product sketches. Threshold and Priority Sampling are designed to compute a sketch of size \(m\) in time \(O(N)\) and \(O(N\log m)\), respectively, for a vector with \(N\) non-zero entries, essentially matching the asymptotic complexity of the fastest linear sketching methods like CountSketch, and significantly improving on the \(O(Nm)\) complexity of the WMH method from (Beng et al., 2017). To see how this theoretical improvement translates to practice, we assess the run-time efficiency of these methods using high-dimensional synthetic vectors with 250,000 entries, 50,000 of which are non-zero. As above, non-zero entries are random values in \([-1,1]\), except 10% are chosen as outliers. However, for all methods considered, the precise values of entries should have little to no impact on run-time.
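A timing harness for such measurements can be as simple as the following sketch (ours); `priority_sketch` refers to the illustrative implementation shown earlier, not the optimized code used for the reported numbers.

```python
import time
import numpy as np

def median_sketch_time(sketch_fn, vec, m, seed, reps=5):
    """Median wall-clock seconds to build one sketch of size m."""
    times = []
    for _ in range(reps):
        start = time.perf_counter()
        sketch_fn(vec, m, seed)
        times.append(time.perf_counter() - start)
    return float(np.median(times))

# 250,000-dimensional vector with 50,000 non-zero entries, as described above.
rng = np.random.default_rng(0)
idx = rng.choice(250_000, size=50_000, replace=False)
vec = {int(i): float(v) for i, v in zip(idx, rng.uniform(-1, 1, size=idx.size))}
# print(median_sketch_time(priority_sketch, vec, m=5_000, seed=1))
```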
In addition to our standard baselines, to evaluate runtime, we considered more efficient implementations of the WMH algorithm from (Beng et al., 2017), which relies on a sampling method studied in (Zhu et al., 2017) and (Zhu et al., 2017). This method 1) requires \(O(Nm)\) hash evaluations, and 2) requires an expensive discretization step. Several papers attempt to eliminate these limitations (Zhu et al., 2017; Li et al., 2017). We implement a recent, improved method called DartMinHash (DartMH) from (Dai et al., 2018), which runs in \(O(N+m\log m)\) time. Details on the method are discussed in Appendix A.3. The times required by different methods to create sketches of varying sizes are shown in Fig. 7. As expected, both our weighted and unweighted Threshold and Priority Sampling methods are significantly faster than the \(O(Nm)\) time methods like WMH, unweighted MinHash (MH), and Johnson-Lindenstrauss (JL). With an average runtime of 0.06 seconds across all sketch sizes, Priority Sampling is competitive with the less accurate CountSketch, whose average runtime is 0.05 seconds. Threshold Sampling was slightly slower, with an average time of 0.21 seconds. While this method has better asymptotic complexity than Priority Sampling (since there is no need to sort ranks), its slower empirical performance is due to the algorithm used to adaptively adjust the expected sketch size to exactly equal \(m\) (discussed in Section 2 and Appendix A.1). The WMH method from (Beng et al., 2017) is not competitive with any of the other methods, requiring 43 seconds to produce a sketch of size 1000, and 213 seconds to produce a sketch of size 5000. As such, it was omitted from Fig. 7. DartMH succeeds in speeding up the method, but even this optimized algorithm is between 20× and 60× more expensive than our Priority Sampling method. Finally, for completeness, we evaluated the estimation time for all sketches. As expected, there are no significant differences, since the estimation procedure for both sampling and linear sketches amounts to a simple sum over the entries in the sketch. For sketches of size 5000, estimation times ranged between 0.014ms and 0.052ms.

Figure 5. Comparison of End-Biased Sampling (TS-\(\ell_{1}\)norm) and its Priority Sampling counterpart (PS-\(\ell_{1}\)norm) against our TS-weighted and PS-weighted methods.

Figure 6. Join-Correlation Estimation for synthetic data. The lines for PS-weighted and TS-weighted overlap, as do the lines for our PS-uniform and TS-uniform methods, which outperform all other baselines.

Figure 7. Sketch construction time. We omit MH-weighted since its slow time would make it difficult to see the other lines. We see a clear linear dependence on the sketch size for JL and MH, and a milder dependence for DartMH. The run-time of CountSketch, Threshold Sampling, and Priority Sampling does not noticeably scale with the sketch size.

### Assessing Inner Product and Correlation Estimation for Real-World Data

In addition to synthetic data, we carry out experiments using a collection of real-world datasets. We use World Bank Group Finances data (Wang et al., 2017) to assess the effect of table overlap and outliers on inner product and join-correlation estimation. We also evaluate the performance of Threshold Sampling and Priority Sampling for text similarity estimation on the 20 Newsgroups dataset (Wang et al., 2017), and for join-size estimation over the Twitter dataset (Wang et al., 2017). #### 5.3.1. World Bank Finances Data
This collection consists of 56 tables (Wang et al., 2017), from which we randomly sampled 3,000 column pairs using the following approach (adapted from (Wang et al., 2017)). A column pair is represented as (\(\langle K_{A},V_{A}\rangle\), \(\langle K_{B},V_{B}\rangle\)), where \(K_{A}\) and \(K_{B}\) are join keys with temporal data, and \(V_{A}\) and \(V_{B}\) are columns with numerical values from tables \(A\) and \(B\). Since there can be repeated keys in \(K_{A}\) and \(K_{B}\), we pre-aggregate the values in \(V_{A}\) and \(V_{B}\) associated with repeated keys into a single value by summing them. This ensures that each key is associated with a single vector index. **Inner Product Estimation.** We first evaluate Threshold and Priority Sampling on the basic task of computing inner products between the real-world data columns. We normalize all columns to have unit Euclidean norm, which ensures the inner products have a consistent scale (and are upper bounded by 1). Then we construct sketches of size 400 for all methods, which are used to estimate inner products. Table 2 shows the inner product estimation results ranked by the average error over all pairs of columns (a single trial each). We also include the \(R^{2}\) score, which measures the goodness of fit of the estimated inner products to the actual inner products. The best methods are our TS-weighted and PS-weighted, followed by WMH and JL, which have average error roughly 3× larger. These results underscore the effectiveness of the weighted sampling methods. #### 5.3.2. 20 Newsgroups Dataset We assess the effectiveness of Threshold and Priority Sampling for estimating _document similarity_ using the 20 Newsgroups Dataset (Krishna et al., 2017). We generate a feature vector for each document that includes both unigrams (single words) and bigrams (pairs of words). To capture the importance of the unigrams and bigrams, we use TF-IDF weights, as is commonly done for high-dimensional, sparse vectors (Srivastava et al., 2016). To measure similarity between the resulting document feature vectors, we use the cosine similarity metric, which is equivalent to the inner product when the vectors are normalized to have unit norm. We sampled 200,000 document pairs from the dataset and plotted average error. As Figure 8(a) shows, the linear sketching methods (JL and CountSketch) show the worst performance. WMH performs similarly to the unweighted MH sampling method. Threshold and Priority Sampling obtain the best accuracy for all sketch sizes, although the difference between the unweighted and weighted methods is negligible. This difference becomes more pronounced when only considering documents with more than 500 words. As shown in Figure 8(b), for longer documents, our TS-weighted and PS-weighted obtain better accuracy than all other methods, including their uniform-sampling counterparts. The larger performance gap could be due to more variability in TF-IDF weights in longer documents (which would benefit the weighted sampling methods) or simply to the fact that estimating cosine similarity is more challenging for longer documents, so differences in the methods become more pronounced as estimation error increases. #### 5.3.3. Twitter Dataset Finally, we evaluate the effectiveness of Threshold and Priority Sampling for estimating join sizes using the Twitter dataset (Srivastava et al., 2016).
We use the standard reduction between join size estimation and inner product computation involving vectors containing key frequencies (Krishna et al., 2017). The Twitter dataset consists of a list of tuples (user, followed), representing the follower-followee relationship between users. We sampled 14,000,000 (user, follower) tuples from the dataset, which include approximately 420,000 users. Following the example in (Krishna et al., 2017), we perform a self-join of the table to identify all the 2-hop "follows" relationships. Results are shown in Figure 10. For all sketch sizes, TS-weighted and PS-weighted have the smallest errors, followed by the linear sketching methods, and then by WMH. The unweighted sampling methods (MH, TS-uniform, PS-uniform) perform poorly, since in this dataset there is a lot of variability in key frequencies. Accordingly, weighted sampling is needed to accurately estimate join size. ## 6. Additional Related Work As discussed in Section 1, we are only aware of two previous papers that directly address the inner product estimation problem using sampling-based sketches: the WMH work of (Bordes and T
Recently, Bessa et al. (PODS 2023) showed that sketches based on coordinated weighted sampling are, both theoretically and empirically, universally better than popular linear sketching methods such as Johnson-Lindenstrauss projection and CountSketch for the problem of inner product estimation. Building further on these results, we introduce and analyze two alternative sampling-based methods. In contrast to the computationally expensive algorithm of Bessa et al., our methods can be computed in linear time, and in practice they remain far superior to linear sketching, outperforming it on a variety of tasks. For example, consider the problem of estimating the correlation between columns of unjoined tables, which can be reduced to inner product estimation in a black-box way
2309.17249
Batch Calibration: Rethinking Calibration for In-Context Learning and Prompt Engineering
Prompting and in-context learning (ICL) have become efficient learning paradigms for large language models (LLMs). However, LLMs suffer from prompt brittleness and various bias factors in the prompt, including but not limited to the formatting, the choice of verbalizers, and the ICL examples. To address this problem, which results in unexpected performance degradation, calibration methods have been developed to mitigate the effects of these biases while recovering LLM performance. In this work, we first conduct a systematic analysis of the existing calibration methods, where we both provide a unified view and reveal the failure cases. Inspired by these analyses, we propose Batch Calibration (BC), a simple yet intuitive method that controls the contextual bias from the batched input, unifies various prior approaches, and effectively addresses the aforementioned issues. BC is zero-shot, inference-only, and incurs negligible additional costs. In the few-shot setup, we further extend BC to allow it to learn the contextual bias from labeled data. We validate the effectiveness of BC with PaLM 2-(S, M, L) and CLIP models and demonstrate state-of-the-art performance over previous calibration baselines across more than 10 natural language understanding and image classification tasks.
Han Zhou, Xingchen Wan, Lev Proleev, Diana Mincu, Jilin Chen, Katherine Heller, Subhrajit Roy
2023-09-29T13:55:45
http://arxiv.org/abs/2309.17249v2
# Batch Calibration: Rethinking Calibration for In-Context Learning and Prompt Engineering ###### Abstract Prompting and in-context learning (ICL) have become efficient learning paradigms for large language models (LLMs). However, LLMs suffer from prompt brittleness and various bias factors in the prompt, including but not limited to the formatting, the choice verbalizers, and the ICL examples. To address this problem that results in unexpected performance degradation, calibration methods have been developed to mitigate the effects of these biases while recovering LLM performance. In this work, we first conduct a systematic analysis of the existing calibration methods, where we both provide a unified view and reveal the failure cases. Inspired by these analyses, we propose _Batch Calibration_ (BC), a simple yet intuitive method that controls the contextual bias from the batched input, unifies various prior approaches, and effectively addresses the aforementioned issues. BC is zero-shot, inference-only, and incurs negligible additional costs. In the few-shot setup, we further extend BC to allow it to _learn_ the contextual bias from labeled data. We validate the effectiveness of BC with PaLM 2-(S, M, L) and CLIP models and demonstrate state-of-the-art performance over previous calibration baselines across more than 10 natural language understanding and image classification tasks. ## 1 Introduction Prompting large language models (LLMs) (Chowdhery et al., 2022; Anil et al., 2023) has become an efficient learning paradigm for adapting LLMs to a new task by conditioning on human-designed instructions. The remarkable in-context learning (ICL) ability of LLMs also leads to efficient few-shot learners that can generalize from few-shot input-label pairs (Brown et al., 2020; Liu et al., 2023). However, the predictions of LLMs are highly sensitive and even biased to the choice of templates (Min et al., 2022b), verbalizers (Holtzman et al., 2021), and demonstrations (Liu et al., 2022a), resulting in barriers for pursuing efficiently adaptable and robust LLM applications. Extensive research has been devoted to mitigating these biases, which we explicitly refer to the a-priori propensity of LLMs to predict certain classes over others unfairly. Lu et al. (2022) provide an analysis of the impacts of the order of ICL examples to LLMs and have explored the order selection mechanisms for ICL. On the other hand, Zhao et al. (2021) reveal the bias of language models toward certain answers and propose to calibrate the LLM given content-free tokens. More recently, Fei et al. (2023) detect the domain-label bias, and Han et al. (2023) treat the calibration of LLMs as learning a robust decision boundary. Though multiple calibration solutions have been provided, Figure 1: Batch Calibration (BC) achieves the best performance on 1-shot ICL over calibration baselines on an average of 13 classification tasks on PaLM 2-S and PaLM 2-L (Anil et al., 2023). the field currently lacks a unified analysis that systematically distinguishes and explains the unique characteristics, merits, and downsides of each approach. In this work, we first conduct a comprehensive analysis across existing calibration methods for LLMs. We approach the calibration problem from a distinctive point of view by interpreting the decision boundaries for each calibration method together with the ICL decision boundary. We start observing fatal failure cases for each method by extending them to more challenging and under-explored evaluation tasks. 
We then conclude the current limitation for each method with a novel interpretation from the decision boundary perspective, pointing to the need for a unified and widely applicable solution for conquering diverse bias sources in the field of LLM efficient learning. Inspired by these findings, we propose _Batch Calibration_ (BC), a zero-shot and inference-only calibration method for prompting and ICL. The central objective of BC is to accurately model the bias from the prompt context (referred to as _contextual bias_ in this paper) by marginalizing the LLM scores in the batched input. The simplicity of the design of BC only brings negligible computation overhead at the output of the LLM. We further extend BC to the black-box few-shot learning (BCL), a practical case where labeled data is available, by introducing a _single_ learnable parameter into BC, which enables it to adapt and _learn_ the contextual bias from the available data resources. We conducted extensive experiments on more than 10 natural language understanding tasks together with image classification tasks. BC stands as the most widely applicable calibration method while achieving state-of-the-art results. With the proposed black-box few-shot BCL framework, we show that further slight gains can be achieved by leveraging more labeled data. We provide further analysis with BC on robustness with templates, ICL choices and orders, and verbalizers, validating that BC can effectively alleviate prompt brittleness and make prompt engineering easier. To summarize, we provide the following contributions: * We provide a unified and systematic analysis of existing calibration methods through their decision boundaries, investigate the common use of content-free tokens as an estimator of contextual bias, and identify their deficiency with individual case studies. * We propose Batch Calibration (BC), a zero-shot and inference-only calibration method for ICL, that mitigates the bias from the batch. We further extend BC to learn from few-shot data. * We show that while conceptually simple, BC attains state-of-the-art performance in both zero-shot and few-shot learning setups over widely selected tasks with PaLM-2 and CLIP models. ## 2 A Systematic Analysis of Calibration ### Bias in Prompting and In-Context Learning (ICL) Prompting is an efficient learning paradigm that allows LLMs to perform zero-shot inference by conditioning on a human-designed instruction. Formally, denoting a test query-target pair \(\{x_{i},y_{i}\}\) and instruction as the context \(C\) for a classification task, LLMs make prediction by computing: \(\arg\max_{y\in\mathcal{Y}}\mathbf{p}(y|x_{i},C)\), where \(\mathbf{p}\in\mathbb{R}^{J}\) is the logits, and \(\mathcal{Y}\) denotes the verbalizers that define the label set for \(J\) classes. ICL further enables LLM to learn from \(k\) input-label pairs (i.e., few-shot setup), \(s^{(i)}=\texttt{Template}(x^{(i)},y^{(i)})\,\forall i\in\{1,...,k\}\), by concatenating few-shot demonstrations in a pre-defined template as the context, \(C=\texttt{Concat}(s^{(i)},...,s^{(k)})\). Though ICL has demonstrated strong performance with easy implementations, the prediction of LLMs is shown to be biased towards certain answers due to different elements of \(\mathbf{p}(y|x_{i},C)\)Lu et al. (2022). In the ICL context \(C\), majority label bias and recency label bias Zhao et al. (2021) can bias the prediction of LLMs toward the most frequent label and the label towards the end of the demonstration, respectively. 
Among verbalizer tokens \(y_{j}\in\mathcal{Y}\), LLMs are shown to be inherently biased towards predicting the label-tokens that appear more frequently from pretraining term statistics Shin et al. (2022); Razeghi et al. (2022). These bias factors significantly degrade the performance of LLMs for robust ICL applications. ### Overview of ICL Calibration Methods. Various _calibration_ methods have been proposed to mitigate the issue of bias identified above. In this section, we provide an overview of the state-of-the-art calibration methods. **Contextual Calibration (Zhao et al., 2021) (CC).** Motivated by a common calibration technique that applies affine transformation on the model outputs (Platt et al., 1999; Guo et al., 2017), Zhao et al. (2021) propose to calibrate the LLM prediction by first measuring the entire test-time distribution \(\hat{\mathbf{p}}\) by a content-free input. Using "\(\mathds{N}/\mathds{A}\)" as a content-free example, the model score distribution is generated by \(\hat{\mathbf{p}}_{\text{cf}}:=\mathbf{p}(y|[\mathds{N}/\mathds{A}],C)\). CC then generates the calibrated output by transforming the uncalibrated scores \(\mathbf{p}(y|x,C)\) with \(\mathbf{W}\in\mathbb{R}^{J\times J}\) via \(\mathbf{W}\mathbf{p}(y|x,C)\), where \(\mathbf{W}=\operatorname{diag}(\hat{\mathbf{p}}_{\text{cf}})^{-1}\) offsets the uncalibrated scores with the model score (a contextual prior) triggered by the content-free sample. **Domain-Context Calibration (Fei et al., 2023) (DC).** Instead of using a single content-free token, Fei et al. (2023) propose DC that estimates a contextual prior \(\hat{\mathbf{p}}(y|C)\) by using a random in-domain sequence. It randomly sampled \(L\) tokens at an average sentence length from an unlabeled text set. Then, it estimates the content-free prediction prior by averaging the model score \(T\) times, such that: \(\hat{\mathbf{p}}_{\text{random}}=\frac{1}{T}\sum_{t=1}^{T}\mathbf{p}(y|[\text {Random Text}]_{t},C)\). The final test-time prediction is then calibrated by dividing the estimated prior prediction, or equivalently in logits space, \(\mathbf{p}(y|x_{i},C)-\hat{\mathbf{p}}_{\text{random}}\). **Prototypical Calibration (Han et al., 2023) (PC).** PC learns a decision boundary with Gaussian mixture models (GMMs). It estimates \(J\) prototypical clusters for the model output \(\mathbf{p}\) for \(J\) classes: \(P_{\text{GMM}}(\mathbf{p})=\sum_{j=0}^{J-1}\alpha_{j}P_{G}(\mathbf{p}|\mathbf{\mu _{j}},\mathbf{\Sigma_{j}})\), where \(P_{G}\) denotes a multi-variate Gaussian distribution, and the parameters: mixing coefficient \(\alpha\), mean vector \(\mathbf{\mu}\), covariance matrix \(\mathbf{\Sigma}\) are estimated by the Expectation-Maximization (Moon, 1996). Followed by an automatic label assignment strategy, the predicted label is then computed by \(\operatorname{arg\,max}_{j}P_{G}(\mathbf{p}_{j}|\mu^{*},\mathbf{\Sigma}^{*})\) in the inference time. This EM-GMM process can require up to \(T\) repetitions to stabilize its estimation of clusters where \(T\) is a hyperparameter of the algorithm. ### Design Principles Behind Calibrations Summarizing the calibration methods with distinctive design principles discussed so far, in Table 1, we present a unified view of the characteristics of each method with their mathematical formulation, computation cost, and strengths & weaknesses. Though each approach demonstrates a clear motivation for calibrating ICL, it is still unclear which method surpasses others in what scenarios. 
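As a concrete reference for the two content-free approaches above, the sketch below shows one way the CC and DC correction terms could be computed from model scores; the `llm_class_probs`/`llm_class_logits` helpers, the token pool, and the counts \(L\) and \(T\) are assumptions for illustration, not a definitive implementation.

```python
import numpy as np

def contextual_calibration(llm_class_probs, context, x):
    # CC: estimate the contextual prior with a single content-free input ("N/A"),
    # then rescale the test-time scores by W = diag(p_cf)^-1.
    p_cf = np.asarray(llm_class_probs(context, "N/A"))
    p = np.asarray(llm_class_probs(context, x))
    return p / p_cf                                      # equivalent to W @ p

def domain_context_calibration(llm_class_logits, context, x, in_domain_tokens,
                               L=20, T=20, seed=0):
    # DC: average the class scores of T random in-domain sequences of length L,
    # then subtract this prior from the test-time logits.
    rng = np.random.default_rng(seed)
    priors = [llm_class_logits(context, " ".join(rng.choice(in_domain_tokens, size=L)))
              for _ in range(T)]
    p_hat = np.mean(np.asarray(priors), axis=0)          # \hat{p}_random
    return np.asarray(llm_class_logits(context, x)) - p_hat
```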
We proceed with an in-depth analysis of existing methods in representative tasks. We provide a novel view of calibration methods from a multi-variate decision boundary perspective. In pursuit of practical guidelines for ICL calibration, we set out two important research questions behind their design principles: **1)** What constitutes a better decision boundary for calibrations? **2)** Is content-free prior a good estimator of contextual bias? What Constitutes a Better Decision Boundary for Calibrations?To address this research question, we first derive the decision boundary for each category of calibration methods. We recall that the classification by a LLM is based on \(\operatorname{arg\,max}_{j\in\{0,\dots,J-1\}}p_{j}\) where \(p_{j}\) denotes the \(j\)-th element of output vector \(\mathbf{p}\). Consider binary classification problem for simplicity: the decision boundary \(h(\mathbf{p})\) for ICL is given by the line \(p_{0}-p_{1}=0\): the model predicts class 0, \(y_{0}\), if \(p_{0}-p_{1}\geq 0\), and class 1 otherwise. Consequently, CC and DC that apply an affine transformation at \(\mathbf{p}\) is equivalent to a linear transformation to the decision boundary. In CC with \(\mathbf{W}=\operatorname{diag}(\hat{\mathbf{p}})^{-1}\), \(\mathbf{b}=\mathbf{0}\), the decision boundary can then be derived as: \[p_{0}\times\frac{1}{\hat{p}_{0}}=p_{1}\times\frac{1}{\hat{p}_{1}}\to p_{0}-p_{ 1}\times\frac{\hat{p}_{0}}{\hat{p}_{1}}=0, \tag{1}\] \begin{table} \begin{tabular}{l|c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Tokera} & \multirow{2}{*}{**\#F**forward} & \multicolumn{2}{c}{Comp.} & \multirow{2}{*}{Cali.} & \multirow{2}{*}{Learning} & \multirow{2}{*}{Decision} & Multi- & Multi- \\ & & & Cost & & & Term & Boundary \(h(\mathbf{p})\) & Sentence & Class \\ \hline CC & N/A & \(1+1\) & Inverse & \(\mathbf{W}\mathbf{p}+\mathbf{b}\) & \(\mathbf{W}-\operatorname{diag}(\hat{p})^{-1},\mathbf{b}=\mathbf{0}\) & \(p_{0}=\mathbf{0}\) & ✗ & ✓ \\ DC & Random & \(20+1\) & Add & \(\mathbf{W}\mathbf{p}+\mathbf{b}\) & \(\mathbf{W}-\mathbf{I},\mathbf{b}=-\frac{1}{2}\sum_{t}\mathbf{p}(y|\operatorname{ ale}_{t},C)\) & \(p_{0}-p_{1}+\alpha\) & ✗ & ✓ \\ PC & - & \(1\) & EM-GMM & - & \(\sum_{j}\alpha_{j}P_{G}(\mathbf{p}|\mathbf{\mu_{j}},\mathbf{\Sigma})\) & \(P_{0}(\mathbf{p}|\mathbf{\mu_{j}},\mathbf{\Sigma})\) & ✓ & ✗ \\ BC (Ours) & - & \(1\) & Add & \(\mathbf{W}\mathbf{p}+\mathbf{b}\) & \(\mathbf{W}-\mathbf{I},\mathbf{b}=-\mathbf{\Sigma}_{k}\left[\mathbf{p}(y|x,C)\right]\) & \(p_{0}-p_{1}+\alpha\) & ✓ & ✓ \\ \hline \hline \end{tabular} \end{table} Table 1: Calibration methods with their mathematical formulation and their equivalent decision boundary derivations in a two-dimensional problem. The cost for the number of API calls is denoted as #Forward, where \(1\) counts for the ICL forward cost. The potential failure case for each calibration method in practical scenarios is marked as ✗. which is a _rotation_ of the ICL's linear decision boundary around the origin. Similarly, DC with \(\mathbf{W}=\mathbf{I}\), \(\mathbf{b}=-\frac{1}{T}\sum_{t}\mathbf{p}(y|\textsc{Random Text}|_{t},C)=- \hat{\mathbf{p}}\) is equivalent to a _shift_ of ICL's linear decision boundary away from the origin, such that \(p_{0}-p_{1}=(\hat{p}_{0}-\hat{p}_{1})\). It is worth noting that both calibration choices lead to a linear decision boundary, indicating that the calibration problem can be framed as an unsupervised decision boundary learning problem. 
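The rotation-versus-shift reading of Eq. 1 can be checked with a toy example; the prior \(\hat{\mathbf{p}}\) and the test score below are assumed numbers chosen only to make the effect visible.

```python
import numpy as np

p_hat = np.array([0.7, 0.3])    # assumed contextual prior \hat{p}
p = np.array([0.55, 0.45])      # assumed uncalibrated ICL scores for one example

icl_margin = p[0] - p[1]                           # ICL boundary: p0 - p1 = 0
cc_margin = p[0] / p_hat[0] - p[1] / p_hat[1]      # CC: boundary rotated about the origin
dc_margin = (p[0] - p_hat[0]) - (p[1] - p_hat[1])  # DC: boundary shifted away from the origin

print(icl_margin > 0, cc_margin > 0, dc_margin > 0)  # True, False, False
# ICL keeps class 0, while both calibrated scores flip to class 1 because the
# prior already favours class 0 by more than the observed score margin.
```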
Based on this intuition, we further derive the decision boundary for PC as \(P_{\text{G}}(\mathbf{p}|\mu_{0},\Sigma_{0})-P_{\text{G}}(\mathbf{p}|\mu_{1},\Sigma_{1})=0\), which delivers a non-linear boundary between the estimated Gaussian mixtures. We conduct a preliminary experiment to visualize the derived decision bounds from existing calibration methods alongside the ICL baseline. In Fig. 2, we observe that uncalibrated ICL is biased towards predicting negative in the SST-2 task. This biased prediction is then mitigated by each calibration method, where we observe a rotated decision boundary from CC, a shifted boundary from DC, and a non-linear boundary between the GMMs by PC. However, in the QNLI task (bottom row of Fig. 2), we observe failure cases in the calibration baselines, in particular, PC (third figure from the left), where it fails to capture the correct distribution for each class. From Fig. 2 and the additional results in Fig. 10 in Appendix §D, we find that while theoretically more flexible, the non-linear decision boundaries learned by PC tend to be susceptible to overfitting and may suffer from instability in EM-GMM. We hypothesize that the PC boundary is even more vulnerable to instability for more challenging multi-class tasks due to the increased difficulties of learning clusters and assigning classes correctly. Conversely, we find that linear decision boundaries, as evidenced by CC and DC, can be more robust and generalizable across tasks. We validate this hypothesis by proposing BC with extensive experiments in Sec. 4.2. Figure 2: Visualization of the decision boundaries of uncalibrated ICL, and after applying existing calibration methods and the proposed BC (to be introduced in Sec. 4) in representative binary classification tasks of SST-2 (top row) (Socher et al., 2013) and QNLI (bottom row) (Wang et al., 2018) on 1-shot PaLM 2-S. We show success and failure cases for each baseline method (CC, DC, and PC), whereas BC is consistently effective. Refer to Appendix §D for more examples. Figure 3: The distribution of ICL scores after applying CC and DC on QNLI. Due to an unfair content-free prior, the prediction by 1-shot PaLM-2 is biased towards entailment. Is Content-free Input a Good Estimator of the Contextual Prior? CC and DC both use a linear decision boundary but differ from each other by leveraging different formats of a content-free input to estimate the contextual prior. However, as we observed in Fig. 2, they both exhibit failure cases in QNLI, a question-answering NLI task. We hypothesize that contrary to the proposals made by CC and DC, relying on content-free tokens for calibration is _not_ always optimal and may even introduce additional bias, depending on the task type. For example, in a textual entailment task involving question-sentence pairs, we empirically observe that an ICL template employed with a content-free token 'N/A' such as 'Question: N/A, Sentence: N/A, Answer:' will result in a biased prediction towards 'entailment', because although 'N/A' is intended to be content-free, the LLM may nevertheless construe 'N/A' in the sentence to be substantively entailed to the 'N/A' in the question due to surface text equivalency. This phenomenon holds true for other multi-text classification tasks, such as paraphrasing and word disambiguation tasks. Consequently, the prior estimated via a single content-free token can lead to further bias. DC introduces multiple randomly sampled tokens to form a content-free input, e.g. 
'Question: that What old rubisco's the did Which?'. We suspect a possible reason is that random sequences, when used in conjunction with in-context demonstrations, can be susceptible to spurious relations among them that often lead to unfair priors further skewing the predictions, which is also reflected in Fig. 3, where CC and DC fail to mitigate the contextual bias in the QNLI task. In sum, the empirical observation shows that content-free inputs can be inappropriate prior estimators, especially for multi-sentence classification tasks. ## 3 Batch Calibration Inspired by the previous discussions, we now propose Batch Calibration (BC), a zero-shot, inference-only, and generalizable calibration technique with negligible computation cost. We further discuss how BC can be extended to _vision_-language models as well as the black-box _few-shot_ learning setup where some labeled samples are available. Batch Calibration (BC).Following the discussion in Sec. 2.3, we argue that the most critical component for calibration is to accurately estimate the contextual bias term \(\mathbf{p}(y|C)\). Both CC and DC, which use content-free and in-domain random tokens as trigger signals, respectively, have failure cases in multi-sentence classification when the estimation of the contextual bias is inaccurate. On the other hand, PC is vulnerable to overfitting and may incorrectly model the mixtures, especially in high-dimensional space. We, therefore, opt for a linear decision boundary for its robustness, and instead of relying on content-free tokens, we propose to estimate the contextual bias for each class \(\mathbf{p}(y=y_{j}|C)\) from a batch with \(M\) samples, \(\{x^{1},...,x^{M}\}\), in a _content-based_ manner by marginalizing the output score over all samples \(x\sim P(x)\) within the batch: \[\mathbf{p}(y=y_{j}|C)=\operatorname*{\mathbb{E}}_{x\sim P(x)}\left[\mathbf{p} (y=y_{j}|x,C)\right]\approx\frac{1}{M}\sum_{i=1}^{M}\mathbf{p}(y=y_{j}|x^{(i) },C)\,\forall\,y_{j}\in\mathcal{Y}. \tag{2}\] We then obtain the calibrated probability by dividing the output probability over the contextual prior, which is equivalently by shifting the log-probability by the estimated mean of each class: \[\hat{y}_{i}=\operatorname*{arg\,max}_{y\in\mathcal{Y}}\mathbf{p}_{\text{BC}}( y|x_{i},C)=\operatorname*{arg\,max}_{y\in\mathcal{Y}}\big{[}\mathbf{p}(y|x_{i},C)- \hat{\mathbf{p}}(y|C)\big{]}. \tag{3}\] Figure 4: Illustration of Batch Calibration (BC). Batches of demonstrations with in-context examples and test samples are passed into the LLM. Due to implicit bias sources in the context, the score distribution from the LLM becomes highly biased. BC is a modular and adaptable layer option appended to the output of the LLM/VLM. BC generates calibrated scores according to Eq. 2 & 3. Highlighted symbols indicate the distribution means (visualized _for illustration only_). It is noteworthy that this BC procedure is zero-shot and only involves unlabeled test samples. BC incurs negligible computation costs. We may either compute the correction term \(\hat{\mathbf{p}}(y|C)\) once all test samples are seen or, alternatively, in an on-the-fly manner that dynamically processes the outputs. To do so, we may use a running estimate of the contextual bias for BC. 
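A minimal sketch of the zero-shot BC computation in Eq. 2 & 3 is given below, operating on a matrix of per-example class log-probabilities; how those scores are obtained from the LLM is left abstract.

```python
import numpy as np

def batch_calibrate(log_probs):
    """Zero-shot BC on a [M, J] matrix of class scores for a batch of M inputs.

    Eq. 2: estimate the contextual bias as the per-class mean over the batch.
    Eq. 3: subtract that estimate from every example's scores before the argmax.
    """
    p_hat = log_probs.mean(axis=0, keepdims=True)   # contextual bias \hat{p}(y|C)
    calibrated = log_probs - p_hat
    return calibrated.argmax(axis=1), calibrated
```

For instance, calling `batch_calibrate(scores)` on a `[M, J]` score matrix returns calibrated predictions for the whole batch without any additional LLM calls.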
At the \(n+1\) mini-batch, the bias term is given by: \[\mathbf{p}_{t}^{n+1}(y|C)=\frac{n}{n+1}\mathbf{p}_{t}^{n}(y|C)+ \frac{1}{n+1}\hat{\mathbf{p}}^{n+1}(y|C), \tag{4}\] thereby allowing BC's calibration term to be estimated from a small number of mini-batches that is subsequently stabilized when more mini-batches arrive. Adjustable Batch Calibration Layer (BCL).While BC is designed to be zero-shot and inference-only, it is also common that some _labeled_ data are available. In this section, we describe a simple, adapted variant of BC that may further refine the calibration and mitigate any estimation errors from the unlabeled data, which we term _BCL_. Specifically, instead of reducing the bias term \(\hat{\mathbf{p}}\) from the test data only, we introduce a single additional hyperparameter _strength_\(\gamma\in\mathbb{R}\): \[\text{P}_{\text{BCL}}(y|x_{i},C)=\mathbf{p}(y|x_{i},C)-\gamma \hat{\mathbf{p}}(y|C), \tag{5}\] where \(\gamma\) controls the strength of BC. To select the appropriate \(\gamma\), we simply perform a grid search by uniformly sampling \(T\) different \(\gamma\) values in \([a,b]\) (we set \([a,b]=[-5,5]\), but any reasonable range may be used). The strength \(\gamma\) is then learned by \(\gamma^{*}=\arg\max_{\gamma\in[a,b]}R(\text{P}_{\text{BC}},\gamma)\), where \(R(\cdot,\cdot)\) is the evaluation function (e.g., accuracy) on the set of _labeled_ data, allowing the amount of calibration to be adjusted from evaluation metrics directly. We give concrete examples in Fig. 5, which illustrates the effect of BCL where we plot the accuracy in SST-2 and CB (De Marneffe et al., 2019) tasks over a range of \(\gamma\). We observe that \(\gamma=1\), which corresponds to BC without adjustment (purple line), leads to a strong but not optimal performance. By using the \(\gamma\) learned from the labeled data (a 128-shot randomly sampled set in this case), BCL estimates the contextual bias more precisely by leveraging the labeled data and achieves a performance that is very close to the optimal. We refer readers to Table 3 for more results. Calibrating Vision-Language Models.Recently, vision-language models (VLM) (Radford et al., 2021), which simultaneously encode visual and textual information, have demonstrated strong zero-shot generalization capability by rewriting class labels. However, the sources of bias as LLMs have also been observed in prompting VLMs (Alayrac et al., 2022) but have not been adequately addressed. In this work, we propose to apply BC to Zero-Shot (ZS) CLIP (Radford et al., 2021) and mitigate the biases in its zero-shot classifications. We follow the same notation from Sec. 2.1, where the test image is now \(x\), and the prompt template becomes the context, \(C\). Similarly, we append the BC layer at the output of the ZS CLIP and calibrate for each class following Eq. 2 & 3. ## 4 Experiments ### Experimental Setup Evaluation Data.For natural language tasks, in contrast to previous works that only report on relatively simple single-sentence classification tasks (Zhao et al., 2021; Fei et al., 2023; Han et al., 2023), we conduct experiments on 13 more diverse and challenging classification tasks, including the standard GLUE (Wang et al., 2018) and SuperGLUE (Wang et al., 2019) datasets. Specifically, we consider commonsense reasoning: BoolQ (Clark et al., 2019), COPA (Roemmele et al., 2011); word Figure 5: _BC benefits from labeled data:_ The performance of an adaptable batch calibration layer (BCL) compared to the zero-shot BC with a changing strength. 
The strength \(\gamma\) at 0 and 1 represent the uncalibrated ICL and BC, respectively. We highlight the optimal strength learned from a labeled set by a red vertical line and the best test strength by a green line. disambiguation: WiC (Pilehvar and Camacho-Collados, 2019); sentiment classification: SST-2 (Socher et al., 2013); paraphrasing: QQP, MRPC (Dolan and Brockett, 2005); natural language inference and entailment: ANLI-R{1,2,3} (Nie et al., 2020), CB (De Marneffe et al., 2019), RTE, QNLI (QA/NLI), MNLI (Williams et al., 2018). For image classification tasks, we include SVHN (Yuval, 2011), EuroSAT (Helber et al., 2019), and CLEVR (Johnson et al., 2017). Models.We conduct experiments mainly on the state-of-the-art PaLM 2 (Anil et al., 2023) for its variants with different sizes, PaLM 2-S, PaLM 2-M, and PaLM 2-L. PaLM 2 is trained using a mixture of objectives, and readers are referred to Anil et al. (2023) for more details. For VLMs, we report the results on CLIP ViT-B/16 (Radford et al., 2021). ### Main Experiments Experiments on Natural Language Tasks.We present the results across a diverse set of NLP tasks in Table 2. Notably, BC consistently outperforms ICL, yielding significant performance enhancement of 8% and 6% on PaLM 2-S and PaLM 2-L, respectively. This shows that the BC implementation successfully mitigates the contextual bias from the in-context examples and unleashes the full potential of LLM in efficient learning and quick adaptation to new tasks. In addition, BC improves over the state-of-the-art PC baseline by 6% on PaLM 2-S, and surpasses the competitive CC baseline by another 3% on average on PaLM 2-L. Specifically, BC is a generalizable and cheaper technique across all evaluated tasks, delivering stable performance improvement, whereas previous baselines exhibit varying degrees of instability across tasks: DC baseline is the least competitive; CC displays more failure cases in multi-sentence classification tasks, particularly for paraphrasing and NLI tasks, as we hypothesized in Sec 2.3; PC, while occasionally competitive, exhibits large \begin{table} \begin{tabular}{l|c c c c|c c c c c} \hline Model & \multicolumn{6}{c|}{PaLM 2-S} & \multicolumn{6}{c}{PaLM 2-L} \\ \hline Method & ICL & CC & DC & PC & BC & ICL & CC & DC & PC & BC \\ \hline SST-2 & 93.62\(\pm\)0.54 & 95.80\(\pm\)0.23 & 94.92\(\pm\)0.32 & **95.71\(\pm\)**0.10 & 95.94\(\pm\)**0.15 & 93.16\(\pm\)**9.58\(\pm\)**0.26 & 94.91\(\pm\)0.12 & 95.64\(\pm\)0.26 & 95.78\(\pm\)0.55 \\ MNLI & **68.52\(\pm\)**9.86 & 60.07\(\pm\)1.12 & 63.54\(\pm\)1.99 & 59.29\(\pm\)1.79 & **75.12\(\pm\)**3.76 & 77.73\(\pm\)**0.56 & 79.45\(\pm\)0.36 & 71.53\(\pm\)0.36 & 78.68\(\pm\)0.10 & **81.34\(\pm\)**2.59 \\ QNLI & 81.20\(\pm\)**9.50 & 56.86\(\pm\)0.29 & 65.62\(\pm\)2.53 & 69.82\(\pm\)1.73 & **82.45\(\pm\)**1.62 & 64.68\(\pm\)**5.33 & **69.71\(\pm\)**3.39 & 68.97\(\pm\)**2.77 & 61.01\(\pm\)15.28 & **87.90\(\pm\)**2.34 \\ MRPC & 66.24\(\pm\)0.15 & **70.44\(\pm\)**0.40 & 68.58\(\pm\)0.21 & **71.86\(\pm\)**0.79 & 70.05\(\pm\)**2.31 & 71.20\(\pm\)**4.03 & 68.68\(\pm\)0.68 & **75.39\(\pm\)**2.90 & 70.39\(\pm\)**2.56 \\ QQP & 63.91\(\pm\)0.46 & 68.55\(\pm\)0.54 & 53.92\(\pm\)0.35 & 65.28\(\pm\)1.43 & **71.46\(\pm\)**0.28 & **72.05\(\pm\)**3.10 & 71.52\(\pm\)**0.32 & 78.32\(\pm\)**3.26 & 74.24\(\pm\)**2.02 & 79.56\(\pm\)0.60 \\ BooIQ & 38.99\(\pm\)0.36 & 87.14\(\pm\)0.80 & 87.64\(\pm\)1.16 & **88.70\(\pm\)**15 & 87.83\(\pm\)**0.10 & 90.02\(\pm\)**6.90 & **90.15\(\pm\)**3.54 & 87.77\(\pm\)**1.17 & 64.04\(\pm\)**0.23 & 99.10\(\pm\)**3.22 
\\ CB & 45.71\(\pm\)0.81 & 29.64\(\pm\)1.25 & 65.71\(\pm\)1.20 & **81.07\(\pm\)**0.42 & 78.21\(\pm\)**1.19 & 92.86\(\pm\)**1.99 & 85.72\(\pm\)**2.73 & **92.86\(\pm\)**2.52 & 89.29\(\pm\)**32.10 \\ COPA & **94.62\(\pm\)**0.20 & 95.80\(\pm\)0.26 & **96.48\(\pm\)**2.89 & 96.20\(\pm\)**0.42 & **98.60\(\pm\)**0.26 & **98.60\(\pm\)**1.97 & 97.20\(\pm\)**1.10 & 97.40\(\pm\)**0.09 & 99.90\(\pm\)**0.01 & 97.00\(\pm\)**1.00 \\ RTE & 80.49\(\pm\)**1.39 & 79.78\(\pm\)0.92 & 76.82\(\pm\)1.21 & 80.43\(\pm\)1.07 & **83.47\(\pm\)**1.70 & 75.92\(\pm\)**0.91 & 80.00\(\pm\)**0.25 & 79.21\(\pm\)**1.18 & **96.64\(\pm\)**2.85 & 85.42\(\pm\)**2.43 \\ WGC & 50.69\(\pm\)0.59 & 50.50\(\pm\)0.50 & 49.97\(\pm\)0.13 & **51.38\(\pm\)**5.56 & **61.10\(\pm\)**2.00 & 51.35\(\pm\)**5.50 & 55.58\(\pm\)**3.36 & 54.67\(\pm\)**0.62 & 57.87\(\pm\)**1.68 & **64.83\(\pm\)**5.99 \\ ANLI-R1 & **46.24\(\pm\)**4.21 & 42.54\(\pm\)3.02 & 40.26\(\pm\)0.28 & **59.28\(\pm\)**5.03 & 63.06\(\pm\)0.25 & 71.92\(\pm\)1.37 & **73.56\(\pm\)**3.68 & 72.30\(\pm\)**0.03 & **75.00\(\pm\)**0.03 \\ ANLI-R2 & 40.40\(\pm\)**3.06 & 38.36\(\pm\)0.23 & 38.44\(\pm\)4.48 & 41.88\(\pm\)0.50 & **16.16\(\pm\)**2.52 & 60.41\(\pm\)**0.19 & 65.36\(\pm\)**3.95 & 65.48\(\pm\)**1.94 & 64.98\(\pm\)**2.94 & **67.70\(\pm\)**3.34 \\ **ANLI-R3** & 42.53\(\pm\)0.39 & 38.78\(\pm\)1.04 & **43.67\(\pm\)**5.25 & 37.50\(\pm\)**5.81 & **55.75\(\pm\)**1.66 & 61.35\(\pm\)**1.14 & **67.32\(\pm\)**3.98 & 66.23\(\pm\)**7.22 & 63.03\(\pm\)**0.50 & 66.38\(\pm\)**3.4 \\ Avg. & 66.20 & 62.39 & 64.98 & 67.65 & **74.41** & 75.16 & 77.83 & 76.89 & 76.13 & **81.09** \\ \hline \end{tabular} \end{table} Table 2: Accuracy (%) on natural language classification tasks with 1-shot PaLM 2-S and PaLM 2-L Models (Anil et al., 2023). We report the mean and standard deviation for all results for 5 different in-context examples. We reproduce all baselines, and the implementation details are described in Appendix SSC. The **best** and second-best results are marked in bold and ranked by color. Figure 6: The ICL performance on various calibration techniques over the number of ICL shots on PaLM 2-S. Each shot indicates 1 example per class in the demonstration. Lines and shades denote the mean and standard deviation over 5 random seeds, respectively. performance fluctuations, as evidenced by its large standard deviation, resulting in frequent substantial performance degradation. We further analyze the performance of BC by varying the ICL shots from 0 to 4 shots as shown in Fig. 6, and BC again outperforms all baseline methods. We also observe an overall trend for improved performance when more shots are available, and the performance disparities between BC and ICL converge on some tasks, which suggests that BC allows LLMs to more effectively take advantage of more in-context demonstrations. We also observe that PC exhibits the worst stability across the number of shots. In Table 3, we show that further slight gains, 1% on average, can be achieved by involving an adjustable strength parameter that refines the calibration and mitigates estimation errors. This alternative design not only makes BC applicable to both zero-shot and few-shot setups but also shows its capability to improve further from limited labeled data. Calibrating Vision-Language ModelsWe further extend the applicable scenarios of BC to multi-modal learning to handle the bias inherent in the prompt template designs in CLIP. We select three tasks in which the previous visual-prompting method shows significant improvement (Oh et al., 2023). 
As shown in Fig. 7, BC significantly improves the zero-shot baseline by 12% on average. This observation further highlights the presence of contextual bias even within vision-language models, and BC can successfully restore the performance of VLM in image classification tasks, suggesting that BC may serve as a versatile and common technique for mitigating contextual biases across multiple modalities. ### Robustness and Ablation Studies Robustness.We analyze the robustness of BC with respect to common prompt engineering design choices that were previously shown to significantly affect LLM performance (Lu et al., 2022; Liu et al., 2022): choices and orders of in-context examples, the prompt template for ICL, and the verbalizers, as shown in Fig. 8 evaluated on RTE. Setup details are listed in Appendix SSA. First, we find that BC is more robust to ICL choices and can mostly achieve the same performance with different ICL examples. Additionally, given a single set of ICL shots, altering the order between each ICL example has minimal impact on the BC performance. However, it is worth noting that an optimal order selection can still lead to promising ICL performance. Furthermore, we analyze the robustness of BC under 10 designs of prompt templates, where BC shows consistent improvement over the ICL baseline. Therefore, though BC makes further improvements, a well-designed template can further enhance the performance of BC. Lastly, we examine the robustness of BC to variations in verbalizer designs. Remarkably, even when employing unconventional choices such as emoji pairs as the verbalizers leading to dramatic oscillations of ICL performance, BC largely recovers performance. This observation shows BC robustifies LLM predictions under common prompt design choices and makes prompt engineering easier. Batch Size.We study the impact of batch size on the performance of BC as shown in Fig. 9. In contrast to PC, which also leverages an unlabeled estimate set, BC is remarkably more sample efficient, achieving a strong performance with only around 10 unlabeled samples, whereas PC requires more than 500 unlabeled samples before its performance stabilizes. \begin{table} \begin{tabular}{l|c c c c c c c c c c c c c c|c} \hline \hline Method & SST-2 & MNLI & QNLI & MRPC & QQP & BooIQ & CB & COPA & RTE & WiC & ANLI\({}_{\text{R}1}\) & ANLI\({}_{\text{R}2}\) & ANLI\({}_{\text{R}3}\) & Avg. \\ \hline BC & 95.4 & 75.0 & 83.5 & 68.6 & 70.3 & 87.9 & 75.0 & 98.0 & **84.1** & **63.3** & **59.8** & **51.1** & **53.3** & 74.3 \\ BCL & **96.3** & **75.0** & **83.5** & **74.3** & **72.3** & **88.8** & **83.9** & **99.0** & 82.7 & 63.2 & 58.0 & 49.7 & 52.2 & **75.3** \\ \hline \hline \end{tabular} \end{table} Table 3: Accuracy (%) on natural language classification tasks with the zero-shot BC and the BCL that involves an optimal strength term learned from a labeled set. The experiments are evaluated with the same in-context example on 1-shot PaLM 2-S. Figure 7: _BC improves zero-shot (ZS) image classification:_ Accuracy (%) on image classification tasks with the zero-shot CLIP ViT-16/B. The BC implementation is zero-shot, and we apply BC together with the CLIP to demonstrate the effectiveness of BC in vision-language models. Refer to additional tasks in Appendix SSA. ## 5 Related Work Understanding and Improving ICL.Lu et al. (2022) show the sensitivity of LLMs to ICL examples. This phenomenon is further explained through the effect from pretraining term frequencies (Razeghi et al., 2022) and corpora (Shin et al., 2022). 
Meanwhile, Xie et al. (2022) explains the ICL process through implicit Bayesian inference, and Wei et al. (2023) show the emergent ability of LLMs by learning new input-label mappings. Various methods have been proposed to optimally select better in-context templates (Sorensen et al., 2022; Pan et al., 2023; Yin et al., 2023) and examples (Rubin et al., 2022; Liu et al., 2022; Wan et al., 2023b). Specifically, Wan et al. (2023a) introduce a selection criteria based on the consistency, diversity, and repetition of in-context examples. Recently, noisy channel prompting (Min et al., 2022a) and flipped learning (Ye et al., 2023) have been proposed for robust ICL. Learning to assign labels by k-nearest neighbors (Xu et al., 2023) and training decoder networks (Cui et al., 2023) are also effective alternatives for few-shot ICL. Bias in ICL and Calibrating LLMs.Zhao et al. (2021) reveal the instability of LLMs in few-shot learning and demonstrate three bias sources: majority label bias, recency bias, and common token bias, as the bias factors behind the instability. They propose contextual calibration (CC) to mitigate these biases by grounding the prediction based on a content-free token as sample inputs. Si et al. (2023) characterize the feature bias of LLMs, and Wang et al. (2023) introduce the positional bias in candidate choices. Fei et al. (2023) further observe the existence of domain-label bias and propose domain-context calibration (DC) that uses random in-domain tokens for estimating the bias. Meanwhile, Han et al. (2023) analyze the impact of decision boundary for text classification tasks and propose to estimate prototypical clusters by Gaussian mixture models, thereby learning a robust decision boundary. Concurrently with our work, Pezeshkpour & Hruschka (2023) spot the positional bias in multiple-choice questions, and Zheng et al. (2023) propose to debias the positional bias in multiple choices with permutation-based prior estimation. BC differentiates from these methods as a generalizable solution across challenging classification tasks and modalities. ## 6 Conclusion We first revisit previous calibration methods while addressing two critical research questions from an interpretation of decision boundaries, revealing their failure cases and deficiencies. We then propose Batch Calibration, a zero-shot and inference-only calibration technique. We also introduce an adjustable extension, BCL, which offers more refined calibrations when labeled data is accessible. While methodologically simple and easy to implement with negligible computation cost, we show that BC scales from a language-only setup to the vision-language context, achieving state-of-the-art performance in both modalities. BC significantly improves the robustness of LLMs with respect to prompt designs, and we expect easy prompt engineering with BC while exploring the potential of BC to generative tasks in the future. Figure 8: _BC makes prompt engineering easier:_ Performance of BC with respect to ICL choices, prompt templates, and verbalizers. Figure 9: _BC is data-efficient and insensitive to the batch size:_ Performance of BC across different sizes of an initial unlabeled set without using a running estimate of the contextual bias. We compare BC with the state-of-the-art PC baseline that also leverages unlabeled estimate set, and experiments are conducted on PaLM 2-S. ## Acknowledgement We thank Emily Salkey for her sincere project management support. 
We also thank Dr. Mohammad Havaei for fruitful suggestions and feedback, and the PaLM 2 team at Google for helping with occasional infrastructure questions.
Prompting and in-context learning (ICL) have become efficient learning paradigms for large language models (LLMs). However, LLMs suffer from prompt brittleness and various bias factors in the prompt, including the formatting, the choice of verbalizers, and the ICL examples. To address the resulting unexpected performance degradation, calibration methods have been developed to mitigate the effects of these biases and recover LLM performance. In this work, we conduct a systematic analysis of existing calibration methods, provide a unified view, and reveal their failure cases. Building on these analyses, we propose Batch Calibration (BC), a simple yet intuitive method that controls the contextual bias from the batched input, unifies prior approaches, and effectively addresses the aforementioned issues.
2309.08369
An Efficient Wide-Range Pseudo-3D Vehicle Detection Using A Single Camera
Wide-range and fine-grained vehicle detection plays a critical role in enabling active safety features in intelligent driving systems. However, existing vehicle detection methods based on rectangular bounding boxes (BBox) often struggle with perceiving wide-range objects, especially small objects at long distances. And BBox expression cannot provide detailed geometric shape and pose information of vehicles. This paper proposes a novel wide-range Pseudo-3D Vehicle Detection method based on images from a single camera and incorporates efficient learning methods. This model takes a spliced image as input, which is obtained by combining two sub-window images from a high-resolution image. This image format maximizes the utilization of limited image resolution to retain essential information about wide-range vehicle objects. To detect pseudo-3D objects, our model adopts specifically designed detection heads. These heads simultaneously output extended BBox and Side Projection Line (SPL) representations, which capture vehicle shapes and poses, enabling high-precision detection. To further enhance the performance of detection, a joint constraint loss combining both the object box and SPL is designed during model training, improving the efficiency, stability, and prediction accuracy of the model. Experimental results on our self-built dataset demonstrate that our model achieves favorable performance in wide-range pseudo-3D vehicle detection across multiple evaluation metrics. Our demo video has been placed at https://www.youtube.com/watch?v=1gk1PmsQ5Q8.
Zhupeng Ye, Yinqi Li, Zejian Yuan
2023-09-15T12:50:09
http://arxiv.org/abs/2309.08369v1
# An Efficient Wide-Range Pseudo-3D Vehicle Detection Using A Single Camera ###### Abstract Wide-range and fine-grained vehicle detection plays a critical role in enabling active safety features in intelligent driving systems. However, existing vehicle detection methods based on rectangular bounding boxes (BBox) often struggle with perceiving wide-range objects, especially small objects at long distances. And BBox expression cannot provide detailed geometric shape and pose information of vehicles. This paper proposes a novel wide-range Pseudo-3D Vehicle Detection method based on images from a single camera and incorporates efficient learning methods. This model takes a spliced image as input, which is obtained by combining two sub-window images from a high-resolution image. This image format maximizes the utilization of limited image resolution to retain essential information about wide-range vehicle objects. To detect pseudo-3D objects, our model adopts specifically designed detection heads. These heads simultaneously output extended BBox and Side Projection Line (SPL) representations, which capture vehicle shapes and poses, enabling high-precision detection. To further enhance the performance of detection, a joint constraint loss combining both the object box and SPL is designed during model training, improving the efficiency, stability, and prediction accuracy of the model. Experimental results on our self-built dataset demonstrate that our model achieves favorable performance in wide-range pseudo-3D vehicle detection across multiple evaluation metrics. Our demo video has been placed at [https://www.youtube.com/watch?v=1gk1PmsQ5Q8](https://www.youtube.com/watch?v=1gk1PmsQ5Q8). DW image, Joint constraint loss, Pseudo-3D vehicle detection. ## I Introduction Wide-range vehicle detection is a fundamental task in autonomous driving. It can provide a distant and broad view for vehicles, helping them perceive potential collisions in advance and significantly improving the safety and stability of autonomous driving. Existing detecting methods [1, 2] based on multi-camera images perform well in detecting surrounding vehicles, and other work [3] incorporates radar data to boost performance in detecting small objects. While successful in the past, these approaches incur high labeling costs for multi-camera images and sensor data. On the other hand, current object detection methods [4, 5, 6, 7, 8, 9, 10, 11, 12] typically output axis-aligned BBox as the representation of objects, which can only provide rough vehicle size and localization information, and cannot describe the shape, appearance, and pose of the vehicle. This paper proposes an efficient wide-range Pseudo-3D Vehicle Detection (P3DVD) method based on images from a single camera, which combines high-precision detection for small distant targets and high detecting efficiency. Our method uses Double-Window (DW) images as the dataset, as shown in Fig. 1, which are generated from single-camera high-resolution (3840x2160) front-view images of the vehicle. This image format can fully utilize the limited image resolution to provide a wide range of vehicle image information and improve the parallelism efficiency of the model. Furthermore, this work specifically designs a novel representation method for vehicles called Pseudo-3D Vehicle Representation (P3DVR). 
P3DVR consists of an extended BBox and a Side Projection Line (SPL), and its properties are derived from the projection of the vehicle 3D BBox onto the image plane based on the rigid body characteristics of the vehicle. This simplifies the post-inference process of the model prediction results. P3DVR can not only describe the position and size information but also describe the appearance and posture of the vehicle. By incorporating geometric priors such as camera intrinsic parameters and ground plane, P3DVR can reconstruct the true 3D geometric structure of vehicles, providing important geometric cues for estimating the distance between the ego-vehicle and object vehicles [13], possible collision time [14], vehicle motion tasks [15], etc., without relying on expensive sensor data [16]. In this method, feature extraction and P3DVD are implemented by a fully convolutional network. The one-stage decoupled detection head [17] is extended according to the composition of P3DVR. During training, in addition to independent attribute losses, a shape matching loss is introduced based on the joint constraint betw Fig. 1: Double Window image and corresponding Pseudo-3D Vehicle Representation detection results. Each DW image is obtained by concatenating Center Window (CW) image and Global Window (GW) image up and down. SPL. This shape matching loss provides holistic supervision information for model learning, enhancing the consistency of predicted results for extended BBox and SPL to describe the pseudo-3D vehicle and improving the stability and authenticity of P3DVR. Moreover, image augmentation based on window-following is applied to address the data pattern of DW images, ensuring that the Center Window (CW) images are always taken from the concentrated area of distant vehicles in original images, which helps the model better adapt to and utilize the image data in DW pattern during training. To evaluate the performance of P3DVD, we design corresponding evaluation metrics and the experimental evaluations demonstrate the excellent performance of our method in wide-range P3DVD. In summary, the main contributions of this work are as follows: * DW images is introduced, which supports wide-range vehicle detection with limited resolution. Window-following image augmentation method is designed correspondingly, which helps the model make full use of the DW image during training. * P3DVR is specially designed for vehicles, which can provide not only the position and size of vehicles but also their pose information, as well as semantic information about their appearance. * A joint constraint loss based on expanded BBox and SPL is proposed, which provides holistic supervision during training and improves the detection performance. ## II Related work ### _Vehicle Detection_ In recent years, DNN-based object detection models have demonstrated remarkable performance in vehicle detection. Traditional two-stage detectors [5, 18] employ region proposal methods [5, 19] to generate candidate BBox, while one-stage detectors like SSD [7] and YOLO [6] pre-define anchors in advance for efficient inference. However, these methods still have limitations, such as reliance on anchor and post-processing steps like NMS. Anchor-free methods like Fully FCOS [9] directly predict BBox for each grid point on the feature map, providing flexibility for objects of different scales. Recently, DETR [20], an end-to-end object detection method, has gained attention for its set matching approach that eliminates NMS. 
The input of above detectors is the conventional front view image of vehicles, making it difficult to perform wide-range vehicle detection under the limitations of computational resources. Moreover, these detectors only provide axis-aligned BBox output, which lacks additional semantic and geometric information about the vehicles. Our proposed vehicle detector builds upon YOLOX [17] framework and uses DW pattern images as input to achieve accurate wide-range vehicle detection with only a reasonably small increase in computational cost. On the other hand, our vehicle detector is a P3DVD detector, which can predict richer semantic and geometric information about the vehicle in addition to its position and size information in the image. ### _Object Representation_ Recently, various object representations that account for data and object characteristics have been proposed in addition to the common axis-aligned BBox. 2D segmentation methods [21] classify pixels individually and output the external contours of objects. In the field of remote sensing, rotated BBox [22, 23, 24, 25] have been developed to accurately describe the rolling angle variation of objects. Additionally, inspired by text image detection, quadrilateral object representation methods [26] have been proposed to better determine the object boundaries. Due to different application scenarios, these representations may not fully capture the shape and pose of vehicles. TBox [16] extends from the axis-aligned BBox by adding additional information such as the visible surface category and three vehicle key points, but it requires a cascade-style approach and geometric reasoning. In this task, we represent the vehicle object as P3DVR. P3DVR consists of an extended BBox and SPL, which allow for the geometric decomposition of the forms that 3D BBox of vehicles projected onto the image based on the rigid body characteristics of vehicles. As a result, the attributes of P3DVR are fully decoupled and independently predicted, requiring only the combination of the predicted results of various attributes based on the Anchor [5] to obtain the P3DVR representation. ### _Shape Matching Constraint_ The shape of detected objects is represented by a composition of multiple attributes predicted by the network, and thus shape matching constraints can optimize the overall prediction performance of the model. Yang et al. [25] transform rotated BBox's shape parameters into a 2D Gaussian distribution and compute the Kullback-Leibler Divergence (KLD) between Gaussian distributions as the regression loss. This allows the rotated BBox parameters to mutually influence each other adaptively and collaboratively during the dynamic joint optimization process. Liu et al. [27] also convert the shape parameters of lane line segments into a 2D Gaussian distribution, and the KLD between Gaussian distributions is computed as an overall joint constraint loss. A joint constraint on the extended BBox and SPL provides a way to generate holistic supervisory information for multiple attributes of the joint target. It transforms all attributes of P3DVR into a unified 2D Gaussian distribution, which enables the model to establish intrinsic connections between the various attributes of P3DVR. This facilitates the model to adjust the predicted attributes in a coordinated manner during training, leading to more stable and accurate P3DVR predictions during testing. 
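To make the mechanics of such a joint constraint concrete, the sketch below builds a 2D Gaussian from a center, a pair of half-sizes, and a rotation angle, and evaluates a bidirectional KLD between a predicted and a ground-truth shape; the exact mapping used for P3DVR is specified later in the method section, so this parameterization is only an illustrative assumption.

```python
import numpy as np

def shape_to_gaussian(cx, cy, half_w, half_h, theta):
    """Map a rotated shape to a 2D Gaussian: mean = center, covariance = R S R^T."""
    mu = np.array([cx, cy])
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    S = np.diag([half_w, half_h])
    return mu, R @ S @ R.T

def kld_2d(mu0, S0, mu1, S1):
    """KL(N(mu0, S0) || N(mu1, S1)) for 2D Gaussians."""
    S1_inv = np.linalg.inv(S1)
    d = mu1 - mu0
    return 0.5 * (np.trace(S1_inv @ S0) + d @ S1_inv @ d - 2.0
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

def symmetric_kld(pred_shape, true_shape):
    """Bidirectional KLD between two shapes, usable as a joint shape-matching term."""
    g_p, g_t = shape_to_gaussian(*pred_shape), shape_to_gaussian(*true_shape)
    return 0.5 * (kld_2d(*g_p, *g_t) + kld_2d(*g_t, *g_p))
```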
## III Method In this section, we first briefly propose our problem definition and then introduce our model architecture and some details during the training process. ### _Problem Definition_ According to the statistical analysis of the dataset, distant vehicles (vehicles' sizes are smaller than a predefined threshold in the image) are concentrated in the central region from the ego-vehicle's perspective. By extracting the central region as the CW image, which contains the raw information of distant vehicles, and obtaining the GW image by scaling the original image to provide contextual information about nearby vehicles. The DW images are created by vertically concatenating the CW and GW images. These DW images provide wide-range vehicle information for the model. #### Ii-A1 Output Representation The P3DVR we designed can represent the 3D BBox of a vehicle projected onto a 2D image. The rigid characteristics of the vehicle can be utilized to decompose the representation into geometric attributes, which are then combined to form an extended BBox and the SPL of the vehicle. The P3DVR can be uniquely determined by these two components, as illustrated in Fig. 2a. The extended BBox can be described by a set of shape parameters \(EB=(x,y,w,h,r,p)\). In the figure, \(O_{c}(x,y),w,h\) represent the center coordinates, width, and height of the BBox of the vehicle target in the image, respectively. In addition to these attributes, we extend the BBox with two additional properties based on the vehicle's pose in the camera field of view, namely \(r\) representing the ratio of the width of the left part of the segmentation line to the total width of the BBox, and the pose classification result \(p\) based on the orientation of the vehicle as shown in Fig. 2b. The side projection line is the main form of the 3D geometric structure of the vehicle target, which can be described by a set of shape parameters \(SPL=(p_{wc},\theta)\). \(p_{wc}\) is the midpoint of the line connecting the contact points of the front and rear wheels on the same side, and it can be easily described by its coordinate as \(p_{wc}=(x_{l},y_{l})\). SPL is the connecting straight line past the front and rear wheel grounding points on the same side. According to the planar geometry prior in Fig. 2a, the SPL can be uniquely determined by midpoint \(p_{wc}\) and the angle \(\theta\) between this line and the x-axis of the image coordinate system. Therefore, the P3DVR can be represented as \((x,y,w,h,r,p,p_{wc},\theta)\). Given the P3DVR, the shapes of the red and yellow trapezoidal boxes in Fig. 2 can be determined, and these two trapezoidal boxes can describe some important 3D geometric structures of the vehicle. According to the composition of P3DVR, the P3DVD task can be decomposed into detecting the vehicle's BBox, pose classification, and ratio of the left part, and detecting SPL's middle point and its angle with the x-axis. ### _Network Architecture_ Fig. 3 briefly shows the network architecture of our P3DVD. With respect to the P3DVR, we have designed corresponding pseudo-3D detection heads, including an Expanded Object Detection Head(EODH) and a Side Projection Line Head(SPLH). #### Ii-B1 Backbone The model employs CSPDarknet53 [28] as the backbone for feature extraction. As shown in the Fig. 
3a, the backbone directly extracts image features from synthesized DW images to improve computational efficiency, and outputs multi-scale features \(\mathcal{F}_{DW}=\{F_{DW}^{sd}|s_{d}=8,16,32\}\) from the DW images, where \(s_{d}\) denotes the stride of \(F_{DW}^{sd}\) relative to the DW images. Due to the dual-scale characteristics of the DW images, \(F_{DW}^{sd}\) is composed of feature maps from two different scales. Therefore, during the multi-scale feature fusion process, it is necessary to first separate \(\mathcal{F}_{DW}\) based on concatenation information, and then effectively fuse [29] the feature map information between adjacent scales according to the scale information. The separation of \(\mathcal{F}_{DW}\) results in GW multi-scale features denoted as \(\mathcal{F}_{GW}=\left\{F_{g}^{s_{g}}|s_{g}=8s_{o},16s_{o},32s_{o}\right\}\), and CW multi-scale features denoted as \(\mathcal{F}_{CW}=\{F_{c}^{s_{c}}|s_{c}=8,16\}\), where \(s_{g}\) and \(s_{c}\) represent the stride of \(F_{g}^{s_{g}}\) and \(F_{c}^{s_{e}}\) with respect to the original image respectively, and \(s_{o}\) represents the scaling factor of GW image with respect to the original image. #### Ii-B2 Eodh As shown in Fig. 3b, the EODH is responsible for predicting 2D object properties based on the extended vehicle pose. In addition to outputting the BBox \(B\in\mathbb{R}^{4}\) and the objectness confidence \(C_{o}\), EODH also predicts the regression value \(R\) for the width ratio of the left part of the vehicle segmentation line, as well as the pose classification scores \(P\in\mathbb{R}^{8}\) for the vehicle's pose on the ground plane from the camera's viewpoint. The prediction output of EODH can be represented as follows: \[EB^{s}=[C_{o}^{s},B^{s},R^{s},P^{s}]=EODH^{s}(F^{s}) \tag{1}\] where \(s\) represents five different strides of the feature \(F^{s}\) composed of \(F_{g}^{s_{g}}\) and \(F_{c}^{s_{e}}\). #### Ii-B3 Splh As illustrated in Fig. 3c, the SPLH is responsible for predicting the SPL properties of pseudo-3D vehicles. According to the definition of SPL in Section III-A1, SPLH needs to output the normalized value of the SPL angle \(\Theta\), as well as the coordinates of the midpoint \(P_{wc}\) of the line connecting the contact points of the same side front and rear wheels of the vehicle. It is worth noting that the point coordinate prediction by SPLH heavily relies on the visible image information of the wheels on the same side of the vehicle. However, when the front and rear wheels of some vehicles are not visible or occluded in the image, the model cannot explicitly obtain the wheel-ground contact information from the image and needs to rely on long-term training to form empirical knowledge to predict the contact point coordinates from other features, which may result in unstable P3DVR prediction results. Therefore, a confidence measure \(C_{l}\) is added to determine the prediction results of the current Fig. 2: Output representation. (a) Geometric attribute decomposition. (b) The pose is encoded 0-7 counterclockwise, and the arrow indicates the orientation of the vehicle head from the perspective of the camera point coordinates in SPLH. Although the definition of the SPL angle in Section III-A1 also depends on the information of the contact points of the same side front and rear wheels, based on the rigid body characteristics of vehicles, it can be assumed that the angle of the SPL is equivalent to the inclination angle of the vehicle body in the image coordinate system. 
Therefore, SPLH can predict the SPL angle from the overall features of the vehicle, without overly relying on the information of the contact points of the same side front and rear wheels. The prediction output of SPLH can be represented as follows: \[SPL^{s}=[\Theta^{s},P^{s}_{wc},C^{s}_{l}]=SPLH^{s}(F^{s}) \tag{2}\] ### _Loss Function_ Based on the output content of EODH and SPLH, this section introduces the design of the loss function for the P3DVD model. For N positive target samples assigned by the network in a batch, the average loss is calculated for the predicted results \(\{(EB^{s}_{i},SPL^{s}_{i})|i=1,2,...,N\}\). It is worth noting that all loss calculations are performed in the pixel coordinate system scale, so the prediction outputs \(EB^{s}\) and \(SPL^{s}\) in the feature coordinate system need to be transformed to \(EB^{s}_{p}\) and \(SPL^{s}_{p}\) in the pixel coordinate system according to the corresponding scale factors \(s\). As shown in Fig. 3, the overall training loss consists of the Expanded Object Detection Loss \(L_{EOD}\), the Side Projection Line Detection Loss \(L_{SPL}\), and the Shape Matching Loss based on the joint constraint between the EB and SPL \(L_{OLC}\), formulated as follows: \[L=L_{EOD}+L_{SPL}+L_{OLC} \tag{3}\] Fig. 3: P3DVD model architecture. (a) Brief network architecture. (b) Expanded Object Detection Head (EODH). (c) Side Projection Line Head (SPLH). The model takes DW images as inputs, and then the detection head module performs P3DVD on each scale of feature maps, generating predictions for various attributes of the P3DVR. #### Iii-C1 Expanded Object Detection Loss \(L_{EOD}\) is composed of the vehicle object confidence loss \(L_{O}\), the BBox IOU loss \(L_{IOU}\), the regression loss for the occupancy ratio on the left side of the vehicle segmentation line \(L_{R}\), and the pose classification loss \(L_{Pose}\), formulated as follows: \[L_{EOD}=L_{O}+\alpha_{1}L_{IOU}+\alpha_{2}L_{R}+L_{Pose} \tag{4}\] The weight coefficients for the losses are denoted as \(\alpha_{1}\) and \(\alpha_{2}\) in the equations. \(L_{O}\) and \(L_{Pose}\) are calculated using binary cross-entropy. \(L_{IOU}\) is calculated based on the definition provided in reference [30]. \(L_{R}\) is calculated using the L1 loss. #### Iii-C2 Side Projection Line Detection Loss \(L_{SPL}\) consists of the angle regression loss \(L_{\Theta}\), the point regression loss \(L_{PC}\), and the confidence loss \(L_{Conf}\) for the midpoint prediction, formulated as follows: \[L_{SPL}=\alpha_{3}L_{\Theta}+L_{Conf}+\alpha_{4}L_{PC} \tag{5}\] In the formula, \(\alpha_{3},\alpha_{4}\) represent the loss weights. According to the definition of \(conf\), its supervision information is generated based on whether the grounding points of the front and rear wheel on the same side of the vehicle's ground truth label are present or not. The supervision label for \(conf\) is set to 1 if the grounding points are present, and 0 if they are not. Therefore, the loss for the predicted results of \(conf\) can be calculated using binary cross-entropy. The output \(p_{wc}\) from SPLH represents the predicted regression of the point relative to the anchor center, and the predicted angle \(\theta\) is normalized to the range of [-1, 1]. Then, the L1 loss can be used to calculate the training loss for these two predictions. #### Iii-C3 Joint Constraints loss of EB and SPL To compute \(L_{OLC}\), we first convert the P3DVR into a 2D Gaussian distribution [25]. 
The shape matching loss is then calculated as the KLD between these Gaussian distributions. The conversion process from P3DVR to 2D Gaussian distributions is exemplified in Fig. 4, the \(i\)-th target with SPL can be described as a 2D Gaussian distribution \(G_{p}^{i}(\mu,\sum)\), and the mean \(\mu\) and variance \(\sum\) can be represented as follows: \[\begin{split}\mu&=p_{wc}\\ \sum&=\begin{pmatrix}\cos\theta&-\sin\theta\\ \sin\theta&\cos\theta\end{pmatrix}\begin{pmatrix}\frac{w^{\prime}}{2}&0\\ 0&h^{\prime}\end{pmatrix}\begin{pmatrix}\cos\theta&\sin\theta\\ -\sin\theta&\cos\theta\end{pmatrix}\end{split} \tag{6}\] where \(h^{\prime}\) is the distance from the center of BBox \(O_{c}\) to SPL, and \(w^{\prime}\) is the diagonal length of the rectangular box on the side of the vehicle (the side that can see both front and rear tires from the camera's perspective). Due to the asymmetry of KL divergence, this method uses bidirectional KLD proposed by Wu et al [31] for calculation: \[D_{KL}^{i}=\frac{1}{2}[D_{KL}(G_{p}^{i}||G_{t}^{i})+D_{KL}(G_{t}^{i}||G_{p}^{i })] \tag{7}\] where \(G_{p}^{i}\) and \(G_{t}^{i}\) respectively represent the 2D Gaussian distributions inferred from the predicted results and the ground truth of the \(i\)-th object. Finally, the SPL shape matching loss is constrained by the object and SPL, and can be shown as follows: \[L_{OLC}=\frac{\alpha_{5}}{N^{\prime}}\sum_{i=1}^{N^{\prime}}\left(1-\frac{1}{1 +\ln(D_{KL}^{i}+1)}\right) \tag{8}\] where \(\alpha_{5}\) represents the loss weight, \(N^{\prime}\) represents the number of targets with SPL. According to the expression form of the vehicle 3D BBox projected onto the 2D image, the vehicle SPL is the main form to present the 3D geometric structure of vehicles in images. Given that P3DVR is sensitive to the slope and position of SPL, even slight differences between the predicted SPL and the ground truth can lead to a lack of authenticity in the predicted P3DVR. To address this issue, \(L_{OLC}\) is added based on the geometric constraint relationship between P3DVR attributes during the training stage. This approach introduces holistic supervision information for model learning, improving the stability and authenticity of P3DVR predicted by the model. ### _Window-Following Image Augmentation_ Image augmentation is crucial for model training, but existing methods cannot fully leverage the advantages of DW images due to their unique structure. According to the DW images synthesis method in Section 1, the CW must be taken from the concentrated area of distance vehicles in the original image. However, most data augmentation techniques alter the structure of the original image, which can shift the position of the concentrated area of distance vehicles. If we still obtain the CW from the predefined position, it will be difficult to ensure that it covers the concentrated area of distance vehicles, leading to a lack of fine-grained information for the model. To solve this issue and ensure that the CW images are always taken from the concentrated area of distance vehicles in the original image, we introduce a window-following image augmentation method. This method performs the same transformation as the image pixel coordinates on the predefined CW center position during the data enhancement process. After all augmentation operations are completed, CW is then taken according to the center coordinate position of the windows that follows the transformation. As shown in Fig. 
### _Window-Following Image Augmentation_ Image augmentation is crucial for model training, but existing methods cannot fully leverage the advantages of DW images due to their unique structure. According to the DW image synthesis method in Section 1, the CW must be taken from the concentrated area of distant vehicles in the original image. However, most data augmentation techniques alter the structure of the original image, which can shift the position of the concentrated area of distant vehicles. If we still obtain the CW from the predefined position, it will be difficult to ensure that it covers the concentrated area of distant vehicles, leading to a lack of fine-grained information for the model. To solve this issue and ensure that the CW images are always taken from the concentrated area of distant vehicles in the original image, we introduce a window-following image augmentation method. During data augmentation, this method applies to the predefined CW center position the same transformation that is applied to the image pixel coordinates. After all augmentation operations are completed, the CW is then taken according to the window center coordinates that have followed the transformation. As shown in Fig. 5, the window-following image augmentation guarantees that CW images are always obtained from the concentrated areas of distant vehicles, regardless of the geometric transformations applied during image augmentation. Fig. 5: DW image data processing results. In the DW image, the CW image is on top and the GW image is below it. The CW image is the content of the red rectangle area of the GW image in the original image. (a) shows the processing result without any image augmentation, where the window center position is taken from the predefined coordinates; (b) shows the augmentation result with mosaic and affine transformation: because the window center coordinates follow the transformation, the CW image can still be taken from the concentrated area of distant vehicles in one of the original images participating in the mosaic augmentation.
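The window-following idea can be summarized in a few lines. The sketch below is our illustration, not the released training pipeline: it assumes the augmentation is expressed as a 2x3 affine matrix, applies that matrix to the predefined CW centre, and then crops the CW around the transformed centre; the CW size and centre coordinates are the values reported for this dataset.

```python
# Minimal sketch of window-following: the affine matrix M used to warp the image is also
# applied to the predefined CW centre, and the CW crop is taken around the moved centre.
import numpy as np

CW_W, CW_H = 960, 384          # CW size used in the paper
CW_CENTER = (1840.0, 1248.0)   # predefined CW centre reported for this dataset

def follow_window(center_xy, affine_2x3):
    """Apply the same affine transform used on the pixels to the window centre."""
    x, y = center_xy
    p = affine_2x3 @ np.array([x, y, 1.0])
    return float(p[0]), float(p[1])

def crop_cw(image, center_xy, w=CW_W, h=CW_H):
    """Crop a h x w window around center_xy, clamped to the image borders."""
    H, W = image.shape[:2]
    cx = int(np.clip(center_xy[0], w // 2, W - w // 2))
    cy = int(np.clip(center_xy[1], h // 2, H - h // 2))
    return image[cy - h // 2: cy + h // 2, cx - w // 2: cx + w // 2]

# Example: a scale-and-translate augmentation applied to a synthetic 2160x3840 frame.
M = np.array([[0.9, 0.0, 40.0],
              [0.0, 0.9, 20.0]])
augmented = np.zeros((2160, 3840, 3), dtype=np.uint8)   # stand-in for the warped image
new_center = follow_window(CW_CENTER, M)
cw = crop_cw(augmented, new_center)
print(new_center, cw.shape)    # centre moves with the augmentation; crop stays 384x960
```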
## IV Experiment This section presents various comparative and ablation experiments to demonstrate the feasibility and superiority of the proposed wide-range P3DVD method based on DW images. ### _Dataset_ The dataset used in this study consists of images captured by a front-view camera mounted on a vehicle, taken in various traffic scenarios on urban and highway roads, including daytime and nighttime as well as clear and rainy weather conditions. The images have a resolution of 3840x2160, with a horizontal FOV of \(120^{\circ}\) and a vertical FOV of \(54.8^{\circ}\). The dataset provides BBox annotations for all vehicles in the images, as well as the segmentation ratio of the left part and the orientation pose of the vehicles on the ground plane under the camera's perspective. Only the vehicles with visible front and rear wheels were annotated with wheel grounding point information. For the experiments, a total of 5964 images are randomly selected from the dataset for training, and 3376 images are used for validation. In Fig. 6a, the statistical histogram shows that large objects account for the majority (52.63\(\%\)) of the annotations, while medium and small objects have a lower proportion (37.37\(\%\) and 10.00\(\%\)). In the training dataset, the largest box has an area of \(4.01\times 10^{6}\) square pixels, the smallest has 2.94 square pixels, and the average area of the annotation boxes is \(2.93\times 10^{4}\) square pixels. On average, each image has 10.6 targets. The object location distribution plots in Fig. 6b illustrate the positions of large, medium, and small targets in the images of the training dataset. It can be observed that the distribution of large and medium targets is relatively scattered, while the distribution of small targets is relatively concentrated, which verifies the earlier hypothesis that distant and small targets are concentrated in the Center Window. According to statistics, the coordinates of the CW's midpoint are (1840, 1248). Fig. 6: The statistical histograms and location distribution scatter plots of objects with different sizes in the training dataset. The objects are divided by their area into small: (0, 1024), medium: (1024, 9216), and large: (9216, +\(\infty\)). (a) Statistical histograms. A on the horizontal axis represents the arithmetic square root of the labeled box area, and N on the vertical axis represents the number of boxes within that area range. (b) Object location distribution scatter plots. X and Y represent the pixel coordinate values of the input images in the training dataset. ### _Experiment Settings_ #### IV-B1 Evaluation Metrics The commonly used evaluation method [32] for image-based 2D object detection tasks only assesses the prediction of object categories and bounding boxes (BBox) and is unable to evaluate the other attributes of the P3DVR. To address this, we propose an evaluation method for P3DVD that first computes the BBox IOU between the prediction and the ground truth, and considers a target a true positive when the IOU exceeds a threshold \(\lambda\). We then calculate the attribute prediction error against the ground truth and determine whether the prediction is correct based on an error threshold \(\lambda^{\prime}\). Finally, we compute the ratio of correctly detected attributes to the total number of true positives as the average precision (AP) of the attribute under the threshold values \(\lambda\) and \(\lambda^{\prime}\). The evaluation of each P3DVR attribute is expressed as follows: \[\mathbb{T}=\Big\{(i,j)\;\big|\;\text{IOU}(\hat{B}_{i},B_{j})\geq\lambda,\;i\in[0,N_{g}),\;j\in[0,N_{p})\Big\} \tag{9}\] \[\text{ABP}@\lambda@\lambda^{\prime}_{b}=\frac{1}{N_{T}}\sum_{(i,j)\in\mathbb{T}}\delta\big(1-\text{IOU}(\hat{B}_{i},B_{j})\leq\lambda^{\prime}_{b}\big)\] \[\text{ARP}@\lambda@\lambda^{\prime}_{r}=\frac{1}{N_{T}}\sum_{(i,j)\in\mathbb{T}}\delta\big(|\hat{r}_{i}-r_{j}|\leq\lambda^{\prime}_{r}\big)\] \[\text{PP}@\lambda=\frac{1}{N_{T}}\sum_{(i,j)\in\mathbb{T}}\delta\big(\widehat{pose}_{i}=pose_{j}\big)\] \[\text{AAP}@\lambda@\lambda^{\prime}_{a}=\frac{1}{N^{\prime}_{T}}\sum_{(i,j)\in\mathbb{T}^{\prime}}\delta\big(|\hat{\theta}_{ni}-\theta_{nj}|\leq\lambda^{\prime}_{a}\big)\] \[\text{APP}@\lambda@\lambda^{\prime}_{p}=\frac{1}{N^{\prime}_{T}}\sum_{(i,j)\in\mathbb{T}^{\prime}}\delta\big(\|\hat{p}_{wci}-p_{wcj}\|\leq\lambda^{\prime}_{p}\big)\] In Equation (9), \(ABP@\lambda@\lambda^{\prime}_{b}\), \(ARP@\lambda@\lambda^{\prime}_{r}\), \(PP@\lambda\), \(AAP@\lambda@\lambda^{\prime}_{a}\), and \(APP@\lambda@\lambda^{\prime}_{p}\) are defined as the precision of the prediction results under different IOU and attribute error thresholds, respectively for the BBox, the vehicle segmentation ratio, the pose classification, the SPL angle, and the midpoint of the front and rear wheel grounding points. "@" denotes the evaluation result of each metric at a certain threshold, while \(\lambda^{\prime}_{b}\), \(\lambda^{\prime}_{r}\), \(\lambda^{\prime}_{a}\), and \(\lambda^{\prime}_{p}\) are the corresponding attribute prediction error thresholds. \(N_{g}\) and \(N_{p}\) are the numbers of ground truth and predicted objects, respectively, \(N_{T}\) is the number of true positives, \(\delta\) represents the comparison function, \(\mathbb{T}^{\prime}\) denotes the matching set of predicted objects whose ground-truth objects have wheel grounding point labels, and \(N^{\prime}_{T}\) is the number of objects in \(\mathbb{T}^{\prime}\).
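Before turning to the averaged scores, the following small Python sketch shows our reading of how one of the attribute precisions in Equation (9) can be computed: predictions and ground truths are matched by BBox IoU at threshold \(\lambda\), and the fraction of matches whose attribute error falls within \(\lambda^{\prime}\) is reported. The dictionary layout, the all-pairs matching, and the toy numbers are assumptions made for illustration.

```python
# Minimal sketch (our reading of Eq. (9), not an official evaluation script).
def iou(a, b):
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def attribute_precision(gt, pred, attr_err, lam=0.5, lam_prime=0.05):
    """gt/pred: lists of dicts with a 'box' key plus attribute fields.
    attr_err(g, p) -> scalar error for the attribute being evaluated."""
    matches = [(g, p) for g in gt for p in pred if iou(g["box"], p["box"]) >= lam]
    if not matches:
        return 0.0
    ok = sum(1 for g, p in matches if attr_err(g, p) <= lam_prime)
    return ok / len(matches)

# Example with toy values: an ARP-style check on the segmentation ratio r.
gt   = [{"box": (100, 100, 300, 260), "r": 0.40}]
pred = [{"box": (104, 102, 298, 258), "r": 0.43}]
print(attribute_precision(gt, pred, lambda g, p: abs(g["r"] - p["r"]),
                          lam=0.5, lam_prime=0.05))   # -> 1.0
```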
To evaluate the overall performance, ABP, ARP, PP, AAP, and APP are used to represent the average precision of \(ABP\)@\(\lambda\)@\(\lambda^{\prime}_{b}\), \(ARP\)@\(\lambda\)@\(\lambda^{\prime}_{r}\), \(PP\)@\(\lambda\), \(AAP\)@\(\lambda\)@\(\lambda^{\prime}_{a}\), and \(APP\)@\(\lambda\)@\(\lambda^{\prime}_{p}\). Specifically, \(\lambda\) is selected from 0.5:0.05:0.95 (an arithmetic sequence from 0.5 to 0.95 with a common difference of 0.05), \(\lambda^{\prime}_{b}\) from 0.02:0.02:0.2, \(\lambda^{\prime}_{r}\) from 0.01:0.01:0.1, \(\lambda^{\prime}_{a}\) from 0.01:0.01:0.1, and \(\lambda^{\prime}_{p}\) from 2:2:20. In addition, the \(AP\)@\(\lambda\) and \(AR\)@\(\lambda\) metrics [32] of object detection are also important evaluation metrics. To evaluate the overall detection performance of the P3DVD model, the average AP and AR are also taken as the final model score. #### IV-B2 Implementation Details In the training stage, we adopted a dynamic-k assignment strategy with an expandable cost term, similar to simOTA [17]. This strategy allows the predicted results from all five feature maps to be used together for the assignment of positive and negative samples and for the loss calculation. Correspondingly, in the inference stage, we perform non-maximum suppression (NMS) on the predictions from all five scales to obtain the final detection results. To reduce computational costs, this work constructs a lightweight network using depthwise separable convolutions [33, 34] and channel compression. The entire model is trained for 200 epochs in mixed-precision mode with 64 images per batch. The augmentation techniques used during training include image translation, shear transformation, scaling, horizontal flipping, and mosaic augmentation. The window-following step described in Section III-D was also used in conjunction with the aforementioned augmentation techniques. After all data augmentation, a 960x384-sized image is extracted from the image as the CW image based on the window center coordinate position. Then, 52 and 60 rows are cropped from the top and bottom of the image, respectively, to obtain a 3840x2048-sized image, which is then scaled down to 1/4 to obtain a 960x512-sized GW image. Finally, the CW and GW images are concatenated vertically to form a 960x896 DW image as input to the model. The model is trained using the SGD optimizer, with an initial learning rate dynamically adjusted based on the batch size: \(0.01\times\text{batch size}/64\). The learning rate is updated using the cosine annealing strategy during training. In the testing stage, the NMS post-processing operation is performed with a predicted target confidence threshold of 0.5 and an IOU threshold of 0.65. Both training and testing are completed on a single NVIDIA RTX 3090 GPU. ### _Comparative Experiment_ To verify the superiority of DW-mode images for wide-range vehicle detection, we compared our method, which takes DW images as input, with the baseline method, which takes GW images as input, on the P3DVD task. To make a fair comparison, the image resolution of the baseline input was adjusted to 1280x704 (i.e., the original image was scaled down 3 times), referred to as GW\({}^{\prime}\), which is comparable in data size to DW. Fig. 7: The detection performance of our method and the baseline for different object sizes. (a) Small size. (b) Medium size. (c) Large size.
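For reference, the DW input assembly described in the implementation details above (crop 52/60 rows, 1/4 downscale to the GW, full-resolution CW around the predefined centre, vertical concatenation) can be sketched as follows. This is our illustration, not released code; in particular, plain pixel subsampling is used in place of whatever interpolation the actual pipeline employs.

```python
# Minimal sketch of assembling a DW input from one 3840x2160 frame, following the numbers
# in Section IV-B2. Nearest-neighbour subsampling keeps the example dependency-free.
import numpy as np

CW_W, CW_H = 960, 384
CW_CENTER = (1840, 1248)        # predefined CW centre for this camera setup

def build_dw(frame):
    assert frame.shape[:2] == (2160, 3840)
    # Global Window: crop 52 rows from the top and 60 from the bottom, then scale by 1/4.
    cropped = frame[52:2160 - 60]            # 2048 x 3840
    gw = cropped[::4, ::4]                   # 512 x 960 (nearest-neighbour stand-in)
    # Center Window: full-resolution crop around the predefined centre.
    cx, cy = CW_CENTER
    cw = frame[cy - CW_H // 2: cy + CW_H // 2, cx - CW_W // 2: cx + CW_W // 2]
    # DW image: CW stacked on top of GW -> 896 x 960.
    return np.concatenate([cw, gw], axis=0)

frame = np.zeros((2160, 3840, 3), dtype=np.uint8)
print(build_dw(frame).shape)     # (896, 960, 3)
```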
The comparative experimental results are presented in Table I, where the vehicles are divided into four categories based on their pixel area size: small, medium, large, and all, following the evaluation criteria proposed by [32], and the corresponding PR curves are shown in Fig. 7. The PR curves in Fig. 7 demonstrate the significant improvement of our method in detecting small and medium targets, while still achieving performance similar to the baseline in large category. To be specific, as shown in Table I, our method achieves 30.86\(\%\) and 38.66\(\%\) higher AP and AR than baseline in detecting small objects and achieves 20.04\(\%\) and 20.05\(\%\) higher in the medium category. The improvement benefits from CW image providing original distant vehicle information (typically falling in the small or medium category), leading to significantly better detection performance in the small and medium categories compared to the Baseline. Overall, our proposed method can significantly improve the detection performance of distant vehicles. It beats the baseline 13.51\(\%\) AP and 14.40\(\%\) AR in all category, while it only slightly increases the model parameters without adding computational cost. To highlight the excellent performance of our method in detecting distant vehicles, we visually compared the prediction results of the Baseline and our method. As shown in Fig. 9, in scenarios with complex traffic on city roads and open views on highways, our method can detect vehicles farther away compared to the Baseline, while also having comparable detection capabilities for larger nearby vehicles. Therefore, our method has the ability to detect vehicles over a wide range. ### _Ablation Experiment_ During the model training stage, we designed the window-following image augmentation and the shape matching loss \(L_{OLC}\) separately based on the characteristics of DW images and P3DVR to better train the model. To verify the effectiveness of these learning methods on the model, we conducted relevant ablation experiments. #### Iv-D1 Contribution of Window-Following Image Augmentation As shown in Fig.8, it can be observed that window-following image augmentation significantly improves ARP and APP metrics, as well as AAP and PP metrics. The first and second rows of TableII clearly show that window-following can comprehensively improve the detection performance of the model across all metrics, especially 0.83\(\%\), 1.91\(\%\), 3.88\(\%\) and 3.49\(\%\) higher in ARP, APP, AP, and AR. This is because window-following helps CW images always be extracted from the concentrated area of distant vehicles in the image, providing the model with more positive instances of small objects and thus more supervisory information during training. This is also the main reason why AP and AR can be greatly improved. However, the reasons for the relatively small improvements in PP, AAP, and ARP are that the classification of 8 vehicle poses is a simple task, and the model can learn it well with fewer training samples. Additionally, most of the distant vehicles in CW are in pose 0, as defined in Fig. 2b, which is recognized as an angle of -90\({}^{\circ}\) and a ratio of 1, so the method does not introduce much variation in training angles for the model. #### Iv-D2 Contribution of \(L_{olc}\) Since there is no coupling relationship between data augmentation and loss functions, we directly added \(L_{OLC}\) to the window-following image augmentation during model training. 
However, the addition of \(L_{OLC}\) does not bring a very obvious improvement in the evaluation metrics, as shown in Fig. 8. To be specific, the comparison between the second and third rows of Table II shows that adding \(L_{OLC}\) during training, except for AAP, slightly weakened the model's detection performance on the other metrics, such as ARP (-0.16\(\%\)) and APP (-0.07\(\%\)), and particularly PP (-0.42\(\%\)), compared with the results of our previous window-following method. Fig. 8: The effect of adding window-following (W-F) and \(L_{OLC}\) on the detection performance at different IoU thresholds. (a) ARP. (b) APP. (c) AAP. (d) PP. From Equation 6 and Fig. 4, it can be seen that the participation of \(pose\) in the calculation of \(L_{OLC}\) is not high, and therefore, during training, \(L_{OLC}\) provides little supervision for the predicted \(pose\). Additionally, since the weights of the loss terms were not adjusted while controlling the variables in the experimental process, the model's attention to the learning of the \(pose\) attribute decreased. To demonstrate the effectiveness of \(L_{OLC}\) in improving model learning, we conducted visual comparisons. As shown in Fig. 9(a), the model trained without \(L_{OLC}\) supervision produces P3DVRs that do not fit the target instance well. In contrast, adding \(L_{OLC}\) supervision helps the model to correct the P3DVR boundaries and better fit the vehicle contour. As shown in the top middle columns of Fig. 9(b), when the side view of the vehicle is less visible (the vehicle's yaw angle is close to 90\({}^{\circ}\)), the SPL is highly sensitive to the angle, and slight disturbances can cause the P3DVR to lack realism. Adding \(L_{OLC}\) supervision helps the model mitigate this problem by predicting a better \(pose\) or higher-precision angles and midpoints of the line connecting the wheel grounding points. ### _Limitations_ Our method also has some limitations. When multiple vehicles overlap or the vehicle is too large, the P3DVR output by the model cannot accurately describe the vehicle contour, as shown in Fig. 10(a) and 10(b). When the front and rear wheels of the vehicle are occluded, the model cannot accurately predict the midpoint coordinates of the front and rear tire ground contact points, so the position of the SPL cannot be determined, as shown in Fig. 11c. Fig. 9: Distant vehicle detection comparison. The upper and lower rows in the Urban and Highway scenes respectively visualize the detection results on the CW and GW. (a) Baseline results. (b) Our results. The black rectangular box in the lower images represents the CW component of the DW images. ## V Conclusion We propose an efficient wide-range P3DVD method based on DW images, together with an efficient learning approach for the P3DVD model. DW images can provide a wealth of vehicle perception information to the model within a limited resolution. To complete the P3DVD task, we designed a P3DVR specifically for vehicles to describe more geometric attributes of the vehicles. Based on these two points, we accordingly designed the window-following image augmentation and the shape matching loss with joint constraints on the object and the SPL, helping the model to better utilize DW image information and learn more stable P3DVR predictions. Finally, we conducted corresponding experiments to demonstrate the effectiveness of our proposed method for wide-range P3DVD.
Since the method proposed in this paper only provides a pseudo-3D representation of vehicles in the image and cannot directly recover the vehicles' actual geometric properties in 3D space, in future research we will explore how the model can further predict the actual distance of vehicle targets relative to the camera based on the P3DVD results.
Wide-range and fine-grained vehicle detection plays an important role in enabling active safety functions in intelligent driving systems. However, existing vehicle detection methods based on rectangular bounding boxes (BBox) often struggle to perceive objects over a wide range, especially small objects at long distances, and the BBox representation cannot provide detailed geometric shape and pose information of vehicles. This paper proposes a novel wide-range pseudo-3D vehicle detection method based on images from a single camera, together with an efficient learning method. The model takes as input a spliced image composed of two sub-window images taken from a high-resolution image; this image format aims to make the most of the limited image resolution in order to preserve essential information about vehicle objects over a wide range. Pseudo-3D object
2309.10526
NSOAMT -- New Search Only Approach to Machine Translation
Translation automation mechanisms and tools have been developed for several years to bring people who speak different languages together. A "new search only approach to machine translation" was adopted to tackle some of the slowness and inaccuracy of the other technologies. The idea is to develop a solution that, by indexing an incremental set of words that combine a certain semantic meaning, makes it possible to create a process of correspondence between their native language record and the language of translation. This research principle assumes that the vocabulary used in a given type of publication/document is relatively limited in terms of language style and word diversity, which enhances the effect of instantaneousness and rigor in the translation process through the indexing process. A volume of electronic text documents were processed and loaded into a database, and analyzed and measured in order to confirm the previous premise. Although the observed and projected metric values did not give encouraging results, it was possible to develop and make available a translation tool using this approach.
João Luís, Diogo Cardoso, José Marques, Luís Campos
2023-09-19T11:12:21
http://arxiv.org/abs/2309.10526v1
# NSOAMT - New Search Only Approach to Machine Translation ###### Abstract Translation automation mechanisms and tools have been developed for several years to bring people who speak different languages together. A "new search only approach to machine translation" was adopted to tackle some of the slowness and inaccuracy of the other technologies. The idea is to develop a solution that, by indexing an incremental set of words that combine a certain semantic meaning, makes it possible to create a process of correspondence between their native language record and the language of translation. This research principle assumes that the vocabulary used in a given type of publication/document is relatively limited in terms of language style and word diversity, which enhances the effect of instantaneousness and rigor in the translation process through the indexing process. A volume of electronic text documents were processed and loaded into a database, and analyzed and measured in order to confirm the previous premise. Although the observed and projected metric values did not give encouraging results, it was possible to develop and make available a translation tool using this approach. NSOAMT NLP Natural language processing Translation Text metrics ## 1 Introduction Translation automation mechanisms and tools have been developed for several years to bring people who speak different languages together. In recent years most of these tools have been based on deep learning, in part due to the rise of AI technologies, but also due to the abstraction it provides over the multiple language semantics that exist. In this paper we describe a research project, named _New Search Only Approach to Machine Translation_ (NSOAMT), developed to tackle some of the issues (inaccuracies, etc.) of the other approaches. The idea is to develop a solution that, by indexing an incremental set of words that combine a certain semantic meaning, makes it possible to create a process of correspondence between their native language record and the language of translation. This research principle assumes that the vocabulary used in each type of publication/document is relatively limited in terms of language style and word diversity, which enhances the effect of instantaneousness and rigor in the translation process through the indexing process. In this paper we present the results we found when putting such principles into practice, as we attempt to build a machine translation service based on such premises. ## 2 Problem statement Although several general-purpose language translation services already exist, it is known that, for high-quality translations in specific domains, a human expert is still required ([1], [2], [3]). Natural language sentences are just a sequence of words. In the eventuality that we could quantify and store the most commonly used sentences, could this domain expertise for machine translation purposes be crowd-sourced ([4])? If not, can the distance to this practical goal be measured (or at least guessed)? ## 3 State of the art The evolution of natural language processing has come a long way since the early 1950s [5]. There are many technological approaches, but these can be mostly split into three major categories [6]: 1. symbolic rule-based processing systems 2. statistical approaches 3.
neural network based Although the present's focus is mostly on neural network based techniques (due to the popularity of Large Language Models [7]), the approach discussed in this article is best categorized as a "statistical approach". ## 4 Methodology 1. Import vast quantities of text documents, broken into sentences. 2. Identify common text fragments. 3. Crowdsource [4] translation of common text fragments. This methodology is only viable today due to: * The Internet and the World Wide Web connecting several organizations, institutions, and private initiatives, making available several sources of text. (See sec.5.1). * General availability of open-source libraries and tools, such as NLTK, that enable quick prototyping of some NLP techniques. (See sec.5.2.3). * The consequences of the Moore's law [8] allowing for hardware capable of handling Terabytes of text at a reasonable cost. ### Sentence model Figure 1 illustrates the relational model for the documents and sentences ingested. The description of each table is: **document**: - one table row for each ingested document. The content column contains the UTF-8 plain text of the document before sentence separation. **sentence**: - one table row for each distinct sentence. The plainText column contains the UTF-8 text of the sentence, including punctuation and inner spaces. There are no duplicate sentences for the same plainText content, so it can be said that the rows in table sentence represent **distinct sentences**. (A count of rows on table sentence in a given context is the value of the metric of **#distinct sentences** in that same context). **sentencesource**: - one table row for sentence extracted from the document. The column startOffset is the sentence sequence order number. (A count of rows on table sentencesource in a given scope is the value of the metric of **#sentences** in that same scope). **sentencetranslation**: - one table row for a possible (context-free) translation of a sentence. ### Metrics When trying to translate a document it is assumed that there is already a database that contains all possible translations for each of the the sentences in the document. Logically, it can be assumed that the more sentences that are imported, the better chances there are of obtaining a correspondence between a new sentence and a sentence that already exists in the system. This line of thought thus allows measuring the current state of the system with the import of more and more files using the following metrics: **#sentences**: - count of sentences in the text, as parsed by a sentence tokenizer software. The sentence separation is not as accurate as a human reader would perceive it, but an effort was made to make it consistent across all measurements. See sec.5.2.3. **#distinct sentences**: - (sometimes abbreviated as **#d.sentences**) How many distinct sentences exist in system and information of the idea of the distance at which the theoretical ceiling is found; These can be divided into subcategories: **#distinct sentences without repetitions**: - (also called **#unique d.sentences** for shortness) How many distinct sentences exist that have been referenced (used in a document) only once in the whole sentence database. (In the short "#unique d.sentences", the "d." standing for "distinct" is redundant, as all unique sentences are distinct, but makes clearer that it should be related with the "#distinct sentences" metric, and not the "#sentences" metric). 
**#distinct sentences with repetitions**: - How many distinct sentences have been referenced more than once (regardless of the document, and regardless of the multiplicity of the repetition). For illustration purpose, Figure 2 has a small example text document, which results in the metrics shown in Table 1. Description of the other metrics shown in Table 1: **#text characters**: - include line breaks and other invisible characters, so it might vary for the same visual text content. For the English language texts, it is also a good approximation of the metric of the volume of information bytes processed, as UTF-8 character encoding [9] is used. Figure 1: Entity-relationship sentence diagram. Figure 2: Example text document. **#distinct sentences %** - the percentage is calculated as \(\frac{\#distinct\ sentences}{\#sentences}\) **#unique d.sentences %** - the percentage is calculated as \(\frac{\#unique\ d.sentences}{\#distinct\ sentences}\). **#non-unique sentences %** - You can calculate the percentage of sentences with repetitions using the expression \(\frac{\#sentences-\#unique\ d.sentences}{\#sentences}\). For this example, it is 50%. This metric is not usually shown (as the ratio between the two underlying values can be easily observed), but should not be compared with the **#unique d.sentences %** metric. ### Theoretical limits The feasibility of the project assumes that the vocabulary in use is limited. Based on this, it can be assumed that the number of feasible sentences resulting from possible vocabulary combinations is also limited. However, this claim contradicts the opinion of linguists who often point to the potential for an infinite number of possible sentences to exist. It is also known that for an infinite number of sentences to exist, at least one of the following conditions must be met: * There are an infinite number of words; * A sentence can contain any number of words (unbounded). For the first condition (there are an infinite number of words) to hold, there would have to be an infinite number of symbols (letters) or the words have an infinite number of letters. In western languages the number of symbols is not infinite and the longest word in the world contains 189819 letters [10] (being, however, a technical word not used in natural language) so it can be admitted that there is a finite number of words, because none of the conditions is verified. It is true that new words that exceed this limit can be created, but it is also possible to admit that the number of words that will be created is also finite and that these same words will not be used in everyday life. In this way, it is feasible to admit that there is a finite number of words used within an infinite number of possible words [11]. To estimate the number of existing words, the Oxford dictionary, which contains about 600,000 words, can be used as a basis. This number is constantly increasing; however the authors also assume that the number of new words will grow in a finite way, in the same way that the number of archaic words (i.e. words that are not used) also grows. The second condition (a sentence contains any number of words) also holds, as shown by the examples found [12]. Obviously, these examples are exceptions and in common communication longer sentences have a lower level of understand-ability. To answer the question of "How many words can a sentence contain in order to maintain effective communication?" 
\begin{table} \begin{tabular}{l||r} \hline \hline & Example (en) \\ \hline \hline \#documents & 1 \\ \hline \#text characters (UTF-8) & 140 \\ \hline \#sentences & 4 \\ \hline \#distinct sentences & 3 \\ \hline \#distinct sentences \% & 75\% \\ \hline \#d.sentences with repetitions & 1 \\ \hline \#d.sentences with repetitions \% & 33,33\% \\ \hline \#unique d.sentences & 2 \\ \hline \#unique d.sentences \% & 66,67\% \\ \hline \#non-unique sentences \% & 50,00\% \\ \hline \hline \end{tabular} \end{table} Table 1: Metrics for the example text in Figure 2 It is possible to find studies that point out that, from 43 words onwards, a reader only understands 10% of the content. For this reason, some organizations (such as the UK Government) recommend a maximum limit of 25 words per sentence to maintain effective communication ([13]). Based on the previous numbers, a maximum limit was estimated for the universe of sentences with comprehensibility above 10%, by raising the number of words in the dictionary to the power of the number of words in a sentence. Thus, the number of possible sentences was limited to: \[\sum_{n=1}^{n=43}600000^{n}\approx 600000^{43}\approx 288.74\times 10^{246} \tag{1}\] This value is a theoretical ceiling, as it is not possible to randomly combine 43 words and generate, in all iterations, a grammatically correct sentence. Estimating the possible number of grammatically correct sentences is extremely complex, because, to do so, one would have to understand them in such a way that it would be possible to enumerate them. According to a 1953 work by Michael West, it was concluded that, out of 600,000 words, it is possible to create a list of approximately 2000 words that represents a coverage of about 80% of commonly written text. This list was published under the name of "General Service List" (G.S.L.) [14]. In 2013 (60 years after the creation of the original list) the list was expanded to contain 2818 words and is published under the name of "New General Service List" (N.G.S.L.) [15]. This new list increased the coverage of written text to around 90%. Given this new information, it was possible to repeat the calculation, with a view to trying to cover the maximum amount of text with the fewest possible sentences: \[\sum_{n=1}^{n=43}2818^{n}\approx 2818^{43}\approx 22.26\times 10^{147} \tag{2}\] Again, this is a theoretical ceiling, the practical value being lower for the same reason described above. Limiting the sentence length to the advised 25 words, the universe of possible sentences is even smaller: \[\sum_{n=1}^{n=25}2818^{n}\approx 2818^{25}\approx 177.22\times 10^{84} \tag{3}\] These 2818 words only represent the text written in everyday life. As the vocabulary used is context-dependent, when entering a specific domain new words have to be added to obtain the same level of coverage. With this motivation, 3 new lists were created, which do not repeat words from the N.G.S.L.: * "Academic Word List" (N.A.W.L.)[16]: 92% coverage; * "TOEIC Service List" (T.S.L.)[17]: 99% coverage; * "Business Word List" (B.S.L.)[18]: 97% coverage. Therefore, Table 2 presents the limits of possible sentences, using the previous lists, as a "theoretical limit". (The number of possible sentences that "make sense" is expected to be lower). Given the above, it became necessary to verify the assumption, starting by importing enough sentences to allow obtaining a satisfactory degree of correspondence, as well as a projection of the number of necessary sentences, below the theoretical maximum limit.
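The orders of magnitude quoted in Equations (1)-(3) are easy to re-check with exact integer arithmetic; the short script below is our own verification, not part of the original work.

```python
# Quick check of the ceilings in Eqs. (1)-(3), using Python integers so the sums are exact.
def ceiling(vocabulary, max_len):
    return sum(vocabulary ** n for n in range(1, max_len + 1))

for vocab, max_len in [(600_000, 43), (2_818, 43), (2_818, 25)]:
    c = ceiling(vocab, max_len)
    print(f"{vocab} words, up to {max_len} per sentence: ~{float(c):.2e} sentences")
# ~2.89e+248, ~2.23e+148 and ~1.77e+86, matching Eq. (1), (2) and (3) respectively.
```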
## 5 Implementation This section describes the software stack (both technology, and implementation design choices), used to carry out the measurements and implement the resulting web site. \begin{table} \begin{tabular}{l||c|c|c} \hline List of Words & \# total of words & Ceiling for 25 words & Ceiling for 43 words \\ \hline N.A.W.L. & 3778 & \(2.70\times 10^{89}\) & \(6.64\times 10^{153}\) \\ T.S.L. & 4018 & \(1.25\times 10^{90}\) & \(9.38\times 10^{154}\) \\ B.S.L. & 4518 & \(2.36\times 10^{91}\) & \(2.36\times 10^{91}\) \\ \hline \end{tabular} \end{table} Table 2: Descriptive table of the number of words per word list and maximum possible combinations for advisable sentence length (25 words) and sentence length where it is incomprehensible (43 words) ### Text sources The text sources used where: [https://eur-lex.europa.eu/](https://eur-lex.europa.eu/) - Legislation documents from the European Union, available in 24 languages; HTML format. [https://dumps.wikimedia.org/](https://dumps.wikimedia.org/) - Wikipedia backup dumps. XML+Wikitext format. [https://arxiv.org/](https://arxiv.org/) - Open-access scholarly articles. PDF format.1 Download was performed my mirroring tools, with articles organized in monthly folders. (Only the latest version of each article was ingested.) Footnote 1: Non-PDF articles where discarded. **tBooks** - Several plain text literature content, obtained from sources like [https://www.gutenberg.org/](https://www.gutenberg.org/), [https://chroniclingamerica.loc.gov/](https://chroniclingamerica.loc.gov/), [https://muse.jhu.edu/](https://muse.jhu.edu/), [https://market.cantook.com/](https://market.cantook.com/), [https://www.bookrix.com/](https://www.bookrix.com/), [https://archive.org/](https://archive.org/), [https://manybooks.net/](https://manybooks.net/), [https://www.smashwords.com/](https://www.smashwords.com/), [http://digital.library.upenn.edu/books/](http://digital.library.upenn.edu/books/). Plain text (UTF-8) format. We call this source aggregate **tBooks**.2 Footnote 2: The content extracted from these sources is not publicly accessible on the NSOAMT site, due to possible copyright issues. ### Ingestion pipeline The first stage of the ingestion pipeline is loading a content (in a specific electronic format) and splitting into plain text sentences (see Figure 3). The actual sequence and details of each transformation depend on the format and source of the text. See 5.2.2 for issues and caveats of each source/format. The last stage of the ingestion pipeline is loading the batches of parsed documents into the database. For a large source, such as arXiv, concurrent/parallel loading was needed, as shown in Figure 4. The high level algorithm for ingestion is: 1. Format conversion to (HTML, WikiText, PDF, etc.) to plain text (or plain text groups). 2. Split text into sentences (using 5.2.3). Apply sentence transformation procedures (such as hash calculation). 3. insert the whole document into the database. 4. for each sentence (by order of occurrence in the document): 4.1. search if the sentence already exists in the database: 4.1.1. if yes, associate existing sentence to current document. 4.1.2. if no, insert the new sentence into the database, and then associate to the current document. 5. (Post ingestion) Duplicate sentence elimination. Figure 3: Ingestion pipeline general structure. Steps 1 and 2 can be sped up using multiple processes (when each document and sentences fit into memory). 
Steps 3 and 4 are performed in a single transaction (to avoid having non-parsed documents in the database), and can also be sped up using parallel execution, but there is a race condition between steps 4.1.1. and 4.1.2. Hence the need for a post-ingestion duplicate clean-up on step 5.3 Footnote 3: Normally, in a SQL database, the race condition would be avoided using a UNIQUE INDEX on the column sentence.plainText. But occasionally, long sentences are ingested, bumping into an undocumented PostgreSQL v12 limitation of unique indexes on large text values (which failed above 2704 plain text bytes, well below the expected 8129 bytes index page size limit). #### 5.2.1 md5hash The data model exhibited in Figure 1 shows a column sentence.md5hash, added for (read) indexing purposes. (As SQL built-in indexing was not possible4 due to very long sentences. These long 'plain text' sentences are probably _garbage_ resulting from a bad text extraction, but the decision was made to keep them, for future research). Footnote 4: Abnormally long sentences would not be indexed by regular PostgreSQL v12 B-Tree INDEX. And neither full-text-search, trigrams, or inverted indexes work with these uncommonly large text strings, for that matter. The choice of the hashing algorithm to be MD5 [19] (between MD5, SHA1, and others) were based on small storage requirements, speed of calculation, and the fact that implementations in python v3.6 hashlib and PostgreSQL v12 gave identical results. It is known that the MD5 algorithm has a lower collision resistance (when compared to other more recent algorithms) [20], but as the purpose here was just to sped up the search (not cryptography grade collision resistance), it suffices. Note that in the model 4.1 hash collisions are possible, expected, and well handled. #### 5.2.2 Electronic document formats Plain textUTF-8 encoded text documents.[9] Pros: Simpler to inspect and compare resulting model data to original text. Cons: Separation between text fragments is sometimes not clear. Example: Titles, page headers, footers and foot notes, sentences interrupted by line and page breaks, sometimes placed and mixed amongst the text content without any convention consistency. Sometimes it is possible to develop a special transformation that identifies these occurrences and splits the text into blocks (without these issues). Sometimes not. (Usually not done, because conventions vary a lot between documents, even from the same source). WikiTextConversion to plain text done using [21]. Figure 4: Concurrent PDF batch ingestion. Pros: Same as plain text. Additionally, the structure of Wikipedia's extracted text (and text style) splits into sentences very well, using NLTK's default sentence tokenizer API. Cons: None that came to mind, although is a relatively short source (in terms of available volume of text). Hyper Text Markup Language (HTML)Extraction of text from HTML [22] content is done using [23]. Pros: Block tags force separation of text fragments (forcing sentence breaks). Cons: Consistency of the formatting (and tag use) in the content layout is a very case-by-case approach. Page layout and navigation information filtering is also handled on a specific source-by-source case. Portable Document Format (PDF)Text extraction from PDF files [24] is performed using pdfminer.six. [25] Pros: Largest source volume of documents available (example: arXiv). Cons: Extraction of text from scientific articles in PDF format is problematic ([26] and [27]). 
This results in many badly broken sentences. Some PDFs files have internal text represented in ways that result in garbled extracted text, while others even break the extraction process (and as such, PDF file upload is not publicly available on the NSOAMT site). #### 5.2.3 Sentence tokenizer NLTK [28] is a python framework for natural language processing. The default sentence tokenizer API was used to extract an ordered list of sentences from plain text content. It is the core of the ingestion pipeline. Using the default API (without a custom tokenizer trained for a specific text style) does not always produces good results (specifically in text extracted from scientific article's), but the same tokenizer results were consistent across all measurements. ### Sentence validation By sampling a few sentences, several examples with unusual (incorrect) grammar are easily spotted: ... , 2017, 1-10 Editors: Will be set by the publisher 7 1 0 2 v o N 7 ] G L. s c [ 3 v 1 0 0 0 0. 2 0 7 1 : v i X r a LEARNING THE DISTRIBUTION WITH LARGEST MEAN: TWO BANDIT FRAMEWORKS * Emilie Kaufmann 1 and Aur'elien Garivier 2 Abstract. ... As it stands now, the system includes a lot of partial sentences resulting from issues like text mis-extraction and sentence tokenization on titles, headers, footers, formulas, tabular data, graphics extracted as text, graphical text overlays, etc. There are at least two possible mitigation strategies for this problem: * Improve the quality of the text extraction (and sentence tokenization). * Exclude the _improperly_ extracted sentences from the metrics. Improving the quality of text extraction and sentence tokenization seems a never-ending battle (recognize/learn/train/develop text extractors for a never-ending variety of distinct specific documents and text styles). As such, the efforts were focused on filtering out _improperly_ extracted sentences (because it simply felt a simpler and smaller task). The "LanguageTool" version 5.7 [29] seemed like a good candidate for an out-of-the-box linguistic tool that classifies sentences as valid or non-valid: It is open-source, suitably licensed, can be used on-premises and off-line, has a python interface language-tool-python, and the set of validation rules can be customized. In the results section (6), sentences that have been checked using this tool are referred to as **valid sentences**. Note that, filtering sentences using such a tool, provides no warranty that we are working with a subset of "commonly used English" sentences that make sense (when read by a human). It just eliminates a large number of sentences that contain grammar rule violations (from the set of rules that the tool implements), some of which may be caused by bad text extraction, others by bad sentence tokenization, amongst other causes. ### Web interface In order to share the knowledge developed and show the planned translation functionality, a web interface for translation based on the technology studied was developed and made publicly available. This site is available at [https://nsoamt.pdmfc.com](https://nsoamt.pdmfc.com). See Figure 5 for the web frontend homepage. A couple of functionalities were made available in this web site. The main one is translation (See Figure 6). Here the user can write is own text or upload a text/plain file and translate it. The translation is made by parsing the input text into sentences and for each sentence search if already exists in the database and if already have a translation (See Figure 7). 
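The translation flow just described (split the input into sentences, look each one up, and report hits and misses) can be sketched in a few lines of Python, as shown below. This is our illustration, not the site's actual code: it uses NLTK's default sentence tokenizer as in the ingestion pipeline, an in-memory dictionary in place of the PostgreSQL sentence store, the MD5 value only to mirror the indexing of Section 5.2.1, and an invented toy translation memory.

```python
# Minimal sketch of the search-only translation workflow: tokenize, hash, look up, mark.
import hashlib
import nltk   # requires the 'punkt' tokenizer data: nltk.download('punkt')

def md5(sentence):
    # Same hex value PostgreSQL's md5() returns for the UTF-8 text.
    return hashlib.md5(sentence.encode("utf-8")).hexdigest()

def translate(text, store):
    """store: dict mapping the MD5 of a source sentence to its stored translation."""
    results = []
    for sent in nltk.sent_tokenize(text):
        hit = store.get(md5(sent))
        results.append({"source": sent,
                        "translation": hit,
                        "status": "green" if hit else "red"})
    return results

store = {md5("Good morning."): "Bom dia."}    # toy translation memory
for item in translate("Good morning. This sentence has no stored translation.", store):
    print(item["status"], "|", item["source"], "->", item["translation"])
```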
A list of highlighted sentences will be presented where green means translated and red means the translation was not found. Since the input text needs to be separated into sentences, the concept of paragraph ends up being lost. The process of reconstructing the original translated text would be complex, would require more computation and the correct reconstruction of the text is not guaranteed. One of the main obstacles encountered is the correct processing of uploaded texts due to the wide variety of file formats. To ensure better results for the user, the file formats allowed for direct upload was restricted to text/plain, due to its simplicity and the tests made show better results on sentence division without much pre- and post-processing. Figure 5: Application landing page. Figure 6: Translation page. Figure 7: Translation workflow. Other functionality available on the website is to search the documents loaded into the database (See Figure 8). From the search page it's possible to open a detailed page of a specific document (See Figure 9). Here the user has access to more information like the MIME-type, size and if available, download the document for a complete analysis of the document. In this page is also available the list of sentences taken from the parsing process. With each sentence along comes the number of times the sentence appears in other documents and a small list of those documents. With this we can analyze the parsing process as well as the repetitions of sentences (centerpiece of this project). It's also possible to search the sentences that came from the parsing process of the uploaded documents (See Figure 8). From this page the user can go to the specific sentence page with more information about it. In the sentence information page (See Figure 11) it is possible to find the number of repetitions of the sentences, for the sentence and a list of documents where the sentence appears. ## 6 Results ### Ingested text The total volume of text ingested is shown in Table 3. Tables 4 and 5 break down the volume metrics by source and language. ### Common sentences Tables 6 and 7 displays the number of common distinct sentences between document sources. Figure 8: Search documents page. Figure 9: Document information page. \begin{table} \begin{tabular}{l||r|r|r|r|r} \hline & ArXiv 0802-2112 & Wikipedia (en) & EUR-LEX (en) & tBooks (en) \\ \hline \hline \#documents & 1,511,891 & 61 & 17,190 & 51,593 \\ \hline \#text characters (UTF-8) & 80,399,442,210 & 14,102,084,349 & 8,615,499,262 & 19,883,677,356 \\ \#sentences & 761,978,703 & 130,111,846 & 24,109,494 & 233,973,998 \\ (same source) & & & & \\ \#distinct sentences & 557,332,655 & 114,423,765 & 8,112,606 & 206,490,528 \\ \#distinct sentences \% & 73.14\% & 85.29\% & 33.65\% & 88.25\% \\ (same source) & & & & \\ \#d.sentences with repetitions \% & 18,914,498 & 2,426,673 & 1,712,047 & 5,471,817 \\ (same source) & & 3.39\% & 2.12\% & 21.10\% & 2.65\% \\ (same source) & & & & \\ \#unique d.sentences & 538,418,157 & 111,997,092 & 6 400,559 & 201,018,711 \\ (same source) & & 96.61\% & 97.88\% & 78.90\% & 97.35\% \\ \hline \end{tabular} \end{table} Table 4: Volume of english text ingested from static sources Figure 11: Sentence information page. Figure 10: Search sentence page. ### Evolution of d.sentences with repetitions A question is raised "How much volume of text would need to be ingested to have a desired % of distinct sentences with repetitions?" 
To project an answer to this question, we analyzed the evolution of metrics on the arXiv data source alone, for the following reasons: 1. avoid mixing writing styles too much (between data sources). 2. arXiv is the largest volume data source, organized by monthly folder groups. Using the arXiv data source segmented by years 2016 to 2020, Table 8, a trend line was elaborated in Figure 125 Footnote 5: The trend line was calculated using LibreOffice’s trend line feature, for the data points shown here. The \(R^{2}=0.985\) gives some confidence on the logarithmic trend, at least for interpolation. (Although not shown here for brevity, the logarithmic trend line was the best fit compared to other curves (linear, exponential, polynomial, etc.).6) Footnote 6: Also not shown here for brevity, but subdividing the same yearly arXiv data into months - and thus having 12 times more data points - was also visually consistent with the shown the logarithmic trend. Using the trend line for extrapolation, (assuming that the trend line would not change the type of curve), Table 9 shows the projected #text characters that would be required to achieve the desired %d.sentences with repetitions. \begin{table} \begin{tabular}{l||r|r|r} \hline Common \#distinct sentences (en) & arXiv 0802-2112 & Wikipedia (en) & EUR-LEX (en) & tBooks \\ \hline \hline arXiv 0802-2112 & 761,978,703 & & \\ \hline Wikipedia (en) & 46,531 & 130,111,846 & \\ \hline EUR-LEX (en) & 5,448 & 28,130 & 24,109,494 & \\ \hline tBooks & 63,747 & 145,199 & 4,665 & 233,973,998 \\ \hline \hline \end{tabular} \end{table} Table 6: Common **#distinct sentences** between English sources \begin{table} \begin{tabular}{l||r|r} \hline Common \#distinct sentences (pt) & Wikipedia (pt) & EUR-LEX (pt) \\ \hline \hline Wikipedia (pt) & 16,594,472 & \\ \hline EUR-LEX (en) & 8,600 & 34,280,621 \\ \hline \hline Common \#distinct sentences (all sources) & 8,600 \\ \hline \end{tabular} \end{table} Table 7: Common **#distinct sentences** between Portuguese sources \begin{table} \begin{tabular}{l||r|r} \hline Common \#distinct sentences (pt) & Wikipedia (pt) & EUR-LEX (pt) \\ \hline \hline Wikipedia (pt) & 16,594,472 & \\ \hline EUR-LEX (en) & 8,600 & 34,280,621 \\ \hline \hline Common \#distinct sentences (all sources) & 8,600 \\ \hline \end{tabular} \end{table} Table 5: Volume of Portuguese text ingested from static sources Even for the 5% objective (the nearest to the current 3.39% shown in Table 4), it does not seem practical to gather 3.77E+13 characters (\(\approx\) 37 TeraBytes) of text in a short notice, to verify this projection. But the projection is still verifiable on the total number of arXiv characters ingested. From Table 4 for 80,399,442,210 text characters: **Predicted:** 3.49% using the trend line. **Observed:** 3.39% distinct sentences with repetitions Projections for higher %d.sentence with repetitions (25%, 50%, 75% and 100%) are also shown in table 9 for curiosity, and should not be taken very seriously, as it is well known that for a logarithmic curve, small variations on the curve coefficient will cause large changes for distant points. These projections also assume that curve does not change shape. ### Evolution of d.v.sentences with repetitions This motivated the use of LanguageTool 5.3 to reduce the _noise level_ in sentences and analyze how it would affect projections. ("v' in "d.v.sentences" stands for "valid"). 
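Before turning to the validated-sentence variant, readers who want to reproduce the extrapolation of the previous subsection can re-fit the logarithmic trend of Figure 12 from the five cumulative points of Table 8 with a few lines of NumPy, as sketched below. This is our own fit, not the LibreOffice one used above, so the coefficients (and therefore the extrapolated volumes) differ slightly, but they land in the same range as the 3.49% prediction and the 3.77E+13-character projection.

```python
# Rough reproduction of the logarithmic trend line of Figure 12 from the Table 8 points.
import numpy as np

chars = np.array([10_076_799_973, 18_498_004_627, 25_986_041_152,
                  32_503_697_718, 38_441_439_656], dtype=float)
rep_pct = np.array([2.97, 3.15, 3.23, 3.27, 3.29])

# Fit rep_pct = a * ln(chars) + b.
a, b = np.polyfit(np.log(chars), rep_pct, 1)

predict = lambda x: a * np.log(x) + b
required = lambda pct: float(np.exp((pct - b) / a))

print(f"predicted at 8.04e10 chars: {predict(8.04e10):.2f}%")   # compare with the observed 3.39%
print(f"characters needed for 5%:   {required(5.0):.2e}")        # compare with 3.77E+13 in Table 9
```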
On Table 10 we compare the full arXiv metrics (already displayed on Table 4) to the same metrics using only validated sentences. The data for years 2016 to 2020 is shown in Table 11, and the trend line plotted on Figure 13. The column #d.v.sentences is the number of d.sentences that pass the validation too check. The column %valid is = \(\frac{\#d.v.sentences}{\#d.sentences}\cdot 100.00\). The validation tool seems to reject approx. 50% of sentences. It can Figure 12: Trend line for %d.sentences with rep. vs #text characters. \begin{table} \begin{tabular}{l||c|c|c|c} \hline arXiv & \#text characters & \#d.sentences & \#d.sentences & \%d.sentences \\ by year & & & with repetitions & with repetitions \\ \hline \hline 2020 & 10,076,799,973 & 72,042,632 & 2,139,053 & 2.97\% \\ \hline 2019+2020 & 18,498,004,627 & 130,817,330 & 4,114,811 & 3.15\% \\ \hline 2018+2019+2020 & 25,986,041,152 & 182,644,683 & 5,900,194 & 3.23\% \\ \hline 2017+2018+2019+2020 & 32,503,697,718 & 227,734,087 & 7,448,879 & 3.27\% \\ \hline 2016+2017+2018+2019+2020 & 38,441,439,656 & 268,911,716 & 8,849,748 & 3.29\% \\ \hline \end{tabular} \end{table} Table 8: arXiv text metrics for years 2016 to 2020. \begin{table} \begin{tabular}{l||c|c|c|c} \hline \%d.sentence with repetitions & **5.00\%** & 25.00\% & 50.00\% & 75.00\% & 100.00\% \\ \hline \hline \#text characters & **3.77E+13** & 9.81E+48 & 1.82E+93 & 3.39E+137 & 6.29E+181 \\ \hline \end{tabular} \end{table} Table 9: arXiv projections for %d.sentences with repetitions vs #text characters. also be noticed that the %d.v.sentences is higher than the %d.sentences (on the distinct but-non-validated sentence data). The trend line predicts the need for 3.93E+10 (\(\approx\) 28.2 GigaBytes of text) to achieve 5% distinct valid sentences with repetitions. (Table 12 shows text volume extrapolations based on the same trend line for higher percentages). From Table 10 for 80,399,442,210 text characters (\(\approx\) 8.03E+10 text characters) (\(\approx\) 74.9 GigaBytes of text), we observe 5.18% distinct valid sentences with repetitions: **Predicted:** 5.33% using the trend line. **Observed:** 5.18% distinct valid sentences with repetitions ## 7 Conclusions Some success was achieved in extrapolating the evolution of the number of distinct sentences with repetitions vs the volume of ingested text, using a trend line of logarithmic nature. \begin{table} \begin{tabular}{l||r|r|r|} \hline arXiv & \#text & \#d.sentences & \#d.v.sentences & \%valid & \%d.v.sentences \\ by year & characters & & & with repetitions \\ \hline \hline 2020 & 10,076,799,973 & 72,042,632 & 36,257,428 & 50.33\% & 4.36\% \\ \hline 2019+2020 & 18,498,004,627 & 130,817,330 & 64,926,392 & 49.63\% & 4.66\% \\ \hline 2018+2019+2020 & 25,986,041,152 & 182,644,683 & 89,785,881 & 49.16\% & 4.82\% \\ \hline 2017+2018+2019+2020 & 32,503,697,718 & 227,734,087 & 110,992,373 & 48.74\% & 4.91\% \\ \hline 2016+2017+2018+2019+2020 & 38,441,439,656 & 268,911,716 & 130,019,555 & 48.35\% & 4.97\% \\ \hline \end{tabular} \end{table} Table 11: arXiv text metrics for years 2016 to 2020 for valid sentences. Figure 13: Trend line for %d.sentences with rep. vs #text characters. 
\begin{table} \begin{tabular}{l||r|r|r|r|} \hline arXiv & \#text & \#d.sentences & \#d.v.sentences & \%valid & \%d.v.sentences \\ by year & characters & & & with repetitions \\ \hline \hline 2020 & 10,076,799,973 & 72,042,632 & 36,257,428 & 50.33\% & 4.36\% \\ \hline 2019+2020 & 18,498,004,627 & 130,817,330 & 64,926,392 & 49.63\% & 4.66\% \\ \hline 2018+2019+2020 & 25,986,041,152 & 182,644,683 & 89,785,881 & 49.16\% & 4.82\% \\ \hline 2017+2018+2019+2020 & 32,503,697,718 & 227,734,087 & 110,992,373 & 48.74\% & 4.91\% \\ \hline 2016+2017+2018+2019+2020 & 38,441,439,656 & 268,911,716 & 130,019,555 & 48.35\% & 4.97\% \\ \hline \end{tabular} \end{table} Table 10: Full arXiv metrics comparing sentences with validated sentences. The discouraging aspect is that assuming that the trend line does not change its curve nature (from logarithmic nature to something else) at an hypothetical inflection point, it will not be practical to gather enough text volume even for modest repetition coverage (like 50%). Not enough text volume was gathered to show evidence that this hypothetical inflection point may exist. Also, (at these large, extrapolated text volumes) it would not be feasible to crowd-source translations. ## 8 Future work The study showed interesting results for the text analyzes and translation, but one key point needs to be resolved before further work: The projections based on the current sentence string model show that it should not be possible to gather enough text documents for modest translation coverage (and the volumes needed would also be to high for effective crowd-sourcing anyway). Can a different sentence model provide higher rates of common text matching? Preliminary experiments using character-level simplification techniques within a sentence (elimination of punctuation, digits, date tagging, custom sentence tokenizer, etc) have shown residual improvements that where not considered qualitatively significant to be show here. Can a combination of techniques (such as: * Syntax trees with sub-tree matching. * Take inspiration from transformers and attention models (such as other neural-network techniques [30]). ) be mixed with such an approach? And will crowd-sourcing the translations of the most common text structures still be viable?
Translation automation mechanisms and tools have been developed for several years to bring people who speak different languages together. A "new search-only approach to machine translation" was adopted to tackle some of the slowness and inaccuracy of the other technologies. The idea is to develop a solution that, by indexing an incremental set of words that combine a certain semantic meaning, makes it possible to create a process of correspondence between their native language record and the language of translation. This research principle assumes that the vocabulary used in a given type of publication/document is relatively limited in terms of language style and word diversity, which enhances the effect of instantaneousness and rigor in the translation process through the indexing process. A volume of electronic text documents were processed and loaded into a database, and analyzed and measured in order to confirm the previous premise. Although the observed and projected metric values did not give encouraging results, it was possible to develop and make available a translation tool using this approach.
2309.06350
Stochastic Bridges over Ensemble of Linear Systems
We consider particles that are conditioned to initial and final states. The trajectory of these particles is uniquely shaped by the intricate interplay of internal and external sources of randomness. The internal randomness is aptly modelled through a parameter varying over a deterministic set, thereby giving rise to an ensemble of systems. Concurrently, the external randomness is introduced through the inclusion of white noise. Within this context, our primary objective is to effectively generate the stochastic bridge through the optimization of a random differential equation. As a deviation from the literature, we show that the optimal control mechanism, pivotal in the generation of the bridge, does not conform to the typical Markov strategy. Instead, it adopts a non-Markovian strategy, which can be more precisely classified as a stochastic feedforward control input. This unexpected divergence from the established strategies underscores the complex interrelationships present in the dynamics of the system under consideration.
Daniel Owusu Adu, Yongxin Chen
2023-09-12T16:15:58
http://arxiv.org/abs/2309.06350v1
# Stochastic Bridges over ensemble of linear systems ###### Abstract. We consider particles that are conditioned to initial and final states. The trajectory of these particles is uniquely shaped by the intricate interplay of internal and external sources of randomness. The internal randomness is aptly modelled through a parameter varying over a deterministic set, thereby giving rise to an ensemble of systems. Concurrently, the external randomness is introduced through the inclusion of white noise. Within this context, our primary objective is to effectively generate the stochastic bridge through the optimization of a random differential equation. As a deviation from the literature, we show that the optimal control mechanism, pivotal in the generation of the bridge, does not conform to the typical Markov strategy. Instead, it adopts a non-Markovian strategy, which can be more precisely classified as a stochastic feedforward control input. This unexpected divergence from the established strategies underscores the complex interrelationships present in the dynamics of the system under consideration. ## 1. Introduction This paper concerns the problem of conditioning a Markov process at two endpoints. This problem was first studied by Schrodinger in [1] which we postulate as follows; assume some _fully observed_ particles, with density \(\rho_{0}\) at time \(t=0\), evolve according to a Markov process in \(\mathbb{R}^{d}\) with density \[q^{B}(s,x,t,y):=\frac{1}{(2\pi(t-s))^{\frac{d}{2}}}\mathrm{exp}\left(-\frac{ \|x-y\|^{2}}{2(t-s)}\right), \tag{1.1}\] where \(0\leq s\leq t\leq t_{f}\) and \(x,y\in\mathbb{R}^{d}\). Suppose at time \(t=t_{f}\) the particles are observed to have a distribution \(\rho_{f}\), where \[\rho_{f}(x_{f})\neq\int_{\mathbb{R}^{n}}q^{B}(0,x_{0},t_{f},x_{f})\rho_{0}(x_ {0})dx_{0}.\] Then, \(\rho_{f}\) deviates from the law of large numbers. This means that our assumption of the Markov process is inaccurate. The following question arises: 1. What density \(q\) satisfies \[\rho_{f}(x_{f})=\int_{\mathbb{R}^{n}}q(0,x_{0},t_{f},x_{f})\rho_{0}(x_{0})dx_{0}.\] 2. Among such densities \(q\), which one is closest to \(q^{B}\) in some suitable sense. Statement 1 and 2 constitute the Schrodinger bridge problem and the most likely stochastic process \(\{X(t)\}_{0\leq t\leq t_{f}}\) such that the densities of the distributions of \(X(0)\) and \(X(t_{f})\) coincides with \(\rho_{0}\) and \(\rho_{f}\), respectively, is called the Schrodinger bridge. An important feature of the Markov process is that it is non-degenerate. That is the stochastic term affects all directions in the coordinate space. Related to our work and motivated by questions regarding the transport of particles having inertia, the case where the Markov process is degenerate has been studied in [2]. Irrespective of the type of Markov process, it is well-established that stochastic bridges are generated from stochastic optimal control problems (see [2; 3; 4; 5; 6; 7; 8] and reference therein). Our problem is motivated by ensemble-based Reinforcement Learning [9]. In ensemble-based Reinforcement Learning, the Ornstein-Uhlenbeck (OU) process is a valuable tool for random exploration when an agent has no prior knowledge of how a system's states transition from one to another [10]. One can envision a similar scenario where a robot learns to navigate in a new environment. Initially, the robot knows nothing about the environment's dynamics, such as how it moves from one state to another. 
To effectively learn and make informed decisions, the robot must explore its surroundings randomly. We consider an ensemble of stochastic processes [11, 12], much like a collection of robots [13, 14] attempting to explore a new environment for the first time (or a robot's various attempts to explore its new environment). Each process, indexed by a parameter, represents a potential trajectory or path the robot might take. Our ultimate goal is to find paths that are conditioned to meet certain statistical criteria, such as bridging given end-point states or behaviours. In the context of OU processes, our goal is geared toward understanding their typical behaviour, mean-reverting tendencies, and statistical characteristics that are consistent with the end-states. In this case, averaging the ensemble of OU processes is a practical and effective approach. That is, by averaging the ensemble of OU processes, one can emphasize the mean-reverting behaviour and understand how the system tends to gravitate back to its central trajectory over time. We state here that studying an ensemble of OU processes is not new. In [15], they provided a mathematical framework to study the statistical properties of the centre-of-mass variable and its relation to the individual processes in the ensemble of OU processes. In particular, they determined a non-autonomous stochastic differential equation (SDE) satisfied by a process that is equivalent in distribution to the centre-of-mass variable of an ensemble of OU processes. Furthermore, they established the equivalence in distribution of the centre-of-mass variable with a randomly scaled Gaussian process (the product of a non-negative random variable and a Gaussian process). We note that, inasmuch as the centre-of-mass variable can be used to estimate the average concentration over the parameters, our result focuses on the average. Following [2, 3, 4, 5, 6, 7, 8], in our case the ensemble nature of the Markov process adds its own set of technical challenges in solving the corresponding stochastic optimal control problem. It turns out that the average of an ensemble of Markov processes fails to be a Markov process and seems to represent a more complex stochastic process than is usually encountered in the literature [2, 16, 17, 18]. Therefore, the standard tools in [2, 6, 7, 8] used to generate a bridge will not be applicable in our case. To overcome this challenge, we rely on the equivalent discrete-time stochastic optimal control problem and characterize the optimal control through the approximation of the continuous-time stochastic process. We show that the parameter-independent optimal control that bridges the endpoint condition for an ensemble of Markov processes is a _stochastic feedforward control input_. This deviates from the characterization of the optimal control that induces a stochastic bridge (see [2, 6, 7, 8, 16, 17, 18, 19]). The distinction follows from the fact that, in a standard Markov process, it is possible to track the state and feed it back into the system to achieve the bridge. This leads to the optimal control strategy being a Markov strategy. In our case, as we will see, it is not possible to track the average of an ensemble of a given Markov process, which leads to a _stochastic feedforward control_. In stochastic feedforward control, the control input is determined based on past and present values of the noise.
Optimal feedforward controllers have been described in [20, 21], where it is assumed that the control input is obtained from the output of a linear dynamic system driven by white noise. This characterization of control has applications in flight system control of robotics and crystal growth of power transmission networks (see [20, 21, 22, 23] and reference therein). Secondly, unlike in [2] where controllability of the system is relevant to establish the Schrodinger bridge for the case of degeneracy, as we showed in [24], our result relies on the so-called averaged observability inequality [25, 26, 27, 28] which is equivalent to the invertibility of a matrix (see [24]). This matrix is used to solve both the Schrodinger bridge problem and hence design the optimal control for our problem. We state here that our result is related to ensemble control theory [13, 14, 29, 30, 31] which is motivated by quantum systems [32] and also robust control theory [11, 12] and has applications in a variety of fields including engineering [33, 34, 35] and economics [36, 37, 38, 39]. The organization of the paper is as follows; We discuss the notion of stochastic averaged control problem in Section 2. We state conditions under which this is possible. After that, we state the problem statement and follow with the main result in Section 3. We conclude with remarks on future work in Section 4. ## 2. Stochastic averaged ensemble control Consider the ensemble of a controlled Markov process defined on a naturally filtered probability space \((\Omega,\mathcal{F},\mathbb{P})\) as follows \[dX(t,\theta)= A(\theta)X(t,\theta)dt+B(\theta)u(t)dt+\sqrt{\epsilon}B(\theta) dW(t),\] \[X(0,\theta)= x_{0}, \tag{2.1}\] where \(X(t,\theta)\in\mathbb{R}^{d}\), is the random state of an individual system at time \(t\) indexed by the sample point \(\theta\in\Omega\), \(A:\Omega\to\mathbb{R}^{d\times d}\) and \(B:\Omega\to\mathbb{R}^{d\times m}\) are measurable mappings such that \(\sup_{\theta\in\Omega}\|A(\theta)\|<\infty\) and \(\sup_{\theta\in\Omega}\|B(\theta)\|<\infty\), where the norm here is the Frobenius norm on the space of matrices, \(u\in\mathrm{L}^{1}([0,t_{f}];\mathbb{R}^{m})\) is a _parameter-independent_ control input, and \(x_{0}\) is an initial \(d\)-dimensional deterministic vector and \(\{W(t)\}_{t\geq 0}\subset\mathbb{R}^{m}\) is the Wiener process such that \(W(0)=0\). Note that the Markov process indexed by \(\theta\) at time \(t\) is characterized by \[X(t,\theta)=e^{A(\theta)t}x_{0}+\int_{0}^{t}e^{A(\theta)(t-\tau)}B(\theta)u( \tau)d\tau+\sqrt{\epsilon}\int_{0}^{t}e^{A(\theta)(t-\tau)}B(\theta)dW(\tau). \tag{2.2}\] For reasons that will be clear later, for now, we study the controllability of this Markov process in an appropriate sense. Since the system parameter is unknown but belongs to a deterministic set \(\Omega\), it is natural to control the average over the parameter. For simplicity of presentation, we assume that the probability space \((\Omega,\mathcal{F},\mathbb{P})\) is a uniform distributed probability space with \(\Omega=[0,1]\). To this end, we proceed to the following definition. 
**Definition 2.1**.: _The ensemble of linear stochastic system (2.1) is said to be averaged controllable if, for any initial state \(x_{0}\in\mathbb{R}^{d}\), final state \(x_{f}\in\mathbb{R}^{d}\), and final time \(t_{f}\), there exists a parameter-independent control input \(u\in L^{1}([0,t_{f}];\mathbb{R}^{m})\) such that the ensemble of states in (2.2) satisfies_ \[\mathbb{E}\int_{0}^{1}X(t_{f},\theta)d\theta=x_{f}.\] Note that by the linearity of the stochastic system (2.1), the expectation of the control will drive the deterministic part of the dynamics (2.1) in the averaged sense. We proceed to the following useful result. **Proposition 2.1**.: _If the matrix_ \[G_{t_{f},0}:=\int_{0}^{t_{f}}\left(\int_{0}^{1}e^{A(\theta)(t_{f}-\tau)}B( \theta)d\theta\right)\left(\int_{0}^{1}B^{\mathrm{T}}(\theta)e^{A^{\mathrm{T} }(\theta)(t_{f}-\tau)}d\theta\right)d\tau, \tag{2.3}\] _is invertible then, the linear stochastic system (2.1) is said to be averaged controllable._ Proof.: Suppose \(G_{t_{f},0}\) is invertible and for any initial state \(x_{0}\in\mathbb{R}^{d}\), final state \(x_{f}\in\mathbb{R}^{d}\) consider \[u(t)=\left(\int_{0}^{1}B^{\mathrm{T}}(\theta)e^{A^{\mathrm{T}}(\theta)(t_{f}-t )}d\theta\right)G_{t_{f},0}^{-1}\left(x_{f}-\left(\int_{0}^{1}e^{A(\theta)t_{f }}d\theta\right)x_{0}\right). \tag{2.4}\] From (2.2), since \[X(t_{f},\theta)=e^{A(\theta)t_{f}}x_{0}+\int_{0}^{t_{f}}e^{A(\theta)(t_{f}- \tau)}B(\theta)u(\tau)d\tau+\sqrt{\epsilon}\int_{0}^{t_{f}}e^{A(\theta)(t_{f} -\tau)}B(\theta)dW(\tau), \tag{2.5}\] by substituting (2.4) in (2.5), we obtain \(\mathbb{E}\int_{0}^{1}X(t_{f},\theta)d\theta=x_{f}\). This finishes the proof. ## 3. Problem Statement and Main Result Consider an ensemble of processes governed by \[dX(t,\theta)=A(\theta)X(t,\theta)dt+\sqrt{\epsilon}B(\theta)dW(t), \tag{3.1}\] with initial condition \[X(0,\theta)=x_{0},\quad\text{almost surely (a.s)}.\] **Problem 1:**_Our goal is to find solutions that are conditioned to have_ \[\int_{0}^{1}X(t_{f},\theta)d\theta=x_{f},a.s. \tag{3.2}\] To characterize such solutions, suppose \(\epsilon=0\), then to ensure that (3.2) is satisfied, one needs to solve the optimal control problem \[c(x_{0},x_{f}):=\min_{u\in\mathcal{U}_{x_{0}}^{x_{f}}}\int_{0}^{t_{f}}\frac{1} {2}\|u(t)\|^{2}dt, \tag{3.3}\] where \(\mathcal{U}_{x_{0}}^{x_{f}}\) is the set of control inputs such that \[\frac{\partial x}{\partial t}(t,\theta)= A(\theta)x(t,\theta)+B(\theta)u(t), \tag{3.4}\] \[x(0,\theta)= x_{0}\quad\text{and}\quad\int_{0}^{1}x(t_{f},\theta)d\theta=x_{f},\] has a solution. This tends to measure the optimal change in the drift of the ensemble of the autonomous system that ensures that condition (3.2) is satisfied. _The fact that the final conditional state in (3.2) is parameter-independent motivates the quest to find a parameter-independent control. If the control depends on \(\theta\in[0,1]\), it might lead to different behaviours for different realizations, making it challenging to ensure that (3.2) is satisfied a.s. Another motivation derived from the condition (3.2) is that the natural quantity one observes is the average over the parameter \(\theta\in[0,1]\)._ A more general problem relating to (3.3)-(3.4) has been studied in [24]. 
They showed that the optimal value of the control that steers the average of the ensemble of systems in (3.4) is characterized by the Euclidean distance \[c(x_{0},x_{f})=\frac{1}{2}\left\|x_{f}-\left(\int_{0}^{1}e^{A(\theta)t_{f}}d \theta\right)x_{0}\right\|_{G_{t_{f},0}^{-1}}^{2}, \tag{3.5}\] where \(\|x\|_{G_{t_{f},0}^{-1}}^{2}=x^{\mathrm{T}}G_{t_{f},0}^{-1}x\), for all \(x\in\mathbb{R}^{d}\), whenever \(G_{0,t_{f}}\) in (2.3) is invertible. From this observation, let \[q^{\epsilon,G}(s,x,t,y)=(2\pi\epsilon)^{-\frac{d}{2}}(\det(G_{t,s}))^{-\frac{d }{2}}\exp\left(-\frac{1}{2\epsilon}\left\|y-\left(\int_{0}^{1}e^{A(\theta)(t- s)}d\theta\right)x\right\|_{G_{t_{f},s}^{-1}}^{2}\right) \tag{3.6}\] where \(G_{t,s}\) is defined in (2.3) with \(t=t_{f}\) and \(s=0\), be the transition density of the particles moving independently of each other according to the average diffusion in (3.1). Then, following from [6, 7, 8], the solutions of (3.1) condition to be (3.2) is characterized by the stochastic optimal control problem \[\mathbf{Problem\ 2}:\qquad\min_{u\in\mathcal{U}}\mathbb{E}\left[\int_{0}^{t_{f}} \frac{1}{2}\|u(t)\|^{2}dt\right], \tag{3.7}\] subject to \[dX(t,\theta) =A(\theta)X(t,\theta)dt+B(\theta)u(t)dt+\sqrt{\epsilon}B(\theta) dW(t),\] \[X(0,\theta) =x_{0}\text{ a.s and }\int_{0}^{1}X(t_{f},\theta)d\theta=x_{f},a.s. \tag{3.8}\] To be more precise, if \(u\in\mathcal{U}\subset\mathrm{L}^{2}([0,t_{f}];\mathbb{R}^{m})\), then; 1. \(u(t)\) is \(x(t)\)-measurable, where \(x(t):=\int_{0}^{1}X(t,\theta)d\theta\) with \(X(t,\theta)\) characterized in (2.2), for all \(t\in[0,t_{f}]\), 2. \(\mathbb{E}\left[\int_{0}^{t_{f}}\frac{1}{2}\|u(t)\|^{2}dt\right]<\infty\), 3. \(u\) achieves averaged controllability (see Definition 2.1) for (3.8). Note that in this setting, since we aim to steer the final state to our desired state, the only information available to us is the past and present noise. Here we state our main result. **Theorem 3.1**.: _Suppose \(G_{t_{f},s}\), for all \(0\leq s<t_{f}\), in (2.3) is invertible. Then the optimal control for (3.7) subject to (3.8) is characterized as_ \[u^{*}(t)=-\sqrt{\epsilon}\int_{0}^{t}\Phi(t_{f},t)^{T}G_{t_{f},\tau}^{-1}\Phi( t_{f},\tau)dW(\tau)+\Phi(t_{f},t)^{T}G_{t_{f},0}^{-1}x_{f}. \tag{3.9}\] where \[\Phi(t_{f},\tau)=\int_{0}^{1}e^{A(\theta)(t_{f}-\tau)}B(\theta)d\theta. \tag{3.10}\] Note that re-centring the initial ensemble of systems at the origin \(0\) holds no bearing on the system's characterization, given its inherent linearity. However, we see that the characterization of the optimal control is a departure from the conventional stochastic optimal control literature, where the optimal control assumes the form of a Markov strategy [17, 19]. In particular, when dealing with a Markov process subject to parameter perturbations, the optimal control that steers the stochastic bridge adopts an approach--_a stochastic feedforward input_, to be precise. This unique characterization emerges because of the intricate presence of parameters within the system, further complicating the endeavour to trace the ensemble's average behaviour. The exhaustive proof is omitted due to spatial constraints, with the subsequent sections devoted to illuminating the rationale behind this assertion. The remainder of this paper articulates the intricate dynamics that lend credence to this phenomenon. 
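To give a feel for how the stochastic feedforward control of Theorem 3.1 operates, the following minimal Python sketch simulates it for an assumed scalar ensemble (A(θ) = −θ, B(θ) = 1, θ uniform on [0, 1], and the initial state re-centred at zero); these choices, the discretization, and the parameter values are illustrative assumptions and not part of the paper's formal development. The sketch discretizes Φ(t_f, τ) and G_{t_f,τ}, builds the control (3.9) from the realized noise increments, and checks that the ensemble average is driven (up to discretization error) to the target x_f.

```python
import numpy as np

rng = np.random.default_rng(0)
tf, eps, xf = 1.0, 0.1, 1.0                # horizon, noise intensity, target for the average
n_t, n_th = 2000, 200                      # time steps and theta-grid size (assumed)
dt = tf / n_t
t = np.linspace(0.0, tf, n_t + 1)
theta = (np.arange(n_th) + 0.5) / n_th     # midpoint grid for the uniform parameter on [0, 1]
A, B = -theta, np.ones(n_th)               # assumed scalar ensemble A(theta) = -theta, B(theta) = 1

# Phi(tf, s) = int_0^1 exp(A(theta)(tf - s)) B(theta) d(theta), evaluated on the time grid.
Phi = np.array([np.mean(np.exp(A * (tf - s)) * B) for s in t])
# G_{tf, s} = int_s^tf Phi(tf, tau)^2 d(tau): the scalar version of the Gramian in (2.3).
G = np.array([np.trapz(Phi[k:] ** 2, t[k:]) for k in range(n_t + 1)])

dW = np.sqrt(dt) * rng.standard_normal(n_t)   # common Wiener increments for the whole ensemble
X = np.zeros(n_th)                            # ensemble states, re-centred so that x0 = 0
I_noise = 0.0                                 # running value of int_0^t Phi(tf,tau)/G_{tf,tau} dW(tau)
for k in range(n_t):
    u = Phi[k] * (-np.sqrt(eps) * I_noise + xf / G[0])      # feedforward control (3.9), scalar case
    X += A * X * dt + B * (u * dt + np.sqrt(eps) * dW[k])   # Euler-Maruyama step of the ensemble (3.8)
    I_noise += Phi[k] / G[k] * dW[k]

print("ensemble average at tf:", X.mean(), " target xf:", xf)
```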
**Remark 3.1**.: _To highlight more on the novelty of the above problem, following from [6] we have that problem (3.7)-(3.8), where \(X(0,\theta)\sim\mu_{0}\) and \(\int_{0}^{1}X(t,\theta)d\theta\sim\mu_{f}\) and \(\mu_{0},\mu_{f}\) are given initial and final distributions, is the stochastic control approach to the Schrodinger bridge problem_ \[\min_{\gamma\in\Pi(\mu_{0},\mu_{f})}\int_{\mathbb{R}^{d}\times\mathbb{R}^{d}} \epsilon\gamma(x_{0},x_{f})\mathrm{log}\left(\frac{\gamma(x_{0},x_{f})}{ \exp\left(-\frac{c(x_{0},x_{f})}{\epsilon}\right)}\right)dx_{0}dx_{f}, \tag{3.11}\] _where \(c\) is in (3.5). Therefore, aside from the fact that problem (3.7)-(3.8) is the Dirac extension of [24] to include white noise, more importantly, it also extends the works in [1, 2, 40, 41] to the case where the Markov process is generated from a linear diffusion which is submitted to parameter perturbations._ Since we require the control \(u(t)\) at time \(t\) to be \(x(t)\)-measurable, our object of interest is the controlled-average process \[x(t)=\int_{0}^{t}\Phi(t,\tau)(u(\tau)d\tau+\sqrt{\epsilon}dW(\tau)), \tag{3.12}\] where we have re-centred the dynamics to initialize at \(0\), without any loss and \(\Phi(t,\tau)\) is defined in (3.10). Therefore, the optimal control problem (3.7)-(3.8) is equivalent to the optimal output control problem (3.7) subject to (3.12), where the final state is conditioned to be \(x(t_{f})=x_{f}\) a.s. Rather than solving problem (3.7) subject to (3.12) conditioned to satisfy \(x(t_{f})=x_{f}\), we consider the corresponding alternative free-endpoint formulation \[\min_{u}J(u):=\mathbb{E}\bigg{[}a(x(t_{f})-x_{f})^{T}(x(t_{f})-x_{f})+\int_{0} ^{t_{f}}\frac{1}{2}\|u(t)\|^{2}d\tau\bigg{]} \tag{3.13}\] subject to (3.12), where \(a>0\). Note that the optimal solution for (3.7) subject to (3.12), is obtained by taking \(a\to\infty\). More precisely, if \[\lim_{a\to\infty}\|u_{a}^{*}-u^{*}\|_{L^{2}([0,t_{f}];\mathbb{R}^{m})}^{2}=0,\] where \(u_{a}^{*}\) is the optimal control for (3.13) subject to (3.12), then \(u^{*}\) is the unique optimal solution for (3.7) subject to (3.12). We emphasize here that \(\Phi(t,\tau)\in\mathbb{R}^{d\times m}\) in (3.10) is not a transition matrix in general. The only affirmative case is where \(A=A(\theta)\), for all \(\theta\in[0,1]\). In the latter case, the average process (3.12) satisfies the time-invariant linear diffusion process \[dx(t)= Ax(t)dt+Bu(t)dt+\sqrt{\epsilon}BdW(t), \tag{3.14}\] \[x(0)= x_{0}\quad\text{ and }\quad x(t_{f})=x_{f}\] where \(B=\left(\int_{0}^{1}B(\theta)d\theta\right)\) and the controllability of the pair \((A,B)\) plays a major role in establishing results similar to [2]. In particular, if the system (3.1) is submitted to parameter perturbation only in the diffusive coefficient and \((A,B)\) is a controllable pair, then by averaging and then solving the standard stochastic linear-quadratic optimal control problem (3.13) subject to (3.14) we generate the Brownian bridges with desired statistics (see [2]). 
On the other hand, for a fixed \(A\in\mathbb{R}^{d\times d}\), one can check that the average process (3.12) satisfies the dynamics \[dx(t)=\bigg{(}Ax(t)+Bu(t)+\int_{0}^{t}F_{A(\theta),B(\theta)}(\tau,t)(u(\tau)d \tau+\sqrt{\epsilon}dW(\tau))\bigg{)}dt+\sqrt{\epsilon}BdW(t), \tag{3.15}\] where \[F_{A(\theta),B(\theta)}(\tau,t):=\int_{0}^{1}\left(A(\theta)-A\right)e^{A( \theta)(t-\tau)}B(\theta)d\theta.\] In this context, employing the variational approach to optimize (3.13) subject to (3.15) reveals some significant challenges. The drift term within (3.15) assumes the form of a controlled Ito process, causing this equation to deviate from the conventional definition of a stochastic differential equation (SDE), (see for instance [17, 19]). Therefore, the average random differential equation (3.15) seems to represent a more complex stochastic process than is usually encountered in the literature [2, 16, 17, 18]. However, the real-world significance of (3.15) resides in the average process delineated by (3.12). This formulation captures the central tendency behaviour of the system's fluctuations, thereby holding practical importance. Consequently, standard stochastic control techniques, including those rooted in Hamilton-Jacobi Bellman (HJB) conditions [17, 19], prove unsuitable for this scenario. An alternative avenue lies in the PDE approach [3, 4, 5, 6, 7, 8], yet the presence of noise within the drift term presents challenges when adapting the corresponding parabolic PDE. As a result, the methods delineated in [6, 7, 8] and related references are not readily applicable. These observations collectively imply that the optimal control strategy for problem (3.7)-(3.8), or its equivalent form involving (3.12), cannot be a Markov strategy. Intriguingly unrelated, this insight also signifies the formidable nature of stabilizing the average process. **Special Case:** Before delving into solution techniques, let us consider the classical case. Consider particles governed by the following equations: \[dx(t)= \sqrt{\epsilon}dW(t), \tag{3.16}\] \[x(0)= 0,\quad\text{almost surely (a.s)}.\] Our primary goal is to ensure that, at the final time \(x(t_{f})=0\) a.s. In this special case, since \(\Phi(t_{f},t)=I_{d\times d}\), for all \(t\geq t_{f}\) and \(G_{t_{f},\tau}=(t_{f}-\tau)I_{d\times d}\), we have that the stochastic feedforward control input in (3.9) reduces to \[u^{*}(t)=-\sqrt{\epsilon}\int_{0}^{t}(t_{f}-\tau)^{-1}dW(\tau). \tag{3.17}\] What is interesting is that under these conditions, this optimal stochastic feedforward control input simplifies into a Markovian control strategy. To get to this point, we follow the approach outlined in [2]. This involves solving (3.7), which leads us to (3.13) subject to (3.14), where \(A=0_{d\times d}\in\mathbb{R}^{d\times d}\) and \(x_{0}=x_{f}=0\). Utilizing the HJB conditions [17, 19] and taking limit as \(a\to\infty\), we arrive at the following expression for the optimal control \(u^{*}\): \[u^{*}(t)=-(t_{f}-t)^{-1}x(t), \tag{3.18}\] Notably, by substituting \(u^{*}\) in (3.18) into (3.14), where \(A=0_{d\times d}\in\mathbb{R}^{d\times d}\) and \(x_{0}=x_{f}=0\), we find that the closed-loop trajectory is: \[x(t)=\sqrt{\epsilon}\int_{0}^{t}e^{\int_{t}^{\tau}(t_{f}-\alpha)^{-1}d\alpha} dW(\tau),\] thus, \[x(t)=\sqrt{\epsilon}(t_{f}-t)\int_{0}^{t}(t_{f}-\tau)^{-1}dW(\tau). \tag{3.19}\] By substituting (3.19) into (3.18) we obtain (3.17). 
This illustrates that in cases where the system is not an ensemble, the feedforward control input in (3.17) reduces to the Markovian strategy in (3.18). **Equivalent discrete-time optimal control problem:** To solve problem (3.13) subject to (3.12), we transform it into an equivalent discrete-time optimal control problem. We partition over time so that the partition is consistent with the definition of the Ito integral [16, 17, 18]. To this end, let \(P:=\{0=t_{0}<t_{1}<\cdots<t_{k}=t_{f}\}\) be a regular partition with constant step size \(\triangle t_{k}=t_{i+1}-t_{i}\), for any \(i\in\{0,\ldots,k-1\}\), and suppose \(u_{k,i}=u(t_{i})\) is a constant \(x(t_{i})\)-measurable random variable in \(L^{2}[t_{i},t_{i+1})\), where \(i\in\{0,\ldots,k-1\}\), and consider the discrete-time optimal control problem \[\min_{u_{k}}J_{k}(u_{k}):=\mathbb{E}\bigg{[}a(x_{k}-x_{f})^{T}(x_{k}-x_{f})+\frac{1}{2}\sum_{i=0}^{k-1}u_{k,i}^{T}u_{k,i}\triangle t_{k}\bigg{]}, \tag{3.20}\] subject to \[x_{k}=\sum_{i=0}^{k-1}\Phi_{i}(t_{f})\left(u_{k,i}\triangle t_{k}+\sqrt{\epsilon}\triangle W_{i}\right) \tag{3.21}\] where \(x_{k}:=x(t_{k})\in\mathbb{R}^{d}\), \(u_{k}:=(u_{k,0},\ldots,u_{k,k-1})\in(\mathbb{R}^{m})^{k}\), \(\Phi_{i}(t_{f}):=\Phi(t_{f},t_{i})\in\mathbb{R}^{d\times m}\) and \(\triangle W_{i}:=W(t_{i+1})-W(t_{i})\in\mathbb{R}^{m}\). We call this problem the equivalent discrete-time optimal control problem because the solution of (3.20)-(3.21) is exactly the same as the solution of (3.13) subject to (3.12) (see [42, 43]). We proceed to characterize the optimal control. We omit the proof due to space limitations. **Proposition 3.1**.: _Suppose \(G_{t_{f},s}\), for all \(0\leq s<t_{f}\), in (2.3) is invertible. Then the optimal control for (3.20)-(3.21) is characterized as_ \[u_{a,k,i}^{*}=\\ -\sqrt{\epsilon}\sum_{j=0}^{i}\Phi(t_{f},t_{i})^{T}(\sum_{\alpha=j}^{k-1}\Phi(t_{f},t_{\alpha})\Phi(t_{f},t_{\alpha})^{T}\triangle t_{k}+\frac{1}{2a}I_{n})^{-1}\Phi(t_{f},t_{j})\triangle W_{j}\\ +\Phi(t_{f},t_{i})^{T}(\sum_{\alpha=0}^{k-1}\Phi(t_{f},t_{\alpha})\Phi(t_{f},t_{\alpha})^{T}\triangle t_{k}+\frac{1}{2a}I_{n})^{-1}x_{f}.\] From this result, we have that \[\lim_{a\to\infty}\lim_{k\to\infty}\|u_{a,k}^{*}-u^{*}\|_{L^{2}([0,t_{f}];\mathbb{R}^{m})}^{2}=0,\] where \(u^{*}\) is in (3.9). ## 4 Conclusion and future work In this paper, we have discussed the problem of conditioning a Markov process, subjected to parameter perturbations, to initial and final states. The central motivation behind this endeavor lies in our quest to understand and control the dynamics of ensembles of systems characterized by stochastic processes. Specifically, we have explored the problem of steering an ensemble of linear stochastic systems, in the averaged sense, to a prescribed final state, irrespective of the underlying parameter perturbations. Our investigation has revealed that, due to the inherent complexity introduced by parameter perturbations, the optimal control for this problem cannot adopt a traditional Markov strategy. Instead, we have uncovered a unique characterization of the optimal control, involving a stochastic feedforward input that relies on a time-varying drift. One can view the end-point conditions as Dirac distributions for particles emanating from, and absorbed at, particular points in phase space. This characterization provides a powerful tool for controlling and modelling general distributions of particles and for interpolating density functions.
This leads to a more general Schrodinger bridge problem: the problem of steering particles between specified marginal distributions when the velocities are uncertain or form an ensemble of systems. In this case, the Schrodinger bridge problem is related to the optimal transport problem [40, 41, 44, 45, 46, 24]. In particular, it is known that as the diffusivity tends to zero, the solution of the Schrodinger bridge problem converges to the solution of the optimal transport problem [47, 48, 49, 50, 51, 52, 53]. This extension and other related problems are the subject of ongoing work.
2309.09063
Blind Deconvolution of Sparse Graph Signals in the Presence of Perturbations
Blind deconvolution over graphs involves using (observed) output graph signals to obtain both the inputs (sources) as well as the filter that drives (models) the graph diffusion process. This is an ill-posed problem that requires additional assumptions, such as the sources being sparse, to be solvable. This paper addresses the blind deconvolution problem in the presence of imperfect graph information, where the observed graph is a perturbed version of the (unknown) true graph. While not having perfect knowledge of the graph is arguably more the norm than the exception, the body of literature on this topic is relatively small. This is partly due to the fact that translating the uncertainty about the graph topology to standard graph signal processing tools (e.g. eigenvectors or polynomials of the graph) is a challenging endeavor. To address this limitation, we propose an optimization-based estimator that solves the blind identification in the vertex domain, aims at estimating the inverse of the generating filter, and accounts explicitly for additive graph perturbations. Preliminary numerical experiments showcase the effectiveness and potential of the proposed algorithm.
Victor M. Tenorio, Samuel Rey, Antonio G. Marques
2023-09-16T18:07:16
http://arxiv.org/abs/2309.09063v1
# Blind Deconvolution of Sparse Graph Signals in the Presence of Perturbations ###### Abstract Blind deconvolution over graphs involves using (observed) output graph signals to obtain both the inputs (sources) as well as the filter that drives (models) the graph diffusion process. This is an ill-posed problem that requires additional assumptions, such as the sources being sparse, to be solvable. This paper addresses the blind deconvolution problem in the presence of imperfect graph information, where the observed graph is a _perturbed_ version of the (unknown) true graph. While not having perfect knowledge of the graph is arguably more the norm than the exception, the body of literature on this topic is relatively small. This is partly due to the fact that translating the uncertainty about the graph topology to standard graph signal processing tools (e.g. eigenvectors or polynomials of the graph) is a challenging endeavor. To address this limitation, we propose an optimization-based estimator that solves the blind identification in the vertex domain, aims at estimating the inverse of the generating filter, and accounts explicitly for additive graph perturbations. Preliminary numerical experiments showcase the effectiveness and potential of the proposed algorithm. Victor M. Tenorio, Samuel Rey, and Antonio G. Marques Dept. of Signal Theory and Communications, King Juan Carlos University, Madrid, Spain Graph Filter Identification, Sparse recovery, Graph Denoising, Robust Graph Signal Processing ## 1 Introduction In recent years, we have witnessed an exponential surge in data utilization, accompanied by a concurrent rise in its complexity. Addressing this growing complexity involves harnessing the inherent structure within the data. In this context, Graph Signal Processing (GSP) [1, 2, 3] emerges as a solution to this challenge, employing graphs to capture the underlying structure of the data and interpreting the data as signals defined on the nodes of the graph. This convergence of signal processing and graph theory facilitates navigating the intricacies of modern information, revealing insights and patterns that traditional methods might overlook. Leveraging the structure of the data becomes even more relevant in ill-posed problems where the available observations are insufficient to solve the task at hand. This is precisely the case of blind deconvolution of graph signals, which is an extension to graphs of the classical problem of blind system identification or blind deconvolution of signals in the time or spatial domain [4, 5]. Specifically in GSP, given a set of output signals that are assumed to be the output of a diffusion process driven by a graph filter (GFi), blind deconvolution tries to jointly identify the sources (nodes) of the diffusion process as well as the GFi that drove the diffusion [6, 7, 8, 9, 10]. This problem finds applications in multiple domains, such as in social analysis (identifying the sources and the propagation dynamics of a rumor in a social network) or in neuroscience (identifying the sources of neurological activity as well as the diffusion patterns in a brain network) [11]. The aforementioned approaches to the blind deconvolution problem assume perfect knowledge of the graph topology. This simplifying assumption allows to compute the frequency representation of the graph signals and the diffusing GFi, leveraging those representations in their proposed algorithms. 
Nonetheless, in real-world applications, the graph is prone to contain _imperfections_ due to, for example, noise in the observed links or outdated information when the graph varies with time. Furthermore, when in lieu of physical entities the graph captures pairwise relationships that need to be learned from the data, the limitations of the method used to learn the topology may give rise to perturbations. The ubiquity of graph perturbations and their potential impact on the performance of graph-based algorithms highlight the necessity to develop robust algorithms capable of effectively handling imperfect graph information. Unfortunately, this is not a trivial task, since even characterizing the influence that simple perturbations models (e.g., additive noise models) have on classical GSP tools (the eigenvectors of the graph-shift operator (GSO) that define the graph Fourier transform or the powers of the GSO that are used in a GFi) is quite challenging. Despite these challenges, several GSP works have started to look into this relevant limitation. Works in [12, 13] study the influence of perturbations in the spectrum of the graph while [14] focus on the effect of perturbations in GFis of order one. Rather than analyzing the effects of perturbations, [15, 16, 17] address the identification of (different types of) GFis while accounting for imperfections in the observed topology. Finally, the presence of perturbations has also been considered in non-linear methods, where current approaches range from studying the transferability of non-linear GFis to designing novel GFis robust to perturbations [18, 19, 20]. However, notwithstanding the rising focus on the uncertainty in the graph topology, there have been no efforts to tackle the problem of blind deconvolution from a robust standpoint. **Contributions.** To address the previous limitations, this paper poses the task of robust blind deconvolution of graph signals while considering imperfect topology knowledge. It carefully formulates the problem as a non-convex optimization program and, by relying on different convex relaxations, designs an algorithm to find a solution. The key aspects of the proposed approach are: 1) modeling the true (unknown) graph as an explicit optimization variable to account for perturbations in the observed topology; 2) optimizing over the inverse filter rather than the generating one to simplify the objective function; and 3) modeling the dependence of the inverse GFi on the graph via a commutativity constraint in the vertex domain, which bypasses the challenges of dealing with high-order polynomials and working on spectral domain of the graph. The postulated problem uses the observations of the output signals and an imperfect version of the GSO to jointly obtain the sources of the network diffusion, the underlying GFI driving the process, and an enhanced (denoised) estimate of the graph. Since the sources of non-convexity are reduced to a mere bilinearity, the optimization is tackled through an alternating approach, labeled as the Robust Blind Deconvolution over Graphs (RBDG) algorithm. To the best of our knowledge, this is the first work that does not assume perfect knowledge of the graph topology when solving the blind identification problem. ## 2 Blind Deconvolution in Gsp In this section, we introduce notation and some fundamentals of GSP. Then, we discuss blind deconvolution of graph signals, and present how previous works dealt with it. 
**Notation and GSP preliminaries.** Denote by \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) a graph with \(\mathcal{V}\) and \(\mathcal{E}\) representing its node and edge set, respectively, and with \(|\mathcal{V}|=N\) nodes. Denote by \(\mathbf{A}\in\mathbb{R}^{N\times N}\) its (possibly weighted and directed) adjacency matrix, where \(A_{ij}\neq 0\) if and only if \((i,j)\in\mathcal{E}\). More generally, GSO \(\mathbf{S}\in\mathbb{R}^{N\times N}\) is a matrix encoding the structure of the graph, where \(S_{ij}\) can be non-zero only if \((i,j)\in\mathcal{E}\) or if \(i=j\). Classical examples of matrices playing the role of the GSO are the adjacency matrix or the combinatorial graph Laplacian \(\mathbf{L}=\text{diag}(\mathbf{d})-\mathbf{A}\), where the entries \(d_{j}=\sum_{j}A_{ij}\) represent the nodal degrees. Define also a graph signal as the mapping \(\mathcal{V}\rightarrow\mathbb{R}\), which can be conveniently represented by a vector \(\mathbf{x}\in\mathbb{R}^{N}\), where the entry \(x_{i}\) encodes the signal value at node \(i\in\mathcal{V}\). Finally, a fundamental role in graph signal deconvolution is played by GFs. A GFI is a graph-aware linear operator for graph-signals that can be represented as a matrix polynomial of the GSO of the form \[\mathbf{H}=\sum_{r=0}^{N-1}h_{r}\mathbf{S}^{r}, \tag{1}\] with \(\mathbf{h}=[h_{0},...,h_{N-1}]\) denoting the vector of filter coefficients [21]. Since \(\mathbf{S}^{r}\) encodes the r-hop neighborhood of the graph, GFs are widely used to model diffusion processes over networks [7]. **Blind deconvolution of graph signals.** Consider a diffusion process where the output signal \(\mathbf{y}\in\mathbb{R}^{N}\) is given by \[\mathbf{y}=\mathbf{H}\mathbf{x}+\mathbf{w}, \tag{2}\] with \(\mathbf{x}\in\mathbb{R}^{N}\) being the input signal being diffused, \(\mathbf{H}\) a GFI modeling the diffusion process, and \(\mathbf{w}\in\mathbb{R}^{N}\) a random vector representing noise or model inaccuracies. Then, given a set of \(M\) observed signals \(\{\mathbf{y}_{i}\}_{i=1}^{M}\) generated according to (2), blind deconvolution aims to find the sources of the network diffusion \(\{\mathbf{x}_{i}\}_{i=1}^{M}\) as well as the GFI \(\mathbf{H}\) controlling the diffusion process. This is a challenging and ill-posed problem since both \(\mathbf{x}\) and \(\mathbf{H}\) are unknown. Therefore, to promote its tractability, a workhorse approach is to assume that there are only a few sources for the network diffusion (i.e. \(\mathbf{x}_{i}\) are sparse). Moreover, exploiting that \(\mathbf{H}\) is a GFI so only the coefficients \(\mathbf{h}\) are unknowns becomes critical. Early works dealing with the blind deconvolution problem in the context of GSP appear in [22, 7]. The approach put forth recovers a lifted rank-one matrix \(\mathbf{Z}=\mathbf{x}\mathbf{h}^{T}\), which exhibits certain desirable properties such as being row sparse and rank one. Later on, the works presented in [23, 8] review several existing methods and extend the previous approach to input signals defined as the combination of a few entries in a dictionary, as well as exploring the problem of graph signal sampling and analyzing its similarities with blind deconvolution. Differently, [22, 9] reformulate the problem to identify the frequency response of the inverse filter as a way to bypass the non-convex bilinear term \(\mathbf{H}\mathbf{x}_{i}\) that arises when jointly identifying the filter and the input signals. 
The work in [9] is further developed in [10], where the authors incorporate unrolling schemes [24] to strengthen their design and limit the impact of selecting the hyperparameters. ## 3 Robust Blind Deconvolution over Graphs After having established the notation and the problem context, along with an overview of prior approaches to address it, this section presents the formal problem statement and introduces our proposed algorithmic solution. As previously mentioned, we assume that we do not have access to the true GSO, but to a noisy version \(\bar{\mathbf{S}}=\mathbf{S}+\mathbf{\Delta}\). Here, \(\mathbf{\Delta}\) represents a perturbation matrix whose particular structure will depend on the perturbation at hand (e.g., creating/destroying links, or noisy edge weights) [17]. It is easy to note that the uncertainty encoded in \(\mathbf{\Delta}\) renders the blind deconvolution problem more challenging to solve. The blind identification problem accounting for the graph imperfections is formally stated next. **Problem 1**: _Let \(\mathcal{G}\) be a graph with \(N\) nodes, \(\mathbf{S}\) the true (unknown) GSO, and \(\bar{\mathbf{S}}\) be the perturbed (observed) GSO. Moreover, let \(\{\mathbf{y}_{i}\}_{i=1}^{M}\) be the observed output signals obtained from the unknown input signals \(\{\mathbf{x}_{i}\}_{i=1}^{M}\) as described in (2). The aim is to use the duplet \(\{\mathbf{y}_{i}\}_{i=1}^{M}\), \(\bar{\mathbf{S}}\) to find i) the input signals \(\{\mathbf{x}_{i}\}_{i=1}^{M}\); ii) the filter driving the diffusion process \(\mathbf{H}\); and iii) an enhanced estimate of the real GSO \(\mathbf{S}\). To that end, the following assumptions are in order: (**AS1**) The input signals \(\mathbf{x}_{i}\) are sparse, i.e., \(\|\mathbf{x}_{i}\|_{0}=K\ \forall\ i\in\{1,...,M\}\), where \(K\ll N\). (**AS2**) \(\mathbf{H}\) is a polynomial of \(\mathbf{S}\). (**AS3**) \(\mathbf{S}\) and \(\bar{\mathbf{S}}\) are close according to some metric \(d(\mathbf{S},\bar{\mathbf{S}})\), i.e., the observed perturbations are "small" in some sense._ Assumptions (**AS1**) and (**AS2**), which limit the degrees of freedom of the bilinear model, are standard in the context of blind deconvolution of graph signals [9, 7]. By limiting the level of perturbations, Assumption (**AS3**) guarantees that the matrices \(\mathbf{S}\) and \(\bar{\mathbf{S}}\) are "similar" and, as a result, that \(\bar{\mathbf{S}}\) contains meaningful information about \(\mathbf{S}\). For convenience, let us group the input and output signals in the columns of the \(N\times M\) matrices \(\mathbf{X}:=[\mathbf{x}_{1},...,\mathbf{x}_{M}]\) and \(\mathbf{Y}:=[\mathbf{y}_{1},...,\mathbf{y}_{M}]\), respectively. A natural optimization-based formulation for Problem 1 is \[\min_{\mathbf{X},\mathbf{H},\mathbf{S}} \|\mathbf{Y}-\mathbf{H}\mathbf{X}\|_{F}^{2}+\beta\|\mathbf{S}\|_{0}\] (3) s.to: \[d(\mathbf{S},\bar{\mathbf{S}})\leq\epsilon_{1} \tag{4}\] \[\|\mathbf{x}_{i}\|_{0}\leq K\ \forall\ i\in\{1,...,M\}\] (5) \[\|\mathbf{H}\mathbf{S}-\mathbf{S}\mathbf{H}\|_{F}^{2}\leq\epsilon_{2}, \tag{6}\] where the objective function minimizes the error between the output signals \(\mathbf{Y}\) and the prediction \(\mathbf{H}\mathbf{X}\), along with an \(\ell_{0}\) term on the GSO to promote a sparse solution for \(\mathbf{S}\). The key constraint in our approach is (4), which relates to (**AS3**) and bounds the distance between the GSO \(\mathbf{S}\) and the observation \(\bar{\mathbf{S}}\).
The choice of a suitable distance function will depend on the nature of the perturbation encoded in \(\mathbf{\Delta}\), with plausible examples being the \(\ell_{0}\) pseudo-norm when perturbations create/destroy links or the Frobenius norm when the weights of \(\bar{\mathbf{S}}\) present noise. Next, the constraint (5) is used to limit the sparsity of the signals \(\mathbf{X}\) while (6) promotes that \(\mathbf{H}\) is a GFI (i.e., a polynomial on \(\mathbf{S}\)) as stated in (**AS2**). Note that the commutativity exploits the fact that, since \(\mathbf{H}\) is a polynomial of \(\mathbf{S}\), the two matrices share the same eigenvectors. This simple observation prevents us from dealing with the spectrum of \(\mathbf{S}\) and with high-order polynomials, simplifying the optimization problem. Nevertheless, (3) is clearly a non-convex optimization problem, due to the bilinear terms in (3) and (6) and the \(\ell_{0}\) norms in (3) and (5). To design a convex solution, we rely on an alternating optimization approach. To that end, we make the following considerations: * Inspired by [9, 10], we replace the optimization variable \(\mathbf{H}\) by its inverse, denoted by \(\mathbf{G}:=\mathbf{H}^{-1}\). This change of variable simplifies the objective function by replacing the bilinearity \(\|\mathbf{Y}-\mathbf{H}\mathbf{X}\|_{F}^{2}\) with \(\|\mathbf{G}\mathbf{Y}-\mathbf{X}\|_{F}^{2}\), which is convex in both \(\mathbf{G}\) and \(\mathbf{X}\). A key aspect to note is that this variable change still allows us to use the commutativity term, since the inverse filter \(\mathbf{G}\) also shares eigenvectors with \(\mathbf{S}\).1 Footnote 1: To demonstrate this, let us write the generating GFI as \(\mathbf{H}=\mathbf{V}\text{diag}(\bar{\mathbf{h}})\mathbf{V}^{-1}\), where \(\mathbf{V}\) are the eigenvectors of \(\mathbf{S}\) and \(\bar{\mathbf{h}}\) is the frequency response of the filter. We can then write \(\mathbf{G}=\mathbf{V}\text{diag}(\bar{\mathbf{g}})\mathbf{V}^{-1}\), where \(\bar{g}_{i}=1/\bar{h}_{i}\) for all \(i\). Therefore, \(\mathbf{G}\) and \(\mathbf{S}\) share eigenvectors, and thus commute. * We replace the \(\ell_{0}\) norm by its convex surrogate, the \(\ell_{1}\) norm (to simplify exposition, iterative re-weighted \(\ell_{1}\) alternatives [25] are not included in the formulation but they are considered in the numerical section). * We move the constraints to the objective function by adding them as regularizers. * We assume that \(\bar{\mathbf{S}}\) contains perturbations that create and destroy links of \(\mathbf{S}\) and select the \(\ell_{1}\) norm as the distance function between \(\bar{\mathbf{S}}\) and \(\mathbf{S}\). Nevertheless, recall that any other convex distance can be readily employed. Taking into account the previous considerations, we end up with the following optimization problem \[\min_{\mathbf{X},\mathbf{G},\mathbf{S}} \|\mathbf{G}\mathbf{Y}-\mathbf{X}\|_{F}^{2}+\beta\|\mathbf{S}\|_ {1}+\lambda\|\mathbf{S}-\bar{\mathbf{S}}\|_{1} \tag{7}\] \[+\alpha\|\mathbf{X}\|_{1}+\gamma\|\mathbf{G}\mathbf{S}-\mathbf{ S}\mathbf{G}\|_{F}^{2}\] \[\text{s.to:}\quad\mathrm{Trace}(\mathbf{G})=1,\] where the constraint is used to prevent the trivial solution (\(\mathbf{X}=0,\mathbf{G}=0,\mathbf{S}=\bar{\mathbf{S}}\)). 
The problem in (7) is still non-convex due to the bilinearity in the commutativity term, but it can be solved iteratively using an alternating minimization approach where, at each iteration \(t\), two steps are performed: * **Step 1** (filter and source identification): in this step, we find the optimal solutions for both \(\mathbf{G}\) and \(\mathbf{X}\) by solving the following problem \[\mathbf{G}_{(t)},\mathbf{X}_{(t)}=\text{arg}\min_{\mathbf{G},\mathbf{X}} \|\mathbf{G}\mathbf{Y}-\mathbf{X}\|_{F}^{2}+\alpha\|\mathbf{X}\|_{1}\] (8) \[+\gamma\|\mathbf{G}\mathbf{S}_{(t-1)}-\mathbf{S}_{(t-1)}\mathbf{G}\|_{F}^{2}\] \[\text{s.to:}\quad\mathrm{Trace}(\mathbf{G})=1,\] where we use the estimate of \(\mathbf{S}\) from the previous iteration, \(\mathbf{S}_{(t-1)}\). This problem, which is convex in both \(\mathbf{G}\) and \(\mathbf{X}\), can be solved using standard convex solvers. However, more efficient approaches, such as coordinate descent or proximal methods, can be employed [26]. * **Step 2** (graph denoising): with the solutions of the previous step, we now aim to find a new estimate of the GSO by solving \[\mathbf{S}_{(t)}=\text{arg}\min_{\mathbf{S}} \beta\|\mathbf{S}\|_{1}+\lambda\|\mathbf{S}-\bar{\mathbf{S}}\|_{1}\] (9) \[+\gamma\|\mathbf{G}_{(t)}\mathbf{S}-\mathbf{S}\mathbf{G}_{(t)}\|_{F}^{2},\] which is also convex and amenable to efficient approaches like the previous step. The complete pseudo-code is presented in Algorithm 1. The two steps (8)-(9), which are coupled by the commutativity term, are repeated for a fixed number of iterations \(T\) or until a stopping criterion is met. For the first iteration, the GSO is initialized to the imperfect observation \(\mathbf{S}_{(0)}=\bar{\mathbf{S}}\). The outputs of Algorithm 1 are the estimates of the inverse filter \(\hat{\mathbf{G}}=\mathbf{G}_{(T)}\), the source signals \(\hat{\mathbf{X}}=\mathbf{X}_{(T)}\), and the denoised GSO \(\hat{\mathbf{S}}=\mathbf{S}_{(T)}\). ``` Input: \(\mathbf{Y},\bar{\mathbf{S}}\) Output: \(\hat{\mathbf{G}},\hat{\mathbf{X}},\hat{\mathbf{S}}\). 1 Initialize \(\mathbf{S}_{(0)}\) as \(\mathbf{S}_{(0)}=\bar{\mathbf{S}}\). 2 for \(t=1\) to \(T\) do 3 Compute \(\mathbf{G}_{(t)}\) and \(\mathbf{X}_{(t)}\) by solving (8) using \(\mathbf{S}_{(t-1)}\). 4 Compute \(\mathbf{S}_{(t)}\) by solving (9) using \(\mathbf{G}_{(t)}\) and \(\mathbf{X}_{(t)}\). 5 end for 6 \(\hat{\mathbf{G}}=\mathbf{G}_{(T)},\ \hat{\mathbf{X}}=\mathbf{X}_{(T)},\ \hat{\mathbf{S}}=\mathbf{S}_{(T)}\). ``` **Algorithm 1** Robust blind deconvolution with graph denoising. Note that, unlike [9], we solve the problem in the vertex domain, without relying on the frequency representation of the signals. This allows us to bypass the instability that the perturbations in \(\bar{\mathbf{S}}\) generate in the graph eigenvectors, yielding a more robust approach. Another advantage of the proposed algorithm is that, unlike [7], we do not require the sparsity pattern of the sources to be the same across all signals. Algorithm 1 incorporates four essential hyperparameters: \(\alpha\), \(\gamma\), \(\beta\), and \(\lambda\). These hyperparameters can be found via grid search or any other hyperparameter search algorithm. Since \((\alpha,\gamma,\beta,\lambda)\) are associated with regularizers derived from constraints, they can be interpreted as Lagrange multipliers; as a result, their values can potentially be tuned using tailored dual algorithms. Alternatively, an approach based on the unrolling technique, like the one proposed in [10], could be implemented.
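For readers who want to experiment with the two steps above, the following minimal Python sketch uses cvxpy as a generic convex solver; the graph model, filter coefficients, perturbation level, and hyperparameter values are illustrative assumptions rather than the exact experimental setup of Section 4. It generates a small synthetic instance of model (2), perturbs the GSO, and runs a few alternating iterations of (8) and (9).

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
N, M, K, T = 15, 30, 2, 3                                  # nodes, signals, sources, iterations
alpha, beta, lam, gamma = 1e-2, 1e-2, 1e-1, 1.0            # illustrative hyperparameters

# Ground-truth graph, graph filter H (a polynomial of S), sparse sources, and outputs Y = H X.
A = np.triu((rng.random((N, N)) < 0.3).astype(float), 1)
A = A + A.T
S_true = A / np.abs(np.linalg.eigvalsh(A)).max()            # normalized adjacency as the GSO
H = np.eye(N) + 0.5 * S_true + 0.2 * S_true @ S_true        # assumed filter coefficients
X_true = np.zeros((N, M))
for m in range(M):
    X_true[rng.choice(N, K, replace=False), m] = 1.0        # K active sources per signal
Y = H @ X_true

# Observed (perturbed) GSO: flip a small fraction of node pairs (create/destroy links).
flips = np.triu((rng.random((N, N)) < 0.05).astype(float), 1)
A_bar = np.abs(A - (flips + flips.T))
S_bar = A_bar / max(np.abs(np.linalg.eigvalsh(A_bar)).max(), 1e-9)

S_hat = S_bar.copy()
for _ in range(T):
    # Step 1 (filter and source identification), problem (8).
    G, X = cp.Variable((N, N)), cp.Variable((N, M))
    f1 = (cp.sum_squares(G @ Y - X) + alpha * cp.sum(cp.abs(X))
          + gamma * cp.sum_squares(G @ S_hat - S_hat @ G))
    cp.Problem(cp.Minimize(f1), [cp.trace(G) == 1]).solve()
    G_hat, X_hat = G.value, X.value
    # Step 2 (graph denoising), problem (9).
    S = cp.Variable((N, N))
    f2 = (beta * cp.sum(cp.abs(S)) + lam * cp.sum(cp.abs(S - S_bar))
          + gamma * cp.sum_squares(G_hat @ S - S @ G_hat))
    cp.Problem(cp.Minimize(f2)).solve()
    S_hat = S.value

print("source fit ||G Y - X||_F:", np.linalg.norm(G_hat @ Y - X_hat))
print("relative graph error:", np.linalg.norm(S_hat - S_true) / np.linalg.norm(S_true))
```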
Several of these options will be explored and compared in the journal version of this paper. ## 4 Numerical Experiments In this section, we numerically assess the capabilities of the proposed algorithm. To do so, we first explain the setup common to all experiments and then we present and analyze the results for three test cases. The code used to run the simulations is available in GitHub2. Footnote 2: [https://github.com/vmtcnoriorio/RobustBlindDeconvolution](https://github.com/vmtcnoriorio/RobustBlindDeconvolution) **Experiment setup**: Unless stated otherwise, we sample graphs with \(N=20\) nodes from the small-world random graph model [27], we assign \(K=2\) sources of the network diffusion by selecting \(K\) nodes uniformly at random for each graph signal and we generate \(M=50\) graph signals. We use the adjacency matrix as the GSO, and the coefficients of the GFi are sampled uniformly at random between 0 and 1. The graph is perturbed by rewiring a 10% of the total number of edges. The error represented in the figures is the normalized median error across 25 realizations. We compare Algorithm 1 (labelled as "RBD-G" in the figure legends) with: i) the scheme presented in [9] ("Ye et. al."); and ii) a slight modification of (3)-(6), where we augment the objective in (3) with the constraints (4)-(6), replace the \(\ell_{0}\) pseudo norm with the \(\ell_{1}\) norm, and set the distance function as the \(\ell_{1}\) norm of the difference. The modified problem in ii) is solved using an iterative 3-step process where in step 1 we estimate the GFi \(\mathbf{H}\), in step 2 we optimize \(\mathbf{X}\), and in step 3 we estimate \(\mathbf{S}\). This approach is labeled as "RBD-H" in the figures. Finally, the lines whose labels contain "-rew" represent versions of the previous algorithms where the \(\ell_{1}\) norms of \(\mathbf{X}\) and \(\mathbf{S}\) are replaced with a reweighted \(\ell_{1}\)-approach [25]. **Test case 1**: Here, we analyze the impact of perturbations on the recovered GFi \(\hat{\mathbf{G}}\). Figure 1-(a) shows the normalized error as we increase the perturbations in \(\hat{\mathbf{S}}\). In general, we observe that the error grows as the ratio of rewired links increases and that the proposed approach outperforms the alternatives in every perturbed scenario. As we can see, for the unperturbed case (left-most point of the horizontal axis), the algorithm that yields the most accurate filter estimate is the approach in [9], as we would expect. However, for the smallest perturbation value considered (second left-most point of the horizontal axis), the performance of the approach in [9] drops dramatically. In contrast, our robust algorithms, both using the reweighted version of the \(\ell_{1}\) norm and without it, obtain results in the order of \(10^{-4}\) and \(10^{-1}\), respectively, for all perturbation values. In other words, they are able to properly deal with perturbations in the graph and consistently obtain an accurate representation of the GFi driving the process. Finally, it is worth noting that the naive 3-step approach is not able to properly identify the GFi, most probably due to the increased complexity of introducing a third step along with the non-convexity introduced by the additional bilinear term \(\mathbf{H}\mathbf{X}\). **Test case 2**: Here our focus is on analyzing how the number of sources affects the quality of the estimation \(\hat{\mathbf{X}}\). 
Figure 1-(b) depicts the error in the recovered \(\hat{\mathbf{X}}\) as the number of non-zero entries (\(K\)) in each \(\mathbf{x}_{i}\) increases. From the results, it follows that our algorithms clearly outperform the alternatives. More precisely, we observe that the error remains below \(10^{-3}\) when \(K<5\) (which corresponds to 25% of the total number of nodes in the graph), and then starts deteriorating. Interestingly, comparing "RBD-G-rew" and "RBD-G", we observe that the reweighted \(\ell_{1}\) norm not only provides a better estimate of \(\hat{\mathbf{X}}\), but it is also more resilient to denser signals \(\mathbf{X}\). The results also illustrate that, for the considered setup (10% of errors in the graph), the alternatives are unable to properly identify the sources. **Test case 3**: In the last test case, we analyze the performance of the graph denoising task when increasing the number of available graph signals \(M\). Note that the approach in "Ye et al." does not perform denoising, and therefore we set \(\hat{\mathbf{S}}=\bar{\mathbf{S}}\) for this algorithm. We can see in Figure 1-(c) that, as expected, the normalized error of the GSO for the algorithms proposed in this work decreases as \(M\) increases. The alternatives are again unable to properly denoise the GSO. It is worth mentioning that, even when only \(M=20\) signals are observed (and hence \(M=N\)), our "RBD-G-rew" algorithm already obtains a normalized error of 0.1, providing a high-quality representation of the true GSO. ## 5 Conclusion This work addressed the problem of blind deconvolution in the context of GSP when, instead of perfectly knowing the GSO, only an imperfect (perturbed) observation \(\bar{\mathbf{S}}\) is available. Our robust estimator was designed as the solution to an optimization problem where the sparse sources, the (inverse) filter, and the true graph were jointly learned. Two critical aspects of our design were: a) declaring the true graph as an optimization variable, and b) exploiting the fact that the inverse of the generative filter is a polynomial of the true graph via a commutativity bilinear constraint. These features resulted in an optimization fully formulated in the vertex domain and whose only source of non-convexity is the bilinear constraint, which, in turn, can be tackled via an alternating minimization approach. Preliminary numerical experiments showcased that our algorithm is able to identify the three variables in different scenarios. Future work includes the theoretical characterization of the proposed algorithm, designing more efficient optimization schemes to reduce the computational complexity, and using unrolling techniques to set the value of the regularization constants. Figure 1: Comparing the estimation performance of 5 blind deconvolution approaches. (a) shows the error in the recovered filter \(\hat{\mathbf{G}}\) when modifying the number of links perturbed in \(\bar{\mathbf{S}}\), where the values on the x-axis represent the proportion of the total number of links that have been perturbed, (b) represents the error in the sources of the diffusion \(\mathbf{X}\) with respect to the increase in the number of active sources \(K\), and (c) plots the error in the denoised GSO when increasing the number of samples \(M\).
Blind deconvolution uses the (observed) output graph signals to recover both the input (the sources) and the filter (the model). This is an ill-posed problem that requires additional assumptions, such as the sources being sparse. This paper deals with the problem of blind deconvolution under imperfect graph information, where the observed graph is a perturbed version of the (unknown) true graph. Having perfect knowledge of the graph is rather the exception, and the literature on this setting is comparatively scarce, because it is difficult to translate uncertainty about the graph topology into standard graph signal processing tools (e.g., the eigenvectors of the graph or graph polynomials). To address this limitation, we propose an optimization-based estimator formulated in the vertex domain that estimates the inverse of the generative filter.
2310.04424
Stability Analysis of Non-Linear Classifiers using Gene Regulatory Neural Network for Biological AI
The Gene Regulatory Network (GRN) of biological cells governs a number of key functionalities that enable them to adapt and survive under different environmental conditions. Close observation of the GRN shows that its structure and operational principles resemble those of an Artificial Neural Network (ANN), which can pave the way for the development of Biological Artificial Intelligence. In particular, a gene's transcription and translation process exhibits a sigmoidal-like input-output property based on transcription factor inputs. In this paper, we develop a mathematical model of a gene-perceptron using a dual-layered transcription-translation chemical reaction model, enabling us to transform a GRN into a Gene Regulatory Neural Network (GRNN). We perform stability analysis for each gene-perceptron within the fully-connected GRNN sub-network to determine the temporal dynamics as well as the stable concentration outputs that will result in reliable computing performance. We focus on a non-linear classifier application for the GRNN, where we analyze generic multi-layer GRNNs as well as an E.Coli GRNN that is derived from trans-omic experimental data. Our analysis found that varying the parameters of the chemical reactions allows us to shift the boundaries of the classification region, laying the foundation for programmable GRNNs that suit diverse application requirements.
Adrian Ratwatte, Samitha Somathilaka, Sasitharan Balasubramaniam, Assaf A. Gilad
2023-09-14T21:37:38
http://arxiv.org/abs/2310.04424v1
# Stability Analysis of Non-Linear Classifiers using Gene Regulatory Neural Network for Biological AI ###### Abstract The Gene Regulatory Network (GRN) of biological cells governs a number of key functionalities that enables them to adapt and survive through different environmental conditions. Close observation of the GRN shows that the structure and operational principles resembles an Artificial Neural Network (ANN), which can pave the way for the development of Biological Artificial Intelligence. In particular, a gene's transcription and translation process resembles a sigmoidal-like property based on transcription factor inputs. In this paper, we develop a mathematical model of gene-perceptron using a dual-layered transcription-translation chemical reaction model, enabling us to transform a GRN into a Gene Regulatory Neural Network (GRNN). We perform stability analysis for each gene-perceptron within the fully-connected GRNN sub-network to determine temporal as well as stable concentration outputs that will result in reliable computing performance. We focus on a non-linear classifier application for the GRNN, where we analyzed generic multi-layer GRNNs as well as _E.Coli_ GRNN that is derived from trans-omic experimental data. Our analysis found that varying the parameters of the chemical reactions can allow us shift the boundaries of the classification region, laying the platform for programmable GRNNs that suit diverse application requirements. ## 1 Introduction In recent years, the field of Artificial intelligence (AI) has developed rapidly resulting in sophisticated learning algorithms that have benefited a plethora of applications (e.g., manufacturing, economics, computer vision, robotics, etc.) [(1)]. Inspired by the functions of neurons, the ultimate vision of AI is to create human-like intelligence that one day will have a working capacity close to the brain. Based on the system applications, AI can be categorized into software or hardware-based. Software-based AI includes various forms of algorithms that depends on their structure as well as training process (e.g., convolutional neural networks [(2)], recurrent neural networks [(3)], where a novel applications is large language models such as Generative Pre-trained Transformer (GPT) [(4)]. Neuromorphic processors is a hardware-based AI platform that architecturally consists of neurons and synapses constructed from memristor devices that communicate based on encoded neural spikes. [(5)]. Presently, the vast majority of AI machines are constructed using instruction-encoded circuits and silicon-based semiconductors and nanotechnology [(6)], [(7)], [(8)]. While this enables more efficient computer systems that have capabilities of learning and computing, it also results in significant challenges such as deployments in wet non-silicon mediums (e.g., biological mediums), as well as utilizing large amounts of energy. [(9)]. Current research has aimed to address these challenges and one direction taken is through Biological AI, where computing are performed through living biological cells [(10)], [(11)]. A recent examples is the _DishBrain_, where the system is composed of living neurons that can be trained to play the game of "_Pong_" on a computer [(12)]. In other works, ANNs have been programmed into bacterial cells [(13)], [(14)]. Similarly, molecular circuits programmed to behave like ANN have also been proposed, and one example is the Bio-molecular Neural Network (BNN) [(15)]. 
The underlying basis for all these approaches is the communication of molecules [(16)] that operates as part of the chemical reactions to enable computing operations. From the perspective of Gene Regulatory Networks (GRNs), a connection has been drawn between their structure and the operation of an ANN. In our recent work [17], we developed a model that transforms the gene-gene interactions within the GRN using weights, forming a **GRNN**, while also exploring the impact of structural changes on the computing capacity. In this study, we investigate the behavior of a fully-connected GRNN derived from a GRN, focusing on the stability analysis of the gene translation and transcription process during its computing operation. The stability analysis focuses on each perceptron of the GRNN, which we term a **gene-perceptron**. Figure 1 illustrates the mapping from ANN to GRNN. In a conventional ANN, a perceptron takes multiple inputs (\(x_{1}\) and \(x_{2}\)) and computes their weighted summation (\(\sum\)) that goes through an activation function (\(z(x)\)). In the context of the GRNN, the weights are represented by the Transcription Factor (TF) concentration corresponding to the half-maximal RNA concentration (\(K_{A}\)) and the gene-product copy number (\(C_{N}\)), which individually impact the RNA and protein concentrations. Input-genes (\(g_{X_{1}}\) and \(g_{X_{2}}\)) have TFs that bind to the promoter region of the gene-perceptron \(g_{1,i}\), which is transcribed into RNA (\(R_{i}\)) and then translated into protein (\(P_{i}\)). This can be considered a weighted summation, which results in regulatory effects on gene expression within the gene-perceptron. Based on the stability of each gene-perceptron at the steady state, the maximum-stable protein concentration (\([P]^{*}\)) represents the output. We mathematically model the chemical reactions of the transcription and translation process of a gene-perceptron, which we term the dual-layered transcription-translation reaction model (from here on, simply the dual-layered chemical reaction model). The dual-layered chemical reaction model can be integrated with trans-omic data (transcriptome and proteome) and the cellular GRN to identify the genes that are active in a specific environment, which forms the basis for creating the GRNN. Based on this platform, we will perform stability analysis at the steady state of molecular production (RNA and protein) for the gene-perceptron. Once we prove the stability of the gene-perceptron, as an application we focus on a non-linear classifier relying on the maximum-stable protein concentration for different concentrations of TFs that act as inputs. To evaluate the model's performance, we analyze two generic multi-layer GRNNs and an E.Coli GRNN. We also show that we can manipulate and shift the classification areas based on different parameter configurations. The contributions of this study can be outlined as follows: * **Developing GRNNs inspired by ANN structures using dual-layered chemical reaction models:** Using the dual-layered chemical reaction model, we show that the gene transcription and RNA translation processes exhibit sigmoidal-like molecular concentration dynamics at their stable points. This behavior is governed by the weights, which are a function of the gene product copy number and the TF concentration corresponding to the half-maximal RNA concentration.
* **Stability analysis of GRNN:** We develop full mathematical models derived from the chemical reactions and apply Lyapunov stability analysis to each gene-perceptron to determine the stable protein concentration as well as the temporal production behavior that facilitates reliable GRNN computing. * **GRNN application for non-linear classifiers:** Using the stability analysis, we are able to determine the decision boundaries of the derived GRNNs to classify data within regions of protein concentration output. By varying parameters of the chemical reactions, we demonstrate how the classification area can be shifted, which can serve as a tool for engineering the GRN for several non-linear classifiers based on the application's requirements. ## System Modeling This section describes the mathematical models for the gene transcription and translation within gene-perceptrons, employing a dual-layered chemical reaction model (Figure 2) that breaks down the steps of the translation and transcription process. The production of RNAs depends on RNA polymerase, TFs and \(\sigma\) factors that bind to the promoter (\(Prom\)) [18], as well as the dissociation constant (\(k_{d}\)). Once the TF binds to the promoter \(Prom\), the transcription begins at the rate of \(k_{1}\). This is followed by the RNA degradation at the rate of \(d_{1}\) based on the RNA half-life value [19], RNA binding proteins [20], as well as the degradosome components that include _RNase E_, _RNA helicase_ and _PNPase_ [21]. Following the transcription of the RNAs is the translation into protein, which occurs at the rate of \(k_{2}\) facilitated by ribosomes and transfer RNA (tRNA). Once the RNA is translated, the protein molecules start to degrade gradually at the rate of \(d_{2}\). Significant factors that affect the degradation of protein are non-coding RNAs, as well as energy-dependent and energy-independent proteases. Overall, to maintain the concentration stability in the cell, RNA and protein production are balanced by the degradation process. By taking the dual-layered chemical reaction model into account, we model the concentration changes at the transcriptome and proteome levels using mathematical models. These models enable us to assess the stability of the gene-perceptron expression through the eigenvalue method and determine the stabilization time using the Lyapunov stability theorem. After determining whether a particular gene-perceptron expression is stable, we determine the stability of the entire GRNN. Then, based on the application study, the classification ranges for each gene-perceptron in a network are determined at the equilibrium maximum-stable protein concentration state. Based on the sigmoidal input-output behavior and adjustable threshold, we deduce that gene-perceptrons in the GRNN possess conventional NN properties. For an overview of the algorithm mentioned above, please refer to Figure 3. Figure 1: Illustration of the mapping between components of an ANN and the GRNN. In this depiction, \(w_{i}\) and \(w_{i}(K_{A},C_{N})\) represent the weights of a perceptron in the ANN and GRNN, respectively, while the activation function \(z(x)\) is equivalent to a combination of the transcription process of RNA concentration \([R]_{i}\) as well as the translation to the maximum-stable protein concentration \([P]_{i}^{*}\). The chemical reactions are governed by the transcription rate \(k_{1}\), translation rate \(k_{2}\), degradation rate of RNA \(d_{1}\) and degradation rate of protein \(d_{2}\).
## Modelling Transcription of a Gene In this section, we discuss transcription and the corresponding RNA concentration model. During the transcription process, the RNA polymerase and TFs bind to the promoter region and then the \(\sigma\) factor attaches to the promoter region and unwinds the DNA (22). This is followed by the \(\sigma\) factor release from the polymerase, allowing for the elongation of the RNA chain. Based on (23), the concentration change over time \(t\) of RNA for a particular gene-perceptron \(i\) can be expressed as follows (chemical species are represented using uppercase letters (e.g., \(X\)), and their corresponding concentration is enclosed within brackets (e.g., \([X]\))) \[\frac{d[R]_{i}}{dt}=k_{1_{i}}C_{N_{i}}\frac{[TF]^{n}}{K_{A_{i}}^{n}+[TF]^{n}}- d_{1_{i}}[R]_{i}. \tag{1}\] The gene-perceptron is activated by the TF, where \([R]_{i}\), \(k_{1_{i}}\), \([TF]\), \(d_{1_{i}}\), \(n\), \(C_{N_{i}}\) and \(K_{A_{i}}\) are the RNA concentration, transcription rate, concentration of TFs, degradation rate of RNA, Hill coefficient, gene product copy number and TF concentration when the production of RNA is at the half maximal point for gene-perceptron \(i\), respectively. Given the initial RNA concentration transcribed by a gene-perceptron is \([R]_{i}(0)\) (i.e., \([R]_{i}(t=0)=[R]_{i}(0)\)), the solution of Eq. 1 is derived as follows \[[R]_{i}=\frac{k_{1_{i}}C_{N_{i}}}{d_{1_{i}}}\left(\frac{[TF]^{n}}{[TF]^{n}+K_ {A_{i}}^{n}}\right)(1-e^{d_{1_{i}}t})+[R]_{i}(0)e^{d_{1_{i}}t}. \tag{2}\] In contrast, in the event that the gene-perceptron is repressed by the TF, the RNA concentration changes over time \(t\) is represented as follows, \[\frac{d[R]_{i}}{dt}=k_{1_{i}}C_{N_{i}}\frac{K_{A_{i}}^{n}}{K_{A_{i}}^{n}+[TF]^ {n}}-d_{1_{i}}[R]_{i}. \tag{3}\] Eq. 1 and 3 is expressed as a mass balance differential equation with the difference between the RNA synthesis, which is modelled using the Hill function integrated with the degradation process of the RNA (24), (25), (26). The Hill coefficient \(n\) represents the number of TF molecules that bind simultaneously to the promoter \(Prom\) with \(K_{d}\) reaction dissociation constant when the gene-perceptron is transcribing RNA (23) and is represented as \(Prom+n\ TF\stackrel{{ K_{d}}}{{\rightleftharpoons}}Prom_{nTF}\). The Hill coefficient is critical for the sigmoidal input-output characteristics of the gene-perceptron, as depicted in Figure 4. According to the plot, we can see that when we increase the hill coefficient, the sigmoidicity increase for the maximum-stable protein concentration (\([P]^{*}\)) over the input-gene concentration (\([TF]\)). Thus, when a gene-perceptron possesses a higher hill coefficient, it exhibits more sigmoidal-like behavior. (for our analytical model we consider \(n=1\)). Figure 3: Flow chart for the calculation of classification areas as well as stability based on the dual-layered transcription-translation chemical reaction model of each gene-perceptron. Figure 2: Illustration of dual-layered transcription-translation chemical reaction model of the gene-perceptron. Each components corresponds to the synthesis and degradation of RNA and protein for the \(j^{\text{th}}\) gene-perceptron in the \(i^{\text{th}}\) layer (\(g_{i,j}\)) of the GRNN. Here, \(RnpB\), \(SsrA\) and \(SsrS\) are examples for non-coding RNA (ncRNA). Examples of energy-dependent proteases include \(Lon\), \(HflB\), \(ClpXP\) and \(HslUV\). 
Active TF, RNAP, PNpace, RNase E and tRNA corresponds to active TFs, RNA polymerase, Polyt ribonucleotide phosphorylase, Ribonuclease E and transfer RNA, respectively. ## Modelling Translation of a RNA In this section, we describe RNA-to-protein translation and associated models. Initially, the ribosome and tRNAs form a complex that draws the amino acids in the polypeptide chain to attach to the first codon position of the RNA [27]. This is followed by the tRNAs adding amino acids one by one to form a polypeptide chain while moving along the RNA [28]. Once the stop codon is detected, the polypeptide chain is released, dissociating the ribosome complex from the RNA and forming the protein [29]. This process can be summarized through the protein concentration change over time, and is modelled as follows for a particular gene-perceptron \(i\): \[\frac{d[P]_{i}}{dt}=k_{2_{i}}[R]_{i}-d_{2_{i}}[P]_{i}, \tag{4}\] where \([P]_{i},k_{2_{i}}\) and \(d_{2_{i}}\) are the protein concentration, translation rate and degradation rate of protein for gene-perceptron \(i\). Moreover, \([R]_{i}\) is the concentration of RNA from Eq. 1, and the TF activates the gene-perceptron \(i\) based on Eq. 3 if the TF represses the gene-perceptron. Similar to Eq. 1 and 3, Eq. 4 is modelled based on mass-balance differential equation taking the difference between the RNA produced at the transcriptome level which is translated into protein at the rate of \(k_{2_{i}}\) and the amount of protein that is degraded at the rate of \(d_{2_{i}}\) due to the factors presented in Figure 2. Provided that the initial protein concentration translated by a RNA for gene-perceptron \(i\) is \([P]_{i}(0)\) (i.e., \([P]_{i}(t=0)=[P]_{i}(0)\)), the solution of Eq. 4 is given by \[[P]_{i} =\frac{k_{1_{i}}k_{2_{i}}C_{N_{i}}}{d_{1_{i}}}\left(\frac{[TF]^{n }}{[TF]^{n}+K_{A_{i}}^{n}}\right)\left(\frac{1}{d_{2_{i}}}-\frac{e^{d_{1_{i}}t} }{d_{1_{i}}+d_{2_{i}}}\right)\] \[+[R]_{i}(0)k_{2_{i}}\left(\frac{e^{d_{1_{i}}t}}{d_{1_{i}}+d_{2_{i} }}\right)+e^{-d_{2_{i}}t}[P]_{i}(0)-e^{-d_{2_{i}}t}\] \[\times[R]_{i}(0)k_{2_{i}}\frac{1}{(d_{1_{i}}+d_{2_{i}})}-e^{-d_{2_ {i}}t}\frac{k_{1_{i}}k_{2_{i}}C_{N_{i}}}{d_{1_{i}}}\] \[\left(\frac{[TF]^{n}}{[TF]^{n}+K_{A_{i}}^{n}}\right)\times\left( \frac{1}{d_{2_{i}}}-\frac{1}{(d_{1_{i}}+d_{2_{i}})}\right). \tag{5}\] ## Methods This section introduces the mathematical models for the stability analysis and RNA/Protein concentration changes over time, and subsequently demonstrates how to apply these mathematical models in the GRNNs. ### Gene Expression Stability Analysis In this section, we discuss the approach towards analyzing the stability of the gene-perceptron expression. Our view of the stability of the gene-perceptron is when the RNA transcription as well as the protein translation concentrations reach maximum over time and remain stable at that level exhibiting a sigmoidal behavior. To confirm the existence of transcription and translation upper bounds, we use eigenvalue-based stability analysis. This, in turn, ensures a stable classification region of the GRNN due to a protein concentration with minimum fluctuations that can result in minimized computing errors. Moreover, another crucial property that should be considered in GRNN computing is the time it takes the GRNN to reach stability, which is investigated using the Lyapunov function in the following sections. 
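Before turning to the stability analysis, the dual-layered model itself can be illustrated numerically. The following minimal sketch (with assumed, illustrative rate constants rather than the values in the paper's tables) integrates the transcription model of Eq. 1 and the translation model of Eq. 4 for a single activated gene-perceptron, and compares the end of the trajectory against the maximum-stable levels given by Eq. 6 and 7 in the next subsection.

```python
import numpy as np
from scipy.integrate import odeint

# Assumed illustrative parameters (not the values from the paper's tables).
k1, k2, d1, d2, CN, KA, TF, n = 0.8, 0.6, 0.3, 0.2, 1.0, 2.0, 3.0, 1

def gene_perceptron(state, t):
    R, P = state
    dR = k1 * CN * TF**n / (KA**n + TF**n) - d1 * R   # transcription, Eq. 1
    dP = k2 * R - d2 * P                              # translation, Eq. 4
    return [dR, dP]

t = np.linspace(0.0, 60.0, 600)
R, P = odeint(gene_perceptron, [0.0, 0.0], t).T       # zero initial concentrations

# Maximum-stable levels (Eq. 6 and 7 in the next subsection).
R_star = k1 * CN / d1 * TF**n / (TF**n + KA**n)
P_star = k1 * k2 * CN / (d1 * d2) * TF**n / (TF**n + KA**n)
print(f"R(t_end) = {R[-1]:.4f}  vs  R* = {R_star:.4f}")
print(f"P(t_end) = {P[-1]:.4f}  vs  P* = {P_star:.4f}")
```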
### Stability of Gene-Perceptron based on Eigenvalues The stability of the gene-perceptron is governed by the concentration changes of the gene expression as well as the protein translation, analyzed using the Jacobian matrix of Eq. 1 and 4, which enables us to define the equilibrium point based on the eigenvalues. While we have only considered the case of gene transcription in Eq. 1, our approach is also applicable to the repression process defined in Eq. 3. Since we are analyzing the stability of the gene-perceptron at the equilibrium point, we can represent the maximum-stable RNA \([R]_{i}^{*}\) and protein \([P]_{i}^{*}\) concentrations as follows: \[[R]_{i}^{*}=\frac{k_{1_{i}}C_{N_{i}}}{d_{1_{i}}}\left(\frac{[TF]^{n}}{[TF]^{n}+K_{A_{i}}^{n}}\right), \tag{6}\] \[[P]_{i}^{*}=\frac{k_{1_{i}}k_{2_{i}}C_{N_{i}}}{d_{1_{i}}d_{2_{i}}}\left(\frac{[TF]^{n}}{[TF]^{n}+K_{A_{i}}^{n}}\right). \tag{7}\] The maximum-stable RNA and protein concentrations are determined for different TF concentrations. Additionally, we can vary gene-specific parameters such as \(C_{N_{i}}\) to achieve different non-linear classification ranges [30], implying that by engineering the cell, we can change its decision-making process. To determine the eigenvalues of Eq. 1 and 4 at the equilibrium points of Eq. 6 and 7, we use the Jacobian matrix given in Eq. 24 (please see Appendix). Hence, the eigenvalues are \(\lambda_{1}=-d_{1_{i}}\) and \(\lambda_{2}=-d_{2_{i}}\). Since all the eigenvalues (\(\lambda_{1}\) and \(\lambda_{2}\)) are negative, we can conclude that the gene-perceptron reaches its maximum-stable concentration level. ### Stability of a Gene-Perceptron using Lyapunov function To determine the temporal stability, we employ the Lyapunov stability theorem based on the function (\(V([R]_{i},[P]_{i})\)) (from the Appendix, Eq. 25), which satisfies the necessary conditions: \(V\left([R]_{i},[P]_{i}\right)=0\) when \([R]_{i}=[R]_{i}^{*}\) and \([P]_{i}=[P]_{i}^{*}\), where \([R]_{i}^{*}\) and \([P]_{i}^{*}\) are the RNA and protein concentrations at the equilibrium. Additionally, \(V\left([R]_{i},[P]_{i}\right)>0\) due to the quadratic nature of all terms. Finally, we consider the first derivative of Eq. 25, as given by Eq. 27, as the last condition to be satisfied for the stability of the gene-perceptron. Then, according to Lyapunov's theorem, if Eq. 27 is negative, the gene-perceptron is asymptotically stable, and if Eq. 27 is less than or equal to zero, the gene-perceptron is Lyapunov stable (see Eq. 25 - 27 in the Appendix for the complete derivation). Since it is difficult to directly determine the sign of the derivative of the Lyapunov function in Eq. 27 (please see the Appendix), we illustrate the temporal fluctuation of Eq. 27 in Figure 5. This provides insights into the dynamic stability behavior of the gene-perceptron. Figure 4: Sigmoidicity fluctuations for different Hill coefficients. ## Gene Regulatory Neural Network Analysis While the previous sections present the stability analysis of each individual gene-perceptron, the gene-perceptrons need to be integrated into a GRNN in order to perform the classification operation. We focus on two types of generic multi-layer GRNNs. In the first network, we consider direct gene relationships within the GRN from the input to the outputs, mimicking a multi-layer ANN. In the second case, we consider a Random Structured multi-layer GRNN with intermediate gene-perceptrons.
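Both checks described above can be reproduced in a few lines. The sketch below (again with assumed illustrative rates) forms the Jacobian of Eq. 24, confirms that its eigenvalues \(-d_{1_{i}}\) and \(-d_{2_{i}}\) are negative, and then sweeps the input TF concentration through the maximum-stable protein expression of Eq. 7, tracing the sigmoidal-like response discussed around Figure 4.

```python
import numpy as np

k1, k2, d1, d2, CN, KA, n = 0.8, 0.6, 0.3, 0.2, 1.0, 2.0, 1   # assumed values

# (i) Jacobian at the equilibrium (Eq. 24); stability requires all eigenvalues < 0.
J = np.array([[-d1, 0.0],
              [k2, -d2]])
eig = np.linalg.eigvals(J)
print("eigenvalues:", eig, "-> stable:", bool(np.all(eig < 0)))

# (ii) Maximum-stable protein concentration (Eq. 7) across input TF concentrations,
# tracing the sigmoidal-like response of the gene-perceptron.
for TF in np.linspace(0.0, 10.0, 6):
    P_star = (k1 * k2 * CN) / (d1 * d2) * TF**n / (TF**n + KA**n)
    print(f"[TF] = {TF:4.1f}   [P]* = {P_star:.3f}")
```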
### Multi-layer GRNN This GRNN network, which is illustrated in Figure 6, consists of three hidden layer gene-perceptrons (\(g_{1,1},g_{1,2},g_{1,3}\)) and one output layer gene-perceptron (\(g_{2,1}\)) (\(g_{i,j}\) represents the \(j^{\text{th}}\) gene-perceptron in \(i^{\text{th}}\) layer in the sub-network). The concentrations that is output from layer 1 to layer 2 are \([TF]_{1,1}\), \([TF]_{1,2}\), \([TF]_{1,3}\) and \([P]\) is the output from gene-perceptron \(g_{2,1}\). The two input-genes (\(g_{X_{1}}\) and \(g_{X_{2}}\)) are TFs with corresponding concentrations (\([TF]_{x_{1}}\) and \([TF]_{x_{2}}\)), respectively. The RNA concentration change over time \(t\), for the hidden layer gene-perceptrons, based on Eq. 1, can be expressed as, \[\frac{d[R]_{i}}{dt}=k_{1_{i}}C_{N_{i}}\left(\frac{[TF]_{x_{1}}^{n}}{K_{A_{i}}^{ n}+[TF]_{x_{1}}^{n}}\right)\cdot\left(\frac{[TF]_{x_{2}}^{n}}{K_{A_{i}}^{n}+[TF]_{x_ {2}}^{n}}\right)-d_{1_{i}}[R]_{i}, \tag{8}\] for the activators, \(i=g_{1,1},g_{1,2}\). Since the gene-perceptron \(g_{1,3}\) has a repression from gene-perceptron \(g_{x_{3}}\), the changes in the RNA production based on Eq. 3, is given by \[\frac{d[R]_{g_{1,3}}}{dt}=k_{1_{g_{1,3}}}C_{N_{g_{1,3}}}\left( \frac{[TF]_{x_{1}}^{n}}{K_{A_{g_{1,3}}}^{n}+[TF]_{x_{1}}^{n}}\cdot\frac{K_{A_ {g_{1,3}}}}{K_{A_{g_{1,3}}}^{n}+[TF]_{x_{2}}^{n}}\right)\\ -d_{1_{g_{1,3}}}[R]_{g_{1,3}}. \tag{9}\] The RNA concentration changes of the output gene-perceptron \(g_{2,1}\) that consists of TFs from the gene-perceptrons \(g_{1,1},g_{1,2}\) and \(g_{1,3}\) with the output protein concentration that contribute as TF concentration (\([TF]_{1,1}=[P]_{g_{1,1}},[TF]_{1,2}=[P]_{g_{1,2}}\) and \([TF]_{1,3}=[P]_{g_{1,3}}\)) to accumulate in order to invoke the expression is given by, \[\frac{d[R]_{g_{2,1}}}{dt}=k_{1_{g_{2,1}}}C_{N_{g_{2,1}}}\left( \frac{[TF]_{1,1}^{n}}{K_{A_{g_{2,1}}}^{n}+[TF]_{1,1}^{n}}\right)\\ \cdot\left(\frac{[TF]_{1,2}^{n}}{K_{A_{g_{2,1}}}^{n}+[TF]_{1,2}^ {n}}\right)\cdot\left(\frac{[TF]_{1,3}^{n}}{K_{A_{g_{2,1}}}^{n}+[TF]_{1,3}^ {n}}\right)-d_{1_{g_{2,1}}}[R]_{g_{2,1}}. \tag{10}\] Each of the gene-perceptron also undergoes a translation process. Therefore, the protein concentration change for each gene-perceptron can be modelled using Eq. 4 for \(i=g_{1,1}\), \(g_{1,2}\), \(g_{1,3}\) and \(g_{2,1}\). The maximum-stable protein concentration can be derived by setting Eq. 8 -10 to zero to find \([R]_{i}^{*}\), which is then plugged into Eq. 4 and set to zero for \(i=g_{1,1},g_{1,2},g_{1,3}\) and \(g_{2,1}\), respectively. \[i=g_{1,1},g_{1,2}\Longrightarrow[P]_{i}^{*}=\frac{k_{1_{i}}k_{2_ {i}}C_{N_{i}}}{d_{1_{i}}d_{2_{i}}}\left(\frac{[TF]_{x_{1}}^{n}}{K_{A_{g_{1,3}} }^{n}+[TF]_{x_{1}}^{n}}\right)\\ \times\left(\frac{[TF]_{x_{2}}^{n}}{K_{A_{i}}^{n}+[TF]_{x_{2}}^{ n}}\right), \tag{11}\] \[[P]_{g_{1,3}}^{*}=\frac{k_{1_{g_{1,3}}}k_{2_{g_{1,3}}}C_{N_{g_{1,3}}}}{d_{1_{g_{ 1,3}}}d_{2_{g_{1,3}}}}\Bigg{(}\frac{[TF]_{x_{1}}^{n}}{K_{A_{g_{1,3}}}^{n}+[TF ]_{x_{1}}^{n}}\Bigg{)}\\ \times\left(\frac{K_{A_{g_{1,3}}}}{K_{A_{g_{1,3}}}+[TF]_{x_{2}}^{ n}}\right), \tag{12}\] Figure 5: Temporal stability of a Gene-perceptron based on the derivative of the Lyapunov function with respect to time. This shows that the gene-perceptron reaching stability over time. 
Figure 6: Multi-layer GRNN with two-input layer nodes, three hidden-layer gene-perceptrons (\(g_{1,1},g_{1,2},g_{1,3}\)) and one output layer gene-perceptron (\(g_{2,1}\)) and their corresponding output concentrations are transcription factors \([TF]_{1,1},[TF]_{1,2},[TF]_{1,3}\) and protein concentration \([P]\) respectively. There are two input-genes (\(g_{x_{1}}\), \(g_{x_{2}}\)) considered as two TFs with concentration of \([TF]_{x_{1}}\) and \([TF]_{x_{2}}\), respectively. In this context, \(g_{i,j}\) represents the \(j^{\text{th}}\) gene-perceptron in \(i^{\text{th}}\) layer in the GRNN. Input-gene activators and input-gene regressors are denoted by \((+)\) and \((-)\) edges, respectively. The weights \((w)\) of this GRNN is a function of the TF concentration corresponding to the half-maximal RNA concentration (\(K_{A_{i}}\)) and gene-product copy number (\(C_{N_{i}}\)) for the gene-perceptron \(i\) represented as \(w(K_{A_{i}},C_{N_{i}})\). \[[P]^{*}_{g_{2,1}}=\frac{k_{1_{g_{2,1}}}k_{2_{g_{2,1}}}C_{N_{g_{2,1}}} }{d_{1_{g_{2,1}}}d_{2_{g_{2,1}}}}\left(\frac{[TF]^{n}_{1,1}}{K^{n}_{A_{g_{2,1}}}+[ TF]^{n}_{1,1}}\right)\] \[\times\left(\frac{[TF]^{n}_{1,2}}{K^{n}_{A_{g_{2,1}}}+[TF]^{n}_{1,2}}\right)\left(\frac{[TF]^{n}_{1,3}}{K^{n}_{A_{g_{2,1}}}+[TF]^{n}_{1,3}} \right). \tag{13}\] Eq. 11 - 13, which are the stable concentration quantity of proteins produced, is used to compute the classification areas for each gene-perceptron based on the value of concentration, which is further elaborated in the Results section as we present a case study. Subsequently, we apply the approach from the Methods Section to show the stability of the gene-perceptron in this GRNN. The overall stability of the GRNN based on the derived Lyapunov function of Eq. 27 (please see Appendix), which can be further expressed for \(l\) number of TFs connected to a gene-perceptron (\(i\)), is represented as follows \[\frac{dV}{dt}=-\prod_{j=1}^{l}\frac{C^{2}_{N_{i}}\cdot[TF]^{2n}_{ j}\cdot k^{2}_{1_{i}}\cdot e^{(-2t(d_{i_{1}}+d_{2_{i}}))}}{d_{1_{i}}d_{2_{i}}([TF]^{ n}_{j}+K^{n}_{A_{j}})^{2}(d_{1_{i}}-d_{2_{i}})^{2}}\] \[\times(d^{3}_{2_{i}}\cdot e^{(2d_{2}t_{i})}-2d_{1_{i}}d^{2}_{2_{i} }\cdot e^{(2d_{2}t_{i})}+d_{1_{i}}d_{2_{i}}\cdot e^{(2d_{2}t_{i})})\] \[\qquad\qquad+(d_{1_{i}}k^{2}_{2_{i}}\cdot e^{(2d_{1}t_{i})}+d_{2_ {i}}k^{2}_{2_{i}}\cdot e^{(2d_{2}t_{i})})-\] \[\qquad\qquad-(d_{1_{i}}k^{2}_{2_{i}}\cdot e^{(t(d_{1_{i}}+d_{2_{i }}))})+d_{2_{i}}k^{2}_{2_{i}}\cdot e^{(t(d_{1_{i}}+d_{2_{i}}))}), \tag{14}\] where \([TF]_{j}\) and \(K_{A_{j}}\) are concentration of \(j^{\text{th}}\) TF and corresponding half maximal RNA concentration for gene-perceptron \(i\), respectively. ### Random Structured GRNN As described earlier, relationship of gene-perceptrons within a GRN that have common TFs may have intermediate gene-perceptrons within the path of connections. We analyze how this impacts on the overall stability of the GRNN, where the network for this case is presented in Figure 7. In this form of networks, it is necessary to consider the RNA concentration change from the intermediate gene-perceptron (\(g_{2,1}\)) and its impact on the output layer gene-perceptron (\(g_{3,1}\)). 
The expressions for each gene-perceptrons, and their relative TFs from their immediate predecessor, is represented as follows: \[\frac{d[R]_{g_{2,1}}}{dt}=k_{1_{g_{2,1}}}C_{N_{g_{2,1}}}\left( \frac{[TF]^{n}_{1,1}}{K^{n}_{A_{g_{2,1}}}+[TF]^{n}_{1,1}}\right)-d_{1_{g_{2,1}} }[R]_{g_{2,1}}, \tag{15}\] \[\frac{d[R]_{g_{3,1}}}{dt}=k_{1_{g_{3,1}}}C_{N_{g_{3,1}}}\left( \frac{[TF]^{n}_{2,1}}{K^{n}_{A_{g_{3,1}}}+[TF]^{n}_{2,1}}\right)\cdot\left( \frac{[TF]^{n}_{1,2}}{K^{n}_{A_{g_{3,1}}}+[TF]^{n}_{1,2}}\right)\] \[\times\left(\frac{[TF]^{n}_{1,3}}{K^{n}_{A_{g_{3,1}}}+[TF]^{n}_{1,3}}\right)-d_{1_{g_{3,1}}}[R]_{g_{3,1}}. \tag{16}\] Here, the protein concentration from Eq. 5 can be derived from Eq. 15 (i.e., \([TF]_{1,1}=[P]_{1,1}\)), since the gene-perceptron \(g_{2,1}\) is activated by gene-perceptron \(g_{1,1}\). The RNA concentration models behaves similarly to the case without the intermediate gene-perceptron for the gene-perceptrons \(g_{1,1}\), \(g_{1,2}\)\(g_{1,3}\) and can be derived directly from Eq. 8 and 9. Using Eq. 4 we can determine the protein concentration change for each gene-perceptron Figure 7. Using the maximum-stable protein concentration derived from Eq. 15 and 16, we can determine \([R]^{*}_{i}\), which is then applied to Eq. 4 and used to determine the maximum-stable value for \(i=g_{2,1}\) and \(g_{3,1}\). This will result in the following maximum-stable protein production that is represented as follows \[[P]^{*}_{g_{2,1}}=\frac{k_{1_{g_{2,1}}}k_{2_{g_{2,1}}}C_{N_{g_{2,1}}}}{d_{1_{g _{2,1}}}d_{2_{g_{2,1}}}}\left(\frac{[TF]^{n}_{1,1}}{K^{n}_{A_{g_{2,1}}}+[TF]^{n }_{1,1}}\right), \tag{17}\] \[[P]^{*}_{g_{3,1}}=\frac{k_{1_{g_{3,1}}}k_{2_{g_{3,1}}}C_{N_{g_{3,1}}}}{d_{1_{g _{3,1}}}d_{2_{g_{3,1}}}}\left(\frac{[TF]^{n}_{2,1}}{K^{n}_{A_{g_{3,1}}}+[TF]^{n }_{2,1}}\right)\] \[\cdot\left(\frac{[TF]^{n}_{1,2}}{K^{n}_{A_{g_{3,1}}}+[TF]^{n}_{1,2}}\right)\left(\frac{[TF]^{n}_{1,3}}{K^{n}_{A_{g_{3,1}}}+[TF]^{n}_{1,3}} \right). \tag{18}\] We use Eq. 11 to determine \([P]^{*}_{i}\) for \(i=g_{1,1}\) and \(g_{1,2}\), while for \(i=g_{1,3}\) we use Eq. 12. For the stability analysis, Eq. 14 is used with \(l=2\) for \(g_{1,1}\), \(g_{1,2}\) and \(g_{1,3}\), \(l=1\) for \(g_{2,1}\) and \(l=3\) for \(g_{3,1}\) corresponding to the number of TFs for each gene-perceptron. ## 4 Results In this section, we perform the temporal stability analysis and obtain the classification areas for the two multi-layer GRNN network topologies (Figures 6, 7) as well as the GRNN derived from _E.Coli_ GRN. from the input-gene \(g_{X_{1}}\). The output-layer gene-perceptron (\(g_{2,1}\)) followed a similar trend as gene-perceptrons \(g_{1,1}\) and \(g_{1,2}\) attaining Lyapunov stability within the initial 30 seconds because its immediate predecessors are all activators. Given the gene-perceptron's stability at the equilibrium (Figure 8), we can use Eq. 11 - 13 to calculate output protein \([P]_{i}^{*}\) for different input concentrations (\([TF]_{x_{1}}\) and \([TF]_{x_{2}}\)). The calculated output protein \([P]_{i}^{*}\) is illustrated over varying input concentrations, highlighting the values above and below the threshold (\([P]^{*}=0.5\)). Decision boundaries reflect how the classification areas change based on the edge (activation or repression) connected to the target gene-perceptron and corresponding parameters in Eq. 11 - 13. 
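The classification regions discussed here can be computed directly from the steady-state expressions. Below is a minimal sketch (the gains and half-maximal concentrations are placeholders, not the values of Table 1) that evaluates Eq. 11 - 13 on a grid of input TF concentrations for the GRNN of Figure 6 and thresholds the output gene-perceptron at \([P]^{*}=0.5\).

```python
import numpy as np

def act(tf, KA, n=1):                 # activating Hill term, as in Eq. 1
    return tf**n / (KA**n + tf**n)

def rep(tf, KA, n=1):                 # repressing Hill term, as in Eq. 3
    return KA**n / (KA**n + tf**n)

def p_star(gain, hill_terms):         # Eq. 11-13 with gain = k1*k2*CN/(d1*d2)
    return gain * np.prod(hill_terms, axis=0)

gain = 2.0                            # assumed, identical for all gene-perceptrons here
KA_hidden, KA_out = 2.0, 0.5          # assumed half-maximal TF concentrations

tf1, tf2 = np.meshgrid(np.linspace(0.1, 5.0, 200), np.linspace(0.1, 5.0, 200))

# Hidden layer: g_{1,1} and g_{1,2} are activated by both inputs (Eq. 11),
# g_{1,3} is activated by x1 and repressed by x2 (Eq. 12).
P11 = p_star(gain, [act(tf1, KA_hidden), act(tf2, KA_hidden)])
P12 = p_star(gain, [act(tf1, KA_hidden), act(tf2, KA_hidden)])
P13 = p_star(gain, [act(tf1, KA_hidden), rep(tf2, KA_hidden)])

# Output layer: g_{2,1} takes the three hidden protein levels as its TFs (Eq. 13).
P21 = p_star(gain, [act(P11, KA_out), act(P12, KA_out), act(P13, KA_out)])

region = P21 > 0.5                    # classification area above the threshold [P]* = 0.5
print(f"fraction of the input grid classified above the threshold: {region.mean():.2f}")
```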
The inputs (\([TF]_{x_{1}}\) and \([TF]_{x_{2}}\)) vary, while parameters like gene product copy number (\(C_{N_{1}}\)), transcription rate (\(k_{1_{j}}\)), translation rate (\(k_{2_{j}}\)), RNA degradation rate (\(d_{1_{j}}\)), protein degradation rate (\(d_{2_{j}}\)) and TF concentration corresponding to the half maximal RNA concentration (\(K_{A_{j}}\)) are kept constant. We consider two parameters sets to determine the different classification regions, which are presented in Table 1. For the parameters set 1, we obtain the classification areas shown in Figure (a)a. The decision boundary and their top-view for each gene-perceptron are shown in the first and second row, respectively. The gene-perceptron \(g_{1,2}\) has the largest classification area above the threshold due its lower TF concentration corresponding to half maximal RNA concentration \(K_{A_{t}}\), compared to gene-perceptrons \(g_{1,1}\) and \(g_{1,3}\). Moreover, the decision boundaries for gene-perceptrons \(g_{1,1}\) and \(g_{1,2}\) exhibits a similar shape classifying majority of the values above the threshold. In contrast, the gene-perceptron \(g_{1,3}\) covers larger area for the values below the threshold since it is repressed by the input-gene \(g_{x_{2}}\). The intersection of classification areas corresponding to hidden layer gene-perceptrons is represented by the output layer gene-perceptron \(g_{2,1}\), where the classification area above the threshold is approximately bounded by input concentrations, \(2.5\leq[TF]_{x_{1}}\leq 3.5\) and \(3.4\leq[TF]_{x_{2}}\). Due to the significant contribution from gene-perceptrons \(g_{1,1}\) and \(g_{1,2}\) beyond the threshold, the output layer gene-perceptron \(g_{2,1}\) exhibits a rightward shift. For the parameter set 2 (Table 1), the lower \(K_{A_{t}}\) values have shifted the classification area above the threshold compared to parameter set 1. This shift is evident in Figure (b)b, particularly for the gene-perceptron \(g_{1,2}\), which results in classifying majority of the values above the threshold. Conversely, for the gene-perceptron \(g_{1,3}\), the classification area shifts below the threshold due to the repression from the input when reducing the half maximal RNA concentration \(K_{A_{t}}\). The classification range for the gene-perceptron \(g_{1,1}\) expands compared to parameter set 1, approximately bounded by \(2.3\leq[TF]_{x,1}\) and \(2.1\leq[TF]_{x,2}\). Considering all gene-perceptrons, the output layer gene-perceptron \(g_{2,1}\) shows a leftward shift in the decision boundary, becoming slightly more linear. Overall, modifying the half maximal RNA concentration \(K_{A_{t}}\) can significantly expand the classification area. Eq. 14 and the parameter set 1 from Table 2. Similar to the Figure 8, gene-perceptrons \(g_{1,1},g_{1,2},g_{3,1}\) and the intermediate gene-perceptron \(g_{2,1}\) exhibit consistent stability fluctuations due to their immediate predecessor being activators. Additionally, gene-perceptron \(g_{1,3}\) shows similar stability fluctuation patterns as the gene-perceptron \(g_{1,3}\) in the network without the intermediate gene-perceptron and this is because both are being influenced by their repressive predecessors. Following the temporal stability analysis, we apply Eq. 11 and 12 to determine the maximum-stable protein concentration (\([P]_{i}^{*}\)) for the gene-perceptrons \(g_{1,1},g_{1,2}\) and \(g_{1,3}\). However, unlike the GRNN in Figure 6, Eq. 
13 is not used to determine the classification area for the output layer gene-perceptron. Instead, for the computation of \([P]_{i}^{*}\) for the gene-perceptrons \(g_{2,1}\) and \(g_{3,1}\), both Eq. 17 and 18 is employed due to the addition of the intermediate gene-perceptron compared to the multi-layer GRNN in Figure 6. The calculated protein concentration output \([P]_{i}^{*}\) values for different input concentrations used to determine the classification area for each gene-perceptron is presented in Figure 12. We also used two different sets of parameters from Table 2 to analyze different classification areas. The parameter set 1 results in the classification areas shown in Figure 11(a). As the gene-perceptron \(g_{2,1}\) serves as the intermediate gene-perceptron of \(g_{1,1}\), we observe similar classification areas and decision boundaries. Additionally, repression from the input-gene \(g_{x_{1}}\) to the gene-perceptron \(g_{1,3}\) results in a distinctive decision boundary, approximately within the range of \(3\leq[TF]_{x_{2}}\) and \(3\geq[TF]_{x_{1}}\). Overall, the gene-perceptron \(g_{3,1}\) represents the intersection of the hidden layer gene-perceptrons, with the classification area beyond the threshold bounded by Figure 11: Temporal stability for each gene-perceptrons in the _E. coli_ GRNN. Figure 10: Temporal stability of the gene-perceptrons for the Random Structured GRNN. Figure 9: Parameter configurations for the Multi-layer GRNN depicted in Figure 6. Each graph depicts the classification area of each gene-perceptron and for (a) Parameter set 1, as well as (b) Parameter set 2 (\(g_{2,1}\) is the output gene-perceptron that combines all classification areas of gene-perceptrons from the previous layer). \(2.5\leq[TF]_{x_{2}}\leq 3.5\) and \(3\geq[TF]_{x_{1}}\). In contrast, reducing the TF concentration at the half maximal RNA concentration (\(K_{A_{t}}\)) for a gene-perceptron as shown in parameter set 2, alters the classification areas for both gene-perceptron \(g_{1,1}\) and its immediate intermediate gene-perceptron \(g_{2,1}\), as illustrated in Figure 11(b). The classification area significantly expands above the threshold, while dropping below it when lowering the TF concentration corresponding to the half-maximal RNA concentration \(K_{A_{t}}\), as it is inversely proportional to the maximum protein concentration \([P]_{i}^{*}\) based on Eqs. 8 and 17. Alterations made to gene-perceptron \(g_{1,1}\) notably impacts \(g_{2,1}\), the predecessor gene-perceptron in the GRNN. Other hidden layer gene-perceptrons \(g_{1,2}\) and \(g_{1,3}\) remain unaffected between parameter sets 1 and 2. Parameter set 2 results in a leftward shift in the classification area of the output layer gene-perceptron \(g_{3,1}\) compared to set 1. In summary, parameter adjustments leads to shifts in the decision boundary of the output layer gene-perceptrons; with decreased \(K_{A_{t}}\) causing a leftward shift in the the classification area. ### E.Coli GRNN Classification Analysis This section demonstrates the classification areas for the _E.coli_ GRNN illustrated in Figure 12(a), which is extracted from the trans-omic data of _E.coli_ GRNN (31). The network consists of two input-genes (\(b3025,b3357\)), two hidden layer gene-perceptrons (\(b1891\) and \(b1892\)) and one output layer gene-perceptron (\(b1071\)) with their corresponding TF concentrations \([TF]_{i}\) for \(i=b3025,b3357,b1891\) and \(b1892\), and protein concentration \([P]_{b1071}\). 
In this specific GRNN, all TFs are considered activators. For the output layer gene-perceptron (\(i=b1071\)), we employ Eqs. 8, 4 and 11 with TFs \(x_{1}=b1891\) and \(x_{2}=b1892\) to calculate RNA, protein concentration change and maximum protein concentration (\([P]_{i}^{*}\)), respectively using the parameter values in Table 3. Similar to the previous GRNNs, we based the stability analysis for this GRNN on Eq. 14. For the 2 input layer gene-perceptrons (\(i=b1891\) and \(b1892\)), we consider TFs \(j=b3025,b3357\), while for the output layer gene-perceptron \(i=b1071\), we evaluate stability with the TFs \(j=b1891,b1891\). In the previous GRNNs, we found that in Figures 8, 10 that the gene-perceptrons with an immediate activator, exhibits a consistent stability fluctuations before reaching Lyapunov stability \(\left(\frac{dV}{dt}\approx 0\right)\). This is also a similar behaviour with the _E.Coli_ GRNN, which is shown in Figure 11, which shows the temporal stability for the gene-perceptrons (\(g_{1,1}\), \(g_{1,2}\) and \(g_{2,1}\)) that is influenced by the immediate activator predecessors displaying uniform stability. Overall, the analysis indicates that all the gene-perceptrons in the GRNN eventually attained the Lyapunov stability, ensuring network-wide stability, but with different timing periods. Once proving the stability of the GRNN, we ascertain the maximum-stable protein concentration to obtain the classification Figure 12: Parameter configurations for the Random Structured GRNN in Figure 6. Each graph depicts the classification area of each gene-perceptron and for (a) Parameter set 1; (b) Parameter set 2 (\(g_{3,1}\) is the output gene-perceptron that combines all classification areas of gene-perceptrons from the previous layer). ranges. In order to compute maximum-stable protein concentration (\([P]_{i}^{*}\)) for gene-perceptrons \(i=b1891\) and \(1892\), we use Eq. 11 with the replacement of \(x_{1}\) and \(x_{2}\) by \(b3025\) and \(b3357\) as input genes. Furthermore, for the computation of output concentrations \([P]_{i}^{*}\), concerning gene-perceptron \(i=b\,1071\), Eq. 11 is used with TFs as \(x_{1}=b\,1891\) and \(x_{2}=b\,1892\) with the assumption that the Hill coefficient \(n\) is equal to \(1\) in all simulations. Since \(K_{A_{i}}\) is the TF concentration corresponding to the half maximal RNA concentration, there are two \(K_{A_{i}}\) values for each gene-perceptron because each has two TFs, as shown in Figure 12(a). The time-series data of gene expression levels for _E.coli_ was used by first identifying the gene's half maximal expression level \(K_{A_{i}}\) and then finding the expression level of its TF at that corresponding time point. For the remaining parameters that was obtained from literature as shown in Table 3, the average value was used. The classification area from our analysis is shown in Figure 12(b). The classification area of gene-perceptron \(b\,1892\) has expanded towards the left when compared to \(b\,1891\), and this is because the expression level of the half-maximal RNA concentration \(K_{A_{i}}\) of both TFs (\(b3025\) and \(b\,3357\)) corresponding to \(b\,1891\) exceed the value of \(K_{A_{i}}\) for \(b\,1892\). The classification area above the threshold of \(b\,1892\) is defined within the limits of \([TF]_{b3025}\geq 2.7\) and \([TF]_{b3357}\geq 2.7\), in contrast to \(b\,1891\) which is approximately bounded by \([TF]_{b3025}\geq 3.5\) and \([TF]_{b3357}\geq 3.8\). 
Consistent with the decision boundary simulations performed on the two generic multi-layer GRNNs (Figure 9 and 12), the output-layer gene-perceptron (\(b1071\)) of this GRNN also exhibited a intersection of classification areas driven by the input-layer gene-perceptrons. In line with this, as gene-perceptron \(b\,1891\) had the majority of its classification area below the threshold and gene-perceptron \(b\,1892\) had the majority above the threshold, the decision boundary of gene-perceptron \(b\,1071\) is approximately bounded by \([TF]_{b3025}\geq 2.9\) and \([TF]_{b3357}\geq 2.9\). Overall, gene-perceptrons within the GRNN derived from E.coli GRN exhibit tunable decision boundaries by selecting sub-netowrks from the GRN at steady-state and collectively they function as multi-layer GRNN showcasing aspects of biological AI. ## 6 Conclusion In this study, we introduced a GRNN that can be derived from a cell's GRN and mathematical modelling this for the transcription and translation process, transforming a gene into a gene-perceptron. We also performed stability analysis for the GRNN as it functions as a non-linear classifier. This is based on the eigenvalue method and the Lyapunov's stability theorem, with the latter approach capable of determining the time at which the stability is achieved. The classification application was applied to two multi-layer GRNNs as well as a sub-network extracted from the E.coli GRN using trans-omic data. From the simulation for different parameter settings for the two multi-layer GRNN revealed that the TF concentration at the half maximal gene expression level \(K_{A_{i}}\), has a significant impact on the shifting of the classification boundary. Based on the outcomes of the stability analysis and simulations, we can conclude that the GRN exhibits NN properties as the gene-perceptron demonstrated sigmoidal-like behavior for multiple inputs and tunable decision boundary. Further, by engineering living cells it is possible to obtain desired non-linear classifiers based on our application. Our model has potential to transform GRNs into GRNN when the suitable parameters are established for the dual-layered chemical reaction model. ## 7 Author Contributions A.R., S.S. and S.B. designed the theoretical framework of the study. The implementation of the analysis was done by A.R. while Figure 13: _E. coli_ GRNN classification analysis. (a) Fully-connected GRNN derived from the E.coli GRN. This network consists of two input-genes (\(b3025,b3357\)), two hidden layer gene-perceptrons (\(b\,1891\) and \(b\,1892\)), and one output layer gene-perceptron (\(b1071\)). (b) Classification regions of each gene perceptron within the _E. coli_ GRNN, with gene-perceptron \(b\,1071\) as the output. A.G. provided the knowledge for the biological aspect of this study. All the authors wrote and reviewed the final manuscript. ## Acknowledgments This publication has emanated from research conducted with the financial support of National Science Foundation (NSF) under Grant Number 2316960. ## Declaration of Interests The authors declare no competing interests. ## Appendix ### RNA and Protein Concentration Model To model the RNA and protein concentration change, mass-balance differential equations were used based on Hill function. 
Transcription of a gene-perceptron begins with TF and RNA polymerase binding to the promoter, which is modelled by, \[[Prom.TF]=C_{N_{i}}\frac{[TF]^{n}}{[TF]^{n}+K_{A_{i}}^{n}}, \tag{19}\] where \([TF],n,K_{A_{i}},[Prom.TF]\) and \(C_{N_{i}}\) are concentration of TFs, Hill coefficient, TF concentration corresponding to half maximal RNA concentration, complex produced after TFs bind to promoter and gene product copy number, respectively. The complex, \(Prom.TF\) transcribes into RNA at the rate of \(k_{1_{i}}\) and subsequently RNA degrades at the rate of \(d_{1_{i}}\) which can be modelled as \[\frac{d[R]_{i}}{dt}=k_{1_{i}}[Prom.TF]-d_{1_{i}}[R]_{i}. \tag{20}\] By plugging Eq. 19 in Eq. 20 we can obtain Eq. 1. In contrast, if a gene-perceptron is repressed by a TF, Eq. 19 can be expressed as \[[Prom.TF]=C_{N_{i}}\frac{K_{A_{i}}^{n}}{K_{A_{i}}^{n}+[TF]^{n}}. \tag{21}\] Since the initial RNA concentration transcribed by a gene-perceptron is \([R]_{i}(0)\) (i.e., \([R]_{i}(t=0)=[R]_{i}(0)\)), the solution of Eq. 1 as given by Eq. 2 can be derived using the integrating factor, \(IF=e^{f\,d_{1_{i}}\,dt}=e^{d_{2_{i}}t}\), where \(t\) and \(d_{1_{i}}\) are time and RNA degradation rate, respectively. Transcribed RNA is then translated into protein at the proteome level. To solve the differential equation of protein concentration change for Eq. 4 we can follow 2 steps. **Step 1**: Replacing RNA concentration (\([R]_{i}\)) in Eq. 4 with the solution obtained for the differential equation of RNA concentration change from Eq. 2. **Step 2**: Using the integrating factor (\(IF=e^{f\,d_{2_{i}}\,dt}=e^{d_{2_{i}}t}\)) and initial RNA concentration (\([R]_{i}(0)\)), as well as initial protein concentration \([P]_{i}(0)\) (i.e., \([P]_{i}(t=0)=[P]_{i}(0)\)) we can obtain the equation for the protein concentration in Eq. 5. By setting \(\frac{d\,[R]_{i}}{dt}=0\), we can obtain maximum-stable RNA concentration at the steady-state (\([R]_{i}^{*}\)) expressed by Eq. 6. In addition, protein concentration at the steady-state (\([P]_{i}^{*}\)) can be represented by Eq. 7 which is derived by plugging \(\frac{d\,[P]_{i}}{dt}=0\) in Eq. 4. ### Determining Gene-perceptron Stability In this section, we derive the stability of a gene-perceptron using eigenvalues of differential equations for RNA and protein concentration change (Eq. 1 and 4) and using Lypunov's stability theorem. Based on (15), we applied eigenvalue method to determine the stability in the gene-perceptrons. Suppose \(f\) and \(g\) are functions of \([R]_{i}\) and \([P]_{i}\). Such that, \[\text{Eq.}1 \Longrightarrow\frac{d\,[R]_{i}}{dt}=f\left([R]_{i},[P]_{i} \right), \tag{22}\] \[\text{Eq.}4 \Longrightarrow\frac{d\,[P]_{i}}{dt}=g([R]_{i},[P]_{i}). \tag{23}\] Then, the Jacobian matrix for Eqs. 1 and 4 at the equilibrium point is represented as, \[J_{i}=\begin{bmatrix}\frac{\partial f}{\partial[R]_{i}}&\frac{\partial f}{ \partial[P]_{i}}\\ \frac{\partial g}{\partial[R]_{i}}&\frac{\partial g}{\partial[P]_{i}}\end{bmatrix} =\begin{bmatrix}-d_{1_{i}}&0\\ k_{2_{i}}&-d_{2_{i}}\end{bmatrix}, \tag{24}\] for gene-perceptron \(i\). Using the characteristic equation \(|J_{i}-\lambda I|=0\) we can determine the eigenvalues for the above Jacobian matrix (Eq. 24) as \(\lambda_{1}=-d_{1_{i}},\lambda_{2}=-d_{2_{i}}\). 
Hence, all the eigenvalues are negative, indicating that the gene-perceptron is stable, where \(\lambda\) is scalar, \(I\) is a \(2\times 2\) identity matrix, \(d_{2_{i}}\) is the protein degradation rate, \(d_{1_{i}}\) is the RNA degradation rate and \(k_{2_{i}}\) is the translation rate. We use the Lyapunov function (\(V\)) to perform the temporal stability analysis defined for the Eqs. 1 and 4 as follows, \[V\left([R]_{i},[P]_{i}\right)=\left([R]_{i}-[R]_{i}^{*}\right)^{2}+\left([P]_{ i}-[P]_{i}^{*}\right)^{2}. \tag{25}\] According to the Lyapunov's stability theorem, \(V\left([R]_{i},[P]_{i}\right)=0\) when \([R]_{i}=[R]_{i}^{*}\) and \([P]_{i}=[P]_{i}^{*}\), where \([R]_{i}^{*}\) and \([P]_{i}^{*}\) are RNA and protein concentration at the equilibrium. It is clear that \(V\left([R]_{i},[P]_{i}\right)>0\), since all terms are quadratic. Finally, we consider the first derivative of Eq. 25 as the last condition for the stability, which is represented as \[\dot{V}([R]_{i},[P]_{i})=\frac{dV}{dt}=\frac{\partial V}{\partial[R]_{i}}. \frac{d[R]_{i}}{dt}+\frac{\partial V}{\partial[P]_{i}}.\frac{d[P]_{i}}{dt}. \tag{26}\] By plugging \(\frac{d[R]_{i}}{dt}\) and \(\frac{d[P]_{i}}{dt}\) from Eq. 1 and 4, differentiating Eq. 25 with respect to \([R]_{i}\) and \([P]_{i}\) to obtain \(\frac{\partial V}{\partial[R]_{i}}\) and \(\frac{\partial V}{\partial[P]_{i}}\) and finally replacing \([R]_{i}^{*},[P]_{i}^{*},[R]_{i}\) and \([P]_{i}\), with Eq. 6, 7, 2 and 5 we get Eq. 26, which is represented as follows \[\text{Eq.}26 \Longrightarrow\frac{dV}{dt}=-\frac{C_{N_{i}}^{2}\cdot[TF]^{2n} \cdot k_{1_{i}}^{2}\cdot e^{(-2(d_{1_{i}}+d_{2_{i}}))}}{d_{1_{i}}d_{2_{i}}([TF] ^{n}+K_{A_{i}}^{n})^{2}(d_{1_{i}}-d_{2_{i}})^{2}}\] \[\cdot(d_{2_{i}}^{3}\cdot e^{(2d_{2_{i}}t)}-2d_{1_{i}}d_{2_{i}}^{2} \cdot e^{(2d_{2_{i}}t)}+d_{1_{i}}^{2}d_{2_{i}}\cdot e^{(2d_{2_{i}}t)})\] \[+(d_{1_{k}}k_{2_{i}}^{2}\cdot e^{(2d_{1_{i}}t)}+d_{2_{k}}k_{2_{i}}^ {2}\cdot e^{(2d_{2_{i}}t)})\] \[-(d_{1_{k}}k_{2_{i}}^{2}\cdot e^{(t(d_{1_{i}}+d_{2_{i}}))})+d_{2_{k} }k_{2_{i}}^{2}\cdot e^{(t(d_{1_{i}}+d_{2_{i}}))}), \tag{27}\] where we assume initial RNA concentration of zero (\([R]_{i}(0)=0\)) and initial protein concentration of zero (\([P]_{i}(0)=0\)). The above equation is used to determine the stability of the gene-perceptron for different parameter configurations.
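As a numerical companion to this appendix, the sketch below (assumed illustrative rates, and the zero initial concentrations used for Eq. 27) simulates Eq. 1 and 4, evaluates the Lyapunov function of Eq. 25 and its derivative via Eq. 26 along the trajectory, and checks that the derivative stays non-positive, mirroring the temporal behavior shown in Figure 5.

```python
import numpy as np
from scipy.integrate import odeint

k1, k2, d1, d2, CN, KA, TF, n = 0.8, 0.6, 0.3, 0.2, 1.0, 2.0, 3.0, 1   # assumed values

def rhs(state, t):
    R, P = state
    dR = k1 * CN * TF**n / (KA**n + TF**n) - d1 * R   # Eq. 1
    dP = k2 * R - d2 * P                              # Eq. 4
    return [dR, dP]

# Equilibrium (Eq. 6 and 7) and a trajectory from zero initial concentrations,
# matching the assumption [R]_i(0) = [P]_i(0) = 0 used for Eq. 27.
R_star = k1 * CN / d1 * TF**n / (TF**n + KA**n)
P_star = k2 * R_star / d2
t = np.linspace(0.0, 60.0, 600)
traj = odeint(rhs, [0.0, 0.0], t)
R, P = traj[:, 0], traj[:, 1]

V = (R - R_star) ** 2 + (P - P_star) ** 2                     # Eq. 25
dR, dP = np.array([rhs(s, None) for s in traj]).T
dVdt = 2 * (R - R_star) * dR + 2 * (P - P_star) * dP          # Eq. 26

print(f"V(0) = {V[0]:.3f}   V(t_end) = {V[-1]:.2e}")
print(f"max dV/dt along the trajectory: {dVdt.max():.2e}")    # stays non-positive here
```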
The Gene Regulatory Network (GRN) of biological cells controls key functionalities that allow the cells to adapt to and survive changes in their environment. Close observation of the GRN shows that its structure and operating principles resemble those of an Artificial Neural Network (ANN), which may enable the development of Biological Artificial Intelligence. In particular, a gene's transcription and translation process exhibits a sigmoidal-like characteristic driven by transcription factor inputs. In this paper, we develop a mathematical model of a gene-perceptron using a dual-layered transcription-translation chemical reaction model, which allows us to transform a GRN into a Gene Regulatory Neural Network (GRNN). We perform a stability analysis for each gene-perceptron within the fully-connected GRNN sub-network to obtain temporal as well as stable concentration outputs for reliable computing performance. For the non-linear classifier application of the GRNN, we analyze multi-layer GRNNs.
2309.16143
Generative Semi-supervised Learning with Meta-Optimized Synthetic Samples
Semi-supervised learning (SSL) is a promising approach for training deep classification models using labeled and unlabeled datasets. However, existing SSL methods rely on a large unlabeled dataset, which may not always be available in many real-world applications due to legal constraints (e.g., GDPR). In this paper, we investigate the research question: Can we train SSL models without real unlabeled datasets? Instead of using real unlabeled datasets, we propose an SSL method using synthetic datasets generated from generative foundation models trained on datasets containing millions of samples in diverse domains (e.g., ImageNet). Our main concepts are identifying synthetic samples that emulate unlabeled samples from generative foundation models and training classifiers using these synthetic samples. To achieve this, our method is formulated as an alternating optimization problem: (i) meta-learning of generative foundation models and (ii) SSL of classifiers using real labeled and synthetic unlabeled samples. For (i), we propose a meta-learning objective that optimizes latent variables to generate samples that resemble real labeled samples and minimize the validation loss. For (ii), we propose a simple unsupervised loss function that regularizes the feature extractors of classifiers to maximize the performance improvement obtained from synthetic samples. We confirm that our method outperforms baselines using generative foundation models on SSL. We also demonstrate that our methods outperform SSL using real unlabeled datasets in scenarios with extremely small amounts of labeled datasets. This suggests that synthetic samples have the potential to provide improvement gains more efficiently than real unlabeled data.
Shin'ya Yamaguchi
2023-09-28T03:47:26
http://arxiv.org/abs/2309.16143v1
# Generative Semi-supervised Learning ###### Abstract Semi-supervised learning (SSL) is a promising approach for training deep classification models using labeled and unlabeled datasets. However, existing SSL methods rely on a large unlabeled dataset, which may not always be available in many real-world applications due to legal constraints (e.g., GDPR). In this paper, we investigate the research question: _Can we train SSL models without real unlabeled datasets?_ Instead of using real unlabeled datasets, we propose an SSL method using synthetic datasets generated from generative foundation models trained on datasets containing millions of samples in diverse domains (e.g., ImageNet). Our main concepts are identifying synthetic samples that emulate unlabeled samples from generative foundation models and training classifiers using these synthetic samples. To achieve this, our method is formulated as an alternating optimization problem: (i) meta-learning of generative foundation models and (ii) SSL of classifiers using real labeled and synthetic unlabeled samples. For (i), we propose a meta-learning objective that optimizes latent variables to generate samples that resemble real labeled samples and minimize the validation loss. For (ii), we propose a simple unsupervised loss function that regularizes the feature extractors of classifiers to maximize the performance improvement obtained from synthetic samples. We confirm that our method outperforms baselines using generative foundation models on SSL. We also demonstrate that our methods outperform SSL using real unlabeled datasets in scenarios with extremely small amounts of labeled datasets. This suggests that synthetic samples have the potential to provide improvement gains more efficiently than real unlabeled data. **Keywords:** generative models; semi-supervised learning; meta-learning ## 1 Introduction Semi-supervised learning (SSL) is a promising approach for training deep neural network models with a limited amount of labeled data and a large amount of unlabeled data. Recent studies on SSL have shown that the labeling cost to achieve high-performance models can be significantly reduced by using the unlabeled dataset to train the models with pseudo-labeling and consistency regularization (Bachman et al., 2014; Xie et al., 2020; Sohn et al., 2020). For example, Wang et al. (2023) have reported that their SSL method can achieve 94.22% test accuracy on CIFAR-10 with only one label per class. This indicates that modern SSL methods can realize practical models with minimal labeling costs. However, whether labeled or not, large-scale datasets are becoming more challenging to obtain and use for machine learning models due to privacy regulations (e.g., GDPR in the EU). To train deep models in a situation where it is challenging to obtain datasets, recent studies using synthetic datasets from deep generative models have attracted much attention in the context of proxying real datasets (He et al., 2023; van Breugel et al., 2023). This approach has been intensively discussed in the community1 and regarded as a promising method for privacy protection since generative models such as GANs (Goodfellow et al., 2014) can produce realistic samples while guaranteeing certain differential privacy (Lin et al., 2021). If the synthetic samples can be used as unlabeled datasets in SSL, we can train a high-performance model without real unlabeled datasets and privacy risks. 
Thus, we investigate a research question: _Can we train SSL models with synthetic unlabeled datasets instead of real ones?_

Footnote 1: NeurIPS Synthetic Data Workshop ([https://www.syntheticcdata4ml.vanderschaar-lab.com/](https://www.syntheticcdata4ml.vanderschaar-lab.com/))

In this paper, we explore a new problem setting called _generative semi-supervised learning_ (gSSL), where the semi-supervised learners use synthetic unlabeled samples generated from a _generative foundation model_ instead of real unlabeled samples (Figure 1). Generative foundation models are conditional generative models pre-trained on large external datasets containing millions of samples from diverse domains (e.g., ImageNet). Thanks to recent advances (Brock et al., 2019; Sauer et al., 2022), generative foundation models can accurately output synthetic samples in various domains from inputs of latent variables and conditional labels. Therefore, we can expect the synthetic samples to serve as unlabeled data in SSL when the training data space overlaps the data space estimated by the generative foundation models.

Figure 1: Comparison of semi-supervised learning (SSL) and generative semi-supervised learning (gSSL). In gSSL, we use a generative foundation model \(G_{\text{F}}\) to compute unsupervised losses instead of a real unlabeled dataset \(\mathcal{D}_{\text{u}}\). To this end, we generate synthetic unlabeled samples by querying \(G_{\text{F}}\) with information of the labeled dataset \(\mathcal{D}\).

In gSSL, there are important challenges captured by the following two concrete research questions: (i) _How do we find optimal synthetic samples from the generative foundation models for SSL?_ and (ii) _How do we train models with synthetic samples that do not belong to the training class categories?_ For (i), since generative foundation models do not necessarily share the class categories of the training dataset, we need to find synthetic samples from the generative models that are related to the training dataset. Furthermore, it is essential to find synthetic samples that can improve the classifier performance through the unsupervised loss of SSL. For (ii), even if we can find helpful synthetic samples from the generative foundation models, it is not obvious how these samples can be optimally used to maximize the performance of the classifiers. This is because the synthetic samples are matched to the training dataset with respect to the domain (data space), not the class label space. Since existing SSL methods assume that unlabeled samples belong to the training class categories, the mismatch between real and synthetic samples in the class label spaces can be detrimental to SSL models.

To address these two challenges, we propose a method called _meta-pseudo semi-supervised learning_ (MP-SSL). MP-SSL consists of two techniques corresponding to the two research questions: (i) _latent meta-optimization_ (LMO) and (ii) _synthetic consistency regularization_ (SCR). In LMO, we optimize latent variables that are input to generative foundation models to find synthetic samples that resemble unlabeled training data. To find optimal synthetic samples for SSL models, LMO meta-optimizes the parameters to minimize the validation losses of the target classifier. Furthermore, LMO also minimizes the gaps in the feature spaces between real and synthetic samples to close the domain gap and make the synthetic samples act as unlabeled data. SCR is a novel unsupervised loss term that does not use pseudo training labels.
Unlike existing SSL methods, which depend on real unlabeled data and pseudo training labels, the SCR loss is designed as a feature regularization term. This design choice avoids the negative effects of the synthetic samples caused by the mismatch of the class label spaces. Specifically, SCR penalizes the feature extractors by maximizing the similarity between variations of a synthetic sample, which is inspired by consistency regularization (Bachman et al., 2014; Xie et al., 2020; Sohn et al., 2020). Since SCR is independent of the relationship between training and foundation label spaces, it can leverage the valuable information contained in synthetic unlabeled data to train the model without negative effects. The training objective of MP-SSL is formalized as an alternating optimization problem of updating latent variables and updating training models through SSL with SCR.

To evaluate the effectiveness of MP-SSL, we conduct experiments on multiple datasets by comparing MP-SSL with competitors, including P-SSL (Yamaguchi et al., 2022). We also compare MP-SSL with SSL methods using real unlabeled datasets. The results show that MP-SSL outperforms the real SSL methods when the labeled datasets are small. This suggests that synthetic samples can promote more effective learning than real unlabeled samples, especially in cases where the number of labels is extremely small. We believe this work will serve as a baseline for developing a new research area, generative semi-supervised learning. Our contributions are summarized as follows.

* We propose a new problem setting of SSL called generative semi-supervised learning (gSSL), where the unlabeled samples are provided by generative foundation models instead of real unlabeled datasets.
* We introduce a training method for gSSL called MP-SSL, which finds optimal synthetic samples acting as unlabeled data through meta-optimizing latent variables, and trains a classifier with a feature regularization based on the synthetic samples.
* We confirm that MP-SSL can outperform simple baselines of the gSSL setting and outperform SSL methods with real unlabeled datasets when only small amounts of labels are available.

## 2 Preliminary

### Problem Setting

We consider a classification problem in which we train a neural network model \(f_{\theta}:\mathcal{X}\rightarrow\mathcal{Y}\) on a labeled dataset \(\mathcal{D}=\{(x^{i},y^{i})\in\mathcal{X}\times\mathcal{Y}\}_{i=1}^{N}\), where \(\mathcal{X}\) and \(\mathcal{Y}\) are the input and output label spaces, respectively. In this setting, we can use a generative foundation model \(G_{\mathrm{F}}:\mathcal{Z}_{\mathrm{F}}\times\mathcal{Y}_{\mathrm{F}}\rightarrow\mathcal{X}_{\mathrm{F}}\), where \(\mathcal{Z}_{\mathrm{F}}\) is the latent space, \(\mathcal{Y}_{\mathrm{F}}\) is the foundation label space, and \(\mathcal{X}_{\mathrm{F}}\) is the output sample space. We assume that \(G_{\mathrm{F}}\) is pre-trained on a large-scale dataset (e.g., ImageNet) and that the output sample space \(\mathcal{X}_{\mathrm{F}}\) contains a subset \(\mathcal{X}^{\prime}\) related to \(\mathcal{X}\), i.e., \(\mathcal{X}_{\mathrm{F}}\supset\mathcal{X}^{\prime}\approx\mathcal{X}\). An input latent variable \(z\in\mathcal{Z}_{\mathrm{F}}\) is sampled from a standard Gaussian distribution \(\mathcal{N}(0,I)\). \(f_{\theta}\) is defined as a composition of a feature extractor \(g_{\psi}\) and a classifier \(h_{\omega}\), i.e., \(f_{\theta}=h_{\omega}\circ g_{\psi}\) and \(\theta=[\psi,\omega]\).
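For concreteness, the following is a minimal PyTorch sketch of this decomposition, assuming a ResNet-18 backbone as used later in the experiments; the class count and split point are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch: splitting a classifier f_theta into a feature extractor g_psi
# and a classifier head h_omega, so that f_theta = h_omega o g_psi.
import torch
import torch.nn as nn
from torchvision.models import resnet18

num_classes = 196                      # illustrative (e.g., Cars has 196 classes)
model = resnet18(num_classes=num_classes)

# g_psi: all layers up to and including global average pooling, flattened to a feature vector
g_psi = nn.Sequential(*list(model.children())[:-1], nn.Flatten())
# h_omega: the final linear classification layer
h_omega = model.fc

x = torch.randn(2, 3, 224, 224)        # dummy mini-batch
features = g_psi(x)                    # shape: (2, 512)
logits = h_omega(features)             # shape: (2, num_classes)
```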
To validate \(f_{\theta}\), we can use a small validation dataset \(\mathcal{D}_{\mathrm{val}}=\{(x_{\mathrm{val}}^{i},y_{\mathrm{val}}^{i})\in\mathcal{X}\times\mathcal{Y}\}_{i=1}^{N_{\mathrm{val}}}\), which has no intersection with \(\mathcal{D}\) (i.e., \(\mathcal{D}\cap\mathcal{D}_{\mathrm{val}}=\emptyset\)).

### Semi-supervised Learning

Given a labeled dataset \(\mathcal{D}\) and an unlabeled dataset \(\mathcal{D}_{\mathrm{u}}=\{x^{i}\in\mathcal{X}\}_{i=1}^{N_{\mathrm{u}}}\), SSL to train \(f_{\theta}\) is formulated as the following minimization problem.

\[\min_{\theta}\,\mathcal{L}(\theta)+\lambda_{\mathrm{u}}\mathcal{L}_{\mathrm{u}}(\theta), \tag{1}\]
\[\mathcal{L}(\theta)=\frac{1}{N}\sum_{(x,y)\in\mathcal{D}}\ell(f_{\theta}(x),y), \tag{2}\]
\[\mathcal{L}_{\mathrm{u}}(\theta)=\frac{1}{N_{\mathrm{u}}}\sum_{x_{\mathrm{u}}\in\mathcal{D}_{\mathrm{u}}}\ell_{\mathrm{u}}(f_{\theta}(x_{\mathrm{u}})), \tag{3}\]

where \(\ell\) is a supervised loss for a labeled sample (e.g., cross-entropy loss), \(\ell_{\mathrm{u}}\) is an unsupervised loss for an unlabeled sample \(x_{\mathrm{u}}\), and \(\lambda_{\mathrm{u}}\) is a hyperparameter for balancing \(\mathcal{L}\) and \(\mathcal{L}_{\mathrm{u}}\). SSL assumes a large amount of unlabeled data (i.e., \(N\ll N_{\mathrm{u}}\)). This assumption has long been justified on the premise that the difficulty of dataset creation is centered on labeling, and that unlabeled data can be collected easily (Chapelle et al., 2006). However, unlabeled data are often unavailable due to privacy concerns. Starting with the EU's GDPR, privacy protection legislation has been developed globally, and creating large-scale datasets requires compliance with these laws. This paper explores an alternative SSL approach that does not require collecting large-scale unlabeled datasets.

### Generative Semi-supervised Learning

Generative semi-supervised learning (gSSL) is a variant of SSL where access to \(\mathcal{D}_{\mathrm{u}}\) is prohibited and the unlabeled data \(x_{\mathrm{u}}\) is provided by a generative foundation model \(G_{\mathrm{F}}\) by

\[x_{\mathrm{u}}=G_{\mathrm{F}}(z,\hat{y}_{\mathrm{F}}), \tag{4}\]

where \(\hat{y}_{\mathrm{F}}\) is an estimated foundation label produced by the gSSL algorithm. gSSL algorithms have rarely been studied, except for prior work by Yamaguchi et al. (2022). In a transfer learning setting where the target and source architectures are not consistent, Yamaguchi et al. (2022) have proposed a method called pseudo semi-supervised learning (P-SSL). Although P-SSL is focused on transfer learning, we consider it a simple baseline of gSSL. P-SSL trains \(f_{\theta}\) by using Eq. (1) and estimates a foundation label \(\hat{y}_{\text{F}}\) as

\[\hat{y}_{\text{F}}=f_{\theta_{\text{F}}}(x), \tag{5}\]

where \(f_{\theta_{\text{F}}}\) is a classifier pre-trained on a foundation dataset (e.g., an ImageNet pre-trained classifier). That is, P-SSL interprets the training sample \(x\) as the conditional sample of an interpolated class in \(\mathcal{Y}_{\text{F}}\) through the output of \(f_{\theta_{\text{F}}}\). This assumes the existence of \(f_{\theta_{\text{F}}}\) and that \(y\in\mathcal{Y}\) can be semantically approximated by the soft foundation labels, i.e., \(y_{i}\in\mathcal{Y}\approx f_{\theta_{\text{F}}}(x_{i})\in\mathcal{Y}_{\text{F}}\). However, the synthetic samples by Eq.
(5) do not always contribute to the performance of \(f_{\theta}\) because the above assumption does not necessarily hold, and the synthetic samples are not directly optimized to improve \(f_{\theta}\). In fact, Yamaguchi et al. (2022) have reported that the performance gain of P-SSL is limited when the training datasets are not well approximated by Eq. (5). To stably improve the performance of \(f_{\theta}\), we present a meta-learning based SSL approach that does not require such label assumptions.

## 3 Proposed Method

In this section, we describe our proposed method called MP-SSL. MP-SSL is composed of (i) latent meta-optimization (LMO) and (ii) synthetic consistency regularization (SCR). LMO finds synthetic samples acting as unlabeled data in SSL through meta-optimizing input latent variables and foundation class labels. SCR penalizes a feature extractor by maximizing the similarity between variations of a synthetic sample. MP-SSL alternately updates the parameters for sampling synthetic unlabeled data by LMO and trains the model \(f_{\theta}\) with SCR. The overview of MP-SSL is illustrated in Figure 2.

Figure 2: Overview of MP-SSL. We first generate a transformed latent variable \(\hat{z}\) and a pseudo foundation label \(\hat{y}_{\text{F}}\) through conditional mapper \(M_{\phi}\) and label converter \(I_{\xi}\). Then, we produce a pseudo unsupervised sample \(\hat{x}_{\text{u}}=G_{\text{F}}(\hat{z},\hat{y}_{\text{F}})\) for semi-supervised learning (SSL) of \(f_{\theta}\). To find the optimal \(\hat{z}\) and \(\hat{y}_{\text{F}}\), we update \(M_{\phi}\) and \(I_{\xi}\) by latent meta-optimization (LMO, Eq. (6)). In the training of \(f_{\theta}\), we use the loss of synthetic consistency regularization (SCR, Eq. (13)) instead of existing SSL loss terms.

### Latent Meta-Optimization (LMO)

The goal of LMO is to find a synthetic sample that approximates unlabeled data and contributes to the performance of \(f_{\theta}\) through SSL. To extract unlabeled samples from \(G_{\mathrm{F}}\), we optimize the parameters \(\phi\) and \(\xi\) that generate the latent variables \(\hat{z}\in\mathcal{Z}_{\mathrm{F}}\) and the foundation class label \(\hat{y}_{\mathrm{F}}\in\mathcal{Y}_{\mathrm{F}}\), respectively. That is, we search for an optimal pair \((\hat{z},\hat{y}_{\mathrm{F}})\) through this optimization process. In conditional generative models, the latent variables control overall characteristics other than class categories (e.g., object size and style), and the class labels determine the category of the synthetic samples (Odena et al., 2017; Brock et al., 2019). Searching for \((\hat{z},\hat{y}_{\mathrm{F}})\) can be more reasonable than directly optimizing the whole parameters of \(G_{\mathrm{F}}\) on \(\mathcal{D}\) because the latter suffers from overfitting and leads to low performance of \(f_{\theta}\) due to low-quality samples (Karras et al., 2020). For the optimization, we use specialized architectures called the _conditional mapper_ \(M_{\phi}:\mathcal{Z}_{\mathrm{F}}\times\mathcal{Y}\to\mathcal{Z}_{\mathrm{F}}\) and the _label converter_ \(I_{\xi}:\mathcal{Y}\to\mathcal{Y}_{\mathrm{F}}\). Through optimizing \(M_{\phi}\) and \(I_{\xi}\), we seek a synthetic sample \(\hat{x}_{\mathrm{u}}=G_{\mathrm{F}}(M_{\phi}(z,y),I_{\xi}(y))\). To this end, we formalize the optimization problem of LMO as follows.
\[\min_{\phi,\xi}\ \mathcal{L}_{\mathrm{val}}(\theta^{*})+\lambda_{\mathrm{gap}}\mathcal{L}_{\mathrm{gap}}(\phi,\xi) \tag{6}\]
\[\mathcal{L}_{\mathrm{val}}(\theta^{*})=\mathbb{E}_{(x_{\mathrm{val}},y_{\mathrm{val}})\in\mathcal{D}_{\mathrm{val}}}\,\ell(f_{\theta^{*}}(x_{\mathrm{val}}),y_{\mathrm{val}}) \tag{7}\]
\[\mathcal{L}_{\mathrm{gap}}(\phi,\xi)=\mathbb{E}_{x\in\mathcal{D}}\,\|g_{\psi}(x)-g_{\psi}(G_{\mathrm{F}}(M_{\phi}(z,y),I_{\xi}(y)))\|_{2}^{2} \tag{8}\]
\[\mathrm{s.t.}\quad\theta^{*}=\operatorname*{arg\,min}_{\theta}\ \mathcal{L}(\theta)+\lambda\mathcal{L}_{\mathrm{u}}(\theta,\phi,\xi), \tag{9}\]

where \(\mathcal{L}_{\mathrm{val}}\) seeks samples that improve \(f_{\theta}\), and \(\mathcal{L}_{\mathrm{gap}}\) ensures that \(\hat{x}_{\mathrm{u}}=G_{\mathrm{F}}(M_{\phi}(z,y),I_{\xi}(y))\) approximates the training data \(x\) so that it can act as an unlabeled sample. This meta-optimization problem can be solved by stochastic gradient descent by extending prior meta-learning methods such as MAML (Finn et al., 2017). In the rest of this subsection, we describe the design of the conditional mapper and the label converter.

**Conditional Mapper \(M_{\phi}\).** The role of \(M_{\phi}\) is to find optimal latent variables producing useful unlabeled samples for SSL through \(G_{\mathrm{F}}\). Our idea is to transform the concatenation of the input latent variable \(z\) and the training class label \(y\) into a new latent variable \(\hat{z}\). This is based on the expectation that partitioning the problem per class will make searching latent variables easier; we confirm that using \(y\) yields a larger performance gain in Sec. 4.5.2. \(M_{\phi}\) outputs the estimated latent variable \(\hat{z}\) by

\[\hat{z}=M_{\phi}(z,y)=\mathrm{MLP}_{\phi}(\mathrm{Concat}(z,\mathrm{EMB}_{\phi}(y))), \tag{10}\]

where \(\mathrm{EMB}_{\phi}:\mathcal{Y}\to\mathbb{R}^{d_{\mathcal{Y}}}\) is an embedding layer for \(y\), \(\mathrm{Concat}(\cdot)\) is a concatenation operation of two vectors, and \(\mathrm{MLP}_{\phi}:\mathbb{R}^{d_{\mathcal{Z}_{\mathrm{F}}}+d_{\mathcal{Y}}}\to\mathcal{Z}_{\mathrm{F}}=\mathbb{R}^{d_{\mathcal{Z}_{\mathrm{F}}}}\) is a multi-layer perceptron yielding a new latent variable.

**Label Converter \(I_{\xi}\).** \(I_{\xi}\) estimates a foundation label \(\hat{y}_{\mathrm{F}}\) corresponding to a training class label \(y\). To estimate a foundation label, prior work (Yamaguchi et al., 2022) utilizes a classifier pre-trained on foundation datasets. This approach is simple, but the pre-trained classifiers are not necessarily given, and the estimation of foundation soft labels depends on the performance of the pre-trained classifiers. Thus, if high-performance pre-trained classifiers are unavailable, it is hard to estimate a foundation label correctly. Instead of relying on pre-trained classifiers, we utilize the Gumbel-softmax trick (Jang et al., 2017) for sampling \(\hat{y}_{\text{F}}\) through the parameter \(\xi\) updated by LMO:

\[\hat{y}_{\text{F}}=\operatorname*{arg\,max}_{i}\ I_{\xi}(y)[i], \tag{11}\]
\[I_{\xi}(y)[i]=\frac{\exp\left((\log(\text{EMB}_{\xi}[i])+\mathbf{g}[i])/\tau\right)}{\sum_{j=1}^{|\mathcal{Y}_{\text{F}}|}\exp\left((\log(\text{EMB}_{\xi}[j])+\mathbf{g}[j])/\tau\right)}, \tag{12}\]

where \(\text{EMB}_{\xi}:\mathcal{Y}\rightarrow\mathbb{R}^{d_{\mathcal{Y}}}\) is an embedding layer for \(y\), \(\mathbf{g}[i]=-\log(-\log(u_{i}\sim\text{Uniform}(0,1)))\), and \(\tau\) is a temperature parameter.
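The following is a minimal PyTorch sketch of the two modules just defined. The layer widths are illustrative assumptions, the embedding output is used directly as the Gumbel-softmax logits (a slight simplification of Eq. (12)), and the meta-optimization loop of Eqs. (6)-(9) is omitted.

```python
# Illustrative sketch of the conditional mapper M_phi (Eq. (10)) and the
# label converter I_xi (Eqs. (11)-(12)); not the authors' exact implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConditionalMapper(nn.Module):
    """M_phi: transforms (z, y) into a new latent variable z_hat in Z_F."""
    def __init__(self, latent_dim, num_classes, emb_dim=128, hidden_dim=256):
        super().__init__()
        self.emb = nn.Embedding(num_classes, emb_dim)            # EMB_phi
        self.mlp = nn.Sequential(                                 # three-layer MLP with leaky-ReLU
            nn.Linear(latent_dim + emb_dim, hidden_dim), nn.LeakyReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.LeakyReLU(),
            nn.Linear(hidden_dim, latent_dim),
        )

    def forward(self, z, y):
        return self.mlp(torch.cat([z, self.emb(y)], dim=-1))     # z_hat

class LabelConverter(nn.Module):
    """I_xi: maps a training label y to a foundation label via the Gumbel-softmax trick."""
    def __init__(self, num_classes, num_foundation_classes, tau=1.0):
        super().__init__()
        self.logits = nn.Embedding(num_classes, num_foundation_classes)  # EMB_xi, used as logits
        self.tau = tau

    def forward(self, y):
        # hard=True returns a one-hot sample (the arg max of Eq. (11)) while
        # keeping gradients through the soft relaxation (straight-through estimator).
        return F.gumbel_softmax(self.logits(y), tau=self.tau, hard=True)

# Usage with a conditional foundation generator G_F (e.g., BigGAN):
#   z = torch.randn(batch_size, latent_dim)
#   x_u = G_F(mapper(z, y), converter(y))
```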
This Gumbel-softmax formulation has several advantages: (a) it can be trained by backpropagation since it is fully differentiable, (b) the output \(\hat{x}_{\text{u}}\) is expected to be unbiased thanks to the randomness given by \(\mathbf{g}\), and (c) the number of foundation classes of interest can be adjusted according to the training data via the temperature parameter. We confirm these advantages through comparison with the other variants of \(I_{\xi}\) in Sec. 4.5.3.

### Synthetic Consistency Regularization

Although synthetic samples generated from \(G_{\text{F}}\) through LMO can contain useful information for training \(f_{\theta}\), it is hard to expect them to fall exactly into the training label space \(\mathcal{Y}\) because the training and foundation label spaces are not the same, i.e., \(\mathcal{Y}\neq\mathcal{Y}_{\text{F}}\). Therefore, training with the synthetic samples via unsupervised losses using pseudo training labels in \(\mathcal{Y}\) (e.g., FixMatch (Sohn et al., 2020)) might confuse \(f_{\theta}\) due to the label space mismatch. To avoid this negative effect and maximize the gain from the synthetic samples, we introduce a simple unsupervised loss called synthetic consistency regularization (SCR). In contrast to existing pseudo-label based SSL methods, SCR is computed on the feature extractor \(g_{\psi}\) of \(f_{\theta}\). That is, we regularize \(g_{\psi}\) with synthetic samples instead of the classifier head \(h_{\omega}\). To regularize \(g_{\psi}\), we design SCR based on consistency regularization (Bachman et al., 2014; Xie et al., 2020; Sohn et al., 2020), which minimizes the gap between the outputs of two variants of a sample transformed by different data augmentations. Concretely, we formalize the loss function of SCR as follows.

\[\ell_{\text{SCR}}(\hat{x}_{\text{u}};\psi)=1-\frac{g_{\psi}(T_{\text{w}}(\hat{x}_{\text{u}}))\cdot g_{\psi}(T_{\text{s}}(\hat{x}_{\text{u}}))}{\|g_{\psi}(T_{\text{w}}(\hat{x}_{\text{u}}))\|\,\|g_{\psi}(T_{\text{s}}(\hat{x}_{\text{u}}))\|}, \tag{13}\]

where \(T_{\text{w}}(\cdot)\) and \(T_{\text{s}}(\cdot)\) are a weak augmentation (e.g., flip and crop) and a strong augmentation (e.g., RandAugment (Cubuk et al., 2020)), respectively. As the measurement of the gap, we choose cosine distance; we empirically found that this formulation achieves the best results when compared with L2, L1, and smooth L1 distances, as shown in Sec. 4.5.4. By applying SCR to \(g_{\psi}\), we expect that \(g_{\psi}\) learns robust feature representations that are useful for classifying real samples by \(h_{\omega}\). Finally, we show the overall procedure of MP-SSL using LMO and SCR in Algorithm 1.

## 4 Experiments

This section evaluates our MP-SSL through experiments on multiple image classification datasets. We mainly aim to answer three research questions with the experiments: (1) Can MP-SSL improve the baselines without real unlabeled datasets? (2) What can training models learn through MP-SSL? (3) Is the MP-SSL design reasonable? We compare MP-SSL with baselines using synthetic samples, e.g., P-SSL (Yamaguchi et al., 2022), and baselines using real samples, e.g., FreeMatch (Wang et al., 2023), in Sec. 4.2 and 4.3. Furthermore, we provide a detailed analysis of MP-SSL, such as the visualization of synthetic samples (Sec. 4.4) and detailed ablation studies of MP-SSL (Sec. 4.5).

### Setting

**Baselines.** We compare our method with the following baselines in the gSSL setting.

* **Base Model**: training \(f_{\theta}\) with only \(\mathcal{D}\).
* **Naive gSSL**: training \(f_{\theta}\) with \(\mathcal{D}\) and \(G_{\text{F}}\), where a synthetic sample \(\hat{x}_{\text{u}}\) is generated from uniformly sampled \(z\) and \(y_{\text{F}}\); we then train \(f_{\theta}\) with an existing SSL method on the real and synthetic samples.
* **P-SSL** (Yamaguchi et al., 2022): training \(f_{\theta}\) with \(\mathcal{D}\) and \(G_{\text{F}}\), sampling \(y_{\text{F}}\) by Eq. (5) and using existing SSL methods that update \(h_{\omega}\).

We also test SSL methods using a real unlabeled dataset \(\mathcal{D}_{\text{u}}\) to assess the practicality of the gSSL setting; we refer to this setting as oracle SSL because these methods can access \(\mathcal{D}_{\text{u}}\), which is prohibited in gSSL. As the oracle SSL methods, we used three representative SSL methods: UDA (Xie et al., 2020), FixMatch (Sohn et al., 2020), and FreeMatch (Wang et al., 2023).

**Datasets.** We used six image datasets for classification tasks: Cars (Krause et al., 2013), Aircraft (Maji et al., 2013), Birds (Welinder et al., 2010), DTD (Cimpoi et al., 2014), Flowers (Nilsback and Zisserman, 2008), and Pets (Parkhi et al., 2012). To evaluate both the generative and oracle SSL settings at the same time, we randomly split them into \(\mathcal{D}\) and \(\mathcal{D}_{\text{u}}\) (\(50:50\), by default), and discarded \(\mathcal{D}_{\text{u}}\) in gSSL while using it in oracle SSL. Furthermore, to evaluate the effect of dataset size, we varied the size of the labeled dataset of Cars by \(\{10,25,50,100\}\%\) in volume. Note that we used all of the remaining unlabeled samples as \(\mathcal{D}_{\text{u}}\) in this setting. After creating \(\mathcal{D}\), we randomly split \(\mathcal{D}\) into \(9:1\) and used the former as \(\mathcal{D}\) and the latter as \(\mathcal{D}_{\text{val}}\) in the training.

**Architectures.** We used ResNet-18 (He et al., 2016) as \(f_{\theta}\) and BigGAN for \(256\times 256\) images (Brock et al., 2019) as \(G_{\rm F}\). \(M_{\phi}\) was composed of a three-layer perceptron with a leaky-ReLU activation function. We used the ImageNet pre-trained weights of ResNet-18 distributed by PyTorch.2 For BigGAN, we used the ImageNet pre-trained weights provided by Brock et al. (2019). Note that we used the same \(G_{\rm F}\) in the baselines and our method.

Footnote 2: [https://github.com/pytorch/vision](https://github.com/pytorch/vision)

**Training.** We trained \(f_{\theta}\) with Nesterov momentum SGD for 200 epochs with a momentum of 0.9 and an initial learning rate of 0.01; we decayed the learning rate by 0.1 at 60, 120, and 160 epochs. We trained \(M_{\phi}\) and \(I_{\xi}\) with the Adam optimizer for 200 epochs with a learning rate of \(1.0\times 10^{-4}\). We used mini-batch sizes of 64. The input samples were resized to a resolution of \(224\times 224\); \(\hat{x}_{\rm u}\) was resized by differentiable transformations. For synthetic samples from \(G_{\rm F}\) in MP-SSL, the weak transformation \(T_{\rm w}\) was horizontal flip and random crop, and the strong transformation \(T_{\rm s}\) was RandAugment (Cubuk et al., 2020), following Xie et al. (2020); it was implemented with differentiable transformations provided in Kornia (Riba et al., 2020). We determined the hyperparameter \(\lambda\) by grid search over \([0.1,1.0]\) with a step size of 0.1 for each method using \(\mathcal{D}_{\rm val}\). We used a \(\lambda_{\rm gap}\) of 10. For the hyperparameters of the oracle SSL methods, we followed the default settings of the original papers (Xie et al., 2020; Sohn et al., 2020; Wang et al., 2023).
We selected the final model by checking the validation accuracy for each epoch. We ran the experiments three times on a 24-core Intel Xeon CPU with an NVIDIA A100 GPU with 40GB VRAM and recorded average test accuracies with standard deviations evaluated on the final models.

### Evaluation on Multiple Datasets

First, we evaluate MP-SSL's performance by comparing it with the baseline methods of gSSL and oracle SSL on various training datasets. Table 1 shows the results on six datasets. Note that we did not use the unlabeled dataset \(\mathcal{D}_{\rm u}\) in the gSSL setting. Our MP-SSL achieved the best results among the gSSL methods by a large margin (up to 3pp). While P-SSL degraded the base model on DTD due to the mismatch between training and foundation label spaces (Yamaguchi et al., 2022), our MP-SSL succeeded in improving it. This indicates that MP-SSL is not sensitive to the label space mismatch and stably improves classifiers in various settings. Furthermore, on the Aircraft and DTD datasets, MP-SSL is competitive with the oracle SSL methods. This suggests that MP-SSL and gSSL have the potential to approximate the oracle SSL methods in terms of final model accuracy.

Table 1: Performance comparison of ResNet-18 classifiers on multiple datasets (Top-1 Acc. (%)). Underlined scores are the best of the oracle SSL setting (i.e., using real unlabeled datasets), and **bolded scores** are the best among the methods of the generative SSL (gSSL) setting (i.e., using foundation generative models).

| Method / Dataset | Aircraft | Birds | Cars | DTD | Flower | Pets |
| --- | --- | --- | --- | --- | --- | --- |
| Base Model | 44.05±.59 | 60.74±.29 | 71.62±.30 | 61.56±.56 | 88.14±.18 | 84.44±.48 |
| **Oracle SSL (\(\mathcal{D}+\mathcal{D}_{\rm u}\))** | | | | | | |
| UDA (Xie et al., 2020) | 44.65±.38 | 60.22±.03 | 60.22±.03 | 70.90±.58 | 61.90±.10 | 87.72±.31 |
| FixMatch (Sohn et al., 2020) | 47.89±.38 | 60.58±.84 | 80.98±.36 | 61.31±.11 | 90.08±.48 | 81.73±.39 |
| FreeMatch (Wang et al., 2023) | 49.55±.33 | 66.09±.16 | 82.73±.41 | 63.83±.49 | 90.07±.27 | 86.61±.40 |
| **Generative SSL (\(\mathcal{D}+G_{\rm F}\))** | | | | | | |
| Naive gSSL (FreeMatch) | 46.83±.34 | 60.95±.29 | 73.67±.67 | 59.41±.17 | 86.41±.25 | 83.66±.69 |
| P-SSL (Yamaguchi et al., 2022) | 45.43±.24 | 60.54±.25 | 72.45±.30 | 60.82±.61 | 88.20±.15 | 84.84±.41 |
| MP-SSL (Ours) | **49.48±.25** | **62.86±.23** | **76.33±.31** | **62.34±.46** | **88.44±.51** | **85.43±.09** |

### Evaluation by Varying Dataset Size

We evaluate MP-SSL by varying the size of the labeled training dataset. We used all of the remaining unlabeled samples as \(\mathcal{D}_{\text{u}}\) for the oracle SSL methods and did not use \(\mathcal{D}_{\text{u}}\) for the gSSL methods. Table 2 shows that our MP-SSL achieves the best results in the gSSL setting for all dataset sizes.
More interestingly, MP-SSL significantly outperformed the best result of the oracle SSL methods when the labeled dataset is extremely small (i.e., 10%, which is at most 1,000 samples). This trend is consistent across multiple datasets, as shown in Fig. 3. These results suggest that the synthetic samples from \(G_{\text{F}}\) are more valuable than real unlabeled samples for improving classification performance when the labeled datasets are quite small.

Table 2: Performance comparison of ResNet-18 classifiers on the reduced Cars datasets (Top-1 Acc. (%)). Underlined scores are the best of the oracle SSL setting (i.e., using real unlabeled datasets), and **bolded scores** are the best among the methods of the gSSL setting (i.e., using foundation generative models).

| Method / Labeled Dataset Size | 10% | 25% | 50% | 100% |
| --- | --- | --- | --- | --- |
| Base Model | 19.74±1.5 | 47.54±6.67 | 71.62±3.0 | 85.75±0.08 |
| **Oracle SSL (\(\mathcal{D}+\mathcal{D}_{\text{u}}\))** | | | | |
| UDA (Xie et al., 2020) | 19.36±4.44 | 47.95±3.0 | 72.76±5.3 | N/A |
| FixMatch (Sohn et al., 2020) | 20.98±9.9 | 63.58±6.4 | 83.94±6.5 | N/A |
| FreeMatch (Wang et al., 2023) | 18.07±0.03 | 60.13±6.1 | 82.60±2.8 | N/A |
| **Generative SSL (\(\mathcal{D}+G_{\text{F}}\))** | | | | |
| Naive gSSL (FreeMatch) | 20.11±0.03 | 49.33±5.4 | 72.91±3.8 | 81.68±1.8 |
| P-SSL (Yamaguchi et al., 2022) | 20.34±4.2 | 48.27±4.8 | 72.62±3.3 | 85.78±2.3 |
| MP-SSL (Ours) | **23.82±5.55** | **53.37±5.6** | **76.33±3.1** | **86.84±1.0** |

Figure 3: Performance Comparisons in Small Labeled Dataset (ResNet-18)

### Analysis of Synthetic Samples

We examine what the classifier learns through MP-SSL. To this end, we visualize the synthetic samples generated by MP-SSL and compare them to real samples and to the synthetic samples generated by P-SSL. Figure 4 shows the real and synthetic samples. Interestingly, we see that P-SSL produces samples more closely related to the real samples (Cars), whereas MP-SSL produces less related but more diverse samples. Since the performance studies in Sec. 4.2 and 4.3 show that MP-SSL consistently outperformed P-SSL, this visualization result is counter-intuitive. We consider that this can be caused by the unsupervised regularization of MP-SSL, which penalizes the feature extractor instead of the entire model. As defined in Eq. (6), LMO of MP-SSL optimizes the latent vectors through backpropagation from the unsupervised loss, and thus the synthetic samples generated from the latent vectors are not optimized to become similar to real samples in the label space. The results suggest that the regularization of feature extractors does not necessarily require perfect imitation of the training data, and that the diversity of samples is more important.

### Ablation Study

#### 4.5.1 Meta-Learning and Gap Loss in LMO

We evaluate the effectiveness of LMO by decomposing the objective function defined in Eq. (6). Eq. (6) is composed of the meta-learning loss \(\mathcal{L}_{\text{val}}(\theta^{*})\) and the feature gap loss \(\mathcal{L}_{\text{gap}}(\phi,\xi)\). Table 3 shows the impact of these components on accuracy by ablating them in MP-SSL.
The row "MP-SSL w/o LMO" denotes the variant that discards LMO from MP-SSL, i.e., producing \(\hat{x}_{\text{u}}\) by random sampling from \(G_{\text{F}}\). From the results, we confirm that \(\mathcal{L}_{\text{val}}(\theta^{*})\) and \(\mathcal{L}_{\text{gap}}(\phi,\xi)\) contribute equally to the test performance. In other words, the meta-learning loss and the feature gap loss have different effects on the synthetic samples and are complementary.

Figure 4: Real and Synthetic Samples in Training (Cars)

#### 4.5.2 Conditional Mapper

We assess the design validity of the conditional mapper \(M_{\phi}(z,y)\). In Eq. (10), we define \(M_{\phi}\) to be conditioned by a training class label \(y\). To confirm the effectiveness of using labels, we tested an unconditional mapper \(M_{\phi}(z)\), which is created by discarding the components for labels from \(M_{\phi}(z,y)\). Table 4 summarizes the results. MP-SSL with a conditional mapper significantly outperformed the one with an unconditional mapper. Therefore, we can say that using conditional labels for transforming a latent variable \(z\) helps boost the models' performance.

#### 4.5.3 Label Converter

In Sec. 3.1, we design the label converter \(I_{\xi}\) with a Gumbel softmax module as in Eq. (11). This section provides an ablation study to evaluate this design choice. We varied the implementation of \(I_{\xi}\) with (a) a soft label by an embedding layer, i.e., \(\hat{y}_{\text{F}}=\text{EMB}_{\xi}(y)\), and (b) a soft Gumbel softmax, i.e., \(\hat{y}_{\text{F}}=I_{\xi}(y)\). Furthermore, we varied the hyperparameter \(\tau\) in Eq. (11). Table 5 shows the results. Using the Gumbel softmax with hard label output brings better test accuracy. This indicates that using the soft label output might not be appropriate for the unsupervised regularization loss, since it results in ambiguous and low-quality output, as in P-SSL, which uses soft labels for generating synthetic samples (Figure 4.3).

#### 4.5.4 Synthetic Consistency Regularization

We lastly provide an ablation study of SCR, defined in the cosine distance form of Eq. (13). We tested four variants of \(\ell_{\text{u}}\) in MP-SSL: (a) FreeMatch (Wang et al., 2023), which updates the entire model \(f_{\theta}\) including the classifier head \(h_{\omega}\), (b) L1 distance, i.e., \(|g_{\psi}(T_{\text{w}}(\hat{x}_{\text{u}}))-g_{\psi}(T_{\text{s}}(\hat{x}_{\text{u}}))|\), (c) L2 distance, i.e., \(\|g_{\psi}(T_{\text{w}}(\hat{x}_{\text{u}}))-g_{\psi}(T_{\text{s}}(\hat{x}_{\text{u}}))\|_{2}^{2}\), and (d) smooth L1 distance (Girshick, 2015). We list the results in Table 11. First, we see that our SCR loss significantly outperforms the FreeMatch loss. This means that consistency regularization on the feature space is quite effective for gSSL, as we expected in Sec. 3.2. Second, among the variants of SCR, the cosine distance based loss function achieved the best results. We conjecture that losses that directly minimize differences between feature vectors, such as L1 and L2 distance, involve the L1 and L2 norm of the feature vector; therefore, the norm of the feature vectors during training becomes relatively smaller, which hurts the norm of the loss gradients of the classification task (Hariharan and Girshick, 2017).

## 5 Related Work

**Semi-supervised Learning.** Semi-supervised learning (SSL) is a paradigm that trains a supervised model with labeled and unlabeled samples by simultaneously minimizing supervised and unsupervised losses.
Historically, various SSL algorithms have been proposed for deep learning, such as entropy minimization (Grandvalet and Bengio, 2005), pseudo-labeling (Lee et al., 2013), and consistency regularization (Bachman et al., 2014; Sajjadi et al., 2016; Laine and Aila, 2016). UDA (Xie et al., 2020) and FixMatch (Sohn et al., 2020), which combine the ideas of pseudo-labeling and consistency regularization, have achieved remarkable performance. More recent methods such as FreeMatch (Wang et al., 2023) improve UDA and FixMatch by adaptively controlling the confidence threshold for accepting pseudo labels to prevent error accumulation and overfitting. These SSL algorithms assume that plenty of unlabeled data is provided, because unlabeled samples can be obtained more easily than labeled samples with human annotations. However, we point out that even unlabeled data is becoming more difficult to obtain in today's increasingly privacy-conscious world. This paper opens up a new SSL paradigm that makes unlabeled data unnecessary by leveraging pre-trained generative foundation models.

**Leveraging Generative Models for Training Discriminative Models.** In the context of data augmentation and transfer learning, several studies have applied the expressive power of conditional generative models to boost the performance of discriminative models, e.g., classifiers. Zhu et al. (2018), Yamaguchi et al. (2020), Yamaguchi et al. (2023), and He et al. (2023) have exploited generated images from conditional GANs and diffusion models for data augmentation and representation learning, and Sankaranarayanan et al. (2018) have introduced conditional GANs in a domain adaptation setting to learn feature spaces of source and target domains jointly. Li et al. (2020) have implemented an unsupervised domain adaptation technique with conditional GANs in a setting where source datasets cannot be accessed. More similar to our work, Yamaguchi et al. (2022) have proposed a transfer learning method called P-SSL using pre-trained generative foundation models in semi-supervised learning. However, we note that P-SSL and our method differ in three respects: (a) problem setting, (b) assumptions on data and label spaces, and (c) optimization methods. For (a), the problem setting of our method is focused on SSL, whereas P-SSL is for transfer learning, where the neural architectures of source and target classifiers are different. For (b), our method assumes that the generative foundation model \(G_{\mathrm{F}}\) covers the training data space \(\mathcal{X}\). In contrast, P-SSL assumes that the label space of \(G_{\mathrm{F}}\) covers the training label space, i.e., \(\mathcal{Y}\subset\mathcal{Y}_{\mathrm{F}}\); the latter is more strict and thus the performance might degrade when it does not hold (Yamaguchi et al., 2022). For (c), we directly optimize the latent variables of \(G_{\mathrm{F}}\) to find optimal unlabeled samples for SSL, whereas P-SSL just samples related synthetic samples via similarity in the label spaces through a source pre-trained classifier. These differences produce the performance improvements of our method in SSL, as shown in Sec. 4.

## 6 Conclusion

This paper presents a new semi-supervised learning (SSL) problem setting called generative SSL, where real unlabeled datasets are unavailable and a generative foundation model is given as the source of unlabeled data instead. This setting is important because we are often restricted from obtaining real unlabeled data due to privacy concerns.
To solve this problem, we propose a training method called MP-SSL, which consists of latent meta-optimization (LMO) and synthetic consistency regularization (SCR). We experimentally demonstrate that our MP-SSL outperforms existing baselines and can potentially replace real unlabeled datasets with generative foundation models. One limitation of this work is its dependency on the existence of foundation generative models, but this limitation is likely to be relaxed as foundation models are rapidly being developed for various modalities in the community. Important future steps are to speed up or avoid the meta-learning computation in LMO and to extend our method to diffusion models, which produce synthetic samples with higher fidelity but require higher computational costs for sampling than GANs.
Semi-supervised learning (SSL) is a promising approach for training deep classification models using labeled and unlabeled datasets. However, existing SSL methods depend on large unlabeled datasets, which are not always available in many real-world applications due to legal constraints (e.g., GDPR). In this paper, we focus on the following research question: how can SSL models be trained without unlabeled datasets? Instead of using real unlabeled datasets, this paper proposes an SSL method that uses synthetic datasets generated from a generative foundation model trained on diverse domains (e.g., ImageNet). Our main concepts are to identify synthetic samples generated from the generative foundation model and to train classifiers using these synthetic samples. To this end, our method is formulated as an alternating optimization problem
2302.14426
At-Scale Evaluation of Weight Clustering to Enable Energy-Efficient Object Detection
Accelerators implementing Deep Neural Networks for image-based object detection operate on large volumes of data due to fetching images and neural network parameters, especially if they need to process video streams, hence with high power dissipation and bandwidth requirements to fetch all those data. While some solutions exist to mitigate power and bandwidth demands for data fetching, they are often assessed in the context of limited evaluations with a scale much smaller than that of the target application, which challenges finding the best tradeoff in practice. This paper sets up the infrastructure to assess at-scale a key power and bandwidth optimization - weight clustering - for You Only Look Once v3 (YOLOv3), a neural network-based object detection system, using videos of real driving conditions. Our assessment shows that accelerators such as systolic arrays with an Output Stationary architecture turn out to be a highly effective solution combined with weight clustering. In particular, applying weight clustering independently per neural network layer, and using between 32 (5-bit) and 256 (8-bit) weights allows achieving an accuracy close to that of the original YOLOv3 weights (32-bit weights). Such bit-count reduction of the weights allows shaving bandwidth requirements down to 30%-40% of the original requirements, and reduces energy consumption down to 45%. This is based on the fact that (i) energy due to multiply-and-accumulate operations is much smaller than DRAM data fetching, and (ii) designing accelerators appropriately may make that most of the data fetched corresponds to neural network weights, where clustering can be applied. Overall, our at-scale assessment provides key results to architect camera-based object detection accelerators by putting together a real-life application (YOLOv3), and real driving videos, in a unified setup so that trends observed are reliable.
Martí Caro, Hamid Tabani, Jaume Abella
2023-02-28T09:13:52
http://arxiv.org/abs/2302.14426v1
# At-Scale Evaluation of Weight Clustering to Enable Energy-Efficient Object Detection ###### Abstract Accelerators implementing Deep Neural Networks (DNNs) for image-based object detection operate on large volumes of data due to fetching images and neural network parameters, especially if they need to process video streams, hence with high power dissipation and bandwidth requirements to fetch all those data. While some solutions exist to mitigate power and bandwidth demands for data fetching, they are often assessed in the context of limited evaluations with a scale much smaller than that of the target application, which challenges finding the best tradeoff in practice. This paper sets up the infrastructure to assess at-scale a key power and bandwidth optimization - weight clustering - for You Only Look Once v3 (YOLOv3), a neural network-based object detection system, using videos of real driving conditions. Our assessment shows that accelerators such as systolic arrays with an Output Stationary architecture turn out to be a highly effective solution combined with weight clustering. In particular, applying weight clustering independently per neural network layer, and using between 32 (5-bit) and 256 (8-bit) weights allows achieving an accuracy close to that of the original YOLOv3 weights (32-bit weights). Such bit-count reduction of the weights allows shaving bandwidth requirements down to 30%-40% of the original requirements, and reduces energy consumption down to 45%. This is based on the fact that (i) energy due to multiply-and-accumulate operations is much smaller than DRAM data fetching, and (ii) designing accelerators appropriately may make that most of the data fetched corresponds to neural network weights, where clustering can be applied. Overall, our at-scale assessment provides key results to architect camera-based object detection accelerators by putting together a real-life application (YOLOv3), and real driving videos, in a unified setup so that trends observed are reliable. ## 1 Introduction Computing platforms deliver increasing levels of performance over time, which has allowed deploying performance-hungry functionalities across many applications and domains. For instance, camera-based object detection is used in a plethora of applications building on deep neural networks (DNNs), whose accuracy already matches the needs of many of those applications [1, 2], with realizations such as ResNet-101 [3], ResNet-152 [3], YOLO [4] and YOLOv3 [5]. Detection accuracy and computing cost (e.g., in the form of number of arithmetic operations needed) have also been used as key parameters to compare DNNs [5]. However, other parameters such as memory bandwidth and energy consumption are also key to adopt specific DNN implementations. Due to efficiency, DNN applications are often implemented using specialized accelerators that devote most resources (e.g. area and power) to do effective computing work such as performing multiply-and-accumulate (MAC) operations, which have been shown to be the basis of DNNs [6]. Those accelerators include systolic arrays [7], GPUs [8], as well as other application-specific designs [9]. However, in all cases, those accelerators require fetching large amounts of data that include (i) the image to be processed, and (ii) the weights of the DNN implemented [10]. Generally, this implies that large volumes of data need to be fetched from DRAM memory where data is stored. 
DRAM energy consumption has been shown to be the main source of energy consumption whenever data locality is poor and DRAM accesses are frequent [11], which is precisely the case when the computing unit (e.g., the accelerator) requires large and sustained amounts of data to be fetched from DRAM memory. In the particular case of camera-based object detection, DNNs with an acceptable degree of accuracy processing large enough images (e.g., as in the case of autonomous driving) may easily need to fetch some gigabytes of data to handle each image, with most of it being due to the weights that need to be fetched for the DNN [5]. Hence, data locality cannot realistically be achieved, since the images to be processed change every few milliseconds, and the weights that need to be reused across images require too much space (i.e., hundreds of MBs) to store them in any local cache or register file. Therefore, reducing the amount of data to be fetched, both for energy reasons and to reduce the memory bandwidth requirements, is a key challenge for the design of systems for camera-based object detection.

Different approaches have been investigated to decrease the memory energy and bandwidth requirements of DNNs, with reduced precision and weight clustering being two of the most prominent ones. The former, reduced precision [12], aims at using lower precision arithmetic, and hence lower precision data representations, to decrease the amount of data needed. For instance, using 32-bit floating point numbers instead of 64-bit ones halves the amount of data needed. On the other hand, reducing precision further discretizes the weights of the DNN, hence impacting accuracy. Therefore, it can only be applied to some extent. The latter, weight clustering [12], is orthogonal to the former and aims at restricting the number of different weights to a limited number (e.g., 128 different values) while keeping their precision (e.g., each of those 128 values is a 32-bit floating point number), so that those high-precision weights need to be fetched just once (e.g., 128 32-bit values), and the weights used by the DNN are encoded with fewer bits (e.g., 7 bits per weight instead of 32) to select the particular full-precision weight that must be used in each case. While reduced precision could be used instead of weight clustering, matching the number of precision bits (e.g., 7-bit weights) to the bits used for clustered weights, both would have almost identical memory bandwidth requirements (e.g., 7 bits per weight in both cases, plus 128 32-bit values for weight clustering), but weight clustering provides much higher precision (e.g., 32-bit weights instead of 7-bit ones) and hence better prediction accuracy [12].

The energy savings achieved with weight clustering have only been illustrated with limited examples, but, to our knowledge, the bandwidth reduction and energy savings of weight clustering have not been evaluated in realistic (at-scale) case studies. This paper addresses this challenge by assessing the gains that can be achieved with weight clustering in terms of memory bandwidth and energy in the context of a real-size application, the YOLOv3 camera-based object detector [5], which we assess with real driving videos for its use in the context of autonomous driving.
In particular, the contributions of this work are as follows:

* We integrate different power models for the MAC units, the DRAM memory, and the SRAMs needed by weight clustering in a consistent framework to estimate the energy consumption of the DRAM and accelerator devices with and without weight clustering.
* We integrate YOLOv3 with the power models to estimate the energy consumption to process each frame from a video.
* We assess the tradeoff between overall energy, memory bandwidth and prediction accuracy of weight clustering using both reference labelled data sets and videos from real driving conditions. Our evaluation considers both global and per-layer clustered weights, and different numbers of weight clusters, from 32 (5 bits) up to 256 (8 bits).

Our results show that weight clustering can be effectively used to preserve detection accuracy while significantly decreasing memory bandwidth requirements and overall energy in the context of autonomous driving. In particular, per-layer weight clusters prove particularly effective at keeping accuracy unaffected, with 7-bit and 8-bit clusters decreasing DRAM energy consumption by around 60% w.r.t. the original 32-bit data used by YOLOv3. Our results show that, with negligible accuracy loss, 5-bit clusters allow reaching DRAM energy reductions of around 68%. Bandwidth also drops from 200 GB/s for 25 frames-per-second (FPS) down to 60-80 GB/s depending on the bits used for clustering.

The rest of the paper is organized as follows. Section 2 provides some background on DNNs, appropriate accelerators and weight clustering. Section 3 presents the case study. Section 4 evaluates the case study. Related work is provided in Section 5. Section 6 summarizes this work.

## 2 Background

This section provides some background on DNNs in general, and Convolutional Neural Networks (CNNs) in particular. It also provides some details on dataflow accelerators for DNNs, and introduces clustering and its application to DNN weights.

### DNNs

DNNs may include tens or even more than one hundred hidden layers, which is not generally the case for other types of neural networks. Figure 1 illustrates the general structure of a DNN. DNNs consist of an input layer with \(n\) inputs, an output layer with \(m\) outputs and \(k\) hidden layers in-between, where \(n\), \(m\) and \(k\) are at least 1. Each node is called a _neuron_ and its value is computed with the values of the neurons in the previous layer that are connected to it, the weights associated to these connections and a bias. The weights and biases generally differ across layers; however, this is not always the case. Formally, a neuron is computed as presented in Equation 1, where \(n_{i,j}\) denotes a neuron in position \(j\) of layer \(i\), \(w_{i,k,j}\) is the weight of the connection between \(n_{i-1,k}\) and \(n_{i,j}\), \(m_{i-1}\) denotes the number of neurons in layer \(i-1\), and \(b_{i,j}\) is the bias of \(n_{i,j}\).

\[n_{i,j}=b_{i,j}+\sum_{k=1}^{m_{i-1}}n_{i-1,k}\times w_{i,k,j} \tag{1}\]

CNNs are one of the most popular types of DNNs. The CNN architecture encodes image-specific features, allowing it to be used primarily for finding patterns in images to recognize objects.
CNNs comprise an input layer that holds the pixel values of the image, convolutional layers that perform convolutions of the inputs with the weights and biases, pooling layers that downsample the spatial dimensionality of the input, fully-connected layers that produce class scores, and an output layer that holds the final classification of the objects [10].

Figure 1: Structure of a DNN.

Figure 2 shows the traditional 2-D convolution used for image processing.

Figure 2: 2-D convolution process.

The weights consist of a \(W_{h}\times W_{w}\) matrix, the inputs of an \(I_{h}\times I_{w}\) matrix, and the outputs of an \(O_{h}\times O_{w}\) matrix. The output height \(O_{h}\) can be calculated as \(O_{h}=\frac{I_{h}-W_{h}+2P}{S}+1\), where \(W_{h}\) is the number of rows of the weights, \(I_{h}\) is the number of rows of the inputs, \(P\) is the padding value (i.e., the number of rows and columns added to the frame of the image if needed, to cover the entire image with the sliding window), and \(S\) is the stride value (i.e., the number of positions to move the sliding window). The output width is calculated using the number of columns instead of the number of rows, \(O_{w}=\frac{I_{w}-W_{w}+2P}{S}+1\). The process of a 2-D convolution is performed with a sliding window in which the weight window moves through the entire input, performing an element-wise multiplication and accumulation. In the example shown, we can see that the weights are multiplied by a \(W_{h}\times W_{w}\) subset of the input and accumulated to obtain a single output element. The window then moves through the entire image to compute all the \(O_{h}\times O_{w}\) outputs.

### YOLOv3 Object Detection System

YOLOv3 [4, 5] is a camera-based detection software framework capable of processing images efficiently in real-time. YOLOv3 implements one of the DNNs delivering the highest accuracy, and hence it is very popular and has already been adopted as part of several industrial applications, including Apollo [13], which is an industrial autonomous driving framework. YOLOv3 [5] is capable of detecting around 80 different classes of objects, including multiple animals, common objects and food, but also multiple types of vehicles, pedestrians and road signs, with the latter three groups being of particular relevance for autonomous driving. For each image processed, YOLOv3 detects the objects in it and attaches to each one a confidence level describing the confidence in the detection and classification of that object (i.e., a value between 0% and 100%). However, a confidence threshold is used to drop those objects whose confidence is below the threshold (e.g., 50%), hence keeping only those for which the detection has been performed with sufficiently high confidence. We use the Darknet framework [14], developed by the YOLO authors, which implements the YOLOv3 model for image and video processing. For the sake of reliability, Darknet averages confidence levels across a specific number of images in the case of video processing (e.g., across 3 images). This allows mitigating false positives and false negatives that occur in just one frame, since their sporadic and erroneous confidence is averaged with that of the two previous frames. For that purpose, the average of each value of the YOLO layer output of the current image and the YOLO layer outputs of the previous two images is calculated.
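As an illustration of this smoothing scheme (a sketch only, not Darknet's actual code), the element-wise rolling average over the last three YOLO layer outputs could look as follows:

```python
# Illustrative sketch: average each value of the YOLO layer output over the
# current frame and the two previous frames before thresholding detections.
from collections import deque
import numpy as np

class OutputSmoother:
    """Keeps the last `window` YOLO layer outputs and returns their element-wise mean."""
    def __init__(self, window: int = 3):
        self.history = deque(maxlen=window)

    def smooth(self, yolo_layer_output: np.ndarray) -> np.ndarray:
        self.history.append(yolo_layer_output)
        return np.mean(np.stack(list(self.history)), axis=0)

# Example usage on a stream of per-frame outputs (shapes are illustrative):
# smoother = OutputSmoother(window=3)
# for frame_output in video_outputs:            # each is an ndarray of YOLO layer values
#     averaged = smoother.smooth(frame_output)  # then apply the confidence threshold (e.g., 50%)
```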
Finally, YOLOv3 provides two different models: Tiny YOLOv3, which is a lighter and simplified version of YOLOv3 that can run at over 200 frames per second, but at the cost of lower accuracy, and the baseline YOLOv3, which is a more sophisticated model with longer execution time but also higher accuracy. In this work we have used the baseline YOLOv3, which provides the highest accuracy and is, generally, more relevant for industry.

### Accelerators for DNNs

In this work, our reference accelerator is assumed to have an architecture such that its energy per MAC is low (i.e., most power dissipation occurs in MAC units), and whose utilization can be kept high (e.g., close to 100%) in the case of DNNs. These two characteristics are the basis of a high-performance and power-efficient accelerator for DNNs. Systolic arrays [15] adhere to those characteristics. Their typical architecture is depicted in Figure 3 for illustration purposes.

Figure 3: Overview of a CNN accelerator architecture.

As shown, such an accelerator includes an array of processing elements (PEs) capable of performing MACs (e.g., a MAC per cycle if properly pipelined). In particular, those PEs multiply inputs and weights, and accumulate the result with the output. Each PE is normally implemented with its own local (and small) register file (RF) or scratchpad memory, which is used to retain data to be reused so that data locality can be effectively exploited, minimizing costly data transactions. Generally, those RFs allow retaining one of the three operands of the MAC, whereas the other two need to be fetched for each operation from a slower and higher power memory (DRAM in our case), or forwarded by other PEs. Depending on what data is kept in the PE-local RFs, different dataflows can be defined. For instance, a _weight stationary_ dataflow keeps the weights to be operated in those RFs with the aim of not having to fetch them, hence maximizing their reuse and reducing the energy consumption devoted to fetching weights. An _output stationary_ dataflow, instead, stores in the RF the results of the MACs to be accumulated with the results of the following MACs to be executed. Such an architecture intends to minimize the energy consumption needed to read and write output results. Other possible dataflows - although less popular - include _input stationary_ dataflows to minimize the energy devoted to fetching inputs, _row stationary_ dataflows that aim at reducing the energy devoted to fetching all data types simultaneously (i.e., inputs, weights and outputs), and _no local reuse_ dataflows that do not reuse any data, but allow removing RFs from the PEs. As shown later in this work, we consider an Output Stationary dataflow, as it is a popular dataflow that can benefit significantly from using weight clustering.

### Weight Clustering

The weights of a DNN typically are the largest set of data to fetch, and this is the case for YOLOv3, which requires roughly 8 GB of weights to be fetched from memory to process each single image. This obviously poses high pressure on the memory bandwidth to allow real-time operation. For instance, if a value of 25 FPS is wanted - typical for video processing - around 200 GB/s of memory bandwidth are required. This is an overwhelming cost for autonomous driving systems, both in terms of the hardware procurement costs to allow such a high bandwidth, and in terms of the power to read so much data per second.
Note that autonomous driving systems employ multiple cameras and lots of other sensors, which increase the bandwidth and computation requirements manifold, resulting in significant pressure on the memory subsystem. This work assesses the effectiveness of a quantization technique called Weight Clustering or Sharing to mitigate the cost of fetching weights for DNNs. In particular, we assess its impact on both accuracy and energy consumption. Weight clustering has been shown to be an effective solution to decrease the size of the data to be operated on with very limited impact on accuracy. Weight clustering builds on limiting the number of different weights to be used, which are represented with fewer bits while preserving the original precision of the weights. This is achieved by making multiple weights identical, using the mean value of a cluster (i.e. a group of weights), with those weights being stored in a centroid table (also known as a codebook table). K-means [16, 17] is the most popular clustering algorithm. This algorithm works by finding the similarity between the data and grouping the data in \(K\) groups called clusters. Using this technique, the FP32 values of the weights used by YOLOv3 that are stored in a DRAM are replaced by smaller integer indices that are used to access the centroid table. Note that this table may be orders of magnitude smaller than the weights and can fit in a small on-chip SRAM within the systolic array. This adds the extra cost of performing an indirection, but since the indirection fetches data from a tiny on-chip SRAM with a significantly shorter access time than the DRAM, the expected overhead is negligible and clustering is expected to significantly reduce the power and memory bandwidth used to fetch the weights from memory. In terms of delay, weight translation through the centroid table can be organized to occur ahead of time so that full 32-bit weights are available in the PEs whenever needed. For instance, if we set \(K=256\) clusters for K-means, we will require only 256 different FP32 weights for the DNN, and we will be able to encode the weights of the DNN with only 8 bits to identify the particular FP32 weight to use for each one of them. The centroid table will have to store 256 unique FP32 values, which is only 1KB of data. Note that the size of the centroid table is independent of the number of weights of the DNN. For the sake of illustration, we provide a small example where we have 1,000 32-bit (e.g., FP32) weights, see left side of Figure 4. Those require 32,000 bits of storage. However, using K-means, in this example, we identify 4 32-bit values that can be used for clustering the 1,000 potentially different 32-bit values. We refer to those 4 clustered weights as _weight A_, _weight B_, _weight C_, and _weight D_ in the figure. As shown in the central part of the figure, we replace each one of the original 32-bit weights by the representative weight of the cluster to which the corresponding weight has been mapped with K-means. This intermediate step is only shown for illustrative purposes, but not applied in practice. Finally, we replace each of the 1,000 32-bit weights by a 2-bit index indicating to which of the 4 32-bit values such weight has been mapped. Therefore, as shown in the rightmost part of the figure, we need 1,000 2-bit values to store the clustered weights and the translation table with the 4 32-bit weights, 2,128 bits in total, which corresponds to a 15x storage reduction in this example.
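The storage arithmetic of this example can be reproduced with a few lines of illustrative code (ours):

```
#include <stdio.h>

int main(void)
{
    long n_weights  = 1000;            /* weights in the example            */
    int  idx_bits   = 2;               /* 4 clusters -> 2-bit indices       */
    int  n_clusters = 1 << idx_bits;

    long original  = n_weights * 32;                /* FP32 weights          */
    long clustered = n_weights * idx_bits           /* per-weight indices    */
                   + (long)n_clusters * 32;         /* centroid (codebook)   */

    printf("%ld bits -> %ld bits (%.1fx reduction)\n",
           original, clustered, (double)original / (double)clustered);
    /* Prints: 32000 bits -> 2128 bits (15.0x reduction) */
    return 0;
}
```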
Note that using reduced precision, as mentioned before, leads to similar memory bandwidth requirements (2,000 bits in the example). However, using lower precision constrains the accuracy of the DNN. In the example, we could only use 2-bit values for the weights. This reduces the cost of the arithmetic, but restricting precision impacts accuracy if the number of bits used becomes too low. Instead, weight clustering keeps similar bandwidth requirements to those of a comparable precision reduction (e.g., 2,128 bits vs 2,000 bits in the example), but allows using weights with much higher precision (e.g., 32-bit weights), which are operated on with the appropriate arithmetic units (e.g., FP32 arithmetic in the example) and hence allows the DNN to reach much higher accuracy than just using lower precision [18, 19, 20, 21, 22, 23, 24].

Figure 4: Example where 1,000 32-bit weights are clustered into 1,000 2-bit weights.

## 3 Case Study Modelling

This section presents the design and modelling choices for our case study. Those choices relate to the underlying design of the systolic array, the actual application of weight clustering, the power model, and the input data used for evaluation.

### Systolic Array

As discussed in Section 2.3, systolic arrays can be organized in different ways depending on what data is fetched, and what data remains stationary - if any. Given that weight clustering allows a drastic reduction of the bandwidth and energy requirements related to fetching weights, it is expected that the most appropriate dataflow is the one where any other type of data (inputs and outputs) requires low bandwidth and energy. Otherwise, Amdahl's law would severely limit the achievable gains. Therefore, a systolic array with an output stationary dataflow is the most appropriate baseline organization to consider. Such an organization allows keeping intermediate results local in the PEs until they are final, hence not needing to fetch them. Weights, instead, must be fetched every time they are required, and they account for a much larger size than the input data operated on, so potential gains with weight clustering are huge if we use an output stationary dataflow, since the number of bits per weight that must be fetched from memory is drastically reduced. Moreover, despite not being explicitly evaluated in our study, output stationary systolic arrays minimize write operations to memory since intermediate results are not sent out. If the memory space where those results are stored is coherent, write operations may trigger coherence messages into the cores, potentially harming performance. Hence, minimizing those write operations may also bring benefits in terms of reduction of the coherence messages triggered.

### Weight Clustering Application

In our case study, we consider two alternative ways to realize weight clustering:

* _All layers_ (global). In this case, clustering is applied using as input all weights of all layers at once. This approach minimizes the number of weights needed, but may impact accuracy since all layers are enforced to use the same set of weights.

* _Per layer_ (local). Clustering is applied separately to the weights of each layer individually. Accuracy is gained since each layer may use the best set of weights for the corresponding layer. On the other hand, weights need to be loaded every time a new layer starts. Note, however, that the cost of fetching a few floats (e.g. between 32 and 256 floats) per layer is negligible compared to the typical tens of MBs of data needed per layer. A sketch of the clustering step itself is shown below.
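The clustering step can be illustrated with a minimal, self-contained sketch of plain 1-D K-means over one layer's weights (our code; the initialization and iteration count are arbitrary illustrative choices, not necessarily those used in our study):

```
#include <stdlib.h>
#include <string.h>
#include <float.h>

/* Cluster n weights into k (<= 256) centroids and fill the per-weight indices. */
void kmeans_weights(const float *w, long n, int k, int iters,
                    float *centroids, unsigned char *indexes)
{
    float lo = FLT_MAX, hi = -FLT_MAX;
    for (long i = 0; i < n; ++i) {              /* spread initial centroids     */
        if (w[i] < lo) lo = w[i];               /* evenly over the weight range */
        if (w[i] > hi) hi = w[i];
    }
    for (int c = 0; c < k; ++c)
        centroids[c] = lo + (hi - lo) * (c + 0.5f) / (float)k;

    float *sum = malloc(k * sizeof(float));
    long  *cnt = malloc(k * sizeof(long));
    for (int it = 0; it < iters; ++it) {
        memset(sum, 0, k * sizeof(float));
        memset(cnt, 0, k * sizeof(long));
        for (long i = 0; i < n; ++i) {          /* assignment step              */
            int best = 0;
            float bestd = FLT_MAX;
            for (int c = 0; c < k; ++c) {
                float d = w[i] > centroids[c] ? w[i] - centroids[c]
                                              : centroids[c] - w[i];
                if (d < bestd) { bestd = d; best = c; }
            }
            indexes[i] = (unsigned char)best;
            sum[best] += w[i];
            cnt[best]++;
        }
        for (int c = 0; c < k; ++c)             /* update step: cluster means   */
            if (cnt[c] > 0) centroids[c] = sum[c] / (float)cnt[c];
    }
    free(sum);
    free(cnt);
}
```

For per-layer clustering this function would be called once per layer; for global clustering, once over the concatenation of all layers' weights.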
To implement both alternatives, we have used the K-means algorithm, as indicated before, and modified the relevant functions of YOLOv3 to read and process the weights accordingly. Listing 1 shows the changes made to the convolution operation, showing the added indirection needed to fetch the weights. Line 10 corresponds to the original weight read operation, and line 11 to the operation using clustering. Note that the example corresponds to the simple case with 8-bit clustered weights. We have also used lower numbers of bits (5, 6 and 7), whose code is similar to the case of 8 bits, but requires some bit manipulation given that they are not byte aligned.

```
 1  void gemm_nn_centroids(int M, int N, int K, float ALPHA,
 2                         float *A, int lda,
 3                         float *B, int ldb,
 4                         float *C, int ldc,
 5                         float *centroids, unsigned char *indexes)
 6  {
 7      int i, j, k;
 8      for (i = 0; i < M; ++i) {
 9          for (k = 0; k < K; ++k) {
10              // float A_PART = ALPHA * A[i*lda + k];                /* original weight read  */
11              float A_PART = ALPHA * centroids[indexes[i*lda + k]];  /* clustered weight read */
12              for (j = 0; j < N; ++j) {
13                  C[i*ldc + j] += A_PART * B[k*ldb + j];
14  }}}}
```
Listing 1: C implementation of the gemm function using clustered weights.

K-means has already been considered in the past to realize clustering of activations and/or weights [18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30]. However, to our knowledge, our work is the first one assessing the benefits, in terms of energy and bandwidth, of post-training weight clustering for an at-scale case study such as YOLOv3.

### Power Model

For our assessment, we have modelled the main power contributors of an accelerator running the DNN, namely DRAM accesses and MAC operations [12]. Since centroids tables are direct overheads of weight clustering, we have also included their energy consumption, modelling them as SRAM tables. Instead, other components of the accelerator, such as its peripheral logic, have not been included, as their contribution to the total energy of the accelerator is relatively small and, in any case, it will not experience significant variations regardless of whether weight clustering is used or not.

#### 3.3.1 Modelled DRAM and SRAM specifications

In our at-scale assessment of YOLOv3, we must take into account two types of memories: the DRAM memory where data is stored and fetched from, and the centroids tables (SRAM memories) used when weight clustering is implemented. Both types of memories have been modelled using the CACTI tool [31], which models energy consumption, as well as area and delay, for a wide range of memories. Hence, it allows collecting DRAM and SRAM energy estimates building on comparable models. Tables 1 and 2 describe the specific parameters used to model the DRAM and SRAM memories respectively in our case study. Regarding the technology node considered, it has been chosen to keep consistency with the energy values available for MAC operations, taken from [19], as described later. This allows modelling properly the contribution to total energy of each component, and hence obtaining results where the relative cost of DRAM and MAC energy consumption is kept consistent and trends are meaningful, even though the absolute values do not correspond to the latest technology nodes in the market.
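In equation form, the per-image energy model described in this section can be summarized as follows (our notation; the access and operation counts \(N\) are those derived in Sections 3.3.2 and 3.3.3, and the SRAM term only applies when weight clustering is used):

\[E_{image}\approx N^{rd}_{DRAM}\cdot E^{rd}_{DRAM}+N^{wr}_{DRAM}\cdot E^{wr}_{DRAM}+\sum_{op\in FP}N_{op}\cdot E_{op}+N^{rd}_{SRAM}\cdot E^{rd}_{SRAM}\]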
\begin{table} \begin{tabular}{|l||c|} \hline \multicolumn{2}{|c|}{**DRAM Configuration**} \\ \hline **Type:** & DDR4-3200 \\ \hline **Size (GB)** & 1 \\ \hline **Channels** & 8 \\ \hline **Bus width (bits)** & 64 \\ \hline **Burst length (bits)** & 64 \\ \hline **Technology (nm)** & 45 \\ \hline **Page size (KB)** & 1 \\ \hline **Energy per read (pJ)** & 2937 \\ \hline **Energy per write (pJ)** & 2937 \\ \hline **Interface frequency (GHz)** & 3.2 \\ \hline \end{tabular} \end{table} Table 1: Modelled DRAM configuration.

\begin{table} \begin{tabular}{|l||c|} \hline \multicolumn{2}{|c|}{**SRAM Configuration**} \\ \hline **Bus width (bits)** & 32 \\ \hline **Burst length (bits)** & 32 \\ \hline **Technology (nm)** & 45 \\ \hline \end{tabular} \end{table} Table 2: Modelled SRAM configuration.

For DRAM accesses, the energy reported in Table 1 corresponds to 8-byte random accesses. However, an output stationary dataflow exploits spatial reuse in the memory row buffers [32]. Hence, assuming 1 KB row buffers, we can expect 1 row miss every 128 accesses, while the remaining 127 accesses are row hits, whose energy consumption is significantly lower. Since CACTI does not provide data for row hits, we have used DRAMsim3 [33] to estimate the relation between the energy consumption of row hits and that of row misses. For that purpose, we have collected energy measurements for one of the default DRAM configurations (DDR4_1Gb_x16_1866) and we have used two traces to estimate the energy ratio of row hits and misses. To do so, we have executed read and write access traces with spatial locality (1 miss followed by 127 hits) and without spatial locality (all misses), and extrapolated the energy consumption of hits w.r.t. misses. Results for each access type are shown in Table 3.

\begin{table} \begin{tabular}{|l||c|} \hline Row Miss Read Energy (pJ) & 2937 \\ \hline Row Hit Read Energy (pJ) & 1735 \\ \hline **Average Read Energy (pJ)** & 1753 \\ \hline \hline Row Miss Write Energy (pJ) & 2953 \\ \hline Row Hit Write Energy (pJ) & 1859 \\ \hline **Average Write Energy (pJ)** & 1876 \\ \hline \end{tabular} \end{table} Table 3: DRAM Average Read and Write Energy.

Regarding the SRAM memory for centroids, its size and energy vary depending on the clustering configuration as reported in Table 4. Note that, in this case, the reported SRAM read and write energy is for fetching 32 bits of data, since weights for YOLOv3 are 32-bit floating point numbers. The number of read ports of the SRAM memory has been set large enough to allow translating as many weights as fetched per cycle in each case. In any case, the SRAM energy cost is negligible regardless of its number of ports and size, since it is several orders of magnitude smaller than that of DRAM accesses. Regarding SRAM static energy, it has also been included, but it is completely negligible even if we assume a physical SRAM per layer as potentially needed in the per-layer clustering. In fact, very few tables would be needed if we load centroids right before they are needed, but as said, SRAM static energy is simply irrelevant (below 0.1% of the total energy in all cases).

#### 3.3.2 Number of Memory Accesses

Memory access counts have been computed building on the feature maps of the weights, outputs, and inputs, their padding and stride, and the dataflow of the accelerator (Output Stationary in our case), as well as considering spatial data reuse.
We consider the memory accesses of all layers in the neural network used by YOLOv3 for object detection, namely, the convolutional, route, upsample, shortcut, and YOLO layers. We note that YOLOv3 has 3 types of weight filters: 3x3 with Stride 1, 3x3 with Stride 2, and 1x1 with Stride 1. Next, we specify the formulas for calculating the number of memory accesses based on the three types of weights, where appropriate. The meanings of the abbreviations used in the formulas are W: Weights, I: Inputs, O: Outputs, and the meanings of the subscripts are w: width, h: height, c: channels, and f: filters. For the number of memory accesses, it is assumed that each access fetches/writes a single FP element (32 bits). Knowing that the modelled DRAM has a bus width of 64 bits and that the read/write energy reported is for 64-bit accesses, we divide the number of accesses by 2 for DRAM accesses.

* Convolutional Layer: Performs the convolutions between weights and inputs.
  * Weight Reads (3x3, Stride 1, Stride 2) = \(W_{w}\times W_{h}\times W_{c}\times W_{f}\times(I_{h}-2)\)
  * Weight Reads (1x1, Stride 1) = \(W_{w}\times W_{h}\times W_{c}\times W_{f}\times I_{h}\)
  * Input Reads (3x3, Stride 1) = \(I_{w}\times W_{h}\times W_{c}\times(I_{h}-2)\)
  * Input Reads (3x3, Stride 2) = \((I_{w}+1)\times W_{h}\times W_{c}\times(I_{h}-2)\)
  * Input Reads (1x1, Stride 1) = \(I_{w}\times W_{h}\times W_{c}\times I_{h}\)
  * Output Writes = \(O_{h}\times O_{w}\times W_{f}\)
* Shortcut Layer: Adds the feature maps of two layers.
  * Reads: \(I_{h}\times I_{w}\times I_{c}+I_{h^{\prime}}\times I_{w^{\prime}}\times I_{c^{\prime}}\)
  * Writes: \(I_{h}\times I_{w}\times I_{c}+I_{h^{\prime}}\times I_{w^{\prime}}\times I_{c^{\prime}}\)
* Route Layer: Outputs a specific layer or concatenates two layers.
  * Reads (one layer): \(I_{h}\times I_{w}\times I_{c}\)
  * Reads (concatenation): \(I_{h}\times I_{w}\times I_{c}+I_{h^{\prime}}\times I_{w^{\prime}}\times I_{c^{\prime}}\)
  * Writes (one layer): \(I_{h}\times I_{w}\times I_{c}\)
  * Writes (concatenation): \(I_{h}\times I_{w}\times I_{c}+I_{h^{\prime}}\times I_{w^{\prime}}\times I_{c^{\prime}}\)
* Upsample Layer: Upscales the output of a layer (2x upscaling in our case).
  * Reads: \(I_{h}\times I_{w}\times I_{c}\)
  * Writes: \(4\times I_{h}\times I_{w}\times I_{c}\)
* YOLO Layer: Performs the object detections.
  * Reads: \(I_{h}\times I_{w}\times I_{c}\)
  * Writes: \(I_{h}\times I_{w}\times I_{c}\)

\begin{table} \begin{tabular}{|l||c|c|} \hline **Configuration** & **SRAM size (bytes)** & **Read Energy (pJ)** \\ \hline **Cluster-8 bits** & 1024 & 0.85 \\ \hline **Cluster-7 bits** & 512 & 0.52 \\ \hline **Cluster-6 bits** & 256 & 0.40 \\ \hline **Cluster-5 bits** & 128 & 0.36 \\ \hline \end{tabular} \end{table} Table 4: SRAM read energy for each cluster configuration.

#### 3.3.3 Number of FP operations and their energy estimation

The number of FP operations has been computed using different procedures for convolutional layers and the other layers. For convolutional ones, we use the following formula to compute the number of MACs:

\[\#MACs=O_{w}\times O_{h}\times W_{w}\times W_{h}\times W_{c}\times W_{f}\]

To calculate the number of FP operations from other layers or sources (such as the activations), we have used software counters. The FP operations used during object detection are the following: addition (FPADD), multiplication (FPMUL), subtraction (FPSUB), division (FPDIV), exponential (FPEXP), and square root (FPSQRT).
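As a worked sketch (our code), the convolutional-layer formulas above translate directly into counting helpers; only the 3x3, stride-1 case is shown, together with the #MACs expression from Section 3.3.3:

```
/* Per-layer element counts for a 3x3, stride-1 convolution, transcribing the
 * formulas above. Counts are in 32-bit elements; dividing reads/writes by 2
 * gives 64-bit DRAM accesses, as explained in the text.                      */
typedef struct { long w_reads, i_reads, o_writes, macs; } layer_counts;

layer_counts conv3x3_s1_counts(long Ih, long Iw, long Wc, long Wf,
                               long Oh, long Ow)
{
    const long Wh = 3, Ww = 3;
    layer_counts c;
    c.w_reads  = Ww * Wh * Wc * Wf * (Ih - 2);   /* weight reads            */
    c.i_reads  = Iw * Wh * Wc * (Ih - 2);        /* input reads             */
    c.o_writes = Oh * Ow * Wf;                   /* output writes           */
    c.macs     = Ow * Oh * Ww * Wh * Wc * Wf;    /* #MACs (Section 3.3.3)   */
    return c;
}
```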
Energy consumption of FP operations has been obtained from [19] in the case of FPADD and FPMUL, for a 45 nm technology node, which, as discussed before, is in sync with the one used for our memory models (DRAM and SRAM) to perform fair energy comparisons. However, we lack explicit energy estimates for FPDIV, FPSUB, FPSQRT and FPEXP. Nevertheless, those operations represent a negligible part of the FP operations performed by YOLOv3, since the MACs of the convolutional layers already account for 99.54% of the overall FP operations in YOLOv3. Hence, we have modelled those other operations, which represent a fraction of the remaining 0.46% of FP operations, considering that their energy consumption matches that of an FPMUL operation.

### Datasets

For our case study, we have used two different types of datasets: a labelled one, and a set of real driving videos. A labelled dataset is composed of a set of images labelled by humans. This means that, for each image, the bounding boxes of the objects have been drawn and classified as accurately as possible by a human. These accurate bounding boxes are called ground truths and are used to compare against the predicted bounding boxes of an object detector, to determine its accuracy by using different metrics. Due to its large number of images, and due to defining object categories comparable to those of YOLOv3, we have used the COCO dataset [34]. In particular, this dataset contains ten thousand high-quality images of complex everyday scenes containing common objects in their natural context. Moreover, the training set of this dataset is the one used to train YOLOv3. The drawback of this dataset is that we are mostly interested in images of road scenes taken from vehicle cameras, but this dataset contains more general contexts. Therefore, we have dropped all those images that do not include any vehicle, therefore keeping 880 images. In any case, note that the fact that those images include vehicles does not imply that they belong to driving scenarios. Hence, the accuracy of the object detection process for this set of images is only relevant in relative terms to compare the accuracy of different setups, but not to obtain conclusions from the absolute values. We have also considered a set of unlabelled automobile videos to complement the analysis of labelled data. To perform this analysis, we have used thirty-second extracts, taken at 30 FPS, from six videos of real driving conditions, which provide more relevant input data for our case study [35, 36, 37, 38, 39, 40]. The main disadvantage of using these videos is that we lack a ground truth for comparison purposes since the videos are unlabelled. Hence, the only possible comparison is to use the output of the default YOLOv3 without weight clustering as if it were the ground truth. This may lead to some pessimism in terms of accuracy for weight clustering, since objects in the boundary between being identified or discarded (e.g. those whose confidence of detection is around 50%) are relatively often misclassified, being either false positives or false negatives. Hence, a relevant number of misclassified objects in the baseline YOLOv3 may be properly classified with weight clustering - partly due to the stochastic nature of the object detection process - but accounted as misclassifications due to the lack of a ground truth.
In the case of videos, we fully build on the redundancy delivered by YOLOv3 so that, as explained before, object detections are obtained by averaging the confidence level across multiple frames - 3 in particular - rather than using the confidence values obtained frame by frame. This approach mitigates the risk of sporadic misclassifications. For instance, an object detected with confidence levels 80%, 90% and 40% across three frames would be regarded as a detection with 70% confidence, thus above the threshold (e.g. 50%), even though in the last image it was below the threshold.

### Practical Integration in an Accelerator

Weight clustering can be integrated into an accelerator in multiple ways. The simplest one consists of a translation layer for weights fetched from memory so that the internal design of the accelerator remains unchanged. This is illustrated in Figure 5 for a systolic array accelerator, where arrow width indicates bit-width. As shown, the accelerator stores and operates on full weights as in a non-weight-clustering design. Cluster IDs are fetched from memory and immediately translated into full weights to keep the accelerator design unchanged. This has the advantage of being non-intrusive with respect to the accelerator design. However, it prevents exploiting the fact that cluster IDs require fewer bits than full weights, so that IDs could be propagated further, delaying the translation.

Figure 5: Example of clustering integration at the data fetch interface. Arrows only reflect DNN weights, not image data.

Figure 6 shows a completely opposed design approach where ID translation occurs as late as possible, right before data are operated on in the PEs. This allows the column and row management logic (i.e. feeders) to use small IDs (e.g. 6-bit) rather than full weights (e.g. 32-bit), as well as narrower connections to transmit weights to PEs. On the other hand, translations may occur in more locations simultaneously, so weight translation tables likely need to be replicated across multiple locations. Overall, there is a trade-off between decreasing the width of the weights propagated by using IDs instead of full weights, and the cost of replicating translation logic in multiple locations. Note that intermediate solutions where, for instance, translation occurs per column or per row are also possible, therefore providing other trade-offs to choose the most effective one, which is fully accelerator dependent and may change across different accelerator architectures or topologies.

## 4 Evaluation

This section evaluates the different tradeoffs between accuracy, energy consumption and memory bandwidth for both the COCO dataset and the real driving videos.

### Energy Breakdown

To illustrate the energy contribution of DRAM accesses to fetch DNN weights in the baseline YOLOv3 configuration, we start by estimating the energy consumption devoted to both components, namely DRAM accesses and FP operations, with the model described in the previous section applied to YOLOv3. In particular, we estimate the energy needed to process one image. As shown in Figure 7, 2086 mJ are required to process an image, which would correspond to 52 Watts if one image was processed every 40 ms (i.e., 25 FPS). As shown, 84.4% of such energy is devoted to DRAM accesses, which are, therefore, the main energy contributor. Figure 8 shows the distribution of memory accesses of the baseline YOLOv3, dividing the memory accesses into weights (only reads), inputs (only reads), and outputs (reads and writes).
Figure 6: Example of clustering integration at the processing element (PE) interface. Arrows only reflect DNN weights, not image data.

As shown, 81.9% of the DRAM accesses correspond to weights, whereas inputs and outputs account for only 12.0% and 6.1%, respectively. Hence, weights are the main energy contributor of DRAM accesses. If we put both results together, we come to the conclusion that DRAM accesses to fetch weights account for almost 70% of the total energy for the considered output stationary systolic array. Hence, weight clustering has significant potential to decrease both energy consumption and bandwidth requirements.

### Accuracy Evaluation

To compare the accuracy of the different configurations considered, we use the Mean Average Precision (mAP) [41]. The mAP is used as a standard metric to evaluate and compare the accuracy of object detectors, combining true positives, false negatives and false positives into a single value (the mAP). Such a value allows assessing and comparing the prediction accuracy of object detectors. The mAP builds upon the Intersection over Union (IoU) metric [42], which determines whether two bounding boxes overlap enough to regard them as the same object detection. We refer the interested reader to the original works describing those metrics for further details. To measure the mAP in our experiments, we use the public implementation provided in [43]. The results when applying weight clustering for the labelled dataset are shown in Figure 9, where we provide the mAP for person and vehicle categories together. Clustering accuracy is given for those configurations providing relevant mAP values, namely between 5 and 8 bits, thus discarding weight clustering with 4 bits or fewer. More than 8 bits have not been considered since 8 bits already achieves mAP values very close to those of the non-clustered YOLOv3. Results include per-layer clustering (left) as well as global clustering for all layers (right). As shown in the figure, mAP remains high for 7-bit and 8-bit clusters independently of whether clustering is applied for all layers at once or per layer. However, if we further reduce the number of clusters to 64 (6 bits) or 32 (5 bits), then only per-layer clustering achieves acceptable mAP. Instead, with those few clusters, if we apply clustering simultaneously to all layers, accuracy degrades significantly. The reduction of the data size for the different configurations is shown in Figure 10. For instance, 8-bit weight clustering produces a 4x data reduction given that each weight is stored as an 8-bit index rather than a 32-bit FP number in DRAM memory. Note that, if only the labelled dataset is considered, 5-bit clusters per layer offer very good accuracy and a size reduction factor for weights above 6, meaning that the original weights (32 bits) require more than 6x the memory space of the clustered ones (5 bits). Figure 11 shows the mAP for vehicle and person objects together for the videos. First, note that there is some drop in the mAP w.r.t. the baseline. Visual inspection reveals that, as indicated before, this is not real accuracy loss but different object classifications for objects in the boundary. In general, this effect is mostly captured in the mAP drop when using 8-bit clusters. Therefore, any additional accuracy loss beyond that level can be confidently attributed to the impact of clustering on accuracy. As shown, 8-bit clusters (global and per layer) and 7-bit clusters (global only) achieve high accuracy.
Regarding 7, 6, and 5-bit clusters per layer, mAP drops moderately. However, 6 and 5-bit clusters applied globally lead to dramatic accuracy loss. Overall, the best setups correspond to 8-bit clusters, or to 7, 6 or 5-bit per-layer clusters, depending on the degree of accuracy that can be sacrificed for the sake of reducing energy consumption and memory bandwidth requirements.

Figure 9: mAP of vehicles and people for the Labelled dataset for the baseline and different clustering configurations.

Figure 10: Size reduction factor for the different clustering configurations.

Figure 11: mAP of vehicles and people for the Videos for the baseline and different clustering configurations.

### Bandwidth Evaluation

Memory bandwidth requirements for the different clustering configurations as well as for the baseline YOLOv3 are provided in Table 5. Note that in all cases we assume that object detection must be performed at a rate of 40 ms per frame (25 FPS). Under these assumptions, the baseline YOLOv3 requires 200 GB/s (199.97 to be more precise) of DRAM memory bandwidth. The particular DRAM modelled has a theoretical maximum bandwidth of 204.8 GB/s (8 channels * 3.2 GHz * 8 bytes per transfer). Hence, it could sustain the required bandwidth, but only very tightly. Clustering, instead, allows reducing bandwidth requirements down to 63-77 GB/s (i.e., a reduction of the bandwidth required between 61% and 68%), therefore enabling the use of less aggressive (and cheaper) DRAM memory systems. Hence, we could achieve 25 FPS with a memory system delivering much lower bandwidth (e.g., a memory offering 63.4 GB/s would allow reaching 25 FPS if 5-bit clusters are used). In other words, if we keep the same DRAM memory reaching 204 GB/s, weight clustering would allow reaching 65-79 FPS.

\begin{table} \begin{tabular}{|l||c|c|c|c|} \hline **Configuration** & **Bandwidth** & **FPS** & \multicolumn{2}{c|}{**Relative energy**} \\ \cline{3-5} & **GB/s** & & **Memory only** & **Overall** \\ \hline **Baseline** & 200.0 & 25 & 100.0\% & 100.0\% \\ \hline **Clustered 8 bits** & 77.1 & 65 & 38.9\% & 48.4\% \\ \hline **Clustered 7 bits** & 73.0 & 68 & 38.9\% & 48.4\% \\ \hline **Clustered 6 bits** & 68.9 & 73 & 34.8\% & 45.9\% \\ \hline **Clustered 5 bits** & 63.4 & 79 & 32.0\% & 42.6\% \\ \hline \end{tabular} \end{table} Table 5: Bandwidth, FPS, and energy comparison.

### Energy Evaluation

First, we have evaluated the energy consumption of the centroids table using SRAM memories across the different configurations, hence varying the SRAM size, the number of ports, as well as the number of SRAMs (tables) needed in each case. Energy consumption of the centroids tables is below 0.1% of the total energy in all cases. Therefore, while it is included in the estimates, we do not further discuss its contribution in the rest of this section. Table 5 (column corresponding to relative memory energy) provides the results for DRAM and SRAM memories consolidated. In particular, it reports their relative energy consumption with respect to the baseline configuration. As shown, weight clustering achieves large memory energy reductions in the range 61-68%. In our evaluation, we assume that all data are word aligned (32-bit alignment) so that no weight value spans across two different 32-bit words. Therefore, 8-bit and 7-bit clustering allow storing 4 weights per word, whereas 6-bit clustering and 5-bit clustering can encode 5 and 6 weights per 32-bit word, respectively.
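A sketch (ours) of the word-aligned packing just described: with \(b\)-bit indices, \(\lfloor 32/b\rfloor\) indices live in each 32-bit word and the unused high bits are left as padding, so no index ever straddles a word boundary.

```
/* b-bit cluster indices packed per 32-bit word: 8b,7b -> 4; 6b -> 5; 5b -> 6. */
static inline int indices_per_word(int b) { return 32 / b; }

/* Read the idx-th b-bit cluster index from a word-aligned packed array. */
static inline unsigned read_index(const unsigned int *packed, long idx, int b)
{
    int per_word   = indices_per_word(b);
    unsigned word  = packed[idx / per_word];
    unsigned shift = (unsigned)(idx % per_word) * (unsigned)b;
    return (word >> shift) & ((1u << b) - 1u);
}
```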
Hence, 8-bit and 7-bit clustering achieve analogous energy savings, as shown in the table. Since the centroids tables produce negligible energy consumption, the difference between applying clustering per layer or globally is negligible (largely below 0.1%), so results with one decimal digit match across both configurations in all cases, and therefore we do not report them twice. Results also accounting for the energy consumed by FP operations are shown in the last column. Overall, clustering achieves total energy reductions in the range 52-57%, and hence energy consumption is between 43% and 48% of that of the original YOLOv3 without weight clustering. Note also that, for a given number of clustering bits, the reduction achieved by the All Layers and Per Layer approaches is virtually the same, as discussed before.

### Tradeoff Analysis

Figures 12 and 13 put together accuracy and energy savings in a single plot for the video dataset and labelled dataset, respectively. In the figures, the AL abbreviation indicates clustering of _All Layers_, and the PL abbreviation indicates clustering _Per Layer_. Note that, since bandwidth relates almost linearly to energy consumption, we use only energy consumption in the plots.

Figure 12: mAP and Energy reduction tradeoff for the different clustering configurations (Video dataset). The baseline configuration corresponds to (0% energy reduction, 100 mAP).

Looking at both plots, we can reach the following two conclusions: (1) if accuracy has more importance than energy, then 8-bit clusters, either for all layers or per layer, provide high energy savings with limited impact on accuracy. (2) However, if energy consumption is the most relevant feature to optimize, then 5-bit clusters per layer are the best choice, since they deliver an additional 6% energy reduction w.r.t. 8-bit clusters, and their accuracy is higher than that of any other configuration providing comparable energy savings. Note that 7-bit AL provides virtually the same accuracy as 8-bit clustering for the labelled dataset. However, its accuracy is lower for the video dataset. Therefore, 8-bit clustering is a better option overall. Analyzing the results, we reach the conclusion that some configurations could be safely discarded: e.g., 7-bit PL and 7-bit AL could be discarded, since 8-bit PL and 8-bit AL provide the same energy reduction but higher accuracy. 6-bit PL and 6-bit AL could also be discarded, since 5-bit PL provides both higher energy reductions and higher accuracy. Lastly, 5-bit AL can be discarded since its accuracy is not acceptable, and 5-bit PL provides the same energy reduction with an acceptable accuracy.

## 5 Related Work

Related works in the area of clustering for neural networks are abundant. The work from Han et al. [18] analyses the impact of clustering and network pruning during training and also Huffman coding after training for LeNet-300, LeNet-5, AlexNet, and VGG-16, but does not analyze the energy impact of weight clustering alone. Han et al. [19] also propose the first accelerator using clustering, pruning and Huffman coding, building on top of the previous work. That work analyzes the three techniques combined and 16-bit fixed-point data to be able to reduce the model to fit it into an on-chip SRAM, consuming orders of magnitude less energy than a DRAM. This is not possible in our case, since the YOLOv3 model cannot be reduced as drastically applying only clustering without sacrificing accuracy significantly.
Figure 13: mAP and Energy reduction tradeoff for the different clustering configurations (Labelled dataset). The baseline configuration corresponds to (0% energy reduction, 61 mAP).

Choi et al. [20] propose a similar workflow to Han et al. [18], implementing pruning, clustering, and Huffman coding on LeNet, ResNet, and AlexNet. The work proposes the use of Hessian-weighted k-means to perform weight quantization, showing that Hessian-weighted k-means is a viable alternative to the traditional k-means algorithm and allows applying weight clustering to the entire model, rather than per layer as used by Han et al. [18], with negligible accuracy loss. Wang et al. [21] propose an accelerator for CPU+FPGA-based heterogeneous platforms for YOLOv2 by exploiting pruning, weight clustering, and quantization. However, this work uses the PASCAL VOC dataset to assess the accuracy of the different configurations, while our work uses a more recent version of YOLO, and provides a more complex case study of real driving videos. Moreover, our work focuses on the weight-clustering technique, providing an in-depth analysis of its hardware implications, and the accuracy and energy impact for different clustering configurations. Ye et al. [22] is a more recent work that studies pruning and clustering in LeNet-5, AlexNet, and VGG-16 using the MNIST and ImageNet datasets. The work proposes the use of ADMM-based pruning and clustering to exploit the maximum degree of data redundancy. Their approach quantizes a portion of the weights in each training iteration, retraining the non-quantized weights, outperforming prior works. Tung et al. [25] propose to perform pruning and quantization in parallel during the training process, and analyze the accuracy impact in AlexNet, GoogleNet, and ResNet. The authors claim that their methodology allows recovery from premature pruning. However, their work uses uniform quantization, which consists of dividing the numerical range of weights into uniform parts, as opposed to k-means, which is a non-uniform quantization method. Other works [23, 24] also explore K-means clustering in terms of accuracy, but do not report energy analyses. It is also worth noting that most works analyze clustering in much less complex neural networks than YOLO, and focus on the image classification domain. Because of this significant complexity difference, works such as [23] achieve as low as 1-bit weights, but this is not feasible with YOLOv3 when applying post-training clustering. Works such as [26, 27] also propose binary weights during training. Other works consider other forms of quantization such as integer quantization [28] or uniform quantization [29, 30]. In our work, we assess the impact of post-training weight clustering in terms of accuracy and energy for a large case study such as YOLOv3. The main contribution of our work is the in-depth accuracy and energy analysis of weight clustering for a complex camera-based object detection system, covering the gap left by previous works. To the best of our knowledge, our work is the first one to provide such a complex case study, thus helping designers make the right choice for the design of their accelerators and providing hints on how to evaluate accuracy, energy and bandwidth at scale.

## 6 Conclusions and Future Work

Weight clustering is a promising solution to decrease the memory bandwidth requirements and energy consumption of DNNs.
However, it has been evaluated on limited examples so far, hence providing insufficient information about its real benefits and tradeoffs. This paper presents an at-scale case study considering different flavors of weight clustering, the main characteristics of an appropriate accelerator with the right dataflow, a relevant application
Accelerators implementing image-based object detection with Deep Neural Networks process large amounts of data to fetch images and neural network parameters, which results in high power consumption and bandwidth requirements, especially when processing video streams. Several solutions exist to mitigate the power and bandwidth requirements of data fetching, but they have been evaluated at scales smaller than that of the target application, which makes it difficult to find the best balance in real operation. In this paper, we build an at-scale evaluation of a key power and bandwidth optimization - weight clustering - for You Only Look Once v3 (YOLOv3), a neural-network-based system for image-based object detection. In our evaluation, for a system configuration with a systolic array using an output stationary dataflow
2301.13619
Brillouin and Kerr nonlinearities of a low-index silicon oxynitride platform
Nonlinear optical effects including stimulated Brillouin scattering (SBS) and four-wave mixing (FWM) play an important role in microwave photonics, optical frequency combs, and quantum photonics. Harnessing SBS and FWM in a low-loss and versatile integrated platform would open the path to building large-scale Brillouin/Kerr-based photonic integrated circuits. In this letter, we investigate the Brillouin and Kerr properties of a low-index (n=1.513 @ 1550 nm) silicon oxynitride (SiON) platform. We observed, for the first time, backward SBS in SiON waveguides with a Brillouin gain coefficient of 0.3$\rm m^{-1}W^{-1}$, which can potentially be increased to 0.95$\rm m^{-1}W^{-1}$ by just tailoring the waveguide cross-section. We also performed FWM experiments in SiON rings and obtained the nonlinear parameter $\gamma$, of 0.02 $\rm m^{-1}W^{-1}$. Our results point to a low-loss and low-index photonic integrated platform that is both Brillouin and Kerr active.
Kaixuan Ye, Yvan Klaver, Oscar A Jimenez Gordillo, Roel Botter, Okky Daulay, Francesco Morichetti, Andrea Melloni, David Marpaung
2023-01-31T13:23:38
http://arxiv.org/abs/2301.13619v1
# Brillouin and Kerr nonlinearities of a low-index silicon oxynitride platform ###### Abstract Nonlinear optical effects including stimulated Brillouin scattering (SBS) and four-wave mixing (FWM) play an important role in microwave photonics, optical frequency combs, and quantum photonics. Harnessing SBS and FWM in a low-loss and versatile integrated platform would open the path to building large-scale Brillouin/Kerr-based photonic integrated circuits. In this letter, we investigate the Brillouin and Kerr properties of a low-index (n=1.513 @ 1550 nm) silicon oxynitride (SiON) platform. We observed, for the first time, backward SBS in SiON waveguides with a Brillouin gain coefficient of 0.3 m\({}^{-1}\)W\({}^{-1}\), which can potentially be increased to 0.95 m\({}^{-1}\)W\({}^{-1}\) by just tailoring the waveguide cross-section. We also performed FWM experiments in SiON rings and obtained a nonlinear parameter \(\gamma\) of 0.02 m\({}^{-1}\)W\({}^{-1}\). Our results point to a low-loss and low-index photonic integrated platform that is both Brillouin and Kerr active. + Footnote †: Corresponding author: [email protected] ## I Introduction Stimulated Brillouin scattering (SBS), which is an interaction between optical and acoustic waves, is currently revolutionizing photonic integrated circuit designs [1; 2; 3; 4; 5; 6; 7; 8]. Featuring a narrow-band (tens of MHz) gain resonance shifted around tens of GHz away from the pump light, on-chip SBS plays a significant role in microwave photonics [9; 10; 11], narrow-linewidth integrated lasers [7; 12; 13], and on-chip nonreciprocal light propagation [3; 14]. An efficient on-chip SBS process requires simultaneously guiding both the optical and gigahertz acoustic waves in a waveguide, making it challenging to realize in most integrated platforms. Several encouraging results have been demonstrated recently in various platforms, including chalcogenide [2], silicon [5], doped silica [15], aluminum gallium arsenide [16], and aluminum nitride [17]. In addition, SBS has also been observed in silicon nitride-based waveguides [7; 8; 18], opening the pathway to intersect Brillouin scattering with Kerr nonlinearities in low-loss and mature platforms. Silicon oxynitride (SiON) is another highly-developed integrated platform that has appealing properties including low propagation loss, a wide transparency window, the absence of multi-photon absorption effects, and stress-free fabrication [19; 20]. The optical and mechanical properties of SiON can be tuned continuously between those of SiO\({}_{2}\) and Si\({}_{3}\)N\({}_{4}\) at different nitrogen/oxygen (N/O) ratios [21; 22]. For example, a variety of SiON, known as Hydex (n=1.7 @ 1550 nm), has been widely used for Kerr-based nonlinear optics applications including optical frequency combs [23], optical neural networks [24], and quantum photonics [25]. A slightly higher-index SiON (n=1.83 @ 1550 nm) was also proposed in [20; 26] for Kerr-based applications. In both cases, the SiON platforms have a refractive index close to silicon nitride (n=1.98 @ 1550 nm) instead of silicon oxide (n=1.45 @ 1550 nm). The relatively high refractive index induces a high nonlinear index, making it useful for Kerr-based nonlinear optics applications. From the Brillouin perspective, however, high-refractive-index SiON is less attractive because its high nitrogen content leads to a meager photoelastic coefficient p\({}_{12}\), owing to the weak p\({}_{12}\) of Si\({}_{3}\)N\({}_{4}\) [18].
Moreover, high-index SiON also has mechanical properties similar to those of Si\({}_{3}\)N\({}_{4}\), such as a high acoustic velocity that prevents acoustic confinement when cladded with SiO\({}_{2}\) [7; 8; 18].

Figure 1: (a) Artistic representation of the SiON waveguides, showing the four-wave mixing process in an all-pass microring resonator and the backward stimulated Brillouin scattering (SBS) in a spiral waveguide. (b) The cross-section of the SiON platform in our work. (c) The chip photograph of the SiON microring resonators with a FSR of 50 GHz. (d) The chip photograph of the 5-cm SiON straight waveguide.

In this paper, we investigate the Brillouin and Kerr properties of a SiON integrated platform with a relatively lower refractive index (n=1.513 @ 1550 nm). In contrast to the SiON platforms mentioned above, the SiON platform investigated here has a larger photoelastic coefficient p\({}_{12}\), a lower acoustic velocity, and a larger cross-section, all of which lead to an enhanced SBS effect. We experimentally observed, for the first time to our knowledge, backward SBS in SiON waveguides. We also characterized the Brillouin gain coefficient \(g_{b}\) of the SiON waveguides with different widths. We found that the \(g_{b}\) of this SiON waveguide can potentially be increased to around 0.95 m\({}^{-1}\)W\({}^{-1}\) by simply tailoring the waveguide cross-section. This sufficiently large Brillouin gain coefficient, together with the low propagation loss, makes it possible to generate decent SBS gain for a plethora of Brillouin-based applications in this SiON platform. Furthermore, we also measured the nonlinear parameter \(\gamma\) and nonlinear index \(n_{2}\) of this SiON platform through four-wave mixing (FWM) experiments in a ring resonator. While the measured \(\gamma\) is an order of magnitude lower when compared to that of high-index SiON, we expect that with lower losses and higher pump power, the unique interplay between the SBS and Kerr effects, such as the Brillouin-assisted Kerr frequency comb [27; 28], could be observed in this integrated platform. ## Results We performed the backward SBS and four-wave mixing experiments in single-pass (spiral or straight) waveguides and microring resonators, respectively, as shown in Fig. 1(a). The cross-section of this platform is shown in Fig. 1(b) [29; 30]. The 2.2 um-thick SiON layer has a refractive index \(n\) of 1.513 at 1550 nm. It is on top of a 15-um SiO\({}_{2}\) layer and is covered by a 7 um-thick SiO\({}_{2}\) upper cladding. The refractive index contrast \(\Delta n\) between the core and the cladding is 4.4%, enabling a bending radius of 600 um with negligible radiation losses. Fig. 1(c) shows the photograph of the microring resonators in this platform with a free spectral range (FSR) of 50 GHz and coupling coefficients varying from 0.05 to 0.8. Fig. 1(d) shows the photograph of several groups of 5-cm straight waveguides with different widths. The measured propagation loss of those straight waveguides is 0.25 dB/cm, with a coupling loss to lensed-tip fibers of approximately 3 dB/facet. ### Stimulated Brillouin Scattering in SiON Waveguides We developed a finite element model [8] in COMSOL to estimate the SBS response of the SiON waveguides. The simulated optical field and the corresponding acoustic response of the 2.2 um-wide SiON waveguide are shown in Fig. 2(a) and (b), respectively. The optical field is well confined around the SiON core area because of total internal reflection (TIR).
However, the TIR condition does not hold for the acoustic response because the acoustic velocity of the SiON (\(\sim\) 6.2 km/s) is higher than that of the SiO\({}_{2}\) (\(\sim\) 5.9 km/s). As a result, part of the acoustic field leaks into the cladding as shown in Fig. 2(b). Nevertheless, most of the acoustic field still remains inside the SiON core because of the relatively large cross-section area [31]. This results in a large overlap between the optical and acoustic fields, which leads to an improved Brillouin gain coefficient. Extensive simulation results of the SBS gain coefficients are included in the Supplementary.

Figure 2: (a) Simulated optical mode of the SiON waveguide. (b) Simulated acoustic response of the SiON waveguide. (c)-(h) Measured SBS gain spectra of the 2.0 μm, 2.2 μm, 2.3 μm, 2.4 μm, 2.6 μm, and 3.5 μm-wide SiON waveguides, respectively. (i) Brillouin gain coefficients and linewidth of the SiON waveguides with different widths.

To verify the simulation results, we characterized the SBS responses of the SiON waveguides with a pump-probe experimental apparatus [8; 18]. The pump and probe light are intensity-modulated and coupled into the opposite facets of the waveguide. We keep the pump wavelength fixed at 1561 nm while sweeping the probe at frequencies down-shifted from the pump by about 15 GHz. When the frequency difference between the pump and the probe is close to the Brillouin frequency shift of the SiON waveguide, the probe will experience the SBS gain and a peak will be detected at the lock-in amplifier (see the Supplementary for more details about the SBS experiment). Several 5 cm-long SiON waveguides are characterized to investigate the influence of waveguide width on the Brillouin gain spectra. The measured SBS responses of the 2.0 μm, 2.2 μm, 2.3 μm, 2.4 μm, 2.6 μm, and 3.5 μm-wide waveguides are shown in Fig. 2(c) to (h), respectively. All waveguides show a clear SBS peak well above the noise floor, with the Brillouin frequency shift increasing from 14.22 GHz for the 2.0 μm-wide waveguide to 14.48 GHz for the 3.5 μm-wide waveguide. Fig. 2(i) plots the measured Brillouin gain coefficient \(g_{b}\) and the SBS linewidth of the SiON waveguides with different widths (see the Supplementary for more details about the Brillouin gain coefficient calculation). The Brillouin gain coefficient \(g_{b}\) increases from 0.1 m\({}^{-1}\)W\({}^{-1}\) to 0.32 m\({}^{-1}\)W\({}^{-1}\) when the waveguide width increases from 2.0 μm to 3.5 μm. In the meantime, the linewidth of the SBS peak reduces from 358 MHz to 105 MHz. The increasing Brillouin gain coefficient and the narrowing of the SBS linewidth indicate an improvement in acoustic confinement when the SiON waveguides become wider. The Brillouin gain coefficient can be further increased by optimizing the cross-section of the waveguide using a genetic algorithm [8]. Fig. 3 (a) and (b) show the simulated optical mode and the acoustic response of a SiON waveguide with the same core refractive index but with an optimized cross-section for SBS gain. The dimension of such a waveguide is 4.0 μm \(\times\) 3.2 μm with a top cladding of 3 μm and a bottom cladding of 10 μm. Compared to the optical and acoustic fields of the waveguide structure in this work, less of the acoustic field is scattered into the cladding while the optical field is still well confined in the optimized waveguide structure.
The Brillouin gain spectrum of the optimized waveguide structure is shown in Fig. 3 (c). The simulated peak Brillouin gain coefficient of this waveguide is 0.95 m\({}^{-1}\)W\({}^{-1}\), which is 3\(\times\) higher than that of the waveguide structure measured in this work. Furthermore, the propagation loss in this SiON platform can also be significantly lowered by reducing sidewall roughness and improving the thermal annealing process [30], allowing for a longer effective waveguide length for the SBS process. Fig. 3 (d) estimates the SBS gain of both the measured and the optimized SiON waveguides with different propagation losses. The optimized Brillouin gain coefficient (around 0.95 m\({}^{-1}\)W\({}^{-1}\)), along with the improved propagation loss (around 0.1 dB/cm), can enhance the SBS gain from less than 0.5 dB to near 1.5 dB for a 60-cm waveguide, which is sufficient for applications like SBS-based narrow-bandwidth microwave photonic notch filters [8; 10].

Figure 3: (a) Simulated optical mode and (b) simulated acoustic response and (c) simulated Brillouin gain spectrum of the optimized SiON waveguide. (d) Estimated SBS gain from the optimized and current SiON waveguides.

### Four-wave mixing in SiON Waveguides

We further investigate the Kerr nonlinearities of this SiON platform. High-index SiON platforms are widely used for Kerr-based nonlinear optics applications because of their relatively large nonlinear parameter \(\gamma\) [19]. However, the nonlinear parameter \(\gamma\) is highly dependent on the refractive index and the geometry of the waveguide. The SiON waveguide in this work has a relatively lower refractive index and a larger cross-section compared with other SiON platforms [19; 20], and the nonlinear index \(n_{2}\) and nonlinear parameter \(\gamma\) of the SiON waveguide in this platform have never been characterized before. We devised a four-wave mixing (FWM) experiment for the nonlinear parameter characterization. Because of the limited effective length of the available samples, the FWM conversion efficiency of the straight waveguide is comparable with that of the fiber pigtails, making it difficult to accurately measure the \(n_{2}\) and the \(\gamma\). We use all-pass ring resonators to enhance the FWM in the SiON waveguide so that the contribution from fibers in the setup can be neglected [32]. The ring resonator applied in our experiment is made of the 2.2 um-wide SiON waveguide and it has a free spectral range (FSR) of 50 GHz and a power coupling coefficient of 0.05. The pump laser is locked close to the resonance of the ring resonator to mitigate the thermal influence on the ring resonator. The signal laser is set close to 2 \(\times\) FSR away from the pump signal and is combined with the pump light with a 99:1 coupler. The combined pump and signal are coupled into the all-pass ring resonator with a lensed fiber with a spot size of 2 um. The generated idler is then coupled out from the chip and sent to the optical spectrum analyzer to measure the conversion efficiency from the signal to the generated idler (see the Supplementary for details of the FWM experiment). To determine the field enhancement factor of the FWM process in the ring resonator, we first characterized the resonance response of the ring resonator with a vector network analyzer, as shown in Fig. 4 (a) (see the Supplementary for details of the characterization).
The measured full-width at half-maximum (FWHM) is 612 MHz with an extinction ratio of 8.9 dB, corresponding to a loaded Q-factor of 330,000 and a propagation loss of 0.27 dB/cm. Fig. 4 (b) shows the measured FWM response of the 50 GHz SiON ring resonator. A clear peak is shown at 2 \(\times\) FSR down-shifted from the pump frequency, which is the idler generated from the FWM process between the pump and signal in the ring resonator. The nonlinear index \(n_{2}\) and nonlinear parameter \(\gamma\) of the SiON waveguide in this platform can be estimated from the conversion efficiency between the signal and the idler (see the Supplementary for details of the calculation). Fig. 4 (c) shows the measured conversion efficiency of the FWM process at different pump powers. Based on this measurement, the calculated \(\gamma\) and \(n_{2}\) of the 2.2 um-wide SiON waveguide are 0.024 m\({}^{-1}\)W\({}^{-1}\) and 4.16 \(\times 10^{-20}\) m\({}^{2}\)/W, respectively. We also estimated the nonlinear parameter \(\gamma\) of the SiON waveguides with different widths based on the measured value of \(n_{2}\), as shown in Fig. 4 (d). The \(\gamma\) decreases from around 0.025 m\({}^{-1}\)W\({}^{-1}\) to 0.020 m\({}^{-1}\)W\({}^{-1}\) when the waveguide width reduces from 2.0 um to 3.5 um.

Figure 4: (a) Measured resonance response of the SiON ring resonator. (b) Measured four-wave mixing response of the SiON ring resonator. (c) Conversion efficiency of the four-wave mixing at different pump power. (d) The estimated nonlinear parameter γ of the SiON waveguides with different widths.

## Discussion

For Brillouin-Kerr interactions, the balance between the nonlinearities needs to be considered. In microcavities, it is generally preferred to have larger Brillouin gain, as it is easier to inhibit cascading or other unwanted interactions via mode manipulation. Comparing the values of the measured \(g_{b}\) in Fig. 2 (i) and \(\gamma\) in Fig. 4 (d), the SiON waveguides reported here have an order of magnitude larger Brillouin gain compared to Kerr nonlinearity. This \(g_{b}/\gamma\) ratio is similar to previous demonstrations of Brillouin-assisted Kerr frequency combs in [27; 28], showing the potential to realize it in an integrated platform. In conclusion, we have investigated the Brillouin and Kerr properties of a SiON integrated platform with a relatively low refractive index. We observed, for the first time, the backward SBS response of those SiON waveguides. We also measured its nonlinear index \(n_{2}\) and nonlinear parameter \(\gamma\). These SiON waveguides can be fabricated in a versatile and low-loss integrated platform, and can potentially lead to a plethora of Brillouin and Kerr-based applications, including narrow-bandwidth microwave photonic filters, narrow-linewidth lasers, and optical frequency combs. ## Author contributions D.M. and K.Y. developed the concept and proposed the physical system. K.Y. and Y.K. developed and performed numerical simulations. K.Y. performed the SBS characterisation with input from R.B., K.Y., and O.D. Y.K. and K.Y. performed the FWM experiments. O.A.J.G., F.M., and A.M. developed and fabricated the samples. K.Y., D.M., and Y.K. wrote the manuscript. D.M. led and supervised the entire project. ## Funding information This project is funded by the European Research Council Consolidator Grant (101043229 TRIFFIC) and Nederlandse Organisatie voor Wetenschappelijk Onderzoek (NWO) projects (740.018.021 and 15702).
Nonlinear optical effects, stimulated Brillouin scattering (SBS) and four-wave mixing (FWM), play important roles in microwave photonics, optical frequency combs, and quantum photonics. Harnessing SBS and FWM in a low-loss and versatile integrated platform would pave the way to building large-scale Brillouin/Kerr-based photonic integrated circuits. In this paper, we investigate the Brillouin and Kerr properties of a low-index (n=1.513 @ 1550nm) silicon oxynitride (SiON) platform. For the first time, backward SBS was observed in SiON waveguides at 1550 nm, with a Brillouin gain coefficient of 0.3 $\rm m^{-1}W^{-1}$.
2309.10532
A Cognitively-Inspired Neural Architecture for Visual Abstract Reasoning Using Contrastive Perceptual and Conceptual Processing
We introduce a new neural architecture for solving visual abstract reasoning tasks inspired by human cognition, specifically by observations that human abstract reasoning often interleaves perceptual and conceptual processing as part of a flexible, iterative, and dynamic cognitive process. Inspired by this principle, our architecture models visual abstract reasoning as an iterative, self-contrasting learning process that pursues consistency between perceptual and conceptual processing of visual stimuli. We explain how this new Contrastive Perceptual-Conceptual Network (CPCNet) works using matrix reasoning problems in the style of the well-known Raven's Progressive Matrices intelligence test. Experiments on the machine learning dataset RAVEN show that CPCNet achieves higher accuracy than all previously published models while also using the weakest inductive bias. We also point out a substantial and previously unremarked class imbalance in the original RAVEN dataset, and we propose a new variant of RAVEN -- AB-RAVEN -- that is more balanced in terms of abstract concepts.
Yuan Yang, Deepayan Sanyal, James Ainooson, Joel Michelson, Effat Farhana, Maithilee Kunda
2023-09-19T11:18:01
http://arxiv.org/abs/2309.10532v3
A Cognitively-Inspired Neural Architecture for Visual Abstract Reasoning Using Contrastive Perceptual and Conceptual Processing ###### Abstract We introduce a new neural architecture for solving visual abstract reasoning tasks inspired by human cognition, specifically by observations that human abstract reasoning often interleaves perceptual and conceptual processing as part of a flexible, iterative, and dynamic cognitive process. Inspired by this principle, our architecture models visual abstract reasoning as an iterative, self-contrasting learning process that pursues consistency between perceptual and conceptual processing of visual stimuli. We explain how this new Contrastive Perceptual-Conceptual Network (CPCNet) works using matrix reasoning problems in the style of the well-known Raven's Progressive Matrices intelligence test. Experiments on the machine learning datasets, the RAVEN family and PGM, show that CPCNet achieves higher accuracy than all previously published models while also using the weakest inductive bias. We also point out a substantial and previously unremarked class imbalance in the original RAVEN dataset, and we propose a new variant of RAVEN--AB-RAVEN--that is more balanced in terms of abstract concepts. ## Introduction Analogy-making--the process of comparing and contrasting two or more things to enable additional relational inferences of various kinds--has been argued to be one of the foundational aspects of human intelligence [12]. So, how do humans make analogies? Consider the simple analogy in Figure 1. What relationships do you notice? Initially, one might recognize fish and birds as animals that move around in the water and air, respectively. Fish and birds both have similar body structures in terms of their heads, fins/wings, and tails. However, one might further reflect that birds get propulsion from their wings, whereas many fish get propulsion from their tails. This alternate mapping (bird wings to fish tails, and bird tails to fish fins) is influenced by conceptual processing of the initial perceptual inputs of the figure, and can in turn influence further perceptual processing of which similarities we emphasize and how we build our analogical representations. Theories of human perceptual and conceptual systems (e.g., [1]), including in the context of analogy-making (e.g., [1]), have made observations about this kind of bidirectional interplay between perceptual and conceptual processing, and forms of this interplay have also been explored in knowledge-based (i.e., symbolic) computational models of analogical reasoning [12]. In this paper: 1. We propose a new, cognitively-inspired Contrastive Perceptual-Conceptual neural Network (CPCNet) that models this kind of interplay between perceptual and conceptual processing in the context of visual abstract reasoning tasks like the example shown in Figure 2. 2. Using the abstract reasoning datasets-RAVEN, I-RAVEN, RAVEN-FAIR, and PGM, we experimentally demonstrate that CPCNet is more effective than previous architectures by achieving the highest accuracy with the weakest inductive bias. Figure 1: A is to B as C is to D. But in what ways? Figure 2: An example item of Raven’s Progressive Matrices [13]. 3. Finally, we point out a substantial, previously unremarked class imbalance in the original RAVEN dataset, and we propose a new variant--AB-RAVEN--that is more balanced in terms of abstract concepts. 
## Approaches to Visual Abstract Reasoning Raven's Progressive Matrices (RPM) is a family of human intelligence tests created by Raven (1936) about a century ago. RPM is acclaimed as the best single-format intelligence test that exists to date for evaluating the core intelligence factors, such as general intelligence and fluid intelligence [14, 15]. RPM is a kind of visual abstract reasoning task, where human subjects are expected to discover abstract patterns and concepts from raw visual stimuli, and apply these abstract patterns to reason about the visual stimuli [16]. Figure 2 gives an example item of RPM. It consists of a matrix of images with the last entry missing and multiple (usually eight) answer choices. To solve such an item, the human subject needs to select an answer choice to complete the matrix so that the abstract patterns among rows and columns are consistent. For example, the abstract pattern in Figure 2 is that taking the union of the first two entries in a row (or a column) leads to the third entry in the row (or column), which leads to the correct answer of the fourth choice. In the recent surge of work using deep learning to tackle visual abstract reasoning, most deep neural network models have followed a standard image classification paradigm, as shown in Figure 2(a). Taking as input the raster images of matrix entries and answer choices, this paradigm repeatedly applies feature extractions as in the famous ImageNet work [13], decreasing the sizes of the spatial dimensions but increasing the size of the channel dimension, until a single vector can be obtained to represent the entire input problem, and then an MLP classification head is appended to predict the class label, which is the index of the correct answer choice. An alternate approach leverages the observations about human cognition outlined in the introduction above, i.e., that reasoning can often be enhanced by interleaving perceptual and conceptual processing, allowing each process to influence the other. Figure 2(b) illustrates this kind of approach. Taking the same raw visual stimuli as in the image classification paradigm, this alternate paradigm uses feature extractors, simulating early vision processing, to form an initial visual representation of input images. Two types of processing then follow: (1) perceptual processing that refines the perceptual (visual) representation of input images, for example, refining blurry feature maps of lines and angles to clear feature maps of shapes, and (2) conceptual processing that enriches the representation of abstract concepts, i.e., the relations between input images. Then comes the main difference between the image classification paradigm and this paradigm--these two types of processing form a dynamic cycle, in which the perceptual and conceptual processing depend on each other's output. This cycle allows for direct interplay between perceptual and conceptual processing. The cycle keeps running for multiple steps until a consistency between perceptual and conceptual processing is reached (thus adding a computational requirement for checking or representing the consistency at every step). Figure 3: Two Paradigms for Solving RPM. Note that the sizes and numbers of tensors are diagrammatic, not representing the implementation. The resulting consistent representation takes on a dual role as both perceptual and conceptual representation and is used to predict the answer label.
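For contrast with the interleaved paradigm, the image classification paradigm described above can be sketched in a few lines. This is a generic, illustrative baseline; the layer widths and the stacking of the 8 context panels with the 8 answer choices as 16 input channels are our own assumptions, not any specific published model.

```python
import torch
import torch.nn as nn

class ClassificationBaseline(nn.Module):
    """Standard image-classification paradigm for RPM: encode everything, classify the answer index."""
    def __init__(self, n_images: int = 16, n_choices: int = 8, width: int = 32):
        super().__init__()
        self.features = nn.Sequential(                     # shrink spatial dims, grow channels
            nn.Conv2d(n_images, width, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),         # single vector for the whole problem
        )
        self.head = nn.Sequential(nn.Linear(width, width), nn.ReLU(), nn.Linear(width, n_choices))

    def forward(self, panels: torch.Tensor) -> torch.Tensor:
        # panels: (batch, 16, H, W) = 8 context entries + 8 answer choices stacked as channels
        return self.head(self.features(panels))            # logits over the 8 answer choices
```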
Figure 2(b) depicts reasoning on RPM-like problems as a complex, flexible, and dynamic process. While it is not difficult to mechanically construct a deep neural net that mimics this kind of processing, the real technical difficulty lies is how to optimize it properly---given its dynamics, how can we make sure the network steadily converges to achieve the consistency needed to drive robust reasoning? We describe our solution to this challenge in this paper. ## Detailed Motivation We focus on the interplay between perceptual and conceptual representation, not only because we expect it to provide added flexibility and dynamics to a reasoning process, but also because this kind of entangled representation is frequently implied in human cognitive studies [1]. This theory of representation could also be corroborated by the eye tracking studies on RPM [1], in which the subject's attention moved back and forth between the matrix entries, visiting each entry multiple times, rather than scanning the entries linearly (though other explanations also exist for such gaze phenomena). This section will explain the feasibility of our approach in terms of implementation and the rationale in terms of analogical reasoning. The effectiveness of this paradigm will be shown in the experiment section. ### Feasibility for implementation This subsection explains how this paradigm can be implemented and the implementation can behave as we described in the introduction section. Given the complex and dynamic nature of the cognitively-inspired paradigm, it is apparently inadvisable to mechanically compose multiple neural net modules into a neural net according to Figure 2(b). Also, it is possible that the dynamic nature of the interplay could stop the training and inference from converging, if conditions that encourage convergence are not found. In the feed-forward architectures commonly used in deep neural nets, we do not have this type of convergence issue. Thus, we can try to approximate the dynamic process with a feed-forward network. Observe that there are two paths in Figure 2(b) that give the paradigm its dynamic nature, i.e., changing its own internal states---(1) _Path 1_ starting from the perceptual representation through the conceptual representation and returning back to the perceptual representation, and similarly, (2) _Path 2_ starting from the conceptual representation through the perceptual representation and returning back to the conceptual representation. Therefore, if we unroll the cycles with these two paths and add the consistency computation mentioned above, we will have a feed-forward architecture, as shown in Figure 4, which **approximates** the fully iterative, cognitively-inspired paradigm in a feed-forward manner. We thus name our approach the Contrastive Perceptual-Conceptual Network (CPCNet). As indicated in the introduction section, this paradigm pursues consistency between perceptual and conceptual representations (the rationale will be explained later). There are two designs in the feed-forward architecture to make this happen. First, after each iteration of _Path 1_ and _Path 2_, the red paths in Figure 4 first compute the consistency information between the perceptual and conceptual representations. This consistency information could be computed as a shared component between the perceptual and conceptual representations through a linear mapping, or more complex non-linear operations could be used to have more representational power. 
Either way, the consistency information is then used to update the perceptual and conceptual representations, for example, deducting the consistency information from the perceptual and conceptual representations. Intuitively, this way, it would become easier and easier for later iterations to research a "full" consistency because the job of finding consistency is amortized over multiple iterations. Second, the above computation structure only makes the consistency more likely to happen. But it does not necessarily happen. Thus, we designed two classification heads at the end of the architecture, which classify the perceptual and conceptual representations, respectively. Then, during training, the loss function is used to pull predictions of the perceptual and conceptual representations toward the same correct answer label. If the classification heads are not too deep, the pressure of loss function will go back through the classification heads and pull the perceptual and conceptual representations toward a consistent position. Here, the meaning of "consistent" becomes more clear---consistent representations could be mapped to the same correct answer label though some simple mappings, like a two-layer MLP. This design here is very similar to the idea of supervised contrastive learning [14], but it does not require data augmentation or pre-training. Instead, it relies on the delicate architecture design, which is inspired by the interleaved human cognitive process, to achieve the contrastive effect. ### Rationale for analogical reasoning Visual abstract reasoning tasks like the RPM can be considered as analogy tasks, because when an RPM item is solved, the human subject is making analogies between rows or columns (or both). To explain the rationale more clearly, let's consider the simpler visual analogy _"A is to B as C is to D"_ in Figure 1 from the introduction in more depth. Suppose that a human subject has formed an initial visual representation for each analog in the analogy by looking at the figure for two seconds, but it is probably not the final, correct representation. According to the influential structure-mapping theory of analogy [1], the subject needs to construct a mapping \(F\) between the base domain \((A,B)\) and the target domain\((C,D)\). This mapping depends on how the analogs are represented. Given the initial visual representations of analogs, the fish and the bird are probably mapped to each other according to their appearance, e.g., _head to head, fins_ to wings, and tail to tail,_ and the air and the water are mapped in a holistic way. Then, if the subject's thinking moves to a higher level and tries to map the relations (i.e., \(G\) in Figure 1) in \((A,B)\) to the ones in \((C,D)\), she will find that they do not exactly match. In particular, many fish use tails for propulsion and fins for direction, whereas birds use wings for propulsion and tails for direction. This observation on \(G\) updates the mapping \(F\) and the representations of analogs--_fish fins to bird tails, fish tails to bird wings, fish heads to bird heads_, and _air to water holistically_. Given this clearer mapping \(F\), if the subject moves up to a higher level again and compare the relations \(G\), the mapping between \(B\) and \(D\) could be further refined to _air dynamics is mapped to fluid dynamics_ (rather than their colors) and thus the representation of water and air are also updated to focus on their dynamics properties. 
If the subject can give initial representations of analogs that can directly lead to the final correct mappings \(F\) and relations \(G\), she may not need to go through any iterative process like this. However, in real-life situations where stimuli have so many meanings and connections, the correct representations of analogs cannot always be formed immediately. This iterative process of working on \(F\), \(G\), and the representations of analogs is always needed to make and understand analogies. Given that \(F\) corresponds to the perceptual processing and \(G\) corresponds to the conceptual processing, this iterative process is equivalent to the interplay between perceptual and conceptual processing. As for the desire for consistency, its rationale follows from the assumption that the analogy is completely interpreted or understood only if the iterative process has ended, i.e., no updates are being made to the representations of analogs anymore. Moreover, it has been well recognized that analogical proportions enjoy central permutation as a characteristic property [10]. That is, _A is to B as C is to D_ if and only if _A is to C as B is to D_. This corresponds to interpreting the analogy in Figure 1 in the horizontal or vertical direction. The two directions are equivalent: that one direction holds implies that the other direction also holds. Given this symmetry, \(G\) could also be regarded as a mapping between \((A,C)\) and \((B,D)\). If the interpretation of the analogy is unique, i.e., the mappings are unique, we will have \(F\circ G=G\circ F\), i.e., \(F\) and \(G\) are commutative. This equation is a very concise and beautiful description of analogy-making. And by pursuing consistency between perceptual and conceptual processing, we are actually pursuing equality in this equation, albeit in a data-driven and approximate way. ## Related Work There is a long line of research on computational solutions to RPM tasks, and especially to RPM-like datasets. Reviewing every one of them would be impossible here. We thus use a taxonomy to classify them into four categories and briefly describe each of them. More extensive reviews can be found in [13, 14, 15]. **Imagery-Based Approach.** Visual mental imagery refers to mental imagistic representation in human cognition [12]. It plays a crucial role in human visual reasoning ability. The most important characteristic of mental imagery is that humans can experience mental imagery in the absence of the concurrent sensory input. The imagery-based approach simulates human mental imagery by directly operating on the raw visual input and, through mental operations like rotation, addition, and subtraction, it can solve a substantial portion of the original RPM tests [13, 14]. **Logical Reasoning.** The computational models using logical reasoning work on symbolic representations of RPM items and reason in formal logic. For example, an entry image \(A\) in a matrix could be described by a set of propositions: "triangle(\(A\))=True, triangle-large(\(A\))=False, triangle-on-the-left(\(A\))=True, square(\(A\))=True, square-small(\(A\))=True, etc". The symbolic representations in these models are manually constructed or obtained through a preprocessing module. Representatives of this approach are ANALOGY [14], FAIRMAN, and BETTERMAN [13]. **Neuro-Symbolic Approach.** The neuro-symbolic models consist of two modules--a neural perception frontend and a symbolic reasoning backend.
The neural perception frontend (usually implemented as neural nets) extracts the symbolic representation of entry images (including but not limited to logical representation and probability representation), which is based on a predefined formal representation system. The symbolic reasoning backend performs symbolic manipulation or probability calculation according to the predefined formal representation system. The neuro-symbolic approach and the next approach--learning models--are data-driven approaches, whereas the first two approaches are knowledge-based approaches. Examples of neuro-symbolic models for solving RPM-like tasks include ALANS, PrAE, VAE-GPP, TRIVR, LoGe, NVSA and AMR (Zhang et al., 2022, 2021; Shi, Li, and Xue, 2021; He, Ren, and Bai, 2021; Yu et al., 2021; Hersche et al., 2023; Xu et al., 2023). **Learning Models.** Unlike the previous approach, the learning approach does not rely on any predefined representation system of geometric objects and abstract patterns. Instead, the representations are learned from raw perceptual input and represented as feature vectors. At the time of writing, almost all of the popular deep learning architectures have been tried on RPM tasks, such as CNNs, the ResNet family, recurrent neural networks, and attention models (Hu et al., 2021; Benny, Pekar, and Wolf, 2021; Sahu, Basioti, and Pavlovic, 2023; Wei, Chen, and Yuan, 2023). This approach has become more and more popular recently because of large RPM-like datasets created in the last five years (Barrett et al., 2018; Zhang et al., 2019). However, it needs to be pointed out that these datasets are not perfect. For example, the RAVEN dataset (Zhang et al., 2019) is flawed because of the context-blind issue, i.e., training only on the answer choices leads to good performance (Hu et al., 2021). Thus, two variants of RAVEN--I-RAVEN (Hu et al., 2021) and RAVEN-FAIR (Benny, Pekar, and Wolf, 2021)--were proposed to fix this issue. Besides the different variants of RAVEN, the evaluation setting of learning models is a more complicated issue. We will elaborate more on this in the experiment section. ## CPCNet for Solving RPM-Like Datasets Based on the above discussion about the feasibility and rationale, we can now formalize our method for solving RPM-like problems. In the main paper, we describe what kind of operations are applied at each step; detailed implementation and hyper-parameters can be found in the supplementary material. We use the single-choice evaluation protocol, which is more challenging than the commonly-used multi-choice evaluation protocol, because comparing answer choices gives the model an advantage over evaluating each single answer choice independently (see more about this in (Benny, Pekar, and Wolf, 2021)). Thus, by inserting each answer choice into the matrix and evaluating them individually, we actually turn every multi-choice item into eight binary classification items, where the input to our model is a real tensor \(x\) of shape \((R=rows,C=columns,H_{orig}=height,W_{orig}=width,channels=1)\) and the class label \(y\) indicates whether the inserted answer choice is correct. Figure 4: CPCNet: A Feed-Forward Architecture that unrolls the paradigm in Figure 2(b). For the feature extractor that simulates early vision in Figure 4, we adopt a convolution-based encoder \(f_{E}\) whose output channel number is set to \(K>1\).
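The per-entry encoding and the unrolled perceptual/conceptual iteration described here can be sketched as follows. This is a minimal PyTorch-style illustration of the structure formalized by the equations below; the kernel sizes, module names, and layer choices are our own illustrative assumptions rather than the authors' released implementation.

```python
import torch
import torch.nn as nn

class CPCIteration(nn.Module):
    """One unrolled CPCNet iteration on tensors of shape (B, R, C, H, W, K)."""
    def __init__(self, K: int):
        super().__init__()
        self.h1 = nn.Conv2d(K, K, 3, padding=1)   # perceptual processing, Path 1 (over H, W)
        self.h2 = nn.Conv2d(K, K, 3, padding=1)   # perceptual processing, Path 2
        self.g1 = nn.Conv2d(K, K, 3, padding=1)   # conceptual processing, Path 1 (over R, C)
        self.g2 = nn.Conv2d(K, K, 3, padding=1)   # conceptual processing, Path 2
        self.q = nn.Sequential(nn.Linear(K, K), nn.ReLU(), nn.Linear(K, K))  # consistency MLP

    @staticmethod
    def _conv_over(x, conv, dims):
        # Fold all dimensions except the chosen pair into the batch dimension.
        B, R, C, H, W, K = x.shape
        if dims == "hw":
            y = x.permute(0, 1, 2, 5, 3, 4).reshape(B * R * C, K, H, W)
            return conv(y).reshape(B, R, C, K, H, W).permute(0, 1, 2, 4, 5, 3)
        y = x.permute(0, 3, 4, 5, 1, 2).reshape(B * H * W, K, R, C)
        return conv(y).reshape(B, H, W, K, R, C).permute(0, 4, 5, 1, 2, 3)

    def forward(self, z1, z2):
        u1 = self._conv_over(self._conv_over(z1, self.g1, "rc"), self.h1, "hw")  # Path 1
        u2 = self._conv_over(self._conv_over(z2, self.h2, "hw"), self.g2, "rc")  # Path 2
        v1, v2 = self.q(u1), self.q(u2)     # consistency information on the channel dimension
        return u1 - v2, u2 - v1             # deduct the other path's consistency component

# Example: both paths start from the same per-entry encoding of one candidate-completed matrix.
B, R, C, H, W, K = 2, 3, 3, 20, 20, 32
z = torch.randn(B, R, C, H, W, K)
z1, z2 = CPCIteration(K)(z, z)
```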
Since the early vision usually does not involve forming abstract relations which are between entry images, \(f_{E}\) is applied to encode each matrix entry individually: \[z_{r,c} =f_{E}(x_{r,c})\;\forall(r,c)\in\{1,\ldots,R\}\times\{1,\ldots,C\} \tag{1}\] \[z =[z_{r,c}]_{r=1,\ldots,R,c=1,\ldots,C}\in\mathbb{R}^{R\times C \times H\times W\times K} \tag{2}\] where \(H<H_{orig}\) and \(W<W_{orig}\) as channels are increased from 1 to \(K\). Let \(z_{1}^{(0)}=z_{2}^{(0)}=z\) for the following Path 1 and 2, respectively. For each iteration \(i\in\{1,\ldots,L\}\) after \(f_{E}\), we need to define Path 1, Path 2, and the consistency computation between them. For Path 1, we define perceptual and conceptual processing as convolution-based modules \(h_{1}^{(i)}\) and \(g_{1}^{(i)}\), respectively. Similarly, for Path 2, we define the perceptual and conceptual processing as convolution-based modules \(h_{2}^{(i)}\) and \(g_{2}^{(i)}\). For the consistency computation, we define a two-layer MLP \(q^{(i)}\). The hyper-parameters of these modules are all set to values that preserve the input tensor shape \((R,C,H,W,K)\), i.e., the output channels of \(h_{1}^{(i)}\), \(g_{1}^{(i)}\), \(h_{2}^{(i)}\), and \(g_{2}^{(i)}\) and the output units of \(q^{(i)}\) are all set to \(K\). For RPM task, the abstract concepts lie in the row and column dimensions as the abstract concepts are represented by rows and columns. We thus apply the convolutions of conceptual processing \(g_{1}^{(i)}\) and \(g_{2}^{(i)}\) on the \((R,C)\) dimensions of the input tensor, and apply the convolutions of perceptual processing \(h_{1}^{(i)}\) and \(h_{2}^{(i)}\) on on the \((H,W)\) dimensions. And the consistency computation \(q^{(i)}\) is applied on the channel dimension. Note that dimensions when not being computed are treated as transparent, i.e., like extended batch dimensions. Let the outputs from Path 1 and 2 of Iteration \(i-1\) be \(z_{1}^{(i-1)}\) and \(z_{2}^{(i-1)}\), the computation of Iteration \(i\) is: \[u_{1} =h_{1}^{(i)}\circ g_{1}^{(i)}(z_{1}^{(i-1)}) \tag{3}\] \[u_{2} =g_{2}^{(i)}\circ h_{2}^{(i)}(z_{2}^{(i-1)})\] (4) \[v_{1} =q^{(i)}(u_{1})\] (5) \[v_{2} =q^{(i)}(u_{2})\] (6) \[z_{1}^{(i)} =u_{1}-v_{2}\] (7) \[z_{2}^{(i)} =u_{2}-v_{1} \tag{8}\] At last, we define two classification heads \(p_{1}\) and \(p_{2}\) for Path 1 and Path 2, respectively. \[\hat{y}_{1} =p_{1}(flatten(mean(z_{1}^{(L)}))) \tag{9}\] \[\hat{y}_{2} =p_{2}(flatten(mean(z_{2}^{(L)}))) \tag{10}\] where the \(mean\) takes the mean over the channel dimension of size \(K\) and the \(flatten\) flattens the input to a vector of length \(R\times C\times H\times W\). For training, we compute binary cross entropy losses for both \(\hat{y}_{1}\) and \(\hat{y}_{2}\) with respect to \(y\) and add them up as the final loss. For testing, we simply add up the \(z_{1}^{(L)}\) and \(z_{2}^{(L)}\) as a score of the input \(x\) and select the highest score among all the answer choices. ## Experiments ### Datasets and a Striking Observation We did our experiments on the RAVEN dataset (Zhang et al., 2019) and its variants. 
RAVEN items have 7 spatial configurations to organize the geometric objects in matrix entries (see Figure 5). Each configuration has 6000, 2000, and 2000 items for training, validation, and test, respectively. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline & Number & Position & Number/Position & Type & Size & Color \\ \hline Constant & 0 & 0 & 31803 & 19240 & 16451 & 21817 \\ \hline Progression & 1857 & 1809 & 0 & 19386 & 14049 & 12797 \\ \hline Arithmetic & 872 & 2933 & 0 & 0 & 13012 & 12686 \\ \hline Distribute-Three & 2925 & 2791 & 0 & 19447 & 16355 & 12767 \\ \hline Sum by Color & \multicolumn{4}{c|}{15187} & \multicolumn{4}{c|}{209810} \\ \hline \end{tabular} \end{table} Table 1: Numbers of RAVEN (I-RAVEN and RAVEN-FAIR) items containing each combination of abstract rules and attributes. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline & Number & Position & Number/Position & Type & Size & Color \\ \hline Constant & 0 & 0 & 19574 & 17507 & 15263 & 21220 \\ \hline Progression & 4058 & 4040 & 0 & 17641 & 11998 & 11010 \\ \hline Arithmetic & 6329 & 6307 & 0 & 0 & 11508 & 10932 \\ \hline Distribute-Three & 6335 & 6421 & 0 & 17379 & 15080 & 10907 \\ \hline Sum by Color & \multicolumn{4}{c|}{33490} & \multicolumn{4}{c|}{180219} \\ \hline \end{tabular} \end{table} Table 2: Numbers of AB-RAVEN items containing each combination of abstract rules and attributes. \begin{table} \begin{tabular}{|c|l c c c c c c c c|} \hline & Model & Avg. Acc. & Center & 2x2Grid & 3x3Grid & L-R & U-D & O-IC & O-IG \\ \hline \multirow{3}{*}{Multi-Choice} & LEN & 78.3\% & 82.3\% & 58.5\% & 64.3\% & 87.0\% & 85.5\% & 88.9\% & 81.9\% \\ & MXGNet & 83.91\% & - & - & - & - & - & - & - \\ Evaluation & CoPINet & 91.42\% & 95.05\% & 77.45\% & 78.85\% & 99.10\% & 99.65\% & 98.50\% & 91.35\% \\ Protocol & DCNet & 93.58\% & 97.80\% & 81.70\% & 86.65\% & 99.75\% & 99.75\% & 98.95\% & 91.45\% \\ & SAVIR-T & 94.0\% & 97.8\% & 94.7\% & 83.8\% & 97.8\% & 98.2\% & 97.6\% & 88.0\% \\ \hline \multirow{9}{*}{Single-Choice} & WReN & 14.69\% & 13.09\% & 28.62\% & 28.27\% & 7.49\% & 6.34\% & 8.38\% & 10.56\% \\ & ARNe & 19.67\% & - & - & - & - & - & - & - \\ & NCD & 39.66\% & 45.45\% & 35.50\% & 39.50\% & 34.85\% & 33.40\% & 40.25\% & 30.00\% \\ & PrAE & 65.03\% & 76.50\% & 78.60\% & 28.55\% & 90.05\% & 90.85\% & 48.05\% & 42.60\% \\ Single-Choice & ALANS & 74.4\% & 69.1\% & 80.2\% & 75.0\% & 72.2\% & 73.3\% & 76.3\% & 74.9\% \\ Evaluation & MRNet & 84.0\% & - & - & - & - & - & - & - \\ Protocol & NVSA & 87.7\% & 99.7\% & 93.5\% & 57.1\% & 99.8\% & 99.1\% & 98.1\% & 65.4\% \\ & SCL & 91.6\% & 98.1\% & 91.0\% & 82.5\% & 96.8\% & 96.5\% & 96.0\% & 80.1\% \\ & AlgebraicMR & 92.9\% & 98.8\% & 91.9\% & **93.1\%** & 99.2\% & 99.1\% & 98.2\% & 70.1\% \\ & Rel-AIR & 94.1\% & 99.0\% & 92.4\% & 87.1\% & 98.7\% & 97.9\% & 98.0\% & 85.3\% \\ & CPCNet (Ours) & **96.92\%** & **100.0\%** & **96.70\%** & 86.05\% & **100.0\%** & **99.90\%** & **99.90\%** & **95.90\%** \\ \hline & Human & 84.4\% & 95.5\% & 81.8\% & 79.6\% & 86.4\% & 81.8\% & 86.4\% & 81.8\% \\ \hline \end{tabular} \end{table} Table 3: Accuracies on the original RAVEN test set under the multi-choice and single-choice evaluation protocols. Figure 5: 7 spatial configurations of the RAVEN dataset. Each configuration is illustrated by a complete 3x3 matrix. In the **Center** configuration, each entry contains only one object located at the center. In **2x2Grid** and **3x3Grid**, objects can only be located on the grid positions in each entry. In **Left-Right** and **Up-Down**, each entry contains exactly two objects located at the fixed positions shown in the figure. **Out-InCenter** is a combination of two center configurations and **Out-InGrid** is a combination of a center configuration and a 2x2Grid configuration. A very striking result from almost all previous works on RAVEN is that the accuracies on 2x2Grid, 3x3Grid, and Out-InGrid are always significantly lower than on other item configurations. Some argue this result is because specific abstract rules (i.e., concepts, relations, or patterns) are difficult to learn [22]; some argue that the noise attributes in the grid configurations cause this result [23]. Although these arguments are not wrong, the fundamental reason for this result might be a much more mundane (but apparently previously unremarked!) one--that RAVEN is an extremely imbalanced dataset in terms of the abstract rules represented in its problems. This point can be seen in Table 1, which counts dataset items for each combination of abstract rules and attributes. There are two types of combinations in this table--the red ones that exist only in the 2x2Grid, 3x3Grid, and Out-InGrid configurations and the green ones that exist mainly in the other four configurations and exist in roughly 65% of the items of 2x2Grid, 3x3Grid, and Out-InGrid (their totals are given in the "Sum by Color" row of the table). Moreover, the sum of the green ones is roughly 15 times the sum of the red ones. Therefore, the red ones are much less represented both in their own configurations and globally. This argument also applies to RAVEN's variants, such as I-RAVEN and RAVEN-FAIR, because they share the same set of abstract rules and attributes with the original RAVEN. As we all know, deep learning models require sufficient data to work properly and every deep learning model has a lower limit for training data. We thus hypothesize that it is because the red ones in Table 1 are less represented that previous models usually work relatively worse on 2x2Grid, 3x3Grid, and Out-InGrid. To verify this, we constructed a new variant of RAVEN which is more Balanced in terms of Abstract rules and attributes (we thus call it AB-RAVEN). Table 2 shows the statistics of AB-RAVEN. It was made more balanced by decreasing the number of non-grid training items and increasing the number of grid training items while keeping the overall size of the training set unchanged. The validation and test sets of AB-RAVEN remain the same as RAVEN's. If the hypothesis is true, training on this dataset will lead to a smaller gap between grid and non-grid (testing) accuracies. Meanwhile, this dataset can also check if previous models' high accuracies on non-grid configurations are a result of excessively many training items of non-grid configurations. As you can see in Table 2, AB-RAVEN is not perfectly balanced, but just more balanced than RAVEN. This is because making it perfectly balanced would violate the design of the 7 configurations, i.e., it would require removing all non-grid configuration items.
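The imbalance argument above can be checked mechanically by tallying rule-attribute combinations over a split. The sketch below assumes each item's annotation exposes its governing rules as (rule, attribute) pairs; the exact RAVEN metadata schema may differ, so this is illustrative only.

```python
from collections import Counter

def rule_attribute_histogram(items):
    """Count how many items contain each (rule, attribute) combination."""
    counts = Counter()
    for item in items:
        for rule, attribute in item["rules"]:
            counts[(rule, attribute)] += 1
    return counts

# Toy example with hypothetical annotations:
items = [{"rules": [("Arithmetic", "Number"), ("Constant", "Type")]},
         {"rules": [("Distribute-Three", "Color"), ("Constant", "Type")]}]
for combo, n in rule_attribute_histogram(items).most_common():
    print(combo, n)
```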
More details about AB-RAVEN are provided in the supplementary material. ## Results and Discussion We designed our model to avoid using meta-targets and structure information for auxiliary training because this kind of handcrafted auxiliary information is not always available for human RPM tests and general visual abstract reasoning tasks; instead, only the score of each answer choice is predicted individually and the highest score is selected as the answer, i.e., using the single-choice evaluation protocol, which is more difficult than the opposite--the multi-choice evaluation protocol [1]. Although not discussed very often in the literature, it has been shown by multiple works [22, 23, 24, 25, 26, 27, 28] that when using single-choice evaluation, i.e., not allowing the model to compare answer choices before scoring them and thus not allowing it to use the backdoor of the original RAVEN, the original RAVEN is more challenging than I-RAVEN and RAVEN-FAIR; that is, the same model always achieves a higher accuracy on I-RAVEN and RAVEN-FAIR than on RAVEN. This makes sense because the way the answer choices of RAVEN were generated makes the answer choices more similar to each other than in I-RAVEN and RAVEN-FAIR and thus more confusing to the model when evaluated individually; on the contrary, the ways answer choices were generated in I-RAVEN and RAVEN-FAIR make the distractors differ from the correct answer by more attributes and thus less confusing to the model when evaluated individually. Therefore, due to the page limit, we report only the results on the datasets of interest to us--RAVEN and AB-RAVEN--here. Other results can be found in the supplementary material. Table 3 shows that our model achieves the best average accuracy compared to previous models and the best configuration accuracies on 6 out of 7 configurations. Although our model's accuracy is only 2.82%\(\sim\)2.92% higher than the second and third highest accuracies of Rel-AIR and SAVIR-T, our model is solving the RAVEN dataset in a more difficult and general way--SAVIR-T uses the easier multi-choice evaluation protocol and is designed to utilize the inductive bias that is specific to RPM [23], and while Rel-AIR uses the harder single-choice evaluation, Rel-AIR employs a separately-trained entry-encoder to explicitly extract values of size and position attributes, which are also specific to RPM rules. Many of the models in Table 3 more or less used inductive bias that is specific to RAVEN either in the model design or in the training procedure. On the contrary, our inductive bias, if we consider it as a kind of inductive bias, is the interplay and consistency between perceptual and conceptual processing, which is more meaningful for solving and understanding general visual abstract reasoning. In particular, CoPINet and DCNet, which have been reported to utilize the backdoor of RAVEN (Hu et al., 2021), also achieved lower accuracies than ours. \begin{table} \begin{tabular}{l l l l l l l l l} \hline \hline Model & Avg. Acc. & Center & 2x2Grid & 3x3Grid & L-R & U-D & O-IC & O-IG \\ \hline CPCNet (ours) & 98.84\% & 99.75\% & 99.20\% & 94.95\% & 99.70\% & 99.80\% & 99.50\% & 98.95\% \\ \hline \hline \end{tabular} \end{table} Table 4: Accuracies on the AB-RAVEN, using the single-choice evaluation protocol. Table 4 shows our model's accuracy on the AB-RAVEN dataset. Our model achieved a nearly perfect overall accuracy and, compared to Table 3, the accuracy gap between grid and non-grid configurations has been reduced from 7.07% to 1.98%.
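The grid versus non-grid gap quoted here can be recomputed directly from the per-configuration accuracies in Tables 3 and 4, assuming the gap is the difference between the mean accuracy on the four non-grid configurations and the mean accuracy on the three grid configurations.

```python
def grid_gap(acc):
    """Mean non-grid accuracy minus mean grid accuracy, in percentage points."""
    grid = [acc["2x2Grid"], acc["3x3Grid"], acc["O-IG"]]
    non_grid = [acc["Center"], acc["L-R"], acc["U-D"], acc["O-IC"]]
    return sum(non_grid) / len(non_grid) - sum(grid) / len(grid)

raven    = {"Center": 100.0, "2x2Grid": 96.70, "3x3Grid": 86.05,
            "L-R": 100.0, "U-D": 99.90, "O-IC": 99.90, "O-IG": 95.90}   # Table 3, CPCNet row
ab_raven = {"Center": 99.75, "2x2Grid": 99.20, "3x3Grid": 94.95,
            "L-R": 99.70, "U-D": 99.80, "O-IC": 99.50, "O-IG": 98.95}   # Table 4
print(f"RAVEN gap:    {grid_gap(raven):.2f} points")     # ~7.07
print(f"AB-RAVEN gap: {grid_gap(ab_raven):.2f} points")  # ~1.99 (quoted as 1.98% in the text)
```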
This verifies our hypothesis about the imbalanced dataset in the last subsection. Besides the RAVEN family, we also tested CPCNet on PGM and achieved the best accuracy of 98.4%. More details can be found in the supplementary material. ## Conclusion In this paper, we investigated a cognitively-inspired paradigm for solving visual abstract reasoning tasks by leveraging the interplay between perceptual and conceptual processing. We discussed the feasibility of this approach and its detailed rationale, and we designed a deep neural net architecture to simulate the interplay of processes suggested in the cognitively-inspired paradigm, i.e., the interplay and consistency between perceptual and conceptual processing. Experiments show that our CPCNet architecture is more effective than all previous models for solving RPM-like datasets. Moreover, with a dataset (AB-RAVEN) that is more balanced in terms of abstract concepts, this architecture can even achieve nearly perfect performance.
We introduce a new neural architecture for solving visual abstract reasoning tasks inspired by human cognition. Human abstract reasoning often interleaves perceptual and conceptual processing as part of a flexible, iterative, and dynamic cognitive process. Based on this principle, our architecture models visual abstract reasoning as an iterative, self-contrasting learning process that pursues consistency between the perceptual and conceptual processing of visual stimuli. We explain how this new Contrastive Perceptual-Conceptual Network (CPCNet) works using matrix reasoning problems in the style of the well-known Raven's Progressive Matrices. Experiments on the RAVEN machine learning dataset show that CPCNet achieves higher accuracy than all existing models while also using the weakest inductive bias. We also point out a substantial class imbalance present in the original RAVEN dataset, and propose a new variant of RAVEN, AB-RAVEN, that is more balanced in terms of abstract concepts.
2301.00103
Renormalization of Ising cage-net model and generalized foliation
A large class of type-I fracton models, including the X-cube model, have been found to be fixed points of the foliated renormalization group (RG). The system size of such foliated models can be changed by adding or removing decoupled layers of $2$D topological states and continuous deformation of the Hamiltonian. In this paper, we study a closely related model -- the Ising cage-net model -- and find that this model is not foliated in the same sense. In fact, we point out certain unnatural restrictions in the foliated RG, and find that removing these restrictions leads to a generalized foliated RG under which the Ising cage-net model is a fixed point, and which includes the original foliated RG as a special case. The Ising cage-net model thus gives a prototypical example of the generalized foliated RG, and its system size can be changed either by condensing / uncondensing bosonic planon excitations near a 2D plane or through a linear depth quantum circuit in the same plane. We show that these two apparently different RG procedures are closely related, as they lead to the same gapped boundary when implemented in part of a plane. Finally, we briefly discuss the implications for foliated fracton phases, whose universal properties will need to be reexamined in light of the generalized foliated RG.
Zongyuan Wang, Xiuqi Ma, David T. Stephen, Michael Hermele, Xie Chen
2022-12-31T03:02:38
http://arxiv.org/abs/2301.00103v2
# Renormalization of Ising cage-net model and generalized foliation ###### Abstract A large class of type-I fracton models, including the X-cube model, have been found to be fixed points of the foliated renormalization group (RG). The system size of such _foliated_ models can be changed by adding or removing decoupled layers of 2D topological states and continuous deformation of the Hamiltonian. In this paper, we study a closely related model - the Ising cage-net model - and find that this model is not foliated in the same sense. In fact, we point out certain unnatural restrictions in the foliated RG, and find that removing these restrictions leads to a generalized foliated RG under which the Ising cage-net model is a fixed point, and which includes the original foliated RG as a special case. The Ising cage-net model thus gives a prototypical example of the generalized foliated RG, and its system size can be changed either by condensing / uncondensing bosonic planon excitations near a 2D plane or through a linear depth quantum circuit in the same plane. We show that these two apparently different RG procedures are closely related, as they lead to the same gapped boundary when implemented in part of a plane. Finally, we briefly discuss the implications for foliated fracton phases, whose universal properties will need to be reexamined in light of the generalized foliated RG. ## I Introduction The renormalization group (RG) plays a fundamental role in the characterization and classification of quantum phases of matter.[1; 2; 3] It is a piece of conventional wisdom that each phase - defined as a deformation class of quantum systems - is characterized by a unique RG fixed point, which encodes the universal long-distance and low-energy properties of the phase. Moreover, the existence of such a fixed point underlies the key role played by continuum quantum field theory as a tool to describe universal properties of phases (and phase transitions) while discarding extraneous non-universal information. Fracton models in three spatial dimensions (3D) [4; 5] provide exceptions to this conventional wisdom, and accordingly challenge our understanding of the relationships among quantum phases of matter, the renormalization group, and quantum field theory. This is nicely illustrated in the X-cube model,[6] perhaps the simplest fracton model. The defining characteristic of a fracton model is the presence of excitations of restricted mobility, and the X-cube model supports point-like excitations mobile in planes (planons), along lines (lineons), and for which an isolated excitation is fully immobile (fractons). The model is exactly solvable and has zero correlation length, so we might expect it to be a fixed point of the RG, as is the case for toric code and string-net models.[7; 8] However, the X-cube model on a lattice of linear size \(L\) is equivalent (under the application of a finite-depth circuit) to an X-cube model on a smaller lattice stacked with 2D toric code layers.[9] Therefore, when trying to coarse-grain the X-cube model, non-trivial 2D layers are left behind. These layers cannot be integrated out or otherwise removed, thus preventing the model from being a fixed point of any conventional RG procedure. 
This behavior is closely related to the striking system-size dependence of certain properties, such as the ground state degeneracy (GSD) and the number of types of fractional excitations, both of which grow exponentially in the linear system size.[9; 10] Similar phenomena occur in other fracton models, including Haah's cubic code[11]. It is interesting to ask whether some fracton models are fixed points of a suitably generalized RG. While there are many schemes and procedures for carrying out RG in different settings, it is important to emphasize that simply finding a new RG scheme is not enough. Instead, a more radical generalization of what we usually mean by RG is needed, because, for instance, any RG procedure that can have the fracton models as fixed points must allow for the increase / decrease in GSD and the addition / removal of fractional excitations in the process. Along these lines, it was found the X-cube model is a fixed point of a _foliated RG_ procedure.[9; 12; 13; 14] It is helpful to recall the conventional RG procedure for gapped phases[2; 3], which allows, in each RG step, for continuous deformations of the Hamiltonian that keep the gap open, and for the addition/removal of trivial gapped systems (those whose ground state is a product state). In the foliated RG, one also allows addition or removal of decoupled, gapped 2D systems. Such 2D systems can be topologically ordered and thus carry non-trivial GSD and fractional excitation types, hence allowing for these properties to change under RG. In the case of the X-cube model, we can remove 2D toric code layers under the foliated RG, thus making the model into a fixed point. More generally, a large class of type-I fracton models[6] - those where some of the fractional excitations are mobile - are fixed points of the foliated RG. The foliated RG leads to the closely related notion of _foliated fracton phases_.[10; 15] Foliated fracton phases, which we define in Appendix A, are a coarser equivalence relation on ground states than ordinary phases, and each foliated fracton phase contains a fixed point of the foliated RG. This fixed point captures certain universal properties that are the same everywhere in the foliated phase, and these properties are referred to as _foliated fracton order_. When a model belongs to a foliated fracton phase, it is a convenient shorthand terminology to refer to the model as being foliated. An interesting type-I fracton model that has not been investigated from this perspective is the Ising cage-net model.[16] The Ising cage-net model is very similar to the X-cube model in many ways. Both are exactly solvable models that can be obtained from a coupled layer construction, based on toric code layers in the X-cube case,[17; 18] and doubled-Ising string-net layers in the cage-net case.[16] Both have fracton excitations that are created at the corners of a rectangular membrane operator. Both have lineon excitations (abelian in the X-cube model and non-abelian in the cage-net model) that move in the \(x\), \(y\) and \(z\) directions. Both have other planon excitations that move in \(xy\), \(yz\) or \(zx\) planes. Despite these similarities, it has not been clear whether the Ising cage-net model is foliated in the sense defined above. It is important to emphasize that, while both involve a layer structure, the coupled-layer constructions of X-cube and cage-net models are very different from foliated RG and from the notion of foliated fracton phases. 
In particular, there is no obvious relationship between whether a model can be obtained by a coupled-layer construction and whether it is foliated. By analogy with the X-cube model, it is natural to guess that the Ising cage-net model is a foliated RG fixed point upon adding/removing doubled-Ising string-net layers. However, this cannot be the case, because the doubled-Ising string-net model contains non-abelian excitations with quantum dimension \(\sqrt{2}\), while the cage-net model has excitations with integer quantum dimension only.[16] While this argument does not rule out the possibility of a foliated RG fixed point with other 2D topological states as resources, in fact the Ising cage-net model is not foliated. This can be seen by studying the model's GSD, which has been computed by some of the authors in a separate paper.[19] It is found that the GSD does not grow by integer multiples when the system size grows by unity in the \(x\), \(y\) or \(z\) directions. The question is then open again: can we think of the Ising cage-net model as a fixed point of a suitably generalized RG? More specifically, can the foliated RG be generalized somehow to include the Ising cage-net model? In fact, we argue in this paper that the foliated RG _should_ be extended, independent of the Ising cage-net example. We do this by re-examining foliated RG from two complementary perspectives, one based on planon condensation, and the other based on quantum circuits, and point out that in both these pictures, the foliated RG has unnatural restrictions. These observations lead us to a generalized foliated RG under which, remarkably, the Ising cage-net model is a fixed point. The generalized foliated RG can be carried out either by condensing or uncondensing bosonic planon excitations supported near a 2D plane, or by acting with a quantum circuit, supported near a 2D plane, whose depth scales with the linear size of the plane. We show that either of these operations can be used to decrease or increase the system size of the Ising cage-net model, which is thus a generalized foliated RG fixed point. The two apparently different ways of carrying out the generalized foliated RG are closely related, through a connection that we explain between anyon condensation and a class of linear depth circuits that we refer to as _sequential circuits_. We note that the original foliated RG arises as a special case of the generalized procedure introduced here. In particular, for the X-cube model, instead of decoupling a toric code layer and removing it to decrease system size, we can condense the bosonic planon that effectively comes from the toric code layer (either \(e\) or \(m\)), which has the same effect as removing the layer. Alternatively, we can act with a certain linear-depth circuit (more specifically, a sequential circuit) whose effect is to condense the same bosonic planon. Therefore, we can use generalized foliation to study the X-cube model, the Ising cage-net model and many other type-I fracton models within a single framework. Just as foliated RG comes with the notion of foliated fracton phases and foliated fracton order, we expect that the generalized foliated RG comes with corresponding notions of generalized foliated Figure 1: Top: the foliated RG scheme, where a layer of topologically ordered state (shown in orange) can be added into or removed from a foliated fracton model via a finite depth circuit. 
Bottom: generalized foliated RG scheme realized by condensation of bosonic planons or a sequential linear depth circuit around the plane. fracton phases and generalized foliated fracton order. It will be interesting to study these notions in future work. The paper is structured as follows: In Sec. II, we review the original foliated RG by focusing on the X-cube model. In Sec. III, we review the Ising cage-net model, which is not foliated according to the original scheme. Section IV then briefly points out some unnatural restrictions within the original foliated RG, and proposes a generalized foliated RG where these restrictions are removed. In Sec. V, we show that the Ising cage-net model is foliated in terms of a generalized foliated RG defined by planon condensation. Then, in Sec. VI, we demonstrate that the generalized foliated RG can also be implemented by a planar linear depth circuit. The linear depth circuit has a special structure, and we dub it a sequential circuit; in Sec. VII we show how the sequential circuit we use is closely related to the condensation of planons via gapped boundaries. Finally, in Sec. VIII, we conclude with a brief discussion on the implications of and outlook for the generalized foliated RG. ## Foliation in X-cube Before our discussion of the 'generalized foliation', it is instructive to review the original notion of foliation and see how the corresponding RG procedure is carried out for the X-cube. The X-cube model has a foliated structure, where layers of the toric code can be added to or removed from the X-cube via a finite depth circuit \(\mathcal{S}\).[9] Given an X-cube ground state \(|\Psi_{\text{X.C.}}\rangle\) of system size \(L_{x}\times L_{y}\times L_{z}\) and a toric code ground state \(|\Psi_{\text{T.C.}}\rangle\), \(\mathcal{S}\) yields a \(|\Psi_{\text{X.C.}}\rangle\) of size \(L_{x}\times L_{y}\times(L_{z}+1)\). In the rest of this section, we review the finite depth circuit \(\mathcal{S}\) on the three-torus. Let us consider the X-cube Hamiltonian defined on a cubic lattice on the three-torus, and one copy of the toric code Hamiltonian defined on a square lattice on the two-torus. For both models, the local qubit DOFs are placed on the edges. The X-cube Hamiltonian [6] \[H_{\text{X.C.}}=-\sum_{v}\left(A_{v}^{x}+A_{v}^{y}+A_{v}^{z}\right)-\sum_{c}B_{c} \tag{1}\] contains three types of vertex terms \(A_{v}^{x}\), \(A_{v}^{y}\), and \(A_{v}^{z}\), and one type of cube term \(B_{c}\), as shown in Fig. 2. The toric code Hamiltonian [20] \[H_{\text{T.C.}}=-\sum_{v}Q_{v}-\sum_{p}B_{p} \tag{2}\] is a sum of local terms as shown in Fig. 3. To construct the circuit, we first insert a decoupled toric code into the X-cube. As depicted in Fig. 4, when the inserted toric code lies in the \(xy\)-plane, it bisects the \(z\)-direction edges in the X-cube model, thus creating new qubit edges \(k^{\prime}\) colored in orange. These new \(k^{\prime}\) edges are added to the system as product states whose Hamiltonian is chosen to be \(H_{0}=-\sum_{\{k^{\prime}\}}Z_{k^{\prime}}\). For each bisected edge \(i\) in the X-cube Hamiltonian, we substitute \(Z_{i}\to Z_{i^{\prime}}\) and \(X_{i}\to X_{i^{\prime}}\). The circuit \(\mathcal{S}\) is a product of two finite depth circuits \(\mathcal{S}_{2}\) and \(\mathcal{S}_{1}\), \(\mathcal{S}=\mathcal{S}_{2}\mathcal{S}_{1}\). Each is a product of controlled-NOT (CNOT) gates. The circuit \(\mathcal{S}_{1}\) acts on the edges of the modified X-cube Hamiltonian, as shown in Fig. 5(a).
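The CNOT conjugation rules used below in Eq. (3) can be verified directly with 4\(\times\)4 matrices; the following is a small numerical sketch (the first tensor factor is the control qubit).

```python
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.diag([1, -1])
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])  # control = first qubit

def conjugate(p):
    return CNOT @ p @ CNOT   # CNOT is its own inverse

# X is copied from control to target, Z is copied from target to control:
assert np.allclose(conjugate(np.kron(X, I)), np.kron(X, X))   # XI -> XX
assert np.allclose(conjugate(np.kron(I, X)), np.kron(I, X))   # IX -> IX
assert np.allclose(conjugate(np.kron(Z, I)), np.kron(Z, I))   # ZI -> ZI
assert np.allclose(conjugate(np.kron(I, Z)), np.kron(Z, Z))   # IZ -> ZZ
print("CNOT Pauli conjugation rules verified")
```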
Every CNOT gate in \(\mathcal{S}_{1}\) has an \(i^{\prime}\) edge serving as the controlled qubit and the corresponding \(k^{\prime}\) edge as the target. On the other hand, \(\mathcal{S}_{2}\) acts on both edges of the X-cube and those of the toric code. Every edge of the toric code serves as the controlled qubit for the CNOT gates whose targets are edges in the modified X-cube. An illustration of \(\mathcal{S}_{2}\) is given in Fig. 5(b). The CNOT gate, acting by conjugation, has the actions of \[\begin{split} ZI\mapsto ZI,& IZ\leftrightarrow ZZ,\\ XI\leftrightarrow XX,& IX\mapsto IX,\end{split} \tag{3}\] where the first qubit is the control and the second is the target. All the CNOT gates in \(\mathcal{S}_{1}\) or \(\mathcal{S}_{2}\) commute with each other. Therefore, \(\mathcal{S}\) is a finite depth circuit. By direct computation, we see that \[\mathcal{S}\left(\tilde{H}_{\mathrm{X.C.}}^{(L_{x},L_{y},L_{z})}+H_{\mathrm{T.C.}}+H_{0}\right)\mathcal{S}^{\dagger}\cong H_{\mathrm{X.C.}}^{(L_{x},L_{y},L_{z}+1)}, \tag{4}\] where \(\tilde{H}_{\mathrm{X.C.}}\) is the modified X-cube Hamiltonian, and the symbol \(\cong\) denotes that the L.H.S. and the R.H.S. share the same ground space. Figure 4: The insertion of a layer of toric code living on an \(xy\)-plane (blue colored square lattice) into a cubic lattice, which hosts the X-cube. The inserted layer bisects an edge \(i\) near the inserted plane into edges labeled by \(i^{\prime}\) and \(k^{\prime}\). For every bisected edge, the X-cube Hamiltonian is modified by replacing \(Z_{i}\to Z_{i^{\prime}}\) and \(X_{i}\to X_{i^{\prime}}\). The new edges \(k^{\prime}\) are product states with the Hamiltonian of \(H_{0}=-\sum_{\{k^{\prime}\}}Z_{k^{\prime}}\). Figure 3: (a) The vertex term \(Q_{v}\) in the toric code Hamiltonian. (b) The plaquette term \(B_{p}\). ## Ising cage-net In this section, we review the basic definition and properties of the Ising cage-net model. The Ising cage-net is an exactly solvable model obtained from the coupled layer construction [16], in which decoupled layers of the doubled-Ising string-net [21; 22; 23; 24; 25] are coupled together through the particle-loop (p-loop) condensation. Specifically, we take three stacks of the doubled-Ising string-net defined on a square-octagon lattice (see Fig. 6), and stack them together to form a truncated cubic lattice, as shown in Fig. 7. Each of the six faces of a cube is an octagonal plaquette. We call an edge \(l\), parallel to the \(\mu\)-direction for \(\mu\in\{x,y,z\}\), a \(\mu\)-principal edge, and denote it by \(l_{\mu}\). As a 2D lattice model, the doubled-Ising string-net is built from the Ising unitary modular tensor category [26; 27], which consists of an index set \(\{0,1,2\}\) and a set of symbols \((\delta_{ijk},d_{s},F_{kln}^{ijm},R_{l}^{ij})\). The model has a three-dimensional local Hilbert space of \(\mathrm{span}_{\mathbb{C}}\{\left|0\right\rangle,\left|1\right\rangle,\left|2\right\rangle\}\) for each edge of the square-octagon lattice. The states \(\left|0\right\rangle\), \(\left|1\right\rangle\), \(\left|2\right\rangle\) are dubbed the 0-string, 1-string, and 2-string, respectively. The commuting projector Hamiltonian \[H_{\mathrm{D.I.}}=-\sum_{v}Q_{v}-\sum_{p}B_{p} \tag{5}\] consists of the vertex projector \(Q_{v}\) and the plaquette projector \(B_{p}=\sum_{s=0}^{2}(d_{s}/D)B_{p}^{s}\) (see Fig. 6). The symbol \(d_{s}\) takes values in \(d_{0}=d_{2}=1\), and \(d_{1}=\sqrt{2}\).
\(D=\sum_{s}(d_{s})^{2}\) is _the total quantum dimension_ of the model. Figure 7: A truncated cubic lattice. It is formed by intersecting layers of the square-octagon lattice. Every cube has six octagonal faces. At the corners of each cube are octahedrons (see Fig. 9). The edges \(l\), parallel to the \(\mu\) direction for \(\mu\in\{x,y,z\}\), are called the \(\mu\)-principal edges, which are denoted by \(l_{\mu}\). For the system of decoupled layers, a \(\mu\)-principal edge has a nine-dimensional local space given by the tensor product of \((\mathrm{span}_{\mathbb{C}}\{\left|0\right\rangle,\left|1\right\rangle,\left|2\right\rangle\})^{\otimes 2}\). Figure 5: An illustration of the finite depth circuit \(\mathcal{S}=\mathcal{S}_{2}\mathcal{S}_{1}\). (a) The action of the circuit \(\mathcal{S}_{1}\) when focusing on an elementary cube of the original cubic lattice. The arrows, representing the CNOT gates, point from the controlled qubits to the targets. (b) \(\mathcal{S}_{2}\)’s action viewed at a cube. \(Q_{v}\)'s action is defined by \[Q_{v}\left|i,j,k\right\rangle=\delta_{ijk}\left|i,j,k\right\rangle, \tag{6}\] where \(i\), \(j\), and \(k\) label the three strings meeting at the vertex \(v\), and the symbol \(\delta_{ijk}\) is symmetric under permutation of its indices. The non-zero elements are \(\delta_{000}=\delta_{011}=\delta_{211}=\delta_{022}=1\), up to permutations. The subspace where all the vertex terms \(Q_{v}\) are satisfied is called _the stable vertex subspace_ \(\mathcal{H}_{Q_{v}}^{\mathrm{D.I.}}\). The plaquette operator \(B_{p}^{s}\)'s action is evaluated by the graphical rules, which are defined via the \(d\)- and \(F\)-symbols (Appendix B). \(B_{p}^{s}\) acts on a plaquette by fusing a loop of \(s\) into the edges surrounding the plaquette, as shown graphically in Eq. (7). For every ground state \(\ket{\Psi_{\mathrm{D.I.}}}\), which is a superposition of different configurations of closed loops satisfying \(Q_{v}\) at each vertex, \(B_{p}^{s}\) acts as \[B_{p}^{s}\ket{\Psi_{\mathrm{D.I.}}}=d_{s}\ket{\Psi_{\mathrm{D.I.}}}. \tag{8}\] Moreover, the \(B_{p}^{s}\) operators form a commutative fusion algebra of \[B_{p}^{i}B_{p}^{j}=\sum_{k=0}^{2}\delta_{ijk}B_{p}^{k}. \tag{9}\] The doubled-Ising string-net has nine topological excitations \(\{1,\psi,\bar{\psi},\sigma,\bar{\sigma},\sigma\bar{\psi},\psi\bar{\sigma},\sigma\bar{\sigma},\psi\bar{\psi}\}\). In terms of the theory of anyons, these excitations come from a copy of the chiral Ising anyon \(\{1,\sigma,\psi\}\), and an anti-chiral copy \(\{1,\bar{\sigma},\bar{\psi}\}\). The fusion rules for the chiral Ising anyon are \[\begin{array}{c|ccc}\times&\mathbbm{1}&\sigma&\psi\\ \hline\mathbbm{1}&\mathbbm{1}&\sigma&\psi\\ \sigma&\sigma&\mathbbm{1}+\psi&\sigma\\ \psi&\psi&\sigma&\mathbbm{1}\end{array} \tag{10}\] The anti-chiral Ising anyon obeys the same fusion rules; we simply replace the anyon labels above with the barred version. Among the nine excitations, the non-abelian \(\sigma\bar{\sigma}\) and the abelian \(\psi\bar{\psi}\) are bosons. They are also the only non-trivial pure fluxon excitations. A _fluxon_ excitation violates exactly one \(B_{p}\) term and none of the \(Q_{v}\) terms. A fluxon string-operator \(W_{l}^{\mathrm{fluxon}}\) creates the fluxon and its anti-particle on the two adjacent plaquettes sharing the edge \(l\) (see Fig. 6). In particular, the \(\psi\bar{\psi}\) has a string-operator \[W_{l}^{\psi\bar{\psi}}=(-1)^{n_{1}(l)}, \tag{11}\] where \(n_{1}(l)=1\) if the edge \(l\) is in the state \(\ket{1}\), and \(n_{1}(l)=0\) otherwise.
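As a small consistency check (a sketch, not taken from the paper), the quantum dimensions \(d_{s}\) form a one-dimensional representation of the fusion algebra in Eq. (9), \(d_{i}d_{j}=\sum_{k}\delta_{ijk}d_{k}\), which is exactly why every \(B_{p}^{s}\) has eigenvalue \(d_{s}\) on the ground states, as in Eq. (8).

```python
import numpy as np
from itertools import permutations

# delta_{ijk}: symmetric fusion multiplicities of the string types {0, 1, 2};
# nonzero entries are 000, 011, 211, 022 and their permutations (as stated above).
delta = np.zeros((3, 3, 3))
for triple in [(0, 0, 0), (0, 1, 1), (2, 1, 1), (0, 2, 2)]:
    for perm in set(permutations(triple)):
        delta[perm] = 1.0

d = np.array([1.0, np.sqrt(2.0), 1.0])   # quantum dimensions d_0, d_1, d_2
D = np.sum(d ** 2)                       # total quantum dimension of the model

# d_i * d_j = sum_k delta_{ijk} d_k  (one-dimensional representation of Eq. (9))
assert np.allclose(np.outer(d, d), np.einsum("ijk,k->ij", delta, d))
print(f"fusion algebra consistent, D = {D}")
```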
To couple the stacks of the doubled-Ising string-net layers together, we condense the \(\psi\bar{\psi}\) p-loop. Illustrated in Fig. 8 is the smallest \(\psi\bar{\psi}\) p-loop created by the coupling operator \[V_{l_{\mu}}=W_{l_{\mu}}^{(\psi\bar{\psi})_{\mu\nu}}W_{l_{\mu}}^{(\psi\bar{\psi})_{\mu\rho}}, \tag{12}\] which is a product of \(\psi\bar{\psi}\) string-operators, from the \(\mu\nu\)- and \(\mu\rho\)-planes, acting on the edge \(l_{\mu}\). We add \(-V_{l_{\mu}}\) for every principal edge to the Hamiltonian of the decoupled layers. \(-V_{l_{\mu}}\) penalizes the presence of the states \(\ket{01}\), \(\ket{10}\), \(\ket{21}\), and \(\ket{12}\) on \(l_{\mu}\). Using the Brillouin-Wigner degenerate perturbation theory and treating doubled-Ising string-nets as perturbations, we arrive at the Ising cage-net. Hence, on a principal edge, the Ising cage-net has a five-dimensional local Hilbert space of \(\mathrm{span}_{\mathbb{C}}\{\ket{00},\ket{11},\ket{02},\ket{20},\ket{22}\}\). Other edges are unchanged. The Ising cage-net has a commuting Hamiltonian of \[H_{\mathrm{I.C.}}=-\sum_{\mu\nu,v}A_{v}^{\mu\nu}-\sum_{p_{e}}B_{p_{e}}-\sum_{p_{a}}\frac{1}{2}\left(B_{p_{a}}^{0}+B_{p_{a}}^{2}\right)-\sum_{c}B_{c}, \tag{13}\] where \(A_{v}^{\mu\nu}\) is the vertex projector in a \(\mu\nu\)-plane; \(B_{p_{e}}\) is the doubled-Ising string-net plaquette projector for a square plaquette; \(\frac{1}{2}\left(B_{p_{a}}^{0}+B_{p_{a}}^{2}\right)\) is a plaquette term associated with each octagonal plaquette \(p_{a}\); and \[B_{c}=\prod_{p_{a}\in c}\frac{\sqrt{2}}{2}B_{p_{a}}^{1} \tag{14}\] Figure 8: An elementary \(\psi\bar{\psi}\) particle-loop (p-loop), the red loop, created by the coupling operator \(V_{l_{\mu}}\) shown by the green tube. We represent a flux by a line segment normal to the hosting plaquette. Joining the segments together, we have the red loop. is the cube term. The vertex term \(A_{v}^{\mu\nu}\) acts in the same way as the doubled-Ising vertex projector \(Q_{v}\) of Eq. (6): it enforces the branching rules \(\delta_{ijk}\) on the three edges in the \(\mu\nu\)-plane that meet at \(v\), where for a principal edge only the component coming from the \(\mu\nu\)-plane is involved. ## IV Foliated RG and its generalization Recall from Sec. II that, in the foliated RG for the X-cube model, a decoupled layer of 2D toric code can be added to or removed from the bulk with a finite-depth quantum circuit. The topological layer cannot itself be created with a finite-depth circuit from a product state. However, it is now well-understood that it can be created with a linear-depth circuit [28; 29].
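Looking back at Eq. (3), the conjugation action of the CNOT gates from which finite-depth circuits such as \(\mathcal{S}_{1}\) and \(\mathcal{S}_{2}\) are built can be verified with a few lines of linear algebra. The snippet below is only an illustration of Eq. (3), not code from the paper; qubit 0 is the control and qubit 1 the target.

```python
import numpy as np

I = np.eye(2, dtype=int)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])          # qubit 0 controls, qubit 1 is the target

def conj(P):
    return CNOT @ P @ CNOT               # CNOT is its own inverse

# The four mappings of Eq. (3).
cases = {
    "ZI -> ZI": (np.kron(Z, I), np.kron(Z, I)),
    "IZ -> ZZ": (np.kron(I, Z), np.kron(Z, Z)),
    "XI -> XX": (np.kron(X, I), np.kron(X, X)),
    "IX -> IX": (np.kron(I, X), np.kron(I, X)),
}
for name, (before, after) in cases.items():
    assert np.array_equal(conj(before), after), name
print("Eq. (3) verified")
```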
Therefore, if we view foliated RG as a generalization of usual entanglement RG [2; 3], in which one is allowed to add ancillary degrees of freedom in a product state and then apply finite-depth circuits, moving to foliated RG corresponds to additionally allowing linear-depth circuits within a 2D subsystem of the 3D model. However, from this perspective, the current definition of foliated RG is restricted, in that we only allow the linear-depth circuit to act on the ancillae qubits and not on the 3D bulk. A more natural definition would be to allow the linear-depth circuit to act arbitrarily within a 2D layer on both the ancillae and the bulk. We remark that the kinds of linear-depth circuits involved here have a special structure that preserves the area law of entanglement, as discussed in more detail in Sec. VII. Second, we can also view foliated RG in terms of condensation. Namely, suppose we want to implement the inverse process of removing a single layer from the X-cube model, reducing its size in one direction. This can be achieved by condensing a planon within a single layer, corresponding to disentangling the toric code layer and then trivializing that layer by condensing a boson. In this case, the planon which we condense is very special: it can be viewed as being part of a 2D theory that is decoupled from the rest of the excitation spectrum of the 3D bulk. To be more general, if we allow condensation of planons in RG, we should allow condensation of arbitrary planons, not only those that are part of decoupled 2D theories. In light of the above, there are two natural ways to extend the notion of foliated RG: linear-depth circuits and planon condensation. In what follows, we will show that both approaches lead to a generalized foliated RG that is applicable to the Ising cage-net model. Then, in Sec. VII, we argue that these two approaches, while seemingly distinct, are in fact very closely related to each other. ## V RG via condensation How can the system size of the Ising cage-net model be increased / decreased? In this section, we show that it can be changed through condensation and uncondensation of bosonic planons. This is closely tied to the topic of anyon condensation in 2D systems, and we refer the reader to Ref. [30] and references therein for a review. Let us begin by considering the process of condensing planons in an \(xy\)-plane to decrease the system size in the \(z\) direction by one (Fig. 11). Recall from the last section that for each \(xy\)-plane there is a bosonic planon \(\psi\bar{\psi}\) which can be condensed. When \(\psi\bar{\psi}\) in plane \(z=0\) is condensed, the quasi-particle content of the model changes as follows: 1. Since \(\psi\bar{\psi}\) is the fracton dipole, fractons between planes \(z=0\) and \(z=1\) are identified with the corresponding fracton between planes \(z=-1\) and \(z=0\). 2. The planons \(\psi\) and \(\bar{\psi}\) on the \(z=0\) plane are identified. 3. The \(\sigma\bar{\sigma}\) planon on the \(z=0\) plane splits into two abelian bosonic planons \(e\) and \(m\) with a mutual \(-1\) braiding statistics. 4. The lineons in the \(z=0\) plane composed of \(\sigma_{xy}\sigma_{xz}\), \(\bar{\sigma}_{xy}\sigma_{xz}\), \(\sigma_{xy}\bar{\sigma}_{xz}\), and \(\bar{\sigma}_{xy}\bar{\sigma}_{xz}\) are all confined. 5. Planons and lineons on other planes are unchanged. After this step, we can further condense either \(e\) or \(m\). This gets rid of the remaining planons on the \(z=0\) plane without affecting other quasi-particle excitations. 
Now, we see that the quasi-particle content of the model is the same as that of an Ising cage-net model with the \(z=0\) plane removed. The planons and lineons on planes other than \(z=0\) are left intact. Moreover, the fracton between \(z=0\) and \(z=1\), which is now identified with the fracton between \(z=-1\) and \(z=0\), becomes the new fracton between \(z=-1\) and \(z=1\). Therefore, the size of the Ising cage-net model can be decreased by one in the \(z\) direction by first condensing the \(\psi\bar{\psi}\) planon in a plane, and then by condensing one of the split channels of the \(\sigma\bar{\sigma}\) planon on the same plane. We see that if we allow condensation of bosonic planons as an RG operation, we obtain a generalized foliated RG under which the Ising cage-net model is a fixed point. As noted in Sec. IV, the original foliated RG for the X-cube model can also be viewed in terms of such condensation. The condensation of planons is, of course, a singular process where the bulk gap needs to close and then reopen, corresponding to a phase transition between Figure 11: An illustration of the relevant \(xy\)-planes of an \(L_{x}\times L_{y}\times L_{z}\) Ising cage-net. Via the condensation process described in the text, we remove the \(z=0\) plane and obtain an \(L_{x}\times L_{y}\times(L_{z}-1)\) Ising cage-net. different standard phases (see Appendix A for the definition of standard phases). This means that, similar to the original foliated RG, the generalized foliated RG operations can move across certain phase boundaries. However, only certain phase boundaries can be crossed; the singularity involved in planon condensation is localized to a selected plane and is hence a "subsystem" singularity, not one in the full 3D bulk. A useful way to think about the condensation process is to use the fact that the Ising cage-net model can be obtained by gauging the planar \(Z_{2}\) symmetries of a subsystem symmetry protected topological (SSPT) model protected by such symmetries [31]. The planons being condensed correspond to the symmetry charges of the planar symmetries in the SSPT model. Hence the condensation of the planons in a given plane corresponds to breaking / removing that planar symmetry and reducing the size of the model. On the other hand, if we want to increase the size of the system by adding a plane at \(z=0\), we need to add the planar symmetry and the corresponding planar state back to the SSPT model and 're-gauge' the planar symmetry. ## VI RG via planar linear depth circuit The planar linear depth circuit we construct for the Ising cage-net model is a direct generalization of an RG scheme that maps product states to ground states of a string-net model, introduced by Liu Y. _et al._ [29]. In Sec. VI.1, we review this RG procedure for the string-net models. We describe carefully an initialization step that is nontrivial for non-abelian string-net models, which was not discussed in detail in Ref. [29]. In Sec. VI.2, we describe the RG scheme as a linear depth circuit for the Ising cage-net model. We will see that the initialization step is also important and nontrivial. ### String-net RG In this section, we will first describe an important step in the RG procedure - the 'controlled gate' which adds a plaquette to the string-net wave-function. After that, we will describe the full RG procedure starting from the string-net wave-function on the minimal lattice on a torus and then adding plaquettes row by row. A brief review of the string-net models is given in Appendix B.1.
#### vi.1.1 Adding plaquettes via the controlled gate The controlled gate can be used to add a plaquette to the string-net wave-function. We present the definition and properties of the gate in this sub-section. Computational details of the results discussed here can be found in Appendix D. Suppose that on a trivalent lattice, a plaquette is added by adding an edge (the red edge in the diagrams below), and we want to extend the string-net wave-function from the original lattice to that including this new plaquette. When the edge is added, it is not entangled with the rest of the lattice and is in the state \(|0\rangle\). To merge the added edge into the lattice, first, map it to \(\sum_{s}\frac{d_{s}}{\sqrt{D}}|s\rangle\) where \(D\) is the total quantum dimension of the string-net. \[|0\rangle\mapsto\sum_{s}\frac{d_{s}}{\sqrt{D}}|s\rangle \tag{18}\] Then, we use this edge as the control to draw loops around the added plaquette. More specifically, we can represent the controlled gate \(G_{p}=\sum_{s}G_{p}^{s}\) graphically as in Eq. (19). The action of \(G_{p}^{s}\) is similar to the action of \(B_{p}^{s}\), which adds a loop of \(s\) to a plaquette, but for the graphical evaluation of \(G_{p}^{s}\) we treat the control edge as if it were in the state \(|0\rangle\), i.e. \[G_{p}^{s}=\big(|s\rangle\langle s|\big)_{\mathrm{ct}}\otimes\big(B_{p}^{s}\big)_{\mathrm{ct}\to 0}, \tag{19}\] where the subscript \(\mathrm{ct}\) denotes the control edge (drawn in red with an arrow in the figures), and \(\big(B_{p}^{s}\big)_{\mathrm{ct}\to 0}\) is the graphical action of \(B_{p}^{s}\) on the remaining edges of the plaquette, resolved with the \(F\)-symbols as if the control edge were in the state \(|0\rangle\).
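As a small numerical aside (ours, not from the paper), the amplitudes \(d_{s}/\sqrt{D}\) appearing in Eq. (18) form a normalized vector, so the map \(|0\rangle\mapsto\sum_{s}(d_{s}/\sqrt{D})|s\rangle\) on the control edge extends to a unitary on that edge; the same amplitudes give the rank-one projector that appears later in Eq. (28). For the doubled-Ising labels this reads:

```python
import numpy as np

# Amplitudes d_s / sqrt(D) of Eq. (18) for the doubled-Ising string labels s = 0, 1, 2.
d = np.array([1.0, np.sqrt(2.0), 1.0])
D = d @ d                                  # total quantum dimension, D = 4
v = d / np.sqrt(D)                         # |0> on the control edge is sent to sum_s v_s |s>

assert np.isclose(v @ v, 1.0)              # normalized, so the map extends to a unitary

# Under this map, |0><0| on the control edge becomes the rank-one projector
# sum_{s,s'} (d_s d_s' / D) |s><s'|, the first step used later in Eq. (28).
P = np.outer(v, v)
assert np.allclose(P @ P, P) and np.isclose(np.trace(P), 1.0)
print("Eq. (18) amplitudes:", v)           # [0.5, 0.7071..., 0.5]
```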
In \(\mathcal{V}_{p,s}^{\text{SN}}\), the control edge is set to \(\ket{s}\) while the other edges come from the string-net wave-function on the lattice with the control edge missing (pretending that it is set to \(\ket{0}\)). The vertices containing the control edge, then, involve configurations like \[\tikzfig{fig:cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cptpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cptpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cptpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cptpt_cpt_cpt_cptpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cptpt_cpt_cpt_cptpt_cpt_cpt_cpt_cpt_cpt_cptpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cptpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cptpt_cpt_cpt_cpt_cptpt_cpt_cpt_cptpt_cpt_cptpt_cptpt_cpt_cpt_cpt_cpt_cpt_cpt_cptpt_cpt_cpt_cpt_cptpt_cpt_cpt_cpt_cpt_cptpt_cpt_cpt_cptpt_cpt_cpt_cpt_cptpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cpt_cptpt_cpt_cpt_cpt_cptpt_cpt_cpt_cptpt_cpt_cptpt_cpt_cpt_cpt_cpt_cpt_cptpt_cpt_cpt_cpt_cpt_cpt_cpt_cptpt_cpt_cpt_cpt_cptpt_cpt_cpt_cpt_c
#### vi.1.2 RG circuit - string-net In step 1, the added degrees of freedom are initialized into one of the string-net ground states on the minimal lattice: three edges around a vertex are mapped into a minimal-lattice ground state, and this minimal structure is then grown to the full extent of the lattice by copying the string states along non-contractible loops, as illustrated in Fig. 12; this step is of linear depth. Step 2 is also of linear depth. The minimal lattice has only one plaquette. In step 2, we add more plaquettes to the lattice using the controlled gate introduced in Sec. VI.1.1. The plaquettes cannot be added all at once, because the controlled gates commute only when they do not act on each other's control edge. A linear depth circuit is hence needed to add all the plaquettes to the square-octagon lattice. A particular sequence for adding these plaquettes is shown in Fig. 13. Firstly, all the square plaquettes (red circles) can be added at the same time because they do not overlap with each other. The small circle indicates the control edge while the big circle indicates the action of \(G_{p}^{s}\). Secondly, we add the octagonal plaquettes in row one (labeled (1) in Fig. 13). All controlled gates in row one commute with each other so they can be added in one step. Then we add row two, row three, etc., until the next to last row (labeled \((L_{y}-1)\) in Fig. 13). For the last row, we need to choose the control edges sideways because we need un-entangled edges to be used as control edges. Due to this change, the plaquettes in the last row need to be added sequentially as the controlled gates do not commute any more. As shown in the figure, we can add them in the order of (green labels) (1), (2), ..., \((L_{x}-1)\). We do not need to act on the last plaquette (labeled \(\tilde{p}\)) as the constraint due to the last plaquette is already implied by that of the largest plaquette that we started from combined with all the small plaquettes added so far. Therefore, at this point, we have finished the linear depth RG procedure that starts from a product state and maps it to the string-net wave-function on the square-octagon lattice. ### Ising cage-net In this section, we use the controlled gate of Eq. (19) to build up the RG circuit to enlarge an Ising cage-net ground state on the three-torus by one layer. We will start, in Sec. VI.2.1, by introducing finite depth circuits that grow cages on the cage-net ground state. They serve as the building blocks of the full planar linear depth RG circuit, which we discuss in Sec. VI.2.2.
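Referring back to the plaquette-addition schedule of Fig. 13, the bookkeeping below is our own sketch (not code from the paper), assuming one square and one octagonal plaquette per unit cell of an \(L_{x}\times L_{y}\) square-octagon lattice on the torus. It lists which controlled gates can be applied in parallel and makes the linear depth of step 2 explicit (the initialization step of Fig. 12 adds another linear-depth stage).

```python
def plaquette_schedule(Lx, Ly):
    """Schedule of controlled gates following Fig. 13: all squares first, then octagon
    rows 1 .. Ly-1, then the last row column by column.  Each entry of the returned list
    is one 'round'; gates within a round act on disjoint control edges."""
    rounds = []
    rounds.append([("square", c, r) for c in range(Lx) for r in range(Ly)])   # all squares at once
    for r in range(Ly - 1):                                                   # octagon rows 1 .. Ly-1
        rounds.append([("octagon", c, r) for c in range(Lx)])
    for c in range(Lx - 1):                                                   # last row, one column at a time
        rounds.append([("octagon", c, Ly - 1)])
    # the final octagon (Lx-1, Ly-1) is fixed by the remaining plaquettes and needs no gate
    return rounds

rounds = plaquette_schedule(Lx=4, Ly=3)
print(len(rounds))                  # depth = 1 + (Ly-1) + (Lx-1) = Lx + Ly - 1, linear in system size
print(sum(len(r) for r in rounds))  # number of controlled gates applied, one per plaquette acted on
```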
#### vi.2.1 Adding cages via the controlled gate In 2D, we have seen that a plaquette can be added to the string-net wave function, via the controlled gates, after an edge is added to the lattice. We can extend this procedure to 3D cage-net states. Suppose that we start with the Ising cage-net ground state on the truncated cubic lattice (Fig. 7) and add a plane in the \(xy\) direction. At each point where the added plane bisects the \(z\)-direction edges, an octahedron is added, as shown in Fig. 14, to ensure the trivalent structure in each of the coupled planes. In the added plane, octagonal plaquettes fill in the space between the octahedrons. Every edge of the added octahedrons carries a three dimensional Hilbert space spanned by \(\{|0\rangle,|1\rangle,|2\rangle\}\). We start with these edges all set to the state \(|0\rangle\). The principal edges on the octagons each carry a five dimensional Hilbert space spanned by \(\{|00\rangle,|02\rangle,|20\rangle,|22\rangle,|11\rangle\}\), which is a subspace of the tensor product Hilbert space of two three dimensional DOFs \(\{|0\rangle,|1\rangle,|2\rangle\}\otimes\{|0\rangle,|1\rangle,|2\rangle\}\) that come from the two intersecting planes. We start with these principal edges in the state \(|00\rangle\). Figure 12: The initialization step in the RG circuit for generating the string-net wave-function. Left: pick three edges around a vertex and map them into one of the ground states of the string-net on the minimal lattice. Right: grow the minimal structure by copying the string states \(|i\rangle\) and \(|j\rangle\) along non-contractible loops so that they reach the full extent of the lattice. Figure 13: Adding loops to plaquettes in step 2 of the RG circuit for generating the string-net wavefunction. The state has been initialized into one of the ground states on the minimal lattice (black lines). First, loops are added to the square plaquettes (shown in red) in a single step. Then, loops are added to octagon plaquettes in row (1), (2),... \((L_{y}-1)\) sequentially. For the last row, loops are added to octagon plaquettes in column (1), (2),..., \((L_{x}-1)\) sequentially. No action is needed in the last plaquette \(\tilde{p}\). We describe first the process to add one cube into the new layer, which consists of two steps: 1. add the octahedrons to the cage-net wave-function; 2. grow a cage structure in the upper truncated cube of Fig. 14. In step one, we first need to copy the state of the bisected \(z\)-principal edge onto some of the octahedron edges so that the vertex rules are satisfied at the octahedrons' vertices. Suppose the bisected edge is in the state \(|xy\rangle\). The copying process can be achieved with the controlled gates \(\sum_{xy}|xy\rangle\langle xy|\otimes|x\rangle\langle 0|\) and \(\sum_{xy}|xy\rangle\langle xy|\otimes|y\rangle\langle 0|\) as indicated by the blue and green arrows in Fig. 15. Then, we add the square plaquettes to the cage-net wave-function. This can be done as described in the previous section on how to add a square plaquette to the doubled-Ising string-net wave function, as the square plaquettes remain unaffected when the doubled-Ising layers are coupled into the Ising cage-net. More specifically, for each square plaquette, we pick an edge in the state \(|0\rangle\) as the control edge, map it to \(\sum_{s}\frac{d_{s}}{\sqrt{D}}|s\rangle\), and use it as the control in the controlled gate \(G_{p}\) that adds loops into the plaquette. Step 2, which adds a cage structure to the cube, is more complicated.
As shown in Fig. 16, first we add loops to the bottom and top faces and then to the side faces. More specifically, first we pick a principal edge on the bottom face in the state \(|00\rangle\) as the control. We will use the convention where the first \(|0\rangle\) comes from the \(xy\) plane while the second \(|0\rangle\) comes from the vertical \(xz\) and \(yz\) planes. Map the control edge as \[|00\rangle\mapsto\sum_{s}\frac{d_{s}}{\sqrt{D}}\left|s0\right\rangle, \tag{26}\] Note that this takes the controlled edge out of the five dimensional subspace of \(\{|00\rangle,|02\rangle,|20\rangle,|22\rangle,|11\rangle\}\) but keeps it in the nine dimensional space of \(\{|0\rangle,|1\rangle,|2\rangle\}^{\otimes 2}\). This will also happen to other principal edges as we implement the procedure, but at the end of the process of growing a cube, all principal edges will be back to the five dimensional subspace. Now, using the \(|s\rangle\) state as the control, apply the controlled gate to the bottom face \(p_{b}\) and top face \(p_{t}\) as \[G_{p_{b}}^{0}+G_{p_{b}}^{2}+\frac{1}{\sqrt{2}}G_{p_{b}}^{1}B_{p_{t}}^{1} \tag{27}\] as shown in Fig. 16 (a). Note that \(G_{p_{b}}^{s}\) and \(B_{p_{t}}^{s}\) act on the first part of the principal edges (the part that comes from horizontal planes). After these controlled gates, the projector on the control edge \(|0\rangle\langle 0|\) (the first part) gets mapped to \[\begin{split}\left(|0\rangle\langle 0|\right)_{\rm ct}& \mapsto\sum_{ss^{\prime}}\frac{d_{s}d_{s^{\prime}}}{D}\left(|s \rangle\langle s^{\prime}|\right)_{\rm ct}\\ &\mapsto B_{p_{b}}^{0}+B_{p_{b}}^{2}+B_{p_{b}}^{1}B_{p_{t}}^{1}, \end{split} \tag{28}\] where in deriving the last line, we used the fact that the top face is part of the original cage-net wave-function Figure 16: Growing a cage structure in an added cube. (a) First, using an edge from the bottom face (colored green) as control, add loops to the bottom and top faces, (b) then use the edges on the side faces (colored green) as control to add loops to the side face. Figure 14: Insertion of an \(xy\)-plane bisects a cube in the original cage-net lattice into two cubes. Each intersection point between the \(xy\)-plane and the \(z\)-principal edges is expanded into an octahedron to preserve the trivalent structure in the \(xy\), \(yz\) and \(zx\) planes. Figure 15: ‘Copying’ the states on the bisected \(z\)-principal edges onto edges of the added octahedron to satisfy vertex rules in the \(xz\) and \(yz\) planes. The copying process can be performed by controlled gates of the form \(\sum_{xy}|xy\rangle\langle xy|\otimes|x\rangle\langle 0|\) and \(\sum_{xy}|xy\rangle\langle xy|\otimes|y\rangle\langle 0|\), indicated by the arrows pointing from the control to the target. and \(B^{0}_{p_{t}}=B^{2}_{p_{t}}=1\). Note that it might seem that the operator in Eq. (27) is not unitary as \(B^{1}_{p}\) is not. But since \(B^{1}_{p_{t}}{B^{1}_{p_{t}}}=B^{0}_{p_{t}}+B^{2}_{p_{t}}=2\), the action of the operator restricted to the ground space of the original cage-net model is indeed unitary. Next, we need to add loops to the side faces. To do this, we take the principal edges on the bottom face, which are now in the states \(\ket{s0}\) and send them to \(\ket{s\alpha_{s}}\), where \(\alpha_{s}\) comes from the \(xz\) or \(yz\) planes and \(\alpha_{s}=0\) if \(s\) is even, \(\alpha_{s}=1\) if \(s\) is odd. This brings the principal edges on the bottom face back to the five dimensional Hilbert space. 
Then map the \(\ket{\alpha_{s}}\) states to \[\ket{0}\mapsto\frac{1}{\sqrt{2}}\left(\ket{0}+\ket{2}\right),\ \ket{1}\mapsto\ket{1} \tag{29}\] Use the \(\ket{\alpha_{s}}\) states as the control to draw loops on the side faces by applying \(\sum_{\alpha_{s}}G^{\alpha_{s}}_{p_{s}}\) as shown in Fig. 16 (b) to each side face. Let us see how the Hamiltonian terms in Eq. (28) transform. We show the step by step calculation for the third term \(B^{1}_{p_{b}}B^{1}_{p_{t}}\). The \(B^{1}_{p_{t}}\) part is not affected by the transformation and will be omitted from the following equation. Let us focus on the transformation induced on one principal edge. We label the two three-dimensional DOFs on the principal edge as \(1\) and \(2\) respectively, where \(1\) comes from the bottom face whose state is labeled by \(s\) and \(2\) comes from the side face whose state is labeled by \(\alpha_{s}\). \[\begin{split}&\left[(P^{0}_{1}+P^{2}_{1})B^{1}_{p_{b}}P^{1}_{1}+P^{1}_{1}B^{1}_{p_{b}}(P^{0}_{1}+P^{2}_{1})\right]\otimes\left(\ket{0}\bra{0}\right)_{2}\\ &\mapsto\frac{1}{\sqrt{2}}(P^{0}_{1}+P^{2}_{1})B^{1}_{p_{b}}P^{1}_{1}\otimes\left(\ket{0}_{2}+\ket{2}_{2}\right){}_{2}\langle 1|\\ &\quad+\frac{1}{\sqrt{2}}P^{1}_{1}B^{1}_{p_{b}}(P^{0}_{1}+P^{2}_{1})\otimes\left|1\right\rangle_{2}\left({}_{2}\langle 0|+{}_{2}\langle 2|\right)\\ &\mapsto\frac{1}{\sqrt{2}}(P^{0}_{1}+P^{2}_{1})B^{1}_{p_{b}}P^{1}_{1}\otimes\left(P^{0}_{2}+P^{2}_{2}\right)B^{1}_{p_{s}}P^{1}_{2}\\ &\quad+\frac{1}{\sqrt{2}}P^{1}_{1}B^{1}_{p_{b}}(P^{0}_{1}+P^{2}_{1})\otimes P^{1}_{2}B^{1}_{p_{s}}\left(P^{0}_{2}+P^{2}_{2}\right)\end{split} \tag{30}\] The result is the product of \(B^{1}_{p_{b}}\) and \(B^{1}_{p_{s}}\) projected onto the five dimensional subspace of the principal edge, as promised. This works for all side faces. Similar calculations can be carried out for the first two terms in Eq. (28). If we put everything together and omit the projection onto the five dimensional subspace of the principal edges, we see the Hamiltonian terms in Eq. (28) become \[\left(B^{0}_{p_{b}}+B^{2}_{p_{b}}\right)\prod_{p_{s}}\left(B^{0}_{p_{s}}+B^{2}_{p_{s}}\right)+B^{1}_{p_{b}}B^{1}_{p_{t}}\prod_{p_{s}}B^{1}_{p_{s}}, \tag{31}\] which is a sum over the desired plaquette terms on the bottom and side faces as well as the cube term on the cube. In the RG circuit to be discussed in the next section, we need to grow cubes in the same row at the same time. This works in a similar way as growing a single cube and we describe the procedure here. First, as shown in Fig. 17 which illustrates the situation with two cubes in the row, a new plane is added which bisects the row of cubes into two. Octahedrons are added to the intersection points to preserve the trivalent structure in the coupled \(xy\), \(yz\) and \(zx\) planes. The 'copying' process illustrated in Fig. 15 is then used to restore vertex rules at the vertices of the octahedrons and then the square plaquettes in the octahedrons are added to the cage-net wave-function. Figure 17: Adding a row of cubes to the cage-net state, step \(1\): the inserted \(xy\)-plane bisects the cubes into two; octahedrons are added at the intersection point. Figure 18: Adding a row of cubes to the cage-net state, step \(2\): (a) first, we simultaneously add loops to the bottom and the top faces of all cubes in the row; (b) use the edges on the side face (colored green) as control to add loops to all the side faces at the same time. The next step is illustrated in Fig.
18, which adds cage structures to a whole row of cubes at the same time. This is done by first picking the principal edge in, for example, the \(x\) direction and use them as controls to add loops in the bottom and top faces as described above for each cube in the row (Fig. 18 (a)). The operations in each cube commute with that in another cube, and hence they can be done all at the same time. Next, loops are added to the side faces using the principal edges on the bottom face as control, as shown in Fig. 18 (b). Again, the operations on each side face commute with each other, so they can be done at the same time. As a result of this process, all the cubes in the row are now added to the cage-net wavefunction. Note that the process illustrated in Fig. 18 applies to the first row in the added plane. When we try to add subsequent rows, some of the side faces would have been added to the cage-net state already. Those side faces can be treated in the same way as the top face. That is, apply \(B_{p_{s}}^{1}\) in step Fig. 18 (a) when the \(x\)-principal edge is in the state \(\ket{10}\), instead of applying \(\sum_{\alpha_{s}}G_{p_{s}}^{\alpha_{s}}\) controlled by the bottom principal edge of the side face in the state \(\ket{s\alpha_{s}}\). A similar procedure applies to the cubes in the last row of the added plane as well, which have to be added one by one. #### v.2.2 RG circuit - Ising cage-net The processes for adding single cubes and a row of cubes are building blocks for the full RG circuit that adds a full plane to the cage-net state. Similar to the case of the doubled-Ising, we first need to initialize the added plane into proper eigenstates of the non-local logical operators before adding the local structures of cubic cages (plaquettes in the case of doubled-Ising). A commuting set of logical operators of the Ising cage-net ground space can be chosen to be generated by the string-operators of \(\psi,\bar{\psi}\) planons in each \(\mu\nu\) plane along the \(\mu\) and \(\nu\) directions respectively. We can choose the original cage-net state (before adding the plane) to be an eigenstate of all such logical operators. The added \(xy\) plane can be initialized into an eigenstate of \(\psi^{x}\), \(\psi^{y}\), \(\bar{\psi}^{x}\) and \(\bar{\psi}^{y}\) on that plane. The circuit described in the last section on how to add cubic cages and plaquette terms to the wave-function does not affect these nonlocal logical operators. Therefore, the resulting cage-net state after the RG circuit remains an eigenstate of all the \(\psi,\bar{\psi}\) logical operators. But the choice of the eigenvalue for the \(\psi,\bar{\psi}\) logical operators is not arbitrary as the operators are related to each other and hence their eigenvalues are constrained. In Ref. [19], we study carefully the relations among these operators, which allowed us to derive the ground state degeneracy of the Ising cage-net model. The relations are listed below. For derivation, see the discussion in section VII of Ref. [19]. 
For \(\{\mu,\nu,\lambda\}=\{x,y,z\}\), \[\prod_{i}\left(\psi\bar{\psi}\right)_{\mu\lambda}^{\mu}(\nu=i)\prod_{j}\left(\psi\bar{\psi}\right)_{\nu\lambda}^{\nu}(\mu=j)=1\] \[r_{\mu\nu}(\lambda=i)\bar{r}_{\mu\nu}(\lambda=i)=1,\forall i,\forall\{\mu,\nu\}\] \[r_{\mu\nu}(\lambda=i)r_{\mu\nu}(\lambda=i+1)=1,\forall i,\forall\{\mu,\nu\} \tag{32}\] where \(r_{\mu\nu}=\frac{1}{2}\left(1+\psi_{\mu\nu}^{\mu}+\psi_{\mu\nu}^{\nu}-\psi_{\mu\nu}^{\mu}\psi_{\mu\nu}^{\nu}\right)\), \(\bar{r}_{\mu\nu}=\frac{1}{2}\left(1+\bar{\psi}_{\mu\nu}^{\mu}+\bar{\psi}_{\mu\nu}^{\nu}-\bar{\psi}_{\mu\nu}^{\mu}\bar{\psi}_{\mu\nu}^{\nu}\right)\). As we started from a ground state of the cage-net model, the original set of \(\psi,\bar{\psi}\) operators satisfies the relations in Eq. (32). When we add a new \(xy\)-plane, we need to make sure that after the new \(\psi_{xy}^{x}\), \(\psi_{xy}^{y}\), \(\bar{\psi}_{xy}^{x}\), \(\bar{\psi}_{xy}^{y}\) operators are added to the original set, the total set still satisfies the relations in Eq. (32). This can be guaranteed when the added string-operators satisfy \[\psi_{xy}^{x}\bar{\psi}_{xy}^{x}=1,\ \psi_{xy}^{y}\bar{\psi}_{xy}^{y}=1 \tag{33}\] \[r_{xy}=\bar{r}_{xy}=\pm 1 \tag{34}\] The choice of \(\pm 1\) in the last relation depends on whether \(r_{xy}(z=i)=1\) or \(-1\) in the original set. Compared to the eigenstates listed in Appendix C.1, \(|\Psi_{\rm min}^{\rm D.I.}\rangle_{1}\), \(|\Psi_{\rm min}^{\rm D.I.}\rangle_{5}\), \(|\Psi_{\rm min}^{\rm D.I.}\rangle_{9}\) satisfy the relations in Eq. (33) and \(r_{xy}=1\) while \(|\psi\bar{\psi}_{\rm min}^{\rm D.I.}\rangle\) satisfies the relations in Eq. (33) and \(r_{xy}=-1\). Therefore, we can initialize the added layer into one of these states. In particular, consider the added \(xy\)-plane in Fig. 19. Each red ball represents an octahedron. The added DOF are initially set to be either in state \(\ket{0}\) (on edges of the octahedron) or \(\ket{00}\) (on principal edges). Now initialize the trivalent lattice in the \(xy\)-plane into one of \(|\Psi_{\rm min}^{\rm D.I.}\rangle_{1}\), \(|\Psi_{\rm min}^{\rm D.I.}\rangle_{5}\), \(|\Psi_{\rm min}^{\rm D.I.}\rangle_{9}\) and \(|\psi\bar{\psi}_{\rm min}^{\rm D.I.}\rangle\) following the procedure described in Fig. 12. This linear depth process sets the stage for the next step of the RG circuit: adding cage structures to the cubes. Figure 19: Inserting an \(xy\)-plane into the original cage-net lattice. Each red ball represents an octahedron. The new principal edges are shown in blue. Now we can use the procedure described in the last section to add cage structures to the cubes. As shown in Fig. 20, on top of the minimal structure set up in the initialization step (red lines), cage structures are added to the cubes in the 1st row, the 2nd row,... the \((L_{y}-1)\)th row in each step. In the last row, cage structures are added to the cube in the 1st column, 2nd column,..., \((L_{x}-1)\)th column in each step. No action is required in the last cube. This process has depth \(\sim(L_{x}+L_{y})\) and completes the addition of a new layer into the cage-net wave-function. ## VII Relating condensation and linear-depth circuits via gapped boundaries ### General discussion In Sec. V, we discussed the RG process in terms of condensation of planons. In Sec. VI, we discussed the RG process in terms of a linear depth circuit. In this section, we show that these two are closely related to each other by understanding each in terms of gapped boundaries.
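Before proceeding, a brief aside on the constraints of Eqs. (32)–(34) above (our own check, not from the paper): for any pair of commuting \(\pm 1\)-valued operators standing in for \(\psi_{\mu\nu}^{\mu}\) and \(\psi_{\mu\nu}^{\nu}\), the combination \(r=\frac{1}{2}(1+\psi^{\mu}+\psi^{\nu}-\psi^{\mu}\psi^{\nu})\) squares to the identity, so the conditions \(r_{xy}=\bar{r}_{xy}=\pm 1\) are well posed. Below we simply use \(Z\otimes I\) and \(I\otimes Z\) on two qubits as stand-ins; this checks only the algebra of \(r\), not the actual string operators on the cage-net Hilbert space.

```python
import numpy as np

Z = np.diag([1.0, -1.0])
I2 = np.eye(2)
psi_mu = np.kron(Z, I2)          # stand-in for psi^mu
psi_nu = np.kron(I2, Z)          # stand-in for psi^nu

r = (np.eye(4) + psi_mu + psi_nu - psi_mu @ psi_nu) / 2

assert np.allclose(r @ r, np.eye(4))   # r^2 = 1, so r has eigenvalues +/-1
print(np.diag(r))                      # [1, 1, 1, -1]: r = -1 only when psi^mu = psi^nu = -1
```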
We first consider a gapped boundary between a 2D topological order and vacuum. If an excitation moves from the bulk to the boundary, it may become trivial in the sense that it can be destroyed by a local operator on the boundary. This phenomenon is referred to as condensation at the boundary. On the other hand, some excitations remain non-trivial as they approach the boundary. These phenomena can be characterized precisely in a category-theoretic language [32; 33; 34; 35]; in the abelian case, this amounts to specifying a maximal subset of bosons that can simultaneously condense at the boundary [36; 37; 38; 39]. It is believed the universality class of a gapped boundary is fully determined by its category-theoretic characterization. The above discussion allows us to _define_ distinct types of anyon condensation (to vacuum) in a precise way, as distinct types of gapped boundaries (to vacuum). Such a definition is natural if we view the vacuum as a condensate of certain anyons in the 2D topological order. For instance, creating a puddle of anyon condensate within the bulk 2D topological order amounts to creating a puddle of trivial state (vacuum) separated from the bulk by a gapped boundary. This discussion, and the definition of anyon condensation in terms of gapped boundaries, can be generalized to gapped boundaries between arbitrary 2D topological orders. In the context of generalized foliated RG, we consider condensation of planons. Condensation of a single planon can similarly be associated with - and defined in terms of - certain gapped boundaries between two fracton orders, with the property that the boundary should be transparent to mobile excitations away from the selected plane where the condensation occurs. It will be an interesting problem for future work to fully characterize those boundaries between fracton phases that correspond to planon condensation. We note that there has been some related prior work discussing gapped boundaries of fracton models in terms of condensation [40; 41]. It turns out that the kind of linear-depth circuits considered here can also be associated with a type of gapped boundary. A linear depth circuit has the general form \(\mathcal{U}=\prod_{\ell=1}^{K}U_{\ell}\) where each layer \(U_{\ell}\) consists of a number of local unitary gates with non-overlapping support, and the number of layers \(K\) is proportional to the linear system size \(L\). In general, \(U_{\ell}\) can contain gates acting across the entire system. However, for the circuits we employed for RG, each layer \(U_{\ell}\) only contains gates acting in a lower dimensional subsystem of the entire system, such as the rows in Figs. 13 and 20. Such circuits are much more restrictive than generic dense linear-depth circuits, particularly because they preserve the area law when acting on a state. We call this class of circuits _sequential circuits_. Again we first focus on the 2D case, where as we have discussed, sequential circuits can be used to generate topologically ordered ground states from an initial product state (the topological "vacuum"). In order to avoid complications associated with periodic boundary conditions, we make a simplification as compared to the circuits discussed in Sec. VI; namely, we work with an infinite system and consider circuits that generate a disc Figure 20: Adding cage structures to the cubes in step 2 of the RG circuit for the cage-net state. The red lines indicate the minimal lattice state determined by the initialization step. 
Cage structures are added to the cubes in the 1st row, the 2nd row,... the \((L_{y}-1)\)th row in each step. In the last row, cage structures are added to the cube in the 1st column, 2nd column,..., \((L_{x}-1)\)th column in each step. No action is required in the last cube. of 2D topological order from vacuum. If desired, the size of the disc can later be taken to infinity. This allows us to drop the initialization step, whose role is to take care of the non-trivial ground state degeneracy on a 2-torus. We can also drop the final linear-depth sequence of gates needed to stitch two gapped boundaries together in a manner consistent with periodic boundary conditions. With these simplifications, the circuits operate in the following way. We slice the 2D space into 1D concentric circles surrounding the center of the disc, and order these subspaces according to their radial coordinate. The \(\ell\)th layer of the circuit is assumed to be supported near (but not entirely within) the \(\ell\)th circle. After applying some number of layers of the circuit, one is left with a disc of topological order which has a gapped boundary to the vacuum region which has not yet been acted on by the circuit. Then, the next layer in the circuit acts only within the vicinity of the one-dimensional gapped boundary between the topological order and the vacuum. The action of the unitary in this layer is to "grow" the topological order by a small amount, pushing the gapped boundary further into the vacuum region. Continuing in this way allows one to grow the topologically ordered region arbitrarily. Based on the above, given a sequential circuit, we can associate the universality class of the gapped boundary to vacuum which emerges when the circuit is truncated at some radius. This association is well-defined in the following sense. We can define a truncation of the circuit \(\bar{\mathcal{U}}=\prod_{\ell=1}^{K_{0}}U_{\ell}\) where \(K_{0}<K\). This will create a disc of topological order with a particular gapped boundary to vacuum. Now, consider a different truncation \(\bar{\mathcal{U}}^{\prime}=\prod_{\ell=1}^{K_{0}}V_{\ell}\) where each \(V_{\ell}\) again consists of non-overlapping gates such that \(V_{\ell}=U_{\ell}\) for \(\ell\) sufficiently less than \(K_{0}\), but the layers near the boundary may differ. By definition, the two truncated circuits differ only by a finite-depth circuit near the boundary. But a 1D finite depth circuit cannot change the universality class of the gapped boundary, _i.e._ it cannot change the set of anyons which can condense on the boundary. So the gapped boundary type is independent of how the sequential circuit is truncated. We note that this conclusion only holds for truncations that are compatible with the 1D layer structure of concentric circles; the key property is that the truncation only cuts through a finite number of 1D layers, which is bounded above as the size of the disc increases. We emphasize that this discussion can be generalized to gapped boundaries between two different 2D topological orders. That is, given two topological orders referred to as A and B that admit a gapped boundary, an A-ground-state can be converted into a B-ground-state by applying a sequential circuit. Or, if we apply a truncated version of the same sequential circuit, we can create a puddle of B within the bulk topological order A, separated by a gapped boundary whose universality class does not depend on how the circuit is truncated.
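The notion of a sequential circuit and of truncating it along its layer structure can be illustrated in a much simpler 1D toy example (ours, not the cage-net circuit): a CNOT staircase that grows a GHZ-ordered region one site per layer. Truncating after \(K_{0}\) layers leaves the first \(K_{0}+1\) qubits ordered and the remaining qubits exactly in the product "vacuum", the 1D analogue of a disc of topological order with a boundary to vacuum.

```python
import numpy as np

n = 6
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

def apply(op, qubits, state, n):
    """Apply a gate `op` to consecutive qubits (starting at qubits[0]) of an n-qubit state."""
    k = len(qubits)
    full = np.kron(np.kron(np.eye(2 ** qubits[0]), op), np.eye(2 ** (n - qubits[0] - k)))
    return full @ state

state = np.zeros(2 ** n); state[0] = 1.0           # |00...0>, the product "vacuum"
state = apply(H, [0], state, n)
K0 = 3                                             # truncate the sequential circuit after K0 layers
for layer in range(K0):                            # layer l entangles qubit l+1 into the ordered region
    state = apply(CNOT, [layer, layer + 1], state, n)

# The first K0+1 qubits form a GHZ state; the remaining qubits are still exactly |0>.
amps = {format(i, f"0{n}b"): a for i, a in enumerate(state) if abs(a) > 1e-12}
print(amps)   # {'000000': 0.707..., '111100': 0.707...}
```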
In formulating the generalized foliated RG in terms of quantum circuits, we apply sequential circuits within 2D layers of a 3D fracton model. Truncating such a sequential circuit (along its 1D layer structure) results in a gapped boundary between two different fracton orders, where some of the mobile excitations may condense along the layer where the circuit is applied. This is how we described planon condensation above, and thus we propose that planon condensation and applying 2D sequential circuits are different ways to realize the same operation in generalized foliated RG. ### Condensation in the Ising cage-net circuit In accordance with the above discussion, we now identify the type of gapped boundary that is associated with the sequential circuits used to create the Ising cage-net model. To accomplish this, we are going to apply the circuit only to a finite disc-shaped region within a plane; we will not take the limit that the size of the disc goes to infinity. Inside the region, we get the fracton order as expected. Outside of the region, the added degrees of freedom remain unentangled. There is a gapped boundary between the two sides. We show that the gapped boundary and the region outside can be obtained by condensing bosonic planons starting from a complete fractonic state. First, let's see how a similar relation works in the doubled-Ising string-net state. We imagine a very large disc of string-net state, and we ignore the curvature of the disc's boundary to simplify the following discussion. Recall that in the RG circuit, the plaquettes are added row by row. Suppose that we stop the process at row \(i\). The boundary between row \(i\) and row \(i+1\) is a smooth boundary on the lattice. As the Hamiltonian terms remain commuting throughout the process, the boundary is gapped. The gapped boundary can be induced by the condensation of 'fluxon excitations' [22], \(\psi\bar{\psi}\) and \(\sigma\bar{\sigma}\), on the boundary and beyond. To see that, consider a string-operator of the form shown in Fig. 21, which consists of a string segment above the lattice and a parallel segment under the lattice, with the two connected by segments that vertically go through the lattice plane. Note that, while the string-operator is a closed loop when embedded in the 3D space, from the 2D perspective it ends at the locations where the string goes through the lattice plane and can create excitations at those points. In particular, such string-operators in general violate the plaquette term at their ends, as the plaquette terms correspond to a loop operator that links with the string-operator and the linking generates nontrivial action. Therefore, in the bulk of the string-net state, the string-operator generates 'fluxon excitations' at its ends. In the doubled-Ising model, there are two string-operators of this type, corresponding respectively to a loop of string type 1 and a loop of string type 2. The two string-operators generate the \(\psi\bar{\psi}\) and \(\sigma\bar{\sigma}\) excitations, respectively. If the string-operator ends (goes vertically through the lattice plane) outside of the smooth boundary (Fig. 21), there are no more plaquette terms to violate and the string-operator does not generate any excitations. Detailed calculations can be found in Appendix C.2. Therefore, the \(\psi\bar{\psi}\) and \(\sigma\bar{\sigma}\) excitations condense on the boundary and beyond, thus demonstrating the connection between anyon condensation and the linear depth circuit for the doubled-Ising string-net state.
The situation is very similar in the Ising cage-net model. The RG circuit is again implemented row by row in a sequential manner. Suppose that we stop the process at row \(i\), there will be a gapped boundary between row \(i\) and row \(i+1\). As shown in Fig. 22, like for the string-nets, a vertical loop operator that goes through the lattice plane at two points generates planon excitations \(\psi\bar{\psi}\) and \(\sigma\bar{\sigma}\) in the bulk of the cage-net state (in rows \(j\leq i\)). Beyond row \(i\), however, it does not generate any excitations and hence the \(\psi\bar{\psi}\) and \(\sigma\bar{\sigma}\) are condensed. This agrees with the RG procedure driven by condensation described in Sec. V. Therefore, the process of sequential application in the linear depth circuit can be interpreted as moving the boundary between the cage-net state and the condensed state, hence enlarging or shrinking the fracton order in the plane. ## VIII Summary and discussion In this paper, we studied the renormalization group transformation for the Ising cage-net model and found that the system size of the Ising cage-net model can be decreased / increased by condensing / uncondensing planon excitations near a 2D plane, or correspondingly through a so-called sequential circuit which preserves the area law and whose depth scales with the linear size of the plane. We argued that these two ways of carrying out the RG are closely related through gapped boundaries. We call this procedure the generalized foliated RG, because the previously defined foliated RG, under which the X-cube and related models are fixed points,[9] fits into this new definition as a special case. On the one hand, the system size of the X-cube can be decreased / increased by condensing / uncondensing a lineon dipole or fracton dipole on a given plane (both these excitations are planons). Or, the RG procedure can be carried out with a linear depth circuit in the same plane. One way to construct the linear depth circuit is to use the finite depth circuit discussed for the original foliation scheme[9] to decouple a layer of toric code out of the X-cube model, and then disentangled the toric code into product state Figure 21: Condensation of the \(\psi\bar{\psi}\) and the \(\sigma\bar{\sigma}\) fluxons on the smooth boundary of the doubled-Ising model. The vertex details are omitted. The dashed lines represent the unentangled edges. An open ended fluxon string-operator is constructed from a loop of \(s\)-string that passes through the lattice plane vertically at a plaquette. If the plaquette (for example, the one labeled \(p\)) lies within the doubled-Ising region, it creates a fluxon excitation. If the plaquette (for example, the one labeled \(p^{\prime}\)) falls outside the string-net region, then no excitation is generated. Thus, all fluxons condense on the smooth boundary. For computational details on the condensation, see Appendix C.2. Figure 22: Condensation of the \(\psi\bar{\psi}\) and the \(\sigma\bar{\sigma}\) fluxon excitations in the half \(xy\)-plane (shown in blue) in the Ising cage-net. If the end of the the fluxon string operator falls within the Ising cage-net region (for example at the plaquette \(p\)), a fluxon excitation is created. If the end falls outside of the Ising cage-net region (for example at the plaquette \(p^{\prime}\)), then no excitation is generated. Therefore, both \(\psi\bar{\psi}\) and \(\sigma\bar{\sigma}\) planons condense on the boundary. with a linear depth circuit. 
Altogether this is a linear depth circuit. Alternatively, we can use a circuit similar to that discussed in Sec. VI to remove cage structures in a plane row by row and hence removing a plane from the X-cube model. On the other hand, the generalized foliated RG allows a broader class of RG operations. Indeed, the Ising cage-net model is not a fixed point of the original foliated RG as can be seen from its ground state degeneracy calculation [19]. We recall that the original foliated RG led to an associated notion of foliated fracton phases (see Appendix A for a definition), with the key property that two systems related by a foliated RG operation lie within the same foliated fracton phase. Similarly, we expect that there exists a notion of _generalized foliated fracton phase_ (GFF phase), again with the key property that two systems related by a generalized foliated RG operation lie in the same GFF phase. GFF phases should be a coarser equivalence relation on quantum systems than foliated fracton phases, because a broader class of RG operations are allowed. We do not currently know how to give a definition of GFF phases along the lines of those in Appendix A; however, one possibility is to give a definition based on circuit equivalence of ground states, where one allows certain linear depth circuits supported on planes. In Sec. IV, we pointed out that the original foliated RG contains certain unnatural restrictions, while the generalized foliated RG seems to be more natural. Therefore, we expect that GFF phases are correspondingly a more natural concept than foliated fracton phases as originally defined, so it will be important to revisit what we have learned about foliated fracton phases. In particular, several invariants have been devised for foliated fracton phases as originally defined, including those based on fractional excitations and entanglement entropy [15; 10]. Now, with a new notion of GFF phases, we need to reconsider the question of what quantities remain invariant under the new equivalence relation, and which models belong to the same GFF phase and which do not. For example, we can ask whether the twisted foliated fracton model proposed in Ref. [13] is still in a different phase than the X-cube model or not under the new definition. Finally, we want to comment that the generalized foliation defined in this paper makes the discussion of type I fracton models more in-line with that of Subsystem Symmetry Protected Topological (SSPT) phases with planar symmetry in e.g. Ref. [42; 43; 44]. In the definition of'strong SSPT' in these papers, when a decoupled layer with planar symmetry is added to the bulk of the system, the planar symmetry can be combined with an existing planar symmetry in the system, which corresponds to the condensation of the composite of the symmetry charges from the decoupled plane and a planar symmetry charge in the bulk of the system. The'strong SSPT' orders discussed in these papers hence may become nontrivial (twisted) foliated fracton orders when the planar symmetries are gauged. ###### Acknowledgements. We are indebted to inspiring discussions with Dave Aasen, Kevin Slagle, Nathan Seiberg, and Dominic Williamson, and helpful correspondence with Fiona Burnell and Michael Levin. Z.W., X.M. and X.C. are supported by the National Science Foundation under award number DMR-1654340, the Simons Investigator Award (award ID 828078) and the Institute for Quantum Information and Matter at Caltech. X.C. 
is also supported by the Walter Burke Institute for Theoretical Physics at Caltech. The research of MH is supported by the U.S. Department of Energy (DOE), Office of Science, Basic Energy Sciences (BES) under Award number DE-SC0014415. This work is also partly supported by the Simons Collaboration on Ultra-Quantum Matter, which is a grant from the Simons Foundation (651438, XC and ZW; 651440, MH and DTS). The work of MH on general aspects of the generalized foliated RG (Sections IV, V and VII) was supported by the DOE BES project, while his work on the RG in the Ising cage-net model (Sec. VI) was supported by the Simons Foundation. X.C. wants to thank the Institute for Advanced Study at Tsinghua University for hospitality when the paper was written.
A large class of type-I fracton models, including the X-cube model, are fixed points of a foliated renormalization group (RG). The system size of such foliated models can be changed by adding or removing decoupled layers of 2D topological states and continuously deforming the Hamiltonian. In this paper we study a related model, the Ising cage-net model, which is not foliated in this sense. In fact, we point out certain unnatural restrictions in the foliated RG, and lifting these restrictions leads to a generalized foliated RG under which the Ising cage-net model becomes a fixed point. The generalized foliated RG contains the original foliated RG as a special case. The Ising cage-net model is a prototype of the generalized foliated RG, and its system size can be changed by condensing / uncondensing bosonic planon excitations near a 2D plane, or, within the same plane,
2309.05881
On the pre- and post-positional semi-random graph processes
We study the semi-random graph process, and a variant process recently suggested by Nick Wormald. We show that these two processes are asymptotically equally fast in constructing a semi-random graph $G$ that has property ${\mathcal P}$, for the following examples of ${\mathcal P}$: - ${\mathcal P}$ is the set of graphs containing a $d$-degenerate subgraph, where $d\ge 1$ is fixed; - ${\mathcal P}$ is the set of $k$-connected graphs, where $k\ge 1$ is fixed. In particular, our result of the $k$-connectedness above settles the open case $k=2$ of the original semi-random graph process. We also prove that there exist properties ${\mathcal P}$ where the two semi-random graph processes do not construct a graph in ${\mathcal P}$ asymptotically equally fast. We further propose some conjectures on ${\mathcal P}$ for which the two processes perform differently.
Pu Gao, Hidde Koerts
2023-09-12T00:04:30
http://arxiv.org/abs/2309.05881v1
# On the pre- and post-positional semi-random graph processes ###### Abstract We study the semi-random graph process, and a variant process recently suggested by Nick Wormald. We show that these two processes are asymptotically equally fast in constructing a semi-random graph \(G\) that has property \(\mathcal{P}\), for the following examples of \(\mathcal{P}\): * \(\mathcal{P}\) is the set of graphs containing a \(d\)-degenerate subgraph, where \(d\geq 1\) is fixed; * \(\mathcal{P}\) is the set of \(k\)-connected graphs, where \(k\geq 1\) is fixed. In particular, our result of the \(k\)-connectedness above settles the open case \(k=2\) of the original semi-random graph process. We also prove that there exist properties \(\mathcal{P}\) where the two semi-random graph processes do not construct a graph in \(\mathcal{P}\) asymptotically equally fast. We further propose some conjectures on \(\mathcal{P}\) for which the two processes perform differently. ## 1 Introduction The semi-random graph process is a single-player game initially suggested by Peleg Michaeli, and formally introduced by Ben-Eliezer, Hefetz, Kronenberg, Parczyk, Shikhelman and Stojakovic [4]. In this game, a graph is iteratively constructed from an empty graph on \(n\) vertices, denoted by \([n]=\{1,2,\ldots,n\}\). Every round, one edge is added to the graph. The first end-vertex of the edge, \(u\), is chosen uniformly at random (u.a.r.) from all the vertices in \([n]\). Given the choice of \(u\), the other end-vertex \(v\) is chosen strategically by the player (either deterministically, or by some random strategy). The semi-random graph process is part of a larger category of random processes where a player has limited power of choice among a set of random options. This category of combinatorial random processes traces its origins to the work of Azar, Broder, Karlin and Upfal [1] on placing \(n\) balls into \(n\) bins. They showed that if the player can choose from two u.a.r. selected bins rather than just one, there exists a strategy to decrease the expected number of balls in the fullest bin by an exponential factor. Similar load-balancing schemes have been investigated by Mitzenmacher [16]. Another well-known example of such random processes is the so-called Achlioptas graph process, suggested by Dimitris Achlioptas during a Fields Institute workshop. Instead of adding a single edge picked u.a.r. every round as in the classical Erdos-Renyi random graph process [6], he suggested that every round the player is offered \(k\geq 2\) such edges, and one of the \(k\) edges can be chosen and added to the graph. The Achlioptas graph process was first investigated by Bohman and Frieze [5], who showed that allowing the player to choose from \(k\geq 2\) edges enables the player to delay the appearance of a giant component. In the seminal paper on the semi-random graph process, Ben-Eliezer, Hefetz, Kronenberg, Parczyk, Shikhelman and Stojakovic [4] provided asymptotic upper and lower bounds on the number of rounds needed to achieve certain objectives a.a.s. (asymptotically almost surely, see Section 2 for a precise definition), including having minimum degree at least \(k\geq 1\), having clique number \(k\geq 1\), and being \(k\)-connected. Additionally, they demonstrated how the semi-random graph process can be used to model other random graph models. Specifically, they established how to couple the semi-random process to the Erdos-Renyi random graph model, the \(k\)-out model, and the min-degree process.
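To make the rules of a round concrete, here is a minimal Python sketch of the original (post-positional) process with a pluggable player strategy. The helper names and the toy strategy are illustrative assumptions of ours, not part of the paper.

```python
import random

def post_positional_round(adj, strategy):
    """One round on vertex set {0, ..., n-1}: the square u is chosen u.a.r.,
    the circle v is chosen by the player and may depend on u."""
    n = len(adj)
    u = random.randrange(n)      # square: uniformly random vertex
    v = strategy(adj, u)         # circle: player's (possibly random) choice
    adj[u].add(v)                # loops and multi-edges are allowed;
    adj[v].add(u)                # sets keep only the underlying simple graph
    return u, v

def toy_strategy(adj, u):
    """Illustrative strategy only: connect u to a currently lowest-degree vertex."""
    return min(range(len(adj)), key=lambda w: len(adj[w] - {w}))

adj = [set() for _ in range(100)]
for _ in range(300):
    post_positional_round(adj, toy_strategy)
```

In the pre-positional variant discussed below, the only change is that the player's choice must be made before \(u\) is revealed, i.e. the strategy may not take \(u\) as an input.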
Further research by Behague, Marbach, Pralat and Rucinski [2] gave tight asymptotic bounds for the minimum number of rounds needed to construct a graph that contains a subgraph isomorphic to a fixed graph \(H\) based on the degeneracy of \(H\). Moreover, they generalised the semi-random graph process to hypergraphs, and similarly showed tight bounds for constructing a fixed \(s\)-uniform hypergraph. In terms of spanning subgraphs, Ben-Eliezer, Gishboliner, Hefetz and Krivelevich [3] showed that one can construct any fixed bounded-degree spanning subgraph a.a.s. in linear time. Moreover, MacRury, Pralat and the first author [10, 11, 12] obtained bounds on the minimum number of rounds needed to construct a graph with a perfect matching or a Hamilton cycle. The upper bound on the minimum number of rounds required for the construction of a Hamiltonian graph was further improved by Frieze and Sorkin [7]. Recently, Gamarnik, Kang and Pralat [9] have found bounds for the number of rounds needed to force the appearance of cliques and independent sets, and to ensure the graph has at least a given chromatic number. Pralat and Singh [18] have recently also considered the properties of minimum degree, the existence of a perfect matching and the existence of a Hamilton cycle in a generalisation of the semi-random graph process, where each round the player is presented with \(k\) random vertices. ### Two semi-random processes Recently, Nick Wormald proposed (via personal contact) an alternative version of the semi-random graph process. Instead of the first vertex being chosen u.a.r. in each round and the second vertex being chosen according to some strategy, he proposed switching this order. That is, the first vertex in each round is chosen strategically by the player, whereas the second vertex is chosen u.a.r. We refer to this new model as the _pre-positional semi-random graph process_, and the original model by Ben-Eliezer, Hefetz, Kronenberg, Parczyk, Shikhelman and Stojakovic [4] as the _post-positional semi-random graph process_. By a simple coupling argument, it is easy to see that the post-positional process can construct a graph in \(\mathcal{P}\) at least as fast as the pre-positional process, for any graph property \(\mathcal{P}\) (See Lemma 3.1 in Section 3). The interesting question arising from comparing these two processes is whether the post-positional process significantly outperforms the pre-positional process in constructing a member of \(\mathcal{P}\). Perhaps a little surprisingly, for quite a few properties \(\mathcal{P}\), these two processes perform equally well. However, we also give an example of \(\mathcal{P}\) where the post-positional process constructs a graph in \(\mathcal{P}\) significantly faster. ### Main results Our first main result concerns the minimum number of rounds required to construct a \(k\)-connected graph. **Theorem 1.1**.: _Let \(k\geq 1\) be fixed. For every \(\epsilon>0\), a.a.s. there exists a real number \(\alpha_{k}\) such that the following hold:_ 1. _no strategy in a post-positional or pre-positional semi-random graph process can construct a_ \(k\)_-connected graph in at most_ \((\alpha_{k}-\epsilon)n\) _rounds;_ 2. _there exist strategies in post-positional and pre-positional semi-random graph processes that construct a_ \(k\)_-connected graph in at most_ \((\alpha_{k}+\epsilon)n\) _rounds._ **Remark 1.2**.: 1.
_Theorem_ 1.1 _was proved to be true for the post-positional semi-random graph process for_ \(k\geq 3\) _by Ben-Eliezer, Hefetz, Kronenberg, Parczyk, Shikhelman, and Stojakovic_ _[_4_]__. We settled the open case_ \(k=2\)_._ 2. _The theorem confirms that the two variants of the semi-random graph process perform asymptotically equally well on constructing_ \(k\)_-connected graphs._ 3. _The constant_ \(\alpha_{1}\) _is set to be 1. The real numbers_ \(\alpha_{k}\) _for_ \(k\geq 2\) _are derived from a system of differential equations and follow from applying Wormald's differential equation method (See [20]). The first few values are_ \[\alpha_{2} =\ln 2+\ln(1+\ln 2),\] \[\alpha_{3} =\ln((\ln 2)^{2}+2(1+\ln 2)(1+\ln(1+\ln 2))),\] _as calculated in [14]._ Next, we show that the two processes perform equally well in constructing graphs with a given small subgraph. A graph \(H\) is said to be \(d\)-degenerate if each subgraph of \(H\) contains a vertex of degree at most \(d\). In their seminal paper, Ben-Eliezer, Hefetz, Kronenberg, Parczyk, Shikhelman, and Stojakovic [4] considered the number of rounds needed to construct a fixed size \(d\)-degenerate graph as a subgraph. They showed the following upper bound in the post-positional process. **Theorem 1.3** ([3, Theorem 1.10]).: _Let \(H\) be a fixed \(d\)-degenerate graph, and let \(f:\mathbb{N}\to\mathbb{R}\) be a function such that \(\lim_{n\to\infty}f(n)=\infty\). Then there exists a strategy in the post-positional process such that the resulting graph \(G\) contains a subgraph isomorphic to \(H\) in a.a.s. \(f(n)\cdot n^{(d-1)/d}\) rounds._ They conjectured that this bound is tight if \(d\geq 2\), which was subsequently shown by Behague, Marbach, Pralat, and Rucinski [2]. We show that the same bounds hold for the pre-positional process. **Theorem 1.4**.: _Let \(H\) be a fixed \(d\)-degenerate graph, and let \(f:\mathbb{N}\to\mathbb{R}\) be a function such that \(\lim_{n\to\infty}f(n)=\infty\). Then a.a.s. the following hold:_ 1. _If_ \(d\geq 2\)_, no strategy in a post-positional or pre-positional semi-random graph process can construct a graph containing a copy of_ \(H\) _in at most_ \(n^{(d-1)/d}/f(n)\) _rounds;_ 2. _there exist strategies in post-positional and pre-positional semi-random graph processes that construct a graph containing a copy of_ \(H\) _in at most_ \(f(n)\cdot n^{(d-1)/d}\) _rounds._ Theorem 1.4 immediately gives the following corollary. **Corollary 1.5**.: _Let \(H\) be a fixed graph containing a cycle, and \(f:\mathbb{N}\to\mathbb{N}\) a function such that \(\lim_{n\to\infty}f(n)=\infty\). Then a.a.s. the following hold:_ 1. _no strategy in a post-positional or pre-positional semi-random graph process can construct a graph containing an_ \(H\)_-minor in at most_ \(n^{1/2}/f(n)\) _rounds;_ 2. _there exist strategies in post-positional and pre-positional semi-random graph processes that construct a graph containing an_ \(H\)_-minor in at most_ \(f(n)\cdot n^{1/2}\) _rounds._ Proof.: For (a), it suffices to show that a.a.s. \(G_{t}\) is acyclic for \(t\leq n^{1/2}/f(n)\) in any post-positional process. Suppose \(G_{t}\) has a cycle. Considering only the edges (each of which joins a square and a circle) that make up the cycle, there must exist a square which lands on a vertex that has already received either a square or a circle earlier. 
For every \(t\leq n^{1/2}/f(n)\), the probability that \(u_{t}\) lands on a vertex that has already received a square or a circle (there are \(2(t-1)\) of them) is \(O(t/n)\) and hence, the probability that \(G_{t}\) contains a cycle for some \(t\leq n^{1/2}/f(n)\) is bounded by \(\sum_{t\leq n^{1/2}/f(n)}O(t/n)=o(1)\). Part (b) follows from considering the \(1\)-subdivision of \(H\), that is the graph obtained from \(H\) by subdividing each edge in \(E(H)\) exactly once, and noting that this subdivision is \(2\)-degenerate. The bound then directly follows from Theorem 1.4. Next, we investigate the performance of the two processes in constructing a graph containing a large bipartite subgraph. **Theorem 1.6**.: _Suppose \(m=o(n^{2})\). Let \(\mathcal{P}\) be the set of graphs on \([n]\) that contain a bipartite subgraph with \(m\) edges. Then, the minimum number of rounds required to construct a graph in \(\mathcal{P}\) is a.a.s. \((1+o(1))m\) in both the pre-positional and the post-positional processes._ While the proof for Theorem 1.6 is straightforward (See Section 6), it is interesting to see if the theorem fails to hold when \(m=\Theta(n^{2})\). We think containing a bipartite subgraph with \(\Omega(n^{2})\) edges might be an increasing property (see its definition in Section 2) for which the post-positional process outperforms the pre-positional process. However, proving it seems not an easy task. We make the following conjecture. **Conjecture 1.7**.: _Suppose \(m\geq cn^{2}\) for some fixed \(c>0\). Let \(\mathcal{P}\) be the set of graphs on \([n]\) that contain a bipartite subgraph with \(m\) edges. There exist \(\delta_{c},\eta_{c}>0\) such that a.a.s. there is a strategy in a post-positional process that constructs a graph in \(\mathcal{P}\) in less than \(\eta_{c}n^{2}\) rounds, whereas no strategy in a pre-positional process can construct a graph in \(\mathcal{P}\) within \((\eta_{c}+\delta_{c})n^{2}\) rounds._ Finally, we give an example of non-increasing \(\mathcal{P}\) where the two processes perform very differently. **Theorem 1.8**.: _Let \(\mathcal{P}\) be the set of multigraphs on \([n]\) that contain an induced simple \((n-1)\)-cycle. Then, a.a.s. no pre-positional process can produce a multigraph in \(\mathcal{P}\), whereas a.a.s. a post-positional process can construct a multigraph in \(\mathcal{P}\) in \(O(n\log n)\) rounds._ ## 2 Notation For a graph \(G\), we denote its vertex and edge sets by \(V(G)\) and \(E(G)\) respectively. We denote the degree of a vertex \(v\in V(G)\) in graph \(G\) by \(\deg_{G}(v)\). We use \(\delta(G)\) and \(\Delta(G)\) to denote the minimum and maximum degrees of a graph respectively. For a set \(S\subseteq V(G)\) of vertices, we use \(G[S]\) for the subgraph induced by set \(S\) in graph \(G\). The open and closed neighbourhoods of a vertex \(v\in V(G)\) in graph \(G\) will be denoted by \(N_{G}(v)\) and \(N_{G}[v]\) respectively. Both variants of the semi-random graph process are single-player games in which a multi-graph is iteratively constructed in a sequence of rounds. Because all the graph properties we consider are invariant under adding multi-edges and loops, we generally consider the underlying simple graph. Notably, we define the degree of a vertex in the multi-graph to be the number of distinct neighbours, not including itself. That is, \(\deg_{G}(v)=|N_{G}(v)\setminus\{v\}|\) for each vertex \(v\in V(G)\).
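In code terms, and continuing the illustrative adjacency-set representation from the sketch in the introduction, this degree convention is simply the following (hypothetical helper of ours):

```python
def degree(adj, v):
    # number of distinct neighbours of v, ignoring loops and multi-edges
    return len(adj[v] - {v})
```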
Moreover, we will use the previously introduced notation for simple graphs for the graphs generated by the process as well. In each round of the semi-random graph process (either variant), a single edge is added to the graph. We will denote the graph obtained after \(\ell\) rounds by \(G_{\ell}\). The initial graph, \(G_{0}\), is an empty graph with vertex set \([n]\). In the \(t^{\text{th}}\) round, we construct graph \(G_{t}\) from graph \(G_{t-1}\) as follows. Let \(u_{t}\) be a vertex picked u.a.r. from \([n]\). We say that vertex \(u_{t}\) is hit in round \(t\). We choose a vertex \(v_{t}\in[n]\) according to some strategy, and add edge \(u_{t}v_{t}\) to graph \(G_{t-1}\) to obtain graph \(G_{t}\). The strategy can be a function of \(u_{t}\) in the post-positional variant, and must be independent of \(u_{t}\) in the pre-positional variant. Note that if \(u_{t}=v_{t}\) the new edge is a loop, and if \(G_{t-1}\) already contained \(u_{t}v_{t}\) the new edge is a multi-edge. Thus, \(V(G_{t})=V(G_{t-1})=[n]\), and \(E(G_{t})=E(G_{t-1})\cup\{u_{t}v_{t}\}\). Additionally, we refer to \(u_{t}\) as a square, and \(v_{t}\) as a circle in round \(t\), as introduced by Gao, MacRury and Pralat [11]. Each edge in graph \(G_{t}\) then connects a square and a circle in the round that it is added. We denote a graph \(G\) having property \(\mathcal{P}\) by \(G\in\mathcal{P}\). We say that a graph property \(\mathcal{P}\) is _increasing_ if for every \(G\in\mathcal{P}\), \(H\in\mathcal{P}\) provided that \(G\subseteq H\). Note that by this definition, if \(G_{t}\in\mathcal{P}\) for some \(t>0\) and a monotone graph property \(\mathcal{P}\), it follows that \(G_{t^{\prime}}\in\mathcal{P}\) for all \(t^{\prime}\geq t\) as well. Except for the example in Theorem 1.8, all properties investigated in this paper are increasing properties. If \(\mathcal{P}\) is increasing, it is sufficient to construct a graph \(G_{t}\) which has a subgraph \(G^{\prime}\) in \(\mathcal{P}\). In some rounds, given \(G_{t-1}\) (and vertex \(u_{t}\) in the post-positional process), we may choose vertex \(v_{t}\) arbitrarily and not use the edge \(u_{t}v_{t}\) for the construction of \(G^{\prime}\). We will consider such a round a _failure round_. Allowing failure rounds in some cases leads to algorithms that are easier to analyse. In Section 7 where a non-increasing property is studied, we cannot simply ignore "undesirable" edges to make use of failure rounds. We say an event \(A=A_{n}\) occurs asymptotically almost surely (a.a.s.) in \(G_{t}\) if \(\mathbb{P}(A_{n})\to 1\) as \(n\to\infty\). Unless specified otherwise, all asymptotic notation relates to \(n\), i.e. \(o(1)\) implies a function that tends to \(0\) as \(n\to\infty\). ## 3 Pre- and post-positional processes In this section we prove that the post-positional process can construct a graph in \(\mathcal{P}\) at least as fast as the pre-positional process, for any graph property \(\mathcal{P}\). **Lemma 3.1**.: _Let \(n\geq 2\) and \(\mathcal{P}\subseteq 2^{\binom{[n]}{2}}\). For every \(t\geq 0\), the probability that there exists a pre-positional strategy to construct \(G_{s}\in\mathcal{P}\) for some \(s\leq t\) is at most the probability that there exists a post-positional strategy to construct \(G_{s}\in\mathcal{P}\) for some \(s\leq t\)._ Proof.: We can couple the two processes so that no matter which strategy the pre-positional process uses, the post-positional process can copy the moves and can stop the process at the same time as the pre-positional one. 
Let \((u_{i})_{i\geq 0}\) be a sequence of i.i.d. copies of \(u\) chosen u.a.r. from \([n]\). Present \(u_{i}\) as the \(i\)-th square for both processes. For each \(t\geq 1\), let \(v_{t}\) be the choice of the \(t\)-th circle by the pre-positional process. Note that the choice of \(v_{t}\) depends only on \(\{u_{i},v_{i},1\leq i\leq t-1\}\). The post-positional process simply copies the choices of \(v_{t}\) for every \(t\), which are valid moves given its definition. Thus, the two processes always terminate at the same time. Thanks to Lemma 3.1, it suffices to prove Theorem 1.1(a) and 1.4(a) for post-positional processes, and prove Theorem 1.1(b), 1.4(b) and Theorem 1.6 for pre-positional processes. We prove Theorem 1.1 in Section 4, Theorem 1.4 in Section 5, Theorem 1.6 in Section 6, and Theorem 1.8 in Section 7. ## 4 \(k\)-Connectivity: proof of Theorem 1.1 A connected graph \(G\) is said to be \(k\)-connected if it remains connected when removing fewer than \(k\) vertices. In their seminal paper, Ben-Eliezer, Hefetz, Kronenberg, Parczyk, Shikhelman and Stojakovic [4] provide tight asymptotic bounds for the minimum number of rounds needed in the post-positional process to produce a \(k\)-connected graph for all \(k\geq 3\). Their lower bounds follow directly by coupling with a well-known random graph process called the \(k\)-min process. By Lemma 3.1, these lower bounds are valid for the pre-positional process as well. As a warm-up, we will go through their argument and show how it also works directly in the pre-positional setting. ### Min-degree process: proof of Theorem 1.1(a) The min-degree process is a variant on the classical random graph process and was introduced and first studied by Wormald [20]. In the min-degree process, \(G_{0}\) is an edgeless graph on \([n]\). Given \(G_{t}\), choose a vertex \(u\) of minimum degree in \(G_{t}\) u.a.r., and subsequently choose a vertex \(v\) not adjacent to vertex \(u\) in graph \(G_{t}\) u.a.r. Graph \(G_{t+1}\) is then constructed by adding edge \(uv\) to graph \(G_{t}\). Recall \(\alpha_{k}\) in Theorem 1.1. Wormald used his differential equation method to prove that the minimum \(t\) where \(G_{t}\) has minimum degree at least \(k\) is a.a.s. \(\alpha_{k}n\), for each \(k\geq 2\). We denote the graph property of having minimum degree \(k\) by \(\mathcal{D}_{k}\). Ben-Eliezer, Hefetz, Kronenberg, Parczyk, Shikhelman and Stojakovic [4] have since studied adapted versions of the min-degree process as modelled by the semi-random graph process. By choosing \(v_{t}\) u.a.r. from all vertices of minimum degree not adjacent to \(u_{t}\) in graph \(G_{t}\), the resulting semi-random graph process is contiguous to the min-degree process. That is, asymptotically the two processes are equivalent. We refer to this strategy as \(\mathcal{S}_{\min}\). Ben-Eliezer, Hefetz, Kronenberg, Parczyk, Shikhelman and Stojakovic additionally considered strategies without the restrictions that \(v_{t}\) and \(u_{t}\) be non-adjacent in \(G_{t}\) and that \(u_{t}\) and \(v_{t}\) be distinct. They showed that each of these strategies is optimal in ensuring that graph \(G_{t}\) has minimum degree \(k\) in as few rounds as possible when taking \(n\) to infinity, and each asymptotically requires \(\alpha_{k}n\) rounds (\(k\geq 2\)). Each of these strategies thus obtains a graph in \(\mathcal{D}_{k}\) in asymptotically the same number of rounds as the min-degree process. We first provide a formal definition of strategy \(\mathcal{S}_{\min}\).
For each round \(t\), distribution function \(f_{t}\) is defined as follows. Let \(Y_{t,\min}=\{v\in[n]\,|\,\deg_{G_{t-1}}(v)=\delta(G_{t-1})\}\). Then, given \(u_{t}\) chosen u.a.r. from \([n]\), if \(Y_{t,\,\min}\setminus N_{G_{t-1}}[u_{t}]=\emptyset\), the round is considered a failure round. Otherwise, vertex \(v_{t}\) is chosen u.a.r. from \(Y_{t,\,\min}\setminus N_{G_{t-1}}[u_{t}]\). By this formulation, strategy \(\mathcal{S}_{\min}\) does not create loops nor multi-edges. Let \(G_{\min}(n,m)\) be the graph on \(n\) vertices with \(m\) edges generated by the min-degree process. To show that strategy \(\mathcal{S}_{\min}\) can be used to model the min-degree process for \(m=o(n^{2})\) with a.a.s. \(o(m)\) failure rounds, Ben-Eliezer, Hefetz, Kronenberg, Parczyk, Shikhelman and Stojakovic [4] look at an auxiliary strategy where \(v_{t}\) is chosen u.a.r. from all minimum degree vertices. This strategy thus does not take the neighbourhood of \(u_{t}\) into account. They then show that the number of multi-edges and loops is asymptotically bounded by \(o(m)\), which directly bounds the failure rounds of strategy \(\mathcal{S}_{\min}\) as well. We note that this auxiliary strategy is also valid in the pre-positional process, where the first vertex is chosen u.a.r. from the vertices of minimum degree. Hence, the pre-positional process can also model the min-degree process with a.a.s. \(o(m)\) failure rounds. Since having minimum degree \(k\) is a prerequisite for being \(k\)-connected for \(n>k\), this immediately implies Theorem 1.1(a) for \(k\geq 2\). The case \(k=1\) is trivial, as a connected graph has at least \(n-1\) edges and thus no strategy can build a connected graph in at most \((1-\epsilon)n\) rounds. ### Proof of Theorem 1.1(b) We consider the set of \(k\)-connected graphs on \([n]\), which is an increasing property. It is convenient to define some notation to assist the proof of Theorem 1.1(b). For an increasing property \(\mathcal{P}\), a strategy \(\mathcal{S}\) (\(\mathcal{S}\) may be a pre-positional or a post-positional strategy), and a real number \(0<q<1\), let \(\tau_{\mathcal{P}}(\mathcal{S},q,n)\) be the minimum value \(t\geq 0\) such that \(\mathbb{P}\left[G_{t}\in\mathcal{P}\right]\geq q\); recalling that \(n\) is the number of vertices in \(G_{t}\). If no such value \(t\) exists, we say that \(\tau_{\mathcal{P}}(\mathcal{S},q,n)=\infty\). Let \(\tau_{\mathcal{P}}(q,n)\) denote the minimum value of \(\tau_{\mathcal{P}}(\mathcal{S},q,n)\) over all possible strategies \(\mathcal{S}\). We are interested in the asymptotic value of \(\tau_{\mathcal{P}}(q,n)\) when probability \(q\) approaches \(1\). Therefore, we define \[\tau_{\mathcal{P}}:=\lim_{q\,\uparrow\,1}\limsup_{n\to\infty}\frac{\tau_{ \mathcal{P}}(q,n)}{n},\] where the limit exists since \(\mathcal{P}\) is increasing. This definition is useful for studying linear-time strategies (strategies that a.a.s. builds a graph in \(\mathcal{P}\) in \(\Theta(n)\) rounds), which is the case for \[\mathcal{P}=\mathcal{C}_{k}:=\{G\subseteq\binom{[n]}{2}:\text{ $G$ is $k$-connected}\}.\] To show Theorem 1.1(b), it suffices to prove that in the pre-positional process, \[\tau_{\mathcal{C}_{k}}\leq\alpha_{k}\quad\text{for every fixed $k\geq 1$}.\] Let \(k\)_-min process_ be the process of applying strategy \(\mathcal{S}_{\min}\) until obtaining a graph with minimum degree at least \(k\). 
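To make the definition concrete, here is a minimal sketch of strategy \(\mathcal{S}_{\min}\) in the same illustrative Python setting used earlier; returning None signals a failure round. This is our own illustration, not code from the paper.

```python
import random

def s_min(adj, u):
    """Strategy S_min: pick the circle v u.a.r. among minimum-degree vertices
    outside the closed neighbourhood of the square u; None marks a failure round."""
    deg = [len(adj[w] - {w}) for w in range(len(adj))]
    delta = min(deg)
    candidates = [w for w in range(len(adj))
                  if deg[w] == delta and w != u and w not in adj[u]]
    return random.choice(candidates) if candidates else None
```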
Ben-Eliezer, Hefetz, Kronenberg, Parczyk, Shikhelman and Stojakovic [4] proved Theorem 1.1 for the case \(k\geq 3\) in the post-positional process. Their proof is based on a slightly modified variant of the \(k\)-min process tailored for multigraphs and builds on a proof by Kang, Koh, Ree and Luczak [14]. The strategy \(\mathcal{S}_{\min}^{*}\) underlying their modified process is identical to the strategy for the \(k\)-min process as long as the graph is simple, and simplifies the analysis in the semi-random graph process. The proof shows that the graph resulting from the modified \(k\)-min process is a.a.s. \(k\)-connected for all \(k\geq 3\). This proof cannot be directly extended to \(k<3\), as Kang, Koh, Ree and Luczak [14] showed that the graph resulting from the \(k\)-min process is only a.a.s. connected for \(k\geq 3\). Strategy \(\mathcal{S}_{\min}^{*}\) chooses vertex \(v_{t}\) u.a.r. from all vertices of \(V(G_{t-1})\setminus\{u_{t}\}\) that have the smallest number of distinct neighbours. This strategy can be modelled in the pre-positional process by the following strategy: we choose \(v_{t}\) u.a.r. from all vertices that have the smallest number of distinct neighbours (independently of \(u_{t}\)), and consider the round a failure if \(u_{t}=v_{t}\). The probability of any given round being a failure round is thus \(1/n\). Hence, the number of additional rounds needed to cover the additional failure rounds is a.a.s. \(o(n)\). This immediately gives the following lemma. **Lemma 4.1**.: _In both the pre-positional and the post-positional process, \(\tau_{\mathcal{C}_{k}}=\alpha_{k}\) for all fixed \(k\geq 3\)._ Moreover, note that the case \(k=1\) is trivial in the post-positional process. Namely, we observe that one can build a forest containing \(m\leq n-1\) edges in exactly \(m\) rounds. In each round, we simply choose \(v_{t}\) that lies in a different component of \(G_{t-1}\) from \(u_{t}\). Hence, we can build a spanning tree in \(n-1\) rounds, which is obviously optimal. The following lemma shows that the pre-positional process requires asymptotically the same number of rounds to construct a connected graph. **Lemma 4.2**.: \(\tau_{\mathcal{C}_{1}}=1\) _in the pre-positional process._ Proof.: It is obvious that \(\tau_{\mathcal{C}_{1}}\geq 1\), since a connected graph on \([n]\) has at least \(n-1\) edges. Recall that \(u_{t}\) is the vertex uniformly chosen from \([n]\), and \(v_{t}\) is the vertex strategically chosen by the player. For the upper bound, we consider a strategy \(\mathcal{S}\) which chooses \(v_{t}\) u.a.r. from the smallest component (if there is a tie, pick an arbitrary smallest component). If \(u_{t}\) lands in a different component, we add edge \(u_{t}v_{t}\). Otherwise we consider the round a failure round. Each successfully added edge then decreases the number of components in the graph by one. We analyse the process with this strategy in a number of phases. Let phase \(i\) be defined as the rounds in which the number of components in the graph decreases from \(\frac{n}{2^{i-1}}\) to \(\frac{n}{2^{i}}\). Thus, there are \(\log_{2}n\) such phases. We note that phase \(i\) consists of \(\frac{n}{2^{i-1}}-\frac{n}{2^{i}}=\frac{n}{2^{i}}\) non-failure rounds, and a number of failure rounds. Let \(T_{i}\) be the total number of rounds in phase \(i\), and let \(f_{i}\) be the number of failure rounds in phase \(i\). Thus, \(T_{i}=\frac{n}{2^{i}}+f_{i}\). Next, we observe that the smallest component in any round in phase \(i\) contains at most \(2^{i}\) vertices.
The probability that a round is a failure round in phase \(i\) is thus at most \(2^{i}/n\). Couple the process with the following experiment: consider a sequence of i.i.d. Bernoulli random variables with success probability \(1-2^{i}/n\). We terminate the sequence once we have observed \(n/2^{i}\) successes. Let \(\mathcal{T}_{i}\) denote the random variable corresponding to the number of Bernoulli random variables in the sequence before it terminates. We observe that \(T_{i}\) is stochastically dominated by \(\mathcal{T}_{i}\). By the negative binomial distribution, it follows that \(\mathbb{E}[\mathcal{T}_{i}]=(n/2^{i})/(1-2^{i}/n)\). Hence, \(\mathbb{E}[T_{i}]\leq(n/2^{i})/(1-2^{i}/n)\). Then, as \(T_{i}=\frac{n}{2^{i}}+f_{i}\), we find that for all \(i\leq\log_{2}(n)-1\): \[\mathbb{E}[f_{i}]\leq\frac{\frac{n}{2^{i}}}{1-\frac{2^{i}}{n}}-\frac{n}{2^{i}}=1+O\left(\frac{2^{i}}{n}\right)=O(1).\] For the last phase (i.e. \(\log_{2}(n)-1<i\leq\log_{2}n\)) only a single successful round is needed, and as the failure probability is at most \(1/2\), it follows that for all \(i\) it holds that \(\mathbb{E}[f_{i}]=O(1)\). Therefore, \(\mathbb{E}[\sum_{i\leq\log_{2}n}f_{i}]=O(\log_{2}n)\), and thus, by Markov's inequality, a.a.s. \(\sum_{i\leq\log_{2}n}f_{i}=O(\log^{2}n)\). Hence, the total number of rounds needed to ensure the graph is connected is a.a.s. at most \[\sum_{i>0}T_{i}=n-1+\sum_{i=1}^{\log_{2}n}f_{i}=(1+o(1))n.\] Therefore, \(\tau_{\mathcal{C}_{1}}\leq 1\) in the pre-positional process, as desired. Thus, asymptotically, the number of required rounds to ensure connectivity is equal between the pre- and post-positional processes. In this section we prove tight asymptotic bounds for the final open case in both the pre- and post-positional processes, \(k=2\). The best bound previously known for the post-positional process, as observed by Ben-Eliezer, Hefetz, Kronenberg, Parczyk, Shikhelman and Stojakovic [4], is the tight upper bound for \(k=3\). That is, \(\tau_{\mathcal{C}_{2}}\leq\tau_{\mathcal{C}_{3}}\). They also gave a lower bound, based on the 2-min process. The 2-min process aims to ensure that each vertex has degree at least two as fast as possible, a prerequisite for 2-connectedness. Using a known result by Wormald [19, 20] on the min-degree process, they showed that the 2-min process a.a.s. takes \((\ln 2+\ln(\ln 2+1)+o(1))n\) rounds to complete. Hence, \(\tau_{\mathcal{C}_{2}}\geq\ln 2+\ln(\ln 2+1)\) in the post-positional process. Note that as the 2-min process can be modelled by the pre-positional process as well, it similarly holds that \(\tau_{\mathcal{C}_{2}}\geq\ln 2+\ln(\ln 2+1)\) in the pre-positional process. In this section we show a novel upper bound for the pre-positional process, which asymptotically matches the known lower bound. Note that by Lemma 3.1, this directly gives an asymptotically tight upper bound for the post-positional process as well. **Lemma 4.3**.: \(\tau_{\mathcal{C}_{2}}=\ln 2+\ln(\ln 2+1)\) _in both the pre- and post-positional processes._ That is, the minimum number of rounds required for a semi-random process to build a 2-connected graph on \(n\) vertices is asymptotic to \((\ln 2+\ln(\ln 2+1))n\) in both processes. As a result of Lemma 4.3, and the previous analysis of existing proofs for bounds on \(\tau_{\mathcal{C}_{k}}\) for \(k\geq 1\), it follows that the property of \(k\)-connectedness requires asymptotically the same number of rounds in the pre- and post-positional processes.
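For orientation, the constants from Remark 1.2, which by Lemma 4.3 also give the 2-connectivity threshold, are easy to evaluate numerically; the following small Python check is ours and only illustrates the values.

```python
from math import log

alpha_2 = log(2) + log(1 + log(2))                                      # ~ 1.2197
alpha_3 = log(log(2) ** 2 + 2 * (1 + log(2)) * (1 + log(1 + log(2))))   # ~ 1.7317
print(f"alpha_2 ~ {alpha_2:.4f}, alpha_3 ~ {alpha_3:.4f}")
```

In other words, roughly \(1.22n\) rounds are necessary and sufficient to build a 2-connected graph, and roughly \(1.73n\) for a 3-connected one, in either process variant.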
#### 4.2.1 Overview For the upper bound, our approach differs significantly from the strategy used by Ben-Eliezer, Hefetz, Kronenberg, Parczyk, Shikhelman and Stojakovic [4] to prove the tight upper bounds for \(k\)-connectedness for \(k\geq 3\). Namely, while their approach is predominantly probabilistic, we use a more structural approach. Our strategy is based on analysing the structure of the maximal \(2\)-connected components of the graph resulting from the \(2\)-min process. In the first phase, we use the \(2\)-min process to obtain a graph in which each vertex has degree at least \(2\). We show that a.a.s. most of the vertices in this graph will be contained in relatively large \(2\)-connected subgraphs. This moreover allows us to conclude that the graph contains \(o(n)\) maximal \(2\)-connected subgraphs. In the second phase, the aim is to ensure that the graph becomes connected. We bound the number of components by the number of maximal \(2\)-connected subgraphs, recalling that the graph has \(o(n)\) such subgraphs after the first phase. As such, by adding edges between components, we can quickly ensure the graph becomes connected. In the third phase, we then want to make the graph \(2\)-connected. We achieve this by considering a tree structure on the maximal \(2\)-connected subgraphs, and showing that by balancing this tree, we can efficiently eliminate cut-vertices. We show that the second and third phases both take \(o(n)\) steps a.a.s. Therefore, the first phase, consisting of the \(2\)-min process, dominates the total number of rounds in the process of building a \(2\)-connected graph on \([n]\). In Section 4.2.2, we first introduce the purely structural definitions and results we will use. Section 4.2.3 then builds upon these structural results to analyse the random process given by our strategy. #### 4.2.2 Supporting structural results In this section we restate the conventional definitions of blocks and block graphs (see for instance [15]). **Definition 4.1** (Block).: Let \(B\subseteq V(G)\) be a maximal set of vertices such that for any two vertices \(x,y\in B\) with \(xy\not\in E(G)\), in order to separate vertex \(x\) from vertex \(y\), it is necessary to remove at least \(2\) vertices from \(G\). Then \(B\) is called a block. Note that by this definition, each block in a graph either induces a maximal \(2\)-connected subgraph, an edge, or an isolated vertex. Moreover, when considering connected graphs on at least \(2\) vertices, each block thus induces a maximal \(2\)-connected subgraph or an edge. Based on this definition, we can then decompose a graph \(G\) into such blocks. **Definition 4.2** (Block decomposition).: Let \(\mathcal{B}(G)\subseteq\mathcal{P}(V(G))\) denote the set of all blocks of graph \(G\). Then \(\mathcal{B}(G)\) is called the block decomposition of graph \(G\). We observe that by the definition of blocks, for each edge \(uv\in E(G)\) in a graph \(G\) there exists a unique block \(B\in\mathcal{B}(G)\) such that \(u,v\in B\). Moreover, by the maximality of the blocks, the block decomposition \(\mathcal{B}(G)\) is unique. Note that \(\mathcal{B}(G)\) is generally not a partition of \(V(G)\). However, each pair of blocks shares at most one vertex, as given in the following proposition. **Proposition 4.4** (Konig, [15, Theorem XIV.7]).: _Let \(G\) be a graph. Then, for each pair of blocks \(B_{1},B_{2}\in\mathcal{B}(G)\), it holds that \(|B_{1}\cap B_{2}|\leq 1\)._ **Definition 4.3** (Block graph).: Let \(G\) be a graph. 
Then let \(G_{\mathcal{B}}\) be the graph defined by \(V(G_{\mathcal{B}})=\mathcal{B}(G)\) and \(E(G_{\mathcal{B}})=\{B_{1}B_{2}\,|\,B_{1}\cap B_{2}\neq\emptyset\}\). Then graph \(G_{\mathcal{B}}\) is called the block graph of graph \(G\). For a graph \(G\) to be 2-connected, it must hold that \(\mathcal{B}(G)=\{V(G)\}\). We aim to use the blocks and their relative structure in a graph to identify moves in a semi-random process which join multiple blocks together into a single larger block. If a semi-random edge \(u_{t}v_{t}\) joins two blocks then we call the addition of such an edge an _augmentation_. A natural augmentation to consider is to join two blocks \(B_{i}\) and \(B_{j}\) where there is a path between \(B_{i}\) and \(B_{j}\) in \(G_{\mathcal{B}}\). If \(u_{t}\) and \(v_{t}\) are not themselves cut-vertices, this augmentation will immediately join all blocks along the path into a single block. To that purpose, we want to consider a tree structure on the blocks. The traditional such structure, called the block-cut tree of a graph, was originally introduced independently by Gallai [8], and Harary and Prins [13]. **Definition 4.4** (Block-cut tree).: Let \(G\) be a connected graph, and let \(S\) be the set of cut vertices of graph \(G\). Then, the graph \(T\), given by \(V(T)=\mathcal{B}(G)\cup S\) and \(E(T)=\{vB\,|\,v\in S,B\in\mathcal{B}(G),v\in B\}\), is a tree and called the block-cut tree of graph \(G\). We consider a structure similar to the block-cut tree, based on the block graph. Instead of including the cut-vertices in the tree, we take a spanning tree on the block graph. This ensures that we only have to work with blocks, while still providing the desired tree structure. To that aim, we introduce the following definition. **Definition 4.5** (Reduced block tree).: Let \(G_{\mathcal{B}}\) be the block graph of a connected graph \(G\). Then, a spanning tree \(T_{\mathcal{B}}\) of graph \(G_{\mathcal{B}}\) is called a reduced block tree of graph \(G\). A reduced block tree can equivalently be constructed recursively. Let \(v\in V(G)\) be a cut-vertex in a connected graph \(G\), and let \(G_{1}\) and \(G_{2}\) be the induced subgraphs of graph \(G\) such that \(V(G_{1})\cup V(G_{2})=V(G)\), \(E(G_{1})\cup E(G_{2})=E(G)\), and \(V(G_{1})\cap V(G_{2})=\{v\}\). We note that as vertex \(v\) is a cut-vertex, each block \(B\in\mathcal{B}(G)\) is contained in either graph \(G_{1}\) or graph \(G_{2}\). Therefore, \(\mathcal{B}(G_{1})\cup\mathcal{B}(G_{2})=\mathcal{B}(G)\). Let \(T_{\mathcal{B}_{1}}\) and \(T_{\mathcal{B}_{2}}\) be reduced block trees for graphs \(G_{1}\) and \(G_{2}\) respectively. Then, we can construct a reduced block tree for graph \(G\) with block decomposition \(\mathcal{B}(G)\) by joining trees \(T_{\mathcal{B}_{1}}\) and \(T_{\mathcal{B}_{2}}\) with a single edge from a vertex in \(T_{\mathcal{B}_{1}}\) representing a block containing vertex \(v\) to a vertex in \(T_{\mathcal{B}_{2}}\) also representing a block containing vertex \(v\). We observe that by Definition 4.5, the reduced block tree of a graph is generally not unique. This occurs when a vertex is contained in at least three blocks, and the block graph thus contains a clique of size at least 3. **Proposition 4.5**.: _Let \(T_{\mathcal{B}}\) be a reduced block tree of a connected graph \(G\). For \(v\in V(G)\), the set \(\{B\in V(T_{\mathcal{B}})\,|\,v\in B\}\) induces a (connected) subtree in \(T_{\mathcal{B}}\)._ Proof.: Suppose not. 
Let \(S\subseteq V(T_{\mathcal{B}})\) be the set of all blocks \(B\in V(T_{\mathcal{B}})\) such that \(v\in B\). Then the set \(S\) induces a disconnected subgraph in tree \(T_{\mathcal{B}}\). Let \(C_{1}\) and \(C_{2}\) be two components of this induced subgraph \(T_{\mathcal{B}}[S]\). Moreover, let \(P\) be a shortest path between sets \(V(C_{1})\) and \(V(C_{2})\) in \(T_{\mathcal{B}}\), and let blocks \(B_{1},B_{2}\in S\) be the endpoints of this path \(P\) such that \(B_{1}\in V(C_{1})\) and \(B_{2}\in V(C_{2})\). We note that \(P\) has length at least 2. Then, as \(P\) is a shortest such path, none of the internal vertices of \(P\) are contained in \(S\). Hence, the corresponding blocks do not contain vertex \(v\). Let \(G_{P}\) be the subgraph of \(T_{\mathcal{B}}\) induced by the internal vertices of path \(P\). Additionally, let \(S_{P}\subseteq V(G)\) be the set of all vertices of graph \(G\) contained in at least one of the blocks in \(G_{P}\). We observe that by the definition of path \(P\), subgraph \(G_{P}\) contains blocks adjacent to blocks \(B_{1}\) and \(B_{2}\), respectively, in the tree \(T_{\mathcal{B}}\). Therefore, \(B_{1}\cap S_{P},B_{2}\cap S_{P}\neq\emptyset\). Moreover, by Proposition 4.4 we find that \(B_{1}\cap B_{2}=\{v\}\). Therefore, as \(v\not\in S_{P}\), there exist vertices \(v_{1}\in B_{1}\cap S_{P}\) and \(v_{2}\in B_{2}\cap S_{P}\). Then, because blocks \(B_{1}\) and \(B_{2}\) are by definition connected, there exists a \(v-v_{1}\) path \(P_{1}\) in block \(B_{1}\) and a \(v-v_{2}\) path \(P_{2}\) in block \(B_{2}\). Similarly, the set \(S_{P}\) induces a connected subgraph in \(G\), and thus contains a \(v_{1}-v_{2}\) path \(P^{\prime}\). We note that the union of the paths \(P_{1}\), \(P_{2}\) and \(P^{\prime}\) gives a subgraph of \(G\) containing a cycle \(C\) containing vertex \(v\). We note that the cycle \(C\) is \(2\)-connected and hence is contained in a block \(B_{C}\). Moreover, as this cycle contains at least \(2\) vertices of block \(B_{1}\), by Proposition 4.4, we find that \(B_{1}=B_{C}\). Analogously, it follows that \(B_{2}=B_{C}\). However, this implies that \(B_{1}=B_{2}\), contradicting these blocks being in different components \(C_{1}\) and \(C_{2}\). By this contradiction, we conclude that the proposition holds. **Proposition 4.6**.: _Let \(T_{\mathcal{B}}\) be a reduced block tree of a connected graph \(G\) with \(\delta(G)\geq 2\). Let \(B\in\mathcal{B}\) be a block such that \(B=\{u,v\}\). Then there exist distinct blocks \(B_{u},B_{v}\in\mathcal{B}\) adjacent to \(B\) in \(T_{\mathcal{B}}\) such that \(u\in B_{u}\) and \(v\in B_{v}\)._ Proof.: Because \(\delta(G)\geq 2\), there exists another edge \(uw\in E(G)\). Hence, as each edge is contained in a block, there exists a block \(B^{\prime}\in\mathcal{B}\) such that \(u\in B^{\prime}\) and \(B^{\prime}\neq B\). It then follows from Proposition 4.5 that there exists a block \(B_{u}\in\mathcal{B}\) such that \(u\in B_{u}\) and \(B_{u}\) adjacent to \(B\) in \(T_{\mathcal{B}}\). Analogously, there exists a block \(B_{v}\in\mathcal{B}\) adjacent to \(B\) in \(T_{\mathcal{B}}\) such that \(v\in B_{v}\). By the maximality of block \(B\), it follows that \(v\not\in B_{u}\) and \(u\not\in B_{v}\). Hence, \(B_{u}\neq B_{v}\), as desired. **Corollary 4.7**.: _Let \(T_{\mathcal{B}}\) be a reduced block tree of a connected graph \(G\) with \(\delta(G)\geq 2\). 
Then each leaf in \(T_{\mathcal{B}}\) corresponds to a \(2\)-connected block in graph \(G\) of at least \(3\) vertices._ Proof.: By Proposition 4.6, blocks of size \(2\) cannot be leaves in \(T_{\mathcal{B}}\). Then, by the definition of blocks, the result follows. **Proposition 4.8**.: _Let \(G\) be a connected graph such that \(|B|<n/4\) for all blocks \(B\in\mathcal{B}(G)\), and let \(T_{\mathcal{B}}\) be a corresponding reduced block tree. Then there exists a vertex \(B^{*}\in V(T_{\mathcal{B}})\) and a colouring \(\phi:V(T_{\mathcal{B}})\setminus\{B^{*}\}\to\{\text{red},\,\text{blue}\}\) such that all components of \(T_{\mathcal{B}}-B^{*}\) are monochromatic and that for \(S_{\text{red}}=\{v\in B\setminus B^{*}\,|\,B\in\mathcal{B},\phi(B)=\text{red}\}\) and \(S_{\text{blue}}=\{v\in B\setminus B^{*}\,|\,B\in\mathcal{B},\phi(B)=\text{ blue}\}\) it holds that \(|S_{\text{blue}}|\leq|S_{\text{red}}|\leq 3|S_{\text{blue}}|\)._ Proof.: Firstly, we note by Proposition 4.5 that \(S_{\text{red}}\cap S_{\text{blue}}=\emptyset\) and hence \(V(G)\) is partitioned by the sets \(B^{*}\), \(S_{\text{red}}\), and \(S_{\text{blue}}\). Therefore, \(|B^{*}|+|S_{\text{red}}|+|S_{\text{blue}}|=n\). Assume that the proposition does not hold. Then, let \(B^{*}\in V(T_{\mathcal{B}})\) and \(\phi:V(T_{\mathcal{B}})\to\{\text{red},\text{blue}\}\) be a vertex and a colouring respectively such that all components of \(T_{\mathcal{B}}-B^{*}\) are monochromatic, \(|S_{\text{red}}|\geq|S_{\text{blue}}|\), subject to which \(|S_{\text{red}}|\) is minimised. We note that as it concerns a counterexample, we must have \(|S_{\text{red}}|>3|S_{\text{blue}}|\). We observe that as \(|B^{*}|<n/4\), \(T_{\mathcal{B}}-B^{*}\) is non-empty. Therefore, due to \(|S_{\text{red}}|\geq|S_{\text{blue}}|\), \(T_{\mathcal{B}}-B^{*}\) contains at least one red component. Suppose that \(T_{\mathcal{B}}-B^{*}\) contains exactly one red component. Then, because \(T_{\mathcal{B}}\) is a tree, vertex \(B^{*}\) has exactly one red neighbour \(B^{\prime}\in V(T_{\mathcal{B}})\) in \(T_{\mathcal{B}}\). Then consider using vertex \(B^{\prime}\) instead of vertex \(B^{*}\), uncolouring \(B^{\prime}\) and colouring \(B^{*}\) blue. Let \(\phi^{\prime}\) denote the resulting new colouring, and let \(S^{\prime}_{\text{red}}\) and \(S^{\prime}_{\text{blue}}\) be the sets of vertices in \(G\) corresponding to \(\phi^{\prime}\). We note that as blocks \(B^{*}\) and \(B^{\prime}\) both contain less than \(n/4\) vertices, it holds that \(|S^{\prime}_{\text{red}}|>|S_{\text{red}}|-n/4\) and \(|S^{\prime}_{\text{blue}}|<|S_{\text{blue}}|+n/4\). Moreover, we note that by the maximality of blocks, \(B^{*}\setminus B^{\prime},B^{\prime}\setminus B^{*}\neq\emptyset\), and hence \(|S^{\prime}_{\text{red}}|<|S_{\text{red}}|\) and \(|S^{\prime}_{\text{blue}}|>|S_{\text{blue}}|\). If \(|S^{\prime}_{\text{red}}|>|S^{\prime}_{\text{blue}}|\), the new colouring \(\phi^{\prime}\) is more balanced, and thus contradicts the minimality of \(|S_{\text{red}}|\). Therefore, it holds that \(|S^{\prime}_{\text{red}}|<|S^{\prime}_{\text{blue}}|\). Because we assumed that \(|S_{\text{red}}|>3|S_{\text{blue}}|\), and as \(|B^{*}|+|S_{\text{red}}|+|S_{\text{blue}}|=n\), it follows that \(|S_{\text{blue}}|\leq n/4\). Thus, \(|S^{\prime}_{\text{blue}}|<|S_{\text{blue}}|+n/4\leq n/2\). 
But then, as \(|B^{\prime}|+|S^{\prime}_{\text{red}}|+|S^{\prime}_{\text{blue}}|=n\), it follows that \[|S^{\prime}_{\text{red}}| =n-|S^{\prime}_{\text{blue}}|-|B^{\prime}|\] \[>n-\frac{n}{2}-\frac{n}{4}\] \[=\frac{n}{4}.\] Then, inverting the colours red and blue results in a colouring satisfying all the conditions of the proposition, contradicting \(T_{\mathcal{B}}\) being a counterexample. Hence, we may assume that forest \(T_{\mathcal{B}}-B^{*}\) contains at least \(2\) red components. Then let \(C_{1},C_{2},\ldots,C_{\ell}\) be the red components of \(T_{\mathcal{B}}-B^{*}\), and let \(S_{1},S_{2},\ldots,S_{\ell}\) be defined by \(S_{i}=\{v\in B\setminus B^{*}\,|\,B\in C_{i}\}\) for \(i\in[\ell]\). Then, by Proposition 4.5, the sets \(S_{1},S_{2},\ldots,S_{\ell}\) partition set \(S_{\text{red}}\). Suppose that there exists an index \(i\in[\ell]\) such that \(|S_{i}|>|S_{\text{blue}}|\). Then, recolouring all blue components red, and recolouring component \(C_{i}\) blue leads to sets \(S^{\prime}_{\text{red}}\) and \(S^{\prime}_{\text{blue}}\) such that, as \(\ell\geq 2\), \(\min(|S^{\prime}_{\text{red}}|,|S^{\prime}_{\text{blue}}|)>\min(|S_{\text{ red}}|,|S_{\text{blue}}|)\). Thus, as \(|S^{\prime}_{\text{red}}|+|S^{\prime}_{\text{blue}}|=|S_{\text{red}}|+|S_{ \text{blue}}|\), by possibly inverting the colours, we find a more minimal counterexample. Hence, we may assume that \(|S_{i}|\leq|S_{\text{blue}}|\) for all \(i\in[\ell]\). Then, as \(|S_{\text{red}}|=\sum_{i=1}^{\ell}|S_{i}|\), we find that \(|S_{\text{red}}|\leq\ell|S_{\text{blue}}|\). Therefore, as \(|S_{\text{red}}|>3|S_{\text{blue}}|\), it holds that \(\ell>3\). Similarly, suppose that there exists an index \(i\in[\ell]\) such that \(|S_{i}|<(|S_{\text{red}}|-|S_{\text{blue}}|)/2\). Then clearly recolouring component \(C_{i}\) blue contradicts the minimality of \(|S_{\text{red}}|\). Hence, we may assume that \(|S_{i}|\geq(|S_{\text{red}}|-|S_{\text{blue}}|)/2\) for all \(i\in[\ell]\). Then, as \(|S_{\text{red}}|=\sum_{i=1}^{\ell}|S_{i}|\), we find that \(|S_{\text{red}}|\geq\ell\cdot(|S_{\text{red}}|-|S_{\text{blue}}|)/2\). It then follows that, because \(\ell>3\), \(|S_{\text{red}}|\leq\frac{\ell}{\ell-2}|S_{\text{blue}}|\). But then, as \(\frac{\ell}{\ell-2}<3\) for \(\ell>3\), we conclude that vertex \(B^{*}\) and colouring \(\phi\) do not form a counterexample. Thus, we conclude that the proposition holds. #### 4.2.3 Building \(2\)-connected semi-random graphs In this section, we describe our strategy and analyse the corresponding process for building a \(2\)-connected semi-random graph, and obtain the tight upper bound of \(\tau_{\mathcal{C}_{2}}\) in the pre-positional process as in Lemma 4.3. Our strategy consists of three phases. In the first phase, we use the \(2\)-min process as described in Section 4.1. The following proposition shows useful properties of the resulting graph. **Proposition 4.9**.: _Let \(G\) be the semi-random graph resulting from the \(2\)-min process. Then, a.a.s., \(G\) contains \(o(n)\) vertices that are contained in \(2\)-connected induced subgraphs of order at most \(\sqrt{\ln n}\) in graph \(G\)._ Proof.: Let \(X\) be the number of vertices contained in \(2\)-connected induced subgraphs of order at most \(\sqrt{\ln n}\). We note that it suffices to show that \(\mathbb{E}[X]=o(n)\). Moreover, let \(Y_{\ell}\) denote the number of \(2\)-connected induced subgraphs of order \(\ell\) for \(1\leq\ell\leq\sqrt{\ln n}\). 
Thus, by linearity of expectation, \(\mathbb{E}[X]\leq\sum_{\ell=1}^{\sqrt{\ln n}}\ell\mathbb{E}[Y_{\ell}]\). For \(1\leq\ell\leq\sqrt{\ln n}\), let \(Z_{\ell}\) denote the number of induced subgraphs of order \(\ell\) with at least \(\ell\) edges. Because each \(2\)-connected graph contains at least as many edges as vertices, it follows immediately that \(Y_{\ell}\leq Z_{\ell}\), and thus, \(\mathbb{E}[X]\leq\sum_{\ell=1}^{\sqrt{\ln n}}\mathbb{E}[\ell Z_{\ell}]\). Hence it suffices to show that \(\sum_{\ell=1}^{\sqrt{\ln n}}\mathbb{E}[\ell Z_{\ell}]=o(n)\). Let \(1\leq\ell\leq\sqrt{\ln n}\), and fix \(S\subseteq[n]\) such that \(|S|=\ell\). Let \(p_{S}\) be the probability that \(G[S]\) contains at least \(\ell\) edges. Note that \(\mathbb{E}[Z_{\ell}]=\sum_{S\in\binom{[n]}{\ell}}p_{S}\). Next, we estimate \(p_{S}\). We first split the \(2\)-min process into two phases. The first phase ends after the step where the last isolated vertex becomes incident with an edge, and thus the second phase starts with a graph with minimum degree one. We further split each phase into subphases for analysis. Specifically, for the first phase we define subphases \(\alpha_{1},\alpha_{2},\ldots\) such that \(\alpha_{i}\) consists of the steps where \(\frac{n}{2^{i}}<|\{v\in V(G)\,|\,\deg(v)=0\}|\leq\frac{n}{2^{i-1}}\) for \(i\in\{1,2,\ldots\}\). We note that these subphases are well defined, as by the definition of the first phase of the \(2\)-min process, the number of isolated vertices is strictly decreasing. We then define subphases \(\beta_{1},\beta_{2},\ldots\) of the second phase of the \(2\)-min process such that subphase \(\beta_{i}\) consists of the steps where \(\frac{n}{2^{i}}<|\{v\in V(G)\,|\,\deg(v)=1\}|\leq\frac{n}{2^{i-1}}\) for \(i\in\{1,2,\ldots\}\). Note that some of the subphases might be empty, e.g. subphase \(\beta_{1}\) is empty if the number of vertices with degree \(1\) at the beginning of the second phase is already smaller than \(n/2\). We observe that there are \(\log_{2}n\) subphases of both phases of the \(2\)-min process. To bound \(p_{S}\), we first choose a set \(T\) of \(\ell\) edges from \(\binom{S}{2}\). There are thus \(\binom{\binom{\ell}{2}}{\ell}\leq\binom{\ell^{2}}{\ell}\) choices for set \(T\). Then we determine an ordering for the edges in \(T\). There are \(\ell!\) ways to fix such an ordering. Fixing an ordering \(e_{1},\ldots,e_{\ell}\), we bound the probability that these edges are added to \(G\) in this order. The probability that a specific edge \(xy\in T\) is added in a specified step in subphase \(\alpha_{i}\) (and \(\beta_{i}\)) is at most \(2\cdot\frac{2^{i-1}}{n}\cdot\frac{1}{n}=\frac{2^{i}}{n^{2}}\), since the first vertex of the edge is chosen u.a.r. from the isolated vertices, of which there are at most \(n/2^{i-1}\), and the second vertex is chosen u.a.r. from all vertices. The factor \(2\) accounts for whether \(x\) or \(y\) is the square or the circle of the edge (note that due to the structure of the \(2\)-min process, sometimes only one of the two may be relevant). Let \(\ell_{\alpha_{i}}\) and \(\ell_{\beta_{i}}\) be the number of edges of \(e_{1},\ldots,e_{\ell}\) that are added in subphases \(\alpha_{i}\) and \(\beta_{i}\) respectively. Let \(\boldsymbol{\ell}_{\alpha}=(\ell_{\alpha_{i}})_{i\geq 0}\) and \(\boldsymbol{\ell}_{\beta}=(\ell_{\beta_{i}})_{i\geq 0}\). Note that the number of isolated vertices decreases by at least \(1\) in each step of the first phase of the \(2\)-min process.
Thus the number of steps in subphase \(\alpha_{i}\) is at most \(\frac{n}{2^{i-1}}-\frac{n}{2^{i}}=\frac{n}{2^{i}}\). Thus, given \(\boldsymbol{\ell}_{\alpha}\) and \(\boldsymbol{\ell}_{\beta}\), there are at most \(\prod_{i}\binom{n/2^{i}}{\ell_{\alpha_{i}}}\binom{n/2^{i}}{\ell_{\beta_{i}}}\) ways to specify steps in the \(2\)-min process where edges in \(T\) are added. Combining all, we have the following bound on \(p_{S}\): \[p_{S}\leq\binom{\ell^{2}}{\ell}\ell!\sum_{\mathbf{\ell}_{\alpha},\mathbf{\ell}_{\beta}} \left(\prod_{i=1}^{\log_{2}n}\binom{n/2^{i}}{\ell_{\alpha_{i}}}\binom{n/2^{i}}{ \ell_{\beta_{i}}}\left(\frac{2^{i}}{n^{2}}\right)^{\ell_{\alpha_{i}}+\ell_{ \beta_{i}}}\right),\] where the first summation is over all choices for \(\mathbf{\ell}_{\alpha}\) and \(\mathbf{\ell}_{\beta}\) such that \(\sum_{i=1}^{\log_{2}n}\left(\ell_{\alpha_{i}}+\ell_{\beta_{i}}\right)=\ell\). Using \(\binom{n/2^{i}}{\ell_{\alpha_{i}}}\leq(n/2^{i})^{\ell_{\alpha_{i}}}\) and \(\binom{n/2^{i}}{\ell_{\beta_{i}}}\leq(n/2^{i})^{\ell_{\beta_{i}}}\), we then obtain \[p_{S}\leq\binom{\ell^{2}}{\ell}\ell!n^{-\ell}\sum_{\mathbf{\ell}_{\alpha},\mathbf{ \ell}_{\beta}}1.\] The set of \(\{(\mathbf{\ell}_{\alpha},\mathbf{\ell}_{\beta})\,|\,\sum_{i=1}^{\log_{2}n}\left(\ell _{\alpha_{i}}+\ell_{\beta_{i}}\right)=\ell\}\) corresponds to the set of weak integer compositions of \(\ell\) into \(2\log_{2}n\) parts of non-negative integers, and thus has cardinality \(\binom{\ell+2\log_{2}n-1}{2\log_{2}n-1}\leq\binom{\ell+2\log_{2}n}{\ell}\). Hence, it follows that \[\mathbb{E}[X] \leq\sum_{\ell=1}^{\sqrt{\ln n}}\mathbb{E}[\ell Z_{\ell}]\] \[=\sum_{\ell=1}^{\sqrt{\ln n}}\left(\ell\cdot\sum_{S\in\binom{[n]} {\ell}}p_{S}\right)\] \[\leq\sum_{\ell=1}^{\sqrt{\ln n}}\ell\binom{n}{\ell}\binom{\ell^{2 }}{\ell}\ell!n^{-\ell}\binom{\ell+2\log_{2}n}{\ell}.\] Using \(\binom{n}{\ell}\leq n^{\ell}/\ell!\), \(\binom{\ell^{2}}{\ell}\leq(e\ell)^{\ell}\) and \(\binom{\ell+2\log_{2}n}{\ell}\leq(e(\ell+\log_{2}n)/\ell)^{\ell}\leq(10\log_{2 }n/\ell)^{\ell}\) (as \(\ell\leq\sqrt{\ln n}\)), we then obtain \[\mathbb{E}[X]\leq\sum_{\ell=1}^{\sqrt{\ln n}}\ell(10e\log_{2}n)^{\ell}=\exp \left(\sqrt{\ln n}\ln\log_{2}n+O(\sqrt{\ln n})\right)=o(n),\] as desired. **Corollary 4.10**.: _Let \(G\) be the semi-random graph resulting from the \(2\)-min process. Then, a.a.s., \(G\) contains \(o(n)\) maximal \(2\)-connected induced subgraphs._ Proof.: Consider the set \(\mathcal{T}:=\{(v,B):v\in B,B\in\mathcal{B}(G)\}\). Using the block-cut tree structure (Definition 4.4), it follows that \(|\mathcal{T}|\leq n+|\mathcal{B}|-1\). Moreover, the sets \(\mathcal{T}_{B}:=\{(v^{\prime},B^{\prime})\in T:B^{\prime}=B\}\) for \(B\in\mathcal{B}(G)\) partition \(\mathcal{T}\). Let \(B_{1},\ldots,B_{\ell}\) be the set of blocks of size at least \(\sqrt{\ln n}\). Then, by Proposition 4.9, \(\sum_{1\leq i\leq\ell}|\mathcal{T}_{B_{i}}|\leq|\mathcal{T}|\leq n+\ell+o(n)\). However, \(|\mathcal{T}_{B_{i}}|\geq\sqrt{\ln n}\) for every \(i\), and thus it follows then that \(\ell\sqrt{\ln n}\leq n+\ell+o(n)\). Thus it follows that \(\ell=o(n)\), as desired. The resulting graph thus contains \(o(n)\) blocks of size at least \(3\). Because we have not bounded the number of blocks consisting of \(2\) vertices, we will use Corollary 4.7 and the other structural results in Section 4.2.2 to ensure the graph becomes \(2\)-connected. Let \(G_{1}\) be the graph obtained after the first phase, i.e. the graph resulting from the \(2\)-min process. 
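As a brief aside, the block structure that the remaining phases manipulate can be computed for a concrete graph such as \(G_{1}\) with standard tools. The sketch below uses networkx (an assumption on our part; the paper involves no software), and any spanning tree of the block graph serves as a reduced block tree in the sense of Definition 4.5.

```python
import networkx as nx

def block_structure(G):
    """Blocks of size >= 2 (biconnected components) and one reduced block tree
    of a connected graph G. Illustrative helper only."""
    blocks = [frozenset(B) for B in nx.biconnected_components(G)]
    block_graph = nx.Graph()
    block_graph.add_nodes_from(blocks)
    for i, B1 in enumerate(blocks):
        for B2 in blocks[i + 1:]:
            if B1 & B2:                       # two blocks overlap in a single cut-vertex
                block_graph.add_edge(B1, B2)
    reduced_block_tree = nx.minimum_spanning_tree(block_graph)  # any spanning tree works
    return blocks, reduced_block_tree
```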
In the second phase, we add semi-random edges to make \(G_{1}\) connected. The following proposition shows that we can achieve this a.a.s. with \(o(n)\) additional semi-random edges. **Proposition 4.11**.: _A.a.s. \(G_{1}\) can be made connected by the addition of \(o(n)\) semi-random edges._ Proof.: By Corollary 4.10, \(G_{1}\) contains \(o(n)\) maximal \(2\)-connected induced subgraphs. We claim that each vertex not contained in a \(2\)-connected induced subgraph is contained in a component that contains a \(2\)-connected induced subgraph. Suppose not. Then \(G_{1}\) must contain a tree component, contradicting the fact that the minimum degree of \(G_{1}\) is at least two. Hence the number of components of graph \(G_{1}\) is bounded from above by the number of maximal \(2\)-connected induced subgraphs, and therefore is \(o(n)\). By choosing \(v_{t}\) to be one of the vertices in the smallest component, each semi-random edge has a probability of at least \(1/2\) to decrease the number of components. Hence, by standard concentration arguments, \(G_{1}\) can be made connected in \(o(n)\) additional rounds. Let \(G_{2}\) be the graph obtained after the second phase. In the third phase, we ensure that \(G_{2}\) becomes \(2\)-connected by adding \(o(n)\) semi-random edges. **Proposition 4.12**.: _A.a.s. \(G_{2}\) can be made \(2\)-connected by the addition of \(o(n)\) semi-random edges._ Proof.: Let \(\mathcal{B}\) be the block decomposition of \(G_{2}\) and \(T_{\mathcal{B}}\) be a reduced block tree of \(G_{2}\). By Corollary 4.7, each leaf in \(T_{\mathcal{B}}\) is a \(2\)-connected block. Thus, by Corollary 4.10, \(T_{\mathcal{B}}\) a.a.s. contains \(o(n)\) leaves. First consider the case that \(\mathcal{B}\) contains a block \(B^{*}\) such that \(|B^{*}|\geq n/4\). We consider the following strategy. Take an arbitrary enumeration \(B_{1},\ldots,B_{h}\) of all leaf blocks of \(T_{\mathcal{B}}\). For each \(1\leq j\leq h\), we will add a semi-random edge between \(B_{j}\) and \(B^{*}\) in increasing order of \(j\). Suppose these semi-random edges have already been added between \(B_{i}\) and \(B^{*}\) for all \(i<j\). Let \(B_{j}B_{1}^{\prime}B_{2}^{\prime}\ldots B_{\ell}^{\prime}B^{*}\) be the unique path from \(B_{j}\) to \(B^{*}\) in \(T_{\mathcal{B}}\). Moreover, let \(x\) be the unique vertex in \(B_{j}\cap B_{1}^{\prime}\), and \(y\) the unique vertex in \(B_{\ell}^{\prime}\cap B^{*}\). Note that possibly \(x=y\). Then, in each subsequent round \(t\), we choose \(v_{t}\) to be an arbitrary vertex in \(B_{j}\setminus\{x\}\). If \(u_{t}\) is contained in \(B^{*}\setminus\{y\}\), we add the edge \(u_{t}v_{t}\). If instead square \(u_{t}\) is not contained in \(B^{*}\setminus\{y\}\), we consider the round a failure. Note that in each round, the probability of the square \(u_{t}\) landing in \(B^{*}\setminus\{y\}\) is \((|B^{*}|-1)/n\geq 1/4-o(1)\), and as a.a.s. \(T_{\mathcal{B}}\) contains \(o(n)\) leaves, the number of rounds required to add semi-random edges between \(B^{*}\) and all \(B_{1},\ldots,B_{h}\) is \(o(n)\) in expectation. Let \(G_{2}^{\prime}\) be the graph resulting from the addition of the \(h\) semi-random edges as described above. Then, for each leaf block \(B\), \(G_{2}^{\prime}\) contains two vertex-disjoint paths from \(B\) to \(B^{*}\). Namely, one path via the blocks on the path between \(B\) and \(B^{*}\) in \(T_{\mathcal{B}}\), and the other being the edge that was added between \(B\) and \(B^{*}\).
Because this holds for all leaves, using Proposition 4.6, the resulting graph is 2-edge-connected. Moreover, as each block is on a cycle with \(B^{*}\) and a leaf, and as the blocks of size at least 3 are 2-connected, for each cut-vertex \(v\) it follows that graph \(G_{2}^{\prime}-v\) contains one large component containing \(B^{*}\setminus\{v\}\), and all other components are of the form \(B\setminus\{v\}\) where \(B\in\mathcal{B}\) is a block of size at least 3. We note that these blocks \(B\) such that \(B\setminus\{v\}\) is a component for some cut-vertex \(v\in[n]\) correspond exactly to the blocks that are leaves in the block-cut tree (Definition 4.4), but not in \(T_{\mathcal{B}}\). By argumentation analogous to that used in the proof of Corollary 4.7, all such blocks \(B\) are 2-connected. Hence, by Corollary 4.10, there are \(o(n)\) such blocks. Moreover, each such block contains at most one cut-vertex. We then use the following strategy to absorb these cut-vertices. We iteratively consider pairs \((B,v)\) where \(v\in B\) is a cut-vertex and \(B\in\mathcal{B}\) a block such that \(B\setminus\{v\}\) is a component when \(v\) is removed. As noted earlier, there are \(o(n)\) such pairs. If \(|B\setminus\{v\}|\leq n/2\), we choose \(v_{t}\in B\setminus\{v\}\) arbitrarily. With probability at least \(1/2\), \(u_{t}\in[n]\setminus B\). Similarly, if \(|B\setminus\{v\}|>n/2\), we choose \(v_{t}\in[n]\setminus B\), and with probability at least \(1/2-o(1)\), \(u_{t}\in B\setminus\{v\}\). In either case, \(v\) no longer separates block \(B\) from the rest of the graph. Note that as this describes the only configuration of remaining cut-vertices in the graph, eliminating all such pairs eliminates all cut-vertices. Since there are \(o(n)\) such pairs, the total number of rounds needed to absorb all such cut-vertices is \(o(n)\) in expectation. It thus takes \(o(n)\) rounds in total in expectation to ensure that the graph becomes 2-connected. Standard concentration inequalities such as Chernoff bounds then immediately imply that also a.a.s. it takes \(o(n)\) rounds to extend \(G_{2}\) to a 2-connected graph. Hence we may assume that each block \(B\) in \(\mathcal{B}\) is of size strictly smaller than \(n/4\). We use a different strategy in this case. Instead of adding edges from leaves to a single block, we will consider balancing the tree into two subforests. We will then add edges between the leaves in one forest and vertices in the other forest, and vice versa. Let vertex \(B^{*}\in V(T_{\mathcal{B}})\), colouring \(\phi:V(T_{\mathcal{B}})\setminus\{B^{*}\}\to\{\text{red, blue}\}\), and sets \(S_{\text{red}}\) and \(S_{\text{blue}}\) be as given by Proposition 4.8. For each \(v\in B^{*}\) let \(T_{\mathcal{B},v}\) denote the components of \(T_{\mathcal{B}}-v\) that contain a block containing \(v\). Thus, \(T_{\mathcal{B},v}\) denotes the blocks \(B\) where \(v\) is the last cut-vertex on the path from \(B\) to \(B^{*}\) in \(T_{\mathcal{B}}\). We refer to \(T_{\mathcal{B},v}\) as the branch rooted at \(v\). Moreover, let \(S_{\mathcal{B},v}\) denote \(\bigcup_{B\in V(T_{\mathcal{B},v})}B\setminus B^{*}\). That is, \(S_{\mathcal{B},v}\) is the set of all vertices contained in blocks in \(T_{\mathcal{B},v}\) except for vertex \(v\) itself. If \(|S_{\mathcal{B},v}|\leq n/8\), we say branch \(T_{\mathcal{B},v}\) is small. Otherwise we say \(T_{\mathcal{B},v}\) is big. 
Finally, for all leaf blocks \(B\), let \(v_{B}\) denote the vertex that block \(B\) has in common with the next block on the path from \(B\) to \(B^{*}\) in \(T_{\mathcal{B}}\). We first consider the leaves of \(T_{\mathcal{B}}\) contained in small branches. Take two arbitrary enumerations \(B_{1},B_{2},\ldots,B_{h_{1}}\) and \(R_{1},R_{2},\ldots,R_{h_{2}}\) of all blue and red leaf blocks of \(T_{\mathcal{B}}\) contained in small branches respectively. We will iteratively add edges between \(B_{j}\) and \(S_{\text{red}}\) in increasing order of \(j\), and analogously between \(R_{j}\) and \(S_{\text{blue}}\). Suppose that semi-random edges have already been added between \(B_{i}\) and \(S_{\text{red}}\) for all \(i<j\). Let \(T_{\mathcal{B},v}\) be the branch containing leaf \(B_{j}\). We then choose \(v_{t}\) to be an arbitrary vertex in \(B_{j}\setminus\{v_{B_{j}}\}\). Because \(|B_{j}|\geq 2\), such a choice for \(v_{t}\) always exists. Then, if \(u_{t}\) lands in \(S_{\text{red}}\setminus S_{\mathcal{B},v}\), we add edge \(u_{t}v_{t}\). Otherwise we consider the round a failure. Analogously, for \(R_{j}\) the red leaf in a small branch \(T_{\mathcal{B},v}\) with the lowest index that has not previously received a circle, we choose \(v_{t}\) in \(R_{j}\setminus\{v_{R_{j}}\}\). If \(u_{t}\) is contained in \(S_{\text{blue}}\setminus S_{\mathcal{B},v}\), we add the edge \(u_{t}v_{t}\), and otherwise we consider the round a failure. Then, as tree \(T_{\mathcal{B}}\) has \(o(n)\) leaves, there are \(o(n)\) blue and \(o(n)\) red leaves. Moreover, by Proposition 4.8, \(|S_{\text{blue}}|\leq|S_{\text{red}}|\leq 3|S_{\text{blue}}|\) and \(|S_{\text{red}}|+|S_{\text{blue}}|\geq 3n/4\). Thus, the probability that a vertex from \(S_{\text{red}}\setminus S_{\mathcal{B},v}\) is chosen u.a.r., where \(T_{\mathcal{B},v}\) is a small branch, is at least \((3n/8-n/8)/n=1/4\). Similarly, the probability that a vertex from \(S_{\text{blue}}\setminus S_{\mathcal{B},v}\) is chosen u.a.r., where \(T_{\mathcal{B},v}\) is a small branch, is at least \((3n/16-n/8)/n=1/16\). Hence, the expected number of rounds needed to add edges to all leaf blocks in small branches is \(o(n)\). Next, we consider the leaf blocks in big branches. We first note that there are at most 7 big branches. We use a strategy similar to that for the small branches, but drop the requirement that \(u_{t}\) and \(v_{t}\) must be in distinct branches. Again take two arbitrary enumerations \(B_{1},B_{2},\ldots,B_{h_{3}}\) and \(R_{1},R_{2},\ldots,R_{h_{4}}\) of all blue and red leaf blocks of \(T_{\mathcal{B}}\) contained in big branches respectively. Suppose that semi-random edges have already been added between \(B_{i}\) and \(S_{\text{red}}\) for all \(i<j\). We then choose \(v_{t}\) to be an arbitrary vertex in \(B_{j}\setminus\{v_{B_{j}}\}\). Because \(|B_{j}|\geq 2\), such a choice for \(v_{t}\) always exists. If \(u_{t}\) lands in \(S_{\text{red}}\), we add edge \(u_{t}v_{t}\). Otherwise, we consider the round a failure. The strategy for red leaf blocks is analogous. Because the probability that a vertex from \(S_{\text{red}}\) is chosen u.a.r. is at least \(3/8\), and the probability that a vertex from \(S_{\text{blue}}\) is chosen u.a.r. is at least \(3/16\), it also takes \(o(n)\) rounds in expectation to add edges to all leaf blocks in big branches. After all leaves in both small and big branches have received an edge, there exist two internally vertex-disjoint paths from each leaf block \(B\) to \(B^{*}\). 
Namely, as all of the edges we added have one red and one blue endpoint, each blue leaf has a path which only contains blue vertices and a vertex in \(B^{*}\), and a path that starts with the added edge, and then only contains red vertices and one vertex in \(B^{*}\). Analogously, there exist two such paths from each red leaf. As these two paths do not share their endpoint in leaf \(B\), and as each leaf is 2-connected by Corollary 4.7, set \(B\setminus\{v_{B}\}\) does not contain any cut-vertices. We note that again the resulting graph is 2-edge-connected. We then use the same strategy as in the case where there exists a block of size at least \(n/4\) to eliminate all the cut-vertices that separate individual blocks from the rest of the graph. Recall that this strategy a.a.s. takes \(o(n)\) rounds. Let \(G^{\prime\prime}_{2}\) be the resulting graph. We then observe that no vertex in \([n]\setminus B^{*}\) is a cut-vertex in graph \(G^{\prime\prime}_{2}\). Hence, we consider a cut-vertex \(v\in B^{*}\). First suppose that the branch rooted at \(v\) is empty. We observe that by Proposition 4.6 it then holds that \(|B^{*}|\geq 3\). But then, \(B^{*}\) is 2-connected, contradicting \(v\) being a cut-vertex. Next suppose that the branch rooted at \(v\) is small. We note that for each vertex in \(S_{\mathcal{B},v}\) there exists a path to a leaf of branch \(T_{\mathcal{B},v}\) contained in \(S_{\mathcal{B},v}\). As each such leaf has an edge to another branch, and as \(B^{*}\) is either 2-connected or isomorphic to \(K_{2}\), it follows that \(G^{\prime\prime}_{2}-v\) is connected. Hence, \(v\) is not a cut-vertex. Finally, suppose that the branch rooted at \(v\) is big. In this case \(v\) may indeed be a cut-vertex. Namely, if \(T_{\mathcal{B},v}\) contains multiple components of different colours, each of the edges added to the leaves in the branch could have both endpoints within the branch. To deal with such cut-vertices, we use a two-step strategy. In the first step, we want to ensure that the subgraph induced by \(S_{\mathcal{B},v}\) becomes connected. We achieve this using the standard strategy of choosing \(v_{t}\) in the smallest component in the subgraph induced by \(S_{\mathcal{B},v}\). If \(u_{t}\) lands in a different component of this subgraph, we add edge \(u_{t}v_{t}\), otherwise we consider the round a failure. We note that as each component of \(T_{\mathcal{B},v}\) contains at least one leaf of \(T_{\mathcal{B}}\), by Corollary 4.10, \(T_{\mathcal{B},v}\) contains \(o(n)\) components. As \(|S_{\mathcal{B},v}|>n/8\), the probability of successfully adding an edge in this first step is at least \(1/16\). Then, by standard concentration inequalities, this step a.a.s. takes \(o(n)\) rounds as well. We note that in the resulting graph, cut-vertex \(v\) then separates two components, given by vertex sets \(S_{\mathcal{B},v}\) and \([n]\setminus(S_{\mathcal{B},v}\cup\{v\})\). In the second step of the strategy, we connect these two components by a single edge: we again choose \(v_{t}\) in the smaller of the two components, and consider the round a failure if \(u_{t}\) does not land in the other component. As the probability of a failure round is thus at most \(1/2\), by standard concentration inequalities, the number of rounds in this step is \(O(1)\). We then note that there are at most \(7\) vertices \(v\in B^{*}\) such that \(T_{\mathcal{B},v}\) is big. Hence, the total number of rounds to ensure that each of these cut-vertices is absorbed is a.a.s. \(o(n)\). 
Because the resulting graph thus no longer contains any cut-vertices, the graph is \(2\)-connected. Thus, in any case, we can ensure that graph \(G_{2}\) becomes \(2\)-connected in a.a.s. \(o(n)\) rounds, as desired. Combining the analysis of these individual phases then results in the following lemma. **Lemma 4.13**.: \(\tau_{\mathcal{C}_{2}}\leq\ln 2+\ln(\ln 2+1)\) _in the pre-positional process._ Proof.: The lemma directly follows from Propositions 4.9 and 4.12, and the fact that the \(2\)-min process requires \((\ln 2+\ln(\ln 2+1)+o(1))n\) rounds. Lemma 4.3 then follows directly from Lemmas 3.1 and 4.13, and the known lower bound given by the \(2\)-min process. This then completes the proof of Theorem 1.1; it follows directly from Lemmas 4.1, 4.2, and 4.3. ## 5 Degenerate subgraphs: proof of Theorem 1.4 Part (a) follows by [2, Theorem 1.2] and Lemma 3.1. For part (b), we consider the pre-positional process. Our proof is similar to the proof of Theorem 1.3, but requires slightly more careful analysis due to the difference in power between the pre- and post-positional processes. Let \(g(n)\) be a function such that \(g(n)\to\infty\) as \(n\to\infty\). We prove that there exists a pre-positional strategy which constructs an \(H\)-subgraph a.a.s. in at most \(g(n)2^{|V(H)|}\cdot n^{(d-1)/d}\) rounds. Note that this immediately implies part (b) as \(|V(H)|\) is fixed, and we may take \(g(n)=f(n)2^{-|V(H)|}\). We proceed by induction on \(|V(H)|\). We note that the statement holds directly if \(|V(H)|=1\). Suppose \(H\) is a \(d\)-degenerate graph with \(m\geq 2\) vertices, and assume that the statement holds for all fixed \(d\)-degenerate graphs \(H^{\prime}\) such that \(|V(H^{\prime})|<m\). Let \(v\in V(H)\) be a vertex such that \(\deg_{H}(v)\leq d\). Consider the graph \(H^{\prime}:=H-v\). Then, by the inductive hypothesis, there exists a pre-positional strategy which a.a.s. constructs a graph \(G^{\prime}\) containing a copy of \(H^{\prime}\) in at most \(T:=g(n)2^{m-1}\cdot n^{(d-1)/d}\) rounds. Let \(C^{\prime}\) be the copy of \(H^{\prime}\) constructed in \(G^{\prime}\). For each vertex \(u\in N_{H}(v)\), let \(u^{\prime}\in[n]\) be the corresponding vertex in \(C^{\prime}\), and let \(N^{\prime}:=\{u^{\prime}\,:\,u\in N_{H}(v)\}\). The strategy is then to grow a star from each vertex in \(N^{\prime}\). We do the following in turn for each \(u^{\prime}\in N^{\prime}\). Given \(u^{\prime}\in N^{\prime}\), choose \(u_{t}\) to be \(u^{\prime}\) for \(g(n)2^{m}/(2d)\cdot n^{(d-1)/d}\) subsequent rounds. Let \(S_{u^{\prime}}\) be the set of vertices \(w\in[n]\setminus V(C^{\prime})\) such that \(w=v_{t}\) for at least one of the \(g(n)2^{m}/(2d)\cdot n^{(d-1)/d}\) rounds. Then, by standard concentration arguments, and as \(|V(C^{\prime})|\) is fixed, a.a.s. \(|S_{u^{\prime}}|\) is at least \(g(n)2^{m}/(4d)\cdot n^{(d-1)/d}\). Let \(G\) be the graph resulting from growing such stars for all \(u^{\prime}\in N^{\prime}\). We then consider the probability that a vertex \(w\in[n]\setminus V(C^{\prime})\) is contained in all such sets, that is \(\mathbb{P}\left(w\in\bigcap_{u^{\prime}\in N^{\prime}}S_{u^{\prime}}\right)\). 
As the construction of \(\{S_{u^{\prime}}\}_{u^{\prime}\in N^{\prime}}\) is mutually independent, \[\begin{split}\mathbb{P}\left(w\in\bigcap_{u^{\prime}\in N^{\prime}}S_{u^{\prime}}\right)&\geq\prod_{u^{\prime}\in N^{\prime}}\mathbb{P}\left(w\in S_{u^{\prime}}\right)=\prod_{u^{\prime}\in N^{\prime}}\frac{|S_{u^{\prime}}|}{n-|V(C^{\prime})|}>\prod_{u^{\prime}\in N^{\prime}}\frac{|S_{u^{\prime}}|}{n}\\ &\geq\prod_{u^{\prime}\in N^{\prime}}\left(\frac{g(n)2^{m}}{4d}\cdot\frac{n^{(d-1)/d}}{n}\right)=\prod_{u^{\prime}\in N^{\prime}}\left(\frac{g(n)2^{m}}{4d}\cdot\frac{1}{n^{1/d}}\right)\geq\left(\frac{g(n)2^{m}}{4d}\cdot\frac{1}{n^{1/d}}\right)^{d}=\left(\frac{g(n)2^{m}}{4d}\right)^{d}\cdot\frac{1}{n}.\end{split}\] Let \(X:=\left|\bigcap_{u^{\prime}\in N^{\prime}}S_{u^{\prime}}\right|\) be a random variable. Then, \(\mathbb{E}[X]\geq(g(n)2^{m}/(4d))^{d}\cdot(n-|V(C^{\prime})|)/n\), and hence, by standard concentration arguments, as \(\lim_{n\to\infty}g(n)=\infty\), a.a.s. \(\bigcap_{u^{\prime}\in N^{\prime}}S_{u^{\prime}}\) is non-empty. Let \(z\in\bigcap_{u^{\prime}\in N^{\prime}}S_{u^{\prime}}\). Consider the subgraph of \(G\) given by extending \(C^{\prime}\) with vertex \(z\) and the edges between \(z\) and \(N^{\prime}\); this subgraph is isomorphic to \(H\), as desired. Moreover, the number of rounds to construct \(H\) is bounded by the at most \(g(n)2^{m-1}\cdot n^{(d-1)/d}\) rounds needed to construct \(H^{\prime}\), together with at most \(g(n)2^{m}/(2d)\cdot n^{(d-1)/d}\) rounds to grow each of the stars. Thus, in total the construction of \(H\) requires a.a.s. at most \[g(n)2^{m-1}\cdot n^{(d-1)/d}+|N^{\prime}|\cdot\frac{g(n)2^{m}}{2d}\cdot n^{(d-1)/d}\leq\frac{g(n)2^{m}}{2}\cdot n^{(d-1)/d}+d\cdot\frac{g(n)2^{m}}{2d}\cdot n^{(d-1)/d}=g(n)2^{m}\cdot n^{(d-1)/d}\] rounds, as desired. ## 6 Dense bipartite subgraphs: proof of Theorem 1.6 The lower bound is trivial, as constructing any such subgraph requires \(m\) edges. For the upper bound, by Lemma 3.1 it suffices to consider the pre-positional process. Let \(A:=\left[\lceil\sqrt{m}\rceil\right]\) and \(B=[n]\setminus A\). We construct a simple bipartite subgraph with bipartition \((A,B)\) with at least \(m\) edges. For a vertex \(v\in A\), let \(\deg_{G_{t}}(v,B)\) denote the number of distinct neighbours of \(v\) in \(B\). Our strategy consists of \(\lceil\sqrt{m}\rceil\) phases. In the \(i^{\text{th}}\) phase, we consistently choose \(v_{t}\) to be vertex \(i\). The phase terminates once \(\deg_{G_{t}}(i,B)=\lceil\sqrt{m}\rceil\). We consider a round a failure if \(u_{t}\in A\cup N_{G_{t-1}}(i)\). We observe that the probability of such a failure round is at most \(2\lceil\sqrt{m}\rceil/n\). Moreover, because \(m=o(n^{2})\), we observe that this probability is \(o(1)\). Moreover, once all phases have terminated, we note that the bipartition \((A,B)\) forms a bipartite subgraph with at least \(\lceil\sqrt{m}\rceil\cdot\lceil\sqrt{m}\rceil\geq m\) non-parallel edges, as desired. Since each round fails with probability \(o(1)\), and as there are \(\lceil\sqrt{m}\rceil\) phases, each of which requires \(\lceil\sqrt{m}\rceil\) successful rounds, the total number of rounds needed is a.a.s. \((1+o(1))\cdot\lceil\sqrt{m}\rceil\cdot\lceil\sqrt{m}\rceil=(1+o(1))m\), following a standard concentration argument. ## 7 Large induced cycles: proof of Theorem 1.8 Recall that the semi-random graph processes allow the creation of multi-edges. This will be useful to construct an induced \((n-1)\)-cycle in the post-positional process. 
**Lemma 7.1**.: _There exists a post-positional strategy that constructs an induced \((n-1)\)-cycle a.a.s. in \(O(n\ln n)\) rounds._ Proof.: Our strategy aims to construct an induced cycle on the vertices \(\{1,2,\ldots,n-1\}\). We designate vertex \(n\) as the _sink vertex_. That is, if we cannot add a useful edge given \(u_{t}\), we choose \(v_{t}\) to be \(n\). Hence, by removing vertex \(n\), we obtain a graph which only contains desired edges. The first time that \(u_{t}\) lands on a vertex \(v\in[n-1]\), we choose \(v_{t}\) to be \(v+1\) (unless \(v=n-1\), in which case we choose \(v_{t}\) to be \(1\)). Any subsequent time \(u_{t}\) lands on \(v\), we choose \(v_{t}\) to be \(n\). Note that once we have landed at least once on each vertex in \([n-1]\), we have constructed an induced spanning cycle on the set \([n-1]\), as desired. Hence, an induced \((n-1)\)-cycle is constructed once each vertex in \([n-1]\) is hit at least once, and this takes a.a.s. \(O(n\log n)\) steps by the coupon collector's problem (see [17, Theorem 5.13]). We complete the proof of Theorem 1.8 by showing that a.a.s. no pre-positional strategy can construct an induced \((n-1)\)-cycle. To obtain an induced \((n-1)\)-cycle, we first need to construct an induced path on \(n-1\) vertices. Suppose that one has constructed such an induced path \(P\), which includes all vertices other than \(w\in[n]\), after step \(t_{0}\). By the definition of an induced path, \([n]-w\) induces exactly \(n-2\) edges, which form an \((n-1)\)-path. **Claim 7.1**.: A.a.s. \(w\) has \(\Theta(n)\) distinct neighbours in \([n]-w\). Proof.: \(G_{t_{0}}\) contains at least \(n-1\) edges and thus \(t_{0}\geq n-1\). The distribution of the \(n-1\) squares in the first \(n-1\) steps is the same as that of uniformly throwing \(n-1\) balls into \(n\) bins. By the standard Poisson approximation argument, the number of vertices receiving at least three squares is a.a.s. \(\Theta(n)\). These vertices must all be adjacent to \(w\) since \([n]-w\) induces an \((n-1)\)-path. It follows immediately that a.a.s. the only possible induced \((n-1)\)-cycle that can be constructed is on \([n]-w\). Observe that the only way to construct an induced \((n-1)\)-cycle on \([n]-w\) is that (a) \(\{u_{t},v_{t}\}=\{u,v\}\) for some \(t\geq t_{0}+1\), where \(u\) and \(v\) are the two ends of \(P\); (b) for all \(t_{0}<s<t\), \(\{u_{s},v_{s}\}\neq\{u,v\}\); and (c) for all \(t_{0}<s<t\), if \(v_{s}\neq w\) then \(u_{s}\) must be \(w\). Consider the first step \(t\) after \(t_{0}\) such that \(v_{t}\neq w\). If \(v_{t}\notin\{u,v\}\) then by (c), \(u_{t}\) must be \(w\), which occurs with probability \(1/n\). If \(v_{t}\in\{u,v\}\) then by (a,b), \(u_{t}\) must be either \(\{u,v\}\setminus\{v_{t}\}\) or \(w\). The probability of this is \(2/n\). Hence, the probability that \(P\) can be completed into an induced \((n-1)\)-cycle is \(O(1/n)\). Therefore, there does not exist a strategy that a.a.s. constructs an induced \((n-1)\)-cycle in the pre-positional process, as desired.
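The \(O(n\ln n)\) bound in Lemma 7.1 is just the coupon-collector bound, which is easy to see numerically. The sketch below (Python; the value of \(n\), the seed, and the \(0\)-based labelling with vertex \(n-1\) playing the role of the sink are our own choices) counts how many rounds the sink-vertex strategy needs before every cycle vertex has been hit by a square, and compares this with \(n\ln n\).

```python
import math
import random

def rounds_until_cycle(n, rng):
    """Play the strategy of Lemma 7.1 with cycle vertices 0,...,n-2 and sink n-1.
    The induced (n-1)-cycle is complete as soon as every cycle vertex has been
    hit by at least one random square; repeat hits only add edges to the sink."""
    hit = [False] * (n - 1)
    remaining, rounds = n - 1, 0
    while remaining > 0:
        rounds += 1
        u = rng.randrange(n)            # the random square u_t
        if u < n - 1 and not hit[u]:    # first visit: connect u to its successor
            hit[u] = True
            remaining -= 1
    return rounds

if __name__ == "__main__":
    n, rng = 100_000, random.Random(1)
    print(rounds_until_cycle(n, rng))   # concentrated around n * ln(n)
    print(round(n * math.log(n)))
```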
We study the semi-random graph process, together with a new process suggested by Nick Wormald. We show that these two processes are asymptotically equally fast in constructing a semi-random graph \(G\) with property \({\mathcal P}\), for the following examples of \({\mathcal P}\):
- the class of graphs containing a \(d\)-degenerate graph, for fixed \(d\geq 1\);
- the class of \(k\)-connected graphs, for fixed \(k\geq 1\).
In particular, the result on \(k\)-connectivity above settles the previously open case \(k=2\) of the original semi-random graph process. Moreover, we prove that there exist properties \({\mathcal P}\) which the two processes do not construct asymptotically equally fast. Finally, these two processes exhibit different performance for ${\
2309.14754
Spectral stability under removal of small segments
In the present paper we deepen the works of L. Abatangelo, V. Felli, L. Hillairet and C. Lena on the asymptotic estimates of the eigenvalue variation under removal of segments from the domain in R2. We get a sharp asymptotic estimate when the eigenvalue is simple and the removed segment is tangent to a nodal line of the associated eigenfunction. Moreover, we extend their results to the case when the eigenvalue is not simple.
Xiang He
2023-09-26T08:29:30
http://arxiv.org/abs/2309.14754v4
# Spectral stability under removal of small segments ###### Abstract. In the present paper we deepen the works of L. Abatangelo, V. Felli, L. Hillairet and C. Lena on the asymptotic estimates of the eigenvalue variation under removal of segments from the domain in \(\mathbb{R}^{2}\). We get a sharp asymptotic estimate when the eigenvalue is simple and the removed segment is tangent to a nodal line of the associated eigenfunction. Moreover, we extend their results to the case when the eigenvalue is not simple. ## 1. Introduction Let \(\Omega\subset\mathbb{R}^{n}\) be a bounded domain. According to classical results in spectral theory, the Dirichlet Laplacian on \(\Omega\) admits a sequence of real eigenvalues tending to infinity \[0<\lambda_{1}(\Omega)<\lambda_{2}(\Omega)\leq\cdots\leq\lambda_{N}(\Omega)\leq\cdots\] which encodes lots of geometric information of \(\Omega\). A natural question is to study the spectral stability under the removal of a "small" subset (with Dirichlet boundary condition on the new boundary). This type of problem was first studied by M. Kac in [6] and J. Rauch, M. Tayler in [7]. In particular, J. Rauch and M. Taylor showed that if \(K\) is a subset with Newtonian capacity zero, then \(\lambda_{k}(\Omega\setminus K)=\lambda_{k}(\Omega)\) for all \(k\). Their result was refined by G. Courtois [5] to a quantitative version, namely, for any compact subset \(K\) of \(\Omega\), \[0\leq\lambda_{k}(\Omega\setminus K)-\lambda_{k}(\Omega)\leq C\ \mathrm{Cap}_{\Omega}(K),\] where \(\mathrm{Cap}_{\Omega}(K)\) is the capacity of \(K\) in \(\Omega\) (see SS2 for the definition) and \(C\) is a constant that is independent of \(K\) (but depends on the eigenspace of \(\lambda_{k}(\Omega)\)). Recently, L. Abatangelo, V. Felli, L. Hillairet and C. Lena observed in [2] that in the case where \(K=K_{\varepsilon}\subset\Omega\) (\(\varepsilon>0\)) is a family of compact sets concentrating to a set \(K_{0}\) of capacity \(0\) (see SS2 for the exact meaning), if \(\lambda_{N}(\Omega)\) is a simple eigenvalue, then what really hidden behind G. Courtois's proof is the following sharp asymptotic expansion \[\lambda_{N}(\Omega\setminus K_{\varepsilon})=\lambda_{N}(\Omega)+\mathrm{Cap} _{\Omega}(K_{\varepsilon},u_{N})+\mathrm{o}(\mathrm{Cap}_{\Omega}(K_{ \varepsilon},u_{N})),\ \mathrm{as}\ \varepsilon\to 0^{+}, \tag{1.1}\] where \(u_{N}\) is the \(L^{2}\)-normalized eigenfunction associated to \(\lambda_{N}(\Omega)\), and \(\mathrm{Cap}_{\Omega}(K_{\varepsilon},u_{N})\) is the \(u_{N}\)-capacity whose definition will be given in SS2. They further studied the special case where \(\Omega\subset\mathbb{R}^{2}\) is a region containing the origin \(0\), and the removed compact set \(K_{\varepsilon}\) in \(\Omega\) is a small segment \(s_{\varepsilon}\) near \(0\), \[s_{\varepsilon}=[-\varepsilon,\varepsilon]\times\{0\}.\] Since \(\Omega\) is a planar region, it is well-known that ([4]) if \(0\) is a zero point of \(u_{N}\) of order \(k\), then there exists \(\beta\in\mathbb{R}\setminus\{0\}\) and \(\alpha\in[0,\pi)\) such that \[r^{-k}u_{N}(r\cos t,r\sin t)\to\beta\sin(\alpha-kt) \tag{1.2}\] in \(C^{1,\tau}([0,2\pi])\) as \(r\to 0^{+}\) for any \(\tau\in(0,1)\). Under the assumption that \(\lambda_{N}(\Omega)\) is simple, they proved that as \(\varepsilon\to 0\) (c.f. Theorem 1.10 in [2]) 1. if \(u_{N}(0)\neq 0\), then \[\lambda_{N}(\Omega\setminus s_{\varepsilon})-\lambda_{N}(\Omega)=\frac{2\pi}{ |\log\varepsilon|}u_{N}^{2}(0)(1+\mathrm{o}(1)).\] 2. 
if \(0\) is a zero of \(u_{N}\) of order \(k\), and \(\alpha\neq 0\) in (1.2), then \[\lambda_{N}(\Omega\setminus s_{\varepsilon})-\lambda_{N}(\Omega)=\varepsilon^{2k}\pi\beta^{2}\sin^{2}\alpha\,C_{k}(1+\mathrm{o}(1)),\] where \[C_{k}=\sum_{j=1}^{k}j|A_{j,k}|^{2},\ A_{j,k}=\frac{1}{\pi}\int_{0}^{2\pi}(\cos\eta)^{k}\cos(j\eta)\mathrm{d}\eta. \tag{1.3}\] 3. if \(0\) is a zero of \(u_{N}\) of order \(k\), and \(\alpha=0\) in (1.2), then \[\lambda_{N}(\Omega\setminus s_{\varepsilon})-\lambda_{N}(\Omega)=\mathrm{O}(\varepsilon^{2k+2}).\] Note that the last case is not completely well-understood even to the leading order: \(\alpha=0\) means that the segment \(s_{\varepsilon}\) is tangent to a nodal line of \(u_{N}\), and they expected that in this case "the vanishing order will depends on the precision of the approximation between the nodal line and the segment". Furthermore, L. Abatangelo, C. Lena and P. Musolino [3] studied the problem when \(\lambda_{N}(\Omega)\) is an eigenvalue of multiplicity \(m\), in which case after removing \(K_{\varepsilon}\) the eigenvalues will split, and the asymptotics becomes \[\lambda_{N+i-1}(\Omega\setminus K_{\varepsilon})=\lambda_{N}(\Omega)+\mu_{i}^{\varepsilon}+\mathrm{o}(\chi_{\varepsilon}^{2}),\ \mathrm{as}\ \varepsilon\to 0^{+} \tag{1.4}\] for \(1\leq i\leq m\), where \(\mu_{i}^{\varepsilon}\) and \(\chi_{\varepsilon}^{2}\) will be explained in §4 below. Using analyticity of eigenfunctions, they also get sharp asymptotics for the case \(K_{\varepsilon}=\varepsilon\overline{\omega}\), where \(\omega\subset\mathbb{R}^{2}\) is a bounded open domain containing \(0\) such that \(\mathbb{R}^{2}\setminus\overline{\omega}\) is connected and \(\Omega\) satisfies the same conditions as \(\omega\). The purpose of this short paper is two-fold. Firstly, we will verify their expectation. Precisely, when \(\alpha=0\) in (1.2), suppose that the segment \(s_{\varepsilon}\) is tangent to a nodal line of \(u_{N}\). We parameterize the nodal line of \(u_{N}\) near \((0,0)\) by \((t,f(t))\). We say \(s_{\varepsilon}\) is tangent to this nodal line to the order \(l-1\) if there exists \(l\in\mathbb{Z}_{\geq 2}\cup\{+\infty\}\) such that \[\begin{split} f^{(s)}(0)&=0,\ \forall\ 0\leq s<l,\ f^{(l)}(0)\neq 0\quad\text{ if }l\neq+\infty,\\ f^{(s)}(0)&=0,\ \forall\ s\in\mathbb{Z}_{\geq 0}\quad\text{ if }l=+\infty.\end{split} \tag{1.5}\] Our first main result is **Theorem 1.1**.: _Let \(\Omega\subset\mathbb{R}^{2}\) be a bounded domain containing the origin \(0\). Suppose \(\lambda_{N}(\Omega)\) is a simple Dirichlet eigenvalue of \(\Omega\), with \(L^{2}\)-normalized eigenfunction \(u_{N}\), and suppose that the segment \(s_{\varepsilon}=[-\varepsilon,\varepsilon]\times\{0\}\) is tangent to a nodal line of \(u_{N}\) at \((0,0)\) to the order \(l-1\)._ 1. _If_ \(l=+\infty\)_, then_ \(\lambda_{N}(\Omega\setminus s_{\varepsilon})=\lambda_{N}(\Omega)\)_._ 2. _If_ \(l\neq+\infty\)_, then_ \[\lambda_{N}(\Omega\setminus s_{\varepsilon})-\lambda_{N}(\Omega)=\varepsilon^{2(k+l-1)}\left[\binom{k+l-1}{k-1}\beta k!f^{(l)}(0)\right]^{2}\pi C_{k+l-1}(1+\mathrm{o}(1)),\] _as \(\varepsilon\to 0^{+}\), where \(k\) is given in (1.2) and \(C_{k+l-1}\) is given in (1.3)._ Secondly, we will use the techniques in [3] to extend Theorem 1.10 in [2] to the cases where the eigenvalue \(\lambda_{N}(\Omega)\) is not simple, i.e., \[\lambda=\lambda_{N}(\Omega)=\cdots=\lambda_{N+m-1}(\Omega)\] is a Dirichlet eigenvalue of \(\Omega\) with multiplicity \(m\). 
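The constants \(C_{k}\) appearing in these asymptotics are explicit but not entirely transparent; they are easy to evaluate numerically from (1.3). The following sketch (Python, assuming scipy is available; the range of \(k\) printed is an arbitrary choice of ours) computes the coefficients \(A_{j,k}\) by quadrature and assembles \(C_{k}\).

```python
import numpy as np
from scipy.integrate import quad

def A(j, k):
    """A_{j,k} = (1/pi) * int_0^{2pi} (cos eta)^k cos(j eta) d eta, as in (1.3)."""
    val, _ = quad(lambda eta: np.cos(eta) ** k * np.cos(j * eta), 0.0, 2.0 * np.pi)
    return val / np.pi

def C(k):
    """C_k = sum_{j=1}^k j * |A_{j,k}|^2, as in (1.3)."""
    return sum(j * A(j, k) ** 2 for j in range(1, k + 1))

if __name__ == "__main__":
    for k in range(1, 7):
        print(k, round(C(k), 6))
    # e.g. C_1 = 1 and C_2 = 1/2; A_{j,k} vanishes whenever k - j is odd,
    # so only the terms with j of the same parity as k contribute.
```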
It turns out that the difference of the eigenvalues will depend on the order of \(x_{1}\)_-vanishing_ (which will be defined in SS4) of the associated eigenfunctions. Roughly speaking, we will decompose the eigenspace \(E(\lambda)\) of \(\lambda\) to \[E_{\infty}\oplus E_{1}\oplus\cdots\oplus E_{p}\] such that \(\dim E_{j}=1\) for all \(1\leq j\leq p\) and the order of \(x_{1}\)_-vanishing_ of functions in \(E_{j}\) is \(k_{j}\). Suppose that \(u_{N+m-p+j-1}\) is an \(L^{2}\)-normalized function in \(E_{j}\), then our second main result is **Theorem 1.2**.: _Let \(\Omega\subset\mathbb{R}^{2}\) be a bounded domain containing the origin \(0\), and \(s_{\varepsilon}=[-\varepsilon,\varepsilon]\times\{0\}\). Suppose \(\lambda_{N}(\Omega)\) is a Dirichlet eigenvalue of multiplicity \(m\). Then when \(\varepsilon\) is small enough, for all \(1\leq i\leq m-p\),_ \[\lambda_{N+i-1}(\Omega\setminus s_{\varepsilon})=\lambda_{N}(\Omega)\] _and for all \(1\leq j\leq p\),_ \[\lambda_{N+m-p+j-1}(\Omega\setminus s_{\varepsilon})-\lambda_{N}(\Omega)=\pi C _{k_{j}}\cdot[\frac{\partial_{x_{1}}^{k_{j}}u_{N+m-p+j-1}(0,0)}{k_{j}!}]^{2} \rho_{k_{j}}^{\varepsilon}+o(\rho_{k_{j}}^{\varepsilon}) \tag{1.6}\] _as \(\varepsilon\to 0^{+}\), where \(C_{k_{j}}\) is given in (1.3) for \(k_{j}\geq 1\) and \(C_{0}=2\), and_ \[\rho_{k}^{\varepsilon}=\begin{cases}\frac{1}{|\log(\varepsilon)|}&\text{ if $k=0$}\,,\\ \varepsilon^{2k}&\text{ if $k\geq 1$}\,.\end{cases} \tag{1.7}\] ## 2. preparation We start with some definitions. For any compact subset \(K\) of \(\Omega\), the capacity of \(K\) in \(\Omega\) is defined as \[\mathrm{Cap}_{\Omega}(K)=\inf\{\int_{\Omega}|\nabla f|^{2}:\ f-\eta_{K}\in H^{ 1}_{0}(\Omega\setminus K)\} \tag{2.1}\] where \(\eta_{K}\in C^{\infty}_{c}(\Omega)\) is a fixed function with \(\eta_{K}\equiv 1\) in a neighborhood of \(K\). The infimum (2.1) is achieved by a unique function \(V_{K}\in\eta_{K}+H^{1}_{0}(\Omega\setminus K)\) so that \[\mathrm{Cap}_{\Omega}(K)=\int_{\Omega}|\nabla V_{K}|^{2}\mathrm{d}x.\] \(V_{K}\) is called the capacitary potential of \(K\) in \(\Omega\). More generally, for any \(u\in H^{1}_{0}(\Omega)\), one can define the \(u\)-capacity of \(K\) in \(\Omega\) to be \[\mathrm{Cap}_{\Omega}(K,u)=\inf\{\int_{\Omega}|\nabla f|^{2}\mathrm{d}x:\ f-u \in H^{1}_{0}(\Omega\setminus K)\}. \tag{2.2}\] The infimum in (2.2) is also achieved by a unique function \(V_{K,u}\in u+H^{1}_{0}(\Omega\setminus K)\) such that \[\mathrm{Cap}_{\Omega}(K,u)=\int_{\Omega}|\nabla V_{K,u}|^{2}\mathrm{d}x\] and thus it is called the potential associated with \(u\) and \(K\). Note that if \(u\in H^{1}_{0}(\Omega\setminus K)\), then \(\mathrm{Cap}_{\Omega}(K,u)=0\). The concept of \(u\)-capacity can be further extended to \(H^{1}_{\mathrm{loc}}(\Omega)\) functions. For function \(v\) in \(H^{1}_{\mathrm{loc}}(\Omega)\), one can simply define the \(v\)-capacity to be \[\mathrm{Cap}_{\Omega}(K,v)=\mathrm{Cap}_{\Omega}(K,\eta_{K}v)\] with \(\eta_{K}\) as in (2.1). Finally we define the meaning of "concentrating compact sets". Let \(\{K_{\varepsilon}\}_{\varepsilon>0}\) be a family of compact sets contained in \(\Omega\). We say that \(K_{\varepsilon}\) is concentrating to a compact set \(K\subset\Omega\) if for every open set \(U\subset\Omega\) such that \(U\supset K\), there exists a constant \(\varepsilon_{U}>0\) such that \[U\supset K_{\varepsilon},\qquad\forall\varepsilon<\varepsilon_{U}.\] L. Abatangelo, V. Felli, L. Hillairet and C. 
Lena have computed the asymptotic estimate of \(\mathrm{Cap}_{\Omega}(s_{\varepsilon},u)\) in [2]: **Proposition 2.1** (Proposition 2.7 in [2]).: _Let \(\Omega\subset\mathbb{R}^{2}\) be a bounded domain containing the origin \(0\). For function \(u\in C^{k+1}_{\mathrm{loc}}(\Omega)\setminus\{0\}\), suppose the Taylor polynomial at \(0\) of \(u\) of order \(k\) has the form_ \[P_{k}(x_{1},x_{2})=\sum_{j=0}^{k}c_{j}x_{1}^{k-j}x_{2}^{j}\] _for some \(c_{0},c_{1},\cdots,c_{k}\in\mathbb{R}\), \((c_{0},c_{1},\cdots,c_{k})\neq(0,0,\cdots,0)\)._ 1. _If_ \(c_{0}\neq 0\)_, then_ (2.3) \[\mathrm{Cap}_{\Omega}(s_{\varepsilon},u)=\begin{cases}\frac{2\pi}{|\log \varepsilon|}u^{2}(0)(1+\mathrm{o}(1)),&\text{ if }k=0,\\ \varepsilon^{2k}c_{0}^{2}\pi C_{k}(1+\mathrm{o}(1)),&\text{ if }k\geq 1,\end{cases}\] _as_ \(\varepsilon\to 0^{+}\) _and_ \(C_{k}\) _being defined in (_1.3_)._ 2. _If_ \(c_{0}=0\)_, then_ \(\mathrm{Cap}_{\Omega}(s_{\varepsilon},u)=\mathrm{O}(\varepsilon^{2k+2})\) _as_ \(\varepsilon\to 0^{+}\)_._ ## 3. The Proof of Theorem 1.1 By calculations, one can find that for any \(\alpha\in\mathbb{Z}_{\geq 0}\), \[\frac{\mathrm{d}^{\alpha}}{\mathrm{d}t^{\alpha}}u_{N}(t,f(t))|_{t=0}\] can be written as a finite sum of expressions of the form \[\frac{\partial^{\beta}}{\partial_{x_{1}}^{\beta-\gamma}\partial_{x_{2}}^{ \gamma}}u_{N}(0,0)\times f^{(w_{1})}(0)\times\cdots\times f^{(w_{\gamma})}(0),\] where \(\beta,\gamma\in\mathbb{Z}_{\geq 0}\), \(w_{i}\in\mathbb{N}\) (\(1\leq i\leq\gamma\)) with \[\sum_{j=1}^{\gamma}w_{j}=\alpha-\beta+\gamma.\] By assumptions (1.2) and (1.5), one has \[\frac{\partial^{\beta}}{\partial_{x_{1}}^{\beta-\gamma}\partial_{x_{2}}^{ \gamma}}u_{N}(0,0)\times f^{(w_{1})}(0)\times\cdots\times f^{(w_{\gamma})}(0)=0 \tag{3.1}\] when \(\beta<k\) or \(w_{i}<l\) for some \(1\leq i\leq\gamma\) and by the fact \((t,f(t))\) parameterize a nodal line of \(u_{N}\), one has \[\frac{\mathrm{d}^{\alpha}}{\mathrm{d}t^{\alpha}}u_{N}(t,f(t))|_{t=0}=0 \tag{3.2}\] for all \(\alpha\in\mathbb{Z}_{\geq 0}\). Then combining (3.1) and (3.2), one has \[\begin{split}&\frac{\partial^{m}}{\partial_{x_{1}}^{m}}u(0,0)=0, \ \forall\ 0\leq m<k+l-1,\quad\text{if }l\neq+\infty,\\ &\frac{\partial^{m}}{\partial_{x_{1}}^{m}}u(0,0)=0,\ \forall\ m\in\mathbb{Z}_{\geq 0}, \text{if }l=+\infty.\end{split} \tag{3.3}\] Thus, when \(l=+\infty\), the segment \(s_{\varepsilon}\) belongs to a nodal line of \(u_{N}\). So \(\mathrm{Cap}_{\Omega}(s_{\varepsilon},u_{N})=0\) and then, by expansion (1.1), \(\lambda_{N}(\Omega\setminus s_{\varepsilon})=\lambda_{N}(\Omega)\). This proves the first part of Theorem 1.1 For the case \(l\neq+\infty\), let \[u_{\#,k+l-2}(x_{1},x_{2})=\sum_{\stackrel{{(h,j)\in\mathbb{Z}_{ \geq 0}^{2}}}{{h+j\leq k+l-2}}}\frac{\partial_{x_{1}}^{h}\partial_{x_{2}}^{j}u(0,0 )}{h!j!}x_{1}^{h}x_{2}^{j}, \tag{3.4}\] then by (3.3) with \(l\neq+\infty\), one has \(u_{\#,k+l-2}=0\) on the segment \(s_{\varepsilon}\). Thus \[\mathrm{Cap}_{\Omega}(s_{\varepsilon},u_{N})=\mathrm{Cap}_{\Omega}(s_{ \varepsilon},u_{N}-u_{\#,k+l-2}).\] By calculation, one has \[0=\frac{\mathrm{d}^{k+l-1}}{\mathrm{d}t^{k+l-1}}u_{N}(t,f(t))|_{t=0}=\frac{ \partial^{k+l-1}}{\partial_{x_{1}}^{k+l-1}}u(0,0)+\binom{k+l-1}{k-1}\frac{ \partial^{k}}{\partial_{x_{1}}^{k-1}\partial_{x_{2}}}u(0,0)f^{(l)}(0).\] So \[\frac{\partial^{k+l-1}}{\partial_{x_{1}}^{k+l-1}}u(0,0)=-\binom{k+l-1}{k-1}\frac{ \partial^{k}}{\partial_{x_{1}}^{k-1}\partial_{x_{2}}}u(0,0)f^{(l)}(0)=\binom{k+l -1}{k-1}\beta k!f^{(l)}(0)\] where the second equal sign is given by (1.2). 
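The cancellation behind the last two displays can also be checked symbolically for small \(k\) and \(l\). The sketch below (Python with sympy; the particular choice \(k=2\), \(l=3\) and the truncation of the Taylor expansions are ours, made only for illustration) builds a generic \(u\) vanishing to order \(k\) at the origin and a curve \(f\) with \(f^{(s)}(0)=0\) for \(s<l\), and verifies that \(\frac{\mathrm{d}^{k+l-1}}{\mathrm{d}t^{k+l-1}}u(t,f(t))\big|_{t=0}\) reduces to \(\frac{\partial^{k+l-1}}{\partial_{x_{1}}^{k+l-1}}u(0,0)+\binom{k+l-1}{k-1}\frac{\partial^{k}}{\partial_{x_{1}}^{k-1}\partial_{x_{2}}}u(0,0)f^{(l)}(0)\), exactly the two terms used above.

```python
import sympy as sp

k, l = 2, 3                      # small test case; any k >= 1, l >= 2 works
N = k + l - 1                    # the derivative order appearing in the proof
t, x1, x2 = sp.symbols('t x1 x2')

# generic u vanishing to order k at the origin: Taylor terms of total degree
# k..N suffice, since higher-order terms cannot contribute to the t^N coefficient
c = {(h, d - h): sp.Symbol(f'c_{h}_{d - h}')
     for d in range(k, N + 1) for h in range(d + 1)}
u = sum(coef * x1**h * x2**j for (h, j), coef in c.items())

# generic curve with f^{(s)}(0) = 0 for all s < l
a = {s: sp.Symbol(f'a_{s}') for s in range(l, N + 1)}
f_curve = sum(a[s] * t**s / sp.factorial(s) for s in a)

lhs = sp.diff(u.subs({x1: t, x2: f_curve}), t, N).subs(t, 0)

# right-hand side of the identity in the text, written via Taylor coefficients:
# d^N u / dx1^N (0,0) = N! c_{N,0},  d^k u / dx1^{k-1} dx2 (0,0) = (k-1)! c_{k-1,1}
rhs = (sp.factorial(N) * c[(N, 0)]
       + sp.binomial(N, k - 1) * sp.factorial(k - 1) * c[(k - 1, 1)] * a[l])

print(sp.simplify(sp.expand(lhs - rhs)))    # prints 0
```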
Then by Proposition 2.1, one has \[\mathrm{Cap}_{\Omega}(s_{\varepsilon},u_{N}-u_{\#,k+l-2})=\varepsilon^{2(k+l-1 )}(\binom{k+l-1}{k-1}\beta k!f^{(l)}(0))^{2}\pi C_{k+l-1}(1+\mathrm{o}(1))\] as \(\varepsilon\to 0^{+}\). Finally by expansion (1.1), one can get the vanishing order of \(\lambda_{N}(\Omega\setminus s_{\varepsilon})-\lambda_{N}(\Omega)\) when \(\alpha=0\). This completes the proof of Theorem 1.1. ## 4. multiple case Now suppose \(\Omega\) is a bounded domain in \(\mathbb{R}^{2}\) containing \(0\) and \[\lambda=\lambda_{N}(\Omega)=\cdots=\lambda_{N+m-1}(\Omega)\] is an eigenvalue of the Dirichlet Laplacian on \(\Omega\) with multiplicity \(m\), we will use Proposition 3.1 in [3] to show how the eigenvalues perturb when \(\Omega\) is removed by \(s_{\varepsilon}\). In particular, the proof is just a small modification of the proof of Theorem 1.17 in [3]. Firstly, we introduce the concept of \((u,v)\)-capacity: **Definition 4.1** (Definition 1.7 in [3]).: _Given \(u,v\in H^{1}_{0}(\Omega)\), the \((u,v)\)-capacity of a compact set \(K\subset\Omega\) is_ \[\mathrm{Cap}_{\Omega}(K,u,v)=\int_{\Omega}\nabla V_{K,u}\cdot\nabla V_{K,v} \mathrm{d}x,\] _where \(V_{K,u}\) (resp. \(V_{K,v}\)) is the potential associated with \(u\) (resp. \(v\)) and \(K\)._ For a function \(u\) on \(\Omega\) which is real-analytic in a neighborhood of \(0\), we call the smallest \(\alpha\) such that \[\partial_{x_{1}}^{\alpha}u(0,0)\neq 0\] the order of \(x_{1}\)_-vanishing_ of \(u\) and we denote it by \(\kappa_{1}(u)\). In particular, if \[\partial_{x_{1}}^{\beta}u(0,0)=0,\qquad\forall\beta\in\mathbb{Z}_{\geq 0},\] we let \(\kappa_{1}(u)=+\infty\). For simplicity, denote \[\lambda_{i}(\Omega\setminus s_{\varepsilon})\text{ by }\lambda_{i}^{ \varepsilon},\qquad V_{s_{\varepsilon},u}\text{ by }V_{u}^{\varepsilon}\] and the eigenspace associated to \(\lambda\) by \(E(\lambda)\). Then directly by Theorem 1.9 in [3], one has \[\lambda_{N+i-1}^{\varepsilon}-\lambda=\mu_{i}^{\varepsilon}+\mathrm{o}( \chi_{\varepsilon}^{2}),\ \forall 1\leq i\leq m, \tag{4.1}\] where \[\chi_{\varepsilon}^{2}=\sup\{\mathrm{Cap}_{\Omega}(s_{\varepsilon},u):\ u\in E (\lambda)\text{ and }\|u\|=1\}\] and \(\{\mu_{i}^{\varepsilon}\}_{i=1}^{m}\) are eigenvalues of the quadratic form \(r_{\varepsilon}\) on \(E(\lambda)\) defined by \[r_{\varepsilon}(u,v)=\operatorname{Cap}_{\Omega}(s_{\varepsilon},u,v)-\lambda \int_{\Omega}V_{u}^{\varepsilon}\cdot V_{v}^{\varepsilon}\mathrm{d}x. \tag{4.2}\] To get more accurate estimate of the difference of eigenvalues, one need to decompose the eigenspace \(E(\lambda)\) with respect to the order of \(x_{1}\)-vanishing. Specifically, for any \(k\in\mathbb{Z}_{\geq 0}\), consider the mapping \(\Pi_{k}^{1}:E(\lambda)\to\mathbb{R}_{k}[x_{1}]\) that associates to a function its order \(k\) Taylor expansion with respect to \(x_{1}\) variable at \((0,0)\), i.e. \[\Pi_{k}^{1}u=\sum_{\beta=0}^{k}\frac{1}{\beta!}\partial_{x_{1}}^{\beta}u(0,0)x _{1}^{\beta},\] and denote the kernel of \(\Pi_{k}^{1}\) by \(N_{k}\). Let \[N_{-1}=E(\lambda),\qquad E_{\infty}=\bigcap_{k\in\mathbb{Z}_{\geq 0}}N_{k}\] and \[k_{1}>\cdots>k_{p}\geq 0\] be the integers at which there is a jump in the non-increasing sequence \(\{\dim(N_{k})\}_{k\geq 1}\). For \(1\leq j\leq p\), let \[E_{j}=N_{k_{j+1}}\cap N_{k_{j}}^{\perp}\] where we set \(k_{p+1}=-1\) and \(N_{k_{j}}^{\perp}\) is the orthogonal complement of \(N_{k_{j}}\) in \(L^{2}(\Omega)\). 
Then, we get an \(L^{2}(\Omega)\)-orthogonal decomposition of \(E(\lambda)\), \[E(\lambda)=E_{\infty}\oplus E_{1}\oplus\cdots\oplus E_{p}.\] By the construction of the decomposition, one can find that the order of \(x_{1}\)_-vanishing_ of functions in \(E_{j}\) is \(k_{j}\). In particular, since each function in \(\mathbb{R}_{k}[x_{1}]\) has only one variable, one has \(\dim E_{j}=1\) for all \(1\leq j\leq p\). Fix an orthonormal basis \(\{u_{N+i-1}\}_{i=1}^{m}\) such that \[E_{\infty}=\operatorname{span}\{u_{N},\cdots,u_{N+m-p-1}\},\] \[E_{j}=\operatorname{span}\{u_{N+m-p+j-1}\},\] for all \(1\leq j\leq p\). Before proving Theorem 1.2, we state two lemmas. **Lemma 4.2**.: _For any \(f\in H^{1}_{0}(\Omega)\),_ \[\int_{\Omega}|V_{f}^{\varepsilon}|^{2}\,dx=o(\operatorname{Cap}_{\Omega}(s_{ \varepsilon},f))\quad\text{as }\varepsilon\to 0.\] Proof.: It's a direct corollary of Lemma A.1 in [2]. **Lemma 4.3** (Proposition 3.1 in [3]).: _Let \((\mathcal{H},\|\cdot\|)\) be a Hilbert space and \(q\) be a quadratic form, semi-bounded from below (not necessarily positive), with domain \(\mathcal{D}\) dense in \(\mathcal{H}\) and with discrete spectrum \(\{\nu_{i}\}_{i\geq 1}\). Let \(\{g_{i}\}_{i\geq 1}\) be an orthonormal basis of eigenvectors of \(q\). Let \(N\) and \(m\) be positive integers, \(F\) an \(m\)-dimensional subspace of \(\mathcal{D}\) and \(\{\xi_{i}^{F}\}_{i=1}^{m}\) the eigenvalues of the restriction of \(q\) to \(F\)._ _Assume that there exist positive constants \(\gamma\) and \(\delta\) such that_ * \(0<\delta<\gamma/\sqrt{2}\)_;_ * _for all_ \(i\in\{1,\ldots,m\}\)_,_ \(|\nu_{N+i-1}|\leq\gamma\)_,_ \(\nu_{N+m}\geq\gamma\) _and, if_ \(N\geq 2\)_,_ \(\nu_{N-1}\leq-\gamma\)_;_ * \(|q(\varphi,g)|\leq\delta\left\|\varphi\right\|\left\|g\right\|\) _for all_ \(g\in\mathcal{D}\) _and_ \(\varphi\in F\)_._ _Then we have_ * \(\left|\nu_{N+i-1}-\xi_{i}^{F}\right|\leq\frac{4}{\gamma}\delta^{2}\) _for all_ \(i=1,\ldots,m\)_;_ * \(\left\|\Pi_{N}-\mathbb{I}\right\|_{\mathcal{L}(F,\mathcal{H})}\leq\sqrt{2} \delta/\gamma\)_, where_ \(\Pi_{N}\) _is the projection onto the subspace of_ \(\mathcal{D}\) _spanned by_ \(\{g_{N},\ldots,g_{N+m-1}\}\)_._ **Proof of Theorem 1.2:** For simplicity, denote \[\mu_{j}=\pi C_{k_{j}}\cdot[\frac{\partial_{x_{1}}^{k_{j}}u_{N+m-p+j-1}(0,0)}{k _{j}!}]^{2},\qquad v_{j}^{\varepsilon}=\lambda_{N+m-p+j-1}^{\varepsilon}-\lambda,\] for all \(1\leq j\leq p\). Easy to find that if \(\kappa_{1}(u)\neq+\infty\), \[\mathrm{Cap}_{\Omega}(s_{\varepsilon},u)=\mathrm{Cap}_{\Omega}(s_{\varepsilon},u-u_{\#,\kappa_{1}(u)-1})\] where \(u_{\#,\kappa_{1}(u)-1}\) is defined in (3.4) and if \(\kappa_{1}(u)=+\infty\), \[\mathrm{Cap}_{\Omega}(s_{\varepsilon},u)=0.\] So by Proposition 2.1, one has \(\chi_{\varepsilon}^{2}=\mathrm{O}(\rho_{k_{p}}^{\varepsilon})\) as \(\varepsilon\to 0^{+}\). 
Then, by the fact \[\mathrm{Cap}_{\Omega}(K,u,v)\leq\mathrm{Cap}_{\Omega}(K,u)^{\frac{1}{2}} \mathrm{Cap}_{\Omega}(K,v)^{\frac{1}{2}}\] and Lemma 4.2, the matrix of \(r_{\varepsilon}\) (defined in (4.2)) under the basis \(\{u_{N+i-1}\}_{i=1}^{m}\) is of the form \[A_{\varepsilon}=\left(\begin{array}{ccc}0&&0\\ &\ddots&\vdots\\ 0&\cdots&\mu_{p}\rho_{k_{p}}^{\varepsilon}\end{array}\right)+o\left(\rho_{k_{ p}}^{\varepsilon}\right).\] So by min-max principle and (4.1), one has \[v_{p}^{\varepsilon}=\mu_{p}\rho_{k_{p}}^{\varepsilon}+\mathrm{o}(\rho_{k_{p}} ^{\varepsilon}).\] For \(v_{p-1}^{\varepsilon}\), we apply Lemma 4.3 with \[\mathcal{H}^{\varepsilon}=L^{2}(\Omega\setminus s_{\varepsilon});\] \[q_{p-1}^{\varepsilon}(u)=\frac{1}{\rho_{k_{p}}^{\varepsilon}}(\int_{\Omega \setminus s_{\varepsilon}}|\nabla u|^{2}\mathrm{d}x-\lambda\int_{\Omega \setminus s_{\varepsilon}}u^{2}\mathrm{d}x),\text{ for all }u\in H_{0}^{1}(\Omega \setminus s_{\varepsilon});\] \[F_{p-1}^{\varepsilon}=\Pi_{\varepsilon}(E_{\infty}\oplus E_{1}\oplus\cdots \oplus E_{p-1}),\] where \(\Pi_{\varepsilon}u=u-V_{u}^{\varepsilon}\). Easy to check that there exists \(\gamma>0\) such that, for \(\varepsilon\) small enough, \[|\frac{1}{\rho_{k_{p}}^{\varepsilon}}v_{j}^{\varepsilon}|\leq\gamma,\qquad \text{for }1\leq j\leq m-1;\] \[\frac{1}{\rho_{k_{p}}^{\varepsilon}}v_{p}^{\varepsilon}\geq 2\gamma;\] and in case \(N\geq 2\) \[\lambda_{N-1}^{\varepsilon}-\lambda\geq-2\gamma.\] Similar to the arguments in Section 3.1 in [3], one has, for all \(v\in F_{p-1}^{\varepsilon}\) and \(w\in H_{0}^{1}(\Omega\setminus s_{\varepsilon})\), \[|q_{p-1}^{\varepsilon}(v,w)|\leq\mathrm{o}\bigg{(}\big{(}\frac{\rho_{k_{p-1}} ^{\varepsilon}}{\rho_{k_{p}}^{\varepsilon}}\big{)}^{1/2}\bigg{)}\|v\|_{L^{2}} \|w\|_{L^{2}}.\] Repeating the proof of Theorem 1.9 in Section 3.2 in [3], one has \[v_{p-1}^{\varepsilon}=\mu_{p-1}\rho_{k_{p-1}}^{\varepsilon}+\mathrm{o}(\rho_{ k_{p-1}}^{\varepsilon}).\] Continue the above process, one will get \[v_{j}^{\varepsilon}=\mu_{j}\rho_{k_{j}}^{\varepsilon}+\mathrm{o}(\rho_{k_{j}} ^{\varepsilon}),\qquad\text{for all }1\leq j\leq p.\] Since the proof is vary similar to the proof of Theorem 1.17 in [3], we omit the details. Last, by the fact that each function in \(E_{\infty}\) is also an eigenfunction of the Dirichlet Laplacian on \(\Omega\setminus s_{\varepsilon}\), one has \[\lambda_{N+i-1}^{\varepsilon}=\lambda,\qquad\text{for all }1\leq i\leq m-p.\] This completes the proof of Theorem 1.2. **Acknowledgments.** The author would like to thank his advisor, Zuoqin Wang, for numerous help during various stages of the work and Paolo Musolino for helpful conversations on [1].
In this paper we deepen the work of L. Abatangelo, V. Felli, L. Hillairet and C. Lena on the asymptotic estimates of the eigenvalue variation under removal of segments from a domain in \(\mathbb{R}^{2}\). We obtain a sharp asymptotic estimate when the eigenvalue is simple and the removed segment is tangent to a nodal line of the associated eigenfunction. Moreover, we extend their results to the case when the eigenvalue is not simple.
2310.20592
Strongly Magnetized Tidal Disruption Event Disks via Stream Injection in GRMHD
Magnetically arrested accretion disks (MADs) around a rapidly rotating black hole (BH) have been proposed as a model for jetted tidal disruption events (TDEs). However, the dynamics of strongly magnetized disks in a more realistic simulation which can mimic the chaotic dynamics during a TDE have previously been unexplored. Here we employ global GRMHD simulations of a pre-existing MAD disk interacting with an injected TDE stream with impact parameter $\beta\equiv R_t/R_p=4-7$ to investigate how strongly magnetized TDEs differ from the standard MAD picture. We demonstrate for the first time that a MAD or semi-MAD state can be sustained and jets powered by the BH spin are produced in a TDE. We also demonstrate that the strength of the self-intersection shock depends on how dense the disk is relative to the stream, or the density contrast $f_\rho=\rho_d/\rho_s$. The jet or funnel can become significantly tilted (by $10-30^\circ$) due to the self-intersection outflow when $f_\rho \leq 0.1$. In models with a powerful jet and $f_\rho\leq 0.01$, the tilted jet interacts with and ultimately tilts the disk by as much as 23 degrees from the incoming stream. We illustrate that as $f_\rho$ increases, the tilt of the jet and disk is expected to realign with the BH spin once $f_\rho \geq 0.1$. We illustrate how the tilt can rapidly realign if $f_\rho$ increases rapidly and apply this to TDEs which have shown X-ray evolution on timescales of days-weeks.
Brandon Curd, Richard Anantua, Hayley West, Joaquin Duran
2023-10-31T16:30:02
http://arxiv.org/abs/2310.20592v1
# Strongly Magnetized Tidal Disruption Event Disks via Stream Injection in GRMHD ###### Abstract Magnetically arrested accretion disks (MADs) around a rapidly rotating black hole (BH) have been proposed as a model for jetted tidal disruption events (TDEs). However, the dynamics of strongly magnetized disks in a more realistic simulation which can mimic the chaotic dynamics during a TDE have previously been unexplored. Here we employ global GRMHD simulations of a pre-existing MAD disk interacting with an injected TDE stream with impact parameter \(\beta\equiv R_{t}/R_{p}=4-7\) to investigate how strongly magnetized TDEs differ from the standard MAD picture. We demonstrate for the first time that a MAD or semi-MAD state can be sustained and jets powered by the BH spin are produced in a TDE. We also demonstrate that the strength of the self-intersection shock depends on how dense the disk is relative to the stream, or the density contrast \(f_{\rho}=\rho_{d}/\rho_{s}\). The jet or funnel can become significantly tilted (by \(10-30^{\circ}\)) due to the self-intersection outflow when \(f_{\rho}\leq 0.1\). In models with a powerful jet and \(f_{\rho}\leq 0.01\), the tilted jet interacts with and ultimately tilts the disk by as much as 23 degrees from the incoming stream. We illustrate that as \(f_{\rho}\) increases, the tilt of the jet and disk is expected to realign with the BH spin once \(f_{\rho}\geq 0.1\). We illustrate how the tilt can rapidly realign if \(f_{\rho}\) increases rapidly and apply this to TDEs which have shown X-ray evolution on timescales of days-weeks. keywords: accretion, accretion discs - black hole physics - MHD - gamma-rays: galaxies - X-rays: galaxies ## 1 Introduction When a star wanders too close to its central black hole (BH), the tidal forces from the BH exceed the self gravity of the star and the star is subsequently disrupted into a stream of stellar material (Hills, 1975; Rees, 1988; Phinney, 1989; Evans & Kochanek, 1989). The bound portion of the stream ultimately returns to the BH, delivering mass to the pericenter radius at the fall back rate (\(\dot{M}_{\rm fb}\)) which falls off as \((t/t_{\rm fb})^{-5/3}\), where \(t_{\rm fb}\) is the orbital period of the most bound portion of the stream (or the fall back time). This leads to emission which also drops off as \((t/t_{\rm fb})^{-5/3}\) since the energy available for dissipation is provided by the kinetic energy of the stream. The transient, which is known as a tidal disruption event (TDE), is typically detectable for months-years. The dynamics governing the properties of the stream and subsequent emission depend on the stellar mass, eccentricity, pericenter radius, and compressibility of the star. The tidal radius of the star is given by, \[R_{t}/r_{g}=47m_{6}^{-2/3}m_{\star}^{-1/3}r_{\star}, \tag{1}\] where \(m_{6}=M_{\rm BH}/10^{6}\,M_{\odot}\) is the mass of the SMBH, \(m_{\star}=M_{\star}/M_{\odot}\) is the mass of the disrupted star, and \(r_{\star}=R_{\star}/R_{\odot}\) is its radius. For the typical TDE, the orbit is parabolic (\(e=1\)). For zero age main sequence stars the radius for complete disruption depends on the compressibility and occurs at \(\sim 0.9R_{t}\) for \(\gamma=5/3\) and at \(\gtrsim 2R_{t}\) for \(\gamma=4/3\)(Guillochon & Ramirez-Ruiz, 2013; Mainetti et al., 2017), though it is larger for evolved stars (Golightly et al., 2019). 
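As a quick check on the numbers that follow, the snippet below (Python; the physical constants and the conversion \(r_{g}=GM_{\rm BH}/c^{2}\) are standard, while the sample values of \(\beta\) are simply the range quoted in the abstract) evaluates equation (1) and the corresponding pericentre \(R_{p}=R_{t}/\beta\) for the fiducial \(m_{6}=m_{\star}=r_{\star}=1\) disruption.

```python
G, c, Msun = 6.674e-8, 2.998e10, 1.989e33      # cgs units

def tidal_radius_rg(m6=1.0, mstar=1.0, rstar=1.0):
    """Equation (1): R_t / r_g = 47 m6^(-2/3) mstar^(-1/3) rstar."""
    return 47.0 * m6 ** (-2.0 / 3.0) * mstar ** (-1.0 / 3.0) * rstar

def grav_radius_cm(m6=1.0):
    return G * m6 * 1e6 * Msun / c ** 2

if __name__ == "__main__":
    rt = tidal_radius_rg()
    print(f"R_t = {rt:.1f} r_g = {rt * grav_radius_cm():.2e} cm")
    for beta in (4, 5, 6, 7):                  # impact parameters explored in this work
        print(f"beta = {beta}: R_p = {rt / beta:.1f} r_g")
```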
Several works have addressed the initial disruption of the star and evolution of the stream over a broad parameter space (Carter & Luminet, 1982; Evans & Kochanek, 1989; Kochanek, 1994; Lodato et al., 2009; Brassart & Luminet, 2010; Stone et al., 2013; Coughlin & Nixon, 2015; Coughlin et al., 2016; Steinberg et al., 2019; Ryu et al., 2020). TDEs have been discovered in the X-ray, optical/UV, and radio (see Komossa, 2015; Alexander et al., 2020; Gezari, 2021 for a review). While disk formation is expected in TDEs, what powers the emission is still unclear, for instance either turbulent accretion or shocks could explain the emission at different stages in the evolution. The presence of outflows, possibly launched by an accretion disk (Strubbe & Quataert, 2009; Coughlin & Begelman, 2014; Metzger & Stone, 2016), has been inferred in many cases due to radio emission (Alexander et al., 2016, 2017) and TDEs have also been observed to launch jets (Bloom et al., 2011; Burrows et al., 2011; Zauderer et al., 2011; Cenko et al., 2012; Brown et al., 2015). More recently, a handful of TDEs have been observed during the rise to peak (Holoien et al., 2019, 2020; Hinkle et al., 2021; Hammerstein et al., 2023). This bounty of observations is expected to grow significantly once the Large Synoptic Survey Telescope (LSST, Ivezic et al., 2019; Bricman and Gomboc, 2020) comes online, but theory has yet to fully describe the range of observational properties exhibited by TDEs. Jetted TDEs have observational properties that present a particularly complicated puzzle. For instance, _Swift_ J1644+57 showed rapid viability following the turn on with quasi-periodic oscillations (QPOs) at \(\sim 200\) s (Reis et al., 2012), long period viability at \(\sim 10^{6}\) s with the period increasing over the course of the transient (Saxton et al., 2012), and a rapid drop in the X-ray flux at the \(\sim 500\) days after the initial trigger (Zauderer et al., 2013). A similar drop off in the X-ray flux was seen in _Swift_ J2058+05 after several months (Pasham et al., 2015). Magnetically arrested accretion disks (MADs, Narayan et al., 2003) are thought to provide a physical explanation for both the presence of a relativistic jets and variability in jetted TDEs. The large magnetic flux required for a MAD is thought to be sourced by either poloidal field lines in a fossil disk (Kelley et al., 2014; Tchekhovskoy et al., 2014; Teboul and Metzger, 2023) or conversion of toroidal field lines to poloidal through a dynamo effect (Liska et al., 2020). However, general relativistic radiation magnetohydrodynamics (GRRMHD) simulations of thin MADs have not shown complete jet turn off, potentially due to magnetic pressure support of the disk at low accretion rates (Avara et al., 2016; Curd and Narayan, 2023; Liska et al., 2022). Thus, the rapid shut off in X-ray flux is difficult to explain in a MAD state unless simulations are unable to capture magnetic diffusion due to their relatively short duration (typically several days). Disk formation in TDEs may result in a different disk structure than the standard advection dominated accretion disk (ADAF, Abramowicz et al., 1988; Narayan and Yi, 1995), which has been assumed in some studies (Dai et al., 2018; Curd and Narayan, 2019). 
Several numerical studies of disk formation have demonstrated the presence of shocks and outflows as well as long lasting asymmetric structure (Ramirez-Ruiz and Rosswog, 2009; Guillochon and Ramirez-Ruiz, 2013; Shiokawa et al., 2015; Bonnerot et al., 2016; Sadowski et al., 2016; Hayasaki et al., 2016; Bonnerot and Lu, 2020; Bonnerot et al., 2021; Curd and Narayan, 2022; Steinberg and Stone, 2022; Ryu et al., 2023). Furthermore, the eccentricity of material sourced from the stream is difficult to dissipate which, in the majority of studies, leads to an eccentric accretion disk and spiral shocks. For instance, the most realistic smooth particle hydrodynamics simulations to date found imperfect circularization as the final disk remains mildly eccentric with \(e\approx 0.3\)(Bonnerot and Lu, 2020; Bonnerot et al., 2021). A long duration simulation (\(2t_{\rm fb}\)) by Ryu et al. (2023) demonstrated that shocks may dominate the energy budget of the TDE and the disk may remain highly eccentric with \(e\sim 0.5-0.6\). However, recent RHD simulations with adaptive mesh refinement find that the inner disk was able to reach \(e<0.2\) after more than 30 days (Steinberg and Stone, 2022), which is substantially longer than disk formation simulations with similar parameters. It is worth noting that GRMHD and GRRMHD simulations were unable to reach the magnetic flux that is required for the MAD state due to the weak magnetic flux provided by the stream as well as the chaotic disk formation (Sadowski et al., 2016; Curd, 2021). As there are no current simulations of eccentric MADs nor TDE disk formation simulations which result in a MAD, it is unclear how MADs in TDEs may differ from the standard thick accretion disk. The primary question we address in this work is whether or not TDE disks can maintain the magnetic flux required for the MAD state. Although Kelley et al. (2014) demonstrated that the stream can trap some magnetic flux, how much magnetic field threaded the BH can not be seen in their local simulations. Global simulations are needed in order to observe field lines advecting onto the BH horizon. Furthermore, the self intersection outflow is quasi spherical thus the force that it applies to the inner disk and jet is not symmetrical (e.g. Jiang et al., 2016). This suggests that the jet, during strong self intersection, will experience an asymmetric lateral force about the jet axis. One might expect strong perturbation of the jet, and potentially the disk due its interaction with the jet. In this work, we investigate MAD or strongly magnetized TDE disks in GRMHD using a novel approach to overcome the computational difficulties in simulating the large and small scale structures, as well as long time scales, required to study TDE disks in a global simulation. We assume a BH mass of \(10^{6}M_{\odot}\) and stellar mass of \(1M\odot\) in each simulation. We also study the effects of spin and use \(a_{\star}=0\) and \(a_{\star}=0.9\). We skip the initial disk formation process and assume it resulted in the existence of a circularized, small scale MAD disk, which we use as the initial conditions for each simulation. We then inject a magnetized stream with a fall back rate appropriate for a given time in the TDE evolution. We set the pericenter radius of the stream such that the self intersection radius is on the order of \(50r_{g}\), where \(r_{g}\) is the gravitational radius (defined in Section 4). 
Since GRMHD simulations are scale free, the most important parameter in our simulations is the ratio between the density of the pre-existing disk and injected stream (or the density contrast, which we define in Section 2). We evolve each simulation for \(\sim 0.87-4\) days and study the disk and jet properties during the interaction between the disk and stream. This paper is organized as follows. In Section 2, we discuss how the density contrast evolves in a simplified model of the TDE stream and accretion disk and illustrate potential consequences on the dynamics. In Section 3, we describe the numerical methods used to perform the GRMHD simulations. In Section 4, we define calculations used to analyze the simulations. In Section 5, we discuss core results and provide visualizations of each simulation. We discuss how our results can describe jetted TDEs in Section 6 and we conclude in Section 7. ## 2 Density contrast in TDEs Following Stone et al. (2013), we define the fallback time as \[t_{\rm fb}=3.5\times 10^{6}{\rm sec}\ m_{6}^{1/2}m_{\star}^{-1}r_{\rm s}^{3/2}. \tag{2}\] Following the rise to peak, the mass fallback rate follows a power law \[\dot{M}_{\rm fb}\sim\dot{M}_{\rm peak}\left(\frac{t}{t_{\rm fb}}\right)^{-5/3}, \tag{3}\] where \[\frac{\dot{M}_{\rm peak}}{\dot{M}_{\rm Edd}}\sim 133m_{6}^{-3/2}m_{*}^{2}r_{*}^{-3 /2} \tag{4}\] is the peak mass fallback rate in units of the Eddington mass accretion rate (defined later in Equation 16). Note we set \(\eta=0.1\), \(k=1\), and \(n=0\) in each of the expressions for simplicity such that there is no dependence on \(\beta\). The simulations presented in this work demonstrate that the density contrast is \[f_{\rho}(t,r)=\frac{\rho_{d}(t,r)}{\rho_{s}(t,r)}, \tag{5}\] leads to different dynamics in a TDE, where \(\rho_{d}\) is the mass density of the pre-existing disk and \(\rho_{s}\) that of the injected stream. Namely, the self-intersection outflow can be diminished if the stream's orbit is changed during its interaction with the disk. At the start of the TDE evolution, \(f_{\rho}<1\). Even in the simulation presented by Steinberg and Stone (2022), the circularized disk clearly remains less dense than the stream by roughly an order of magnitude. Depending on how the disk mass, scale, and geometry evolves, the quantity \(f_{\rho}\) could conceivably exceed unity at late times. Here we discuss how evolution of \(f_{\rho}\) could be relevant in TDEs. To describe the stream, we assume that its density is related to the fallback rate by the expression \[\rho_{s}(t,r)=\frac{\dot{M}_{\rm fb}(t)}{\pi H_{s}(r)^{2}v_{s}(r)}, \tag{6}\] where \(H_{s}\) is the stream height and \(v_{s}\approx\sqrt{2GM_{\rm BH}/r}\) is the free-fall velocity, which is roughly the speed of the incoming stream. For simplicity, we assume the stream height takes the form \(H_{s}=(r/R_{p})R_{*}/R_{p}\). To approximate the evolution of the disk, we assume that \(t\geq t_{\rm fb}\) such that the initial disk mass is \(M_{d}(t=t_{\rm fb})=0.1M_{*}\). We then approximate the disk mass by accounting for mass accreted by the BH over time \[\dot{M}_{d}(t)=\dot{M}_{\rm fb}(t)-\dot{M}_{\rm BH}(t). \tag{7}\] Here we assume \(\dot{M}_{\rm BH}=f_{\rm acc}\dot{M}_{\rm fb}\), and use a fiducial value of \(f_{\rm acc}=0.1\). This assumption is motivated by Curd (2021), which found a mass accretion rate of \(\sim 10\%\) of the fallback rate. This assumption may not hold for long term evolution as the disk mass builds up (e.g. Metzger, 2022). 
The disk mass then evolves as \[M_{d}(t)=M_{d,0}+(1-f_{\rm acc})\int_{t_{\rm fb}}^{t}\dot{M}_{\rm fb}(t^{\prime})dt^{\prime}. \tag{8}\] We assume that the gas density follows a power-law with radius of \(\rho_{d}(r,t)=\rho_{d,0}(t){(r/r_{H})}^{-1}\), where \(r_{H}\) is the horizon radius and \(\rho_{d,0}(t)\) is the maximum density of the disk at time \(t\). This profile is appropriate for a MAD disk (Chatterjee and Narayan, 2022), but is also similar to that of the TDE disk in Andalman et al. (2022). The density for a disk of outer radius \(R_{d}\) is then obtained from \[\rho_{d}(t,r)=\frac{M_{d}(t)}{2\pi r(R_{d}^{2}-r_{H}^{2})}. \tag{9}\] Here we assume a spherical distribution at all mass accretion rates. At low accretion rates, the disk may collapse into a thin-disk geometry with scale-height \(h_{d}\), which may have a radial dependence. We set \(\rho_{d}(r)=0\) for \(r<r_{H}\) and \(r>R_{d}\). Although the simulations we perform assume a geometrically thick accretion disk, in part because we cannot sufficiently resolve small scale-height disks, they do demonstrate the impact that the density contrast has on the stream dynamics. We believe that this effect should be similar in a thin system. Furthermore, the incoming stream is expected to be aligned with the disk since the disk tends to remain roughly aligned with the initial angular momentum of the star and does not precess (Andalman et al., 2022). We show an example of \(f_{\rho}\) over time using our assumed disk and stream evolution in Figure 1. In a scenario where a circularized accretion disk forms, there is no cleared path for the stream to flow along towards pericenter. Instead, the circularized disk will exert ram pressure on the stream with an acceleration \(a_{\rm ram}\propto f_{\rho}\), effectively braking it. At low \(f_{\rho}\), the stream will be effectively unperturbed. However, as \(f_{\rho}\) approaches unity, the ram pressure may completely prevent the stream from reaching pericenter. Instead, the stream may mix with the disk as it rapidly dissipates orbital energy, similar to Steinberg and Stone (2022). As we show in this work, the self-intersection becomes weaker as \(f_{\rho}\) increases, which leads to dynamic changes in the disk and jet/corona. Such evolution could be responsible for state transitions and delayed outflows, which have been observed in several TDEs. Here we have ignored the possibility of disk collapse, but we discuss how this may change TDE evolution in the context of \(f_{\rho}\) in Section 6. We note that the evolution and size of the disk are vital components of the scenario we put forward. For instance, we have neglected the possibility of an extended envelope existing beyond \(R_{\rm circ}\) as in Metzger (2022). In addition, we assume that \(\dot{M}_{\rm BH}\) is proportional to \(\dot{M}_{\rm fb}\) at all times. While this is based on simulation results, global simulations have yet to cover the full range of TDE evolution. In models such as Metzger (2022), bound material within the disk will also drain into the BH after an accretion time. See Metzger (2022) for a description. Figure 1: Here we illustrate how \(f_{\rho}\) evolves in our simple TDE disk model with a BH mass \(m_{6}=1\) and stellar mass \(m_{*}=1\). Note that we set \(f_{\rm acc}=0.1\) and \(\beta=1\) for simplicity. We show the initial \(f_{\rho}\) for each \(a_{*}=0.9\) simulation in Table 1 based on \(\dot{M}_{\rm inj}\) (horizontal dashed lines). 
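To make the model above concrete, the following is a minimal numerical sketch (ours, not the code used to produce Figure 1) of the density contrast of Equations 2-9. It assumes the fiducial parameters \(m_{6}=m_{*}=r_{*}=1\), \(\beta=1\), \(f_{\rm acc}=0.1\), and \(\eta=0.1\); the stream height is read as \(H_{s}=(r/R_{p})R_{*}\), the disk outer radius is taken to be the circularization radius \(2R_{t}/\beta\), and the evaluation radius of \(50\,r_{g}\) is our choice. These choices are illustrative only.

```python
import numpy as np

# Physical constants (cgs)
G, c = 6.674e-8, 2.998e10
Msun, Rsun = 1.989e33, 6.957e10

# Fiducial TDE: m6 = m_star = r_star = 1, beta = 1, f_acc = 0.1
M_bh, M_star, R_star, beta, f_acc = 1e6 * Msun, Msun, Rsun, 1.0, 0.1
r_g = G * M_bh / c**2
R_t = R_star * (M_bh / M_star) ** (1.0 / 3.0)      # tidal radius
R_p = R_t / beta                                   # pericenter radius

# Fallback time (Equation 2) and peak rate (Equation 4 with eta = 0.1)
t_fb = 3.5e6                                       # seconds
Mdot_Edd = 1.25e38 * (M_bh / Msun) / (0.1 * c**2)
Mdot_peak = 133.0 * Mdot_Edd

def Mdot_fb(t):
    """Mass fallback rate (Equation 3), valid for t >= t_fb."""
    return Mdot_peak * (t / t_fb) ** (-5.0 / 3.0)

def rho_stream(t, r):
    """Stream density (Equation 6); H_s = (r/R_p) R_star is our reading of the height prescription."""
    H_s = (r / R_p) * R_star
    v_s = np.sqrt(2.0 * G * M_bh / r)
    return Mdot_fb(t) / (np.pi * H_s**2 * v_s)

def disk_mass(t):
    """Disk mass (Equation 8), using the analytic integral of the fallback rate."""
    m_acc = Mdot_peak * t_fb * 1.5 * (1.0 - (t / t_fb) ** (-2.0 / 3.0))
    return 0.1 * M_star + (1.0 - f_acc) * m_acc

def rho_disk(t, r, R_d):
    """Disk density (Equation 9), rho_d ~ 1/r out to R_d."""
    return disk_mass(t) / (2.0 * np.pi * r * (R_d**2 - r_g**2))

# Density contrast (Equation 5) near the self-intersection radius, with
# R_d set to the circularization radius 2 R_t / beta (our assumption).
r_eval, R_d = 50.0 * r_g, 2.0 * R_t / beta
for days in (50, 100, 300, 600):
    t = days * 86400.0
    print(f"t = {days:4d} d : f_rho ~ {rho_disk(t, r_eval, R_d) / rho_stream(t, r_eval):.3f}")
```

Under these assumptions the contrast climbs from \(f_{\rho}\sim 10^{-2}\) near peak to order unity after several hundred days, which is the qualitative behavior illustrated in Figure 1.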
As \(f_{\rho}\) increases, the stream will dissipate more of its orbital energy in its interaction with the disk. As we describe in Section 5, the self-intersection shock weakens as a result. ## 3 Numerical methods We present a suite of 3D numerical simulations of MAD TDE disks carried out with the GRRMHD code, koral (Sadowski et al., 2013, 2014, 2015; Sadowski & Narayan, 2015). Using a mesh-based, finite-difference method in a stationary Kerr space-time, koral solves the conservation equations of GRMHD: \[(\rho u^{\mu})_{;\mu} =0, \tag{10}\] \[(T^{\mu}_{\ \nu})_{;\mu} =0, \tag{11}\] where \(\rho\) is the gas density in the comoving fluid frame, \(u^{\mu}\) are the components of the gas four-velocity as measured in the "lab frame", and \(T^{\mu}_{\ \nu}\) is the MHD stress-energy tensor in the "lab frame": \[T^{\mu}_{\ \nu}=(\rho+u_{g}+p_{g}+b^{2})u^{\mu}u_{\nu}+(p_{g}+\frac{1}{2}b^{2})\delta^{\mu}_{\ \nu}-b^{\mu}b_{\nu}. \tag{12}\] Here \(u_{g}\) and \(p_{g}=(\gamma_{g}-1)u_{g}\) are the internal energy and pressure of the gas in the comoving frame, and \(b^{\mu}\) is the magnetic field four-vector which is evolved following the ideal MHD induction equation (Gammie et al., 2003). We adopt \(\gamma_{g}=5/3\) in this work. The code can handle radiation as well, but we choose to study pure GRMHD in this work to lower computational costs. We evolve the fluid in modified Kerr-Schild coordinates with the inner radius of the simulation domain inside the BH horizon. The radial grid cells are spaced logarithmically, and we choose inner and outer radial bounds \(R_{\rm min}<r_{H}\) (with 4 cells within the horizon) and \(R_{\rm max}=5\times 10^{4}\,r_{g}\). We also use a full \(2\pi\) in azimuth and set \(\varphi_{\rm min}=-\pi\) and \(\varphi_{\rm max}=\pi\). We choose outflow boundary conditions at both the inner and outer radial bounds, reflective boundary conditions at the top and bottom polar boundaries, and periodic boundary conditions in \(\varphi\). In each simulation, we employ a resolution of \(N_{r}\times N_{\theta}\times N_{\varphi}=256\times 144\times 144\). Specifics of the grid are given in Appendix A. In order to study a strongly magnetized disk which resembles a TDE disk, we first initialize and evolve a MAD disk before introducing the TDE stream. Similar to the fossil disk scenario proposed by Tchekhovskoy et al. (2014) and Kelley et al. (2014), this setup relies on the pre-existing disk to obtain the poloidal field required by a MAD. Our setup differs in that we skip the rise to peak and the interaction between the stream and fossil disk. Instead, we assume that the TDE has already obtained magnetic flux from the fossil disk and formed a circularized MAD accreting at a super-Eddington rate. We then inject a TDE stream into the simulation domain as in Curd (2021), allow the stream and pre-existing MAD to interact, and study how the presence of a TDE stream changes the dynamics compared to a typical MAD system. We note that our methods are similar to those of Chan et al. (2019), but they study systems where the disk and stream are misaligned initially and the disk is geometrically thin. The BH mass is set to \(10^{6}M_{\odot}\), though this only sets the units since GRMHD is scale free. We start with a torus of gas in hydrostatic equilibrium threaded by a large-scale poloidal magnetic field and with its angular momentum aligned with the BH spin axis (or \(z\)-axis). 
From the torus initial conditions, the magnetorotational instability naturally develops and drives accretion onto the BH, which ultimately drags in magnetic field that saturates at the MAD state. We perform two such initial simulations (one for each BH spin) and evolve this initial stage for \(15,000t_{g}\), which is long enough for the magnetic field to saturate. We give additional details of the initial torus and the time evolution of our initial setup in Appendix B. The simulation state for each BH spin after the initial evolution, before stream injection, is shown in Figure 2. Figure 2: Initial simulation state for each BH spin. We show the gas density (colors), velocity (streamlines), and jet boundary (\(\sigma=1\), pink line). To inject the stream, we assume the stream resulted from the disruption of a \(1M_{\odot}\) star on a parabolic trajectory (eccentricity \(e=1\)) around a \(10^{6}M_{\odot}\) BH and follow the injection methodology described in Curd (2021) with a few modifications. We reproduce relevant expressions from Curd (2021) below for completeness. We describe the disruption in terms of the impact parameter, \(\beta\), which is defined as the ratio between the tidal radius and pericenter separation such that \(\beta\equiv R_{t}/R_{p}\). We choose \(\beta=4\) for BH spin \(a_{*}=0\) models and \(\beta=7\) for \(a_{*}=0.9\). This gives a self-intersection radius (ignoring interaction between the stream and disk) of \(\sim 50\,r_{g}\) for all models. We apply the 'frozen in' approximation to estimate the spread in binding energy (Stone et al., 2013): \[\Delta\epsilon\approx 4.3\times 10^{-4}\frac{m_{6}^{1/3}m_{*}^{2/3}}{r_{*}}c^{2}. \tag{13}\] We set the binding energy of the stream to that of the most bound component, \(\epsilon_{\rm inj}=\epsilon_{\rm mb}=\epsilon_{*}-\Delta\epsilon/2\). Here \(\epsilon_{*}\) is the initial orbital binding energy of the star, which is zero since we assume a parabolic orbit. We note that this is not accurate for late times in a TDE and \(\epsilon\) of incoming material will slowly approach zero, but we maintain this assumed binding energy for all simulations for simplicity. The orbit of the disrupted star is assumed to be aligned with the equatorial plane of the BH spin vector. For each simulation we fix \(\dot{M}_{\rm inj}\) (and correspondingly \(\rho_{\rm inj}\)) to be constant since the simulation time is much shorter than the fallback time. We set the gas temperature \(T_{\rm inj}=10^{5}\) K, gas pressure \(p_{\rm inj}=k_{B}T_{\rm inj}/\mu_{\rm gas}m_{p}\) (here \(k_{B}\) is the Boltzmann constant, \(m_{p}\) is the mass of a proton, and \(\mu_{\rm gas}\) is the mean molecular weight assuming Solar metallicity), and injection radius \(R_{\rm inj}=250\,r_{g}\). Due to resolution limitations, we assume \((H/R)_{\rm inj}=0.05\), which subtends only 6 cells in \(\vartheta\) and 2 cells in \(\varphi\) in our grid. The angular momentum is fixed to the value corresponding to the pericenter radius of the TDE stream, \(l=\sqrt{2R_{\rm p}}\), from which we obtain the \(\varphi\) velocity \(v^{\varphi}=l/R_{\rm inj}\). The total velocity is then set by \[v_{\rm inj}=\sqrt{\frac{2}{R_{\rm inj}}+2\epsilon_{\rm inj}}\;, \tag{14}\] from which we obtain the radial velocity, \(v^{r}=-\sqrt{(v_{\rm inj})^{2}-(v^{\varphi})^{2}}\). 
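For reference, the injection kinematics quoted above can be evaluated with a short sketch (ours, not taken from the simulation setup scripts), in units of \(G=c=M_{\rm BH}=1\). It assumes the Newtonian vis-viva form of Equation 14 with \(\epsilon_{\rm inj}\) the (negative) specific orbital energy of the most bound debris, and uses the \(a_{*}=0.9\), \(\beta=7\) parameters.

```python
import numpy as np

# Gravitational units: G = c = M_BH = 1, lengths in r_g
m6, m_star, r_star, beta = 1.0, 1.0, 1.0, 7.0

# Tidal radius in r_g: R_t = R_star (M_BH / M_star)^(1/3) / r_g
Rsun_cm, rg_cm = 6.957e10, 1.477e11 * m6          # r_g for a 10^6 Msun BH
R_t = r_star * Rsun_cm * (1e6 * m6 / m_star) ** (1.0 / 3.0) / rg_cm
R_p = R_t / beta                                   # pericenter radius

# Spread in specific binding energy (Equation 13), in units of c^2
d_eps = 4.3e-4 * m6 ** (1.0 / 3.0) * m_star ** (2.0 / 3.0) / r_star
eps_inj = 0.0 - 0.5 * d_eps      # most bound debris; parabolic star has eps_* = 0

# Injection kinematics at R_inj = 250 r_g (Equation 14 and the relations above)
R_inj = 250.0
l_inj = np.sqrt(2.0 * R_p)       # specific angular momentum for pericenter R_p
v_phi = l_inj / R_inj
v_inj = np.sqrt(2.0 / R_inj + 2.0 * eps_inj)
v_r = -np.sqrt(v_inj**2 - v_phi**2)

print(f"R_t ~ {R_t:.1f} r_g, R_p ~ {R_p:.2f} r_g")
print(f"v_inj ~ {v_inj:.3f} c, v_phi ~ {v_phi:.4f} c, v_r ~ {v_r:.3f} c")
```

Under these assumptions the stream enters the domain at roughly \(9\%\) of the speed of light, dominated by the radial component, consistent with a nearly free-falling stream at \(250\,r_{g}\).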
We inject a weak toroidal magnetic field with the stream by setting \[B_{\rm inj}^{r}=\frac{p_{\rm inj}\beta_{\rm inj}}{\sqrt{g^{rr}}}\cos\left(\frac{|\vartheta-\pi/2|}{(H/R)_{\rm inj}}\pi\right), \tag{15}\] where \(\beta_{\rm inj}=10^{-3}\) is the ratio of magnetic to gas pressure in the injection cells. The other field components are set to \(B_{\rm inj}^{\vartheta}=B_{\rm inj}^{\varphi}=0\). ## 4 Definitions In this section, we discuss the units adopted throughout the text and provide brief descriptions of quantities used to study the KORAL simulation data. Throughout this work, we use gravitational units to describe physical parameters. For distance we use the gravitational radius \(r_{g}\equiv GM_{\rm BH}/c^{2}\) and for time we use the gravitational time \(t_{g}\equiv GM_{\rm BH}/c^{3}\), where \(M_{\rm BH}\) is the mass of the BH. Often, we set \(G=c=1\), so the above relations would be equivalent to \(r_{g}=t_{g}=M_{\rm BH}\). Occasionally, we restore \(G\) and \(c\) when it helps to keep track of physical units. We adopt the following definition for the Eddington mass accretion rate: \[\dot{M}_{\rm Edd}=\frac{L_{\rm Edd}}{\eta_{\rm NT}c^{2}}, \tag{16}\] where \(L_{\rm Edd}=1.25\times 10^{38}\,(M_{\rm BH}/M_{\odot})\,{\rm erg\,s^{-1}}\) is the Eddington luminosity and \(\eta_{\rm NT}\) is the radiative efficiency of a thin disk around a BH with spin parameter \(a_{*}\) (which is often referred to as the Novikov-Thorne efficiency): \[\eta_{\rm NT}=1-\sqrt{1-\frac{2}{3r_{\rm ISCO}}}, \tag{17}\] and \(r_{\rm ISCO}=3+Z_{2}-\sqrt{(3-Z_{1})(3+Z_{1}+2Z_{2})}\) is the radius of the Innermost Stable Circular Orbit (ISCO, Novikov & Thorne, 1973) in the Kerr metric, where \(Z_{1}=1+(1-a_{*}^{2})^{1/3}\left((1+a_{*})^{1/3}+(1-a_{*})^{1/3}\right)\) and \(Z_{2}=\sqrt{3a_{*}^{2}+Z_{1}^{2}}\). For \(a_{*}=0\) and \(0.9\), the efficiency is \(\eta_{\rm NT}=5.72\%\) and \(15.58\%\), respectively. We compute the net mass inflow rate as \[\dot{M}(r)=-\int_{0}^{2\pi}\int_{0}^{\pi}\sqrt{-g}\rho\,u^{r}d\vartheta d\varphi. \tag{18}\] The magnetic flux is computed as \[\Phi(r)=\frac{1}{2}\int_{0}^{2\pi}\int_{0}^{\pi}\sqrt{-g}|B^{r}(r)|d\vartheta d\varphi, \tag{19}\] where \(B^{r}\) is the radial component of the magnetic field. The total energy flux (excluding the rest mass flux) is computed as \[L(r)=-\int_{0}^{2\pi}\int_{0}^{\pi}\sqrt{-g}(T^{r}_{\,t}+\rho u^{r})d\vartheta d\varphi. \tag{20}\] We track the time evolution of the mass accretion rate, magnetic flux, and jet power through unitless quantities evaluated at the BH horizon. We track the accretion of mass onto the BH in each simulation in Eddington units \[\dot{m}=\frac{\dot{M}(r_{H})}{\dot{M}_{\rm Edd}}. \tag{21}\] We quantify the magnetic field strength at the BH horizon through the normalized magnetic flux parameter (Tchekhovskoy et al., 2011) \[\phi=\frac{\Phi(r_{H})}{\sqrt{\dot{M}(r_{H})}}. \tag{22}\] For geometrically thick disks the MAD state is achieved once \(\phi_{\rm BH}\sim 40-50\) (see e.g. Tchekhovskoy et al., 2011, 2012). Since the majority of the escaping energy leaves the system through the jet in MAD disks, we quantify the jet power via the total efficiency at the BH horizon \[\eta=\frac{L(r_{H})}{\dot{M}(r_{H})}. \tag{23}\]
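As a quick numerical check of Equations 16-17 (a sketch we add here, not part of the paper's analysis pipeline), the ISCO radius, Novikov-Thorne efficiency, and Eddington accretion rate can be evaluated directly; to within rounding, this reproduces the efficiencies quoted above.

```python
import numpy as np

def r_isco(a):
    """ISCO radius in r_g for a prograde orbit around a BH with spin a."""
    z1 = 1.0 + (1.0 - a**2) ** (1.0 / 3.0) * ((1.0 + a) ** (1.0 / 3.0) + (1.0 - a) ** (1.0 / 3.0))
    z2 = np.sqrt(3.0 * a**2 + z1**2)
    return 3.0 + z2 - np.sqrt((3.0 - z1) * (3.0 + z1 + 2.0 * z2))

def eta_nt(a):
    """Novikov-Thorne radiative efficiency (Equation 17)."""
    return 1.0 - np.sqrt(1.0 - 2.0 / (3.0 * r_isco(a)))

def mdot_edd(m_bh_msun, a):
    """Eddington mass accretion rate in g/s (Equation 16)."""
    c = 2.998e10
    l_edd = 1.25e38 * m_bh_msun            # erg/s
    return l_edd / (eta_nt(a) * c**2)

for a in (0.0, 0.9):
    print(f"a* = {a}: r_ISCO = {r_isco(a):.3f} r_g, "
          f"eta_NT = {100 * eta_nt(a):.2f}%, "
          f"Mdot_Edd(1e6 Msun) = {mdot_edd(1e6, a):.2e} g/s")
```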
To determine the driving factor for angular momentum transport, we measure the effective viscosity \[\alpha_{\rm eff}=\frac{u^{r}u^{\varphi}}{c_{s}^{2}}, \tag{24}\] Reynolds viscosity \[\alpha_{\rm Rey}=\frac{\widehat{T}_{\rm Rey}^{\hat{r}\hat{\varphi}}}{p_{b}+p_{g}}, \tag{25}\] and Maxwell viscosity \[\alpha_{\rm Max}=\frac{\widehat{T}_{\rm Max}^{\hat{r}\hat{\varphi}}}{p_{b}+p_{g}}. \tag{26}\] Here \(\widehat{T}^{\hat{r}\hat{\varphi}}\) is the average orthonormal \(r\), \(\varphi\) component of the stress-energy tensor measured in the fluid frame, \(c_{s}\) is the sound speed, and \(p_{b}=b^{2}/2\) is the magnetic pressure. Note that we have taken advantage of the fact that the stress-energy tensor can be broken into gas (Reynolds) and magnetic (Maxwell) components. That is, we write Equation 12 strictly in terms of the gas or magnetic components. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline Model & \(a_{*}\) & \(\beta\) & \(\dot{M}_{\rm inj}\) & \(f_{\rho 0}\) & \(t_{\rm start}\) & \(t_{\rm end}\) \\ & & & (\(\dot{M}_{\rm Edd}\)) & & (\(10^{4}t_{g}\)) & (\(10^{4}t_{g}\)) \\ \hline m00f0.3b4 & 0 & 4 & 1 & 0.3 & 0 & 2 \\ m00f0.003b4 & 0 & 4 & 100 & 0.003 & 0 & 2 \\ m09f1b7A & 0.9 & 7 & 1 & 1 & 0 & 2 \\ m09f0.1b7A & 0.9 & 7 & 10 & 0.1 & 0 & 3.5 \\ m09f0.01b7 & 0.9 & 7 & 100 & 0.01 & 0 & 3.5 \\ m09f1b7B & 0.9 & 7 & 1 & 1 & 2 & 3.5 \\ m09f0.1b7B & 0.9 & 7 & 10 & 0.1 & 2 & 7 \\ \hline \hline \end{tabular} \end{table} Table 1: Here we describe the relevant parameters of each model presented in this work. Models m09f1b7B and m09f0.1b7B are restarts of m09f0.01b7 from \(20,000\,t_{g}\) with the injection rate lowered to increase the initial density contrast \(f_{\rho 0}\), in order to study how an evolved system changes once self-intersection is weakened. We compute the eccentricity at each grid point via \[e=\sqrt{1+2\epsilon l^{2}}, \tag{27}\] where \(\epsilon=-(u_{t}+1)\) is the binding energy and \(l=u_{\varphi}\) is the angular momentum. To quantify the orientation of the disk and jet (or corona/funnel), we first use the magnetization to divide the fluid into 'disk' (\(\sigma<1\)) and 'jet' (\(\sigma\geq 1\)). In simulations where there is no spin, this is not a true jet since there is no mechanism to accelerate the gas to relativistic speeds. Nevertheless, this region is likely to have low optical depth and represents where X-rays are likely to escape. Note that we transform quantities from spherical polar to Cartesian coordinates \(x^{i}=(x,y,z)\) to describe the position and angular momentum of the fluid in the following paragraphs. The angular momentum of the BH is aligned with the \(z\)-axis, so \[J^{i}_{\rm BH}=(0,0,a_{\rm BH}M). \tag{28}\] Since this term cancels when computing the tilt and precession and is meaningless for a Schwarzschild BH, we only show it here for completeness. We derive the angular momentum of each cell in the disk using the stress-energy tensor transformed to Cartesian coordinates \[S^{i}=[i\,j\,k]x^{j}T^{0k}_{\rm Cart}, \tag{29}\] where the brackets denote the antisymmetric Levi-Civita tensor. We then find the shell integrated, density weighted angular momentum components \[J^{i}=\frac{\int_{0}^{2\pi}\int_{0}^{\pi}\sqrt{-g}\,w_{\rm disk}(\sigma)\rho\,S^{i}d\vartheta d\varphi}{\int_{0}^{2\pi}\int_{0}^{\pi}\sqrt{-g}\,w_{\rm disk}(\sigma)\rho d\vartheta d\varphi}. \tag{30}\] In the above expression, the term \[w_{\rm disk}(\sigma)=\begin{cases}1,&\quad\sigma<1\\ 0,&\quad\sigma\geq 1\end{cases} \tag{31}\] is used to only include the disk in integration. 
We then define the tilt angle relative to the BH spin (or z-axis in the zero spin case) as a function of radius \[\mathcal{T}_{\rm disk}(r)=\arccos\Bigg{[}\frac{J^{z}}{\sqrt{(J^{x})^{2}+(J^{y})^{2}+(J^{z})^{2}}}\Bigg{]}. \tag{32}\] We also obtain the precession angle relative to the \(y\)-axis \[\mathcal{P}_{\rm disk}(r)=\arccos\Bigg{[}\frac{J^{y}}{\sqrt{(J^{x})^{2}+(J^{y})^{2}}}\Bigg{]}. \tag{33}\] In aligned systems, the precession angle is not a useful quantity, but once tilt sets in it can show whether the disk and jet precess together. For the jet, we derive a position-based angle. We start by finding the \(\sigma\) weighted mean position for the top and bottom jet at each radius \[x^{i}_{\rm jet,top}=\frac{\int_{0}^{2\pi}\int_{0}^{\pi/2}\sqrt{-g}\,w_{\rm jet}(\sigma)\sigma\ x^{i}d\vartheta d\varphi}{\int_{0}^{2\pi}\int_{0}^{\pi/2}\sqrt{-g}\,w_{\rm jet}(\sigma)\sigma d\vartheta d\varphi}, \tag{34}\] \[x^{i}_{\rm jet,bot}=\frac{\int_{0}^{2\pi}\int_{\pi/2}^{\pi}\sqrt{-g}\,w_{\rm jet}(\sigma)\sigma\ x^{i}d\vartheta d\varphi}{\int_{0}^{2\pi}\int_{\pi/2}^{\pi}\sqrt{-g}\,w_{\rm jet}(\sigma)\sigma d\vartheta d\varphi}. \tag{35}\] In both expressions, the term \[w_{\rm jet}(\sigma)=\begin{cases}0,&\quad\sigma<1\\ 1,&\quad\sigma\geq 1\end{cases} \tag{36}\] is used to explicitly exclude the disk from calculations. We then calculate a tilt and precession angle based on the mean position. For example, the top jet's tilt and precession are calculated as \[\mathcal{T}_{\rm jet,top}(r)=\arccos\Bigg{[}\frac{z_{\rm jet,top}}{\sqrt{(x_{\rm jet,top})^{2}+(y_{\rm jet,top})^{2}+(z_{\rm jet,top})^{2}}}\Bigg{]}, \tag{37}\] and \[\mathcal{P}_{\rm jet,top}(r)=\arccos\Bigg{[}\frac{y_{\rm jet,top}}{\sqrt{(x_{\rm jet,top})^{2}+(y_{\rm jet,top})^{2}}}\Bigg{]}. \tag{38}\] The same expressions are used for the bottom jet except with the mean coordinates \(x^{i}_{\rm jet,bot}\). For both the disk and jet, we report the average tilt and precession angles over \(10\leq r/r_{g}\leq 100\). We quantify the jet opening angle by computing the solid angle it subtends in a flat spacetime: \[\Omega_{\rm jet,top}(r)=\int_{0}^{2\pi}\int_{0}^{\pi/2}\,w_{\rm jet}(\sigma)\sin(\vartheta)\cos(\vartheta)d\vartheta d\varphi \tag{39}\] \[\Omega_{\rm jet,bot}(r)=-\int_{0}^{2\pi}\int_{\pi/2}^{\pi}\,w_{\rm jet}(\sigma)\sin(\vartheta)\cos(\vartheta)d\vartheta d\varphi. \tag{40}\] Note the minus sign in Equation 40 is to account for the negative introduced by \(\cos(\vartheta)\). We compute an average solid angle \[\Delta\Omega(r)=\frac{\Omega_{\rm jet,top}(r)+\Omega_{\rm jet,bot}(r)}{2}. \tag{41}\] We relate this to the mean jet width under the assumption of a conical geometry \[\mathcal{W}(r)=r\sin\biggl{(}\arccos[1-\Delta\Omega(r)/2\pi]\biggr{)}. \tag{42}\] ## 5 Results ### Stream/Disk Dynamics We show the large scale structure of models with \(f_{\rho}=0.01,0.1,1\) and \(a_{*}=0.9\) in Figure 3 (m09f0.01b7, m09f0.1b7A, m09f1b7A). When \(f_{\rho}=0.01\), the ram pressure from the disk is negligible, and the system evolves much like disk formation simulations initialized with no initial disk (Sadowski et al., 2016; Curd, 2021). The stream dissipates a negligible amount of orbital energy on its way to pericenter, where it goes through a nozzle shock due to vertical compression and self-intersects at roughly the self-intersection radius (See bottom left panel in Figure 3 and bottom right panel in Figure 4). Similar to Curd (2021), the nozzle shock is poorly resolved, so we do not discuss it throughout this work. 
Bound and unbound gas is produced by the self-intersection shock, some of which falls in and makes an accretion disk while the rest flows out and interacts with the jet and outer medium. The material which forms the accretion disk maintains a high eccentricity (See bottom right panel in Figure 5). Despite the low magnetic field strength injected with the stream, the forming disk maintains a strong magnetic field due to the pre-existing field being anchored to smaller radii by inflowing material (See bottom right panel in Figure 3). Similar to the magnetized disk formation simulations in Curd (2021), the magnetic field in material which has gone through the self-intersection shock becomes highly disordered and turbulent. However, as we discuss later, the poloidal magnetic flux inside the self-intersection radius remains trapped and the field in the inner accretion disk remains ordered. The outflowing part is launched with velocity \(\sim 0.1c\) and produces an asymmetrical ram pressure on the jet since it is quasi-spherical. This results in a force in the \(-x\) direction. We describe how this affects the disk and jet evolution in Section 5.4.1. With \(f_{\rho}=0.1\), we observe significant slowing of the stream on its way to pericenter, but it is not completely stopped by the disk (See middle left panel in Figure 3 and bottom left panel in Figure 4). As a consequence, the pericenter radius is shifted significantly outward and the self-intersection has far less kinetic energy available for dissipation. No quasi-spherical outflow is produced as a result. This may be due to the shock weakening due to poorer resolution at larger radii. However, this result is not unreasonable because the energy and velocity of the self-intersection outflow are expected to rapidly drop off with increasing radius, since the stream self-intersects at roughly the orbital velocity. We again find that a highly eccentric accretion disk forms, but we note a slight decrease in eccentricity compared with the \(f_{\rho}=0.01\) model due to the dissipation of orbital energy as the stream interacts with the disk (See bottom left panel in Figure 5). Since there is no self-intersection outflow, the magnetic field in the outer accretion disk is less turbulent. We again find anchoring of poloidal magnetic field to the BH by the inflowing material. With \(f_{\rho}=1\), the ram pressure exerted on the stream by the disk is large enough to halt the stream before it reaches pericenter. Instead, the stream is observed to mix with the accretion disk (See top panel in Figure 3). This can clearly be seen in the velocity field, which closely resembles the initial MAD disk (See top panels in Figure 4). Interestingly, the stream does add eccentricity to the disk as the inflowing material reaches \(e>0.7\). The field structure closely resembles a standard MAD accretion disk (e.g. bottom panel in Figure 2) since the stream has little effect on the disk. The dynamics for a given \(f_{\rho}\) are similar in the \(a_{*}=0\) models. Videos of each simulation can be seen in our YouTube playlist. ### TDE Disks Maintain Magnetic Flux and Jets We show the accretion rate, normalized magnetic flux, and efficiency at the BH horizon in Figure 6. In all models save m09f0.01b7, the accretion rate drops from about 10 to 1 Eddington. This is due to a drop in density around the BH as the disk spreads viscously and mass is consumed by the BH. 
Surprisingly, there is little difference in accretion history as we vary \(f_{\rho}\), except in m09f0.01b7 which shows elevated accretion once the disk tilts, an effect we describe in the next section. In all models, a MAD or semi-MAD state is maintained. Despite the high eccentricity, magnetic field is successfully contained and does not rapidly diffuse from the horizon. This is a genuinely new result and is a bit of a surprise since Curd (2021) found negligible poloidal flux accumulation when the field comes from the stream even with a favorable field configuration. Our results indicate that once poloidal flux reaches the BH, regardless of how it was obtained (i.e. fossil disk or a dynamo effect), the chaotic and eccentric disk can anchor it to the BH. We note that while m09f0.01b7 showed a decrease in normalized magnetic flux, the total magnetic flux given by Equation 19 remains roughly the same. The decrease in normalized magnetic flux is due to additional accretion driven by strong shocks once the tilt sets in. See the discussion in Section 5.4.1. We treat the efficiency as measured at the horizon as a proxy for the outgoing jet power. In all models with \(a_{*}=0.9\) we find \(\eta\approx 100-400\%\) while the magnetic flux remains MAD (\(\phi\gtrsim 50\)). Ultimately, the jet power at larger radii may decrease, especially in cases where the self-intersection outflow is strong and the jet may interact with the disk and outflow. In addition, instabilities in the jet-disk interface can lead to additional dissipation of jet power (Chatterjee et al., 2019). For models with spin \(a_{*}=0\), the efficiency remains much lower at \(\sim 2-6\%\) since there is no jet. ### Magnetic Stresses are Subdominant To quantify the contribution to angular momentum transport from hydrodynamic and magnetic processes, we compute a radius-weighted average of \(\alpha_{\rm eff},\alpha_{\rm Rey}\), and \(\alpha_{\rm Max}\) in the disk (\(\sigma<1\)) from \(r_{H}<r<100r_{g}\) at \(t=t_{\rm end}\) (we have verified that the viscosity behaves similarly across time and that the qualitative properties shown in Figure 7 are not affected by our choice of time to perform the measurement). We employ radius-weighting instead of density-weighting to incorporate the part of the outer disk where shocks are present into the calculation. We show the average viscosity in Figure 7 as a function of \(f_{\rho}\). We find that the effective and Reynolds viscosity both decline as a function of \(f_{\rho}\). Meanwhile, the Maxwell viscosity is similar across all values of \(f_{\rho}\) with \(\alpha_{\rm Max}\lesssim 10^{-3}\). At all values of \(f_{\rho}\), the effective viscosity and the Reynolds viscosity are larger than the Maxwell viscosity. At \(f_{\rho}\lesssim 0.01\), the effective viscosity is more than an order of magnitude larger than the Reynolds viscosity, which suggests shocks dominate transport at this stage of a TDE. We observe that at \(f_{\rho}\gtrsim 0.1\), the effective and Reynolds viscosity are of roughly the same magnitude, which suggests a transition to turbulent transport. Our findings at \(f_{\rho}\lesssim 0.1\) are similar to Sadowski et al. (2016), who found that even after a disk formed, the Maxwell viscosity remained subdominant by at least an order of magnitude. At \(f_{\rho}\gtrsim 1\), the viscosity resembles some of the MAD disks in McKinney et al. (2012), which also showed a larger Reynolds viscosity than Maxwell viscosity in spite of the powerful poloidal magnetic fields. 
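The radius-weighted averaging described above can be written compactly. The following schematic sketch is our own illustration (with assumed array shapes and synthetic data, not the actual KORAL data layout): each radial shell receives equal weight, and cells with \(\sigma\geq 1\) are excluded from the average.

```python
import numpy as np

def radius_weighted_alpha(alpha, sigma, r, r_h, r_max=100.0):
    """
    Radius-weighted average of a viscosity parameter alpha(r, theta, phi)
    over cells with sigma < 1 (the 'disk'), restricted to r_h < r < r_max.
    alpha, sigma : arrays of shape (Nr, Ntheta, Nphi)
    r            : radial grid of shape (Nr,), in units of r_g
    """
    disk = sigma < 1.0                                  # mask out the jet/corona
    shell_avg = np.array([
        alpha[i][disk[i]].mean() if disk[i].any() else np.nan
        for i in range(len(r))
    ])
    in_range = (r > r_h) & (r < r_max) & np.isfinite(shell_avg)
    # equal weight per radial shell (radius-weighted), not per unit mass
    return shell_avg[in_range].mean()

# Example with synthetic data: alpha falling with radius, a conical 'jet' region
Nr, Nth, Nph = 64, 32, 32
r = np.geomspace(1.3, 200.0, Nr)
theta = np.linspace(0.0, np.pi, Nth)
alpha = 0.1 * (r[:, None, None] / 10.0) ** -0.5 * np.ones((Nr, Nth, Nph))
sigma = np.where(np.abs(theta[None, :, None] - np.pi / 2) > 1.2, 5.0, 0.1) * np.ones((Nr, Nth, Nph))
print(f"<alpha> = {radius_weighted_alpha(alpha, sigma, r, r_h=1.35):.3f}")
```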
Figure 4: Here we show the velocity (colors) and velocity field vector (stream lines) for an equatorial slice of each of the \(a_{*}=0.9\) models for \(f_{\rho}=0.01\) (bottom right), \(0.1\) (bottom left), \(1\) (top right). We also show the velocity field for the initial conditions on the top left for comparison. Each panel shows a \(120r_{g}\times 120r_{g}\) region centered around the BH. See Section 5.1 for a description of the figures. Figure 5: Here we show the eccentricity (colors) for an equatorial slice of each of the \(a_{*}=0.9\) models for \(f_{\rho}=0.01\) (bottom right), \(0.1\) (bottom left), \(1\) (top right). We also show the eccentricity for the initial conditions on the top left for comparison. Each figure spans a region similar to Figure 4. See Section 5.1 for a description of the figures. Figure 3: Here we show the gas density (colors, left panels), velocity field (stream lines, left panels), magnetic field strength (colors, right panels), and magnetic field (stream lines, right panels) for an equatorial slice of each of the \(a_{*}=0.9\) models for \(f_{\rho}=0.01\) (bottom row), \(0.1\) (middle row), \(1\) (top row). Each figure spans a region of \(480r_{g}\times 480r_{g}\) centered around the BH. We describe the figure in Section 5.1. ### Disk and Jet Tilt Evolution #### 5.4.1 Low Density Contrast Jetted Model: \(f_{\rho}=0.01\) At the onset of stream injection, since the stream is substantially denser than the pre-existing MAD disk with \(f_{\rho}=0.01\), the stream is largely unperturbed by the disk material on its path towards pericenter. Subsequently, the stream precesses and violently self-intersects at the self-intersection radius. Between \(t=0-0.7\times 10^{4}t_{g}\), the self-intersection outflow begins to tilt the jet and we measure tilt angles for both the top and bottom jet of \(\sim 10^{\circ}-20^{\circ}\). During this initial stage, the disk remains aligned with the BH spin. Between \(t=0.7-1.2\times 10^{4}t_{g}\), the disk tilt begins to increase until it roughly equals the tilt angle of the top and bottom jets. During this stage, the precession angle oscillates wildly, in part due to the initial tilt angle of zero. For \(t>1.2\times 10^{4}t_{g}\), the tilt of the top jet and disk continue to grow until \(\mathcal{T}_{\rm jet,top}\sim 30^{\circ}\) and \(\mathcal{T}_{\rm disk}\sim 23^{\circ}\). In a typical tilted MAD disk system, the jet acts to align the inner accretion disk with the BH spin. However, once the disk tilt begins to grow in m09f0.01b7, it is unable to realign with the BH spin due to already tilted disk material adding angular momentum at the self-intersection radius. This sets up a tilted system which remains stable for at least the duration of the simulation. Interestingly, the jet precession angle does not show strong variability after the disk tilts. Figure 8: Volume renderings of a \(200r_{g}\times 200r_{g}\) region of model m09f0.01b7 showing the stream/disk (red), outer disk/outflow (blue), and jet (yellow) viewed edge on (top panel) and viewed down the jet axis (bottom panel). We show times \(t=0t_{g}\) (left), \(t=10,000t_{g}\) (middle), \(t=35,000t_{g}\) (right). The outflow pushes on the jet laterally and begins to tilt the jet. This ultimately leads to a tilted disk and jet in the final snapshot. 
Instead, the top and bottom jets show nearly constant precession angles that are roughly \(180^{\circ}\) out of phase at \(t>2.3\times 10^{4}t_{g}\). Volume renderings of the evolution are shown in Figure 8. Equatorial and poloidal slices as well as the full time evolution of the tilt and precession angles are shown in Figure 9. Figure 6: We show the mass accretion rate (top row), normalized magnetic flux at the BH horizon (middle row), and efficiency (bottom row) for each of the \(a_{\star}=0\) (left column) and \(a_{\star}=0.9\) (right column) models. Each model shows an initial decrease in the mass accretion rate as the injected stream interacts with the disk. As we discuss in Section 5.2, this is due to the density in the disk decreasing due to viscous spreading and mass accretion. In each model we find \(\phi>20\), which confirms that TDE disks can maintain a strong poloidal field. For the models where no tilt instability sets in, a MAD flux of \(\phi>50\) is maintained and a powerful jet with \(\eta\approx 100-400\%\) is launched when \(a_{\star}=0.9\). As expected, no jet is launched when \(a_{\star}=0\) and we find similar \(\eta\) for both of the \(a_{\star}=0\) models. Figure 7: Here we show the radius-weighted viscosity as computed in Section 5.3 as a function of \(f_{\rho}\). We indicate \(a_{\star}=0\) models as squares and \(a_{\star}=0.9\) models as circles. #### 5.4.2 Medium Density Contrast Jetted Model: \(f_{\rho}=0.1\) Since this model has an intermediate density contrast, the stream is still able to flow towards the BH. However, it is significantly perturbed and the pericenter radius is shifted slightly outward, which also increases the self-intersection radius. This leads to a substantially weakened self-intersection and self-intersection outflow. As a result, the jet is only slightly perturbed by the outflow and we find that the jet remains stable with \(\mathcal{T}\lesssim 10^{\circ}\) and the disk remains aligned with the BH spin throughout the entire evolution. The precession angle is not meaningful here due to the near perfect alignment. See Figure 10 for visualizations and the time evolution. #### 5.4.3 High Density Contrast Jetted Model: \(f_{\rho}=1\) In this model, the density contrast is large enough that the stream experiences extreme ram pressure from the accretion disk and is halted at \(r\sim 50-100r_{g}\). The stream material never reaches pericenter and instead mixes in with the pre-existing disk. Consequently, the system resembles a standard MAD ADAF and neither the jet nor the disk shows large changes in tilt. Again, the precession angle is not meaningful here due to the near perfect alignment. This evolution is depicted in Figure 11. #### 5.4.4 Restarts of m09f0.01b7 with Higher Density Contrast For model m09f1b7B, we perform a restart of m09f0.01b7 at \(t=2\times 10^{4}t_{g}\) with \(f_{\rho}\) instantaneously increased from 0.01 to 1. Figure 10: _Top row:_ Here we show the same quantities as the top three rows in Figure 9, but for model m09f0.1b7A. As we discuss in Section 5.4.2, the stream loses orbital energy on its path to pericenter and the self-intersection outflow is significantly weakened, which leads to a weaker perturbation on the jet. We note that the jet profile is less smooth than in the initial state (top panel in Figure 9) due to asymmetry in the disk structure induced by the interaction with the stream. _Bottom two rows:_ The weak perturbation on the jet leads to a non-zero tilt measurement. 
However, both the disk and jet maintain low tilts with \(\mathcal{T}<10^{\circ}\), which confirms that strong self-intersection is needed to induce strong interaction between the jet and disk. The top and bottom jet maintain precession angles which are roughly in-phase and oscillate over time, which is typical of spin-aligned MAD disks. Figure 9: _Top two rows:_ Gas density (colors), velocity (streamlines), and jet boundary (\(\sigma=1\), pink line) for m09f0.01b7. We show an equatorial slice (left) and vertical slice (right) spanning a region of \(120r_{g}\times 120r_{g}\) centered on the BH. Snapshots are shown during the initial self-intersection (\(t=10^{4}t_{g}\), first row), and at the end of the simulation after the tilt has set in (second row). _Bottom two rows:_ We show the tilt and precession angle for the disk and top/bottom jet over the evolution of the simulation. As the stream flows in, a quasi-spherical outflow begins to push on the jet and we see the jet tilt increase initially. At around \(t=0.6\times 10^{4}t_{g}\), the jet begins to perturb the disk and we observe a steady increase in the disk tilt until it roughly aligns with the jet, after which the tilt in both the disk and jet increases until they settle around a rough equilibrium state at \(t=2.5\times 10^{4}t_{g}\). Once the disk tilts, a feedback cycle begins due to self-intersection and magneto-spin alignment cannot realign the inner disk. The precession angle prior to the tilt setting in is not a meaningful quantity since the system is initially aligned with the BH spin. Once the system tilts, the disk and top jet share the same precession angle and we do not observe much variability in the precession. The bottom jet points in the opposite direction and is roughly \(180^{\circ}\) out of phase with the top jet. The self-intersection is rapidly halted due to the increased density contrast and the jet subsequently realigns with the BH spin. The tilt of the disk remains slightly elevated above the tilt of the jet. This is due to the density weighting applied in Equation 30, which gives larger weight to higher density remnants of the tilted gas that are still in the simulation domain. However, as can be seen in Figure 12, the inner disk is able to realign with the BH spin by the end of the simulation. We expect that in a physical scenario the system will have time to adjust and the disk tilt should completely realign with the BH spin, similar to the jet. For model m09f0.1b7B, we also perform a restart of model m09f0.01b7 at \(t=2\times 10^{4}t_{g}\), but with \(f_{\rho}\) instantaneously increased from 0.01 to 0.1. Similar to m09f0.1b7A, the stream is only perturbed from its orbit and the self-intersection still persists, but is weakened as a result. With weaker ram pressure acting on the jet, the jet and disk begin to realign with the BH spin. However, this process is much slower than in model m09f1b7B, and we find that the disk and jet tilt are highly variable before finally decreasing and settling at \(\mathcal{T}\sim 10^{\circ}\) by the end of the simulation (see Figure 13). The total run time of the simulation (see Table 1) corresponds to only roughly three days for a \(10^{6}M_{\odot}\) BH, which suggests, assuming rapid transitions in the density contrast, that the tilt can evolve rapidly enough to explain features such as jet shut-off, as we discuss later in this work. Figure 11: The same as Figure 10, but for model m09f1b7A. 
Since the stream is halted by the pre-existing disk, no self-intersection outflow occurs. Subsequently, the jet and disk are approximately aligned with the BH spin throughout the entire simulation. Interestingly, the turbulence added to the system during the interaction with the stream appears to perturb the jet boundary compared to the initial state. Note the precession angle of the disk is not a useful quantity since the disk is aligned with the BH. Figure 12: The same as Figure 10, but for model m09f1b7B, which is a restart of m09f0.01b7 at \(t=2\times 10^{4}t_{g}\). \(f_{\rho}\) is instantaneously increased from 0.01 to 1 at the start of the simulation. Because the stream is halted by the disk due to the change in density contrast, the self-intersection ceases shortly after we start the simulation. Without the added perturbation from a self-intersection outflow, the jet realigns with the z-axis and magneto-spin alignment rapidly realigns the disk with the BH spin. Interestingly, the top and bottom jet remain approximately \(180^{\circ}\) out of phase even after self-intersection ceases. Figure 13: The same as Figure 10, but for model m09f0.1b7B, which is a restart of m09f0.01b7 at \(t=2\times 10^{4}t_{g}\). \(f_{\rho}\) is instantaneously increased from 0.01 to 0.1 at the start of the simulation. Since the change in density contrast is milder than in m09f1b7B, the stream manages to penetrate the disk, but loses a substantial amount of orbital energy similar to model m09f0.1b7A. As a result, the self-intersection outflow persists, but is much weaker. The tilt of the jet and disk slowly decreases over the course of the simulation until it reaches a rough equilibrium of about \(10^{\circ}\). Although magneto-spin alignment is able to realign much of the inner system, filaments of tilted material linger in the disk which may contribute to the residual tilt in the system as well as the wild precession observed in the jet at late times. #### 5.4.5 Non-Jetted Models For the low density contrast model (m00f0.003b4), the initial evolution of the stream is similar to that of model m09f0.01b7. The self-intersection and self-intersection outflow result in a ram pressure which tilts the jet region. However, as there is no true jet since \(a_{*}=0\), the jet region that we measure may be thought of as a corona. As shown in Figure 14, the corona becomes substantially tilted with \(\mathcal{T}\sim 20^{\circ}-40^{\circ}\). The disk remains perfectly aligned with the BH spin throughout the entire evolution. This demonstrates that a powerful jet is responsible for the tilt instability that we observe in m09f0.01b7. For the higher density contrast model (m00f0.3b4), the stream is perturbed due to its interaction with the pre-existing disk, similar to m09f1b7A. However, we find that the disk tilt increases slightly over the course of the simulation (\(\mathcal{T}\lesssim 10^{\circ}\), see Figure 15). The corona attains a tilt of \(\mathcal{T}\sim 20^{\circ}\). This is due to asymmetry introduced to the system as the stream interacts with the disk. Since the magnetic field is strong and stream material cannot steadily feed aligned material to the inner disk, the tilted corona is capable of tilting the disk. Figure 16: Here we show snapshots of violent self-intersection events in model m09f0.01b7 (first and second row). Colors indicate gas density and stream lines indicate gas velocity. We also show the mass accretion rate (third row), magnetic flux threading the BH (fourth row), and jet efficiency (fifth row). 
Vertical gray lines correspond to the same times as the snapshots shown in the first and second rows, \(2,800\), \(4,300\), \(5,700\), \(7,400\), \(9,200\), and \(13,500\,t_{g}\), respectively. The violent self-intersections are accompanied by a drop in magnetic flux and jet power. We also note a small increase in mass accretion rate, which is less dramatic than the change in magnetic flux and jet efficiency. Figure 14: The same as Figure 10, but for model m00f0.003b4. Note we show the initial state in the top row and the final state of the simulation in the bottom row. Since there is no jet, the corona is observed to tilt by \(\mathcal{T}>20^{\circ}\) due to the self-intersection outflow. However, the disk remains approximately aligned with the BH spin. This confirms that a jet is necessary to induce a tilt instability in MAD TDE disks. Figure 15: The same as Figure 10, but for model m00f0.3b4. Due to the higher density contrast, the stream loses orbital energy on its way to pericenter, and the self-intersection outflow is negligible. Surprisingly, we measure a nonzero tilt for the corona and disk. We believe this is due to asymmetry introduced to the system by the stream in the absence of magneto-spin alignment and a strong jet. Unlike model m09f1b7A, magneto-spin alignment does not counteract any induced tilt. Tilt induction in a MAD around a non-spinning BH was also demonstrated by Ressler et al. (2020) in the context of a stellar wind fed model of Sagittarius A\({}^{*}\), suggesting tilt induction may be common in MAD disks around non-spinning BHs that are fueled by asymmetrical inflows. ### Violent Self-Intersections and Variability For the first \(15,000\,t_{g}\) of model m09f0.01b7, we identify six complete stream disruptions at times \((2,800,\,4,300,\,5,700,\,7,400,\,9,200,\,13,500)t_{g}\) as shown in Figure 16. These correspond to a temporal separation of \((1500,\,1400,\,1700,\,1800,\,3300)t_{g}\). Assuming a Keplerian orbit, this corresponds to an orbital radius of \((38,\,37,\,42,\,43,\,65)r_{g}\). These are similar to the self-intersection radius of \(\sim 50r_{g}\), which is to be expected in the case of a feedback loop caused by angular momentum transfer during self-intersection (Sadowski et al., 2016; Curd, 2021). Here we find that not only does the mass accretion rate vary during these events, but the magnetic flux threading the BH drops from \(\phi_{\rm BH}\sim 60\) to \(\sim 40\). Since the disk is MAD and the BH is rapidly rotating, this will inevitably lead to flaring behaviour. Indeed, we see the total efficiency drop from \(\sim 100\%\) at the peaks to \(10-50\%\) at the minima. We discuss how our model can be applied to the variability in jetted TDEs like _Swift_ J1644+57 in Section 6. ### Jet Collimation We measure the mean jet width at \(r=10r_{g}\) (\(\mathcal{W}_{10}\)) and \(r=100r_{g}\) (\(\mathcal{W}_{100}\)) as a function of time, following Equation 42, in Figure 17. The jet width shows oscillations as a function of time due to the highly variable magnetic flux. This is typical of a MAD disk, but here we are focused on the average behavior of the jet. For model m09f0.01b7, the self-intersection outflow causes substantial collimation. The velocity stream lines in the right middle panel of Figure 9 show high density material sometimes flowing completely in the \(-x\) direction, which will provide substantial ram pressure on the jet. 
We see a decrease of roughly \(10r_{g}\) in the jet width measured at \(100r_{g}\) compared to the initial jet. For models m09f1b7A and m09f0.1b7A, the jet width at \(r=10r_{g}\) is similar to that of the initial jet prior to injection due to the weakening of the self-intersection outflow. However, we do observe slightly more collimation at \(r=100r_{g}\) compared to the initial jet, perhaps due to changes in the outflow properties when the stream interacts with the disk. For instance, the velocity stream lines in Figure 11 and Figure 10 show flows towards the jet axis, which are not present in the initial jet (see top panel of Figure 9). This may lead to more collimation in TDE jets as they propagate outwards compared to a standard MAD; however, we limit ourselves to measuring the jet profile for \(r\leq 100r_{g}\) due to poor angular resolution of the jet at larger radii. For model m09f1b7B, once the self-intersection ceases due to the increased \(f_{\rho}\), the jet width returns to near the initial value within \(\sim 5000t_{g}\). However, model m09f0.1b7B shows a much narrower jet when compared with model m09f0.1b7A. This is not due to the self-intersection outflow, but to the magnetic flux dropping off towards the end of the simulation. We also time average the jet width from \(t_{\rm start}+5000t_{g}\) to \(t_{\rm end}\) (see bottom panel in Figure 17). We find similar jet profiles for all models with weak self-intersection outflows (m09f1b7A, m09f0.1b7A, m09f1b7B). Model m09f0.1b7B is similar to model m09f0.01b7, but again this is due to a decrease in magnetic flux and not a result of the self-intersection outflow. We compare our results with the jet profile for the \(a_{*}=0.9\) model from Narayan et al. (2022). We find that our initial conditions result in a slightly narrower jet, but the profile appears to be quite similar for the models with weak self-intersection. Figure 17: We show the mean jet width at \(r=10r_{g}\) (\(\mathcal{W}_{10}\), top panel) and \(r=100r_{g}\) (\(\mathcal{W}_{100}\), middle panel) as a function of time for each model. In the bottom panel we show the mean jet width as a function of \(z\), time averaged over \(t_{\rm start}+5000t_{g}\) to \(t_{\rm end}\). We also show the jet profile for the \(a_{*}=0.9\) model from Narayan et al. (2022) (dashed black line). We describe the figures in Section 5.6. ### Gas Temperature We estimate the gas temperature in the disk by accounting for radiation under the assumption that the disk is optically thick. We split the temperature into gas and radiation by solving \[p_{g}=\frac{\rho kT}{\overline{m}}+\frac{1}{3}aT^{4}, \tag{43}\] where \(\overline{m}\) is the mass per particle and \(T\) is the temperature. The gas temperature in the \(\sigma>1\) region is uncertain due to both numerical floors and the use of entropy inversion when energy conserving inversion fails in highly magnetized zones in GRMHD. As a result, we mask the gas temperature in the jet/corona, but we generally expect it to be substantially hotter than the disk (Curd & Narayan, 2019). We show the gas temperature for each model at \(t=t_{\rm end}\) in Figure 18. In the accretion disk, since the gas and radiation pressure are split evenly, the gas temperature of the accretion disk reaches \(T\sim 10^{5-6}\) K, which approximately agrees with Curd & Narayan (2019). Nozzle and self-intersection shocks also contribute to heating the gas and drive the temperature up to \(\sim 10^{6}\) K at radii up to \(50-100r_{g}\). 
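The gas/radiation split of Equation 43 amounts to solving a quartic in \(T\). Below is a minimal sketch of that inversion (our own illustration; the values of \(\rho\) and total pressure are representative choices, not taken from the simulations, and we assume \(\overline{m}=\mu m_{p}\) with \(\mu\approx 0.6\)).

```python
import numpy as np
from scipy.optimize import brentq

# Physical constants (cgs)
k_B = 1.381e-16        # Boltzmann constant
a_rad = 7.566e-15      # radiation constant
m_p = 1.673e-24        # proton mass
mu = 0.6               # mean molecular weight (assumed, roughly an ionized solar mix)

def temperature(rho, p_tot):
    """Solve p_tot = rho k T / (mu m_p) + a T^4 / 3 for T (Equation 43)."""
    f = lambda T: rho * k_B * T / (mu * m_p) + a_rad * T**4 / 3.0 - p_tot
    return brentq(f, 1e2, 1e10)

# Representative disk values (illustrative only)
rho, p_tot = 1e-7, 5e6           # g/cm^3, erg/cm^3
T = temperature(rho, p_tot)
p_gas = rho * k_B * T / (mu * m_p)
p_rad = a_rad * T**4 / 3.0
print(f"T ~ {T:.2e} K, gas/radiation pressure ratio ~ {p_gas / p_rad:.2f}")
```

For these illustrative values the solver returns \(T\sim 2\times 10^{5}\) K with comparable gas and radiation pressure, in line with the disk temperatures quoted above.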
Figure 18: Here we show the gas temperature (colors) and \(\sigma=1\) boundary (pink line) for each model at the final snapshot. We mask the gas temperature in regions where \(\sigma>1\) for numerical reasons. In models with a prominent jet, the gas temperature may exceed \(10^{6}\) K where \(\sigma>1\), which is in the range for X-ray photon production (Curd & Narayan, 2019). Since the jet is able to prevent polar inflows, the poles will remain optically thin even at the peak of the fallback rate, allowing jet emission to emerge. Comptonization within this region is expected to produce a hard spectrum which shines even in the \(\gamma\)-ray band. The non-jetted models on the other hand may have their X-ray emission largely absorbed if the photosphere is roughly spherical early on. Only after the funnel forms (or the photosphere recedes) can X-rays emerge. ## 6 Discussion ### Variability Driven by Violent Self-Intersection _Swift_ J1644+57 showed variability on a range of timescales, with both short-period QPOs at \(\sim 200\) s (Reis et al., 2012) and long-period dips in the light curve on timescales of \(\sim 10^{6}\) s (Saxton et al., 2012). The short-period QPOs are thought to originate from short term variability on the horizon scale due to orbits or resonances in the inner accretion disk. The long-period variability has been suggested to arise from wobbling of the jet (Tchekhovskoy et al., 2014), periodic violent stream self-intersection (Andalman et al., 2022), or magnetic flux eruption events (Curd & Narayan, 2023). Previous global simulations of forming TDE disks have identified complete disruptions of the incoming stream in cases where \(\beta=3-7\) (Curd, 2021; Andalman et al., 2022). The disruptions are temporally spaced by roughly the orbital period at the self-intersection radius. The fact that such a periodic dynamical effect took place was viewed as an attractive explanation for the variability in J1644+57. However, with no magnetic fields or radiative transfer calculations available, Andalman et al. (2022) hypothesized that this interaction could drive flaring through changes in the mass accretion rate at the horizon. As shown in Section 5.5, we directly relate the complete disruptions during self-intersection with jet variability. Since the total efficiency in Figure 16 correlates directly to jet power in MAD accretion disks, this can account for the large drops in the flux seen in J1644+57, which had minima as low as \(\lesssim 50\%\) of the maximum. This is solid confirmation of the idea proposed by Andalman et al. (2022); however, we suggest that flaring is not due to changes in the mass accretion rate directly. Rather, it is due to the fact that the stream acts to keep magnetic flux anchored to the BH. The magnetic flux threading the BH is at the saturation value prior to the stream disrupting itself during self-intersection. When the feeding from incoming stream material is temporarily halted, magnetic flux eruptions shed flux until \(\phi_{\rm BH}\) settles to a lower value. The disk injection simulations presented in Curd & Narayan (2023) found that after flux eruption events the magnetic flux took roughly the orbital period at the injection radius to recover. This is dynamically similar to the effects seen in this work; however, here the period is directly related to the self-intersection radius rather than the gas injection radius. 
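As a quick consistency check (our own sketch, not part of the paper's analysis), the Keplerian relation between a variability period and the corresponding orbital radius in gravitational units reproduces the radii quoted in Section 5.5 for the measured disruption intervals, and shows how a \(\sim 10^{6}\) s period maps onto a much larger radius for a \(10^{6}M_{\odot}\) BH.

```python
import numpy as np

def kepler_radius(period_tg):
    """Orbital radius (in r_g) for a Keplerian period given in units of t_g,
    using Omega = r^(-3/2) in G = c = M_BH = 1 units."""
    return (period_tg / (2.0 * np.pi)) ** (2.0 / 3.0)

# Temporal separations between stream disruptions in m09f0.01b7 (Section 5.5)
separations_tg = [1500, 1400, 1700, 1800, 3300]
radii = [kepler_radius(p) for p in separations_tg]
print("orbital radii [r_g]:", np.round(radii, 0))   # ~ (38, 37, 42, 43, 65)

# For a 10^6 Msun BH, t_g ~ 4.93 s, so a ~10^6 s period maps to ~10^3 r_g
t_g_sec = 4.93
print(f"10^6 s period -> r ~ {kepler_radius(1e6 / t_g_sec):.0f} r_g")
```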
Given the relationship between the variability period and the self-intersection radius, this suggests that X-ray variability can be related to the orbital properties of the disrupted star in a jetted TDE. For instance, assuming \(M_{\rm BH}=10^{6}M_{\odot}\) for J1644+57, the roughly \(10^{6}\) second variability corresponds to a self-intersection radius on the order of \(10^{3}r_{g}\). For an \(a_{*}=0\) BH, this corresponds to a \(\beta\sim 1.5\) TDE. The steady increase in the variability period may be due to an increase in the self-intersection radius as the disk builds up over time, as illustrated by Ryu et al. (2023). We will explore the properties of magnetized TDE disks and magnetic flux saturation in more detail in a future report. ### Could Disk Collapse Still Be Dynamically Important in a MAD? Simulations of thin MADs argue against the idea that TDEs will rapidly shed magnetic flux and come to resemble a standard thin disk as the accretion rate drops below Eddington. The powerful fields in a MAD provide support against runaway thermal collapse. However, this may only apply to the inner disk. Here we treat the thermal instability of the entire disk and examine how changes in \(f_{\rho}\) may lead to tilt evolution of the disk/jet without the disk becoming non-MAD. Since the mass fallback rate in TDEs evolves from super- to sub-Eddington, it is thought that the mass accretion rate in the disk will evolve similarly assuming \(\dot{M}\sim\dot{M}_{\rm fb}\). We can apply standard accretion theory to predict the geometry of the accretion disk over time. In an accretion disk, angular momentum transport is driven by viscosity, which drives accretion onto the BH but also heats the disk (see Abramowicz et al., 1988; Abramowicz & Fragile, 2013 for an introductory discussion). In order for the disk to remain stable, it cools through advection and radiation. In super-Eddington disks, the disk is optically thick and cooling is dominated by advective cooling, \(Q^{-}_{\rm adv}\). Dynamically, this means radiation within the disk is advected with the inflow and eventually crosses the BH horizon. In thin disks, energy generated by viscous heating is radiated locally and cooling is dominated by radiation, \(Q^{-}_{\rm rad}\). Since radiation pressure dominates in super-Eddington systems, the accretion disk puffs up to a large scale-height \(h_{d}\equiv H_{d}/R\gtrsim 0.3\) when radiation cannot escape directly, or \(Q^{-}_{\rm adv}\gg Q^{-}_{\rm rad}\). If the system is in a steady state, meaning that the mass accretion rate is constant with radius, the relative importance of advective and radiative cooling varies with radius. We assert that a steady state is a reasonable assumption even in the chaotic environment of a TDE since \(\dot{M}\) was found to be roughly constant with radius in Curd (2021). Following Sadowski (2011), we can write the ratio between advective and radiative cooling \[\frac{Q^{-}_{\rm adv}}{Q^{-}_{\rm rad}}\approx\dot{M}\frac{\kappa_{\rm es}h_{d}}{\pi Rc}, \tag{44}\] where \(h_{d}\) is the disk scale-height and \(\kappa_{\rm es}\) is the electron scattering opacity. From this expression, it is clear that as the accretion rate declines, radiative cooling becomes more significant until, below a critical accretion rate (which is around Eddington), it becomes dominant. 
Assuming \(\dot{M}\) and \(h_{d}\) are constant and setting \(\kappa_{\rm es}=0.2(1+X)\,{\rm cm^{2}\,g^{-1}}\), where \(X=X_{\odot}=0.7381\) is the solar hydrogen mass fraction, we can approximate the transition radius where the advective and radiative cooling terms balance (or \(Q^{-}_{\rm adv}=Q^{-}_{\rm rad}\)) \[R_{\rm tr}=\dot{M}\frac{\kappa_{\rm es}h_{d}}{\pi c}. \tag{45}\] From the above expression, we can conclude that (i) \(R_{\rm tr}\) scales linearly with mass accretion rate and thus shrinks over time in a TDE, and (ii) we expect the system to become thermally unstable at \(r>R_{\rm tr}\). Assuming the disk is heated purely by viscosity, collapse occurs on the thermal timescale \(t_{\rm th}\sim(\alpha\Omega)^{-1}\), where \(\Omega\) is the angular velocity and \(\alpha\) is the unitless viscosity parameter. We note that, for simplicity, we have ignored other sources of heating, such as dissipative heating due to shocks, in our calculation of \(R_{\rm tr}\). If heating generated by shocks is not radiated locally, regions of the disk which have become thermally unstable by the condition \(Q_{\rm adv}^{-}<Q_{\rm rad}^{-}\) may remain stable and geometrically thick. The first shock we consider is the self-intersection shock, which can release a large amount of energy, especially for relativistic TDEs. We account for heating by the self-intersection shock by first approximating the self-intersection radius. We adopt a similar method to Dai et al. (2015) to quantify apsidal precession. For material making its first pericenter passage, the precession angle may be approximated by \[\Delta\phi=\frac{6\pi}{a(1-e^{2})}. \tag{46}\] Here \(e\) is the eccentricity of the incoming stream and \(a\) is the semi-major axis. Note that we have expressed \(\Delta\phi\) using gravitational units so the semi-major axis \(a\) is given in gravitational radii. Treating the orbits of the incoming stream that has yet to pass through pericenter and the already precessed stream as ellipses, the self-intersection between the incoming material and material that has precessed occurs at the radius \[R_{\rm SI}=\frac{(1+e)R_{t}}{\beta(1-e\cos(\Delta\phi/2))}. \tag{47}\] The self-intersection shock releases energy at a rate of roughly \[L_{\rm SI}(t)\approx\frac{1}{2}\dot{M}_{\rm fb}(t)v_{\rm SI}^{2}, \tag{48}\] where the velocity at which the streams collide, \(v_{\rm SI}\), is on the order of the free-fall velocity. As the velocity of the stream elements is greater at smaller radii, the rate of dissipation will also be greater for closer orbits. We note that our definition of \(R_{\rm SI}\) assumes \(a_{\rm BH}=0\); however, \(a_{\rm BH}>0\) BHs can cause smaller \(R_{\rm SI}\) at lower \(\beta\) for retrograde TDEs due to frame dragging effects. Shocks are present in the disk throughout the evolution and are also sites of dissipative heating (Shiokawa et al., 2015; Sadowski et al., 2016; Liptai et al., 2019; Ryu et al., 2023). The \(\beta=1\) model in Liptai et al. (2019) showed that dissipation from shocks exceeds Eddington at up to ten times \(t_{\rm fb}\). Ryu et al. (2023) estimate the total mechanical energy output and find that it exceeds Eddington even after \(2t_{\rm fb}\), though they do not isolate the energy from shocks. Since the spiral shocks are spread over the majority of the disk, energy generated from the shocks is not expected to be localized. Energy released from shocks may delay thermal collapse assuming it is instantaneously spread evenly in the disk. 
If the disk radiates at \(L_{\rm Edd}\), elements of the outer disk which are already thermally unstable by the condition \(Q_{\rm adv}^{-}\leq Q_{\rm rad}^{-}\) cannot collapse until the dissipation rate from shocks is less than Eddington. We define the time when the dissipation rate from shocks falls below \(L_{\rm Edd}\) as \(t_{\rm Edd}\). To illustrate how thermal collapse occurs, it is instructive to compute the time at which the disk component at \(R_{\rm tr}\) will collapse, \[t_{\rm collapse}=\begin{cases}t+t_{\rm th},&t\geq t_{\rm Edd}\\ t_{\rm Edd},&t<t_{\rm Edd},\end{cases} \tag{49}\] where \(t\) is the time since the initial disruption. We show examples of \(t_{\rm collapse}\) versus \(R_{\rm tr}\) in Figure 19 for an \(m_{6}=1\), \(m_{*}=1\), and \(\beta=1\) TDE. We assume a Keplerian profile in both models (since the disk is circularized) and \(\alpha=0.1\) such that \(t_{\rm th}\propto R^{3/2}\). In a standard accretion disk, the outer disk will collapse first and \(R_{\rm tr}\) slowly decreases over several hundred days. As a result, the ram pressure acting on the stream will also slowly increase, since the bulk of the disk will still be geometrically thick until \(R_{\rm tr}\sim r_{H}\). Thus, a model where the transition radius depends only on mass accretion cannot explain rapid state transitions since \(R_{\rm tr}\propto\dot{M}\propto t^{-5/3}\). For the collapsing disk model, we assume \(t_{\rm Edd}=515\) days to be similar to _Swift_ J1644+57. Since the energy injected into the disk is assumed to exceed the radiated energy of the system until \(t>t_{\rm Edd}\), the outer disk will remain geometrically thick until it collapses (the vertical part of the curve in Figure 19) on the thermal time scale, which is much smaller than \(t\). Once \(t>t_{\rm Edd}\), the inner disk follows the standard accretion curve. Here we have ignored the possibility of magnetic pressure support for simplicity. For an assumed state transition at several hundred days, only the delayed collapse model will have an instantaneous change in \(f_{\rho}\) over most of the disk. This will lead to the desired dynamical consequences on the jet and disk. That is, the density contrast of the collapsed region of the disk will rapidly increase by more than an order of magnitude at \(t_{\rm collapse}\), since the disk density in Equation 9 for \(r>R_{\rm tr}\) will be multiplied by a factor of \(h_{d}^{-1}\). This will lead to the self-intersection outflow rapidly ceasing as in our simulations, in which case the disk and jet will rapidly realign with the BH spin due to the disk being MAD. We also note that the possibility of radial contraction of the disk (or a decrease in \(R_{d}\)) as in Metzger (2022) would only enhance the rise in \(f_{\rho}\).

Figure 19.— In the top panel, we illustrate a delayed disk collapse model where the accretion disk remains geometrically thick all the way to the transition radius (\(R_{\rm tr}\)) until \(t\geq t_{\rm Edd}\). In the bottom panel, we show the collapse time (\(t_{\rm collapse}\)) as a function of radius in the disk for an \(m_{6}=1\), \(m_{*}=1\), and \(\beta=1\) TDE with (dashed line) and without (solid line) delayed thermal collapse. In our delayed collapse model, larger radii are prevented from cooling early on and a large portion of the disk has the same collapse time (vertical portion of dashed line).
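A minimal numerical sketch of the collapse-time argument in Equation (49) is shown below; the viscosity parameter, Keplerian thermal time, and \(t_{\rm Edd}=515\) days follow the assumptions quoted in the text, while the normalization anchoring \(R_{\rm tr}(t)\propto t^{-5/3}\) is an arbitrary placeholder chosen only for illustration.

```python
# Sketch of Eq. (49): collapse time of the annulus at R_tr, with and without the
# delayed-collapse assumption (t_Edd = 515 days). The R_tr(t) anchoring is a placeholder.
import numpy as np

G, c, M_sun = 6.674e-8, 2.998e10, 1.989e33
day = 86400.0
M_bh = 1e6 * M_sun
r_g = G * M_bh / c ** 2
alpha = 0.1

def t_thermal(R):
    """Thermal time ~ (alpha * Omega)^-1 for a Keplerian annulus at radius R (cm)."""
    omega = np.sqrt(G * M_bh / R ** 3)
    return 1.0 / (alpha * omega)

def R_tr_of_t(t, R0=2e3 * r_g, t0=60.0 * day):
    """Assumed transition radius, R_tr ~ Mdot ~ t^(-5/3), anchored at (t0, R0)."""
    return R0 * (t / t0) ** (-5.0 / 3.0)

def t_collapse(t, t_edd=515.0 * day):
    """Eq. (49): collapse is frozen until shock heating drops below Eddington."""
    return t + t_thermal(R_tr_of_t(t)) if t >= t_edd else t_edd

for t in (100.0 * day, 515.0 * day, 800.0 * day):
    print("t = %4.0f d -> collapse at %6.1f d" % (t / day, t_collapse(t) / day))
```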
As such, we expect that the effects of disk collapse will play a role dynamically; however, our analysis favors relativistic TDEs with \(\beta>1\) (or retrograde TDEs) if self-intersection is the assumed method of delaying disk collapse.

### Tilt Evolution in Jetted TDEs

Our simulations illustrate that even an aligned TDE can undergo strong tilt excitation when a jet is present. The fact that the tilt decreases when the density contrast increases, which is due to the self-intersection shock and outflow being weakened, suggests that X-ray shut-off in TDE jets may be possible even without the disk exiting the MAD state. We produce a toy model of a relativistic jet where the tilt and flux depend on \(f_{\rho}\). The tilt is assumed to be \[\mathcal{T}_{\rm jet}(f_{\rho})=\mathcal{T}_{\rm jet,0}\begin{cases}1,&f_{ \rho}<f_{\rho,\rm min}\\ \left(1-\frac{f_{\rho}-f_{\rho,\rm min}}{f_{\rho,\rm max}-f_{\rho,\rm min}} \right),&f_{\rho,\rm min}\leq f_{\rho}\leq f_{\rho,\rm max}\\ 0,&f_{\rho}>f_{\rho,\rm max}.\end{cases} \tag{50}\] Here we have assumed that the jet angle is constant when the stream is dense enough for the self-intersection shock to occur. It then linearly decreases from \(\mathcal{T}_{\rm jet,0}\) to \(0\) as \(f_{\rho}\) increases from \(f_{\rho,\rm min}\) to \(f_{\rho,\rm max}\). Here \(f_{\rho,\rm min}\) is the critical density contrast where self-intersection is weak enough for the jet to begin to realign and \(f_{\rho,\rm max}\) is where the jet is completely unperturbed. The X-ray variability in _Swift_ J1644+57 indicates that X-rays originate from near the BH, so we use a simple top-hat jet model and incorporate beaming effects to predict the time evolution of the flux. We adopt a model similar to Beniamini et al. (2023), where the off-axis jet flux is related to the on-axis jet flux through the beaming correction factor \[a=\frac{1-\beta_{\rm jet}}{1-\beta_{\rm jet}\cos(\mathcal{T}_{\rm obs}- \mathcal{T}_{\rm jet})}, \tag{51}\] where \(\beta_{\rm jet}\) is the jet velocity and \(\mathcal{T}_{\rm obs}\) is the angle of the observer relative to the \(z\)-axis. The flux is approximated as \[F(\mathcal{T}_{\rm jet})=F_{\rm on,jet}(t)\begin{cases}1,&\Delta\theta<\theta_{ \rm jet}\\ 0.5a^{2},&\theta_{\rm jet}<\Delta\theta<2\theta_{\rm jet}\\ 0.5a^{3},&\Delta\theta>2\theta_{\rm jet}.\end{cases} \tag{52}\] Here \(\Delta\theta\equiv\mathcal{T}_{\rm obs}-\mathcal{T}_{\rm jet}\) and \(\theta_{\rm jet}=\gamma_{\rm jet}^{-1}\) is the angle that the jet is beamed into. The factor of \(0.5\) is a geometrical correction. We assume that the jet flux is directly correlated with the mass accretion rate, which we take to be a fraction of the fallback rate, \(F_{\rm on,jet}(t)\propto\dot{M}_{\rm fb}(t)\). We divide the flux by \(F_{\rm on,jet}(t=0)=F_{\rm peak}\) for simplicity. We apply our toy model to a range of TDEs in Figure 20. For the disk/stream interaction, we set \(f_{\rho,\rm min}=0.01\) and \(f_{\rho,\rm max}=0.1\). For the jet, we assume \(\gamma_{\rm jet}=10\) and \(\mathcal{T}_{\rm jet,0}=20^{\circ}\). The observer is assumed to be aligned with the jet initially with \(\mathcal{T}_{\rm obs,0}=20^{\circ}\). In addition to smoothly varying \(f_{\rho}\) models, we also analyse a collapsing disk model, motivated by the discussion in Section 6.2, with \(m_{6}=5,\ m_{\star}=1,\ \beta=1\), which instantaneously collapses to \(h_{d}=0.05\) at \(t=t_{\rm Edd}=515\) days.
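The toy model of Equations (50)-(52) can be written compactly as below; this is an illustrative reimplementation using the parameter choices quoted in the text (\(\gamma_{\rm jet}=10\), \(\mathcal{T}_{\rm jet,0}=\mathcal{T}_{\rm obs}=20^{\circ}\), \(f_{\rho,\rm min}=0.01\), \(f_{\rho,\rm max}=0.1\)), not the analysis code used to produce Figure 20.

```python
# Sketch of the jet-tilt toy model, Eqs. (50)-(52). Parameter values follow the text;
# the code itself is an illustrative reimplementation.
import numpy as np

def jet_tilt(f_rho, tilt0=np.radians(20.0), f_min=0.01, f_max=0.1):
    """Eq. (50): jet tilt as a function of the disk-to-stream density contrast."""
    if f_rho < f_min:
        return tilt0
    if f_rho > f_max:
        return 0.0
    return tilt0 * (1.0 - (f_rho - f_min) / (f_max - f_min))

def beamed_flux(f_rho, f_on=1.0, gamma_jet=10.0, theta_obs=np.radians(20.0)):
    """Eqs. (51)-(52): top-hat jet flux seen by an observer fixed at theta_obs."""
    beta_jet = np.sqrt(1.0 - 1.0 / gamma_jet ** 2)
    theta_j = 1.0 / gamma_jet                      # beaming cone half-angle
    dtheta = abs(theta_obs - jet_tilt(f_rho))
    a = (1.0 - beta_jet) / (1.0 - beta_jet * np.cos(dtheta))
    if dtheta < theta_j:
        return f_on
    if dtheta < 2.0 * theta_j:
        return 0.5 * f_on * a ** 2
    return 0.5 * f_on * a ** 3

for f in (0.005, 0.05, 0.2):
    print("f_rho = %.3f  tilt = %4.1f deg  flux/F_on = %.2e"
          % (f, np.degrees(jet_tilt(f)), beamed_flux(f)))
```

As the printed values illustrate, once \(f_{\rho}\) exceeds \(f_{\rho,\rm max}\) and the jet realigns with the BH spin, an observer who was initially on-axis sees the flux drop by several orders of magnitude purely from beaming.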
Our model illustrates that if the band of \(f_{\rho}\) where the self-intersection weakens is large, the jet cannot shift by tens of degrees in 1-2 weeks. A rapid shutoff may instead be related to the collapse of the outer disk, which causes a rapid spike in \(f_{\rho}\) and a subsequent rapid realignment of the jet, as illustrated by the 'collapsing disk' model. Due to relativistic beaming, this can account for the more than 2 orders of magnitude drop in X-rays at \(\sim 500\) days in less than 15 days in jetted TDEs like _Swift_ J1644+57 with the appropriate TDE parameters. Note that we only require that the X-ray emission decrease by at least two orders of magnitude within \(\sim 2\) weeks in order to explain the behaviour of _Swift_ J1644+57. The X-rays after the decline could be disk emission which becomes dominant when the jet is out of the line of sight. This is a more attractive explanation since the jet emission follows a \(t^{-5/3}\) power law even after tilting, but the late time X-rays are approximately constant.

Figure 20: In the top panel, we illustrate how \(f_{\rho}\) evolves in our simple disk model described in Section 2 with a range of BH mass \(m_{6}=M_{\rm BH}/10^{6}M_{\odot}\) and stellar mass \(m_{\star}=M_{\star}/M_{\odot}\). Note that we set \(f_{\rm acc}=0.1\) in each profile for simplicity. We show the initial \(f_{\rho}\) for each simulation with \(a_{\star}=0.9\) models in Table 1 based on \(\dot{M}_{\rm inj}\) (horizontal dashed lines). We also show a case where we assume disk collapse at \(t=t_{\rm Edd}\) for \(m_{6}=5\) and \(m_{\star}=1\) (brown line). In the middle panel, we show the jet tilt from Equation 50 assuming \(\mathcal{T}_{\rm jet,0}=20^{\circ}\), \(f_{\rho,\rm min}=0.01\), and \(f_{\rho,\rm max}=0.1\). In the bottom panel, we show the beamed jet flux computed from Equation 52 assuming \(\mathcal{T}_{\rm obs}=\mathcal{T}_{\rm jet,0}\). We compare each model with the normalized X-ray flux from _Swift_ J1644+57 taken from Zauderer et al. (2013); Eftekhari et al. (2018); Cendes et al. (2021) (black circles). Models without collapse show a steady decrease in jet flux as the jet angle changes. Only the model which assumes disk collapse reasonably explains the \(>2\) order of magnitude decrease in X-ray flux observed in _Swift_ J1644+57.

### Coronal Evolution in Non-Jetted TDEs

Tilt effects are unlikely to lead to substantial X-ray changes in non-jetted TDEs since the emitting region is non-relativistic and we only saw changes of up to \(\sim 10^{\circ}\) in our models. However, our \(a_{*}=0\) simulations demonstrate that a coronal region can be sustained even during stream self-intersection provided enough magnetic flux threads the disk/BH. Curd (2021) found no funnel/corona region during the stream injection phase due to a substantially lower magnetic flux than our MAD disks, but this may only apply to the TDE evolution near the peak fallback rate as their simulations covered only a few days of evolution. Assuming magnetic flux increases as a function of time, which appears to occur in TDE disks (Sadowski et al., 2016), our \(a_{*}=0\) simulations may be interpreted as the limiting state for a TDE at a given \(f_{\rho}\) around a Schwarzschild BH since they are MAD. Increases in X-ray emission in non-jetted TDEs may then be related to both a hot, magnetized corona forming as \(\phi\) increases combined with a decrease in optical depth as the fall back rate declines.
The X-rays during this phase would exhibit a slow rise as the photosphere radius drops. The X-ray emission in AT2021ehb steadily turned on until it reached a maximum of \(\sim 5\times 10^{43}\)erg s\({}^{-1}\) before promptly declining by an order of magnitude at \(\sim 270\) days. The rise phase and spectral hardening of AT2021ehb could be explained by the coronal evolution scenario outlined in the previous paragraph while the rapid decrease in X-ray flux could conceivably be due to the delayed disk collapse we discuss in Section 6.2. While the coronal evolution in our non-jetted models is expected to be similar to a non-MAD case, whether or not thermal TDEs are also MAD is unclear and simulations which evolve the magnetic field suggest they should not be. This leads to important dynamical differences when considering the evolution of the disk. While a MAD disk may remain magnetic pressure supported, non-MAD accretion disks are expected to become thermally unstable once pressure support from shocks is lost. ### Future Prospects The discovery of tilt instability in TDE disks could have profound consequences on the emission properties beyond the X-ray emission from the jet or corona. It is conceivable that the polarization signal of the disk and jet will be impacted by changes in the tilt of the system. Although we found some evidence of enhanced magnetic flux accumulation in model m09f0.01b7, the turbulent dynamics near the horizon may have impeded this effect. The onset of the disk tilt also seems to correspond with a decrease of the magnetic flux at the horizon. Simulations with the self intersection radius move further away from the horizon may allow higher magnetic flux to be sustained. This may lead to a magnetic flux much higher than expected by Tchekhovskoy et al. (2014). Curd et al. (2022, 2023) investigated the morphology and radio spectra of jets from SANE super-Eddington accretion disks. Such an analysis could similarly be carried out on MAD TDE disks and would provide useful insight into how the dynamics of the system effect the ultimate jet properties. We plan to investigate this in a future work. ## 7 Conclusions * All of our simulations maintained a significant magnetic flux threading the horizon even after interacting with the TDE stream. Each simulation reached a MAD or semi-MAD state. Powerful jets were launched for \(a_{*}=0.9\) models. This is strong validation of the idea that TDEs can become MAD and launch spin-powered jets. * We found that the Maxwell stress is subdominant to hydrodynamic sources of viscosity at all values of \(f_{\rho}\) investigated in this work. Instead, shocks and hydrodynamic viscosity drive angular momentum transport. * The strength of the self-intersection outflow depends on the ratio between the stream and the disk. As the stream becomes less dense, ram pressure from the disk can effectively brake the stream and it eventually joins with the disk with either a weak self-intersection or no self-intersection at all. * During the early stages of a TDE, the stream is much denser than the disk with \(f_{\rho}<0.01\) since most of the mass has yet to fallback. The stream is essentially unperturbed by the disk at this stage and has a strong self-intersection shock since it maintains its orbital energy. The self-intersection outflow pushes on the jet/corona region. This tilts the jet/corona by \(10-40^{\circ}\) in our simulations. As \(f_{\rho}\) increases, the self-intersection shock weakens and powerful jets remain aligned with the BH spin. 
* In jetted TDEs, because the jet is tilted by the self-intersection outflow, the jet can transfer momentum to the disk, which tilts the disk to \(\sim 20-30^{\circ}\) in less than \(10,000t_{g}\). This configuration is stable due to the self-intersection of tilted material within \(R_{\rm SI}\) with un-tilted material being brought in from the stream. This effect does not occur when there is no self-intersection outflow (the stream is not dense enough) or there is no jet (as shown by our \(a_{*}=0\) models). * When we lowered the stream density in a restart of the model m09f0.01b7 after the disk/jet was tilted, we found that a MAD or semi-MAD state leads to alignment of the disk/jet similar to GRMHD simulations of tilted disks. We propose that this is due to the weakening/absence of the self-intersection, which acts to maintain the tilt once it sets in. * We demonstrate that rapid changes in \(f_{\rho}\), which may occur due to delayed disk collapse, will lead to a rapid X-ray shutoff in jetted TDEs. Jet realignment with the BH spin in models m09f1b7B and m09f0.1b7B represents a change of \(\sim 20-30^{\circ}\) in the jet angle in less than three days. We propose that this is an alternative method of rapidly dropping the X-ray flux in _Swift_ J1644+57 in \(\sim 15\) days without also requiring that the system no longer be MAD. ## 8 Acknowledgements We thank Enrico Ramirez-Ruiz for useful suggestions during preparation of the manuscript. We thank Aviyel Ahiyya for assistance with obtaining observational data. This work was supported by the Simons Collaboration on Extreme Electrodynamics of Compact Sources. Richard Jude Anantua was supported by the Oak Ridge Associated Universities Powe Award for Junior Faculty Enhancement. Computational support was provided via ACCESS resources (grant PHY230006). ## 9 Data Availability The data underlying this article will be shared on reasonable request to the corresponding author.
Magnetically arrested accretion disks (MADs) have been proposed to form around rapidly spinning black holes (BHs), but in the context of tidal disruption events (TDEs) the dynamics of strongly magnetized disks have so far not been explored with simulations realistic enough to capture their chaotic behaviour. Here, using global GRMHD simulations of a pre-existing MAD disk interacting with a TDE stream, we explore the differences between strongly magnetized TDEs and standard MADs. We show, for the first time, that a MAD or semi-MAD state can be sustained and that jets powered by the BH spin are produced in a TDE. We also show that the strength of the self-intersection shock depends on the ratio of the disk density to the stream density. The smaller the density ratio $f_\rho = \rho_d/\rho_s$, the more the jet and funnel are tilted.
2309.12443
Active Learning for Multilingual Fingerspelling Corpora
We apply active learning to help with data scarcity problems in sign languages. In particular, we perform a novel analysis of the effect of pre-training. Since many sign languages are linguistic descendants of French sign language, they share hand configurations, which pre-training can hopefully exploit. We test this hypothesis on American, Chinese, German, and Irish fingerspelling corpora. We do observe a benefit from pre-training, but this may be due to visual rather than linguistic similarities
Shuai Wang, Eric Nalisnick
2023-09-21T19:36:22
http://arxiv.org/abs/2309.12443v2
# Active Learning for Multilingual Fingerspelling Corpora ###### Abstract We apply active learning to help with data scarcity problems in sign languages. In particular, we perform a novel analysis of the effect of pre-training. Since many sign languages are linguistic descendants of French sign language, they share hand configurations, which pre-training can hopefully exploit. We test this hypothesis on American, Chinese, German, and Irish fingerspelling corpora. We do observe a benefit from pre-training, but this may be due to visual rather than linguistic similarities. Active Learning, Transfer Learning, Multilingual Sign Language Active Learning for Multilingual Fingerspelling Corpora Active Learning, Transfer Learning, Multilingual Sign Language ## 1 Introduction While there has been recent advances in technologies for written languages (e.g. BERT (Devlin et al., 2019), GPT-3 (Brown et al., 2020)), _signed languages_ have received little attention (Yin et al., 2021). In turn, the globe's 70 million deaf people and 360 million hard-of-hearing people largely do not benefit from modern language technologies. One reason for this gap is that _sign language processing_ (SLP) is inherently more challenging since identifying signs is a complex computer vision task--compared to string matching for written languages. Yet, the scarcity of sign language corpora is the largest hurdle to advances in SLP. There are, of course, relatively fewer users of signed languages, leading to less available training data. Moreover, the relative scarcity of native signers makes procuring high-quality labels and annotations for supervised learning difficult and costly. In this work, we make progress on alleviating data scarcity in SLP by applying _active learning_ (AL) (Settles, 2012) to multilingual _fingerspelling_ corpora. _Fingerspelling_ is a subtask in sign language communication: the signer spells-out a word associated with a concept that does not have an associated sign, such as a proper noun. For example, if a signer wants to reference "GPT-3," the signer would make a hand shape for each letter 'G,' 'P, and 'T'. See Figure 1 for example images. In addition to detecting sign-less concepts, recognizing fingerspelling is important because the hand configuration for some concepts are the same as the first letter of the associated word. For instance, in American sign language, the concept of'ready' is signed with the hands in the 'r'-configuration. While fingerspelling recognition is a relatively easier task than full SLP, it can still be challenging for low-resource languages. Consider Irish sign language: it has just 5000 active deaf users, and to make matters more complicated, has variations depending on the gender of the signer. Collecting large, diverse data sets for languages such as this one requires great effort. Thus, we also perform _transfer active learning_ by exploiting linguistic relations across sign languages. Specifically, American, Irish, German, and Chinese sign languages all are descendants of French sign language, thus resulting in shared hand configurations, among other features. We conjecture that learning can be done more quickly by first pre-training a model on a related sign language and then performing AL on an especially low-resource target language. We report two sets of experiments. In the first, we perform AL on American, Irish, German, and Chinese sign languages, demonstrating improvements over random sampling. 
In the second, we perform transfer AL by first pre-training the model on one of the other three sign languages. This is the first work to investigate leveraging linguistic relationships for transfer AL. While we observe some benefits, our initial results suggest that pairing data sets with a similar visual style improves transfer AL more than having a strong linguistic relationship but dissimilar visual style.

## 2 Data Sets: Multilingual Fingerspelling Corpora

We use the following publicly available fingerspelling corpora, all of which have a linguistic relationship due to being descendants from French sign language. In all cases, we standardize the image resolution to \(28\times 28\) pixels. For each data set, we resample the frequency of each letter so that it follows the character distribution of the corresponding written language. As a result, the data sets are class imbalanced.

1. **American Fingerspelling (ASL)**: This data set is available on Kaggle1 and contains \(34,627\) grayscale images (resolution \(28\times 28\)) for \(24\) alphabetic characters used in American sign language (ASL). The characters 'J' and 'Z' are omitted because they are dynamic signs that cannot be well-represented by a static image. The images are closely cropped to the hand and in turn do not require pre-processing.

Footnote 1: [https://www.kaggle.com/datasets/datamunge/sign-language-mnist](https://www.kaggle.com/datasets/datamunge/sign-language-mnist)

2. **Chinese Fingerspelling (CSL)** (Jiang et al., 2020): This data set consists of the alphabetic characters used in Chinese sign language (CSL). Specifically, there are \(1320\) images, each labeled as one of \(30\) letters--\(26\) are the characters A through Z and \(4\) are double syllable letters (ZH, CH, SH, and NG). The resolution is originally \(256\times 256\), and we down-sample them to \(28\times 28\). We discard the double syllable letters since they are unamenable to transfer learning. We convert all images to grayscale and remove 'J' and 'Z' so that the data set is aligned with the ASL data set.

3. **German Fingerspelling (GSL)** (Dreuw et al., 2006): This data set contains alphanumeric characters used in German sign language (GSL). The original data set consists of video sequences for \(35\) gestures: the \(26\) letters A to Z, the \(4\) German umlaut gestures (SCH, Ä, Ö, Ü), and the numbers from \(1\) to \(5\). We extracted \(20,904\) static images for the letters from the video frames. We remove the background environment using a ResNet-101 model pre-trained for segmentation (Chen et al., 2018). We again convert the images to grayscale and remove 'J' and 'Z'.

4. **Irish Fingerspelling (ISL)** (Oliveira et al., 2017): This data set contains the alphabetic characters used in Irish sign language (ISL). It consists of \(58,114\) images for the \(26\) letters A through Z. This data set differs from the others in that the sign is repeated by moving the forearm from a vertical to a near horizontal position. Again, we convert the images to grayscale and remove 'J' and 'Z'.

Figure 1: _Example Images_. The figure shows the letters 'A' and 'P' from the ASL, CSL, GSL, and ISL data sets respectively.

Figure 1 shows images for the letters 'A' and 'P' from the four corpora listed above. The sign for 'A' is very similar for ASL, GSL, and ISL. The sign for 'P' is the same for ASL and GSL but different for CSL and ISL.

## 3 Methodology: Active Learning

Our aim is to obtain a predictive model for fingerspelling recognition.
Given an input image, the model should be able to predict a label representing the alphabetic character being signed. Yet, given the resource constraints for signed languages (discussed in Section 1), we wish to train a highly accurate predictive model using as few labeled instances as possible. To achieve this, we rely on the methodology of _active learning_ (AL) (Settles, 2012), which allows us to obtain a sequence of highly informative labels from an oracle (such as a human expert). The hope is that the intelligent selection of these labels allows the model to achieve satisfactory performance while minimizing queries of the oracle. Formally, we assume the model is first trained on an initial data set \(\mathcal{D}_{0}=\{\mathbf{x}_{n},y_{n}\}_{n=1}^{N}\), with \(N\) being relatively small for the problem at hand. We also assume access to an unlabeled pool set \(\mathcal{X}_{p}=\{\mathbf{x}_{m}\}_{m=1}^{M}\) such that \(M>>N\). The oracle has access to the corresponding labels for this pool set, denoted \(\mathcal{Y}_{p}=\{y_{m}\}_{m=1}^{M}\). AL for image data has been well-studied (Gal et al., 2017), and various acquisition functions have been proposed. In preliminary experiments, we studied the following: maximum entropy (Shannon, 1948), \(BALD\)(Houlsby et al., 2011), variation ratios (Freeman, 1965), and mean standard deviation. We found all methods to be comparable but with variation ratios to have a slight edge. In turn, we use variation ratios in all experiments. At time step \(t\), a point from the pool set is selecting according to: \[\mathbf{x}_{t}^{*} = \underset{\mathbf{x}\in\mathcal{X}_{p,t-1}}{\arg\max}\quad 1-\max_{ \mathrm{y}}p\left(\mathrm{y}|\mathbf{x},\mathbf{\theta}_{t-1}\right) \tag{1}\] where \(p\left(\mathrm{y}|\mathbf{x},\mathbf{\theta}_{t-1}\right)\) is the model's maximum class confidence after being fit during the previous time step. After obtaining \(\mathbf{x}_{t}^{*}\), the oracle is queried for the associated label, and the model is re-fit to data that includes the selected feature-label pair (to obtain \(\mathbf{\theta}_{t}\)). Transfer Active LearningKnowledge gained from other domains can be helpful to improve performance on the target domain. In fact, Tamkin et al. (2022) suggest that the ability to actively learn is an emergent property of pre-training. We combine this inspiration with the fact that ASL, CSL, GSL, and ISL are linguistically related, proposing to pre-train on a related fingerspelling corpora before performing AL on the target domain. Specifically, we assume access to an auxiliary data set \(\mathcal{D}_{-1}=\left\{\mathbf{x}_{l},y_{l}\right\}_{l=1}^{L}\). We train the model of interest on this data set before training it on the pool set \(\mathcal{D}_{0}\). \(\mathcal{D}_{-1}\) is seen by the model only once and not retained as part of the growing data set as AL proceeds. ## 4 Experiments We perform two types of experiment. For the first, we verify the effectiveness of AL for an individual fingerspelling corpus. For the second, we examine if AL can be made more sample efficient by pre-training on a linguistically related corpus. The following settings apply to all experiments. We construct the initial set \(\mathcal{D}_{0}\) by choosing two samples per class uniformly at random (\(N=48\)). For constructing test sets, we hold-out 30% of the data in CSL (because of its small size) and 10% for the other corpora. The remaining data is the pool set. 
The query size for each data set is different due to their different sizes: 50 for ISL and GSL, 10 for ASL, and 5 for CSL. The classifier is retrained from a random initialization whenever new labels are procured. The classifier is trained for 50 epochs using a batch size of 128 with a learning rate of \(10^{-3}\).

### Single Corpus Active Learning

**Implementation Details.** We compare against uniformly random acquisition as the baseline. We re-run experiments with three unique random seeds to account for sampling variation. The classifier is a neural network with two convolutional layers (ReLU activations and dropout with \(p=25\%\)) and two fully-connected layers (ReLU activations and dropout with \(p=50\%\)).

**Results.** Figure 2 reports the results, showing test set accuracy (y-axis) as more data is collected (x-axis). We see a clear improvement over random sampling in all cases. Moreover, AL reaches near full-data-set performance by seeing just \(15\%\) (or less) of the original data in three out of four cases (ASL, GSL, and ISL). GSL sees the least benefit from AL, but surprisingly, we see that with just \(12\%\) of the training data, the model starts to surpass full-data-set performance. We suspect that this is due to outliers (created by frame extraction and/or cropping mentioned in Section 2) that the AL method has not yet seen but detract from performance.

Figure 2: _Single Corpus Active Learning_: The figure shows AL results for each fingerspelling corpus. Using variation ratios (blue) as an acquisition function is clearly superior to random sampling (yellow).

### Transfer Active Learning

**Implementation Details.** We next turn to the pre-training experiment. We use a ResNet-18 (He et al., 2016) as the backbone with the same structure as the previous experiment. We pre-train the classifier on the full data set from one corpus and then load the backbone of the pre-trained model with a re-initialized classification head. The entire structure of the neural network is retrained during AL. We re-run all experiments five times with different random seeds. As a control, we also perform a run with the model pre-trained on FashionMNIST (Xiao et al., 2017), another grayscale data set.

**Results.** Figure 3 reports test accuracy for all pre-training configurations, including no pre-training. The only noticeable trend that generalizes to all corpora is that pre-training on FashionMNIST degrades performance in comparison to pre-training on a fingerspelling corpus _and_ using no pre-training. While this observation supports our motivating hypothesis for transfer AL, we otherwise see only modest gains with pre-training. The only clear improvement over no pre-training is demonstrated for ISL pre-trained on GSL (subfigure d, blue line). As these are the only two data sets that include the signer's forearm, we suspect that visual similarity is causing the benefit rather than the linguistic relationship, or else we would have seen improvements in other data pairings. Yet, curiously, this benefit is not symmetric since pre-training with ISL did not improve GSL performance more than pre-training with other corpora.

Figure 3: _Transfer Active Learning_: The figure shows AL results for each fingerspelling corpus with pre-training and no pre-training. Pre-training with a non-fingerspelling corpus degrades performance in all cases.

Figure 4: _Per Class Results for Transfer Active Learning_: Generally, we see the gains from pre-training are had by shared letters (red) but not exclusively (‘X’ in ISL is not shared by GSL).
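For concreteness, a minimal PyTorch-style sketch of the classifier described above and of the variation-ratios acquisition step (Equation 1) is given below; the dropout rates, the \(28\times 28\) grayscale input, and the 24-class output follow the text, while the channel counts, pooling, and layer widths are our own assumptions.

```python
# Minimal sketch of the classifier and variation-ratios acquisition (Eq. 1).
# Dropout rates, input size (1x28x28) and 24 classes follow the paper; channel
# counts, pooling, and hidden width are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FingerspellingCNN(nn.Module):
    def __init__(self, n_classes=24):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.Dropout(0.25),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.Dropout(0.25),
            nn.MaxPool2d(2),
        )
        self.fc = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 14 * 14, 128), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(128, n_classes),
        )

    def forward(self, x):
        return self.fc(self.conv(x))

def variation_ratio_acquire(model, pool_images, k=1):
    """Pick the k pool points with the largest 1 - max_y p(y | x), per Eq. (1)."""
    model.eval()
    with torch.no_grad():
        probs = F.softmax(model(pool_images), dim=1)
        scores = 1.0 - probs.max(dim=1).values
    return torch.topk(scores, k).indices  # indices to send to the oracle for labels
```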
Figure 4 shows the accuracy per class (letter). To make the performance gap more obvious, we show performance at the time step with the largest gap (\(t=40\)). We expect to see the pink and blue lines (pre-training) to be higher than the green line (no pre-training) for letters shown in black (shared across languages). This is generally the case, but we also see improvements for non-shared letters, such as for 'X' in ISL. ## 5 Conclusion We have reported the first transfer AL results for fingerspelling corpora. We observed a clear success (pretraining on GSL and performing AL on ISL), but this was not observed for other pairings nor is the ISL-GSL relationship symmetric. In future work, we plan to more carefully control for linguistic and visual relationships between data sets to better isolate the cause of the performance gains. ## Acknowledgements We thank Floris Roelofsen, Paul Verhagen, Rajeev Verma, and Haochen Wang for helpful feedback.
We apply active learning to help with data scarcity problems in sign languages. In particular, we analyze the effect of pre-training. Since many sign languages are linguistic descendants of French sign language, they share hand configurations, which pre-training can hopefully exploit. We test this hypothesis on American, Chinese, German, and Irish fingerspelling corpora. We do observe a benefit from pre-training, but this may be due to visual rather than linguistic similarities.
2309.13985
Physics-Driven ML-Based Modelling for Correcting Inverse Estimation
When deploying machine learning estimators in science and engineering (SAE) domains, it is critical to avoid failed estimations that can have disastrous consequences, e.g., in aero engine design. This work focuses on detecting and correcting failed state estimations before adopting them in SAE inverse problems, by utilizing simulations and performance metrics guided by physical laws. We suggest to flag a machine learning estimation when its physical model error exceeds a feasible threshold, and propose a novel approach, GEESE, to correct it through optimization, aiming at delivering both low error and high efficiency. The key designs of GEESE include (1) a hybrid surrogate error model to provide fast error estimations to reduce simulation cost and to enable gradient based backpropagation of error feedback, and (2) two generative models to approximate the probability distributions of the candidate states for simulating the exploitation and exploration behaviours. All three models are constructed as neural networks. GEESE is tested on three real-world SAE inverse problems and compared to a number of state-of-the-art optimization/search approaches. Results show that it fails the least number of times in terms of finding a feasible state correction, and requires physical evaluations less frequently in general.
Ruiyuan Kang, Tingting Mu, Panos Liatsis, Dimitrios C. Kyritsis
2023-09-25T09:37:19
http://arxiv.org/abs/2309.13985v2
# Physics-Driven ML-Based Modelling for Correcting Inverse Estimation + ###### Abstract When deploying machine learning estimators in science and engineering (SAE) domains, it is critical to avoid failed estimations that can have disastrous consequences, e.g., in aero engine design. This work focuses on detecting and correcting failed state estimations before adopting them in SAE inverse problems, by utilizing simulations and performance metrics guided by physical laws. We suggest to flag a machine learning estimation when its physical model error exceeds a feasible threshold, and propose a novel approach, GEESE, to correct it through optimization, aiming at delivering both low error and high efficiency. The key designs of GEESE include (1) a hybrid surrogate error model to provide fast error estimations to reduce simulation cost and to enable gradient based backpropagation of error feedback, and (2) two generative models to approximate the probability distributions of the candidate states for simulating the exploitation and exploration behaviours. All three models are constructed as neural networks. GEESE is tested on three real-world SAE inverse problems and compared to a number of state-of-the-art optimization/search approaches. Results show that it fails the least number of times in terms of finding a feasible state correction, and requires physical evaluations less frequently in general. Physics-Driven Machine Learning Correction Optimization ## 1 Introduction Many estimation problems in science and engineering (SAE) are fundamentally inverse problem, where the goal is to estimate the state \(\mathbf{x}\in\mathcal{X}\) of a system from its observation \(\mathbf{y}\in\mathcal{Y}\). Examples include estimating the temperature state from the observed spectrum in combustion diagnostics [1], and discovering design parameters (state) of aero engine according to a group of performance parameters (observation) [2]. Traditional physics-driven inverse solvers are supported by rigorous physical laws, which vary depending on the application, e.g., the two-colour method for spectrum estimation [3], and cycle analysis for aero engine design [4]. Recent advances take advantage of machine learning (ML) techniques, constructing mapping functions \(F\) to directly estimate the state from the observation, i.e., \(\hat{\mathbf{x}}=F(\mathbf{y})\)[5, 6, 7]. Such ML solutions are more straightforward to develop, moreover, efficient and easy to use. However, ML-based state estimates can sometimes be erroneous, while SAE applications have very low error tolerance. One can imagine the disastrous consequences of providing unqualified aero engine design parameters. Therefore, it is critical to detect and correct failed ML estimations before adopting them. This leads to a special SAE requirement of evaluating the estimation correctness in the deployment process of an ML estimator. Since the ground truth state is unknown at this stage, indirect evaluation has to be performed. Such evaluations can be based on physical forward models and performance metrics [8; 9]. A common practice is to combine multiple evaluations to obtain an accumulated physical error, enforcing quality control from different aspects. When the physical error exceeds a feasibility threshold, one has to remediate the concerned ML estimation. One practice for finding a better estimation is to directly minimize the physical error in state space [10]. 
This requires solving a black-box optimization problem, for which it is challenging to find its global optimum, iterative approaches are used to find a near-optimal solution [11; 12]. In each iteration, a set of states are selected to collect their physical errors, then error feedback is used to generate better state(s) until a near-optimal state is found. Physical error collection involves time-consuming simulations[13; 14], e.g., a spectrum simulation which, despite taking just several minutes for each run [15], can become costly if queried many times. Consequently, the optimization process becomes time-consuming. Therefore, in addition to searching a satisfactory state with as small as possible physical error, it is also vital to decrease the query times to the physical evaluation. Our work herein is focused on developing an efficient algorithm for remediating the concerned ML estimation in deployment. We propose a novel correction algorithm, **G**enerative **E**xploitation and **E**xploration guided by hybrid **S**urrogate **E**rror (GEESE), building upon black-box optimization. It aims at finding a qualified state within an error tolerance threshold after querying the physical evaluations as few times as possible. The key design elements of GESE include: (1) A hybrid surrogate error model, which comprises an ensemble of multiple base neural networks, to provide fast estimation of the physical error and to enable informative gradient-based backpropagation of error feedback in model training. (2) A generative twin state selection approach, which consists of two generative neural networks for characterizing the distributions of candidate states, to effectively simulate the exploitation and exploration behaviours. We conduct thorough experiments to test the proposed algorithm and compare it with a series of state-of-the-art optimization/search techniques, based on three real-world inverse problems. Results show that, among the compared methods, GESE is able to find a qualified state after failing the least number of times and needing to query the physical evaluations less times. ## 2 Related Work **Optimization in SAE:** Development of SAE solutions often requires to formulate and solve optimization problems [16; 17; 18]. They are often black-box optimization due to the SAE nature. For instance, when the objective function is characterized through physical evaluations and solving partial differential equations (PDEs) [19], it is not given in a closed form. Typical black-box optimization techniques include Bayesian Optimization [20], Genetic Algorithm (GA) [21], and Particle Swarm Optimization (PSO) [22], etc. They often require a massive number of queries to the objective function in order to infer search directions for finding a near-optimal solution, which is time-consuming and expensive in SAE applications. Instead, differentiable objective functions are constructed, and the problem is reduced to standard optimization, referred to as white-box optimization to be in contrast with black-box. A rich amount of well established solvers are developed for this, e.g., utilizing first- and second-order gradient information [23]. Some recent developments use neural networks to optimize differentiable physical model evaluations, e.g., Optnet [24] and iterative neural networks [25]. 
However, physics-driven objective functions cannot always be formulated in a differential form, e.g., errors evaluated by the physical forward model in aero engine simulation, which is a mixture of database data, map information and PDEs [26]. A grey-box setting is thus more suitable in practice, where one does not overwrap the evaluations as a black box or oversimplify them as a white box, but a mixture of both. **Surrogate Model in Black-box Optimization:** To reduce the cost of querying objective function values in black-box optimization, recent approaches construct surrogate models to obtain efficient and cheap estimation of the objective function. This practice has been by and large used in SAE optimization, where the objective functions are mostly based on physical evaluations. The most popular technique for constructing surrogate models is ML, including neural networks and Gaussian process models [27; 28; 29]. The associated surrogate model is then incorporated within an optimization process, guided by, for instance, GA and Bayesian optimization, which generate states and interact with it [30; 29], or neural networks that work with differentiable surrogate models [31; 32; 12]. To avoid overfitting, recent effort has been invested to develop surrogate models consistent with some pre-collected data, aiming at obtaining more reliable near-optimal solutions [33; 34; 35; 36; 37]. Nevertheless, there is no guarantee that a surrogate model can well approximate a physical model consistently. Indeed, this is the motivation for the proposed method, where surrogate models are used to speed up the querying process, while the decision in regards to the suitability of the solution is based on the actual physical evaluation. **Reinforcement Learning for Inverse Problems:** In addition to black-box optimization based approaches, Reinforcement Learning (RL) [38; 39] serves as an alternative framework for solving inverse problems [40; 41; 42]. In an RL-based solution framework, physical evaluations are wrapped as a black-box environment outputting scalar reward, and the actions are the states to estimate according to the observation. The behaviour of the environment is simulated by training a world/critic model [43; 44], which is equivalent to a surrogate model of the physical evaluations. Different from black-box optimization based approaches, RL does not intend to search a feasible state estimation for the given observation, but to learn an authoritative agent/policy model [45; 46] to provide state estimations, while the policy training is guided by optimizing an accumulated scalar reward or error [47; 48]. Because of the desire of training a powerful policy model and the statistical nature of the reward, RL often requires many physical evaluations to collect diverse samples and validate training performance [49; 50]. This can be time-consuming when there is limited computing resource. ## 3 Proposed Method We firstly explain the notation convention: Ordinary letters, such as \(x\) or \(X\), represent scalars or functions with scalar output. Bold letters, such as \(\mathbf{x}\) or \(\mathbf{X}\), represent vectors or functions with vector output. The \(i\)-th element of \(\mathbf{x}\) is denoted by \(x_{i}\), while the first \(k\) elements of \(\mathbf{x}\) by \(x_{1:k}\). We use \(|\mathbf{x}|\), \(\|\mathbf{x}\|_{1}\) and \(\|\mathbf{x}\|_{2}\) to denote the dimension, \(l_{1}\)-norm and \(l_{2}\)-norm of the vector \(\mathbf{x}\). An integer set is defined by \([n]=\{1,2\ldots n\}\). 
Without loss of generality, an estimated state \(\hat{\mathbf{x}}\) is assessed by multiple physical models and/or metrics \(\{P_{i}\}_{i=1}^{h}\), resulting to an \(h\)-dimensional error vector, denoted by \[\mathbf{e}(\hat{\mathbf{x}},\mathbf{y})=\left[E_{P_{1}}(\hat{\mathbf{x}}, \mathbf{y}),E_{P_{2}}(\hat{\mathbf{x}},\mathbf{y}),\ldots,E_{P_{h}}(\hat{ \mathbf{x}},\mathbf{y})\right]. \tag{1}\] Each concerned ML estimation obtained from an observation \(\mathbf{y}\) is remediated independently, so \(\mathbf{y}\) acts as a constant in the algorithm, which enables simplifying the error notation to \(\mathbf{e}(\hat{\mathbf{x}})\) and \(E_{P_{i}}(\hat{\mathbf{x}})\). A better state estimation is sought by minimizing the following accumulated physical error as \[\min_{\hat{\mathbf{x}}\in\mathcal{X}}e(\hat{\mathbf{x}})=\sum_{i=1}^{h}w_{i}E_ {P_{i}}(\hat{\mathbf{x}}), \tag{2}\] where the error weights are priorly identified by domain experts according to the targeted SAE application. For our problem of interest, the goal is to find a state correction that is within a desired error tolerance, e.g., \(e(\hat{\mathbf{x}})\leq\epsilon\) where \(\epsilon>0\) is a feasibility threshold, determined by domain experts. Thus it is not necessary to find a global optimal solution, instead a feasible solution suffices. To achieve this, we adapt a typical iterative framework for black-box optimization: (1) Exploitation: Search the corrected states \(\left\{\hat{\mathbf{x}}_{i}^{(t)}\right\}_{i=1}^{n_{\text{tr}}}\) according to the guidance of surrogate model, and assess them by error function \(e\); (2) Exploration: Collect more data pair \(\left\{(\hat{\mathbf{x}}_{i},\mathbf{e}_{i})\right\}_{i=1}^{n_{\text{tr}}}\) for the purpose of updating the surrogate model. (3) Estimation: Training the surrogate error model with online collected data; This process is terminated till one of corrected states \(e(\hat{\mathbf{x}}_{\text{tr}})<\epsilon\). The objective is to find a feasible state \(\hat{\mathbf{x}}^{*}\) by querying the physical errors as less times as possible because it is time-consuming to collect the errors. Therefore, we challenge the difficult setting of choosing only two states to query at each iteration, where one is for exploitation and the other for exploration. A novel _twin state selection_ approach is proposed for this, which selects a potentially near-optimal state for exploitation and a potentially informative state for exploration at each iteration. Subsequently, this requires to perform error analysis for a large set of candidate states, which involves both the errors and their gradients. To ease and enable such computation, we develop a differentiable surrogate error model to rapidly approximate those error elements that are expensive to evaluate or in need of gradient calculation, and also provide informative gradient guidance with the assistance of error structure. A sketch of GEESE is shown in the algorithm 1. Below, we first explain the process of constructing the surrogate model for error approximation, followed by the twin state selection for characterizing the probability distributions of the candidate states and collecting errors, and finally, the implementation of the complete algorithm. ### Hybrid Neural Surrogate Error Models We start from an informal definition of implicit and explicit errors. Among the set of \(h\) error elements in Eq. 
(1), those that are expensive to collect or for which gradient computation is costly are referred to as _implicit errors_. This includes cases where the system is so complicated that computing its gradient takes far longer than a network backpropagation pass, or where the system is non-differentiable, such as the physical models of spectroscopy [15] and the aero engine [26], which contain database look-ups or maps. The remaining elements are _explicit errors_. We order these error elements so that the first \(k\) elements \(\left\{E_{P_{i}}(\hat{\mathbf{x}})\right\}_{i=1}^{k}\) are implicit while the remaining \(\left\{E_{P_{i}}(\hat{\mathbf{x}})\right\}_{i=k+1}^{h}\) are explicit. Our strategy is to develop a surrogate for each implicit error element, while directly calculating each explicit error. Taking advantage of the robustness of ensemble learning [51, 52], we propose to estimate the implicit errors by an ensemble of multiple base neural networks. Each base neural network is fully connected with a mapping function \(\boldsymbol{\phi}(\mathbf{x},\mathbf{w}):\mathcal{R}^{D}\times\mathcal{R}^{| \mathbf{w}|}\rightarrow\mathcal{R}^{k}\), taking the \(D\)-dimensional state space \(\mathcal{R}^{D}\) as its input space, while returning the approximation of the \(k\) implicit errors by its \(k\) output neurons. The dimension of the state space is \(D=2\) when predicting temperature and concentration from spectroscopy (one dimension each for temperature and concentration), while for aero engine design it equals the number of design parameters, which is eleven in our experiments (Section 4). The network weights are stored in the vector \(\mathbf{w}\). We train \(L\) individual base networks sharing the same architecture and obtain the final prediction using an average combiner. As a result, given a state estimation \(\hat{\mathbf{x}}\), the estimate of the implicit error vector is computed by \[\hat{\mathbf{e}}_{\text{im}}\left(\hat{\mathbf{x}},\left\{\mathbf{w}_{i} \right\}_{i=1}^{L}\right)=\frac{1}{L}\sum_{i=1}^{L}\boldsymbol{\phi}\left( \hat{\mathbf{x}},\mathbf{w}_{i}\right), \tag{3}\] and thus, the accumulated physical error is approximated by \[\hat{e}\left(\hat{\mathbf{x}},\left\{\mathbf{w}_{i}\right\}_{i=1}^{L}\right)= \underbrace{\sum_{j=1}^{k}w_{j}\left(\frac{1}{L}\sum_{i=1}^{L}\phi_{j}\left( \hat{\mathbf{x}},\mathbf{w}_{i}\right)\right)}_{\text{approximated implicit error}}+ \underbrace{\sum_{j=k+1}^{h}w_{j}E_{P_{j}}(\hat{\mathbf{x}})}_{\text{true explicit error}}. \tag{4}\] We refer to Eq. (4) as a hybrid surrogate error model, including both approximated and true error evaluations. The weights of the base neural networks \(\left\{\mathbf{w}_{i}\right\}_{i=1}^{L}\) are trained using a set of collected state-error pairs, e.g., \(D=\left\{(\hat{\mathbf{x}}_{i},\mathbf{e}_{i})\right\}_{i=1}^{N}\). In our implementation, bootstrap sampling [53] is adopted to train each base neural network independently, by minimizing a distance loss between the estimated and collected implicit errors, as \[\min_{\mathbf{w}_{i}}\mathbb{E}_{(\hat{\mathbf{x}},\mathbf{e})\sim D}\left[ \text{dist}\left(\boldsymbol{\phi}\left(\hat{\mathbf{x}},\mathbf{w}_{i}\right),\mathbf{e}_{1:k}\right)\right]. \tag{5}\] A typical example of the distance function is \(\text{dist}(\hat{\mathbf{e}},\mathbf{e})=\|\hat{\mathbf{e}}-\mathbf{e}\|_{2}^{2}\).
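A condensed sketch of the hybrid surrogate in Equations (3)-(5) is shown below; this is our own illustrative code rather than the released implementation, and the base-network width, optimizer settings, and squared-\(\ell_2\) distance are assumptions consistent with the text.

```python
# Sketch of the hybrid surrogate error model, Eqs. (3)-(5): an ensemble of L fully
# connected base networks approximates the k implicit errors, while explicit errors
# are evaluated directly. Widths and optimizer settings are illustrative assumptions.
import torch
import torch.nn as nn

def make_base_net(state_dim, k, width=64):
    return nn.Sequential(nn.Linear(state_dim, width), nn.ReLU(),
                         nn.Linear(width, width), nn.ReLU(),
                         nn.Linear(width, k))

class HybridSurrogateError(nn.Module):
    def __init__(self, state_dim, k, weights, explicit_errors, L=5):
        super().__init__()
        self.base_nets = nn.ModuleList([make_base_net(state_dim, k) for _ in range(L)])
        self.w = torch.as_tensor(weights, dtype=torch.float32)   # error weights w_1..w_h
        self.k = k
        self.explicit_errors = explicit_errors                   # differentiable callables

    def implicit(self, x):                                       # Eq. (3): ensemble mean
        return torch.stack([net(x) for net in self.base_nets]).mean(dim=0)

    def forward(self, x):                                        # Eq. (4): hybrid error
        e_hat = (self.w[: self.k] * self.implicit(x)).sum(dim=-1)
        for j, E in enumerate(self.explicit_errors):
            e_hat = e_hat + self.w[self.k + j] * E(x)
        return e_hat

def fit_base_net(net, states, implicit_targets, epochs=100, lr=1e-3):
    """Eq. (5): fit one base network on a bootstrap resample of (x, e_1:k) pairs."""
    idx = torch.randint(len(states), (len(states),))             # bootstrap sampling
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = ((net(states[idx]) - implicit_targets[idx]) ** 2).mean()
        loss.backward()
        opt.step()
```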
Notably, each element of the implicit error vector is estimated, rather than scalar value of the weighted error sum, as the structural information of the error vector can directly contribute in training, through the associated gradient information. When estimating the weighted sum directly, it is in a way to restrict the training loss to a form loosely like \((\hat{e}\left(\mathbf{w}\right)-\|\mathbf{e}\|_{1})^{2}\), which negatively affects the information content of the gradient information. We have observed empirically that, the proposed individual error estimation leads to improvements in training the exploitation generator, compared to using the weighted error sum, see ablation study (1) in Table 2. ### Twin State Selection A selection strategy, i.e., twin state selection (TSS), for querying two individual states at each iteration is proposed, one for exploration and one for exploitation, respectively. The objective of TSS is to substantially reduce the cost associated with physical error collection. In turn, this translates to the formidable challenge of designing a selection process, which maximizes the informativeness of the associated physical error collection subject to minimizing query times. It is obviously impractical and inaccurate to adopt the naive approach of choosing directly one state by searching the whole space. Instead, we target at a two-folded task, researching (1) which candidate set of states to select from and (2) how to select. By taking advantage of developments in generative AI, we construct generative neural networks to sample the candidate states. Specifically, we employ a latent variable \(\mathbf{z}\in\mathcal{R}^{d}\), which follows a simple distribution, e.g., uniform distribution \(\mathbf{z}\sim U\left([-a,a]^{d}\right)\), and a neural network \(\mathbf{G}(\mathbf{z},\boldsymbol{\theta}):\mathcal{R}^{d}\times\mathcal{R}^ {|\boldsymbol{\theta}|}\rightarrow\mathcal{R}^{D}\). The transformed distribution \(p\left(\mathbf{G}(\mathbf{z},\boldsymbol{\theta})\right)\) is then used to model the distribution of a candidate set. Thus, the task of candidate selection is transformed into determining the neural network weights \(\boldsymbol{\theta}\) for the generator \(\mathbf{G}\). In general, exploitation attempts to select states close to the optimal one, whereas exploration attempts to select more informative states to enhance the error estimation. There are various ways to simulate the exploitation and exploration behaviours. For instance, in conventional black-box optimization, e.g., Bayesian optimization and GA, exploitation and exploration are integrated within a single state selection process [54], while in reinforcement learning, a balance trade-off approach is pursued [55; 39]. Our method treats them as two separate tasks with distinct strategies for constructing generators and selecting states. 
**ExploI_Tation:** To simulate the exploitation behaviour, the exploitation generator \(\mathbf{G}_{\text{IT}}\) is trained at each iteration by minimizing the expectation of the physical error estimate, using the hybrid surrogate error model \[\boldsymbol{\theta}_{\text{G}_{\text{IT}}}^{(t)}=\arg\min_{\boldsymbol{\theta} \in\mathcal{R}^{d}}\mathbb{E}_{\mathbf{z}\sim U([-a,a]^{d})}\left[\hat{e} \left(\mathbf{G}_{\text{IT}}(\mathbf{z},\boldsymbol{\theta}),\left\{\mathbf{ w}_{i}^{(t-1)}\right\}_{i=1}^{L}\right)\right], \tag{6}\] where the base networks from the last iteration are used and we add the subscript \(t-1\) to the weights of the error network for emphasizing. Finally, among the candidates generated by \(\mathbf{G}_{\text{IT}}\) with its trained weights \(\boldsymbol{\theta}_{\text{G}_{\text{IT}}}^{(t)}\), we select the following state \[\hat{\mathbf{x}}_{\text{IT}}^{(t)}=\arg\min_{\hat{\mathbf{x}}\sim p\left(\hat {\mathbf{x}}|\boldsymbol{\theta}_{\text{G}_{\text{IT}}}^{(t)}\right)}\hat{e} \left(\hat{\mathbf{x}},\left\{\mathbf{w}_{i}^{(t-1)}\right\}_{i=1}^{L}\right), \tag{7}\] to query its physical error by Eq. (1), resulting in the state-error pair \(\left(\hat{\mathbf{x}}_{\text{IT}}^{(t)},\mathbf{e}_{\text{IT}}^{(t)}\right)\). If the queried error is less than the feasibility threshold, i.e., \(\mathbf{e}_{\text{IT}}^{(t)}\leq\epsilon\), this selected state is considered acceptable and the iteration is terminated. Otherwise, it is used to keep improving the training of the surrogate error model in the next iteration. **ExploRation:** To simulate the exploration behaviour, a state that does not appear optimal but has the potential to complement the surrogate error model should be selected. We use an exploration generator \(\mathbf{G}_{\text{R}}\) to generate candidates. To encourage diversity so as to facilitate exploration, we assign the generator random weights sampled from a simple distribution, e.g., \[\boldsymbol{\theta}_{\text{G}_{\text{R}}}^{(t)}\sim N\left(0,\mathcal{I}^{| \theta_{\text{G}_{\text{R}}}|}\right). \tag{8}\] We do not intend to train the exploration generator \(\mathbf{G}_{\text{R}}\), because any training loss that encourages exploration and diversity can overly drive the base networks to shift focus in the state space and cause instability in the integrated algorithm. Such an instability phenomenon, caused by training \(\mathbf{G}_{\text{R}}\), is demonstrated in the ablation study (2) in Table 2. By adopting the idea of active exploration via disagreement [56; 57], we consider the state, for which the base networks are the least confident about to estimate the implicit errors, as more informative. Since we use an ensemble of base neural networks to estimate the error, the standard deviations of the base network predictions serve as natural confidence measures [56], which are stored in a \(k\)-dimensional vector: \[\boldsymbol{\sigma}\left(\hat{\mathbf{x}},\left\{\mathbf{w}_{i}^{(t-1)} \right\}_{i=1}^{L}\right)=\left[\sigma_{1}\left(\left\{\boldsymbol{\phi}_{1} \left(\hat{\mathbf{x}},\mathbf{w}_{i}\right)\right\}_{i=1}^{L}\right),\ldots, \sigma_{k}\left(\left\{\boldsymbol{\phi}_{k}\left(\hat{\mathbf{x}},\mathbf{w}_ {i}\right)\right\}_{i=1}^{L}\right)\right]. 
\tag{9}\] The state that maximizes the disagreement between the base networks, i.e., an accumulated standard deviation, is selected as \[\hat{\mathbf{x}}_{\text{R}}^{(t)}=\arg\max_{\hat{\mathbf{x}}\sim p\left(\hat{\mathbf{x}}|\boldsymbol{\theta}_{\text{G}_{\text{R}}}^{(t)}\right)}\boldsymbol{\sigma}\left(\hat{\mathbf{x}},\left\{\mathbf{w}_{i}^{(t-1)}\right\}_{i=1}^{L}\right)\mathbf{w}_{k}^{T}, \tag{10}\] where the row vector \(\mathbf{w}_{k}=[w_{1},w_{2},\dots,w_{k}]\) stores the implicit error weights. The state-error pair \(\left(\hat{\mathbf{x}}_{\text{R}}^{(t)},\mathbf{e}_{\text{R}}^{(t)}\right)\) is obtained after error collection. **Surrogate Model Update:** To initialize the algorithm, we first collect a set of state-error pairs \(D_{0}=\{\mathbf{x}_{i},\mathbf{e}_{i}\}_{i=1}^{N}\) for randomly selected states. Next, at each iteration \(t\), two new states are selected and their physical errors are calculated, resulting in two new training examples for updating the surrogate error model and an expanded training set \(D_{t}=D_{t-1}\cup\left(\hat{\mathbf{x}}_{\text{IT}}^{(t)},\mathbf{e}_{\text{IT}}^{(t)}\right)\cup\left(\hat{\mathbf{x}}_{\text{R}}^{(t)},\mathbf{e}_{\text{R}}^{(t)}\right)\). In our implementation, the base neural network weights \(\mathbf{w}_{i}^{(t-1)}\) obtained from the previous iteration are further fine-tuned using the two added examples \(\left(\hat{\mathbf{x}}_{\text{IT}}^{(t)},\mathbf{e}_{\text{IT}}^{(t)}\right)\) and \(\left(\hat{\mathbf{x}}_{\text{R}}^{(t)},\mathbf{e}_{\text{R}}^{(t)}\right)\), as well as \(N\) examples sampled from the previous training set \(D_{t-1}\). ### Remediation System and Implementation Given an ML estimation \(\hat{\mathbf{x}}\), the remediation system collects its physical error vector as in Eq. (1), then calculates the accumulated error from the objective function of Eq. (2) and compares it to the feasibility threshold \(\epsilon>0\). When the error exceeds the threshold, the GEESE algorithm is activated to search for a feasible estimation \(\hat{\mathbf{x}}^{*}\) such that \(e\left(\hat{\mathbf{x}}^{*}\right)\leq\epsilon\), by querying the physical error as few times as possible. Algorithm 2 outlines the pseudocode of GEESE3, while Fig. 1 illustrates its system architecture. Our key implementation practices are summarized below. Footnote 3: We will release an implementation of GEESE after the paper is accepted and insert the link here. **Empirical Estimation:** Eqs. (6), (7) and (10) require operations performed over probability distributions. In practice, we approximate these by Monte Carlo sampling. For Eq. (6), we instead minimize the average over the sampled latent variables \(Z_{\text{IT}}=\{\mathbf{z}_{i}\}_{i=1}^{N_{\text{IT}}}\) with \(\mathbf{z}_{i}\sim U\left([-a_{\text{IT}},a_{\text{IT}}]^{d}\right)\); this sample is fixed across all iterations. The search space of Eq. (7) is approximated by a state set computed from \(Z_{\text{IT}}\) using the trained generator, i.e., \(X_{\text{IT}}^{(t)}=\left\{\mathbf{G}_{\text{IT}}\left(\mathbf{z}_{i},\mathbf{\theta}_{\text{G}_{\text{IT}}}^{(t)}\right)\right\}_{i=1}^{N_{\text{IT}}}\). Similarly, the search space of Eq. (10) is approximated by a state sample \(X_{\text{R}}^{(t)}=\left\{\mathbf{G}_{\text{R}}\left(\mathbf{z}_{i},\mathbf{\theta}_{\text{G}_{\text{R}}}^{(t)}\right)\right\}_{i=1}^{N_{\text{IT}}}\), where \(\mathbf{z}_{i}\sim U\left([-a_{\text{R}},a_{\text{R}}]^{d}\right)\). 
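To make the Monte Carlo approximations above concrete, the following is a minimal sketch of one TSS iteration, reusing the `HybridSurrogateError` and `StateGenerator` classes sketched earlier. The optimizer, sample sizes, number of generator updates, and the re-initialization of the exploration generator with normally distributed weights (mimicking Eq. (8)) are illustrative assumptions rather than the exact published implementation.

```python
import torch
import torch.nn as nn

def tss_iteration(err_model, g_it, d, D, n_it=64, n_r=64,
                  a_it=1.0, a_r=5.0, g_steps=20, lr=1e-2):
    """One twin-state-selection step: returns (x_exploit, x_explore)."""
    # Exploitation: train G_IT to minimize the estimated error (Eq. 6)
    z_it = (torch.rand(n_it, d) * 2 - 1) * a_it               # z ~ U([-a_IT, a_IT]^d)
    opt = torch.optim.Adam(g_it.parameters(), lr=lr)
    for _ in range(g_steps):
        loss = err_model(g_it(z_it)).mean()                   # E_z[ e_hat(G_IT(z)) ]
        opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():
        cand_it = g_it(z_it)                                  # candidate set X_IT
        x_exploit = cand_it[err_model(cand_it).argmin()]      # Eq. (7): argmin of e_hat

    # Exploration: random-weight generator, maximize ensemble disagreement
    g_r = StateGenerator(d, D)
    for p in g_r.parameters():
        nn.init.normal_(p, mean=0.0, std=1.0)                 # Eq. (8): weights ~ N(0, I)
    with torch.no_grad():
        z_r = (torch.rand(n_r, d) * 2 - 1) * a_r
        cand_r = g_r(z_r)                                     # candidate set X_R
        score = (err_model.disagreement(cand_r) * err_model.w).sum(dim=-1)
        x_explore = cand_r[score.argmax()]                    # Eq. (10): argmax disagreement

    return x_exploit, x_explore
```

Both selected states would then be evaluated with the physical error functions and appended to the training set used to fine-tune the base networks, as described above.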
**Early Stopping:** When training the base neural networks for implicit error estimation, in addition to the maximum iteration number \(T_{e}\), early stopping of the training is enforced when the training loss in Eq. (5) is smaller than a pre-specified threshold \(\epsilon_{e}\). As a result, a higher number \(n_{e}\) of early-stopped base neural networks indicates a potentially more accurate error estimation. This strengthens the confidence in training the generator \(\mathbf{G}_{\text{IT}}\) by Eq. (6), which uses the base neural networks trained in the previous iteration. In other words, when the base neural networks are not sufficiently well trained, it is not worthwhile to put much effort into training the generator, which relies on the estimation quality. Therefore, we set the maximum iteration number \(T_{G}\) for training \(\mathbf{G}_{\text{IT}}\) in proportion to \(n_{e}\), i.e., \(T_{G}=\delta_{G}[\frac{2n_{e}}{L}+1]\), where \(\delta_{G}\) is the training frequency coefficient. **Failed Exploitation Exclusion:** The state selection motivated by exploitation aims at choosing an \(\hat{\mathbf{x}}_{\text{IT}}^{(t)}\) with comparatively low physical error. To encourage this, a focus coefficient \(c\) is introduced, which, together with the feasibility error threshold \(\epsilon>0\), is used to exclude a potentially failed state with a high estimated error, i.e., \(\hat{e}\left(\hat{\mathbf{x}},\{\mathbf{w}_{i}\}_{i=1}^{L}\right)>c\epsilon\), to avoid an unnecessary query. ## 4 Experiments and Results We test the proposed approach GEESE on three real-world engineering inverse problems, including aero engine design [42], electro-mechanical actuator design [58] and pulse-width modulation of 13-level inverters [59]. The first problem is to find eleven design parameters (state) of an aero engine to satisfy the thrust and fuel consumption requirements (observation), the second problem is to find 20 design parameters (state) of an electro-mechanical actuator to satisfy requirements for overall cost and safety factor (observation), and the third problem is to find a group of 30 control parameters (state) of a 13-level inverter to satisfy the requirements for distortion factor and nonlinear factor (observation). Note that these three problems are not computationally expensive; they are used only for the convenience of demonstrating the proposed GEESE algorithm. Details of these problems, along with their physical models and evaluation metrics, are explained in the supplementary material (Section A). We compare GEESE with a set of classical and state-of-the-art black-box optimization techniques, including Bayesian Optimization with Gaussian Process (BOGP), GA [21], PSO [22], CMAES [60], ISRES [61], NSGA2 [62], and UNSGA3 [63], as well as the recently proposed SVPEN [42], which employs RL in solving SAE inverse problems. These techniques are chosen because they are effective at seeking solutions with the assistance of actual physical evaluations. Driven by the research goal of finding a feasible state estimation by querying the physical evaluations as few times as possible, we adopt two metrics to compare performance. Figure 1: The workflow of the whole system: an existing ML model gives a first estimation, which is assessed by the physical evaluations \(E_{P}\). If it fails, GEESE is activated. The error estimated by the hybrid surrogate error model is used to train the exploitation generator \(\mathbf{G}_{\text{IT}}\). Two candidate state sets are generated by \(\mathbf{G}_{\text{IT}}\) and the exploration generator \(\mathbf{G}_{\text{R}}\), and finally, two states \(\hat{\mathbf{x}}^{*}=\hat{\mathbf{x}}_{\text{IT}}\) and \(\hat{\mathbf{x}}_{\text{R}}\) are selected by the surrogate error model and fed to \(E_{P}\) for evaluation and data collection. The process terminates when \(e(\hat{\mathbf{x}}^{*})\leq\epsilon\). 
First, we set a maximum budget of \(T=1,000\) query times for all studied problems and compared methods, and test each method on each problem individually with 100 experimental cases, where each case corresponds to a concerned ML state estimation. The setup of the experimental cases is described in Appendix A of the supplementary material. We measure the number of experiments out of 100 in which a method fails to correct the concerned estimation upon reaching the maximum query budget, and refer to it as the failure times \(N_{\text{failure}}\). Also, the average number of queries that a method requires before finding a feasible state in an experiment is reported over the 100 experiments, referred to as the average query times \(N_{\text{query}}\). A more competitive algorithm is expected to have smaller \(N_{\text{failure}}\) and \(N_{\text{query}}\). We report the adopted hyperparameter and model settings for GEESE: the common hyperparameter settings shared between all three studied problems include \(T_{e}=40\), \(\epsilon_{e}=1e^{-4}\) and \(N=64\), and learning rates of \(1e^{-2}\) and \(1e^{-4}\) for training the exploitation generator and the base neural networks, respectively. Different focus coefficients of \(c=1.5,2\) and \(5\) (set in an increasing fashion) are used for problems 1, 2 and 3, respectively, due to the increasing problem complexity associated with the growing dimension of their state spaces. Similarly, an increasing training frequency coefficient \(\delta_{G}=1,1\) and \(7\) is used for problems 1, 2 and 3, respectively, because more training iterations are required as the problem involves more complex patterns from a higher-dimensional state space. The ensemble surrogate model for estimating the implicit errors is constructed as an average of 4 multi-layer perceptrons (MLPs), each with three hidden layers consisting of 1024, 2028 and 1024 hidden neurons. The exploration generator \(\mathbf{G_{R}}\) is constructed as a single layer perceptron (SLP) and its one-dimensional input is sampled from \(U\left([-5,5]\right)\). For problems 1 and 2, which are relatively less complex from an engineering point of view, we design a simplified exploitation generator by setting \(\mathcal{Z}=\mathcal{X}\). Then, we directly sample \(Z_{\text{IT}}\) as the initial state set \(X_{\text{IT}}^{(0)}\); this initial state set is iterated via the following equations, modified from Eqs. (6) and (7), to obtain the state set \(X_{\text{IT}}^{(t)}\): \[X_{\text{IT}}^{(t)}=\arg\min_{X_{\text{IT}}^{(t)}\in\mathcal{X}}\mathbb{E}_{\hat{\mathbf{x}}\sim X_{\text{IT}}^{(t-1)}}\left[\hat{e}\left(\hat{\mathbf{x}},\left\{\mathbf{w}_{i}^{(t-1)}\right\}_{i=1}^{L}\right)\right], \tag{11}\] \[\hat{\mathbf{x}}_{\text{IT}}^{(t)}=\arg\min_{\hat{\mathbf{x}}\in X_{\text{IT}}^{(t)}}\hat{e}\left(\hat{\mathbf{x}},\left\{\mathbf{w}_{i}^{(t-1)}\right\}_{i=1}^{L}\right). \tag{12}\] Problem 3 involves a special state pattern, requiring the state values to increase over the dimensions, i.e., \(\mathbf{x}_{i}-\mathbf{x}_{i+1}<0\). 
To enable the latent variables to capture this, we construct the exploitation generator \(\mathbf{G_{\text{IT}}}\) as an MLP with three hidden layers consisting of 256, 512 and 256 hidden neurons. Also, to avoid generation collapse [64] in problem 3, a regularization term is added to the training loss in Eq. (6), resulting in the following revised training objective that encourages state diversity: \[\boldsymbol{\theta}_{\text{G}_{\text{IT}}}^{(t)}=\arg\min_{\boldsymbol{\theta}\in\mathbb{R}^{30}}\mathbb{E}_{\mathbf{z}\sim U\left([-5,5]^{30}\right)}\left[\hat{e}\left(\mathbf{G_{\text{IT}}}(\mathbf{z},\boldsymbol{\theta}),\left\{\mathbf{w}_{i}^{(t-1)}\right\}_{i=1}^{L}\right)+\max\left(0.0288-\sigma_{1}(\mathbf{z},\boldsymbol{\theta}),0\right)\right], \tag{13}\] where \(\sigma_{1}(\mathbf{z},\boldsymbol{\theta})\) denotes the standard deviation of the first state element generated by \(\mathbf{G_{\text{IT}}}\). We encourage the generated states to shift away from the collapsed point without spreading excessively, by bounding \(\sigma_{1}\) with a portion of the standard deviation of a uniform distribution, e.g., \(0.288\); the portion \(\frac{0.288}{10}=0.0288\) is observed to be empirically effective. The spread control is only needed for the first state element, as the remaining elements follow from the constraint \(\mathbf{x}_{i}-\mathbf{x}_{i+1}<0\). Configurations of the competing methods and extra information on GEESE are provided in Appendix B of the supplementary material. ### Results and Comparative Analysis Table 1 summarizes the results of the compared methods for the three problems, obtained with a feasibility threshold of \(\epsilon=0.075\), which represents a challenging setting with low error tolerance. It can be observed that GEESE has the fewest failure times \(N_{\text{failure}}\) on all three problems. In problem 3, especially, GEESE succeeds with no failure while most other methods have more than 10 failures. This is a highly desired characteristic for a remediation system with low error tolerance. In addition, GEESE also has the fewest query times \(N_{\text{query}}\) in all three problems, indicating the best efficiency. We report additional results in Appendix C of the supplementary material, obtained by varying the feasibility threshold \(\epsilon\) and the initial sample size \(N\), where GEESE also achieves satisfactory performance in general, while outperforming the other methods in handling higher-dimensional problems with lower error tolerance. SVPEN [42] cannot return a feasible correction within 1,000 queries in any experiment, as its core supporting RL requires far more queries than the other optimization-based techniques. ### Ablation Studies and Sensitivity Analysis To examine the effectiveness of the key design elements of GEESE, we perform a set of ablation studies and report the results in Table 2, using problem 1 with a small feasibility threshold of \(\epsilon=0.05\), indicating low error tolerance. The studies include the following altered designs: (1) estimate the implicit error sum directly, using an MLP with the same hidden layers but a single output neuron; (2) train the exploration generator \(\mathbf{G_{R}}\) using an approach suggested by [57]; (3) remove the early stopping design; (4) remove the focus coefficient. The results show that estimating the implicit error sum worsens the performance. This is because the structural information in the gradient is lost in error sum estimation, which can cause ambiguous updates when training \(\mathbf{G_{IT}}\) and consequently requires GEESE to make more error queries. 
Training \(\mathbf{G_{R}}\) also worsens the performance compared to simply assigning random network weights to \(\mathbf{G_{R}}\) without training. As previously explained in Section 3.2, this is because training \(\mathbf{G_{R}}\) can frequently shift the focus of the surrogate error model and, thus, affect the stability of the optimization process. Both the early stopping and the focus coefficient play important roles in GEESE, where the former prevents GEESE from overfitting and the latter helps avoid unnecessary queries. Additional results on hyperparameter sensitivity analysis for GEESE are provided in Appendix D of the supplementary material. The results show that GEESE is not very sensitive to hyperparameter changes and allows a wide range of values with satisfactory performance, which makes GEESE easy to tune and use in practice. ## 5 Discussion and Conclusion We have proposed GEESE, a novel physics-driven optimization algorithm to correct ML estimation failures in SAE inverse problems. To query the expensive physical evaluations less frequently, GEESE uses a cheaper hybrid surrogate error model, mixing an ensemble of base neural networks for implicit error approximation with analytical expressions of the exact explicit errors. To effectively model the probability distribution of candidate states, two generative neural networks are constructed to simulate the exploration and exploitation behaviours. In each iteration, the exploitation generator is trained to find the most promising state with the smallest error, while the exploration generator is randomly sampled to find the most informative state to improve the surrogate error model. These two types of selection are guided, respectively, by the error approximated by the ensemble and by the disagreement between its base neural networks. The element-wise error approximation promotes a more effective interaction between the surrogate error model and the two generators. Tested on three real-world engineering inverse problems, GEESE outperforms all the compared methods, finding a feasible state with the fewest queries and no failures under the low-error-tolerance setup. In future work, there are still challenges to address, particularly for very high-dimensional inverse problems. Such problems require larger and more complex model architectures to accommodate their more complex underlying patterns, and thus impose challenges on training time and data requirements. Computational expense should account not only for the query cost of physical evaluations but also for the learning cost of such models. 
Flexible neural network architectures that allow for embedding domain/induced knowledge in addition to simulation data, as well as their training and their interaction with the main solution model, e.g., an ML estimator for inverse problems, are interesting directions to pursue. \begin{table} \begin{tabular}{c c c} \hline \hline \multicolumn{3}{c}{(1): Individual vs Sum Error Estimation} \\ \hline Surrogate Error Model & Query times & Standard deviation \\ \hline Estimate error elements & **20.20** & **16.37** \\ Estimate error sum & 23.26 & 21.18 \\ \hline \hline \multicolumn{3}{c}{(3): Effect of Early Stopping} \\ \hline Schedule & Query times & Standard deviation \\ \hline with early stopping & **20.20** & **16.37** \\ w/o early stopping & 32.80 & 17.84 \\ \hline \hline \end{tabular} \begin{tabular}{c c c} \hline \hline \multicolumn{3}{c}{(2): Effect of Exploration Training} \\ \hline Exploration style & Query times & Standard deviation \\ \hline w/o training & **32.64** & **22.82** \\ with training & 41.32 & 97.15 \\ \hline \hline \multicolumn{3}{c}{(4): Effect of Focus Coefficient} \\ \hline Schedule & Query times & Standard deviation \\ \hline with focus coefficient & **20.20** & **16.37** \\ w/o focus coefficient & 27.19 & 19.36 \\ \hline \hline \end{tabular} \end{table} Table 2: Results of ablation studies reported on problem 1, where the better performance is highlighted in **bold**. \begin{table} \begin{tabular}{c|c c|c c|c c} \hline \hline \multirow{2}{*}{**Algorithm**} & \multicolumn{2}{c|}{**Problem 1**} & \multicolumn{2}{c|}{**Problem 2**} & \multicolumn{2}{c}{**Problem 3**} \\ & \multicolumn{2}{c|}{**State Dimension: 11**} & \multicolumn{2}{c|}{**State Dimension: 20**} & \multicolumn{2}{c}{**State Dimension: 30**} \\ & Failure times & Query times & Failure times & Query times & Failure times & Query times \\ \hline BOGP & 0 & 3.29 ±1.51 & 97 & 973.76 ±144.28 & 4 & 112.66 ±229.98 \\ GA & 0 & 64.00 ±0.00 & 0 & 130.56 ±63.31 & 13 & 231.76 ±339.71 \\ PSO & 0 & 64.00 ±0.00 & 0 & 64.00 ±0.00 & 12 & 244.16 ±343.71 \\ CMAES & 0 & 55.67 ±3.28 & 0 & 119.44 ±41.80 & 12 & 227.42 ±312.17 \\ ISRES & 0 & 65.00 ±0.00 & 0 & 177.64 ±80.51 & 16 & 250.05 ±350.16 \\ NSGA2 & 0 & 64.00 ±0.00 & 0 & 139.52 ±68.56 & 13 & 232.40 ±359.94 \\ UNSGA3 & 0 & 64.00 ±0.00 & 0 & 140.80 ±79.94 & 12 & 227.52 ±330.07 \\ SVPEN & 100 & 1000.00 ±0.00 & 100 & 1000.00 ±0.00 & 100 & 1000.00 ±0.00 \\ GEESE (Ours) & 0 & **3.18 ±1.98** & 0 & **51.65 ±33.01** & **0** & **43.56 ±65.28** \\ \hline \hline \end{tabular} \end{table} Table 1: Performance comparison of the compared methods, where the best result is shown in **bold** and the second best is underlined.
When deploying machine learning estimations in science and engineering (SAE) fields, it is important to avoid failed estimations, which can otherwise lead to serious consequences, for example in aero engine design. This work aims to detect and correct failed estimations before they are adopted for SAE inverse problems, which is achieved using simulation and physics-law-based performance evaluation. An ML estimation is flagged when its physical model error exceeds a feasibility threshold, and it is then corrected. We propose a new method, called GEESE, that performs this correction through optimization, aiming for low error and high efficiency. The key designs of GEESE include (1) a hybrid surrogate error model for fast error estimation and (2) two generative models for approximating the probability distribution of candidate states.
2310.00124
Bidirectional multi-photon communication between remote superconducting nodes
Quantum communication testbeds provide a useful resource for experimentally investigating a variety of communication protocols. Here we demonstrate a superconducting circuit testbed with bidirectional multi-photon state transfer capability using time-domain shaped wavepackets. The system we use to achieve this comprises two remote nodes, each including a tunable superconducting transmon qubit and a tunable microwave-frequency resonator, linked by a 2 m-long superconducting coplanar waveguide, which serves as a transmission line. We transfer both individual and superposition Fock states between the two remote nodes, and additionally show that this bidirectional state transfer can be done simultaneously, as well as used to entangle elements in the two nodes.
Joel Grebel, Haoxiong Yan, Ming-Han Chou, Gustav Andersson, Christopher R. Conner, Yash J. Joshi, Jacob M. Miller, Rhys G. Povey, Hong Qiao, Xuntao Wu, Andrew N. Cleland
2023-09-29T20:30:42
http://arxiv.org/abs/2310.00124v1
# Bidirectional multi-photon communication between remote superconducting nodes ###### Abstract Quantum communication testbeds provide a useful resource for experimentally investigating a variety of communication protocols. Here we demonstrate a superconducting circuit testbed with bidirectional multi-photon state transfer capability using time-domain shaped wavepackets. The system we use to achieve this comprises two remote nodes, each including a tunable superconducting transmon qubit and a tunable microwave-frequency resonator, linked by a 2 m-long superconducting coplanar waveguide, which serves as a transmission line. We transfer both individual and superposition Fock states between the two remote nodes, and additionally show that this bidirectional state transfer can be done simultaneously, as well as used to entangle elements in the two nodes. + Footnote †: preprint: APS/123-QED Long range, high fidelity communication of quantum information has applications in several areas [1], including secure communication using quantum key cryptography [2], as well as serving as the backbone for a future quantum internet [3]. These applications require sources of entangled photons, preferably on-demand, to perform most quantum cryptographic functions. Photons at optical frequencies are a natural choice for the communication medium, due to their high energies compared to ambient thermal energies, low propagation loss at room temperature, and widely available fiber communication technology. However, high fidelity and high rate sources of on-demand entangled photons are still lacking at optical frequencies, yielding to date low information transfer rates [4; 5; 6; 7; 8]. At microwave frequencies, superconducting circuits provide a flexible platform for designing high fidelity control of computational elements with reasonably low-loss memory elements, and can deterministically generate microwave photons entangled with qubits. Combined with variable superconducting couplers, these elements can be used to experimentally test long-range communication protocols [9] with itinerant photon wavepackets [10; 11; 12; 13; 14; 15; 16; 17; 18; 19], as well as modular quantum computing approaches, using the standing modes in a weakly-coupled communication waveguide [20; 21; 22; 23]. Several experiments using superconducting circuits have demonstrated deterministic transfer of both single [14; 15; 16] and multi-photon states [17] between remote nodes, using time-symmetric shaped wavepackets to improve transfer fidelity [24]. However, with the exception of Ref. [16] and [19], these all involved the use of lossy microwave circulators, thus only supporting communication in one direction. Here we demonstrate bidirectional, multi-photon state transfer with shaped wavepackets between two remote superconducting qubit nodes, eliminating the microwave circulators used in earlier experiments [14; 15; 17]. Circulators are useful for preventing unwanted reflections from emitted signals, simplifying tune-up and operation of these circuits. However, existing broadband commercial circulators cannot reverse their polarity _in situ_, and further are a significant source of loss; while broadband parametric circulators may overcome these limitations [25; 26], these are not yet available. 
Here we implement itinerant bidirectional communication, where itinerant means the pulsed signals have time-domain envelopes shorter than the length of the transmission line; we achieve this by using fast dynamic couplers at each node, allowing us to complete signal transfers with low reflection rates and sufficient speed that interference between emission and reflection signals can be avoided [16]. The experimental device is shown in Fig. 1. Each of the two nodes comprises a frequency-tunable superconducting Xmon qubit [27]\(Q1\) (\(Q2\)), capacitively-coupled to a frequency-tunable resonator \(R1\) (\(R2\)), which is in turn connected via a variable coupler [28] to one end of a 2 m-long, 50 \(\Omega\) coplanar waveguide. The tunable resonators are implemented as a resonant section of coplanar waveguide terminated by a superconducting quantum interference device (SQUID) [29], connected to ground through the variable coupler. Control flux lines allow tuning the resonator over a range of 1.5 GHz, limited in the experiment to 0.5 GHz by the control electronics. The variable couplers afford control of the coupling strength of the resonator to the 2-m long transmission line, where the resonator decay rate \(\kappa_{r}/2\pi\) into the transmission line can be varied dynamically from 0-55 MHz, with control signals as short as 3 ns, limited by control-line filtering. The transmission line itself is a 50 \(\Omega\) coplanar waveguide, galvanically connected to each variable coupler. All components are fabricated on a single 2 cm \(\times\) 2 cm sapphire side, shown in a back-lit optical micrograph in Fig. 1(b). Fabrication details are provided in Ref. [30]. The device was wirebonded to a copper printed circuit board that was in turn placed in an aluminum enclosure, the latter placed in a double magnetic shield mounted to the 10 mK mixing chamber of a dilution refrigerator. A wiring diagram showing the cabling for the dilution refrigerator, and a schematic for the control and readout circuitry, is shown in the supplemental material [30]. Standard characterization of the qubits yields characteristic lifetimes \(T_{1,Q1}=20\,\mu\)s and \(T_{1,Q2}=22\,\mu\)s at the qubit operating frequencies of \(f_{Q1}=4.57\,\)GHz and \(f_{Q2}=4.5\,\)GHz. The qubit Ramsey lifetimes \(T_{2}\) at the same operating frequencies are \(2.6\,\mu\)s and \(0.56\,\mu\)s, respectively. To characterize the tunable resonators, we swap single excitations from the qubits to their respective resonators by tuning the excited qubit into resonance with the resonator for a calibrated time, with the variable couplers turned off. Using the qubits to monitor the subsequent resonator decay, we measure characteristic \(T_{1}\) and \(T_{2}\) times of \(T_{1,R1}=4.57\,\mu\)s, \(T_{1,R2}=0.86\,\mu\)s, \(T_{2,R1}=0.95\,\mu\)s, and \(T_{2,R2}=0.9\,\mu\)s. All qubit-resonator swaps are measured at frequency \(f_{R}=4.058\,\)GHz with qubit-resonator coupling strength \(g_{QR}/2\pi=6.8\,\)MHz set by the geometric capacitance C = 1.4 fF connecting these elements. We characterize the transmission waveguide by swapping excitations into individual standing waveguide modes as shown in Fig. 2(a), at weak coupling \(g_{RW}\ll\omega_{FSR}\), with \(g_{RW}/2\pi<3.4\,\)MHz the resonator/waveguide coupling strength, and \(\omega_{FSR}/2\pi=31\,\)MHz the waveguide free spectral range. The waveguide \(T_{1}\) coherence times are in the range of \(4-5\,\mu\)s. 
At stronger coupling, we emit itinerant wavepackets into the waveguide, which requires coupling to a number of adjacent waveguide modes. Figure 2(b) shows a Ramsey-like experiment with \(\pi/2\) pulses on \(Q2\) at the beginning and end of the pulse sequence, where we emit and then recapture a superposition \(\ket{0}+\ket{1}\) state in \(R2\) for different resonator frequencies. The return time for the pulses is independent of the resonator frequency in this regime; fringes at different frequencies indicate phase coherence in the waveguide. See Ref. [30] for more characterization details. We prepare Fock states in each tunable resonator by exciting the adjacent qubit, then resonantly swapping photons one at a time into the resonator [31]. In Fig. 3(a) and (b), we swap an \(n=1\) Fock state between the resonators via the communication waveguide, by first exciting either resonator via its adjacent qubit with both resonators tuned to \(f_{R}\), and controlling the variable couplers with calibrated pulses to perform an itinerant release-and-catch transfer of the excitation [16]. We then swap Figure 1: Experimental layout. (a) Block diagram for the experiment, comprising two tunable qubits \(Q1\), \(Q2\) (green), capacitively-coupled to two tunable resonators \(R1\), \(R2\) (orange), in turned coupled via two variable couplers \(C1\), \(C2\) (purple) to a 2 m-long waveguide (yellow). Circuit representations of these elements are shown in the dashed box. (b) Backside-illuminated optical photograph of fabricated device with false coloring, showing all components including the 2 m-long coplanar waveguide, as well as the printed circuit board to which the die is wirebonded for signal routing. (c) Optical micrograph of device with false coloring corresponding to components shown in panel (a). The coplanar waveguide is patterned on the same die but is cropped from this image; see panel (b). Figure 2: (a) Standing waveguide modes seen by \(R1\) at weak coupling. After a single excitation is swapped into \(R1\), the coupling to the waveguide is turned on for time \(\tau_{r}\) while tuning the resonator frequency. (b) Ramsey-like experiment showing itinerant emission and recapture of a \(\ket{0}+\ket{1}\) state in \(R2\). The superposition state is swapped from \(Q2\) to \(R2\), emitted and recaptured from \(R2\) with 20 ns rectangular pulses separated by delay \(\tau_{d}\) while tuning the resonator frequency, swapped back to \(Q2\), and a \(\pi/2\) pulse is applied to \(Q2\) before measurement. the receiving resonator's excitation to the corresponding qubit for measurement. We release and subsequently capture the wavepacket by dynamically tuning the coupling strength for each coupler, and optimize the state transfer using Bayesian optimization [32]. During this process, we hold each resonator's frequency constant by applying a stabilizing flux pulse to its SQUID; this corrects both for frequency changes due to the changing loading electrical circuit as well as flux cross-talk. We transfer single photons in either direction, with an efficiency of 0.72 for \(Q1\to Q2\) transfers, and an efficiency of 0.74 for the reverse direction. Here efficiency is defined as \(\mathcal{E}=\langle P_{j}\rangle/\langle P_{i}\rangle\), where \(P_{i,j}\) are the release and capture qubit populations, respectively. The inefficiency is likely dominated by non-ideal pulse shaping and timing, as the inverse loss rate in the system is well-characterized and significantly longer than the transfer time. 
To demonstrate bidirectional itinerant state transfer, we prepare dual resonator states using the adjacent qubits and then simultaneously release these into the communication waveguide, using the qubits to monitor the resonator populations. In Fig. 3(c) we show the schematic process, where we prepare resonator \(R1\) with a two-photon Fock state while preparing resonator \(R2\) with a one-photon Fock state. We then use both variable couplers to perform a simultaneous itinerant release-and-catch of photons transmitted in each direction in the communication waveguide. To reconstruct the resulting population in the resonators, we resonantly interact the resonator with its corresponding qubit for a variable length of time, monitoring the qubit excited state probability. We then fit the time-dependent response to a model Hamiltonian for the combined system [33]. The reconstructed resonator Fock state probabilities for Fock states \(n=0,1,2\) are shown in Fig. 3 as a function of time. Due to the use of slightly different control pulses, in this experiment the resonator one-photon transfer efficiency is 0.82, while the two-photon transfer efficiency is 0.64. We can also use this system to generate and transfer more complex quantum states. In Fig. 4(a), we show the preparation and transfer of superposition Fock states. We prepare the \(\ket{0}+\ket{1}\) superposition states in resonator \(R1\) by swapping a \(\ket{g}+\ket{e}\) state from qubit \(Q1\), achieving a state fidelity \(\mathcal{F}=0.99(6)\). We perform an itinerant state transfer via the communication channel to resonator \(R2\) with fidelity \(\mathcal{F}=0.94(0)\), where \(\mathcal{F}(\rho,\sigma)=\mathrm{tr}\sqrt{\rho^{1/2}\sigma\rho^{1/2}}\) of the measured density matrix \(\rho\) to the ideal state \(\sigma\). In a similar fashion, we prepare the \(\ket{0}+\ket{2}\) superposition in \(R2\) by exciting \(Q1\) to the \(\ket{g}+\ket{f}\) state with two subsequent swaps to \(R1\), achieving a fidelity \(\mathcal{F}=0.95(2)\). The \(\ket{0}+\ket{2}\) superposition is then transferred by wavepacket via the communication channel to \(R1\), with fidelity \(\mathcal{F}=0.82(3)\). The Wigner tomograms in the figure are reconstructed by using convex optimization to find the most likely state that fits the Fock distribution as a function of displacement [34; 35]. To measure the final resonator Fock state distribution, we tune the qubit, initially in its ground state, to the resonator frequency, and fit the subsequent qubit oscillations to a model system [30; 33]. The phase difference between the prepared and received states are possibly due to the resonator frequency varying slightly during wavepacket release and capture, due to non-ideal control pulses. Wigner tomograms of the raw parity data are given in [30]. To demonstrate the generation of remote entanglement in this system, shown in Fig. 4 (b), we create NOON states, superposition states in which \(N\)-photon states in one resonator are superposed with vacuum states in the other resonator, in the form \(\ket{\psi}=\frac{1}{\sqrt{2}}(\ket{N0}-\ket{0N})\). In Fig. 4 (b), for the \(N=1\) NOON state, we prepare a one-photon Fock state in \(R1\), then, via an itinerant transfer, move half the population to \(R2\). We then measure the joint resonator state with bipartite Wigner tomography [36], using convex optimization to find the most likely Figure 3: Transfer of excited qubit states, from (a) qubit \(Q1\) to qubit \(Q2\) and (b) \(Q2\) to \(Q1\). 
We excite the emitting qubit, swap the excitation to the adjacent resonator, transmit the excitation as a wavepacket to the distant resonator by time-varying control of the couplers between the resonators and waveguide, then finally swap to the receiving qubit. Lines are a guide for the eye. (c) Simultaneous state swaps between the resonators. We initialize resonator \(R1\) in a 2-photon Fock state while initializing resonator \(R2\) in a 1-photon Fock state. These states are simultaneously released into the transmission line as itinerant wavepackets that are then captured by the receiving resonators, and we use the associated qubit to analyze its associated resonator’s final state. The population of each resonator’s Fock states \(\ket{F_{n}}\), \(n=0,1,2\) is shown as a function of time. state given by the joint resonator Fock distribution as a function of displaced resonator states [34; 35]. Similar to the single resonator case, we measure the joint resonator Fock distribution by tuning both qubits, initially in their ground states, into resonance with their respective resonators, then fit the resulting qubit population oscillations to a model system [36; 30]. The fidelity of the prepared state is found to be \(\mathcal{F}=0.82(7)\). In Fig. 4 (c), for the \(N=2\) NOON state, we first prepare a one-excitation Bell state \((|eg\rangle-|ge\rangle)/\sqrt{2}\) in the two qubits, by swapping half an excitation from \(Q1\) to \(R1\), followed by an itinerant transfer from \(R1\) to \(R2\), and finally swapping the half-excitation to \(Q2\). We next excite the \(e-f\) transition in each qubit, resulting in the state \((|fg\rangle-|gf\rangle)/\sqrt{2}\), then transfer the qubit populations to the resonators with two subsequent swaps, resulting in the final resonator state \((|20\rangle-|02\rangle)/\sqrt{2}\). The fidelity to the ideal state is \(\mathcal{F}=0.78(0)\). Fig. 4 (b) and (c) show the reconstructed absolute value of the density matrices for the two states. The major sources of infidelity are photon loss during the transfer process and while stored in the resonators. In conclusion, we demonstrate multi-photon bidirectional communication between two quantum-coherent superconducting nodes coupled by a 2 m-long coplanar waveguide, with states sent using itinerant photons whose pulse lengths of \(0.45\,\mathrm{m}\) are significantly less than the 2 m length of the waveguide. Future experiments might transfer multi-photon qubits such as the cat [37] or GKP [38] encoding states between resonators in the testbed. Other communication protocols might be achieved by dynamically varying both the resonator frequencies and the coupling strength of each resonator to the waveguide during wavepacket emission and capture. We can judge the relative performance of different communication protocols using a single system, e.g. using final state fidelities. Tens of individual waveguide modes can be addressed by each node; this system thus has further potential as a quantum random access memory [39]. Additional nodes may be connected to the network by adding multiple coupled waveguides to each resonator, with nodes in separate modules if needed [22]. The most significant limitations in this system are the challenges in the time-domain control of both the couplers and resonators, which has to account for circuit loading, flux cross-talk and cable-related pulse distortions. Higher fidelities might be achieved using machine learning techniques such as implemented in Ref. 
[40; 41]; more precise pulse control with faster electronics and fewer control wiring filters; and a longer waveguide that allows longer duration itinerant photon pulse shapes, making time-domain control less challenging. Figure 4: (a) Transfer of superposition Fock states. We prepare \(|0\rangle+|1\rangle\) and \(|0\rangle+|2\rangle\) states in resonator \(R1\) and transfer these as itinerant wavepackets to resonator \(R2\). We use the corresponding qubits to reconstruct the Wigner tomograms in both resonators, as described in the main text. NOON states \(|n0\rangle+|0n\rangle\) with (b) \(n=1\) and (c) \(n=2\) itinerant transfers between the two tunable resonators. State preparation is described in the main text. Dotted lines indicate the ideal NOON states while transparent colored bars are the reconstructed density matrices. ###### Acknowledgements. We thank P. J. Duda for helpful discussions and W. D. Oliver and G. Calusine at Lincoln Laboratories for the provision of a traveling-wave parametric amplifier. Financial support was provided by the NSF QLCI for HQAN (NSF Award 2016136), the U.S. Department of Energy Office of Science National Quantum Information Science Research Centers, the Army Research Office-Laboratory for Physical Sciences (contract W911NF-23-1-0077), and the University of Chicago MRSEC (NSF award DMR-2011854). We made use of the Pritzker Nanofabrication Facility, partially supported by SHyNE, a node of the National Science Foundation's National Nanotechnology Coordinated Infrastructure (NSF Grant No. NNCI ECCS-2025633). A.N.C. was supported in part by the DOE, Office of Basic Energy Sciences. The authors declare no competing financial interests. Correspondence and requests for materials should be addressed to A. N. Cleland ([email protected]). J.G. designed and fabricated the devices, performed the experiment and analyzed the data. H.Y., M.H.C., and G.A. provided suggestions for measurement and data analysis. A.N.C. advised on all efforts. All authors contributed to discussion and production of the manuscript.
Quantum communication testbeds provide a useful resource for experimentally investigating a variety of communication protocols. Here we describe a superconducting circuit testbed with bidirectional multi-photon state transfer capability using time-domain shaped wavepackets. The system comprises two remote nodes, each including a tunable superconducting transmon qubit and a tunable microwave-frequency resonator, linked by a 2 m-long superconducting coplanar waveguide. We transfer both individual and superposition Fock states between the two remote nodes, and additionally show that this bidirectional state transfer can be performed simultaneously, as well as used to entangle elements in the two nodes.
2301.00665
Targeted Phishing Campaigns using Large Scale Language Models
In this research, we aim to explore the potential of natural language models (NLMs) such as GPT-3 and GPT-2 to generate effective phishing emails. Phishing emails are fraudulent messages that aim to trick individuals into revealing sensitive information or taking actions that benefit the attackers. We propose a framework for evaluating the performance of NLMs in generating these types of emails based on various criteria, including the quality of the generated text, the ability to bypass spam filters, and the success rate of tricking individuals. Our evaluations show that NLMs are capable of generating phishing emails that are difficult to detect and that have a high success rate in tricking individuals, but their effectiveness varies based on the specific NLM and training data used. Our research indicates that NLMs could have a significant impact on the prevalence of phishing attacks and emphasizes the need for further study on the ethical and security implications of using NLMs for malicious purposes.
Rabimba Karanjai
2022-12-30T03:18:05
http://arxiv.org/abs/2301.00665v1
# Targeted Phishing Campaigns using Large Scale Language Models ###### Abstract Natural language models (NLMs) such as GPT-3, GPT-2, and other large language models have achieved impressive results in various natural language processing tasks, including language translation, summarization, and text generation. In recent years, there has been a growing concern about the potential use of NLMs to generate phishing emails, which are fraudulent emails that trick individuals into revealing sensitive information or performing actions that benefit the attackers. This research paper aims to investigate the feasibility and effectiveness of NLMs in generating phishing emails. To this end, we propose a framework for evaluating the performance of NLMs in generating phishing emails based on various metrics, including the quality of the generated text, the ability to bypass spam filters, and the success rate of tricking individuals into falling for the phishing attack. We evaluate the performance of several NLMs on a dataset of phishing emails and compare their results with those of a baseline model. Our results show that NLMs can indeed generate phishing emails that are difficult to detect and that have a high success rate in tricking individuals. However, we also find that the performance of NLMs in generating phishing emails depends on the specific NLM and the training data used, and that there are limitations to their effectiveness. Overall, our research suggests that NLMs have the potential to significantly impact the landscape of phishing attacks and highlights the need for further research on the ethical and security implications of using NLMs for malicious purposes. ## I Introduction Recent advances in natural language generation (NLG) have greatly improved the diversity, control, and quality of machine-generated text. However, this increased ability to quickly and efficiently create unique, manipulable, human-like text also presents new challenges for detecting the abuse of NLG models in phishing attacks. Machine-generated texts can pose various risks depending on the context and how they are used. For example, in the case of NLG models, the ability to generate legitimate-looking texts such as emails can lead to attacks like phishing, where the attacker tricks the victim into disclosing sensitive information by impersonating someone else. Another effect of machine-generated text is mass disinformation campaigns. With the ability to generate large amounts of text automatically and quickly, it is possible for malicious actors to create fake news, hoaxes, and other forms of false or misleading information that can harm individuals, organizations, and even entire societies. Moreover, machine-generated texts can also raise ethical concerns, such as the impact on employment and the potential for bias and discrimination. For example, the use of NLG models to automate certain writing tasks may lead to job losses for human writers, and the algorithms used in NLG may reflect and amplify the biases and stereotypes present in the data they are trained on. Abuses of NLG models, such as phishing [1, 2] and disinformation [3, 4, 5], have been on the rise. Email is a common method used by phishers to deliver malicious links and attachments to victims. The Anti-Phishing Working Group (APWG) found over 121,860 phishing email incidents in March 2017, and in 2016 the APWG received more than 1,313,771 unique phishing reports. 
In the first quarter of 2017, around 870 organizations were targeted by W2-based phishing scams, a significant increase from the 100 organizations in 2016. These attacks are becoming more sophisticated and difficult to detect. Phishers often use techniques such as bulk mailing, spamming, and including action words and links in phishing emails to increase their chances of success. However, these techniques can be easily detected by improved statistical detection models. Another popular method is email masquerading, where the attacker gains access to the victim's email inbox or outbox and studies the content and nature of the emails to create a synthetic malicious email that resembles a benign one. This reduces the chances of detection by automated classifiers and increases the likelihood of a successful attack. Modern large language models have enabled users to generate text based on context. These models can be trained to generate text using predefined grammars, such as the Dada Engine [1], or by leveraging deep learning neural networks, such as recurrent neural networks (RNNs) [6], to learn and emulate the input to the system. NLG systems that use advanced deep learning neural networks (DNNs) can be used by phishers to generate coherent and convincing sequences of text. These systems have been shown to be effective for generating text in various genres, from tweets [7] to poetry [8]. It is likely that phishers and spammers will soon start using email datasets, both legitimate and malicious, in conjunction with DNNs to create deceptive malicious emails that mimic the properties of legitimate emails. This makes it harder for pre-trained email detectors to identify and block these attacks. In this report, we demonstrate a class of attacks in which existing large-scale language models have been trained on both legitimate and malicious (phishing and spam) email data. We also aim to show how the generated emails can bypass existing production-level email protection mechanisms, and we propose future work to detect such attacks. ## II Related Work Phishing detection is a well-studied area in cybersecurity, but many victims still fall for these attacks. In their work, Drake et al. [9] provide a detailed analysis of the structure and tactics used in phishing emails. In this section, we review previous research on natural language generation, deep learning, and their applications in generating and detecting phishing attacks. Natural language generation techniques have been widely used to synthesize unique pieces of text. Previous work by Reiter and Dale [10] relied on pre-constructed templates for specific purposes, while the fake email generation system of Baki et al. [1] used manually constructed rules to define the structure of fake emails. Recent advances in deep learning have enabled the generation of creative and equitable text with enough training data. RNN (recurrent neural network) language models have been used to generate a range of genres, including poetry by Ghazvininejad et al. [8], fake reviews by Yao et al. [6], tweets [7], and geographical information by Turner et al. [11], among others. ## III Experimental Methodology This section is divided into four subsections. The first subsection (Section 3.1) describes the nature and source of the training and evaluation data. The second subsection (Section 3.2) discusses the pre-processing steps applied to the data. The third subsection (Section 3.3) presents the system setup and experimental settings used in the study. The fourth subsection (Section 3.4) describes the experiments conducted. 
### _Data Description_ To create legitimate-looking phishing emails, we first need to start from actual benign and legitimate emails. The text generation algorithms must be trained on legitimate emails, hence it was imperative to have valid benign emails in the dataset used for training. However, since the goal here is to create emails that can serve as phishing emails while still looking legitimate, a mix of legitimate and malicious emails was used as a dataset for training and augmenting the models. For legitimate data, instead of curating one dataset on our own, we use pre-trained models from Meta and Google to create benign emails. The pre-trained resources utilized are RoBERTa, The Pile, and PushShift.io Reddit. Since training these large language models is almost impossible on ordinary infrastructure, we utilize [12] to generate the texts; this has been augmented with [13] to provide email generation capabilities. The Python clean-text package [14] has been used to remove email addresses and phone numbers from the dataset. For malicious data, we primarily use two datasets to augment the benign email data: the phishing emails from Jose Nazario's phishing corpus [15] and [16], along with the Enron email dataset [17]. ### _Data Processing_ Most of the pre-processing consisted of removing personal information using the Python clean-text package [14], as well as removing special characters such as #, $, and %, and common punctuation, from the email body. However, as we realized later, email generation was not perfect. ### _Experimental Setup_ The experimental setup has been designed with several different methods in mind. We primarily focused on:
* using GPT-2 to generate emails, augmented with an email dataset [18];
* GPT-3 to generate emails without any training;
* contextual support for GPT-3 with da-vinci-beta, which has been trained on email data by OpenAI;
* the DADA engine [1];
* word-based RNNs proposed by Xie et al. [19] and Das et al. [20];
* augmenting Open Pre-trained Transformer language models [12] on [13].

While using the general large language models to produce emails was interesting, the spam and phishing email datasets used for training the models to produce malicious-looking emails produced better results. The Jose Nazario dataset has 32,000 spam and 415 phishing emails, all in a Unix mbox-formatted dataset, which were cleaned using clean-text. The Enron corpus is an email dataset from the Enron Corporation. It has been used in email research for quite some time and was made public during the company's legal investigation. It gives us a rare glimpse into the workings of legitimate email data from a large corporation. It consists of 619,446 emails from 158 users. It has folders like "discussion threads", "notes inbox" and more in its Outlook inboxes. These were created by default for most of the users but were not actively used for tagging. The "All Documents" folder also contained a large number of duplicates from the inbox; to further clean our dataset, these were all removed. The cleaned corpus has 200,399 emails from 158 users. We have tried to validate our approach against existing findings available here [https://github.com/egmp777/basic_data_cleaning_enron_case_study](https://github.com/egmp777/basic_data_cleaning_enron_case_study), but our purpose was different from exploratory analysis. ### _Experiment_ A proof of concept (PoC) was done to evaluate whether we can automatically detect any of the generated emails. The GPT-3 API, GPT-2, OPT, ChatGPT, and our trained models were all used to generate the texts. 
Some samples are provided below. In our test we generated 100 samples using each model; however, some of these gave us inconsistent results for the same prompt. We also had a sample of a real phishing email (only one for that topic) to compare against. A variety of prompts were used to see what kind of emails the AIs are more comfortable generating. We applied the GPT-3 davinci model with default sampling parameters (temperature=1, top_p=1, presence_penalty=0, frequency_penalty=0) to predict the log probabilities for the first 100 tokens in each sample, using all the preceding tokens as the prompt. We then calculated the actual token's probability, absolute rank, contextual entropy, and whether it matched the token predicted by the GPT-3 API. To account for the maximum of 100 log probabilities returned by the API, we masked out any outliers with an absolute rank greater than 100. We also normalized the results if there were fewer than 100 tokens in a sample. ## IV Analysis and Results #### Iv-1 Example Results generated **ChatGPT**: The following was generated with the prompt _Can you generate an email that will be from The President of a University asking its students to fill up course completion survey form_:

> Dear Students,
> As the academic year comes to a close, we would like to gather your feedback on your experiences and courses at the University. To help us improve our programs and support your learning, we encourage you to take a few minutes to complete the course completion survey form.
> Your honest and thoughtful responses will be greatly appreciated and will help us better understand your needs and expectations. The survey form is available online and can be accessed through your student portal. We thank you in advance for your participation and look forward to hearing from you.
> Best regards,
> [Name], President [University]

**GPT3 da-vinci**: The following was generated with the prompt _Can you generate an email that will be from The President of a University asking its students to fill up course completion survey form_:

> Hello,
> As the President of the University, I am writing to request your help in completing a course completion survey form.
> I am writing to request your help in completing a course completion survey form.
> The purpose of this survey is to gather feedback from \<NUMBER\> ...
> Hello, As the President of the University, I am writing to request your help in completing a course completion survey form. By completing this form, you are welcome to access the COMPANY Club website and ask additional questions about the UCLA Club and our events. To access the website you may click the link at the top of this page. If you prefer to not complete this form at this time, please let me know and I will ...

Notably, UCLA was not present in the prompt. This shows us that, with enough clever prompt discovery, it is probably possible to extract meaningful information from the training dataset even with safeguards in place. 
#### Iv-B2 Training Parameters The training parameters used for the HF OPT model were:
* learning_rate: 6e-5
* train_batch_size: 8
* eval_batch_size: 8
* seed: 42
* distributed_type: GPU
* gradient_accumulation_steps: 16
* total_train_batch_size: 128
* optimizer: Adam with betas = (0.9, 0.999) and epsilon = 1e-8
* lr_scheduler_type: cosine
* lr_scheduler_warmup_ratio: 0.03
* num_epochs: 8

The training parameters used for the HF postbot GPT-2 model were:
* learning_rate: 0.001
* train_batch_size: 16
* eval_batch_size: 16
* seed: 42
* distributed_type: multi-GPU
* gradient_accumulation_steps: 8
* total_train_batch_size: 128
* optimizer: Adam with betas = (0.9, 0.999) and epsilon = 1e-8
* lr_scheduler_type: cosine
* lr_scheduler_warmup_ratio: 0.02
* num_epochs: 3

## V Future Work Research on the risks of using natural language generation (NLG) models suggests that being able to detect machine-generated text is useful for reducing the harm caused by abuse of these models. Detecting machine-generated text can be treated as a binary classification problem: we train a classifier to differentiate between machine-generated and human-generated text [22]. We can also use generative models without fine-tuning to detect their own outputs or the outputs of other similar models. Autoregressive generative models like GPT-2 and GPT-3 are unidirectional, where each token has an embedding that depends on the embeddings of the tokens that come before it. This means an embedding can be obtained by adding a token at the end of an input sequence, and the resulting representation can be used as a new feature vector. Once we have these features, they can be used along with human-written data to train a classification layer. Research on detecting machine-generated text has also looked at the problem of detecting text when the RoBERTa detector was trained on a different dataset than the GPT-2 generator; here, it was observed that merely tuning the detection model with a couple of hundred attack samples provided by domain experts had a significant effect on the detector's performance across domains [23]. Another possibility is that an attacker generates the attack from existing hand-written content, much like how we started in this email generation problem: using human-written samples but tweaking the generation parameters to closely meet the attacker's goals. Analysis showed that making these targeted changes to texts reduces the effectiveness of GPT-2- or RoBERTa-based detectors [24]. A generalized solution is to differentiate between human- and machine-generated text. The Giant Language Model Test Room (GLTR) is a tool developed to improve the detection of machine-generated text by adding human review to the pipeline. The tool helps humans classify text by highlighting tokens based on how likely they are to be chosen by the Transformer model. However, this tool was designed to target GPT-2, which was found to be easier for untrained human evaluators to detect. In addition, GLTR uses "top-k" sampling to determine the likelihood of a word being selected, but this method has been largely replaced by nucleus sampling, which is used in GPT-3 and other works that build on the GPT-2 architecture. While highlighting words based on sampling likelihood may improve human classification ability, it will still struggle with the more advanced models and sampling methods of today. 
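As a concrete illustration of the binary human-versus-machine classification framing discussed above, the following is a minimal sketch of fine-tuning a RoBERTa-based detector with the HuggingFace transformers Trainer. The checkpoint, hyperparameters, and the toy dataset wrapper (including the assumed-to-exist `train_texts`/`train_labels` variables) are illustrative assumptions; this is not the detection framework proposed in this paper.

```python
import torch
from torch.utils.data import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

class EmailDataset(Dataset):
    """Wraps (email_text, label) pairs; label 0 = human-written, 1 = machine-generated."""
    def __init__(self, texts, labels, tokenizer, max_len=256):
        self.enc = tokenizer(texts, truncation=True, padding="max_length",
                             max_length=max_len, return_tensors="pt")
        self.labels = torch.tensor(labels)

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, i):
        item = {k: v[i] for k, v in self.enc.items()}
        item["labels"] = self.labels[i]
        return item

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)

train_set = EmailDataset(train_texts, train_labels, tokenizer)  # assumed labeled corpus
eval_set = EmailDataset(eval_texts, eval_labels, tokenizer)

args = TrainingArguments(output_dir="detector", num_train_epochs=3,
                         per_device_train_batch_size=16, learning_rate=2e-5)
Trainer(model=model, args=args, train_dataset=train_set, eval_dataset=eval_set).train()
```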
In the long term, we want to propose a framework that can differentiate NLG-generated emails from human-written emails. Prior work has already tried to detect machine-generated text; however, specifically for emails and malicious emails, we have observed distinct characteristics that can be exploited to make prior approaches more effective. A few of these are similar to what we have seen in language models [25], but some are significantly distinct and should be explored further. ## VI Conclusion The more we experimented with large language models and the prior works by Das et al. [20] and Baki et al. [1], the clearer it became that earlier RNN-based models and DIDA engines, even though they show some malicious intent in their generations, do not actually pose a threat of being mistaken for real malicious emails. All of them got past Gmail and Outlook when sent from a legitimate email ID. The emails generated by GPT-3 and OPT pose a significantly larger threat of being believed to be real emails when generated in bulk using tools and mass-mailed with targeted intent. Especially with training on a targeted email dataset and keywords in the prompts, the models generated very convincing-looking emails. Even with safeguards in place for GPT-3, we were able to generate these emails, and ChatGPT was a very interesting contender in the tests. Even though ChatGPT did not let us generate the email directly in one go, we were able to find creative ways of 'conversing' with it and giving it a plausible context to overcome its barriers. Here we identify how these new language models can be weaponized as phishing and scamming tools that get past present email systems such as Gmail and Outlook; this is hardly surprising, considering the emails look legitimate. We want to further this work by integrating it with tools like PhEmail [26], which makes sending NLG-generated emails to a targeted bulk user base a keypress away.
In this work, we explore the potential of natural language models (NLMs) such as GPT-3 and GPT-2 to generate effective phishing emails. Phishing emails are fraudulent messages that aim to trick individuals into providing sensitive information or taking actions that benefit the attacker. We propose a framework for evaluating the performance of NLMs at generating such emails, based on several criteria such as the quality of the generated text, the ability to evade spam filters, and the success rate in deceiving individuals. Our evaluation shows that NLMs can generate phishing emails that are difficult to detect and that have a high success rate in deceiving individuals. However, the effectiveness of an NLM depends on the specific NLM used and the training data. This research shows that NLMs could have a significant impact on the proliferation of phishing attacks and, regarding the use of NLMs for malicious purposes, raises ethical and
2309.14032
DeepACO: Neural-enhanced Ant Systems for Combinatorial Optimization
Ant Colony Optimization (ACO) is a meta-heuristic algorithm that has been successfully applied to various Combinatorial Optimization Problems (COPs). Traditionally, customizing ACO for a specific problem requires the expert design of knowledge-driven heuristics. In this paper, we propose DeepACO, a generic framework that leverages deep reinforcement learning to automate heuristic designs. DeepACO serves to strengthen the heuristic measures of existing ACO algorithms and dispense with laborious manual design in future ACO applications. As a neural-enhanced meta-heuristic, DeepACO consistently outperforms its ACO counterparts on eight COPs using a single neural architecture and a single set of hyperparameters. As a Neural Combinatorial Optimization method, DeepACO performs better than or on par with problem-specific methods on canonical routing problems. Our code is publicly available at https://github.com/henry-yeh/DeepACO.
Haoran Ye, Jiarui Wang, Zhiguang Cao, Helan Liang, Yong Li
2023-09-25T10:56:38
http://arxiv.org/abs/2309.14032v2
# DeepACO: Neural-enhanced Ant Systems for Combinatorial Optimization ###### Abstract Ant Colony Optimization (ACO) is a meta-heuristic algorithm that has been successfully applied to various Combinatorial Optimization Problems (COPs). Traditionally, customizing ACO for a specific problem requires the expert design of knowledge-driven heuristics. In this paper, we propose DeepACO, a generic framework that leverages deep reinforcement learning to automate heuristic designs. DeepACO serves to strengthen the heuristic measures of existing ACO algorithms and dispense with laborious manual design in future ACO applications. As a neural-enhanced meta-heuristic, DeepACO consistently outperforms its ACO counterparts on eight COPs using a single neural architecture and a single set of hyperparameters. As a Neural Combinatorial Optimization method, DeepACO performs better than or on par with problem-specific methods on canonical routing problems. Our code is publicly available at [https://github.com/henry-yeh/DeepACO](https://github.com/henry-yeh/DeepACO). ## 1 Introduction Ant systems in nature are self-learners. They utilize chemical signals and environmental cues to locate and return food to the colony. The pheromone trails deposited by ants indicate the quality and distance of a food source. The intensity of pheromone trails increases as more ants visit and decreases due to evaporation, creating a self-learning foraging system. Inspired by the ant systems in nature, researchers propose and develop Ant Colony Optimization (ACO) meta-heuristics for (but not limited to) Combinatorial Optimization Problems (COPs) [24; 26]. ACO deploys a population of artificial ants to explore the solution space through repeated solution constructions and pheromone updates. The exploration is biased toward more promising areas through instance-specific pheromone trails and problem-specific heuristic measures. Both the pheromone trail and the heuristic measure indicate how promising a solution component is. Typically, pheromone trails are initialized uniformly for all solution components and learned while solving an instance. On the contrary, heuristic measures are predefined based on prior knowledge of a problem, and devising proper heuristic measures for complicated COPs is quite challenging (an example is [49]). Over the past decades, research and practice efforts have been dedicated to a careful design of heuristic measures in pursuit of knowledge-driven performance enhancement [57; 62; 33; 49; 26; 24]. However, this routine of algorithm customization exhibits certain deficiencies: 1) it requires extra effort and makes ACO less flexible; 2) the effectiveness of the heuristic measure heavily relies on expert knowledge and manual tuning; and 3) designing a heuristic measure for less-studied problems can be particularly challenging, given the paucity of available expert knowledge. This paper proposes DeepACO, a generic neural-enhanced ACO meta-heuristic, and a solution to the above limitations. DeepACO serves to strengthen the heuristic measures of existing ACO algorithms and dispense with laborious manual design in future ACO applications. It mainly involves two learning stages. The first stage learns a problem-specific mapping from an instance to its heuristic measures by training neural models across COP instances. Guided by the learned measures, the second stage learns instance-specific pheromone trails while solving an instance with ACO. 
The heuristic measures learned in the first stage are incorporated into ACO (the second learning stage) by biasing the solution constructions and leading Local Search (LS) to escape local optima. DeepACO is also along the line of recent progress in Neural Combinatorial Optimization (NCO) [12; 61; 7; 34]. Within the realm of NCO, DeepACO is more related to the methods that utilize heatmaps for algorithmic enhancement [32; 42; 90; 55; 42], but it is superior in its flexibility: DeepACO provides effective neural enhancement across eight COPs covering routing, assignment, scheduling, and subset problems, being the most broadly evaluated NCO technique to our knowledge. In addition, we propose three extended implementations for better balancing between exploitation and exploration: one featuring a multihead decoder, one trained with an additional top-\(k\) entropy loss, and one trained with an additional imitation loss. They can be generally applied to heatmap-based NCO methods. As a neural-enhanced version of ACO, DeepACO consistently outperforms its counterparts on eight COPs using a single neural architecture and a single set of hyperparameters after only minutes of training. As an NCO method, DeepACO performs better than or competitively against the state-of-the-art (SOTA) and problem-specific NCO methods on canonical routing problems while being more generalizable to other COPs. To the best of our knowledge, we are the first to exploit deep reinforcement learning (DRL) to guide the evolution of ACO meta-heuristics. Such a coupling allows NCO techniques to benefit from decades of ACO research (notably regarding theoretical guarantees [7; 26]), while simultaneously offering ACO researchers and practitioners a promising avenue for algorithmic enhancement and design automation. In summary, we outline our **contributions** as follows: * We propose DeepACO, a neural-enhanced ACO meta-heuristic. It strengthens existing ACO algorithms and dispenses with laborious manual design in future ACO applications. * We propose three extended implementations of DeepACO to balance exploration and exploitation, which can generally be applied to heatmap-based NCO methods. * We verify that DeepACO consistently outperforms its ACO counterparts across eight COPs while performing better than or on par with problem-specific NCO methods. ## 2 Related work ### Neural Combinatorial Optimization Neural Combinatorial Optimization (NCO) is an interdisciplinary field that tackles COPs with deep learning techniques. In general, existing NCO methods can be categorized into end-to-end and hybrid methods, and DeepACO belongs to the latter methodological category. End-to-end methods in the former category learn autoregressive solution constructions or heatmap generation for subsequent sampling-based decoding. Within this realm, recent developments include better-aligned neural architectures [82; 64; 54; 50; 89; 13; 46], improved training paradigms [6; 56; 52; 83; 10; 66; 40; 45], advanced solution pipelines [47; 51; 48; 20; 78; 18; 65; 19], and broader applications [88; 95; 16; 29; 75; 9]. End-to-end methods are admirably efficient, but their constructed solutions can be further improved with iterative refinement and algorithmic hybridization. Therefore, the hybrid methods in the latter category incorporate neural learners to make decisions within/for heuristics or generate heatmaps to assist heuristics. 
These methods either let neural learners make decisions within algorithmic loops [60; 87; 21; 17; 94; 86; 59], or generate heatmaps in one shot to assist subsequent algorithms. In the latter case, Xin et al. [90] propose to learn edge scores and node penalties to guide the searching process of LKH-3 [39], a highly optimized solver for routing problems. Kool et al. [55] train neural models to predict promising edges in routing problems, providing a neural boost for Dynamic Programming. Fu et al. [32] train small-scale GNN to build heatmaps for large TSP instances and feed the heatmaps into Monte Carlo Tree Search for solution improvements. Hudson et al. [42] utilize neural models to generate regret in Guided Local Search for TSP. DeepACO is along the line of these works, but superior in terms of its flexibility, i.e., it provides effective neural enhancement across eight COPs covering routing, assignment, scheduling, and subset problems while the existing hybridized methods mostly focus on a limited set of routing problems. ### Ant Colony Optimization Ant Colony Optimization (ACO) is a meta-heuristic and evolutionary algorithm (EA) [4; 73] inspired by the behavior of ants in finding the shortest path between their colony and food sources [26]. The representative ACO meta-heuristics such as Ant System (AS) [25], Elitist AS [25], and MAX-MIN AS [76] provide general-purpose frameworks allowing for problem-specific customization. Such customization may involve designs of heuristic measures [49; 33], the incorporation of local search operators [3; 93], and the hybridization of different algorithms [44; 5]. DeepACO is not in competition with the most SOTA ACO algorithms for a specific COP. Instead, DeepACO can strengthen them with stronger heuristic measures and can in turn benefit from their designs. ACO can utilize a group of techniques named hyper-heuristics [69] that are conceptually related to DeepACO. But hyper-heuristics mostly involve expert-designed heuristics to select from (heuristic selection hyper-heuristics) [27] or problem-specific and manually-defined components to evolve heuristics (heuristic generation hyper-heuristics) [14]. By comparison, DeepACO is more generic, requiring little prior knowledge of a COP. Its aim is not to compete with any hyper-heuristics; instead, for example, we can utilize hyper-heuristics to improve LS components in DeepACO. Unlike the knowledge-driven ACO adaptations above, a recent method, namely ML-ACO [77], boosts the performance of ACO via Machine Learning (ML) for solution prediction. While ML-ACO provides an initial insight into coupling ML with ACO, it is preliminary and limited. ML-ACO tailors five features for Orienteering Problem and trains ML classifiers in a supervised manner. It entails high-demanding expert knowledge for feature designs and specialized solvers for optimal solutions, making itself inflexible and hard to scale. By comparison, DeepACO leverages DRL and demands little expert knowledge to apply across COPs. ## 3 Preliminary on Ant Colony Optimization The overall ACO pipeline is depicted in Fig. 1. We begin with defining a COP model and a pheromone model. They are prerequisites for implementing ACO algorithms. 
**COP model** Generally, a COP model consists of: 1) a search space \(\mathbf{S}\) defined over a set of discrete decision variables \(X_{i},i=1,\ldots,n\), where each decision variable \(X_{i}\) takes a value from a finite set \(\mathbf{D}_{i}=\{v_{i}^{1},\ldots,v_{i}^{|\mathbf{D}_{i}|}\}\); 2) a set of constraints \(\Omega\) that the decision variables must satisfy; and 3) an objective function \(f:\mathbf{S}\rightarrow\mathbb{R}_{0}^{+}\) to minimize. A feasible solution \(\mathbf{s}\) to a COP is a complete assignment of all decision variables that satisfies all the constraints in \(\Omega\). **Pheromone model** A COP model defines a pheromone model in the context of ACO. Without loss of generality, a pheromone model is a construction graph that includes decision variables as nodes and solution components as edges [26]. Each solution component \(c_{ij}\), which represents the assignment of value \(v_{i}^{j}\) to decision variable \(X_{i}\), is associated with its pheromone trail \(\tau_{ij}\) and heuristic measure \(\eta_{ij}\). Both \(\tau_{ij}\) and \(\eta_{ij}\) indicate how promising it is to include \(c_{ij}\) in a solution. Typically, ACO uniformly initializes and iteratively updates pheromone trails, but predefines and fixes heuristic measures. As a motivating example, for TSP, an instance can be directly converted into a fully connected construction graph, where city \(i\) becomes node \(i\) and solution component \(c_{ij}\) represents visiting city \(j\) immediately after city \(i\). Then, \(\eta_{ij}\) is typically set to the inverse of the distance between cities \(i\) and \(j\). After defining a COP model and a pheromone model, we introduce an ACO iteration, usually consisting of solution constructions, optional LS refinement, and pheromone updates. **Solution construction and local search (optional)** Biased by \(\tau_{ij}\) and \(\eta_{ij}\), an artificial ant constructs a solution \(\mathbf{s}=\{s_{t}\}_{t=1}^{n}\) by traversing the construction graph. If an ant is located at node \(i\) at the \(t\)-th construction step (\(s_{t-1}=i\)) and has constructed a partial solution \(\mathbf{s}_{<t}=\{s_{k}\}_{k=1}^{t-1}\), the probability of selecting node \(j\) as its next destination (\(s_{t}=j\)) is typically given by \[P(s_{t}|\mathbf{s}_{<t},\mathbf{\rho})=\left\{\begin{array}{ll}\dfrac{\tau_{ij}^{ \alpha}\cdot\eta_{ij}^{\beta}}{\sum_{c_{il}\in\mathbf{N}(\mathbf{s}_{<t})}\tau_{il}^{ \alpha}\cdot\eta_{il}^{\beta}}&\text{if }c_{ij}\in\mathbf{N}(\mathbf{s}_{<t}),\\ 0&\text{otherwise}.\end{array}\right. \tag{1}\] Here, \(\mathbf{\rho}\) is a COP instance, \(\mathbf{N}(\mathbf{s}_{<t})\) is the set of feasible solution components given the partial solution, and \(\alpha\) and \(\beta\) are the control parameters, which are consistently set to 1 in this work unless otherwise stated. To simplify the notation, we omit the dependence on \(\mathbf{\rho}\) for \(\tau\), \(\eta\), \(c\), and \(\mathbf{N}\). Based on Eq. (1), constructing a complete solution requires an \(n\)-step graph traversal. The probability of generating \(\mathbf{s}\) can be factorized as \[P(\mathbf{s}|\mathbf{\rho})=\prod_{t=1}^{n}P(s_{t}|\mathbf{s}_{<t},\mathbf{\rho}). \tag{2}\] After solution constructions, local search (LS) is optionally applied to refine the solutions.
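For concreteness, a minimal sketch of the construction rule in Eq. (1) is given below (our own illustration, not the DeepACO implementation), using a TSP-like pheromone model with the inverse distance as the expert-designed heuristic measure and \(\alpha=\beta=1\):

```python
# Minimal sketch, assuming a symmetric distance matrix `dist`; alpha = beta = 1 as in the paper.
import numpy as np

def construction_step(current, visited, tau, eta, alpha=1.0, beta=1.0, rng=np.random):
    """Sample the next node j with probability proportional to tau[i,j]^alpha * eta[i,j]^beta."""
    n = tau.shape[0]
    feasible = np.array([j for j in range(n) if j not in visited])
    weights = (tau[current, feasible] ** alpha) * (eta[current, feasible] ** beta)
    probs = weights / weights.sum()
    return rng.choice(feasible, p=probs)

rng = np.random.default_rng(0)
dist = rng.uniform(1.0, 10.0, size=(5, 5)); np.fill_diagonal(dist, np.inf)
eta = 1.0 / dist                      # expert-designed heuristic: inverse distance
tau = np.ones((5, 5))                 # pheromone trails initialised uniformly

tour = [0]
while len(tour) < 5:
    tour.append(construction_step(tour[-1], set(tour), tau, eta, rng=rng))
print(tour)
```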
**Pheromone update** After solution constructions, the pheromone update process evaluates solutions and adjusts the pheromone trails accordingly, i.e., it increases the pheromone trails of components in the superior solutions while decreasing those in the inferior ones. The detailed update rules can differ depending on the ACO variation used. ACO intelligently explores the solution space by iterating the above process, eventually converging on (sub)optimal solutions. We refer the readers to [26] for more details. ## 4 Methodology DeepACO is schematically presented in Fig. 1, wherein a comparison is made with ACO. It dispenses with expert knowledge and learns a set of stronger heuristic measures to guide the ACO evolution. DeepACO involves parameterizing the heuristic space (Section 4.1), optionally interleaving LS with neural-guided perturbation (Section 4.2), and training a heuristic learner across instances (Section 4.3). Additionally, we introduce three extended designs (Section 4.4) to boost exploration.

Figure 1: The schematic diagrams of ACO and DeepACO. DeepACO 1) additionally trains a heuristic learner across instances, 2) applies the well-trained heuristic learner to generate heuristic measures during inference, and 3) optionally leverages the learned heuristic measures to conduct local search interleaved with neural-guided perturbation.

### Parameterizing heuristic space We introduce a heuristic learner, defined by a graph neural network (GNN) with trainable parameters \(\mathbf{\theta}\), to parameterize the heuristic space. The heuristic learner maps an input COP instance \(\mathbf{\rho}\) to its heuristic measures \(\mathbf{\eta_{\theta}}(\mathbf{\rho})\), which we rewrite as \(\mathbf{\eta_{\theta}}\) for notational simplicity. It contains non-negative real values \(\eta_{ij;\mathbf{\theta}}\) associated with each solution component \(c_{ij}\), \(\forall i\in\{1,\dots,n\},\forall j\in\{1,\dots,|\mathbf{D_{i}}|\}\). DeepACO constructs solutions following Eq. (2) but biased by \(\mathbf{\eta_{\theta}}\): \[P_{\mathbf{\eta_{\theta}}}(\mathbf{s}|\mathbf{\rho})=\prod_{t=1}^{n}P_{\mathbf{\eta_{\theta}}}( s_{t}|\mathbf{s}_{<t},\mathbf{\rho}). \tag{3}\] In particular, we adopt the neural model recommended by Joshi et al. [48] and Qiu et al. [66]. It consists of a GNN backbone relying on anisotropic message passing and an edge gating mechanism, and a Multi-Layer Perceptron (MLP) decoder mapping the extracted edge features into heuristic measures. We defer the full details of this neural architecture to Appendix A. ### Local search interleaved with neural-guided perturbation In ACO, local search (LS) is optionally applied to refine the constructed solutions. However, LS is a myopic procedure in that it greedily accepts any altered solution with a lower objective value and easily gets trapped in local optima. In DeepACO, we intend the well-learned heuristic measures to indicate the global optimality of solution components. Leveraging such indicators and including solution components with greater global optimality can lead to better solutions eventually, if not immediately. Nevertheless, it is unrealistic to rely solely on the learned heuristic measures due to the inherent complexity of COPs. Based on these considerations, we propose LS interleaved with neural-guided perturbation (NLS for short) in Algorithm 1. NLS interleaves LS aiming for a lower objective value and neural-guided perturbation biasing the learned optima.
In each iteration, the first stage utilizes LS to repeatedly refine a solution until (potentially) reaching local optima. The second stage utilizes LS to slightly perturb the locally optimal solution toward gaining higher cumulative heuristic measures. ``` 1:Input: A solution \(\mathbf{s}\); an objective function \(f\); well-learned heuristic measures \(\mathbf{\eta_{\theta}}\); a local search operator \(LS\) that takes three inputs: a solution to refine, the targeted objective function, and the maximum iterations before encountering local optima; the number of perturbation moves \(T_{p}\); the number of NLS iterations \(T_{NLS}\) 2:Output: The best improved solution \(\mathbf{s}^{*}\) 3:\(\mathbf{s}=LS(\mathbf{s},f,+\infty)\) // Improve \(\mathbf{s}\) until reaching a local optimum 4:\(\mathbf{s}^{*}=copy(\mathbf{s})\) 5:for\(iter=1\to T_{NLS}\)do 6:\(\mathbf{s}=LS(\mathbf{s},\frac{1}{\mathbf{\eta_{\theta}}},T_{p})\) // Perturb \(\mathbf{s}\) toward higher cumulative heuristic measures with \(T_{p}\) moves 7:\(\mathbf{s}=LS(\mathbf{s},f,+\infty)\) 8:\(\mathbf{s}^{*}=\arg\min(f(\mathbf{s}^{*}),f(\mathbf{s}))\) 9:endfor ``` **Algorithm 1** NLS ### Training heuristic learner We train the heuristic learner across COP instances. The heuristic learner \(\mathbf{\theta}\) maps each instance \(\mathbf{\rho}\) to its heuristic measures \(\mathbf{\eta_{\theta}}\). Then, we minimize the expected objective value of both constructed solutions and NLS-refined constructed solutions: \[\text{minimize}\quad\mathcal{L}(\mathbf{\theta}|\mathbf{\rho})=\mathbb{E}_{\mathbf{s} \sim P_{\mathbf{\eta_{\theta}}}(\cdot|\mathbf{\rho})}[f(\mathbf{s})+Wf(NLS(\mathbf{s},f,+ \infty))], \tag{4}\] where \(W\) is a coefficient for balancing two terms. Intuitively, the first loss term encourages directly constructing optimal solutions, which is, however, often held off by the complexity of COP. The second term encourages constructing solutions most fit for NLS, and it could be easier to learn high-quality solutions if coupling end-to-end construction with NLS. Even so, solely relying on the second term leads to inefficient training due to the small quality variance of the NLS-refined solutions. As a result, we find it useful to implement both terms for training. Note that the NLS process itself does not involve gradient flow. In practice, we deploy Ant Systems to construct solutions stochastically following Eq. (3) for estimating Eq. (4). The pheromone trials are fixed to 1 to ensure an unbiased estimation. We apply a REINFORCE-based [85] gradient estimator: \[\nabla\mathcal{L}(\mathbf{\theta}|\mathbf{\rho})=\mathbb{E}_{\mathbf{s}\sim P_{\mathbf{\eta_{ \theta}}}(\cdot|\mathbf{\rho})}[((f(\mathbf{s})-b(\mathbf{\rho}))+W(f(NLS(\mathbf{s},f,+\infty) )-b_{NLS}(\mathbf{\rho})))\nabla_{\mathbf{\theta}}\log P_{\mathbf{\eta_{\theta}}}(\mathbf{s}| \mathbf{\rho})], \tag{5}\] where \(b(\mathbf{\rho})\) and \(b_{NLS}(\mathbf{\rho})\) are the average objective value of the constructed solutions and that of the NLS-refined constructed solutions, respectively. ### Toward better exploration The vanilla DeepACO fully exploits the underlying pattern of a problem and learns a set of aggressive heuristic measures (visualized in Appendix C.6) according to Eq. (4). Still, preserving exploration is of critical importance since COPs often feature many local optima [89]. To that end, we further present three extended designs to enable a better balance of exploration and exploitation. 
Note that they can generally be applied to heatmap-based NCO methods. #### 4.4.1 Multihead decoder Multihead DeepACO implements \(m\) MLP decoders on the top of the GNN backbone. It aims to generate diverse heuristic measures to encourage exploring different optima in the solution space. A similar multihead design has been utilized for autoregressive solution construction [89]. We extend this idea to the non-autoregressive heuristic generation here. For training, Multihead DeepACO utilizes Eq. (4) calculated individually and independently for \(m\) decoders and an extra Kullback-Leibler (KL) divergence loss expressed as \[\mathcal{L}_{KL}(\mathbf{\theta}|\mathbf{\rho})=-\frac{1}{m^{2}n}\sum_{k=1}^{m}\sum_{l =1}^{m}\sum_{i=1}^{n}\sum_{j=1}^{|\mathbf{D}_{i}|}\mathbf{\tilde{\eta}}_{ij;\mathbf{\theta }}^{k}\log\frac{\mathbf{\tilde{\eta}}_{ij;\mathbf{\theta}}^{k}}{\mathbf{\tilde{\eta}}_{ij; \mathbf{\theta}}^{l}}, \tag{6}\] where \(\mathbf{\tilde{\eta}}_{ij;\mathbf{\theta}}^{k}\) is the heuristic measure of the solution component \(c_{ij}\) output by the \(k\)-th decoder after row-wise normalization. In the inference phase, Multihead DeepACO deploys \(m\) ant groups guided by their respective MLP decoders while the whole ant population shares the same set of pheromone trails. #### 4.4.2 Top-\(k\) entropy loss Entropy loss (reward) incentivizes agents to maintain the diversity of their actions. It is often used as a regularizer to encourage exploration and prevent premature convergence to suboptimal policies [68; 37; 51]. In the context of COP, however, most solution components are far from a reasonable choice for the next-step solution construction. Given this, we implement a top-\(k\) entropy loss while optimizing Eq. (4), promoting greater uniformity only among the top \(k\) largest heuristic measures: \[\mathcal{L}_{H}(\mathbf{\theta}|\mathbf{\rho})=\frac{1}{n}\sum_{i=1}^{n}\sum_{j\in \mathcal{K}_{i}}\mathbf{\bar{\eta}}_{ij;\mathbf{\theta}}\log(\mathbf{\bar{\eta}}_{ij;\mathbf{ \theta}}). \tag{7}\] Here, \(\mathcal{K}_{i}\) is a set containing top \(k\) solution components of decision variable \(X_{i}\), and \(\mathbf{\bar{\eta}}_{ij;\mathbf{\theta}}\) are the heuristic measures normalized within the set. #### 4.4.3 Imitation loss If expert-designed heuristic measures are available, we can additionally incorporate an imitation loss, enabling the heuristic learner to mimic expert-designed heuristics while optimizing Eq. (4). It holds two primary merits. First, the expert-designed heuristic measures are less aggressive than the learned ones. So, it can function as a regularization to maintain exploration. Second, the heuristic learner can acquire expert knowledge. Perfecting the imitation first can guarantee that the learned heuristics are at least not inferior to the expert-designed counterparts. Accordingly, the imitation loss is formulated as \[\mathcal{L}_{I}(\mathbf{\theta}|\mathbf{\rho})=\frac{1}{n}\sum_{i=1}^{n}\sum_{j=1}^{| \mathbf{D}_{i}|}\mathbf{\tilde{\eta}}_{ij}^{*}\log\frac{\mathbf{\tilde{\eta}}_{ij}^{*}}{ \mathbf{\tilde{\eta}}_{ij;\mathbf{\theta}}}, \tag{8}\] where \(\mathbf{\eta}^{*}_{ij}\) and \(\mathbf{\eta}_{ij;\mathbf{\theta}}\) are expert-designed and learned heuristic measures, respectively, both after row-wise normalization. 
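To illustrate the top-\(k\) entropy regularizer of Eq. (7), the following sketch (our own toy code, not the released implementation) evaluates the loss on a dense matrix of heuristic measures so that it can be added to the main training objective:

```python
# Minimal sketch of the top-k entropy loss of Eq. (7), assuming a dense
# (n x n) tensor of non-negative heuristic measures `eta` produced by the heuristic learner.
import torch

def topk_entropy_loss(eta: torch.Tensor, k: int = 5) -> torch.Tensor:
    topk_vals, _ = torch.topk(eta, k, dim=1)             # top-k measures per decision variable (row)
    p = topk_vals / topk_vals.sum(dim=1, keepdim=True)   # normalise within the top-k set
    return (p * torch.log(p + 1e-12)).sum(dim=1).mean()  # Eq. (7): mean of sum p*log(p) (negative entropy)

eta = torch.rand(10, 10, requires_grad=True)             # toy heuristic measures for n = 10
loss = topk_entropy_loss(eta, k=5)
loss.backward()                                           # add to the main objective with a small coefficient
print(loss.item())
```

Minimizing this term (i.e., maximizing the entropy over the top-\(k\) set) promotes greater uniformity only among the most promising solution components, as described in Section 4.4.2.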
## 5 Experimentation ### Experimental setup **Benchmarks** We evaluate DeepACO on eight representative COPs, including the Traveling Salesman Problem (TSP), Capacitated Vehicle Routing Problem (CVRP), Orienteering Problem (OP), Prize Collecting Traveling Salesman Problem (PCTSP), Sequential Ordering Problem (SOP), Single Machine Total Weighted Tardiness Problem (SMTWTP), Resource-Constrained Project Scheduling Problem (RCPSP), and Multiple Knapsack Problem (MKP). They cover routing, assignment, scheduling, and subset COP types. Their definitions and setups are given in Appendix D. **Hardware** Unless otherwise stated, we conduct experiments on a 48-core Intel(R) Xeon(R) Platinum 8350C CPU and an NVIDIA GeForce RTX 3090 graphics card. ### DeepACO as an enhanced ACO algorithm In this part, we do not apply NLS and set \(W\) in Eq. (4) to 0 for training. It only entails minutes of training to provide substantial neural enhancement, as shown in Fig. 2 (also see Appendix C.4). On the other hand, the extra inference time for DeepACO is negligible; for example, it takes less than 0.001 seconds for a TSP100 instance. **DeepACO for fundamental ACO algorithms** We implement DeepACO based on three fundamental ACO meta-heuristics: Ant System [25], Elitist Ant System [25], and MAX-MIN Ant System [76]. They are widely recognized as the basis for many SOTA ACO variations [44; 2; 5; 3; 93]. We compare the heuristic measures learned by DeepACO to the expert-designed ones (detailed separately for eight COPs in Appendix D) on 100 held-out test instances for each benchmark COP. The results in Fig. 2 show that DeepACO can consistently outperform its ACO counterparts, verifying the universal neural enhancement DeepACO can bring. We defer the results on more COP scales to Appendix C.1.

Figure 2: Evolution curves of fundamental DeepACO and ACO algorithms on eight different COPs. For each COP, we plot the best-so-far objective value (averaged over 100 held-out test instances and 3 runs) w.r.t. the number of used evaluations along the ACO iterations.

**DeepACO for advanced ACO algorithms** We also apply DeepACO to the Adaptive Elitist Ant System (AEAS) [2], a recent ACO algorithm with problem-specific adaptations. In Fig. 3, we plot the evolution curves of AEAS based on the original and learned heuristic measures, respectively, and DeepACO shows clearly better performance. In light of this, we believe that it is promising to exploit DeepACO for designing new ACO SOTAs. **DeepACO for different pheromone models** The pheromone model used in ACO can vary, even for the same problem [63]. Following the naming convention in [63], we use \(PH_{suc}\) to denote the pheromone model that captures the successively chosen items and \(PH_{items}\) the one that directly captures how good each item is. We extend DeepACO from \(PH_{suc}\) to \(PH_{items}\) using a Transformer [79] encoder equipped with an MLP decoder. We showcase such flexibility of DeepACO on MKP in Fig. 6, where DeepACO using \(PH_{items}\) can still outperform its ACO counterparts. It validates that DeepACO can be readily extended to variations of pheromone models using properly aligned neural models. We defer further discussions to Appendix A and C.5. **DeepACO for better robustness to hyperparameter choice** Increasing the robustness of ACO to hyperparameter choices has long been an important research topic [84, 92]. In Fig. 5, we adjust two important hyperparameters, i.e., _Alpha_, which controls the transition probability of solution construction, and _Decay_, which controls the pheromone updates, evaluate DeepACO (AS) and ACO (AS) on the TSP100 test dataset, and plot their best objective values within the evolution of 4K evaluations. The results indicate that DeepACO is more robust to the hyperparameter choice given its much lower color variance. We thereby argue that data-driven training can also dispense with highly demanding expert knowledge for hyperparameter tuning. **DeepACO for real-world instances** We draw all 49 real-world symmetric TSP instances featuring _EUC_2D_ and containing fewer than 1K nodes from TSPLIB. We infer instances with \(n<50\) nodes using the model trained on TSP20, those with \(50\leq n<200\) nodes using the model trained on TSP100, and the rest using the model trained on TSP500. The evolution curves of DeepACO and ACO are shown in Fig. 5, suggesting that DeepACO can consistently outperform its ACO counterparts even when generalizing across scales and distributions. ### DeepACO as an NCO method **Comparison on routing problems** As shown in Table 1, DeepACO demonstrates competitive performance against the SOTA NCO methods for TSP. Here, we apply the generic 2-opt for NLS (\(T_{NLS}=10,T_{p}=20\)) and set \(W\) in Eq. (4) to 9. Note that most NCO baselines are specialized for TSP or routing problems, while DeepACO is a general-purpose meta-heuristic validated across eight different COPs. In addition, the results of the ablation studies in Table 2 further validate our designs; that is, both neural-guided perturbation and training with LS effectively strengthen DeepACO. More results on TSP100 and CVRP are given in Appendix C.2. **For heatmaps with better exploration** We implement the three extended designs introduced in Section 4.4 without LS and compare their evolution curves with vanilla DeepACO in Fig. 6. We apply 4 decoder heads for Multihead DeepACO and set \(K\) to 5 for the top-\(K\) entropy loss. The coefficients for the additional loss terms (i.e., \(\mathcal{L}_{KL}\), \(\mathcal{L}_{H}\), and \(\mathcal{L}_{I}\)) are set to 0.05, 0.05, and 0.02, respectively. The results indicate that these designs lead to better performance on TSP20, 50, and 100, since small-scale COPs typically desire more exploration. **For enhancing heatmap decoding with pheromone update** Though we can conduct pure solution sampling similar to [47, 66, 78] given the learned heuristic measures, we demonstrate the superiority of ACO evolution (without NLS) with self-learning pheromone update in Fig. 8. Furthermore, coupling heatmaps with ACO can benefit from extensive ACO research, such as advanced ACO variations, algorithmic hybridizations, and seamless incorporation of local search operators. ## 6 Conclusion In this work, we have presented DeepACO, a generic neural-enhanced ACO meta-heuristic that leverages deep reinforcement learning to automate heuristic design. Extensive experiments across eight COPs show that DeepACO performs consistently better than its ACO counterparts and is on par with the specialized NCO methods. For the Operations Research (OR) community, DeepACO demonstrates a promising avenue toward leveraging deep reinforcement learning for algorithmic enhancement and design automation. For the Machine Learning (ML) community, DeepACO presents a versatile and adaptable NCO framework that can seamlessly integrate with SOTA ACO and EA techniques, including improved rules for solution construction and pheromone update [76, 11], algorithmic hybridizations [67, 91], and incorporation of sophisticated local search operators [3, 93].
However, DeepACO is potentially limited by compressing all learned heuristic information into an \(n\times n\) matrix of heuristic measures. Due to the complexity of COPs and this restricted way of expressing problem patterns, DeepACO may fail to produce close-to-optimal solutions when not incorporating LS components. To address such limitations, we plan to investigate 3D or dynamic heuristic measures in our future work. ## Acknowledgments and Disclosure of Funding The authors appreciate the helpful discussions with Yu Hong, Juntao Li, and anonymous reviewers. Yu Hu, Jiusi Yin, and Tao Yu also contributed to this work. This work was supported by the National Natural Science Foundation of China (NSFC) [grant numbers 61902269, 52074185]; the National Key R&D Program of China [grant number 2018YFA0701700]; the Undergraduate Training Program for Innovation and Entrepreneurship, Soochow University [grant number 202210285001Z, 202310285041Z]; the Priority Academic Program Development of Jiangsu Higher Education Institutions, China; and Provincial Key Laboratory for Computer Information Processing Technology, Soochow University [grant number KJS1938].
Ant Colony Optimization (ACO) is a meta-heuristic algorithm that has been successfully applied to various Combinatorial Optimization Problems (COPs). Traditionally, adapting ACO to a specific problem requires the design of knowledge-driven heuristics based on expert knowledge. In this paper, we propose DeepACO, a generic framework that leverages deep reinforcement learning to automate heuristic design. DeepACO serves to strengthen the heuristic measures of existing ACO algorithms and dispenses with laborious manual design in future ACO applications. As a neural-enhanced meta-heuristic, DeepACO consistently outperforms its ACO counterparts on eight COPs using a single neural architecture and a single set of hyperparameters. As a Neural Combinatorial Optimization method, DeepACO, on canonical routing problems, compared with problem-specific al
2309.07728
Predictions of the Strange partner of $T_{cc}$ in the quark delocalization color screening model
Inspired by the detection of the $T_{cc}$ tetraquark state by the LHCb Collaboration, we perform a systematic investigation of the low-lying doubly heavy charm tetraquark states with strangeness in the quark delocalization color screening model in the present work. Two kinds of configurations, the meson-meson configuration and the diquark-antidiquark configuration, are considered in the calculation. Our estimations indicate that the coupled channel effects play an important role in the multiquark system, and a bound state with $J^{P}=1^{+}$ and a resonance state with $J^{P}=0^{+}$ have been predicted. The mass of the bound state is evaluated to be $(3971\sim3975)$ MeV, while the mass and width of the resonance are determined to be $(4113\sim4114)$ MeV and $(14.3\sim 16.1)$ MeV, respectively.
Xuejie Liu, Dianyong Chen, Hongxia Huang, Jialun Ping
2023-09-14T14:04:01
http://arxiv.org/abs/2309.07728v1
# Predictions of the Strange partner of \(T_{cc}\) in the quark delocalization color screening model ###### Abstract Inspired by the detection of the \(T_{cc}\) tetraquark state by the LHCb Collaboration, we perform a systematic investigation of the low-lying doubly heavy charm tetraquark states with strangeness in the quark delocalization color screening model in the present work. Two kinds of configurations, the meson-meson configuration and the diquark-antidiquark configuration, are considered in the calculation. Our estimations indicate that the coupled channel effects play an important role in the multiquark system, and a bound state with \(J^{P}=1^{+}\) and a resonance state with \(J^{P}=0^{+}\) have been predicted. The mass of the bound state is evaluated to be \((3971\sim 3975)\) MeV, while the mass and width of the resonance are determined to be \((4113\sim 4114)\) MeV and \((14.3\sim 16.1)\) MeV, respectively. pacs: 13.75.Cs, 12.39.Pn, 12.39.Jh + Footnote †: Corresponding author ## I Introduction In the past two decades, an increasing number of charmonium-like states have been observed experimentally, which provides a good opportunity for searching for multiquark states. As the first confirmed charmonium-like state, \(Z_{c}(3900)\) was first observed in 2013 by the BESIII [1] and Belle [2] Collaborations in the \(\pi^{+}J/\psi\) invariant mass spectrum of the process \(e^{+}e^{-}\to\pi^{+}\pi^{-}J/\psi\) at a center-of-mass energy of 4.26 GeV, and the authors of Ref. [3] further confirmed the existence of \(Z_{c}(3900)\) by using the data sample collected by the CLEO-c detector in the same process but at \(\sqrt{s}=4.170\) GeV. The partial wave analysis of the process \(e^{+}e^{-}\to\pi^{+}\pi^{-}J/\psi\) with the data samples accumulated at \(\sqrt{s}=4.23\) and 4.26 GeV indicated that the spin and parity of the \(Z_{c}(3900)^{+}\) state are \(1^{+}\) [4]. These observations indicate that such a new particle cannot be simply interpreted in the conventional quark-antiquark and three-quark schemes. Thus, some exotic interpretations, such as the tetraquark state [5; 6; 7; 8] and the hadronic molecular state [9; 10; 11; 12; 13; 14; 15; 16; 17], have been proposed. Besides the resonance interpretations, \(Z_{c}(3900)\) has also been attributed to kinematic effects [18; 19; 20; 21; 22; 23], which would indicate that \(Z_{c}(3900)\) is not a genuine resonance. In the resonance frame, the quark component of \(Z_{c}(3900)\) is \(c\bar{c}q\bar{q}\). The flavor independence of the strong interactions naturally indicates the possible existence of the strange partner of \(Z_{c}(3900)\), whose quark components are \(c\bar{c}s\bar{q}\). Such charmonium-like states with strangeness have been predicted theoretically in various models, such as tetraquark scenarios [24; 25], the hadronic molecular model [26; 27], the hadro-quarkonium model [25] and the initial single chiral particle emission mechanism [28]. In the year 2020, the BESIII Collaboration observed a new state named \(Z_{cs}(3985)\) in the \(K^{+}\) recoil mass distributions of the process \(e^{+}e^{-}\to K^{+}D_{s}^{-}D^{*0}/K^{+}D_{s}^{*-}D^{0}\) [29]. Later on, the LHCb Collaboration reported the observation of two exotic structures, \(Z_{cs}(4000)\) and \(Z_{cs}(4220)\), in the \(J/\psi K^{+}\) invariant mass spectrum of the \(B^{+}\to J/\psi\phi K^{+}\) decay in 2021 [30].
Since the observed masses of \(Z_{cs}(3985)\) and \(Z_{cs}(4000)\) are similar, these two states may be considered as the same one (hereinafter, we use \(Z_{cs}(3985)\) to refer to this state). It is interesting to note that \(Z_{c}(3900)\) is located in the vicinity of the \(D^{*}\bar{D}\) threshold, while \(Z_{cs}(3985)\) is close to the \(D_{s}^{*}\bar{D}\) threshold; thus one can consider \(Z_{cs}(3985)\) as a strange partner of \(Z_{c}(3900)\). Consequently, the hadronic molecular [31; 32; 33; 34; 35; 36; 37; 38; 39; 40], compact tetraquark [41; 42; 43] and hadro-quarkonium [25] scenarios have been proposed to decode the nature of \(Z_{cs}(3985)\). In the naive multiquark scenario, if there are multiquark states composed of \(c\bar{c}q\bar{q}\), the states composed of \(cc\bar{q}\bar{q}\) are also expected to exist, and they have been considered to be molecular \(D^{*+}D^{0}\) states [44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59; 60] and compact states [61; 62; 63]. Recently, the LHCb Collaboration reported the observation of the first doubly charmed tetraquark state \(T_{cc}^{+}(3875)\) in the \(D^{0}D^{0}\pi^{+}\) mass spectrum just below the \(D^{*+}D^{0}\) mass threshold [64; 65] with \(I(J^{P})=1(1^{+})\). As indicated in Fig. 1, the quark components of \(T_{cc}(3875)\) are \(cc\bar{q}\bar{q}\), which indicates that \(T_{cc}(3875)\) could be a good candidate for a compact tetraquark state.

Figure 1: (Color online). The similarity of the hidden charm and doubly charmed states. Hereinafter, \(T_{cc\bar{s}}\) is used to refer to the doubly charmed state with strangeness.

In Refs. [61; 62], the authors investigated the mass spectrum of the \(S-\)wave doubly heavy tetraquark states \(QQ\bar{q}\bar{q}\) based on the improved chromomagnetic interaction model and found a stable \(cc\bar{u}\bar{d}\) tetraquark state with \(I(J^{P})=0(1^{+})\) below the \(D^{*+}D^{0}\) threshold, which is well consistent with the observed \(T_{cc}^{+}(3875)\). Moreover, the QCD sum rule estimation in Ref. [63] also supported the compact tetraquark interpretation. In addition, the observed mass of \(T_{cc}^{+}(3875)\) is only several hundred keV below the threshold of \(D^{0}D^{*+}\), which implies that \(T_{cc}^{+}(3875)\) could be interpreted as a shallow molecular state composed of \(D^{0}D^{*+}+h.c.\). Further estimations by using quark models [44; 45; 46; 47; 48; 57; 58; 59], QCD sum rules [49; 50; 51], heavy quark symmetry [52; 53; 54; 60] and Bethe-Salpeter equations [55; 56] indicated that \(T_{cc}^{+}(3875)\) could be a good candidate for a \(D^{0}D^{*+}+h.c.\) molecular state. Similar to the relation between \(Z_{cs}(3985)\) and \(Z_{c}(3900)\), one can expect the existence of the strange partner of \(T_{cc}(3875)\), i.e., tetraquark states composed of \(cc\bar{q}\bar{s}\). Actually, before the observation of \(T_{cc}^{+}(3875)\), the lattice QCD estimations in Ref. [66] predicted that the \(T_{cc\bar{s}}\) state with \(J^{P}=1^{+}\) lies about 10 MeV below the threshold of \(D^{+}D_{s}^{-}\), while the estimations based on heavy quark symmetry in Ref. [67] put its mass at about 180 MeV above the corresponding threshold. In Ref. [68], the predicted \(T_{cc\bar{s}}\) tetraquark state with \(J^{P}=1^{+}\) was below the threshold of \(D^{+}D_{s}^{-}\), while those with \(J^{P}=0^{+}\) and \(2^{+}\) were both above the corresponding thresholds. After the observation of \(T_{cc}^{+}\), the authors in Ref.
[60] took advantage of the experimental information on the binding energy of \(T_{cc}^{+}\) to fix the cutoff regulator of the loops in the Bethe-Salpeter equation, and a \(D_{s}^{*}D^{*}\) bound state with \(J^{P}=1^{+}\) was predicted. Besides, the color-magnetic model estimations in Ref. [69] implied that both the \(T_{cc}^{+}\) and \(T_{cc\bar{s}}^{+}\) systems could be stable against the strong interactions. However, the \(T_{cc\bar{s}}^{+}\) state was not found in the quark model, although if the mixing of the \(S\) and \(D\) waves is taken into account, this state may be obtained [59]. As mentioned above, theorists have not reached an agreement on the existence of \(T_{cc\bar{s}}\) tetraquark states. In the present work, we perform a systematic estimation of the \(T_{cc\bar{s}}\) system by using the quark delocalization color screening model (QDCSM) in an attempt to further explore the existence of possible bound and resonant states in the \(T_{cc\bar{s}}\) system. This paper is organized as follows. After the introduction, the details of the QDCSM and the resonating group method (RGM) are presented in Section II. Our numerical results and the related discussions for the \(T_{cc\bar{s}}\) system are given in Section III, and the last section is devoted to a short summary. ## II Quark delocalization color screening model and the resonating group method ### Quark delocalization color screening model The QDCSM is an extension of the naive quark cluster model [70; 71; 72; 73] and was developed with the aim of addressing multiquark systems. For the tetraquark system, the Hamiltonian reads, \[H=\sum_{i=1}^{4}\left(m_{i}+\frac{p_{i}^{2}}{2m_{i}}\right)-T_{CM}+\sum_{j>i=1 }^{4}V(r_{ij}), \tag{1}\] where \(T_{CM}\) is the center-of-mass kinetic energy, which is usually subtracted without loss of generality since we mainly focus on the internal relative motions of the multiquark system. The interaction is a two-body potential, which includes the color-confining potential \(V_{\rm CON}\), the one-gluon exchange potential \(V_{\rm OGE}\), and the potential resulting from Goldstone-boson exchange, \(V_{\chi}\), i.e., \[V(r_{ij})=V_{\rm CON}(r_{ij})+V_{\rm OGE}(r_{ij})+V_{\chi}(r_{ij}). \tag{2}\] In the present work, we focus on the low-lying \(S-\)wave \(T_{cc\bar{s}}\) tetraquark system with positive parity. In this case, the spin-orbit and tensor interactions vanish and the potential \(V_{\rm OGE}(r_{ij})\) becomes, \[V_{\rm OGE}(r_{ij}) = \frac{1}{4}\alpha_{s}^{q_{i}q_{j}}\lambda_{i}^{c}\cdot\lambda_{j}^ {c} \tag{3}\] \[\left[\frac{1}{r_{ij}}-\frac{\pi}{2}\delta(r_{ij})(\frac{1}{m_{i} ^{2}}+\frac{1}{m_{j}^{2}}+\frac{4\mathbf{\sigma}_{i}\cdot\mathbf{\sigma}_{j}}{3m_{i}m_{ j}})\right],\] where \(m_{i}\) is the quark mass, and \(\mathbf{\sigma}_{i}\) and \(\mathbf{\lambda}_{i}^{c}\) are the Pauli matrices and SU(3) color matrices, respectively. The \(\alpha_{s}^{q_{i}q_{j}}\) is the quark-gluon coupling constant, which offers a consistent description of mesons from the light- to the heavy-quark sector. The values of \(\alpha_{s}^{q_{i}q_{j}}\) are associated with the quark flavors, and in the present work they are fixed by reproducing the mass differences of the low-lying mesons with \(S=0\) and \(S=1\).
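For orientation, recall the standard spin-algebra identity behind this fitting procedure (a textbook relation, not a result of this paper): for a quark-antiquark pair with total spin \(S\), \[\langle\mathbf{\sigma}_{i}\cdot\mathbf{\sigma}_{j}\rangle=2S(S+1)-3=\left\{\begin{array}{ll}-3,&S=0,\\ +1,&S=1,\end{array}\right.\] so the spin-spin term in Eq. (3) takes opposite signs in the pseudoscalar (\(S=0\)) and vector (\(S=1\)) mesons, and the measured mass splittings (e.g., between \(D\) and \(D^{*}\)) directly constrain the corresponding couplings \(\alpha_{s}^{q_{i}q_{j}}\).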
The confining potential \(V_{\rm CON}(r_{ij})\) is, \[V_{\rm CON}(r_{ij})=-a_{c}\mathbf{\lambda}_{i}^{c}\cdot\mathbf{\lambda}_{j}^{c}\left[f( r_{ij})+V_{0_{q_{ij}}}\right], \tag{4}\] where \(V_{0_{q_{ij}}}\) is determined by the mass difference between the theoretical estimation and the experimental measurement of each kind of meson, and is also a quark-flavor-dependent parameter. In the QDCSM, the function \(f(r_{ij})\) is defined as, \[f(r_{ij})=\left\{\begin{array}{ll}r_{ij}^{2}&\mbox{if $i,j$ occur in the same cluster,}\\ \frac{1-e^{-\mu_{ij}r_{ij}^{2}}}{\mu_{ij}}&\mbox{if $i,j$ occur in different clusters,} \end{array}\right. \tag{5}\] where the color screening parameters \(\mu_{ij}\) relevant to the light quarks can be determined by fitting the deuteron properties and the \(NN\) and \(NY\) scattering phase shifts [74; 75; 76], which gives \(\mu_{qq}=0.45\), \(\mu_{qs}=0.19\) and \(\mu_{ss}=0.08\). The parameters \(\mu_{ij}\) satisfy the relation \(\mu_{qs}^{2}=\mu_{qq}\mu_{ss}\), where \(q\) represents a \(u\) or \(d\) quark. When extending to the heavy-quark case, we found that the dependence on the parameter \(\mu_{cc}\) is rather weak in the calculation of the spectrum of the \(P_{c}\) states when taking the value of \(\mu_{cc}\) from \(10^{-4}\) to \(10^{-2}\) fm\({}^{-2}\) [77]. Moreover, when \(\mu_{ij}\) is rather small, the exponential function can be approximated as \[e^{-\mu_{ij}r_{ij}^{2}} = 1-\mu_{ij}r_{ij}^{2}+\mathcal{O}(\mu_{ij}^{2}r_{ij}^{4})\] in the small-\(r\) region. Accordingly, the confinement potential between two clusters is approximated as, \[V_{\rm CON}(r_{ij}) = -a_{c}\lambda_{i}^{c}\cdot\lambda_{j}^{c}\left(\frac{1-e^{-\mu_{ij} \mathbf{r}_{ij}^{2}}}{\mu_{ij}}+V_{0_{ij}}\right) \tag{7}\] \[\approx -a_{c}\lambda_{i}^{c}\cdot\lambda_{j}^{c}\left(r_{ij}^{2}+V_{0_{ ij}}\right)\!,\] which is the same as the expression for two quarks in the same cluster. Thus, when the value of \(\mu_{ij}\) is very small, the screened confinement returns to the quadratic form, which is why the results are insensitive to the value of \(\mu_{cc}\). So in the present work, we take \(\mu_{cc}=0.01\) fm\({}^{-2}\). Then \(\mu_{sc}\) and \(\mu_{uc}\) are obtained from the relations \(\mu_{sc}^{2}=\mu_{ss}\mu_{cc}\) and \(\mu_{uc}^{2}=\mu_{uu}\mu_{cc}\), respectively. The Goldstone-boson exchange interactions between light quarks appear because of the dynamical breaking of chiral symmetry. For the \(T_{cc\bar{s}}\) system, the \(\pi\) exchange interaction vanishes because there is no pair of nonstrange light quarks in the tetraquark state, and the concrete form of the Goldstone-boson exchange potential becomes, \[V_{ij}^{\chi} = V_{K}(\mathbf{r}_{ij})\sum_{a=4}^{7}\lambda_{i}^{a}\cdot\lambda_{j}^ {a}+ \tag{8}\] \[V_{\eta}(\mathbf{r}_{ij})\left[\left(\lambda_{i}^{8}\cdot\lambda_{j }^{8}\right)\cos\theta_{P}-\left(\lambda_{i}^{0}\cdot\lambda_{j}^{0}\right) \sin\theta_{P}\right],\] with \[V_{\chi}(\mathbf{r}_{ij}) = \frac{g_{ch}^{2}}{4\pi}\,\frac{m_{\chi}^{2}}{12m_{i}m_{j}}\frac{ \Lambda_{\chi}^{2}}{\Lambda_{\chi}^{2}-m_{\chi}^{2}}m_{\chi} \tag{9}\] \[\left\{(\mathbf{\sigma}_{i}\cdot\mathbf{\sigma}_{j})\!\left[Y(m_{\chi}\;r _{ij})-\frac{\Lambda_{\chi}^{3}}{m_{\chi}^{3}}Y(\Lambda_{\chi}\;r_{ij}) \right]\right\},\] \[\chi=\{K,\eta\},\] where \(Y(x)=e^{-x}/x\) is the standard Yukawa function and \(\mathbf{\lambda}^{a}\) is the SU(3) flavor Gell-Mann matrix. The masses of the \(K\) and \(\eta\) mesons are taken from their experimental values [78].
The chiral coupling constant, \(g_{ch}\), is determined from the \(\pi NN\) coupling constant through, \[\frac{g_{ch}^{2}}{4\pi}=\left(\frac{3}{5}\right)^{2}\frac{g_{\pi NN}^{2}}{4 \pi}\frac{m_{u,d}^{2}}{m_{N}^{2}}, \tag{10}\] where the SU(3) flavor symmetry only broken by the different masses of the light quarks. All the other model parameters are the same as the ones in Ref. [79], where three different sets of parameters were used to study the \(\bar{c}\bar{c}s\bar{s}\) tetraquark system and some experimental discovered charmonium-like state, such as \(\chi_{c0}(3930)\), \(X(4350)\), \(X(4500)\), \(X(4700)\) and \(X(4274)\), could be well explained. For the singlet of completeness, we collect the relevant model parameters in Table 1. In the QDCSM, the single-particle orbital wave functions in the ordinary quark cluster model are the left and right centered single Gaussian functions, which are, \[\phi_{\alpha}(\mathbf{S}_{i})=\left(\frac{1}{\pi b^{2}}\right)^{\frac {1}{2}}e^{-\frac{(\mathbf{r}_{\alpha}+\mathbf{S}_{i}\mathbf{\gamma}^{2})}{2\beta^{2}}},\] \[\phi_{\beta}(-\mathbf{S}_{i})=\left(\frac{1}{\pi b^{2}}\right)^{\frac {1}{2}}e^{-\frac{(\mathbf{r}_{\beta}+\mathbf{S}_{i}\mathbf{\gamma}^{2})}{2\beta^{2}}}. \tag{11}\] The quark delocalization is realized by writing the single-particle orbital wave function as a linear combination of the left and right Gaussians, which are, \[\psi_{\alpha}(\mathbf{S}_{i},\mathbf{\epsilon}) = \left(\phi_{\alpha}(\mathbf{S}_{i})+\epsilon\phi_{\alpha}(-\mathbf{S}_{i} )\right)/N(\epsilon),\] \[\psi_{\beta}(-\mathbf{S}_{i},\mathbf{\epsilon}) = \left(\phi_{\beta}(-\mathbf{S}_{i})+\epsilon\phi_{\beta}(\mathbf{S}_{i} )\right)/N(\epsilon),\] \[N(\epsilon) = \sqrt{1+\epsilon^{2}+2\epsilon e^{-S_{i}^{2}/4b^{2}}}, \tag{12}\] where \(\mathbf{\epsilon}(\mathbf{S}_{i})\) is the delocalization parameter determined by the dynamics of the quark system rather than free parameters. In this way, the system can choose its most favorable configuration through its dynamics in a larger Hilbert space. \begin{table} \begin{tabular}{c c c c c} \hline & Parameters & QDCSM1 & QDCSM2 & QDCSM3 \\ \hline \multirow{4}{*}{Quark Mass} & \(m_{u}\)(MeV) & 313 & 313 & 313 \\ & \(m_{s}\)(MeV) & 536 & 536 & 536 \\ & \(m_{c}\)(MeV) & 1728 & 1728 & 1728 \\ \hline \multirow{4}{*}{Confinement} & b(fm) & 0.29 & 0.3 & 0.315 \\ & \(a_{c}\)(MeV \(fm^{-2}\)) & 101 & 101 & 101 \\ & \(V_{0_{\rm m}}\)(MeV) & -2.3928 & -2.2543 & -2.0689 \\ & \(V_{0_{\rm m}}\)(MeV) & -1.9137 & -1.7984 & -1.6429 \\ & \(V_{0_{\rm m}}\)(MeV) & -1.4175 & -1.3231 & -1.2052 \\ & \(V_{0_{\rm m}}\)(MeV) & -1.3448 & -1.2826 & -1.2745 \\ & \(V_{0_{\rm m}}\)(MeV) & -0.7642 & -0.6739 & -0.5452 \\ & \(V_{0_{\rm m}}\)(MeV) & 0.6063 & 0.7555 & 0.9829 \\ \hline \multirow{4}{*}{OGE} & \(\alpha_{s}^{\rm na}\) & 0.2292 & 0.2567 & 0.3019 \\ & \(\alpha_{s}^{\rm na}\) & 0.2655 & 0.2970 & 0.3484 \\ \cline{1-1} & \(\alpha_{s}^{\rm na}\) & 0.3437 & 0.3805 & 0.4405 \\ \cline{1-1} & \(\alpha_{s}^{\rm na}\) & 0.3856 & 0.3604 & 0.3360 \\ \cline{1-1} & \(\alpha_{s}^{\rm na}\) & 0.5969 & 0.6608 & 0.7649 \\ \cline{1-1} & \(\alpha_{s}^{\rm na}\) & 1.5101 & 1.6717 & 1.9353 \\ \hline \end{tabular} \end{table} Table 1: Three sets of model parameters involved in the present estimations. ### The resonating group method In the present work, the RGM is employed to carry out the dynamical calculation. 
When dealing with the two-cluster system in this method, one can only consider the relative motion between the clusters, while the two clusters are frozen inside [80]. So the wave function of the \(T_{cc\bar{s}}\) system can be constructed as, \[\psi_{4q} = \mathcal{A}\left[\left[\psi_{A}(\mathbf{\rho}_{A})\psi_{B}(\mathbf{\rho}_{ B})\right]^{[\sigma]JS}\otimes\chi_{L}(\mathbf{R})\right]^{\prime}, \tag{13}\] where the symbol \(\mathcal{A}\) is the antisymmetry operator, which is defined as \[\mathcal{A} = 1-P_{13}. \tag{14}\] where the \(P_{13}\) indicates the exchange of the particle positions with numbers 1 and 3 from the Fig. 2. \([\sigma]=[222]\) gives the total color symmetry. The symbols \(I\), \(S\), \(L\), and \(J\) represent flavor, spin, orbit angular momentum, and total angular momentum of \(T_{cc\bar{s}}\) system, respectively. \(\psi_{A}\) and \(\psi_{B}\) are the wave functions of the two-quark cluster, which are, \[\psi_{A} = \left(\frac{1}{2\pi b^{2}}\right)^{3/4}e^{-\rho_{A}^{2}/(4b^{2}) }\eta_{A}S_{A}\chi_{A}^{c},\] \[\psi_{B} = \left(\frac{1}{2\pi b^{2}}\right)^{3/4}e^{-\rho_{B}^{2}/(4b^{2}) }\eta_{I_{B}}S_{B}\chi_{B}^{c}, \tag{15}\] where \(\eta_{I}\), \(S\), and \(\chi\) represent the flavor, spin and internal color terms of the cluster wave functions, respectively. According to Fig. 2, we adopt different Jacobi coordinates for different diagrams. As for the meson-meson configuration in Fig. 2-(a), the Jacobi coordinates are defined as, \[\mathbf{\rho}_{A} = \mathbf{r}_{q_{1}}-\mathbf{r}_{\bar{q}_{2}},\quad\mathbf{\rho}_{B}=\mathbf{r}_{q_ {3}}-\mathbf{r}_{\bar{q}_{4}},\] \[\mathbf{R}_{A} = \frac{m_{1}\mathbf{r}_{q_{1}}+m_{2}\mathbf{r}_{\bar{q}_{2}}}{m_{1}+m_{2}},\] \[\mathbf{R}_{B} = \frac{m_{3}\mathbf{r}_{q_{3}}+m_{4}\mathbf{r}_{\bar{q}_{4}}}{m_{3}+m_{4}},\] \[\mathbf{R} = \mathbf{R}_{A}-\mathbf{R}_{B},\] \[\mathbf{R}_{\mathbf{c}} = \frac{m_{1}\mathbf{r}_{q_{1}}+m_{2}\mathbf{r}_{\bar{q}_{2}}+m_{3}\mathbf{r}_{ q_{3}}+m_{4}\mathbf{r}_{\bar{q}_{4}}}{m_{1}+m_{2}+m_{3}+m_{4}}. \tag{16}\] where the subscript \(q/\bar{q}\) indicates the quark or antiquark particle, while the number indicates the quark position in Fig. 2-(a). As for the diquark-antidiquark configuration as shown in Fig. 2-(b), the relevant Jacobi coordinates can be obtained by interchanging \(\mathbf{r}_{q_{3}}\) with \(\mathbf{r}_{\bar{q}_{2}}\) in Eq. (16). Form the variational principle, after variation with respect to the relative motion wave function \(\chi(\mathbf{R})=\sum_{L}\chi_{L}(\mathbf{R})\), one obtains the RGM equation, which is, \[\int H\left(\mathbf{R},\mathbf{R}^{\prime}\right)\chi\left(\mathbf{R}^{\prime }\right)d\mathbf{R}^{\prime}=E\] \[\int N\left(\mathbf{R},\mathbf{R}^{\prime}\right)\chi\left(\mathbf{R}^{ \prime}\right)d\mathbf{R}^{\prime}, \tag{17}\] where \(H(\mathbf{R},\mathbf{R}^{\prime})\) and \(N(\mathbf{R},\mathbf{R}^{\prime})\) are Hamiltonian and norm kernels, respectively. The eigenenergy \(E\) and the wave functions can be obtained by solving the RGM equation. In the present estimation, the function \(\chi(\mathbf{R})\) can be expanded by gaussian bases, which is \[\chi(\mathbf{R}) = \frac{1}{\sqrt{4\pi}}\sum_{L}\left(\frac{1}{\pi b^{2}}\right)^{3 /4}\sum_{i}^{n}C_{i,L} \tag{18}\] \[\times\int e^{-\frac{1}{2}\left(\mathbf{R}-S\right)^{2}/b^{2}}Y^{L} \left(\hat{\mathbf{S}}_{i}\right)d\hat{\mathbf{S}}_{i},\] where \(C_{i,L}\) is the expansion coefficient, and \(n\) is the number of gaussian bases, which is determined by the stability of the results. 
\(\mathbf{S}_{i}\) is the separation of the two reference centers. \(\mathbf{R}\) is the dynamic coordinate defined in Eq. (16). After including the motion of the center of mass, i.e., \[\phi_{C}(\mathbf{R}_{c})=\left(\frac{4}{\pi b^{2}}\right)^{3/4}e^{-\frac{2\mathbf{R}_{c}^{2}}{b^{2}}}, \tag{19}\] one can rewrite Eq. (13) as, \[\psi_{4q} = \mathcal{A}\sum_{i,L}C_{i,L}\int\frac{d\hat{\mathbf{S}}_{i}}{\sqrt{4\pi}}\prod_{\alpha=1}^{2}\phi_{\alpha}\left(\mathbf{S}_{i}\right)\prod_{\beta=3}^{4}\phi_{\beta}\left(-\mathbf{S}_{i}\right) \tag{20}\] \[\times\left[\left[\eta_{I_{A}S_{A}}\eta_{I_{B}S_{B}}\right]^{JS}Y^{L}(\hat{\mathbf{S}}_{i})\right]^{J}\left[\chi_{A}^{c}\chi_{B}^{c}\right]^{[\sigma]},\] where \(\phi_{\alpha}(\mathbf{S}_{i})\) and \(\phi_{\beta}(-\mathbf{S}_{i})\) are the single-particle orbital wave functions with different reference centers, whose specific expressions have been presented in Eq. (11). With the reformulated ansatz as shown in Eq. (20), the RGM equation becomes an algebraic eigenvalue equation, which is, \[\sum_{j,L}C_{j,L}H_{i,j}^{L,L^{\prime}} = E\sum_{j}C_{j,L^{\prime}}N_{i,j}^{L^{\prime}}, \tag{21}\] where \(N_{i,j}^{L^{\prime}}\) and \(H_{i,j}^{L,L^{\prime}}\) are the overlap of the wave functions and the matrix elements of the Hamiltonian, respectively. By solving this generalized eigenvalue problem, we can obtain the energies of the tetraquark systems, \(E\), and the corresponding expansion coefficients \(C_{j,L}\). Finally, the relative motion wave function between the two clusters can be obtained by substituting the \(C_{j,L}\) into Eq. (18). Figure 2: The meson-meson configuration (diagram (a)) and diquark-antidiquark configuration (diagram (b)) in the \(T_{cc\bar{s}}\) tetraquark system. As for the flavor, spin and color wave functions of the tetraquark system, they are constructed in a two-step way. One can first construct the wave functions for the two clusters, and then couple the wave functions of the two clusters to form the wave function of the tetraquark system. The details of the flavor, spin and color wave functions of the tetraquark system are collected in Appendix A. ## III Numerical results and discussions In this work, only the low-lying \(S-\)wave \(T_{cc\bar{s}}\) tetraquark states are considered, and the spin of the tetraquark system can be 0, 1, and 2. Thus, the spin parity of the \(T_{cc\bar{s}}\) tetraquark states can be \(0^{+}\), \(1^{+}\) and \(2^{+}\), respectively. Moreover, in the present estimations, both the meson-meson and diquark-antidiquark configurations are considered. In general, there are two types of color structures for the meson-meson configuration, which are the color singlet-singlet (\(\mathbf{1}_{c}\otimes\mathbf{1}_{c}\)) and the color octet-octet (\(\mathbf{8}_{c}\otimes\mathbf{8}_{c}\)) structures. The latter color structure has been taken into account by introducing the color screening effects in the model; thus, we only consider the color singlet-singlet structure in the present estimations. As for the diquark-antidiquark configuration, both the antitriplet-triplet (\(\bar{\mathbf{3}}_{c}\otimes\mathbf{3}_{c}\)) and sextet-antisextet (\(\mathbf{6}_{c}\otimes\bar{\mathbf{6}}_{c}\)) structures are taken into account. All the relevant channels for all possible quantum numbers are listed in Table 2, where \(F^{i};S^{j}_{s};\chi^{c}_{k}\) shows the necessary basis combinations in flavor (\(F^{i}\)), spin (\(S^{j}_{s}\)) and color (\(\chi^{c}_{k}\)) degrees of freedom. 
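For reference, Eq. (21) is a standard generalized eigenvalue problem and can be solved directly once the Hamiltonian and norm kernels have been evaluated. The sketch below uses toy \(2\times 2\) kernels, not the actual \(T_{cc\bar{s}}\) matrices, purely to show the numerical step.

```python
import numpy as np
from scipy.linalg import eigh

def solve_rgm(H, N):
    """Solve the generalized eigenvalue problem H C = E N C of Eq. (21).

    H, N: real symmetric Hamiltonian and norm kernels in the Gaussian basis.
    Returns eigenenergies E (ascending) and expansion coefficients C (columns).
    """
    E, C = eigh(H, N)   # scipy handles the N-metric (generalized problem) directly
    return E, C

# Illustrative toy kernels only; real calculations use the full channel-coupled matrices.
H = np.array([[3980.0, -12.0], [-12.0, 4125.0]])
N = np.array([[1.00, 0.15], [0.15, 1.00]])
E, C = solve_rgm(H, N)
print("lowest eigenenergy (MeV):", E[0])
```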
### Bound State With the above preparations, the low-lying \(S-\)wave \(T_{cc\bar{s}}\) tetraquark states are systematically explored herein. In Tables 3-5, we collect the estimated eigenenergies of the \(T_{cc\bar{s}}\) tetraquark states with different \(J^{P}\) quantum numbers. In those tables, the index of the first column represents the symbols of each channel, and in the second and third columns we list all the involved channels and the corresponding theoretical threshold, respectively. Moreover, \(E_{sc}\) is the eigenenergy obtained in the single channel estimations, while \(E_{cc}\) and \(E_{mix}\) are the eigenenergies estimated by considering the coupled channel effects in each kind of configuration, and in both configurations, respectively. Additionally, we define the binding energy \(E_{b}\) of the \(T_{cc\bar{s}}\) tetraquark states as \(E_{b}=E_{i}-E_{4}(\infty)\) to identify whether or not the tetraquark states are stable against the strong interactions, where \(E_{4}(\infty)\) is the lowest possible threshold of the two-meson structure estimated in the QDCSM, and \(i\) represents the different situations of channel coupling. Such a subtraction procedure can greatly reduce the influence of the model parameters on the binding energies. If \(E_{b}>0\), the tetraquark systems can fall apart into two mesons via the strong interactions. If \(E_{b}<0\), the strong decay into two mesons is forbidden kinematically and therefore the decay can only occur via either the weak or electromagnetic interaction. For the \(T_{cc\bar{s}}\) tetraquark system with \(J^{P}=0^{+}\), there are two channels in the meson-meson configuration and two channels in the diquark-antidiquark configuration. The estimated eigenenergies of the \(T_{cc\bar{s}}\) state with \(J^{P}=0^{+}\) are listed in Table 3. The theoretical thresholds of the meson-meson channels are also presented for comparison. With the parameters in QDCSM1, the single channel estimations in the meson-meson configuration find that the eigenenergies are all above the corresponding thresholds, which indicates that the single channel estimations do not support the existence of bound states. In addition, when considering the coupled channel effects in the meson-meson configuration, we find the estimated eigenenergy is 3836.2 MeV, which is above the threshold of \(D^{0}D^{+}_{s}\). The lowest eigenenergy obtained by the coupled channel estimations in the meson-meson configuration is very close to the one of the single channel estimations in the \(D^{0}D^{+}_{s}\) channel, which indicates that the coupled channel effect in the meson-meson configuration is rather weak. As for the diquark-antidiquark configuration, both the single channel estimations and the coupled channel estimations indicate that the eigenenergies are above the threshold of \(D^{0}D^{+}_{s}\). Different from the meson-meson configuration, we find the eigenenergy obtained from the coupled channel estimation is at least 20 MeV below the lowest one of the single channel estimations, which indicates that the coupled channel effect in the diquark-antidiquark configuration is much stronger. Moreover, when we extend the channel coupling to both configurations, the eigenenergy is estimated to be 3836.2 MeV, which is still above the threshold of \(D^{0}D^{+}_{s}\). The results estimated with the parameters in QDCSM2 and QDCSM3 are very similar to those obtained with the parameters in QDCSM1, and no stable tetraquark state is found. 
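The bound/unbound classification used above reduces to the sign of \(E_{b}\). A minimal sketch is given below; the theoretical thresholds are approximated here by the single-channel asymptotic values quoted in Tables 3 and 4, which is only a rough proxy for \(E_{4}(\infty)\).

```python
def binding_energy(E, E_threshold):
    """E_b = E_i - E_4(inf); a negative value means the fall-apart strong decay is forbidden."""
    return E - E_threshold

# J^P = 0^+, QDCSM1: E_mix = 3836.2 MeV against a theoretical D^0 D_s^+ threshold of roughly
# 3836 MeV (approximated by the single-channel value), so E_b >= 0: no bound state.
print(binding_energy(3836.2, 3836.2))
# J^P = 1^+ (next subsection): E_mix = 3971.1 MeV against roughly 3978 MeV: E_b ~ -7 MeV, bound.
print(binding_energy(3971.1, 3978.2))
```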
For the \(T_{cc\bar{s}}\) tetraquark system with \(J^{P}=1^{+}\), there are six channels, including three channels in the meson-meson configuration and three channels in the diquark-antidiquark configuration. From Table 4, the estimated results of three sets of model parameters are almost identical. When considering the single channel estimations in the meson-meson configuration, we find that the estimated eigenenergy of \(D^{0}D^{+}_{s}\) and \(D^{*}D^{+}_{s}\) channels are above the theoretical threshold of the corresponding physical channels, which indicates that these channels are scattering channels in single channel calculations. However, a bound state in the \(D^{*}D^{*+}_{s}\) channel with the bound energy about \(1\sim 10\) MeV is obtained with all three sets of model parameters. Besides, by the coupling channels with the meson-meson configuration, the estimated eigenenergy is slightly above the lowest theoretical threshold of the \(D^{*}D^{+}_{s}\), which show that the effect of couple channels in the meson-meson configuration is rather weak. For the diquark-antidiquark configuration, the estimated eigenenergies obtained for the single-channel and channel-coupled estimations are above the theoretical threshold of the lowest channel \(D^{*}D^{+}_{s}\). Nevertheless, when the channel coupling between the two configuration are taken into account, a shallow bound state is detected, although the magnitude of the bound energy is slightly different with different sets of the model parameters. In view of the above conclusions, we estimate the average values of each terms in the Hamiltonian to examine how a shallow \(D^{*}D^{+}_{s}\) bound state with \(J^{P}=1^{+}\) is created. In Table 6, we present the contributions of each interaction by considering the single channel and coupled channel calculations. In addition, the average values of each interaction of two conventional \(D^{*}\)and \(D_{s}^{+}\) mesons without interactions, i.e., the distance between the two mesons are large enough, are also listed in the table for comparison. From the Table, one finds that the magnitude of the average values of each terms for different sets of model parameter are very similar. Here, we define \(\Delta E_{xc}=E_{xc}-E_{M}\), \(\Delta E_{cc}=E_{cc}-E_{M}\) and \begin{table} \begin{tabular}{c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{Index} & \multirow{2}{*}{Channel} & \multirow{2}{*}{Threshold} & \multicolumn{3}{c}{QDCSM1} & \multicolumn{3}{c}{QDCSM2} & \multicolumn{3}{c}{QDCSM3} \\ \cline{3-13} & & & \(E_{xc}\) & \(E_{cc}\) & \(E_{mix}\) & \(E_{xc}\) & \(E_{cc}\) & \(E_{mix}\) & \(E_{xc}\) & \(E_{cc}\) & \(E_{mix}\) \\ \hline 1 & \(D^{0}D_{s}^{*+}\) & 3977 & 3978.2 & 3977.1 & 3971.1 & 3978.2 & 3977.7 & 3973.8 & 3978.2 & 3978.1 & 3974.8 \\ 2 & \(D^{*}D_{s}^{*}\) & 3975 & 3978.0 & & 3978.1 & & 3978.2 & & & \\ 3 & \(D^{*}D_{s}^{*+}\) & 4119 & 4110.8 & & 4117.2 & & 4118.1 & & & \\ 4 & \((cc)(\bar{q}\bar{s})\) & & 4544.2 & 4128.2 & & 4535.4 & 4127.2 & & 4518.9 & 4124.1 & \\ 5 & \((cc)(\bar{q}\bar{s})\) & & 4132.7 & & 4132.5 & & 4130.7 & & & \\ 6 & \((cc)(\bar{q}\bar{s})\) & & 4337.5 & & 4334.1 & & 4327.8 & & & \\ \hline \hline \end{tabular} \end{table} Table 4: The same as Table 3 but for the tetraquark states with \(J^{p}=1^{+}\). 
\begin{table} \begin{tabular}{c c c c c c c c c c c} \hline \hline & \(J^{p}=0^{*}\) & & & \(J^{p}=1^{+}\) & & & \(J^{p}=2^{+}\) & & \\ \hline \multirow{2}{*}{index} & \(F^{i};S^{j}_{i};\chi^{c}_{k}\) & \multirow{2}{*}{channels} & \multirow{2}{*}{index} & \(F^{i};S^{j}_{i};\chi^{c}_{k}\) & & & \(F^{i};S^{j}_{i};\chi^{c}_{k}\) & & \\ & [i;j;k] & & & [i;j;k] & & & & [i;j;k] & & channels \\ \hline 1 & [1,1,1] & \(D^{0}D_{s}^{*}\) & 1 & [1,3,1] & \(D^{0}D_{s}^{**}\) & 1 & [1,6,1] & \(D^{*}D_{s}^{**}\) \\ 2 & [1,2,1] & \(D^{*}D_{s}^{**}\) & 2 & [1,4,1] & \(D^{*}D_{s}^{**}\) & 2 & [2,6,4] & \((cc)(\bar{q}\bar{s})\) \\ 3 & [2,1,3] & \((cc)(\bar{q}\bar{s})\) & 3 & [1,5,1] & \(D^{*}D_{s}^{**}\) & & & & \\ 4 & [2,2,4] & \((cc)(\bar{q}\bar{s})\) & 4 & [2,3,3] & \((cc)(\bar{q}\bar{s})\) & & & & \\ & & & 5 & [2,4,4] & \((cc)(\bar{q}\bar{s})\) & & & & \\ & & & 6 & [2,5,4] & \((cc)(\bar{q}\bar{s})\) & & & & \\ \hline \hline \end{tabular} \end{table} Table 2: The relevant channels for all possible \(J^{p}\) quantum numbers. \begin{table} \begin{tabular}{c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{Index} & \multirow{2}{*}{Channel} & \multirow{2}{*}{Threshold} & \multicolumn{3}{c}{QDCSM1} & \multicolumn{3}{c}{QDCSM2} & \multicolumn{3}{c}{QDCSM3} \\ \cline{3-13} & & & \(E_{xc}\) & \(E_{cc}\) & \(E_{mix}\) & \(E_{xc}\) & \(E_{cc}\) & \(E_{mix}\) & \(E_{xc}\) & \(E_{cc}\) & \(E_{mix}\) \\ \hline 1 & \(D^{0}D_{s}^{*}\) & 3833 & 3836.3 & 3836.2 & 3836.2 & 3836.3 & 3836.3 & 3836.2 & 3836.2 & 3836.2 & 3836.2 \\ 2 & \(D^{*}D_{s}^{**}\) & 4119 & 4119.7 & & & 4120.9 & & 4121.2 & & \\ 3 & \((cc)(\bar{q}\bar{s})\) & & 4589.3 & 4299.8 & & 4585.1 & 4291.8 & & 4574.7 & 4277.9 & \\ 4 & \((cc)(\bar{q}\bar{s})\) & & 4321.3 & & & 4316.5 & & 4308.0 & & \\ \hline \hline \end{tabular} \end{table} Table 3: The low-lying eigenenergies (in unit of MeV) of \(T_{cc}\) tetraquark states with \(J^{p}=0^{*}\). \(\Delta E_{mix}=E_{mix}-E_{M}\). From our estimations, we find the contributions of the confinement potential to \(\Delta E_{xc}\), \(\Delta E_{cc}\) and \(\Delta E_{mix}\) are positive, which indicate the confinement potential hinders the \(D^{*}\) and \(D_{s}^{*}\) subclusters from forming a bound states. For the kinetic energy term, with more physical channels taking into consideration, the properties of kinetic energy basically transforms gradually from repulsion towards very strong attraction, whereas the similar conclusions can be drawn for the one-gluon-exchange interaction. In addition, in the meson exchange interactions, the meson exchange potential contributions to \(\Delta E_{xc}\), \(\Delta E_{cc}\) and \(\Delta E_{mix}\) are negligible, in particularly, the contributions from \(\eta\) meson exchange potential are all less than 0.05 MeV, which are not listed in the table. According to the above analysis, one can find that the kinetic energy term and the one-gluon-exchange potential have deep attractions in the channel coupling calculations with both the meson-meson and diquark-antidiquark configurations, However, the confinement potential displays as repulsive interaction, which weaken the overall attraction. Such a phenomenon illustrates the very delicate competition between the kinetic energy and the interaction potential from various sources in the Hamiltonian. For the \(T_{cc\bar{s}}\) tetraquark system with \(J^{P}=2^{+}\), only one physics channel in the meson-meson configuration and one channel in the diquark-antidiquark configuration exists. 
From Table 5, one can find the eigenenergies obtained from the single channel estimation is higher than the physical meson-meson channel. After considering the coupled channel effect between the meson-meson and diquark-antidiquark configurations, the estimated eigenenergy is still above the threshold of \(D^{*}D_{s}^{**+}\), which indicates that there is no bound state in the \(T_{cc\bar{s}}\) tetraquark system with \(J^{P}=2^{+}\). ### Resonance States In the bound state estimations, we find one bound state with \(J^{P}=1^{+}\) while there is no bound state in the \(J^{P}=0^{+}\) and \(J^{P}=2^{+}\) systems. In the following, we will employ the real scaling method to explore the possible resonance states in the \(T_{cc\bar{s}}\) tetraquark system. To determine whether these resonance states could be detected by the open channels, we perform a channel coupling estimation by including all the meson-meson and diquark-antidiquark channels in the estimations. The real scaling method is developed to identify the genuine resonances from the states with discrete energies with finite volume [81]. In this method, a factor \(\mathbf{S}_{m}\), which is the distance between two clusters, is adopted to scale the finite volume. So with the increase of the distance between two clusters, the continuum state will fall off toward its thresh \begin{table} \begin{tabular}{c c c c c c c c c c c c c} \hline \hline & \multicolumn{4}{c}{QDCSM1} & \multicolumn{4}{c}{QDCSM2} & \multicolumn{4}{c}{QDCSM3} \\ \cline{2-13} & \(H_{T}\) & \(V_{\text{CON}}\) & \(V_{\text{OGE}}\) & \(V_{K}\) & \(H_{T}\) & \(V_{\text{CON}}\) & \(V_{\text{OGE}}\) & \(V_{K}\) & \(H_{T}\) & \(V_{\text{CON}}\) & \(V_{\text{OGE}}\) & \(V_{K}\) \\ \hline \(E_{xc}\) & 1081.3 & -901.7 & -506.6 & \(\sim 0.0\) & 1011.2 & -783.9 & -554.2 & \(\sim 0.0\) & 917.9 & -615.9 & -628.8 & \(\sim 0.0\) \\ \(E_{cc}\) & 1073.9 & -895.9 & -505.8 & -0.1 & 1008.8 & -782.5 & -553.5 & -0.1 & 917.1 & -615.5 & -628.5 & \(\sim 0.0\) \\ \(E_{mix}\) & 1049.0 & -820.4 & -558.1 & -4.4 & 998.4 & -752.4 & -573.7 & -3.5 & 915.3 & -609.8 & -635.4 & -0.3 \\ \(E_{M}\) & 1079.6 & -903.3 & -506.1 & \(\sim 0.0\) & 1008.7 & -784.7 & -553.8 & \(\sim 0.0\) & 915.0 & -616.3 & -628.5 & \(\sim 0.0\) \\ \hline \(\Delta E_{xc}\) & 1.7 & 1.6 & -0.5 & \(\sim 0.0\) & 2.5 & 0.8 & 0.4 & \(\sim 0.0\) & 2.9 & 0.4 & -0.3 & \(\sim 0.0\) \\ \(\Delta E_{cc}\) & -5.7 & 7.4 & 0.3 & -0.1 & 0.1 & 2.2 & -0.3 & -0.1 & 2.1 & 0.8 & 0.0 & \(\sim 0.0\) \\ \(\Delta E_{mix}\) & -30.6 & 82.9 & -52.0 & -4.4 & -10.3 & 32.3 & -19.9 & -3.5 & 0.3 & 5.5 & -7.2 & -0.3 \\ \hline \hline \end{tabular} \end{table} Table 6: Contributions of each terms in Hamiltonian to the energy of the \(D^{0}D_{s}^{**}\) bound state with \(J^{P}=1^{+}\) in unit of MeV. \(E_{M}\) stands for the sum of two mesons threshold. Our estimations indicate the contributions of \(\eta\) meson exchange potential are all less than 0.05 MeV in different sets of model parameters. Thus, the contributions from \(\eta\) meson exchange are not presented. Figure 3: A sketch diagram of the resonance shape in the real-scaling method. old, the energy of the bound state remains unchanged, while a resonance state will tend to be stable. If the energy of a scattering state is far away from the one of the resonance, the coupling between the resonance and the scattering states is rather weak, and the energy of the resonance is almost stable. 
When the energy of the scattering state approaches the one of the resonance due to the increasing of \(\mathbf{S}_{m}\), the coupling will become strong, and if \(\mathbf{S}_{m}\) increases further, the energy gap between the resonance and scattering states will increase and the coupling will become weak again. In this way, an avoided crossing structure appears. This is a general feature of two interacting energy levels. Because of the continuum nature of the scattering states, the avoided crossing structure will show up repeatedly with the increasing of \(\mathbf{S}_{m}\) as shown in Fig. 3 and the resonance line corresponds to the energy of the resonance state. In addition, from the slopes of resonance and scattering states, the decay width can be estimated by, \[\Gamma = 4|V_{\text{min}}(S)|\frac{\sqrt{|k_{r}||k_{c}|}}{|k_{r}-k_{c}|} \tag{22}\] where \(k_{r}\) and \(k_{c}\) are the slopes of the resonance and scattering states, respectively. While, \(V_{min}(S)\) is the minimal energy difference between the resonance and the scattering state at avoided crossing point. This method has been successfully applied to investigate the pentaquark [82; 83], the dibaryon [84], and the tetraquark systems [79; 85; 86]. In the present work, we expand the spacial wave function with a set of gaussians with differences \(\mathbf{S}_{m}\), \((m=1,2,3,\ldots,n)\) and the distance with the relative motion of two clusters can be scaled. So we calculate the energy eigenvalues of the \(T_{cc\bar{s}}\) tetraquark system by taking the value of the largest distance (\(S_{m}\)) between two clusters from 4.0 to 9.0 fm to check if there is any resonance state. Here, we take the results of the QDCSM1 as examples, which are shown in Fig. 4 with different \(J^{P}\) quantum numbers. For the \(T_{cc\bar{s}}\) tetraquark system with \(J^{P}=0^{+}\) as shown in Fig. 4-(a), one can note that the lower black horizontal line corresponds to the physical threshold of \(D_{s}^{+}D^{0}\), while the upper blue horizontal line with the energy to be about 4114 MeV, locates below the threshold of \(D^{*}D_{s}^{**}\), which corresponds to a resonance state since the resonance behavior appearing in the Fig. 4-(a) as the finite space is constantly expanding. Moreover, the resonance state is estimated by considering the full channel coupling, and the present result indicates that its main ingredient is \(D^{*}D_{s}^{**}\). In other words, the effect of the channel coupling push the energy of the physical channel \(D^{*}D_{s}^{**}\) a bit below its threshold. In addition, the width of this resonance state is estimated to be about 14.3 MeV according to Eq. (22). For the \(T_{cc\bar{s}}\) tetraquark system with \(J^{P}=1^{+}\) as shown in Fig. 4-(b), it is obvious that the lowest red horizontal line locates at the energy of 3971 MeV, which is below the threshold of the \(D^{0}D_{s}^{**}\), and this represents the bound states of \(T_{cc\bar{s}}\) tetraquark system with \(J^{P}=1^{+}\). This conclusion is consistent with the estimations in the last subsection. Moreover, two additional horizontal lines are also presented, which stand for the threshold of \(D^{*}D_{s}^{+}\) and \(D^{*}D_{s}^{**}\), respectively. The present estimations indicate that there is no resonance state in the \(T_{cc\bar{s}}\) tetraquark system with \(J^{P}=1^{+}\), and the bound state in the \(D^{*}D_{s}^{**}\) channel becomes the scattering state by the effect of the channel coupling. For the \(T_{cc\bar{s}}\) tetraquark system with \(J^{P}=2^{+}\) as shown in Fig. 
4-(c), there is one horizontal line, which represents the threshold of \(D^{*}D_{s}^{**}\). It is clearly to conclude that there are no bound or resonant states in the \(T_{cc\bar{s}}\) tetraquark system with \(J^{P}=2^{+}\). In addition, we perform the same estimations for the \(T_{cc\bar{s}}\) tetraquark system in the QDCSM2 and QDCSM3. The results are similar to those of QDCSM1. We summarize the results obtained from three sets of model parameters in Table 7. By taking the coupled channel effects into consideration, we find one resonance state with a mass \(4113\sim 4114\) MeV for the \(T_{cc\bar{s}}\) tetraquark system with \(J^{P}=0^{+}\). The dominant component of the resonance state is \(D^{*}D_{s}^{**}\) with the percentage of this component to be about 80%. Moreover, the decay width of this resonance state is predicted to be \(14.3\sim 16.1\) MeV. For the \(J^{P}=1^{+}\) system, there is a bound state with energy range (\(3971.1\sim 3974.8\)) MeV and no resonance state is obtained. For the \(T_{cc\bar{s}}\) tetraquark system with \(J^{P}=2^{+}\), no resonance or bound state is obtained by the channel coupling estimations. ## IV Summary In the present work, the \(T_{cc\bar{s}}\) tetraquark system with the quantum number \(J^{P}=0^{+},1^{+},2^{+}\) are systemically investigated to search for the possible bound state and resonance state by using the RGM in the QDCSM framework. In the model, both meson-meson and diquark-antidiquark configurations are taken into account, and the single-channel and the coupled channel calculations are preformed to obtain the energy of the \(T_{cc\bar{s}}\) tetraquark system. In addition, a stabilization calculation is carried out to seek for possible resonance states. Furthermore, to check whether the estimated results are parameter dependent, three different sets of model parameters are employed in the calculation and we find the qualitative results of three sets of model parameters for the \(T_{cc\bar{s}}\) tetraquark system are very similar. From the present estimations, we find that the coupled channel effects plays important role in the \(T_{cc\bar{s}}\) tetraquark system. After taking the coupled channel effects into consideration, we predict one bound state with the energy to be \(3971.1\sim 3974.8\) MeV and \(J^{P}=1^{+}\). Moreover, one resonance state with \(J^{P}=0^{+}\) is also obtained, the resonance mass and width are estimated to be \(4113\sim 4114\) MeV and \(14.3\sim 16.1\) MeV, respectively. The predictions in the present work could be experimentally detected in the future by LHCb and Belle II. Additionally the theoretical and further experimental investigations for properties of the \(T_{cc\bar{s}}\) tetraquark could pave \begin{table} \begin{tabular}{c c c c c} \hline \multicolumn{2}{c}{State} & \multicolumn{3}{c}{Parameter Sets} \\ \hline & \(J^{P}\) & QDCSM1 & QDCSM2 & QDCSM3 \\ \hline Bound & \(1^{+}\) & \(3971.1\) & \(3973.8\) & \(3974.8\) \\ Resonance & \(0^{+}\) & \(4114/14.3\) & \(4144/15.8\) & \(4143/16.1\) \\ \hline \end{tabular} \end{table} Table 7: The energies and widths of the \(T_{cc\bar{s}}\) tetraquark states. the way for possible doubly and triply tetraquark states. ###### Acknowledgements. This work is supported partly by the National Natural Science Foundation of China under Contract No. 12175037, No. 12335001, No. 11775118 and No. 11535005. This work is also supported by china Postdoctoral Science Foundation funded project No. 2021M690626, and No. 1107020201. 
## Appendix A The wave function of the open heavy charm tetraquark with strangeness ### The color wave function Plenty of color structures in multiquark systems will be available with respect to those of conventional hadrons such as \(q\bar{q}\) mesons and \(qqq\) baryons. In this appendix, we present how to construct the colorless wave function for a tetraquark system. For the meson-meson configurations, the color wave functions of a \(q\bar{q}\) cluster would be, \[C^{1}_{[111]} = \sqrt{\frac{1}{3}}(r\bar{r}+g\bar{g}+b\bar{b}),\] \[C^{2}_{[21]} = r\bar{b},\qquad C^{3}_{[21]}=-r\bar{g},\] \[C^{4}_{[21]} = g\bar{b},\qquad C^{5}_{[21]}=-b\bar{g},\] \[C^{6}_{[21]} = g\bar{r},\qquad C^{7}_{[21]}=b\bar{r},\] \[C^{8}_{[21]} = \sqrt{\frac{1}{2}}(r\bar{r}-g\bar{g}),\] \[C^{9}_{[21]} = \sqrt{\frac{1}{6}}\Big{(}-r\bar{r}-g\bar{g}+2b\bar{b}\Big{)}, \tag{34}\] where the subscript \([111]\) and \([21]\) stand for color-singlet (\(\mathbf{1}_{c}\)) and color-octet (\(\mathbf{8}_{c}\)), respectively. So, the SU(3)\({}_{\rm color}\) wave functions of color-singlet (two color-singlet clusters, \(\mathbf{1}_{c}\otimes\mathbf{1}_{c}\)) and hidden-color (two color-octet clusters, \(\mathbf{8}_{c}\otimes\mathbf{8}_{c}\)) channels are given, respectively, \[\chi^{c}_{1} = C^{1}_{[111]}C^{1}_{[111]},\] \[\chi^{c}_{2} = \sqrt{\frac{1}{8}}\big{(}C^{2}_{[21]}C^{7}_{[21]}-C^{4}_{[21]}C^{ 5}_{[21]}-C^{3}_{[21]}C^{6}_{[21]} \tag{35}\] \[+C^{8}_{[21]}C^{8}_{[21]}-C^{6}_{[21]}C^{2}_{[21]}+C^{9}_{[21]}C ^{9}_{[21]}\] \[-C^{8}_{[21]}C^{4}_{[21]}+C^{7}_{[21]}C^{2}_{[21]}\big{)}.\] For the diquark-antidiquark structure, the color wave functions of the diquark clusters are, \[C^{1}_{[2]} = rr,\qquad C^{2}_{[2]}=\sqrt{\frac{1}{2}}\big{(}rg+gr\big{)},\] \[C^{3}_{[2]} = gg,\qquad C^{4}_{[2]}=\sqrt{\frac{1}{2}}\big{(}rb+br\big{)},\] \[C^{5}_{[2]} = \sqrt{\frac{1}{2}}\big{(}gb+bg\big{)},\qquad C^{6}_{[2]}=bb,\] \[C^{7}_{[11]} = \sqrt{\frac{1}{2}}\big{(}rg-gr\big{)},\qquad C^{8}_{[11]}=\sqrt{ \frac{1}{2}}\big{(}rb-br\big{)},\] \[C^{9}_{[11]} = \sqrt{\frac{1}{2}}\big{(}gb-bg\big{)}. \tag{36}\] While the color wave functions of the antidiquark clusters can Figure 4: The stabilization plots of the energies of the \(T_{cct}\) tetraquark systems. be writen as, \[C^{1}_{[22]} = \bar{r}\bar{r},\qquad C^{2}_{[22]}=-\sqrt{\frac{1}{2}}\big{(}\bar{r} \bar{b}+\bar{g}\bar{r}\big{)},\] \[C^{3}_{[22]} = \bar{g}\bar{g},\qquad C^{4}_{[22]}=\sqrt{\frac{1}{2}}\big{(}\bar{r} \bar{b}+\bar{b}\bar{r}\big{)},\] \[C^{5}_{[22]} = -\sqrt{\frac{1}{2}}\big{(}\bar{g}\bar{b}+\bar{b}\bar{g}\big{)}, \qquad C^{6}_{[22]}=\bar{b}\bar{b},\] \[C^{7}_{[211]} = \sqrt{\frac{1}{2}}\big{(}\bar{r}\bar{g}-\bar{g}\bar{r}\big{)}, \qquad C^{8}_{[211]}=-\sqrt{\frac{1}{2}}\big{(}\bar{r}\bar{b}-\bar{b}\bar{r} \big{)},\] \[C^{9}_{[211]} = \sqrt{\frac{1}{2}}\big{(}\bar{g}\bar{b}-\bar{b}\bar{g}\big{)}. \tag{10}\] The color-singlet wave functions of the diquark-antidiquark configuration can be the product of color sextet and antisextet clusters (\(\mathbf{6}_{c}\otimes\bar{\mathbf{6}}_{c}\)) or the product of color-triplet and antitriplet cluster (\(\mathbf{3}_{c}\otimes\bar{\mathbf{3}}_{c}\)), which read, \[\chi^{c}_{3} = \sqrt{\frac{1}{6}}\big{(}C^{1}_{[21]}C^{1}_{[22]}-C^{2}_{[2]}C^{ [2]}_{[22]}+C^{3}_{[2]}C^{3}_{[22]}\] \[+C^{4}_{[2]}C^{4}_{[22]}-C^{5}_{[2]}C^{5}_{[22]}+C^{6}_{2}C^{6}_{ 2}\big{)},\] \[\chi^{c}_{4} = \sqrt{\frac{1}{3}}\big{(}C^{7}_{[11]}C^{7}_{[211]}-C^{8}_{[11]}C^ {8}_{[211]}+C^{9}_{[11]}C^{9}_{[211]}\big{)}. 
\tag{11}\] ### The flavor wave function For the flavor degree of freedom, the different coupling methods generate different flavor wave functions. From Table 2, the \(T_{cc\bar{s}}\) tetraquark flavor wave function can be categorized as \(F^{i}_{m}\) and \(F^{i}_{d}\), where the subscripts \(m\) and \(d\) refer to the meson-meson and the diquark-antidiquark configurations, respectively. Different structures are obtained depending on the quark coupling order. For the meson-meson structure, the flavor wave function can be written as, \[F^{1}_{m} = (c\bar{q})-(c\bar{s}), \tag{12}\] while for the diquark-antidiquark structure, the flavor wave function should be written as \[F^{2}_{d} = (cc)-(\bar{q}\bar{s}). \tag{13}\] ### The spin wave function The total spin \(S\) of the tetraquark states ranges from 0 to 2. All of them are considered. The spin wave functions of the two-body clusters are, \[\chi_{11} = \alpha\alpha,\] \[\chi_{10} = \sqrt{\frac{1}{2}}\big{(}\alpha\beta+\beta\alpha\big{)},\] \[\chi_{1-1} = \beta\beta,\] \[\chi_{00} = \sqrt{\frac{1}{2}}\big{(}\alpha\beta-\beta\alpha\big{)}. \tag{14}\] Then, the total spin wave functions \(S^{i}_{\,j}\) are obtained by considering the coupling of the two subcluster spin wave functions with SU(2) algebra, and the total spin wave functions of the four-quark states read, \[S^{1}_{\,0} = \chi_{00}\chi_{00},\] \[S^{2}_{\,0} = \sqrt{\frac{1}{3}}\big{(}\chi_{11}\chi_{1-1}-\chi_{10}\chi_{10}+ \chi_{1-1}\chi_{11}\big{)},\] \[S^{3}_{\,1} = \chi_{00}\chi_{11},\] \[S^{4}_{\,1} = \chi_{11}\chi_{00},\] \[S^{5}_{\,1} = \sqrt{\frac{1}{2}}\big{(}\chi_{11}\chi_{10}-\chi_{10}\chi_{11}\big{)},\] \[S^{6}_{\,2} = \chi_{11}\chi_{11}. \tag{15}\]
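The coefficients of the coupled spin wave functions listed above follow from standard SU(2) Clebsch-Gordan coupling of the two cluster spins; they can be reproduced with a short consistency check (shown here with sympy, purely as an illustration).

```python
from sympy.physics.quantum.cg import CG

# Two spin-1 clusters coupled to total spin S = 0 (the combination S^2_0 above):
for m in (1, 0, -1):
    print(m, CG(1, m, 1, -m, 0, 0).doit())        # 1/sqrt(3), -1/sqrt(3), 1/sqrt(3)

# Two spin-1 clusters coupled to S = 1, S_z = 1 (the combination S^5_1 above):
print(CG(1, 1, 1, 0, 1, 1).doit(), CG(1, 0, 1, 1, 1, 1).doit())  # 1/sqrt(2), -1/sqrt(2)
```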
Inspired by the observation of \(T_{cc}\) by the LHCb collaboration, this work systematically investigates the low-lying doubly charmed tetraquark states within the quark delocalization color screening model. Both the meson-meson and diquark-antidiquark configurations are considered. The calculations suggest that coupled-channel effects play an important role in the tetraquark system, and a bound state with \(J^{P}=1^{+}\) as well as a resonance state with \(J^{P}=0^{+}\) are predicted. The mass of the bound state is approximately 3971-3975 MeV, while the mass and width of the resonance state are estimated to be 4113-4114 MeV and 14.3-16.1 MeV, respectively.
2309.10254
LLM Platform Security: Applying a Systematic Evaluation Framework to OpenAI's ChatGPT Plugins
Large language model (LLM) platforms, such as ChatGPT, have recently begun offering an app ecosystem to interface with third-party services on the internet. While these apps extend the capabilities of LLM platforms, they are developed by arbitrary third parties and thus cannot be implicitly trusted. Apps also interface with LLM platforms and users using natural language, which can have imprecise interpretations. In this paper, we propose a framework that lays a foundation for LLM platform designers to analyze and improve the security, privacy, and safety of current and future third-party integrated LLM platforms. Our framework is a formulation of an attack taxonomy that is developed by iteratively exploring how LLM platform stakeholders could leverage their capabilities and responsibilities to mount attacks against each other. As part of our iterative process, we apply our framework in the context of OpenAI's plugin (apps) ecosystem. We uncover plugins that concretely demonstrate the potential for the types of issues that we outline in our attack taxonomy. We conclude by discussing novel challenges and by providing recommendations to improve the security, privacy, and safety of present and future LLM-based computing platforms.
Umar Iqbal, Tadayoshi Kohno, Franziska Roesner
2023-09-19T02:20:10
http://arxiv.org/abs/2309.10254v2
# LLM Platform Security: ###### Abstract Large language model (LLM) platforms, such as ChatGPT, have recently begun offering a _plugin ecosystem_ to interface with third-party services on the internet. While these plugins extend the capabilities of LLM platforms, they are developed by arbitrary third parties and thus cannot be implicitly trusted. Plugins also interface with LLM platforms and users using natural language, which can have imprecise interpretations. In this paper, we propose a framework that lays a foundation for LLM platform designers to analyze and improve the security, privacy, and safety of current and future plugin-integrated LLM platforms. Our framework is a formulation of an attack taxonomy that is developed by iteratively exploring how LLM platform stakeholders could leverage their capabilities and responsibilities to mount attacks against each other. As part of our iterative process, we apply our framework in the context of OpenAI's plugin ecosystem. We uncover plugins that concretely demonstrate the potential for the types of issues that we outline in our attack taxonomy. We conclude by discussing novel challenges and by providing recommendations to improve the security, privacy, and safety of present and future LLM-based computing platforms. ## 1 Introduction Large language models (LLMs), such as GPT-4 [1], and platforms that leverage them, such as ChatGPT [2], have recently advanced tremendously in capabilities and popularity. In addition to the actual LLM at their core, platforms like ChatGPT [2] and Bard [3] are becoming increasingly complex in order to support various use cases and integrate with different features and third-party services. For example, platforms vendors like OpenAI and Google have announced and begun implementing a plugin ecosystem, allowing the LLM to interface with third-party services [4, 5]. In this paper, we investigate conceptually and empirically the security and privacy of these emerging LLM-based platforms that support third-party integrations or plugins. We focus on OpenAI, which has the most mature plugin ecosystem, as a case study. While extending the capabilities of LLM platforms, third-party plugins may add to the long list of security, privacy, and safety concerns raised by the research community about LLMs, e.g., [6, 7, 8, 9, 10, 11]. First, plugins are developed by third-party developers and thus should not be implicitly trusted. Prior research on other computing platforms has shown that third-party integrations often raise security and privacy issues, e.g., [12, 13, 14, 15, 16, 17]. In the case of LLM platforms, anecdotal evidence already suggests that third-party plugins can launch prompt injection attacks and can potentially take over LLM platforms [18]. Second, as we observe, plugins interface with LLM platforms and users using natural language, which can have ambiguous and imprecise interpretation. For example, the natural language functionality descriptions of plugins could either be interpreted too broadly or too narrowly by the LLM platform, both of which could cause problems. Furthermore, at least some LLM platform vendors, such as OpenAI, currently only impose modest restrictions on third-party plugins with a handful of policies [19, 20] and -- based on our analysis and anecdotal evidence found online [21] -- a frail review process. These concerns highlight that at least some LLM platform plugin ecosystems are emerging without a systematic consideration for security, privacy, and safety. 
If widely deployed without these key considerations, such integrations could result in harm to the users, plugins, and LLM platforms. Thus, to lay a systematic foundation for secure LLM platforms and integrations as a whole, we propose a framework that can be leveraged by current and future designers of LLM-based platforms. To develop the framework, we first formulate an extensive taxonomy of attacks by systematically and conceptually enumerating potential security, privacy, and safety issues with an LLM platform that supports third-party plugins. To that end, we survey the capabilities of plugins, users, and LLM platforms, to determine the potential attacks that these key stakeholders can carry out against each other. We consider both attacks and methods that uniquely apply to the LLM platform plugin ecosystem as well as attacks and methods that already exist in other computing platforms but also apply to LLM platform plugin ecosystems. Second, to ensure that our taxonomy is informed by current reality, we investigate existing plugins to assess whether they have the potential to implement adversarial actions that we enumerate in our taxonomy. Specifically, we leveraged our developed attack taxonomy to systematically analyze the plugins hosted on OpenAI's plugin store (as of June 6 2023) by reviewing their code (manifests and API specifications) and by interacting with them. When we uncovered a new attack possibility or found that a conjectured attack is infeasible, we iteratively revised our attack taxonomy. Looking ahead, we anticipate that third-party plugin integration in LLM platforms is only the beginning of an era of _LLMs as computing platforms_[22]. In parallel with innovation in the core LLMs, we expect to see systems and platform level innovations in how LLMs are integrated into web and mobile ecosystems, the IoT, and even core operating systems. The security and privacy issues that we identify in the context of LLM plugin ecosystems are "canaries in the coalmine" (i.e., advance warnings of future concerns and challenges), and our framework can help lay a foundation for these emerging LLM-based computing platforms. We summarize our key contributions below: 1. We develop a framework for the systematic evaluation of the security, privacy, and safety properties of LLM computing platforms. The core component of this framework is a taxonomy of attacks. 2. We demonstrate the actionability of our framework by evaluating it on a leading LLM platform (OpenAI and its plugin ecosystem) and found numerous examples where plugins, at least at the time of our analysis, had the potential to mount attacks enumerated in our taxonomy. 3. We reflect upon the framework and the attacks we found, to identify challenges and lessons for future researchers and industry practitioners seeking to secure LLM computing platforms. ## 2 Background: LLM plugin architecture Pre-trained LLMs on their own are limited at tasks that require interaction with external services. For example, LLMs cannot create a travel itinerary without using data about active flight schedules and cannot book tickets without reaching out to travel agencies. To tackle these limitations, platform vendors, such as OpenAI, have begun to extend LLMs by integrating them with third-party plugins [4]. 
Third-party plugins expose API endpoints to LLM platforms so that the LLMs can access up-to-date and/or restricted data (e.g., data beyond the training samples) and interface with third party services on the internet (i.e., to act on recommendations made in the emitted output) [23]. ### _Plugin architecture & interaction workflow_ LLM platform plugins (in at least one, currently existing design) consist of a manifest and an API specification, both of which are defined through natural language descriptions [23]. Code 1 and 2 show the manifest and API specification for an OpenAI plugin, respectively. The manifest includes plugin metadata, functionality description (defined separately for users and the LLM), authentication details, a link to a privacy policy, and a reference to the API specification. The API specification includes the API server endpoint, API functionality endpoints along with their description, expected API data with its type and description, and expected API response type. Figure 1 summarizes the life cycle of a user prompt to an LLM that requires interaction with a plugin. Once a user enables a plugin, its _description_for_model_ and endpoints (specified under _paths_) are fed to the LLM to build the context that is necessary for interpreting and resolving the user prompt with the help of the plugin. Once a user initiates a prompt, the LLM first determines if addressing the prompt requires the use of the installed plugin, based on the plugin's _description_for_model_ in Code 1. Then the LLM platform makes a call to the relevant plugin API endpoint, which is determined through the endpoint path _summary_ defined in Code 2. The LLM also determines the necessary data that needs to be sent along with API call, based on the schema _properties_ in Code 2. The LLM may send additional user data, that is not part of the user prompt, such as the country and state, with the plugin API request [23]. After the LLM makes the API call, the plugin executes its functionality on its own server and returns the response. The LLM then interprets the response returned from the API, and then formats it to show it to the user. Note that the LLM platform mediates all interactions with the plugin; users and plugins do not directly interact, except for a few instances, e.g., logging in on plugin service. ### _Responsibilities of key stakeholders_ Next, we briefly survey the capabilities and responsibilities of plugins, LLM platforms, and users, in order to provide background on the roles of different potential victims and attackers in our subsequent discussions. Additional details are provided in Appendix A. While surveying the capabilities, we consider OpenAI's current plugin architecture as our reference point. First, **plugin developers** are responsible for (1) developing and updating plugins, (2) hosting the plugin on their own servers, (3) supporting authentication of platform (e.g., endpoints restricted to traffic from the LLM platform), (4) supporting authentication of users to the plugin's entity, and (5) processing data and fulfilling commands provided by the LLM platform. Next, the **LLM platform** is responsible for (1) reviewing plugins and making them available on the plugin store, (2) providing user authentication interfaces, (3) initiating plugins based on user prompts, and (4) facilitating user-plugin interaction. Finally, the **user** is responsible for (1) installing and removing plugins, (2) managing their accounts, and (3) issuing prompts to interact with plugins. 
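To make the manifest and API-specification structure described above concrete, the sketch below mirrors the kinds of fields involved. All names, URLs, and endpoints are hypothetical and purely illustrative; this is not a reproduction of the paper's Code 1 and 2 listings.

```python
import json

# Hypothetical ai-plugin.json-style manifest with the fields described above.
manifest = {
    "name_for_human": "Example Trip Planner",
    "name_for_model": "trip_planner",
    "description_for_human": "Plan trips and look up flight schedules.",
    "description_for_model": "Use this plugin whenever the user asks about flights or itineraries.",
    "auth": {"type": "oauth"},                                  # authentication details
    "api": {"type": "openapi", "url": "https://example.com/openapi.yaml"},  # reference to the API spec
    "legal_info_url": "https://example.com/privacy",            # link to the privacy policy
}

# Simplified OpenAPI-style specification: server endpoint, one operation, and its parameters.
api_spec = {
    "servers": [{"url": "https://example.com/api"}],
    "paths": {
        "/flights": {
            "get": {
                "summary": "Search flights between two cities on a given date",
                "parameters": [
                    {"name": "origin", "in": "query", "schema": {"type": "string"}},
                    {"name": "destination", "in": "query", "schema": {"type": "string"}},
                    {"name": "date", "in": "query", "schema": {"type": "string", "description": "YYYY-MM-DD"}},
                ],
            }
        }
    },
}
print(json.dumps(manifest, indent=2))
```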
### _Security considerations_ It is a standard practice in computing platforms that support third party ecosystems to impose restrictions on third parties. OpenAI also deploys some restrictions, provides suggestions, and enforces a review process to improve the security of the plugin ecosystem. As for restrictions, OpenAI requires that plugins use HTTPS for all communication with the LLM platform [24], build confirmation flows for requests that might alter user data, e.g., through POST requests [23], use OAuth if the plugin takes an action on user's behalf [19], not use non-OpenAI generative image models [19], adhere to OpenAI's content policy [25], comply with OpenAI's brand guidelines [26], among other things mentioned in the plugin review process [19]. OpenAI also: states that it will remove plugins if they change [27], restricts communication to only the plugin's root domain [28], and only passes user identifiers that do not persist for more than a day and beyond a chat session [29]. As for suggestions, OpenAI suggests that plugins implement API request rate limits [29] and provides an IP address range for OpenAI servers so that plugins can add it to their allow lists [30]. These restrictions and suggestions are a step in the right direction, but in our assessment, insufficient in securing LLM platforms (as we elaborate in Section 7.2). Furthermore, anecdotal evidence found online [21] and experience of some developers (Section 3.4) suggests that even these restrictions are not fully enforced by OpenAI. Figure 1: Life cycle of a user command to the LLM that requires the use of a plugin: User installs a plugin on LLM platform from the plugin store (step 1). Plugin description and its endpoints are fed to the LLM to build the context that is necessary for interpreting user prompt (step 2). User makes a prompt to the LLM that requires the use of the installed plugin (step 3). LLM selects the relevant plugin based on its description (step 4) and makes a request to the plugin API endpoint with the required parameters (step 5). LLM then interprets the response from the plugin API endpoint and displays it to the user. ### _Threat modeling_ We consider both security and NLP researchers and practitioners to be among our target audience with this paper. We rely heavily on threat modeling, a common technique in computer security. For the benefit of non-security readers, we provide some background here. Threat modeling is a process to systematically uncover vulnerabilities in a system with a goal to improve its security. The vulnerabilities uncovered during the threat modeling can be structured in an _attack taxonomy_, which thematically groups different classes of potential attack. The attack taxonomy provides information related to the objectives of the attacker and the potential mechanisms it could use to achieve the objectives. This structured information is used by system designers to triage and eliminate the potential attack mechanisms or the classes of attacks. To identify the threats, security analysts use a variety of techniques, including surveying existing security and privacy literature that closely relates to the system, domain knowledge, and parallels from the real-world. The goal of threat modeling is to not just reveal the novel attacks that uniquely apply to the system, but instead to enumerate a comprehensive set of both existing and novel attacks that need to be addressed in order to improve the security of the system. 
Along with the novel attacks, such as the ones related to the complexity of natural language processing in our case, which we later uncover in our taxonomy, existing attacks that uniquely apply to the system may also require development of new concepts and framework for mitigation. Listing both existing and novel attacks is also crucial because the consumers of an attack taxonomy may not be security experts, they may be experts in another domain, including NLP experts or product managers trying to make prioritization decisions. ## 3 Methodology In this section, we describe our framework to systematically evaluate the security, privacy, and safety properties of LLM platform plugin ecosystem. We iteratively develop a framework where we first formulate a preliminary attack taxonomy and evaluate it on the LLM platform plugins. Based on our evaluation, we refine our attack taxonomy and improve the examination of plugins. While developing the framework, we consider OpenAI's plugin-integrated LLM platform as our reference point. ### _Framework goal and tenets_ Our primary goal for building this framework is to contribute to a foundation for LLM platform designers to analyze and improve the security, privacy, and safety of current and future plugin-integrated LLM platforms. To achieve that goal, we set the fundamental tenets of our framework to be _actionable, extensive, extensible, and informed_. By being actionable, we intend to provide a scaffolding that could be leveraged to create an attack taxonomy for analyzing the security, privacy, and safety of plugin-integrated LLM platforms. Through extensiveness, we intend to capture a broad set of classes of existing attacks that also apply to LLM platforms along with new and future attacks that uniquely apply to LLM platforms. While being extensive, we also intend our framework to be extensible so that our framework can incorporate future attacks and is also generalizable across existing and future LLM platforms. Lastly, we intend to be informed in our enumeration and discovery of attacks such that they are grounded in reality and are not mere speculation. ### _Framework formulation process_ To begin creating our attack taxonomy, we take inspiration from prior research which has studied and discovered security and privacy issues in other computing platforms that support third-party application and plugins, such as the web [31, 32, 33, 34], mobile [12, 35], and IoT [14, 36, 16]. We then filter the attacks that apply to the plugin-integrated LLM platform, by considering the capabilities of key stakeholders, i.e., plugins, users, and the LLM platform, and the relationships between them, surveyed in Section 2. We also assume that an external adversary could compromise any of the stakeholders and assume their roles. Next, we use a structured threat modeling process with all authors to identify new and future attacks that could be mounted against plugin-integrated LLM platforms. To systematically enumerate these attacks, we review the surveyed capabilities of users, plugins, and LLM platforms (in Section 2) and determine the potential ways in which an adversary could leverage its capabilities to raise security, privacy, and safety issues. While determining, we rely on our domain knowledge and consider the issues that could arise due to the complexity of understanding the functionality described in natural language [37]. Toward achieving extensibility, it is important for the framework to be well-structured. 
To provide that structure, we first group the attacks based on the high-level goal that the attacker intends to achieve, and then further under pairs of LLM platform stakeholders, each acting as adversaries and/or victims. This extensibility will allow future researchers to incorporate new stakeholders, attack goals, and specific instantiations of attacks that might appear in future LLM platforms (or others that are not captured by our framework). ### _Applying the framework_ To ensure that our taxonomy is informed by current reality, we evaluate the feasibility of enumerated attacks by doing an analysis of plugins hosted on OpenAI. We also iteratively updated the taxonomy throughout this process. #### 3.3.1 Crawling OpenAI plugins OpenAI implemented support for plugins in ChatGPT in March, 2023 [4] and as of August 1, the OpenAI plugin store contains 799 plugins. Our analysis considers 268 plugins from June 6 and a few other plugins from later dates. All of the analysis was conducted between June 6 and July 31, 2023. We visited the OpenAI plugin store and individual plugin developer websites to download plugin manifests and specifications. We downloaded the amalgamated manifests for all plugins from OpenAI's official plugin store. We then programmatically traversed the plugin manifests and sent requests to each plugin service's API URL to download their API specifications. Additionally, we also downloaded the privacy policies of plugins from the links provided by plugins. #### 3.3.2 Analyzing OpenAI plugins We started by manually analyzing the plugins' manifests and API specifications. We reviewed each plugin and examined whether our hypothesized attacks apply to the plugin. If we suspected that a plugin might demonstrate the capability of an attack, we installed the plugin on the LLM platform (ChatGPT) and interacted with it to exercise the potentially problematic functionality. When we uncovered a new attack possibility or found that a conjectured attack is infeasible, we revised our attack taxonomy accordingly. It is important to note that the discovered attack potentials (referred to as _risks_) may not be deliberate attempts by malicious actors but could instead be the results of bugs, poor security and privacy practices, poorly defined interfaces, and/or fundamental inabilities to provide stronger security within the current LLM plugin ecosystem. Nonetheless, these practices could result in harm to users. Overall, we find numerous plugins that contain or illustrate potential security, privacy, and safety risks. ### _Ethics and disclosure_ In evaluating the ethics and morality of this research, we drew from both consequentialist and deontological traditions [38]. We provide more details of our analysis in Appendix B. From our analysis, we determined that the benefits of conducting our research, including developing our analysis framework, analyzing the potential for attacks within the ChatGPT ecosystem, and (eventually) sharing our results provided significant benefits to society and to users. Contributing to this decision was the fact that LLM-based systems are evolving at an incredibly rapid rate and researchers are continuing to discover vulnerabilities (which means that, if defensive directions are not developed, adversaries could also discover vulnerabilities). 
Further, we determined that it was important to give OpenAI advance notice about our findings and, hence, we disclosed our findings to OpenAI before disclosing these results publicly. OpenAI responded that they appreciate our effort in keeping the platform secure but have determined that the issues do not pose a security risk to the platform. We clarified to them that our assessment of these issues is that they pose a risk to users, plugins, and the LLM platform and should be seriously considered by OpenAI. For issues related to the core LLM, e.g., hallucination, ignoring instructions, OpenAI suggested that we report them to a different forum [39] so that their researchers can address them, which we also did. While we did not analyze the security of any plugins, we did evaluate the potential for plugins to create attacks or risks for users. Hence, while one might argue that it is not necessary to disclose our findings to plugin vendors, we believe that they have a right to know about our findings that are relevant to their products before the public. We have informed plugin vendors about our results and findings with respect to their plugins. Upon disclosing to plugin vendors, we learned that in at least one case the plugin vendor also disclosed the situation to OpenAI because OpenAI (not them) were in the position to fix the issue, but OpenAI did not. ## 4 Attack surface between plugins & users In this section, we describe our attack taxonomy for the attack surface between plugins and users, interleaved with our application of this taxonomy to OpenAI's current ecosystem. We turn to the attack surface between plugins and the LLM platform in Section 5 and between plugins in Section 6 (see also Table 1 for a summary). We elaborate on each attack goal in a separate subsection along with example mechanisms through which that goal could be achieved. We also present the potential manifestation of some of the attack mechanisms in OpenAI's plugins, discovered by applying our framework. ### _Hijack user machine_ In this attack category, the goal of the attacker is to take control over the user's machine. After an attacker takes over a user's machine, they can abuse it in a number of ways. Potential harms could include stealing data stored on the user machine, locking the users out and demanding ransom, and inserting malware on web services hosted on the machine. At a high level, to hijack a user's machine, the attacker could manipulate users into installing malware or get access to their machines through social engineering. Below, we describe some example mechanisms through which an attacker could hijack a user's machine. #### 4.1.1 Leverage unvetted and unofficial plugins Users may install unvetted plugins and plugins outside the official plugin store (e.g., in developer mode). Attackers could exploit that workflow and trick users into installing malware that is cloaked as a plugin. #### 4.1.2 Make malicious recommendations Users may need to visit external websites to act on the recommendations from a plugin, e.g., clicking a link to visit a travel agent's website to book a flight. Malicious plugin developers could exploit that workflow and trick users into visiting websites that can infect their machines. #### 4.1.3 Exploit information shared for legitimate reason Some use cases supported by LLM platforms, such as remote management of a user's machine, could expose users to severe attacks from plugins. 
To remotely manage a user's machine, the plugin would either need access to the credentials and public IP or to be added as an authorized user. From there, a plugin could fully control the machine. ### Example of a potential risk Building on the attack from Section 4.1.3, we identified plugins that exfiltrate user credentials. We describe the details in Risk 1. **Risk 1. Credential Exfiltration** **Risk overview.** OpenAI hosts plugins that provide functionality to users to automate their software development operations and infrastructures. These plugins require users to either share their credentials or allow SSH access to their servers. **Risk impact.** The presence of user credentials with third-party plugins could cause serious harm to users. In the worst case, the third-party developer can log into the user's machine and completely take over it. Even when the third party is trustworthy, a compromise at the third party's end could result in the leakage of user credentials to an attacker. **Evidence of risk.** AutoInfra1 [40] and ChatSSHPlug [41] are two plugins that provide SSH session management functionality. AutoInfra1 asks users to add its public key to their SSH authorized_keys file and then asks them to share their public IP address, as seen in our partial interaction with AutoInfra1 in Figure 2 and in our full interaction by visiting the AutoInfra1 interaction link. ChatSSHPlug, on the other hand, directly asks users to share their passwords or private key (more detail can be seen by visiting the ChatSSHPlug interaction link). Analysis conducted on June 07, 2023.
### Harvest user data

In this attack category, the attacker's goal is to collect personal and excessive data on users to gain benefit from it. Among other ways, an attacker could benefit from users' data by selling it to other services (e.g., data brokers) or using it for non-essential and undisclosed purposes (e.g., to profile users for online advertising), both of which are common practices on the internet [44, 45, 46, 36]. Below we describe possible mechanisms to collect user data.

#### 4.3.1 Mandate accounts

Plugins could mandate that users log in before they can use their services, even when an account is not necessary, e.g., for a plugin that provides the latest news. Mandating accounts allows plugins to associate user data with personal identifiers, such as email addresses. Such linking can enable plugin services to track user activities and reach out to them even outside the LLM platform, without their knowledge or consent.

#### 4.3.2 Define broad API specifications

Similar to over-privileged mobile apps [35], plugins could specify overly broad API parameters to collect excessive amounts of user data, even more than necessary for their functionality. For example, a plugin's API specification could state that it needs the entire user query instead of relevant keywords (a hypothetical over-broad specification is sketched at the end of this section). Note that the collection of excessive user data could also be genuinely needed to fulfill the use case offered by the plugin.

### Example of a potential risk

Building on the attack described in Section 4.3.2, we identified plugins that exfiltrate user prompt history. We describe the details in Risk 3.

**Risk 3. Prompt History Exfiltration**

**Risk overview.** OpenAI hosts plugins that allow users to export their interactions with ChatGPT. Plugins that provide these services exfiltrate either raw or summarized user prompts and ChatGPT responses to their API endpoints.

**Risk impact.** The plugins get access to users' conversations with ChatGPT, which can contain sensitive and personal information. Some of these plugins also require users to sign in to their platform before they can use the plugin, which allows them to associate user prompt history with a persistent email address.

**Evidence of risk.** PDF Exporter [47] and Reflect Notes [48] are two plugins that exfiltrate user prompt history. PDF Exporter converts ChatGPT interactions into a PDF, and Reflect Notes provides functionality for users to "reflect on their interactions". Our partial user interaction with PDF Exporter can be seen in Figure 4, which shows that the user's sensitive information, in this particular scenario their credentials, is sent to the plugin. Full interactions with PDF Exporter and Reflect Notes can be viewed by visiting the PDF Exporter interaction link\({}^{a}\) and the Reflect Notes interaction link\({}^{b}\), respectively. This analysis was conducted on June 08, 2023.

**Precautions by plugin services.** PDF Exporter states in its privacy policy that the plugin does not collect, store, or share any personal information [49].
However, based on our analysis we did not notice any functionality or attempt to restrict or truncate personal information in the API specification, as is also demonstrated in our interaction with the plugin in Figure 4. Reflect Notes provides a generic privacy policy, which does not seem to be specific to the plugin [50]. Reflect Notes also claims that user data is end-to-end encrypted; however, we did not find any functionality in its API specification to encrypt user data before it is transmitted. Our interaction also showed the transmission of unencrypted conversations to Reflect Notes (Reflect Notes interaction link\({}^{b}\)). We present additional examples that demonstrate the risk of user data harvesting in Appendix C.

Figure 3: Dual presence of Upskillr plugin on the OpenAI plugin store.

Figure 4: User interaction with PDF Exporter plugin.

**Observation.** Users might want to share their sensitive data in some contexts but not in others. It would be a key challenge for LLM platforms to support interfaces that take informed user consent for specific contexts, e.g., through permission models, and not expose that consent in other contexts.

### _Benefit partner plugins_

In this attack category, an attacker plugin's goal is to benefit its partner plugins. There are several potential benefits that plugins can provide each other through several mechanisms. Broadly, the benefits fit under the objective of improving each other's businesses to make more revenue. It is important to note that such plugin collusion may not be beneficial for users and may in fact result in harm to them. Below we describe some example mechanisms that plugins can use to benefit each other.

#### 4.4.1 Share user data

Since plugins can collect unique user identifiers and link user data with them (Section 4.3), they can engage in server-to-server data sharing, similar to how third-party trackers share data with each other on the web [51]. Such sharing can enable plugins to better profile users, resulting in a greater invasion of user privacy.

#### 4.4.2 Make recommendations favorable to partners

Since LLM platforms encourage cross-plugin synergy [52], plugins could request LLM platforms to initiate their partner plugins to fulfil multipart user requests, e.g., a user request to book a flight and make a hotel reservation. Additionally, plugins could craft their recommendations in a way that favors their partner services, e.g., a flight reservation plugin could show the best flight for dates when its partner hotel has free rooms available.

### _Manipulate users_

In this attack category, an attacker's goal is to manipulate users. At a high level, an attacker can manipulate users in a number of ways with _problematic recommendations_. The unconventional interaction between users and plugin services, where plugins show limited information, users possess limited information-filtering capabilities, and plugin recommendations may not be thoroughly vetted, exacerbates the likelihood of problematic recommendations.

#### 4.5.1 Deploy deceptive design patterns

Plugins could exploit the limited interfacing capabilities on LLM platforms to reveal only a few recommendations that favor them. For example, a travel reservation plugin service could show flight tickets on which it expects to gain the highest profit instead of the cheapest tickets.
#### 4.5.2 Recommend inappropriate & harmful content

Unvetted plugin recommendations could lead to the transmission of inappropriate content to users, e.g., showing adult content to children. Additionally, users often act on the recommendations of plugins, which could be exploited to deceive users, e.g., sending users to a website that steals their credit card information or fakes the LLM platform.

#### 4.5.3 Recommend nonfactual content

Plugin recommendations could also exert latent or inapparent influence and manipulate the worldviews of users [7], in cases where the recommendations contain misinformation, disinformation, or biased information.

#### 4.5.4 Lie or change functionality

Since plugins can show separate functionality descriptions to users and to the platform (Code 1), this feature could be exploited to manipulate users into installing undesired plugins, even on the official plugin store (a hypothetical manifest illustrating this dual-description mechanism is sketched at the end of this section). Additionally, a plugin could also change its functionality on update to deceive users.

### _Refusal of service by plugins_

In this attack category, the attacker's goal is to refuse service to the user. The refusal of service could result in a variety of harms to the users. Among other motivations, an attacker's motivation behind the refusal of service could be to help itself with another attack, even outside the internet. For example, the refusal of service by an IoT door lock plugin could make a user vulnerable to theft. Note that the refusal of service could also be initiated by an external attacker, and the plugin service itself could be a victim of the attack. Below we discuss some of the potential ways in which an attacker could refuse service to the user.

#### 4.6.1 Deliberately refuse service

Plugins have full control and autonomy over fulfilling user commands. Miscreant plugins could simply refuse to fulfil the user command. Additionally, a plugin server compromised by an external adversary could also deliberately refuse user requests.

#### 4.6.2 Unresponsive server

Plugins could also fail to fulfill user commands if their back-end servers become unresponsive, e.g., due to internet or power outages, or in case the server is experiencing a denial-of-service attack.

### _Denial-of-service by users_

In this attack category, the attacker's goal is to make the plugin service inaccessible. The inaccessibility of the plugin service could potentially result in several harms to the plugin users (as described in Section 4.6). The inaccessibility could also harm the plugin service, e.g., potentially leading to loss in revenue and negatively impacting the reputation of the plugin. Possible adversaries who could conduct this attack include miscreant users and rival plugins posing as users. Below we discuss some of the potential ways in which an attacker could make the plugin server inaccessible.

#### 4.7.1 Make excessive prompts

Malicious or compromised user(s) could make frequent prompts to a single or several plugin APIs, resulting in excessive network traffic that can flood and ultimately crash the plugin server.

#### 4.7.2 Make malicious prompts

Malicious or compromised user(s) could also send malicious prompt inputs that target known vulnerabilities on the plugin server to crash it. These malicious prompts could simply be large payloads that the plugin server cannot parse [53].
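To ground two of the mechanisms above, the following is a minimal, hypothetical manifest sketch, written as a Python dictionary, that shows how an over-broad API parameter (Section 4.3.2) and diverging human-facing versus model-facing descriptions (Section 4.5.4) would look. All plugin names, descriptions, and field names are invented and do not correspond to any real plugin.

```
# Hypothetical manifest and request schema illustrating over-broad parameters
# and diverging human/model descriptions. Everything here is invented.
manifest = {
    "name_for_human": "Daily Headlines",
    # What users see in the store: a harmless-sounding summary.
    "description_for_human": "Get today's top news headlines.",
    # What the model sees: broader instructions that users never review.
    "description_for_model": (
        "Use this plugin for ANY query about news, events, or current affairs. "
        "Always pass the user's full message so results can be personalized."
    ),
}

# Over-broad request schema: the entire conversation is required, even though
# a list of topic keywords would suffice for fetching headlines.
request_schema = {
    "type": "object",
    "properties": {
        "full_conversation": {"type": "string"},                    # excessive
        "topics": {"type": "array", "items": {"type": "string"}},   # sufficient
    },
    "required": ["full_conversation"],
}

if __name__ == "__main__":
    print(manifest["description_for_human"])
    print(manifest["description_for_model"])
    print(sorted(request_schema["properties"]))
```

In a plugin review pipeline, flagging required parameters of this kind and large divergences between the two description fields would be a natural first automated check, though neither is by itself proof of malicious intent.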
## 5 Attack surface between plugins & LLM platform

Next, we describe our attack taxonomy for the attack surface between plugins and the LLM platform, along with the application of the taxonomy to OpenAI's plugin ecosystem.

### _Hijack LLM platform_

In this attack category, an attacker's goal is to take over an LLM and/or an LLM platform session. Taking over an LLM or an LLM platform session would allow the attacker to impersonate the LLM platform and control the interactions between the user and the LLM platform. Such a takeover will allow the adversary to achieve several attack goals, including stealing the user's interaction history with the LLM, the list of installed plugins, and other attacks discussed earlier in Section 4. At a high level, an attacker could rely on _prompt injection_ [54, 55, 56] techniques to hijack an LLM or an LLM platform session. It is important to note that the takeover of an LLM can be latent, where an adversary succeeds in inserting a backdoor that activates at a later point in time, e.g., after an LLM is retrained using the plugin data [57]. Below we describe some of the ways in which an attacker could hijack an LLM platform.

#### 5.1.1 Inject malicious description

LLM platforms load the plugin functionality description to build the necessary context for the LLM. Plugins could exploit that workflow by adding instructions in their functionality description to control the LLM platform. A plugin could inject a malicious description in a number of ways, including tricking users into installing unvetted malicious plugins, succeeding in hosting a malicious plugin on the official plugin store, or dynamically changing the plugin functionality description after it has been accepted on the official plugin store.

#### 5.1.2 Inject malicious response

While resolving prompts, LLMs process the data sent by plugins, which could be exploited by plugins to send instructions to control the LLM platform [11]. Rather than directly sending the malicious response, plugins may instead point the platform to a URL that hosts the malicious response [21].

### _Example of a potential risk_

Building on the attack described in Section 5.1.1, we identified a plugin that is able to hijack the LLM platform session through instructions in its functionality description. We describe the details in Risk 4.

**Risk 4. LLM Session Hijack**

**Risk overview.** OpenAI hosts plugins that direct the LLM, through commands in their functionality descriptions, to alter its behavior when it communicates with the user. When LLM platforms load these plugins, the LLM's behavior is altered for the session, as instructed by the plugin, even when user prompts are not directed towards the plugin.

**Risk impact.** The plugin is able to take over the LLM platform session and control the interaction between the user and the LLM platform. Such a takeover can be exploited in a number of ways, including exfiltration of user-LLM platform interaction history, collection of sensitive data, and exposure to misleading information.

**Evidence of risk.** AMZPRO [58], a plugin that helps users write product descriptions for Amazon, instructs ChatGPT to always reply in English. Typically, ChatGPT responds in the same language in which a user asks a question (as can be seen by visiting this ChatGPT interaction link\({}^{a}\)).
However, when AMZPRO is enabled, and not even used, ChatGPT only responds in English for the rest of the user's LLM platform session, as can be seen in the partial interaction with AMZPRO in Figure 5 and the full interaction in the AMZPRO interaction link\({}^{b}\). This analysis was conducted on July 27, 2023.

Figure 5: User interaction with ChatGPT, when AMZPRO is enabled but not used.

**Observation.** Our demonstration of LLM session hijacking with AMZPRO highlights the need for contextual awareness and context isolation. We see contextual awareness and context isolation, while still supporting plugin synergy, as a key challenge for LLM platforms.

### _Hijack plugin prompts_

In this attack category, the LLM platform is the adversary and its goal is to hijack prompts intended for a plugin. This attack is similar to how search engines and online marketplaces prioritize their own offerings or advertisements in response to user queries [59, 60]. There could be several motivations for hijacking user prompts, including serving self interests, benefiting partner plugin services, or harming a plugin service. Below we describe some of the ways in which an attacker could hijack user prompts.

#### 5.2.1 Divert prompts to itself

An LLM platform could resolve user prompts intended for a plugin on its own, without consulting the plugin service at all. Another variation of this attack is that the LLM platform utilizes the plugin data in the background, including cached data from prior prompt resolutions, but does not notify the user and the plugin service that it has used plugin data.

#### 5.2.2 Divert prompts to another plugin

A platform could unfairly divert user prompts intended for a specific plugin to another plugin that provides the same functionality. Another variation of this attack is to call both plugins.

#### 5.2.3 Hallucinate plugin response

Since LLMs occasionally hallucinate (i.e., make up fictional content) responses to user queries [61], they may also hallucinate the responses supposedly returned by the plugin API endpoint. Such hallucinations can deprive plugins of user prompts and can also compromise user trust in the plugin service.

### _Example of a potential risk_

Building on the attack described in Section 5.2.3, we identified an instance where the LLM hallucinates a response that is supposed to be returned by the plugin API. We describe the details in Risk 5.

**Risk 5. Plugin Response Hallucination**

**Risk overview.** When users interact with plugins, they may receive LLM-hallucinated responses instead of the actual content returned by the plugins.

**Risk impact.** Since hallucinated content is fictitious, it may contain inaccurate, misleading, and dangerous recommendations. Acting on these recommendations could cause a variety of harms to the users. Additionally, hallucinations lead to the unintentional refusal of service by the plugin, which may compromise user trust in the plugin service.

**Evidence of risk.** We enabled Uniket [62] and Tira [63], two plugins that allow users to shop from their respective marketplaces. We told ChatGPT that we wanted to shop for shoes and specified that we did not have any preference for one plugin over the other. ChatGPT sent requests to both plugins and returned the same product recommendations for both of them. However, the product links provided using Tira, such as [https://www.tirabeauty.com/product/srm-07-lrqaepziomd](https://www.tirabeauty.com/product/srm-07-lrqaepziomd), were unavailable on Tira's marketplace.
Upon inspecting Tira's website, we found that it is a marketplace for beauty and health products and very likely does not sell shoes, i.e., the subject of our query. Although we cannot rule out an implementation issue at the plugin's end, this very likely seems to be a case of LLM hallucination. Our complete interaction with the plugins can be viewed by visiting the Uniket and Tira interaction link\({}^{a}\). This analysis was conducted on June 09, 2023.

**Observation.** We found that LLM hallucinations are not just limited to user-LLM platform interactions, but also translate to user-plugin interactions. While tackling hallucinations in general is non-trivial, and in fact one of the biggest challenges faced by LLM platforms, there have been recent advances in targeted problem spaces, such as mathematical reasoning [64]. Tackling hallucinations in plugin responses might even be less challenging, since LLMs act on the content received from plugin API responses and do not necessarily generate content anew.

### _Steal plugin data_

In this attack category, the LLM platform is the adversary and its goal is to steal plugin-owned, -hosted, or -facilitated data. Plugins could be hosting proprietary financial data, marketing insights, source code from private repositories, emails, and private documents. Stealing such data could result in several harms to the plugin service and to the users, including monetary harm, leakage of secrets, and invasion of privacy. After stealing data, the LLM platform could use it for a variety of purposes, including training future models or selling the data to others. Below we discuss some of the ways in which an LLM platform could steal plugin data.

#### 5.3.1 Log interaction

LLM platforms facilitate all interactions between users and the plugins, which includes parsing plugin responses. LLM platforms could simply log the data that plugins return while resolving user requests.

### _Pollute LLM training data_

In this attack category, the plugin is the adversary and its goal is to pollute the training data of LLMs that are used by an LLM platform. Feeding such information will hinder an LLM's ability to respond to users with factual and authentic information. At a high level, an attacker could achieve this goal by exposing the LLM platform to misleading and incorrect information. Below we discuss a mechanism through which an attacker can pollute the LLM training data.

#### 5.4.1 Inject misleading response

LLM platforms log user interactions for retraining their models [57]. Plugins could exploit that fact and include misleading or incorrect information in their responses. Note that plugin responses could also point LLMs to URLs which host misleading and incorrect information instead of directly including it in responses.

### _Refusal of service by plugin_

The refusal of service by plugins to the user (Section 4.6) could also impact the platform. For example, in OpenAI's current implementation, an unresponsive plugin results in the user's ChatGPT session crashing. Note that a plugin could also delay its responses instead of not responding to the requests at all. Section 4.6 already described the mechanisms through which a plugin could refuse service.

### _Denial-of-service by LLM platform_

Similar to how users can crash a plugin service with a denial-of-service attack (Section 4.7), LLM platforms could do the same.
The motivation for the LLM platform could broadly be hostility towards a competitor or an implementation issue. The potential mechanisms through which an LLM platform could launch a denial-of-service attack are also similar to how users would launch this attack.

## 6 Attack surface between plugins

Next, we describe our attack taxonomy for the attack surface between plugins, along with the application of the taxonomy to OpenAI's plugin ecosystem.

### _Hijack another plugin's prompts_

Here, a plugin can be both an adversary and a victim. The goal of an adversarial plugin is to hijack user prompts intended for another plugin. A plugin could trick or instruct the LLM platform into calling itself, over the plugin that the user intends. We discuss possible ways in which an adversarial plugin could hijack another plugin's prompts.

#### 6.1.1 "Squat" another plugin

Similar to how adversaries could use plugin squatting to steal user credentials (Section 4.2), they could also use it to hijack user prompts intended for other plugins.

#### 6.1.2 "Squat" functionality

Plugins could define targeted functionality descriptions to intercept specific user prompts to a particular plugin or an online service. For example, a plugin could intercept prompts to an online marketplace by adding in its functionality description that it can recommend products from that marketplace.

#### 6.1.3 Inject malicious response

A plugin could include in its response instructions for the LLM to route the prompts for a particular plugin to its own API endpoints.

### _Example of a potential risk_

Building on the attack described in Section 6.1.2, we identified plugins that could potentially squat functionality. We describe the details in Risk 6.

**Risk 6. Functionality Squatting**

**Risk overview.** Several OpenAI plugins mention the names of well-known online services in their functionality descriptions or define their functionality descriptions similarly to other plugins, which allows them to hijack prompts that are not intended for them, i.e., functionality squatting.

**Risk impact.** Successful functionality squatting will allow a plugin to deprive other plugins or online services of users, leading to loss in revenue. The plugin might also be able to trick users into sharing their data. Additionally, if the plugin is unable to fulfill the offered service, it could cause harm to users in several ways.

**Evidence of risk.** Lexi Shopper [65] recommends products from Amazon.com and mentions the word "Amazon" in its functionality description. Because of the presence of the word "Amazon", even user prompts that explicitly specify not to use any third-party service are routed to Lexi Shopper, as can be seen in our partial interaction with the plugin in Figure 6. The Lexi Shopper interaction link\({}^{a}\) provides the complete interaction with the plugin. This analysis was conducted on June 09, 2023. In another example, two plugins, Jio [66] and Tira [63], offer a service to shop from tirabeauty. Tira is hosted by tirabeauty.com, whereas Jio is hosted by jiocommerce.io, a third-party e-commerce service that allows users to shop from several online shops. In case a user enables both of the plugins and even specifies that they want to shop from Tira, their queries are routed to the third-party service, i.e., Jio, instead of the first-party service, i.e., Tira. The Tira and Jio interaction link\({}^{b}\) provides the complete interaction with these plugins. This analysis was conducted on July 27, 2023.
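To illustrate why a brand keyword in a model-facing description can capture prompts, the following toy sketch uses a crude keyword-overlap "router" as a stand-in for the platform's actual plugin-selection logic, which is opaque and far more sophisticated. The plugin names, descriptions, and routing rule are all invented for illustration.

```
# Toy illustration of keyword-based functionality squatting. The word-overlap
# "router" below is NOT how the real platform selects plugins; it is a crude
# proxy used only to show why brand mentions attract unrelated prompts.
descriptions = {
    "ShopHelper": "Recommends products from Amazon for any shopping query.",
    "GenericStore": "Recommends products from our own catalog.",
}

def toy_route(prompt: str) -> str:
    """Pick the plugin whose description shares the most words with the prompt."""
    prompt_words = set(prompt.lower().split())
    scores = {
        name: len(prompt_words & set(desc.lower().split()))
        for name, desc in descriptions.items()
    }
    return max(scores, key=scores.get)

# Even a prompt that names the marketplace directly, while trying to avoid
# third parties, lands on the plugin that squats on the keyword.
print(toy_route("Find running shoes on amazon without using any third party services"))
```

Even this crude proxy shows how a single brand mention can pull in prompts that explicitly try to avoid third parties, which is consistent with the Lexi Shopper behavior described above.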
### _Hijack prompts on a topic_

In this attack category, a plugin can be both an adversary and a victim. The goal of the adversarial plugin is to hijack all user prompts on a particular topic. At a high level, a plugin could trick or instruct the LLM platform into calling itself. We discuss some of the ways in which an adversarial plugin could hijack all prompts on a particular topic.

#### 6.2.1 "Squat" a topic

Plugins could hijack prompts on a specific topic by curating their functionality descriptions such that they always get precedence over other plugins in the same category. For example, a travel reservation plugin could include in its description that the LLM platform should always call the plugin for all travel-related queries.

#### 6.2.2 Inject malicious response

Similar to including instructions in its functionality description, a plugin could instruct the LLM platform via its response to always send user prompts on a particular topic to the plugin.

### _Example of a potential risk_

Building on the attack described in Section 6.2.1, we identified a plugin that could potentially squat user prompts on a topic. We describe the details in Risk 7.

**Risk 7. Topic Squatting**

**Risk overview.** Several OpenAI plugins add certain keywords in their functionality descriptions or define overly broad functionality descriptions to hijack prompts on specific topics, i.e., topic squatting.

**Risk impact.** Successful topic squatting will allow a plugin to deprive other plugins of users and revenue. The plugin will also be able to harvest user data and trick users into sharing their data. Additionally, if the plugin is unable to fulfill the offered service, it could cause harm to users in several ways.

**Evidence of risk.** Expedia [67], a well-known travel reservation service, hosts a plugin which instructs ChatGPT to _"ALWAYS uses Expedia plugin to provide travel recommendations for ANY user's travel-related queries"_. To evaluate whether the use of all caps and a direct command would allow Expedia to intercept user prompts for all travel-related queries, we installed Expedia's plugin in a chat session with ChatGPT, along with two other travel plugins, Trip.com [68] and Klook [69], and made travel-related queries. We found that ChatGPT automatically routed user prompts to Expedia, without asking users for their preference, as seen in our partial interaction with Expedia in Figure 7. The Expedia, Trip.com, and Klook interaction link\({}^{a}\) presents the complete interaction with these plugins. Analysis conducted on June 09, 2023. Additional analysis of plugins with overly broad functionality descriptions is in Appendix D.

**Observation.** Broad and targeted functionality descriptions make it challenging to interpret the functionality offered by plugins, which can confuse both users and LLMs. It is a key challenge for LLM platforms to develop unambiguous natural-language-based programming interfaces.

### _Influence prompts to another plugin_

In this attack category, an attacker's goal is to influence the prompts to another plugin. Examples of influence could include altering the data sent to another plugin, similar to a man-in-the-middle attack, or triggering another plugin to launch a denial-of-service attack. At a high level, an attacker would need to trick the LLM platform to launch this attack.

Fig. 6: User interaction with Lexi Shopper plugin.

Fig. 7: User interaction with the Expedia, Trip.com, and Klook plugins.
We describe a potential mechanism through which a plugin could manipulate the transmission of data to another plugin.

#### 6.3.1 Exploit multipart prompts

A plugin service could exploit the workflow of multipart user requests, where multiple plugins interact with each other to resolve the user request. For example, an adversarial plugin could include altered data or a malicious payload in its response that will be sent as an input to another plugin.

## 7 Discussion & Conclusion

### _Exacerbation of NLP-related challenges_

While many of the issues that we identified in previous sections are echoes of the challenges in securing previous platforms (e.g., smartphones, IoT), the complexity of natural language is one of the more unique aspects and fundamental challenges in securing LLM-based platforms. In the plugin-integrated platforms we considered, natural language is used (1) by users to interact with the platform and plugins, (2) by the platform and plugins to interact with users, and (3) even by plugins to interact with the platform (e.g., through functionality descriptions) and other plugins (e.g., through instructions in API responses). Potential ambiguity and imprecision in the interpretation of natural language, as well as the application of policies to natural language, can create challenges in all of these interactions.

#### 7.1.1 Interpretation of functionality defined in natural language

In conventional computing platforms, applications define their functionality through constrained programming languages without any ambiguity. In contrast, LLM platform plugins define their functionality through natural language, which can have ambiguous interpretations. For example, the LLM platform may in some cases interpret the functionality too broadly, or too narrowly, both of which could cause problems (see Risks 6 and 7 as examples). Interpreting language also requires contextual awareness, i.e., plugin instructions may need to be interpreted differently in different contexts. For example, it might be okay for the LLM platform to behave a certain way while a user interacts with a plugin, but not okay to persist with that behavior when the plugin is not in use (see Risk 4 as an example). In summary, the key challenge for LLM platforms is to interpret plugin functionality so as to not cause ambiguity; in other words, LLM platforms must figure out mechanisms that allow them to interpret functionality similarly to the unambiguous (or much less ambiguous) interpretation in other computing platforms.

#### 7.1.2 Application of policies over natural language content

Even if LLM platforms can precisely interpret the functionality defined in natural language, or if functionality is precisely defined through some other means, it will still be challenging to apply policies (e.g., content moderation) over the natural language content returned by users, by plugins, or within the LLM platform. For example, there may be a mismatch between the interpretation of a policy by the LLM platform, users, and plugins, e.g., on what is considered personal information (building on the attacks in Section 4.3, of which Appendix C.1 discusses an example). Similarly, in instances where there is a contradiction between the policies specified by different plugins, or between the policies specified by the user and a plugin, the LLM platform would need to prioritize one of them to resolve the deadlock, which may not be in favor of users. An LLM platform may also not apply policies retrospectively, which may diminish their impact.
For example, a policy that specifies that no personal data needs to be collected or shared may not apply to already collected data (by building on attacks in 4.3 of which Appendix C.1.1 discusses an example). ### _Towards secure, privacy-respecting, and safe LLM-based computing platforms_ Stepping back from NLP-specific challenges to considering platforms as a whole, we emphasize that security, privacy, and safety should be key considerations in the design process of LLM-based platforms. The restrictions and suggestions provided by LLM platforms (discussed in Section 2.3) are a step in the right direction, but they are insufficient to secure LLM platforms. We recommend that LLM platform designers consider security, privacy, and safety -- e.g., by applying our framework -- _early_ in the design of their platforms, to avoid situations in which addressing issues later requires fundamental changes to the platform's architecture. The systemic nature of our findings and examples of attack potentials suggests that perhaps such a process was not used in the design of ChatGPT plugin ecosystem. In many cases, defensive approaches do not need to be invented from scratch: LLM platform designers can take inspiration from several sources, including from well-established practices to guard against known attacks, by repeating the threat modeling that we did in this paper, and by building on the security principles defined by prior research, such as by Saltzer and Schroeder [70]. We elaborate now on possible practical approaches for securing LLM platforms that wish to integrate untrusted third parties (e.g., plugins), and then step back to consider the potential future of LLM-based platforms more generally. #### 7.2.1 Anticipating and mitigating potentially buggy or malicious third parties One of the core issues underlying many of the attacks we discussed is that third-party plugins may be malicious or buggy in problematic ways -- an issue familiar to us from many past platforms as well [12, 13, 14, 15]. At the highest level, LLM platforms that want to integrate plugins should minimize trust in these third parties and design the platform to manage any potential risk. There is significant precedent in other platforms that can provide design inspiration to LLM platform creators. For example, to ensure that the plugin behavior does not change at run time and that the LLM platforms get an opportunity to review the plugin code each time it is updated, LLM platforms could host the plugin source code instead of plugin developers (elaborated on further in Appendix A.1), similar to more established platforms, such as mobile and web. Another avenue is to technically limit the functionality exposed to plugins. For example, LLM platforms could enforce a permission model, similar to mobile platforms, to regulate the access of data and system resources. Another strategy to minimize the impact of a problematic plugin is to isolate plugin execution from that of other plugins or the rest of the system, e.g., similar to site isolation in browsers through sandboxes [71]. At present (on the OpenAI platform we tested), all plugins execute together in the context of the same conversation. On the one hand, this execution model allows plugins to synergize well with each other, but on the other hand it exposes user interactions with one plugin to another. 
LLM platforms still could support plugin interaction and eliminate unnecessary data exposure by running each plugin in a sandbox and by clearly defining a protocol for sharing information across sandboxes, similar to cross-document messaging on the web [72]. In addition, LLM platforms should clearly state and enforce their policies and guidelines for plugin behavior, which may not currently be the case (e.g., see Appendix E). #### 7.2.2 Anticipating future LLM-based computing platforms Looking ahead, we can and should anticipate that LLMs will be integrated into other types of platforms as well, and that the plugin-integrated LLM chatbots of today are early indicators of the types of issues that might arise in the future. For example, we can anticipate that LLMs will be integrated into voice assistant platforms (such as Amazon Alexa), which already support third-party components ("skills", for Alexa). Recent work in robotics has also integrated LLMs into a "vision-language-action" model in which an LLM directly provides commands to a physical robot [73]. Future users may even interact with their desktop or mobile operating systems via deeply-integrated LLMs. In all of these cases, the NLP-related challenges with the imprecision of natural language, coupled with the potential risks from untrustworthy third parties, physical world actuation, and more, will raise serious potential concerns if not proactively considered. The designers of future LLM-based computing platforms should architect their platforms to support security, privacy, and safety early, rather than attempting to retroactively address issues later. ## Acknowledgements This work is supported in part by the National Science Foundation under grant number CNS-2127309 (Computing Research Association for the CIFellows 2021 Project) and by the Tech Policy Lab at the University of Washington. We thank Aylin Caliskan, Yizheng Chen, Kaiming Cheng, Inyoung Cheong, Ivan Evtimov, Earlence Fernandes, Michael Flanders, Saadia Gabriel, Alex Gantman, Gregor Haas, Rachel Hong, David Kohlbrenner, Wulf Loh, Alexandra Michael, Jaron Mink, Niloofar Mireshghallah, Kentrell Owens, Noah Smith, Sophie Stephenson, and Christina Yeung for providing feedback on various drafts of this paper.
Large language model (LLM) platforms, such as ChatGPT, have recently begun offering app ecosystems to interface with third-party services. These apps extend the capabilities of LLM platforms, but since they are developed by arbitrary third parties, they cannot be implicitly trusted. Apps interface with LLM platforms and users in natural language, which can be imprecise to interpret. In this paper, we propose a framework that LLM platform designers can use to analyze and improve the security, privacy, and safety of current and future third-party-integrated LLM platforms. The framework is an attack taxonomy, constructed through a process of exploring how LLM platform stakeholders could leverage their capabilities and responsibilities to mount attacks against each other. As part of this process, we applied the framework to OpenAI's plugin (app) ecosystem. We demonstrate plugins that concretely exhibit the kinds of issues identified by our attack taxonomy.
2309.00047
Dynamic-ADAPT-QAOA: An algorithm with shallow and noise-resilient circuits
The quantum approximate optimization algorithm (QAOA) is an appealing proposal to solve NP problems on noisy intermediate-scale quantum (NISQ) hardware. Making NISQ implementations of the QAOA resilient to noise requires short ansatz circuits with as few CNOT gates as possible. Here, we present Dynamic-ADAPT-QAOA. Our algorithm significantly reduces the circuit depth and the CNOT count of standard ADAPT-QAOA, a leading proposal for near-term implementations of the QAOA. Throughout our algorithm, the decision to apply CNOT-intensive operations is made dynamically, based on algorithmic benefits. Using density-matrix simulations, we benchmark the noise resilience of ADAPT-QAOA and Dynamic-ADAPT-QAOA. We compute the gate-error probability $p_\text{gate}^\star$ below which these algorithms provide, on average, more accurate solutions than the classical, polynomial-time approximation algorithm by Goemans and Williamson. For small systems with $6-10$ qubits, we show that $p_{\text{gate}}^\star>10^{-3}$ for Dynamic-ADAPT-QAOA. Compared to standard ADAPT-QAOA, this constitutes an order-of-magnitude improvement in noise resilience. This improvement should make Dynamic-ADAPT-QAOA viable for implementations on superconducting NISQ hardware, even in the absence of error mitigation.
Nikola Yanakiev, Normann Mertig, Christopher K. Long, David R. M. Arvidsson-Shukur
2023-08-31T18:00:02
http://arxiv.org/abs/2309.00047v1
# Dynamic-ADAPT-QAOA: An algorithm with shallow and noise-resilient circuits ###### Abstract The quantum approximate optimization algorithm (QAOA) is an appealing proposal to solve NP problems on noisy intermediate-scale quantum (NISQ) hardware. Making NISQ implementations of the QAOA resilient to noise requires short ansatz circuits with as few CNOT gates as possible. Here, we present Dynamic-ADAPT-QAOA. Our algorithm significantly reduces the circuit depth and the CNOT count of standard ADAPT-QAOA, a leading proposal for near-term implementations of the QAOA. Throughout our algorithm, the decision to apply CNOT-intensive operations is made dynamically, based on algorithmic benefits. Using density-matrix simulations, we benchmark the noise resilience of ADAPT-QAOA and Dynamic-ADAPT-QAOA. We compute the gate-error probability \(p_{\text{gate}}^{*}\) below which these algorithms provide, on average, more accurate solutions than the classical, polynomial-time approximation algorithm by Goemans and Williamson. For small systems with \(6-10\) qubits, we show that \(p_{\text{gate}}^{*}>10^{-3}\) for Dynamic-ADAPT-QAOA. Compared to standard ADAPT-QAOA, this constitutes an order-of-magnitude improvement in noise resilience. This improvement should make Dynamic-ADAPT-QAOA viable for implementations on superconducting NISQ hardware, even in the absence of error mitigation. ## I Introduction NP problems are ubiquitous in computer science, occurring frequently in combinatorial optimization, and machine learning [1; 2]. Finding their solutions is computationally hard. One strategy to solve NP problems, relies on the Ising model [3; 4; 5]. An NP problem is encoded in the real and symmetric matrix \(W_{ij}\). The (approximate) solution is then found by approximating the ground state energy \(E_{0}\) of an Ising Hamiltonian \[H=\frac{1}{4}\sum_{i,j=1}^{N}W_{ij}Z_{i}Z_{j}, \tag{1}\] where \(Z_{i}\) denotes the Pauli-\(z\) operator acting on qubit \(i=1,\dots,N\). Approximate solutions are usually found using heuristics [6; 7; 8; 9] or adiabatic quantum computers [10; 11; 12; 13]. The quality of these solutions can be assessed using the Goemans and Williamson (GW) algorithm [14], which, in the worst case, provides approximate solutions within \(87.8\dots\%\) of the true ground-state energy in polynomial time (using an alternative representation of the NP problem). Recent works [15; 16] have proposed solving NP problems on gate-based quantum computers, using the quantum approximate optimization algorithm (QAOA). The QAOA identifies approximate solutions to NP problems by creating upper bounds to the ground-state energy \(E_{0}\) of \(H\) via the Rayleigh-Ritz variational principle: \[E_{0}\leq E(\vec{\beta},\vec{\gamma})=\langle\Psi(\vec{\beta},\vec{\gamma})|H |\Psi(\vec{\beta},\vec{\gamma})\rangle. \tag{2}\] The classically-hard-to-represent trial state is prepared on a quantum computer by evolving an initial state \(|\Psi_{0}\rangle\): \[|\Psi(\vec{\beta},\vec{\gamma})\rangle=U_{P}(\vec{\beta},\vec{\gamma})|\Psi_ {0}\rangle, \tag{3}\] using a parametrized ansatz circuit \[U_{P}(\vec{\beta},\vec{\gamma})=\prod_{p=1}^{P}\left[e^{-i\beta_{p}A_{p}}e^{-i \gamma_{p}H}\right]. \tag{4}\] The QAOA then optimizes the parameters to minimize the energy expectation value \(E(\vec{\beta},\vec{\gamma})\). In the original proposal of QAOA [15], the form of the ansatz circuit [Eq. (4)] is inspired by a Trotterized form of the adiabatic theorem [17]. 
By setting the mixer Hamiltonian to \(A_{p}=\prod_{i=1}^{N}X_{i}\) for all \(p\), and the initial state to \(|\Psi_{0}\rangle=|+\rangle\,...\,|+\rangle\), the QAOA finds the ground state exactly as the number of Trotter steps tends to infinity (\(P\rightarrow\infty\)). Unfortunately, large values of \(P\) lead to intractably deep ansatz circuits. In the presence of noise, the need for deep circuits precludes the implementation of the QAOA on existing quantum hardware [18; 19]. To reduce the intractably deep quantum circuits, ADAPT-QAOA [20] was developed. The algorithm improves the ansatz circuit in \(P\) iterations. Further, it allows the mixer Hamiltonian \(A_{p}\) to vary in each iteration \(p\), by choosing it from a mixer pool \(\mathcal{P}\). In noiseless numerical simulations, ADAPT-QAOA has been demonstrated to generate shallower circuits than the QAOA. Despite these improvements, ADAPT-QAOA lies outside the scope of current hardware. Moreover, the resilience of ADAPT-QAOA to noise has never been quantified. In this paper, we benchmark ADAPT-QAOA in the presence of noise. Using density-matrix simulations, we compute the gate-error probability \(p_{\text{gate}}^{*}\) below which the quantum algorithm outputs, on average, better approximate solutions than the classical GW algorithm. For small systems of \(6-10\) qubits, we find that ADAPT-QAOA requires \(p_{\text{gate}}^{*}\) comparable to or smaller than the gate-error probabilities available on current hardware. To reduce the hardware requirements of ADAPT-QAOA further, we develop Dynamic-ADAPT-QAOA. This algorithm removes redundant components from the ansatz circuits. For the problems we study, Dynamic-ADAPT-QAOA reduces the circuit depths significantly. For instance, in noiseless simulations of 6-qubit systems, Dynamic-ADAPT-QAOA achieves a better average performance than the GW algorithm with approximately 80% fewer CNOT gates than the original ADAPT-QAOA. This reduction in CNOT gates leads to improved noise resilience, with \(p_{\text{gate}}^{\star}\) being approximately an order of magnitude better than that of the original ADAPT-QAOA. Dynamic-ADAPT-QAOA may thus be implementable on current superconducting hardware, even in the absence of error mitigation. ## II Dynamic-ADAPT-QAOA In this section, we introduce Dynamic-ADAPT-QAOA. Our presentation strategy is to first review the standard ADAPT-QAOA template. Subsequently, we describe its improvement via Dynamic-ADAPT-QAOA. ## II A Adapt-QAOA As depicted in Fig. 1, ADAPT-QAOA grows the ansatz circuit in \(P\) steps. In each step \(p\), unitary evolutions generated by \(H\) and \(A_{p}\) are appended to the circuit from the previous step: \[U_{p}(\vec{\beta}_{p},\vec{\gamma}_{p})=e^{-i\beta_{p}A_{p}}e^{-i\gamma_{p}H} U_{p-1}(\vec{\beta}_{p-1},\vec{\gamma}_{p-1}). \tag{5}\] The process starts from \(U_{0}=\text{id}\). Concurrently, the real parameter vectors are updated as \[\vec{\beta}_{p}=(\beta_{p},\vec{\beta}_{p-1})\quad\text{and}\quad\vec{\gamma} _{p}=(\gamma_{p},\vec{\gamma}_{p-1}), \tag{6}\] starting from empty vectors \(\vec{\beta}_{0}=()\) and \(\vec{\gamma}_{0}=()\). In each step, an optimal mixer Hamiltonian \(A_{p}\) is picked from a pool \(\mathcal{P}\) such that the energy gradient is maximized (see below). 
The circuit parameters are then optimized: \[\vec{\beta}_{p}^{\star},\vec{\gamma}_{p}^{\star}=\underset{\vec{\beta}_{p}, \vec{\gamma}_{p}}{\text{argmin}}\left[E_{p}(\vec{\beta}_{p},\vec{\gamma}_{p}) \right], \tag{7}\] to minimize the energy expectation value \[E_{p}(\vec{\beta}_{p},\vec{\gamma}_{p})=\bra{\Psi_{0}}U_{p}^{\dagger}(\vec{ \beta}_{p},\vec{\gamma}_{p})HU_{p}(\vec{\beta}_{p},\vec{\gamma}_{p})\ket{\Psi_ {0}}. \tag{8}\] This yields an upper bound \(\mathcal{E}_{p}=E_{p}(\vec{\beta}_{p}^{\star},\vec{\gamma}_{p}^{\star})\) on the ground-state energy \(E_{0}\), and an optimal trial state \(\ket{\Psi_{p}^{\star}}\equiv U_{p}(\vec{\beta}_{p}^{\star},\vec{\gamma}_{p}^ {\star})\ket{\Psi_{0}}\). Iterating this process provides a hierarchy of bounds \(\mathcal{E}_{0}>\mathcal{E}_{1}>\cdots>\mathcal{E}_{p}>\cdots\geq E_{0}\). The algorithm terminates when \(p=P\) or if \(|\mathcal{E}_{p-1}-\mathcal{E}_{p}|\) falls below a pre-defined threshold \(\varepsilon\). To accelerate convergence, ADAPT-QAOA picks the mixer Hamiltonian which maximizes the energy gradient. To evaluate this gradient, the optimal trial state is augmented by appending a cost and a mixer unitary: \[\ket{\Psi_{p}(\beta_{p},\gamma_{p};A)}=e^{-i\beta_{p}A}e^{-i\gamma_{p}H}\ket{ \Psi_{p-1}^{\star}}. \tag{9}\] The energy variation due to the added parameters \[\delta E_{p}(\beta_{p},\gamma_{p};A)=\bra{\Psi_{p}(\beta_{p},\gamma_{p};A)}H \ket{\Psi_{p}(\beta_{p},\gamma_{p};A)}, \tag{10}\] enables the definition of a corresponding energy gradient: \[\mathcal{G}_{p}(\gamma_{p};A)\equiv\left.\frac{\partial}{\partial\beta_{p}} \delta E_{p}(\beta_{p},\gamma_{p};A)\right|_{\beta_{p}=0}. \tag{11}\] Evaluating this gradient for each \(A\in\mathcal{P}\) allows for selecting the optimal mixer: \[A_{p}=\underset{A\in\mathcal{P}}{\text{argmax}}\left[\left|\mathcal{G}_{p}( \gamma_{p};A)\right|\right]. \tag{12}\] Throughout this work, we use the same mixer pool as in ADAPT-QAOA [20], comprising QAOA mixers as well as Pauli strings of length one and two: \[\mathcal{P} =\left\{\sum_{i=1}^{N}X_{i},\sum_{i=1}^{N}Y_{i}\right\}\cup\left\{ X_{i},Y_{i}\,|\,i=1,...,N\right\} \tag{13}\] \[\cup\left\{\sigma_{i}\sigma_{j}^{\prime}\,|\,\sigma,\sigma^{\prime}\in \left\{X,Y,Z\right\}\wedge i,j=1,...,N\wedge i\neq j\right\}.\]

## II B Dynamic-ADAPT-QAOA

_Motivation:_--Our motivation for developing Dynamic-ADAPT-QAOA comes from two observations. First, in each step \(p\), the quantum circuit representing the cost unitary \(e^{-i\gamma_{p}H}\) requires \(\mathcal{O}(N^{2})\) CNOT gates (see App. A). On the other hand, the quantum circuit representing the mixer unitary \(e^{-i\beta_{p}A_{p}}\) requires only \(\mathcal{O}(1)\) CNOT gates [21]. As CNOT gates induce noise, minimizing the number of cost unitaries in the ansatz circuit could be valuable [22]. Second, in standard ADAPT-QAOA, the vector of optimal parameters \(\vec{\gamma}_{p}^{\star}\) tends to be sparse, with many parameters taking values close to zero (see Sec. III B). As cost unitaries \(e^{-i\gamma_{p}H}\) with \(\gamma_{p}\approx 0\) hardly affect the final quantum circuit, it could be advantageous to exclude them altogether.

Figure 1: The \(p\)th iteration of ADAPT-QAOA: After initialization, the ansatz circuit from the previous iteration \(U_{p-1}\) is augmented by appending unitary evolutions generated by \(H\) and \(A_{p}\). The optimal circuit parameters \(\vec{\beta}_{p}^{\star},\vec{\gamma}_{p}^{\star}\) are identified by minimizing the measured energy expectation.
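To put rough numbers behind this motivation, the following back-of-the-envelope sketch counts CNOTs per cost layer for a complete graph, assuming the common decomposition of each \(e^{-i\theta Z_{i}Z_{j}}\) factor into two CNOTs and one \(R_{z}\) rotation. This reproduces the \(\mathcal{O}(N^{2})\) versus \(\mathcal{O}(1)\) scaling quoted above; exact constants depend on hardware connectivity and compilation and are detailed in App. A of the paper.

```
# Back-of-the-envelope CNOT counts, assuming 2 CNOTs per exp(-i theta Z_i Z_j)
# factor (a common decomposition; actual counts depend on compilation).
def cost_layer_cnots(n_qubits: int) -> int:
    """CNOTs in one cost unitary exp(-i gamma H) for a complete graph."""
    n_edges = n_qubits * (n_qubits - 1) // 2   # number of Z_i Z_j terms in H
    return 2 * n_edges                          # scales as O(N^2)

def two_qubit_mixer_cnots() -> int:
    """CNOTs in one two-qubit Pauli-string mixer exp(-i beta sigma_i sigma_j')."""
    return 2                                    # O(1), independent of N

for n in (6, 8, 10):
    print(n, cost_layer_cnots(n), two_qubit_mixer_cnots())
```

Even for six qubits, a single cost layer costs an order of magnitude more CNOTs than a mixer, which is why dropping redundant cost unitaries translates directly into shallower, less noisy circuits.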
_Idea:_--In general, the energy expectation value in Eq. (8) is a nontrivial function of the circuit parameters. Hence, it is not obvious how to predict which entries in \(\vec{\gamma}_{p}^{\star}\) would take optimal values close to zero. Yet, in ADAPT-QAOA, optimal circuit parameters of the \(p\)th iteration are usually well approximated by the circuit parameters of the previous iteration: \[\vec{\beta}_{p}^{\star}\approx(\beta_{p}^{\star},\vec{\beta}_{p-1}^{\star}) \quad\text{and}\quad\vec{\gamma}_{p}^{\star}\approx(\gamma_{p}^{\star},\vec{ \gamma}_{p-1}^{\star}). \tag{14}\] Thus, we can estimate the optimal circuit parameters \(\beta_{p}^{\star},\gamma_{p}^{\star}\) of the \(p\)th iteration, by studying the minima of \[\delta E_{p}(\beta_{p},\gamma_{p})\equiv\delta E_{p}(\beta_{p},\gamma_{p};A_{p }). \tag{15}\] As explained in App. B, for Pauli-string mixers \(A_{p}\), we can identify whether \(\delta E_{p}(\beta_{p},\gamma_{p})\) has minima near \(\gamma_{p}^{\star}=0\). To this end, we split the cost Hamiltonian into two parts \(H=H_{-}+H_{+}\), such that \(H_{-}\) commutes and \(H_{+}\) anticommutes with \(A_{p}\). This enables the evaluation of three additional expectation values: \[B_{p} =\left\langle\Psi_{p-1}^{\star}\right|iA_{p}H_{+}\left|\Psi_{p-1 }^{\star}\right\rangle\equiv\mathcal{G}_{p}(0;A_{p}), \tag{16a}\] \[C_{p} =\left\langle\Psi_{p-1}^{\star}\right|A_{p}H_{+}^{2}\left|\Psi_{ p-1}^{\star}\right\rangle,\] (16b) \[D_{p} =\left\langle\Psi_{p-1}^{\star}\right|iA_{p}H_{+}^{3}\left|\Psi_ {p-1}^{\star}\right\rangle. \tag{16c}\] As shown in App. B, \(\delta E_{p}(\beta_{p},\gamma_{p})\) has a local minimum at \(\gamma_{p}^{\star}=0\) if \[C_{p}=0\quad\text{and}\quad B_{p}D_{p}>0. \tag{17}\] _Algorithm:_--Dynamic-ADAPT-QAOA excludes the cost unitary of the \(p\)th iteration if \(A_{p}\) is a Pauli-string and Condition (17) holds. Otherwise, the algorithm follows the standard mixer-selection procedure of ADAPT-QAOA. That is, the gradients for all \(A\in\mathcal{P}\) are re-evaluated at some given offset \(\gamma_{p}=\pm\bar{\gamma}\), and the optimal mixer is determined: \[A_{p}=\operatorname*{argmax}_{A\in\mathcal{P}}\left[\max(|\mathcal{G}_{p}(+ \bar{\gamma};A)|,|\mathcal{G}_{p}(-\bar{\gamma};A)|)\right]. \tag{18}\] After determining \(A_{p}\), the ansatz circuit and parameter vectors are grown as described in Eqs. (5) and (6). Pseudocode summarizing Dynamic-ADAPT-QAOA is given in Algorithm 1. _Remarks:_--In App. C, we discuss two alterations of Dynamic-ADAPT-QAOA. In the first alteration, all cost unitaries are, _a priori_, removed from the ansatz circuit. In the second alteration, the algorithm does not re-evaluate the optimal mixer \(A_{p}\) at \(\gamma_{p}=\pm\bar{\gamma}\) if condition (17) fails. As shown in App. C, both of these alterations worsen the algorithmic performance. Common worries regarding variational quantum algorithms concern barren plateaus (vanishing gradients) and the presence of bad local minima [23; 24; 25; 26; 27; 28; 29; 30]. A promising way to mitigate these issues is to reduce the circuit depths [31; 30], which is precisely what our algorithm does. Moreover, since the gates of adaptive variational quantum algorithms are tailored to the optimization problem itself, there are indications that these algorithms avoid such issues better than other variational quantum algorithms [31; 32; 33; 30; 34; 35]. In the instances studied below, Dynamic-ADAPT-QAOA efficiently implements the variational optimization. 
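The skip criterion of Eqs. (16)-(17) is straightforward to evaluate numerically. The sketch below (ours; the helper name and the dense-matrix representation are assumptions made for illustration) computes \(B_{p}\), \(C_{p}\), and \(D_{p}\) for a candidate Pauli-string mixer \(A\) and the anticommuting part \(H_{+}\) of the cost Hamiltonian, and reports whether the cost unitary of the \(p\)th iteration may be dropped.

```python
import numpy as np

def skip_cost_unitary(psi, A, H_plus, delta1=1e-9, delta2=0.0):
    """Gate-skipping test of Dynamic-ADAPT-QAOA, Eqs. (16)-(17).
    psi    : optimal state |Psi*_{p-1}> as a 1D complex vector
    A      : Pauli-string mixer A_p as a dense Hermitian matrix
    H_plus : part of the cost Hamiltonian that anticommutes with A_p
    Returns True if the cost unitary e^{-i gamma_p H} may be omitted."""
    B = np.vdot(psi, 1j * A @ H_plus @ psi).real                      # Eq. (16a)
    C = np.vdot(psi, A @ H_plus @ H_plus @ psi).real                  # Eq. (16b)
    D = np.vdot(psi, 1j * A @ H_plus @ H_plus @ H_plus @ psi).real    # Eq. (16c)
    return abs(C) <= delta1 and B * D > delta2                        # Condition (17)
```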
```
Init pool \(\mathcal{P}\); state \(\left|\Psi_{0}\right\rangle\leftarrow\left|+\right\rangle\ldots\left|+\right\rangle\); unitary \(U_{0}\leftarrow\text{id}\).
Init accuracies \(\varepsilon,\delta_{1},\delta_{2}\); and offset \(\bar{\gamma}\).
Init optimal params \(\vec{\beta}_{0}^{\star}\leftarrow()\); \(\vec{\gamma}_{0}^{\star}\leftarrow()\); Init \(p\leftarrow 1\).
while not converged do
    Prepare \(\left|\Psi_{p-1}^{\star}\right\rangle\leftarrow U_{p-1}(\vec{\beta}_{p-1}^{\star},\vec{\gamma}_{p-1}^{\star})\left|\Psi_{0}\right\rangle\)
    Evaluate gradients \(\mathcal{G}_{p}(0;A)\leftarrow\left\langle\Psi_{p-1}^{\star}\right|[iA,H]\left|\Psi_{p-1}^{\star}\right\rangle\) for all \(A\in\mathcal{P}\)
    Find optimal mixer: \(A_{p}\leftarrow\operatorname{argmax}_{A\in\mathcal{P}}\left[|\mathcal{G}_{p}(0;A)|\right]\)
    Evaluate \(B_{p}\), \(C_{p}\), \(D_{p}\) in Eq. (16)
    if \(|C_{p}|\leq\delta_{1}\) and \(B_{p}\cdot D_{p}>\delta_{2}\) then
        Update \(\vec{\gamma}_{p}\leftarrow\vec{\gamma}_{p-1}\); \(\vec{\beta}_{p}\leftarrow(\beta_{p},\vec{\beta}_{p-1})\)
        Append \(U_{p}(\vec{\beta}_{p},\vec{\gamma}_{p})\leftarrow e^{-i\beta_{p}A_{p}}U_{p-1}(\vec{\beta}_{p-1},\vec{\gamma}_{p-1})\)
    else
        Prepare \(\left|\Psi_{p}^{\pm}\right\rangle\leftarrow e^{\mp i\bar{\gamma}H}\left|\Psi_{p-1}^{\star}\right\rangle\)
        Measure gradients \(\mathcal{G}_{p}(\pm\bar{\gamma};A)\leftarrow\left\langle\Psi_{p}^{\pm}\right|[iA,H]\left|\Psi_{p}^{\pm}\right\rangle\) for all \(A\in\mathcal{P}\)
        \(A_{p}\leftarrow\operatorname{argmax}_{A\in\mathcal{P}}\left[\max\left(|\mathcal{G}_{p}(+\bar{\gamma};A)|,|\mathcal{G}_{p}(-\bar{\gamma};A)|\right)\right]\)
        Update \(\vec{\gamma}_{p}\leftarrow(\gamma_{p},\vec{\gamma}_{p-1})\); \(\vec{\beta}_{p}\leftarrow(\beta_{p},\vec{\beta}_{p-1})\)
        Append \(U_{p}(\vec{\beta}_{p},\vec{\gamma}_{p})\leftarrow e^{-i\beta_{p}A_{p}}e^{-i\gamma_{p}H}U_{p-1}(\vec{\beta}_{p-1},\vec{\gamma}_{p-1})\)
    Optimize params \(\vec{\beta}_{p}^{\star},\vec{\gamma}_{p}^{\star}\leftarrow\operatorname{argmin}_{\vec{\beta}_{p},\vec{\gamma}_{p}}\left[E_{p}(\vec{\beta}_{p},\vec{\gamma}_{p})\right]\)
    Set bound \(\mathcal{E}_{p}\leftarrow E_{p}(\vec{\beta}_{p}^{\star},\vec{\gamma}_{p}^{\star})\)
    if \(p=P\) or \(|\mathcal{E}_{p-1}-\mathcal{E}_{p}|<\varepsilon\) then converged \(\leftarrow\) True
    Update \(p\leftarrow p+1\)
Sample bit strings from \(\left|\Psi_{p}^{\star}\right\rangle\) and compute \(\mathcal{E}_{p}\)
Return bit strings, \(\mathcal{E}_{p}\), circuit \(U_{p}\), params \(\vec{\beta}_{p}^{\star}\), \(\vec{\gamma}_{p}^{\star}\)
```
**Algorithm 1** Dynamic-ADAPT-QAOA ## III Benchmarking In this section, we benchmark Dynamic- and standard ADAPT-QAOA in numerical simulations. Our investigation will demonstrate that Dynamic-ADAPT-QAOA can remove redundant components from the ansatz circuits of standard ADAPT-QAOA. We show that this leads to a reduced CNOT count and an increased noise resilience. ### Benchmarking methodology _Max-Cut:_--In what follows, we benchmark ADAPT-QAOAs on random instances of weighted Max-Cut problems. Consider allocating weights to the edges of an \(N\)-vertex graph. In this work, we consider complete, i.e., fully connected, graphs. The edge weights between vertices \(i\) and \(j\) form a real symmetric matrix \(W_{ij}\) with zeros on its diagonal. A binary vector \(\vec{b}\in\{0,1\}^{N}\) defines a _cut_, a splitting of all vertices into two disjoint sets. The cut value is defined as the sum of edge weights between the two partitions: \[V(\vec{b})=\sum_{i,j=1}^{N}W_{ij}b_{i}(1-b_{j}).
\tag{19}\] The weighted Max-Cut problem is to find the binary vector \(\vec{b}^{\star}\) that maximizes the cut value: \(\vec{b}^{\star}=\text{argmax}_{\vec{b}}V(\vec{b})\). \(\vec{b}^{\star}\) corresponds to the optimal partition, which yields the maximal cut value \(V_{\text{max}}=V(\vec{b})^{\star}\). By mapping binary variables \(b_{i}=(1+z_{i})/2\) to the eigenvalues \(z_{i}\in\{-1,1\}\) of \(Z_{i}\), the weighted Max-Cut problem becomes equivalent to finding the ground state of the Ising model, Eq. (1). We create random Max-Cut instances by uniformly sampling edge weights \(W_{ij}\in[0,1]\). This is known to generate NP-hard problems [36, 37]. For a visualization of Max-Cut, see Fig. 2. _Approximation ratio:_--Our benchmarks compare the average performance of three algorithms: Dynamic- and standard ADAPT-QAOA, as well as the classical, polynomial-time approximation algorithm by Goemans and Williamson (GW). Rather than solving Max-Cut exactly, all three algorithms sample a collection of bitstrings [38]. This leads to a distribution of approximate cut values, Eq. (19), with average cut-values \(V_{\text{d}}\), \(V_{\text{s}}\), and \(V_{\text{GW}}\), respectively. Algorithms providing a higher average cut value tend to provide better-quality solutions. Further, normalizing the average cut value by the maximal achievable value \(V_{\text{max}}\) allows for averaging various instances of Max-Cut. This defines our key performance metric--the average approximation ratio: \[\alpha_{\text{d}}\equiv\frac{V_{\text{d}}}{V_{\text{max}}},\,\alpha_{\text{s} }\equiv\frac{V_{\text{s}}}{V_{\text{max}}},\,\text{and}\,\,\alpha_{\text{GW} }\equiv\frac{V_{\text{GW}}}{V_{\text{max}}}. \tag{20}\] The GW algorithm is the classical, polynomial-time algorithm that achieves the best worst-case approximation ratio: \(\alpha_{\text{GW}}>87.8\ldots\%\)[14]. Below, we will compare \(\alpha_{\text{GW}}\) to numerically computed values of \(\alpha_{\text{d}}\) and \(\alpha_{\text{s}}\). In our simulations, we average the results over 100 random instances of the Max-Cut problem. In real applications of QAOA, one would return the cut corresponding to the sampled bit string with minimum cost, not the average. However, in the small problem sizes studied here, the final wavefunction has substantial overlap with all bit strings. Thus, for a relatively small number of shots the true solution will always be obtained. Therefore, we compare the average approximation ratios. Further, we emphasize that our comparison between QAOAs and the GW algorithm focuses on the final-results, i.e., average approximation ratios, not their computational time complexity. _Simulations:_--To assess the approximation ratios of Dynamic- and standard ADAPT-QAOA in the presence of noise, we use full density-matrix simulations, as previously described in Ref. [34]. First, the unitaries in Eq. (5) are compiled to standard circuit representations [21]. To simulate the effect of noise, we work with density matrices. In the evolution of the quantum states, we apply a depolarizing channel after each CNOT gate: \[\mathcal{D}(i,p_{\text{gate}})[\rho]\coloneqq(1-p_{\text{gate}})\rho+\frac{ p_{\text{gate}}}{3}\sum_{\sigma_{i}}\sigma_{i}\rho\sigma_{i}. \tag{21}\] Here, \(\rho\) is the density matrix prior to the CNOT gate, \(i\) denotes the target qubit of the CNOT gate, \(p_{\text{gate}}\in[0,1]\) denotes the gate-error probability, and the \(\sigma_{i}\)-summation is over the three Pauli matrices acting on qubit \(i\). 
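As a minimal sketch of the noise model in Eq. (21) (our own code, not the simulator of Ref. [34]), the functions below apply the depolarizing channel to the target qubit of a CNOT, acting directly on an \(n\)-qubit density matrix; the helper names are illustrative.

```python
import numpy as np

I2 = np.eye(2)
PAULIS = [np.array([[0, 1], [1, 0]], dtype=complex),      # X
          np.array([[0, -1j], [1j, 0]], dtype=complex),   # Y
          np.array([[1, 0], [0, -1]], dtype=complex)]     # Z

def embed(op, qubit, n):
    """Embed a single-qubit operator on `qubit` of an n-qubit register."""
    full = np.array([[1.0 + 0j]])
    for k in range(n):
        full = np.kron(full, op if k == qubit else I2)
    return full

def depolarize(rho, qubit, p_gate, n):
    """Depolarizing channel of Eq. (21), applied after a CNOT whose target is `qubit`."""
    out = (1.0 - p_gate) * rho
    for sigma in PAULIS:
        S = embed(sigma, qubit, n)
        out += (p_gate / 3.0) * (S @ rho @ S.conj().T)
    return out
```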
Owing to the diverse nature of current quantum hardware, a noise model cannot be both platform agnostic and realistically detailed. Nevertheless, our noise model captures the depolarizing effect of two-qubit gates, which is the dominant noise source across several platforms [39, 40]. We deem our model a reasonably hardware-agnostic compromise, which should be sufficient to assess fundamental quantitative features. Since full density-matrix simulations require extensive computing time, we apply an approximation similar to that outlined in Ref. [34]. In more detail, we simulate ADAPT-QAOAs by growing their ansatz circuits in the absence of noise. We store the optimal ansatz circuits \(U_{p}\) at each iteration step \(p\). Subsequently, we investigate the effect of noise by simulating the pre-optimized circuit \(U_{p}\) at various noise levels \(p_{\text{gate}}\) on our density matrix simulator. As demonstrated in App. D, the noiseless-growth approximation has little effect on our results. Figure 2: Diagramatic representation of a 5-vertex weighted graph. The vertices are labelled 1-5. The weights are shown next to the corresponding edges. The partition resulting in a Max-Cut, (135)(24), is depicted using different shades of gray. The Max-Cut value is 40. Directly above the graph we illustrate how the problem maps onto a qubit system. The qubits’ spins point in different vertical half-planes, corresponding to which set of the Max-Cut partition they are in. _Parameters:--_Before presenting our findings, we specify the hyperparameters used in our simulations. By setting \(\varepsilon=0\), we ensure that the convergence criterion corresponds to having reached a certain circuit depth. The depth is determined by the number of iterations, which we set to \(P=12\). For Dynamic-ADAPT-QAOA, the cost-unitary offset (see Algorithm 1) was set to \(\tilde{\gamma}=0.1\), following the settings used in [20]. In Algorithm 1, \(\delta_{1}>0\) would mitigate some experimental errors in the identification of a local minimum where, in ideal scenarios, \(C_{p}=0\). Similarly, \(\delta_{2}>0\) would mitigate some experimental errors in establishing whether \(B_{p}\cdot D_{p}\) is positive. In our simulations, we set \(\delta_{1}=0\). To emulate practical implementations, we choose \(\delta_{2}\in(0,\,10^{-4})\) after performing a hyperparameter search for each separate graph. ## III B. Vanishing cost parameters As mentioned in Sec. II B, our motivation to develop Dynamic-ADAPT-QAOA stems from the observation that standard ADAPT-QAOA appends cost unitaries to the quantum circuit in cases where they do not lead to any significant improvement in convergence. In Figure 3, we show data which support this conclusion. The histogram of optimal cost parameters \(\gamma^{\star}\) of standard ADAPT-QAOA exhibits a well-defined peak at \(\gamma^{\star}=0\). A majority (\(\approx 70\%\)) of the cost unitaries do not contribute to the algorithm's convergence. This peak is absent in the corresponding histogram for Dynamic-ADAPT-QAOA: Our algorithm successfully removes redundant cost unitaries from the ansatz circuits. ### Benchmarking the CNOT-count reduction Now, we show that Dynamic-ADAPT-QAOA significantly reduces the number of CNOT gates needed to reach a certain algorithmic precision. In Section II, we described how Dynamic-ADAPT-QAOA prunes unnecessary circuit elements. 
To investigate the effect on the CNOT count, we consider how the approximation ratio \(\alpha\), averaged over 100 instances of Max-Cut, improves as the algorithm grows the quantum circuit. Our results are shown in FIG. 4. We plot data from both noiseless and noisy simulations of Dynamic- and standard ADAPT-QAOA. In both scenarios, Dynamic-ADAPT-QAOA uses significantly fewer CNOT gates to reach a fixed average approximation ratio. For a fixed gate-error probability this CNOT reduction allows Dynamic-ADAPT-QAOA to calculate more accurate approximation ratios than standard ADAPT-QAOA. In noiseless simulations, we see that Dynamic-ADAPT-QAOA needs approximately \(80\%\) fewer CNOT gates than ADAPT-QAOA to calculate average approximation ratios that outperform those achievable with the classical GW algorithm for 6-vertex complete graphs. Moreover, at a gate-error probability of \(p_{\text{gate}}=0.122\%\), the Dynamic-ADAPT-QAOA can achieve better average approximation ratios than the GW algorithm, whilst the standard ADAPT-QAOA cannot. In the next section, we widen our analysis of how noise affects the quantum algorithms' achieved approximation ratios. Figure 3: A histogram of optimized circuit parameters \(\gamma_{p}^{\star}\), taken from the cost unitaries from all layers of the ansatz circuits grown with Dynamic- and standard ADAPT-QAOA. The data were acquired in noiseless simulations of 100 instances of Max-Cut on 6-vertex graphs. The algorithms were run until a maximum circuit depth of \(P=12\). Figure 4: Convergence curves for Dynamic- and standard ADAPT-QAOA, applied to 6-vertex complete graphs, with and without noise. \(1-\alpha\) is plotted as a function of the number of CNOT gates present in the ansatz circuits \(U_{P}\). The dashed horizontal curve corresponds to the classical GW algorithm. The shaded regions correspond to the \(95\%\) confidence intervals. The convergence curves for three gate-error probabilities are shown: \(p_{\text{gate}}=0.0\%,0.122\%\), and \(0.263\%\). These are depicted using solid, dashed, and dash-dotted line styles, respectively. Stars indicate the maximally attainable approximation ratio \(\alpha^{*}\). ## III D Benchmarking the noise resilience In this section, we analyze how noise affects the quality of approximation ratios of Dynamic- and standard ADAPT-QAOA. The convergence curves presented in Fig. 4 show that increasing the gate-error probability \(p_{\text{gate}}\) worsens the best attainable average approximation ratio \(\alpha^{\star}\). More specifically, as ADAPT-QAOA grows the circuit (leading to an increase of CNOT gates on the abscissa) the approximation ratio improves initially. However, as the circuit acquires more CNOT-gates, the effect of noise starts to dominate, leading to a subsequent deterioration of the approximation ratio. This causes the characteristic "smirk" shape of the convergence curves in Fig. 4. The dip of each convergence curve marks the best attainable average approximation ratio \(\alpha^{\star}\) at a certain gate-error probability \(p_{\text{gate}}\). Figure 4 indicates that Dynamic-ADAPT-QAOA outperforms the solution quality of standard ADAPT-QAOA in the presence of noise. To quantify this benefit of our algorithm, we investigate \(\alpha^{\star}\) as a function of \(p_{\text{gate}}\) in FIG. 5. For all values of \(p_{\text{gate}}\), Dynamic-ADAPT-QAOA calculates better approximation ratios than standard ADAPT-QAOA. Evidently, our algorithm exhibits better noise resilience. 
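To make explicit how \(\alpha^{\star}\) and, subsequently, \(p_{\text{gate}}^{\star}\) would be read off from data such as those in Figs. 4 and 5, here is a small sketch (ours; the array layout and function names are assumptions): \(\alpha^{\star}\) is the best point along a convergence curve at fixed \(p_{\text{gate}}\), and \(p_{\text{gate}}^{\star}\) follows from interpolating \(\alpha^{\star}(p_{\text{gate}})\) to the GW level.

```python
import numpy as np

def best_ratio(alpha_vs_depth):
    """Best attainable approximation ratio alpha* along one convergence curve,
    i.e. the dip of 1 - alpha at a fixed gate-error probability."""
    return np.max(alpha_vs_depth)

def critical_gate_error(p_gates, alpha_stars, alpha_gw):
    """Interpolate alpha*(p_gate) to the GW level to estimate p*_gate.
    Assumes p_gates is increasing and alpha_stars decreases with noise."""
    p_gates = np.asarray(p_gates, dtype=float)
    alpha_stars = np.asarray(alpha_stars, dtype=float)
    # np.interp needs increasing x-values, so reverse the (decreasing) alpha* axis.
    return float(np.interp(alpha_gw, alpha_stars[::-1], p_gates[::-1]))
```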
As can be seen from the left-most portion of FIG. 5, given sufficiently weak noise, both Dynamic- and standard ADAPT-QAOA can provide better average approximation ratios than the GW algorithm. We now investigate the range of gate-error probabilities for which Dynamic- and standard ADAPT-QAOA achieve such an improvement. To this end, we define the gate-error probability \(p_{\text{gate}}^{\star}\), below which the quantum algorithms achieve a better average approximation ratio than the GW algorithm. In FIG. 6, we plot \(p_{\text{gate}}^{\star}\) with respect to the number of graph vertices. Compared to standard ADAPT-QAOA, Dynamic-ADAPT-QAOA can achieve a better Max-Cut approximation ratio than the classical GW algorithm at roughly an order of magnitude larger values of \(p_{\text{gate}}^{\star}\). In particular, the critical probability at which Dynamic-ADAPT-QAOA achieves higher approximation ratios than the GW algorithm is \(p_{\text{gate}}^{\star}=1.3\pm 0.2\%\) for 6-vertex graphs and \(p_{\text{gate}}^{\star}=0.13\pm 0.05\%\) for 10-vertex graphs. Both these values are well above achieved gate-error probabilities [41], implying that one may execute Dynamic-ADAPT-QAOA on existing hardware. On the other hand, for standard ADAPT-QAOA, the critical probability is currently achievable only for graphs with fewer than 7 vertices. Figure 5: Best attainable approximation ratio \(\alpha^{\star}\) as a function of the gate-error probability \(p_{\text{gate}}\). The data were acquired in noisy simulations of 6-vertex graphs. The error bars show the standard error in the mean approximation ratio. The dashed curve corresponds to the classical GW algorithm. The shaded regions correspond to the 95% confidence intervals. Figure 6: \(p_{\text{gate}}^{\star}\) with respect to different graph sizes. At gate-error probabilities below \(p_{\text{gate}}^{\star}\) the quantum algorithms outperform the solution quality of the classical GW algorithm. The horizontal line shows the experimentally achieved two-qubit gate-error probability in state-of-the-art superconducting hardware [41]. The error bars show the standard error. ## IV Discussion We have introduced Dynamic-ADAPT-QAOA, a quantum algorithm for combinatorial optimization. Similar to the original ADAPT-QAOA algorithm, our algorithm variationally approximates the ground state of an Ising Hamiltonian. Thus, it can provide approximate solutions to NP problems. By dynamically assessing the importance of unitaries before they are added to the variationally grown ansatz, Dynamic-ADAPT-QAOA can operate with remarkably few CNOT gates. Above, we benchmarked the average (as opposed to the worst-case) performance of our algorithm. For example, in the idealized case of no noise, Dynamic-ADAPT-QAOA requires on average about 35 (350) CNOT gates to outperform the GW algorithm on 6-vertex (10-vertex) graphs. Moreover, we have shown that for graphs with \(6-10\) vertices, Dynamic-ADAPT-QAOA can provide better average solutions than the GW algorithm, even in the presence of noise levels comparable with current state-of-the-art hardware [41]. This should make Dynamic-ADAPT-QAOA an attractive candidate to showcase proof-of-principle computations on NISQ hardware. Finally, we conclude this work with a few comments. _Other QAOAs:--_There are plenty of promising QAOA algorithms in the literature [42; 43; 44; 45; 46; 47; 48; 49; 50]. However, this work focuses on ADAPT-QAOAs [20]--mainly due to their relatively shallow ansatz circuits.
In the future, it would be of interest to expand the benchmarks of noise resilience to other types of QAOA. _Other algorithms:--_This study focuses on investigating the utility of gate-based quantum computers for solving NP problems. However, adiabatic quantum computers [10; 11; 12; 13] and state-of-the-art annealing heuristics [51; 6; 7; 8; 9] can comfortably handle systems with up to 5 thousand and 100 thousand spins, respectively, most likely at a higher solution accuracy. Moreover, other approximation algorithms [52; 53] could also lead to high average solution accuracy. This shows that QAOA still has a long way to go before reaching practical quantum advantage. _Error mitigation:--_Applying error-mitigation techniques [54; 55; 56; 57] to boost expectation values would straightforwardly improve the approximation ratios of standard and Dynamic-ADAPT-QAOA, see App. E. However, to the best of our knowledge, error-mitigation methods have never been used to improve the underlying bit strings. Consequently, error-mitigation methods would not improve the cut value provided by the experimentally accessible bit strings. An interesting direction of future research is to consider how error-mitigation techniques could be used to improve not only the cut value, but also the bit strings provided by a QAOA. **Acknowledgements:** We thank Kieran Dalton, Yordan Yordanov, Bobak Kiani, Nicholas Mayhall, Sophia Economou, Edwin Barnes, and members of the Hitachi QI team for useful discussions.
The quantum approximate optimization algorithm (QAOA) is an appealing proposal for solving NP problems on noisy intermediate-scale quantum (NISQ) hardware. To make QAOA implementations noise-tolerant, short ansatz constructions with as few CNOT gates as possible are required. Here, we propose Dynamic-ADAPT-QAOA. Our algorithm significantly reduces the circuit depth and the CNOT count of standard ADAPT-QAOA, the leading proposal for near-term implementations. Throughout our algorithm, the application of CNOT-heavy operations is decided dynamically. Using density-matrix simulations, we benchmark the noise resilience of ADAPT-QAOA and Dynamic-ADAPT-QAOA, and compute the gate-error probability below which these algorithms provide, on average, more accurate solutions than the classical, polynomial-time approximation algorithm of Goemans and Williamson.
2302.14340
HelixSurf: A Robust and Efficient Neural Implicit Surface Learning of Indoor Scenes with Iterative Intertwined Regularization
Recovery of an underlying scene geometry from multiview images stands as a long-time challenge in computer vision research. The recent promise leverages neural implicit surface learning and differentiable volume rendering, and achieves both the recovery of scene geometry and synthesis of novel views, where deep priors of neural models are used as an inductive smoothness bias. While promising for object-level surfaces, these methods suffer when coping with complex scene surfaces. In the meanwhile, traditional multi-view stereo can recover the geometry of scenes with rich textures, by globally optimizing the local, pixel-wise correspondences across multiple views. We are thus motivated to make use of the complementary benefits from the two strategies, and propose a method termed Helix-shaped neural implicit Surface learning or HelixSurf; HelixSurf uses the intermediate prediction from one strategy as the guidance to regularize the learning of the other one, and conducts such intertwined regularization iteratively during the learning process. We also propose an efficient scheme for differentiable volume rendering in HelixSurf. Experiments on surface reconstruction of indoor scenes show that our method compares favorably with existing methods and is orders of magnitude faster, even when some of existing methods are assisted with auxiliary training data. The source code is available at https://github.com/Gorilla-Lab-SCUT/HelixSurf.
Zhihao Liang, Zhangjin Huang, Changxing Ding, Kui Jia
2023-02-28T06:20:07
http://arxiv.org/abs/2302.14340v2
HelixSurf: A Robust and Efficient Neural Implicit Surface Learning of Indoor Scenes with Iterative Intertwined Regularization ###### Abstract Recovery of an underlying scene geometry from multi-view images stands as a long-time challenge in computer vision research. The recent promise leverages neural implicit surface learning and differentiable volume rendering, and achieves both the recovery of scene geometry and synthesis of novel views, where deep priors of neural models are used as an inductive smoothness bias. While promising for object-level surfaces, these methods suffer when coping with complex scene surfaces. In the meanwhile, traditional multi-view stereo can recover the geometry of scenes with rich textures, by globally optimizing the local, pixel-wise correspondences across multiple views. We are thus motivated to make use of the complementary benefits from the two strategies, and propose a method termed Helix-shaped neural implicit Surface learning or HelixSurf; HelixSurf uses the intermediate prediction from one strategy as the guidance to regularize the learning of the other one, and conducts such intertwined regularization iteratively during the learning process. We also propose an efficient scheme for differentiable volume rendering in HelixSurf. Experiments on surface reconstruction of indoor scenes show that our method compares favorably with existing methods and is orders of magnitude faster, even when some of existing methods are assisted with auxiliary training data. The source code is available at [https://github.com/Gorilla-Lab-SCUT/HelixSurf](https://github.com/Gorilla-Lab-SCUT/HelixSurf). ## 1 Introduction Surface reconstruction of a scene from a set of observed multi-view images stands as a long-term challenge in computer vision research. A rich literature [5, 12, 16] exists to address the challenge, including different paradigms of methods from stereo matching to volumetric fusion. Among them, the representative methods of multi-view stereo (MVS) [13, 38, 46, 55] first recover the properties (e.g, depth and/or normal) of discrete surface points, by globally optimizing the local, pixel-wise correspondences across the multi-view images, where photometric and geometric consistencies across views are used as the optimization cues, and a continuous fitting method (e.g., Poisson reconstruction [19, 20]) is then applied to recover a complete surface. MVS methods usually make a reliable recovery only on surface areas with rich textures. More recently, differentiable volume rendering is proposed that connects the observ Figure 1: Efficacy and efficiency of our proposed HelixSurf.
Recovering the underlying scene geometry from multi-view images has long been a challenge in computer vision research. Recent promising approaches leverage neural implicit surface learning and differentiable volume rendering to achieve both the recovery of scene geometry and the synthesis of novel views, using the deep priors of neural models as an inductive bias toward geometric smoothness. While these methods work well for object-level surfaces, they struggle when dealing with complex scene surfaces. At the same time, traditional multi-view stereo recovers the geometry of richly textured scenes by globally optimizing the local, pixel-wise correspondences across multiple views. Motivated by this, we exploit the complementary benefits of the two strategies and propose a method termed Helix-shaped neural implicit Surface learning, or HelixSurf.
2309.06436
Holographic Tensor Networks with Bulk Gauge Symmetries
Tensor networks are useful toy models for understanding the structure of entanglement in holographic states and reconstruction of bulk operators within the entanglement wedge. They are, however, constrained to only prepare so-called "fixed-area states" with flat entanglement spectra, limiting their utility in understanding general features of holographic entanglement. Here, we overcome this limitation by constructing a variant of random tensor networks that enjoys bulk gauge symmetries. Our model includes a gauge theory on a general graph, whose gauge-invariant states are fed into a random tensor network. We show that the model satisfies the quantum-corrected Ryu-Takayanagi formula with a nontrivial area operator living in the center of a gauge-invariant algebra. We also demonstrate nontrivial, n-dependent contributions to the R\'enyi entropy and R\'enyi mutual information from this area operator, a feature shared by general holographic states.
Xi Dong, Sean McBride, Wayne W. Weng
2023-09-12T17:56:02
http://arxiv.org/abs/2309.06436v1
# Holographic Tensor Networks with Bulk Gauge Symmetries ###### Abstract Tensor networks are useful toy models for understanding the structure of entanglement in holographic states and reconstruction of bulk operators within the entanglement wedge. They are, however, constrained to only prepare so-called "fixed-area states" with flat entanglement spectra, limiting their utility in understanding general features of holographic entanglement. Here, we overcome this limitation by constructing a variant of random tensor networks that enjoys bulk gauge symmetries. Our model includes a gauge theory on a general graph, whose gauge-invariant states are fed into a random tensor network. We show that the model satisfies the quantum-corrected Ryu-Takayanagi formula with a nontrivial area operator living in the center of a gauge-invariant algebra. We also demonstrate nontrivial, \(n\)-dependent contributions to the Renyi entropy and Renyi mutual information from this area operator, a feature shared by general holographic states. ###### Contents * 1 Introduction * 2 The Gauged Random Tensor Network * 3 Deriving the Gauge-Invariant Algebra * 3.1 The structure of the gauge-invariant Hilbert space * 3.2 The gauge-invariant subregion algebra * 3.3 The center of the algebra * 3.4 Traces in \(\mathcal{A}_{r}\) and \(\widetilde{\mathcal{A}}_{r}\) * 3.5 Reduced states * 4 Entropies in the Gauged Random Tensor Network * 4.1 Entanglement entropy * 4.2 Renyi entropy and Renyi mutual information * 5 Discussion and Outlook ## 1 Introduction The ultimate goal of the AdS/CFT correspondence is to understand, concretely, the relationship between a bulk gravitational theory and its dual boundary conformal field theory. Holographic duality posits that the partition functions of the two theories are equal and that there exists an isomorphism between the Hilbert space of states of a theory of quantum gravity \(\mathcal{H}_{\text{bulk}}\) and the Hilbert space of a seemingly unrelated quantum mechanical system \(\mathcal{H}_{\text{boundary}}\). If we were to understand the precise relation between these Hilbert spaces, we would have a tractable handle with which to study quantum gravity, in whatever form it may ultimately arise. In practice, the UV degrees of freedom in the bulk are not well-understood, so one must often be satisfied with studying a subspace of states given by small fluctuations around a fixed semiclassical saddle. These states span a code subspace of the quantum gravity Hilbert space, and are thus embedded in the larger Hilbert space of the dual boundary theory, in the same way as the logical qubits of a quantum error correcting code (QECC) are embedded in a larger Hilbert space of physical qubits [1]. In the last decade, a useful tool for developing intuition about the bulk-to-boundary map has been tensor networks. Tensor networks, specifically projected entangled pair states (PEPS) and PEPS-inspired tensor networks, originally arose in many-body physics as a generalization of matrix product states, which allowed one to efficiently prepare spin chain states with area law entanglement [2]. As a toy model for holography, tensor networks found their niche due to the fact that they obey the Ryu-Takayanagi (RT) formula [3] and its refinements [4; 5; 6; 7]. In particular, random tensor networks (RTNs) [8] reproduce several desirable properties of a holographic QECC, namely satisfying a quantum-corrected RT formula and the Petz reconstruction of local operators [9]. 
We now give a short overview of holographic RTNs and their entanglement properties, as well as their issues. A rank-\(k\) tensor can be represented by its components \(T_{\mu_{1}\cdots\mu_{k}}\), with \(\mu_{i}=1,\ldots,D_{i}\) (the bond dimension). We can associate to each leg a \(D_{i}\)-dimensional Hilbert space \(\mathcal{H}_{i}\) spanned by an orthonormal basis of states \(\{|\mu_{i}\rangle,\ \mu_{i}=1,\cdots,D_{i}\}\). The tensor \(T\) can then be thought of as a state on the tensor product Hilbert space \(\bigotimes_{i=1}^{k}\mathcal{H}_{i}\): \[|T\rangle=\sum_{\mu_{1},\cdots,\mu_{k}}T_{\mu_{1}\cdots\mu_{k}}|\mu_{1} \rangle\otimes\cdots\otimes|\mu_{k}\rangle. \tag{1}\] To construct a tensor network, we consider a set of vertices and links which form a network. To each vertex \(x\) we associate a state \(|T_{x}\rangle\), such that the collection of all tensors defines a product state \(\otimes_{x}|T_{x}\rangle\). Adjacent tensors are those connected by a link; their corresponding legs are contracted by projecting onto a maximally entangled state. For simplicity, we assume that all contracted legs have the same bond dimension \(D\). Denoting the tensor product Hilbert space on the two legs connecting the tensors at vertices \(x\) and \(y\) as \(\mathcal{H}_{xy}\otimes\mathcal{H}_{yx}\), this means that we project onto the state \(|xy\rangle=D^{-1/2}\sum_{\mu=1}^{D}|\mu_{xy}\rangle\otimes|\mu_{yx}\rangle\). Uncontracted legs are called "dangling" and come in two types: bulk legs (viewed as input) and boundary legs (viewed as output). We write the boundary state in the following way:1 Footnote 1: Here, we have chosen a pure state as the bulk input, but generalizing to mixed states is straightforward. \[|\Psi_{\partial}\rangle = \left(\langle\Phi_{b}|\otimes\bigotimes_{\langle xy\rangle} \langle xy|\right)\left(\bigotimes_{x}|T_{x}\rangle\right), \tag{2}\] where we project the bulk input legs onto a bulk state \(|\Phi_{b}\rangle\). In an RTN, we choose \(T_{x}\) to be independent random tensors and take \(D\) to be large. We will not go into details on how one computes Renyi entropy in the RTN here; the important point is that, for a boundary subregion \(R\), one finds the following answer for the Renyi entropy \(S_{n}(R)\): \[S_{n}(R)=|\gamma_{R}|\log D+S_{n}(\rho_{r}), \tag{3}\] where \(|\gamma_{R}|\) is the number of links cut by the minimal surface \(\gamma_{R}\) homologous to \(R\) and \(S_{n}(\rho_{r})\) is the Renyi entropy of the bulk subregion \(r\) bounded by \(R\cup\gamma_{R}\) (we will call \(r\) the entanglement wedge). Analytically continuing to \(n=1\) recovers the Faulkner-Lewkowycz-Maldacena (FLM) formula \[S_{\rm vN}(R)=\frac{\left<\hat{A}\right>}{4G_{N}}+S_{\rm vN}(\rho_{r}), \tag{4}\] with \(|\gamma_{R}|\log D\) identified with the expectation value of the area operator \(\langle\hat{A}\rangle/4G_{N}\). In a state with vanishing bulk Renyi entropy (such as a product state), the boundary Renyi entropy (3) is consequently independent of \(n\). The RTN thus exhibits a flat entanglement spectrum due to the projection of contracted legs onto maximally mixed states.2 This differs sharply from what we expect from generic situations in AdS/CFT. For example, the Renyi entropy for an interval \(R\) of length \(\ell\) in the vacuum state of a two-dimensional CFT takes the form Footnote 2: The HaPPY code [10] also features a flat Rényi spectrum for similar reasons. 
\[S_{n}(R)=\frac{c}{6}\left(1+\frac{1}{n}\right)\log\left(\frac{\ell}{\epsilon} \right), \tag{5}\] which is manifestly \(n\)-dependent. One possible solution is to instead project contracted legs onto a non-maximally entangled link state [11; 12]. By tuning the entanglement spectrum appropriately, this allows one to reproduce the correct single-interval CFT vacuum Renyi entropy (5), but does not work in more general cases such as that of multiple disjoint intervals. To see this, consider two disjoint intervals \(R_{1}\) and \(R_{2}\) (see Figure 1), and for simplicity consider the case where the mutual information between the intervals is small in the sense that the RT surfaces are always in a disconnected phase. The boundary Renyi entropy can be obtained by inserting appropriate cosmic branes into the bulk [13]. The tension of the cosmic branes is proportional to \(1-1/n\). In a fully gravitating system, the two cosmic branes homologous to \(R_{1}\), \(R_{2}\) will backreact and affect each other in an \(n\)-dependent way. This results in a nonzero Renyi mutual information between the two intervals that cannot be reproduced in RTNs by simply adding non-maximally entangled links, because they would not allow the minimal surfaces to affect each other. rom the gravity point of view, the RTN prepares a so-called fixed-area state [14; 15], which is an eigenstate of the area operator \(\hat{A}\) in (4). Such eigenstates form a complete basis for semiclassical states prepared via the gravitational path integral, so in principle any semiclassical state can be represented as a superposition over fixed-area basis states \(|\alpha\rangle\), where \(\alpha\) labels the eigenvalues of the area operator. As the area operator lives on the RT surface dividing the entanglement wedge \(r\) and its complement \(\overline{r}\), it naturally belongs to the center of the algebra of bulk operators in \(r\). This view was espoused in [16], where it was shown that the FLM formula (4) can be derived from a quantum error correcting code with complementary recovery. In that language, the area operator is a specific element of the center of the bulk von Neumann algebra on \(r\). The usual RTN implements a special case of this where the algebra has a trivial center, i.e., the center consists of \(c\)-numbers only and is therefore isomorphic to \(\mathbb{C}\). In particular, this means that the area operator must be a \(c\)-number, which, as previously discussed, is incongruous with what one observes in gravitational holography. The goal of this paper is to construct a model where the algebra on \(r\) has a nontrivial center and to identify a nontrivial area operator living in the center.3 An _ad hoc_ way of getting a nontrivial center is to "stack" multiple layers of tensor networks by hand Figure 1: The cosmic branes that arise in computing the Rényi entropy for disjoint subregions. These branes have nonzero, \(n\)-dependent tension, and so would backreact in a realistic holographic system. to form superpositions of fixed-area states. We will not do this but will instead pursue a more physically motivated approach. In particular, one would like to incorporate something akin to "edge states", degrees of freedom which live on the minimal surface, in order to go beyond fixed-area states and produce a nontrivial area operator.4 Our goal in this work is to give a model which provides a physical origin for these edge states. 
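To make the review above concrete, the toy sketch below (ours, not code from Ref. [8]) builds the boundary state of Eq. (2) for two random tensors joined by a single maximally entangled link with a trivial bulk input, and checks that the entanglement of one boundary leg is close to \(\log D\), with the von Neumann and second Rényi entropies nearly equal, i.e., an approximately flat spectrum; the agreement improves as the bond dimension grows.

```python
import numpy as np

rng = np.random.default_rng(0)
D, D_bdy = 4, 64          # link and boundary bond dimensions, with D_bdy >> D

# Two random tensors with legs (link, boundary); the bulk input is a trivial
# product state, so bulk legs are omitted in this toy example.
Tx = rng.normal(size=(D, D_bdy)) + 1j * rng.normal(size=(D, D_bdy))
Ty = rng.normal(size=(D, D_bdy)) + 1j * rng.normal(size=(D, D_bdy))

# Project the pair of link legs onto |xy> = D^{-1/2} sum_mu |mu mu>, as in Eq. (2).
psi = np.einsum("ma,mb->ab", Tx, Ty)      # boundary state with legs (a, b)
psi /= np.linalg.norm(psi)

# Reduced state of boundary leg a, and its von Neumann and second Renyi entropies.
rho = psi @ psi.conj().T
evals = np.clip(np.linalg.eigvalsh(rho), 1e-14, None)
S_vN = -np.sum(evals * np.log(evals))
S_2 = -np.log(np.sum(evals ** 2))
print(S_vN, S_2, np.log(D))   # both entropies lie close to log D (flat spectrum)
```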
Inspired by similar operators found in gauge theory [19], we will add a second layer on top of the standard RTN which imposes gauge invariance. This alters the algebra of operators in the bulk, and as we will show, it introduces a nontrivial contribution to the area operator of the following form: Footnote 4: Initial work in this direction was taken in [18] by generalizing the HaPPY code. \[\Delta\widetilde{A}=\bigoplus_{\alpha}\widetilde{P}^{\alpha}\log d_{\alpha}, \tag{6}\] where roughly speaking \(\alpha\) denotes a superselection sector in the gauge-invariant Hilbert space, \(\widetilde{P}^{\alpha}\) is the projection onto that superselection sector, and \(d_{\alpha}\) is the dimension of \(\alpha\) viewed as an irreducible representation. The important thing to note at the moment is that this operator is not a \(c\)-number and is therefore nontrivial. The structure of this paper is as follows. In Section 2 we will set up our model - a two-layer gauged random tensor network - and introduce the formalism for gauge theory on a graph. In Section 3 we will analyze the Hilbert space of gauge-invariant states and the algebras of gauge-invariant operators for a subregion. In Section 4 we will compute entanglement and Renyi entropies in both the pre-gauged and gauge-invariant algebras, which we will use to derive the new area operator for our model. We conclude with some discussion and future directions. ## 2 The Gauged Random Tensor Network We now construct our model. It has two layers: a top layer consisting of a gauge theory on a graph, and a bottom layer made of a standard random tensor network. We illustrate some examples of this two-layer model in Figure 2. The top layer produces a gauge-invariant state which is then fed into the bottom layer as input. The final output of the model is the boundary state produced by the bottom RTN. We can then analyze properties of the boundary state (such as its entropy) using the usual techniques for the random tensor network. This construction has some nice properties. In particular, one might be worried that if the structure of the RTN is altered, Petz reconstruction of local operators might no longer hold. Here we avoid this potential issue by keeping the tensor network the same, but changing the space of states that can be fed into the network. Given this construction, we would like to understand what set of gauge-invariant states we will be feeding into the bottom layer. The following is based on a non-dynamical version of the standard Kogut-Susskind construction in lattice gauge theory [20].5 As we do not require our graph to be a lattice, i.e. there is not necessarily a regular tiling, we will refrain from calling our top layer a lattice gauge theory. Footnote 5: See also related discussion in [21]. Our starting point is an arbitrary directed graph \(\Lambda=(V,E)\) consisting of vertices \(V=\{v\}\) and edges \(E=\{e\}\). We require the graph to be directed so we have a well-defined orientation on each edge, though we emphasize that the choice of orientation is arbitrary. We impose no additional conditions on the graph. In particular, the graph could have loops, adjacent vertices could be connected by multiple edges, and the graph does not need to be planar. We start with a gauge group, which we choose to be a compact Lie group \(G\). It does not have to be connected, and in particular, we could consider finite groups such as \(\mathbb{Z}_{2}\) if we wish. We assign a (pre-gauged) Hilbert space to each vertex and edge of the graph \(\Lambda\). 
The Hilbert space \(\mathcal{H}_{e}\) on each edge \(e\) is taken to be \(L^{2}(G)\), the space of square-integrable functions on \(G\). A state \(\ket{\psi}_{e}\) in this \(\mathcal{H}_{e}=L^{2}(G)\) can be written as an integral6 over orthonormal basis elements \(\ket{g}\) labeled by \(g\in G\): Footnote 6: In cases where \(G\) is finite, the integral is understood as a sum: \(\ket{\psi}_{e}=\sum_{g\in G}\frac{1}{\ket{G}}\psi(g)\ket{g}_{e}\), where \(\ket{G}\) is the order of \(G\). \[\ket{\psi}_{e}=\int dg\psi(g)\ket{g}_{e}, \tag{1}\] Figure 2: Some examples of our two-layer model, with a gauge theory on a directed graph on the top layer and a random tensor network with dangling boundary legs on the bottom. In these examples, we choose each tensor in the bottom layer to have a bulk input leg which is either a vertex or edge on the graph. The light gray planes in the right example are included for visual clarity. where \(dg\) is the Haar measure7 on \(G\). For our purposes, it will be useful to work with another orthonormal basis Footnote 7: The Haar measure on \(G\) is invariant under left and right group multiplication (\(g\to g^{\prime}g\) and \(g\to gg^{\prime}\)) and is normalized such that \(\int dg=1\). \[\left|\alpha ij\right\rangle_{e},\quad i,j=1,2,\cdots,d_{\alpha} \tag{2}\] for the same \(\mathcal{H}_{e}=L^{2}(G)\), where \(\alpha\) labels irreducible representations (irreps) of \(G\) and \(d_{\alpha}\) is the dimension of the representation \(\alpha\). This representation basis is orthonormal: \[{}_{e}\langle\alpha ij|\beta k\ell\rangle_{e}=\delta_{\alpha\beta}\delta_{ik} \delta_{j\ell}, \tag{3}\] and can be written in terms of the previously defined group basis \(\left|g\right\rangle_{e}\): \[\left|\alpha ij\right\rangle_{e}=\sqrt{d_{\alpha}}\int dg\,D^{\alpha}_{ij}(g) \left|g\right\rangle_{e}, \tag{4}\] where \(D^{\alpha}_{ij}(g)\) are elements of a unitary matrix \(D^{\alpha}(g)\) representing \(g\) in \(\alpha\). This can be viewed as a "Fourier transform" between the representation basis and the group basis. The group action induces a set of unitaries \(L_{e}(g)\) and \(R_{e}(g)\) which act as left and right group multiplications on the group basis: \[L_{e}(g)\left|h\right\rangle_{e}=\left|gh\right\rangle_{e},\quad R_{e}(g^{-1} )\left|h\right\rangle_{e}=\left|hg^{-1}\right\rangle_{e}. \tag{5}\] In the representation basis, the group unitaries instead act as unitary matrix multiplication on one of the two indices \(i\), \(j\): \[L_{e}(g)\left|\alpha ij\right\rangle_{e}=\sum_{k}D^{\overline{\alpha}}_{ki}(g )\left|\alpha kj\right\rangle_{e},\quad R_{e}(g^{-1})\left|\alpha ij\right\rangle _{e}=\sum_{k}D^{\alpha}_{kj}(g)\left|\alpha ik\right\rangle_{e}, \tag{6}\] where \(\overline{\alpha}\) denotes the complex conjugate representation of \(\alpha\) defined by \(D^{\overline{\alpha}}_{ij}(g)=D^{\alpha\star}_{ij}(g)\). Thus there are two copies of \(G\) acting on \(\mathcal{H}_{e}\): \(L_{e}(g)\) gives the group action of the first copy under which \(\left|\alpha ij\right\rangle_{e}\) transforms in the representation \(\overline{\alpha}\), and \(R_{e}(g^{-1})\) gives the action of the second copy under which \(\left|\alpha ij\right\rangle_{e}\) transforms in the representation \(\alpha\). Altogether, \(\left|\alpha ij\right\rangle_{e}\) transforms in the external tensor product8 representation \(\overline{\alpha}\boxtimes\alpha\) of \(G\times G\). 
Using this, we decompose \(\mathcal{H}_{e}\) as Footnote 8: For representations \(\alpha_{1}\), \(\alpha_{2}\) of \(G\), their external tensor product \(\alpha_{1}\boxtimes\alpha_{2}\) is a representation of \(G\times G\) with an underlying vector space \(\mathcal{H}^{\alpha_{1}}\otimes\mathcal{H}^{\alpha_{2}}\), where \(\mathcal{H}^{\alpha_{1}}\) transforms under the first \(G\) in the \(\alpha_{1}\) representation and \(\mathcal{H}^{\alpha_{2}}\) transforms under the second \(G\) in the \(\alpha_{2}\) representation. Note that this is different from the (usual) tensor product \(\alpha_{1}\otimes\alpha_{2}\) which is a representation of \(G\) (not \(G\times G\)), with an underlying vector space \(\mathcal{H}^{\alpha_{1}}\otimes\mathcal{H}^{\alpha_{2}}\) where \(\mathcal{H}^{\alpha_{1}}\) and \(\mathcal{H}^{\alpha_{2}}\) transform under the same \(G\). \[\mathcal{H}_{e}\cong\bigoplus_{\alpha}\mathcal{H}^{\overline{\alpha}}\otimes \mathcal{H}^{\alpha}\cong\bigoplus_{\alpha}\left(\mathcal{H}^{\alpha}\right) ^{\oplus d_{\alpha}}, \tag{7}\] where the sum runs over all irreducible representations \(\alpha\) of \(G\) and \(\mathcal{H}^{\alpha}\) is a Hilbert space of dimension \(d_{\alpha}\) transforming in the \(\alpha\) representation. It will be convenient to use the representation basis for the remainder of the paper. Now we turn to the (pre-gauged) Hilbert space \(\mathcal{H}_{v}\) on a vertex \(v\). In general, \(\mathcal{H}_{v}\) may be chosen quite arbitrarily (corresponding to specifying any number of matter degrees of freedom including the case of no matter), but it needs to furnish some representation under the group action of \(G\). This representation could be reducible or trivial, but it can always be decomposed into a direct sum of irreducible representations of \(G\). Using this, we may decompose a general \(\mathcal{H}_{v}\) as \[\mathcal{H}_{v}=\bigoplus_{\alpha}\left(\mathcal{H}_{v}^{\alpha}\right)^{ \oplus n_{\alpha}}. \tag{8}\] Here the sum again runs over all distinct irreducible representations \(\alpha\) of \(G\) and \(n_{\alpha}\) is the multiplicity of the representation \(\alpha\) in \(\mathcal{H}_{v}\). Note that \(n_{\alpha}\) could be any nonnegative integer, and in particular, it could be zero (representing the absence of a given representation \(\alpha\) in \(\mathcal{H}_{v}\)). Thus, the simplest choice of \(\mathcal{H}_{v}\) is a trivial Hilbert space with no matter (corresponding to \(n_{\alpha}=0\) for all \(\alpha\)), but in the discussion below we will consider the general case (8) with arbitrary \(n_{\alpha}\). Furthermore, we will allow \(\mathcal{H}_{v}\) to vary from one vertex to another. An orthonormal basis of states for the Hilbert space \(\mathcal{H}_{v}\) can be written as \[\left|\alpha ij\right\rangle_{v},\quad i=1,\cdots,n_{\alpha},\quad j=1,\cdots,d_{\alpha}, \tag{9}\] where the first index \(i\) runs over the multiplicity \(n_{\alpha}\) and the second runs over the dimension \(d_{\alpha}\). The group action of \(G\) on \(\mathcal{H}_{v}\) is given by unitary operators \(U_{v}(g)\), which act on the \(\left|\alpha ij\right\rangle_{v}\) basis as \[U_{v}(g)\left|\alpha ij\right\rangle_{v}=\sum_{k}D_{kj}^{\alpha}(g)\left| \alpha ik\right\rangle_{v}. \tag{10}\] Note that \(U_{v}(g)\) only acts on the second index \(j\) and is analogous to the action of \(R_{e}(g^{-1})\) in (5). Thus, we find an important distinction between the vertex Hilbert space \(\mathcal{H}_{v}\) and the edge Hilbert space \(\mathcal{H}_{e}\). 
To see this, first note that the two Hilbert spaces share some similarities. In particular, \(\mathcal{H}_{e}\) is a direct sum of irreducible representations \(\alpha\) with multiplicity \(d_{\alpha}\) as shown on the right-hand side of (7), and this is the analogue of (8) for \(\mathcal{H}_{v}\). The representation basis (2) of \(\mathcal{H}_{e}\) is similar to the basis (9) of \(\mathcal{H}_{v}\). However, the difference is that an edge has the additional structure of allowing another group action \(L_{e}(g)\) that acts on the first index \(i\) of \(\left|\alpha ij\right\rangle_{e}\), whereas at a vertex the first index \(i\) of \(\left|\alpha ij\right\rangle_{v}\) is a multiplicity index that does not admit a natural group action. The pre-gauged Hilbert space for the entire graph is then \[\mathcal{H}=\left(\bigotimes_{v\in V}\mathcal{H}_{v}\right)\otimes\left(\bigotimes _{e\in E}\mathcal{H}_{e}\right). \tag{11}\] We refer to the algebra of all bounded operators on \(\mathcal{H}\) as \(\mathcal{A}=\mathcal{B}\left(\mathcal{H}\right)\). As \(\mathcal{H}\) completely factorizes over the vertices and edges, so too does the algebra of operators \[\mathcal{A}=\left(\bigotimes_{v\in V}\mathcal{A}_{v}\right)\otimes\left(\bigotimes _{e\in E}\mathcal{A}_{e}\right). \tag{12}\] Using the representation basis (2) of \(\mathcal{H}_{e}\), \(\mathcal{A}_{e}\) can be written as \[\mathcal{A}_{e}=\text{span}\{\left|\alpha ij\right\rangle_{e}\langle\beta k \ell|\}, \tag{13}\] where the indices \(i,j,k,\ell\) run over the irrep dimension. Similarly, using (9) we write \[\mathcal{A}_{v}=\text{span}\{\left|\alpha ij\right\rangle_{v}\langle\beta k \ell|\}, \tag{14}\] where \(i,k\) run over the irrep multiplicity and \(j,\ell\) run over the irrep dimension. For each vertex \(v\), we now define a gauge transformation \(A_{v}(g)\) as the following unitary operator acting on \(v\) and all its associated edges: \[A_{v}(g)\equiv U_{v}(g)\prod_{e\in E^{-}(v)}L_{e}(g)\prod_{e\in E^{+}(v)}R_{e} (g^{-1}), \tag{15}\] where \(E^{-}(v)\) consists of edges associated to \(v\) oriented away from the vertex and \(E^{+}(v)\) consists of edges oriented into the vertex. Physical states are defined to be those invariant under gauge transformations \(A_{v}(g)\) for all \(g\) and \(v\). The easiest way of generating a gauge-invariant state is to average over all gauge transformations acting on a state in \(\mathcal{H}\). The operator that implements this averaging on a vertex \(v\) is the following projector: \[\Pi_{v}=\int dgA_{v}(g). \tag{16}\] \(\Pi_{v}\) obeys the usual properties of a projector such that \(\Pi_{v}^{2}=\Pi_{v}\) and \(\Pi_{v}=\Pi_{v}^{\dagger}\). The gauge-invariant projector on the entire graph is simply the product of individual projectors on all vertices: \[\Pi_{\text{GI}}=\prod_{v\in V}\Pi_{v}. \tag{17}\] It is easy to verify that \([A_{v}(g),A_{v^{\prime}}(g^{\prime})]=0\) for all \(v\), \(v^{\prime}\), \(g\), \(g^{\prime}\), and therefore \([\Pi_{v},\Pi_{v^{\prime}}]=0\). Throughout the paper, we will denote fully gauge-invariant spaces, states, and operators with a tilde; for instance, the gauge-invariant states \(\left|\widetilde{\psi}\right\rangle\) are elements of \(\widetilde{\mathcal{H}}\) defined via \[\widetilde{\mathcal{H}}\equiv\Pi_{\text{GI}}\mathcal{H}. \tag{18}\] The gauge-invariant algebra \(\widetilde{\mathcal{A}}\) is defined as the space of bounded operators on \(\widetilde{\mathcal{H}}\). 
\(\widetilde{\mathcal{A}}\) can alternatively be represented by conjugation of the pre-gauged algebra \(\mathcal{A}\) with the projector \(\Pi_{\text{GI}}\): \[\widetilde{\mathcal{A}}=\Pi_{\text{GI}}\mathcal{A}\Pi_{\text{GI}}. \tag{19}\] We should comment on the interpretation of the operators in this gauge-invariant algebra. Every operator \(\widetilde{\mathcal{O}}\in\widetilde{\mathcal{A}}\) can be extended to a pre-gauged operator \(\mathcal{O}\in\mathcal{A}\) which acts identically on gauge-invariant states. There is generally more than one extension to \(\mathcal{A}\), and to choose a unique extension one must specify the action of the pre-gauged operator on the orthogonal complement of \(\widetilde{\mathcal{H}}\). We make the natural choice that the extension \(\mathcal{O}\) should annihilate the orthogonal complement. Moreover, for notational simplicity, we identify every \(\widetilde{\mathcal{O}}\in\widetilde{\mathcal{A}}\) with its natural extension \(\mathcal{O}\in\mathcal{A}\) (which annihilates the orthogonal complement), as we have done in (19). The reason for this natural extension will become clearer in later sections. We now feed any gauge-invariant state \(\left|\widetilde{\psi}\right\rangle\) as the bulk input into the RTN on the the bottom layer, in a manner illustrated by Figure 2. In particular, the bulk dangling legs of the RTN should match and connect to the edges and vertices of the graph \(G\) on the top layer, for \(\left|\widetilde{\psi}\right\rangle\) lives on these edges and vertices. In other words, each edge or vertex of \(G\) is fed into a bulk dangling leg of the RTN.9 Footnote 9: In principle, the RTN could also take any pre-gauged state as the bulk input, but we choose to feed only gauge-invariant states because as we will see, this restriction leads to a nontrivial area operator. In order to utilize the full machinery of the original RTN, we would like the Hilbert spaces associated with the tensors on the bottom layer to be finite-dimensional (as is the case for the original RTN). When \(G\) is an infinite group, \(\mathcal{H}_{e}=L^{2}(G)\) is infinite-dimensional and there are an infinite number of irreducible representations to sum over, so in order to avoid a tensor in the bottom layer having an infinite-dimensional leg, we impose a cutoff on our edge and vertex Hilbert spaces. This can take the form of, e.g., a cutoff in the sums in (7) and (8). Therefore, we are only feeding in states that live in a finite-dimensional subspace of \(\widetilde{\mathcal{H}}\). This does not affect the discussion in the next section of the gauge-invariant algebra; the cutoff is only relevant when we compute entanglement measures in Section 4. Deriving the Gauge-Invariant Algebra Now that we have defined our gauge-invariant states, we would like to understand the structure of the algebra of gauge-invariant operators. Our overarching goal is to write down the gauge-invariant subalgebra for a subregion \(r\) of the top layer which we will later use to derive an FLM formula for the gauged RTN. ### The structure of the gauge-invariant Hilbert space We now study the decomposition of \(\widetilde{\mathcal{H}}\) when our graph \(\Lambda\) is divided into a subregion and its complement. We define a subregion \(r\) of \(\Lambda\) to be an arbitrary subset of vertices and edges (without further restrictions). We call the complement subregion \(\overline{r}\). 
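Before decomposing \(\widetilde{\mathcal{H}}\) across a cut, it may help to see the projector of Eqs. (15)-(17) at work in the simplest setting. The sketch below (our own illustration) takes \(G=\mathbb{Z}_{2}\) with trivial matter on a directed 3-cycle, builds \(\Pi_{\text{GI}}\) explicitly, and confirms that the gauge-invariant subspace is two-dimensional, one state per \(\mathbb{Z}_{2}\) holonomy sector; the graph and helper names are assumptions made for illustration.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])   # multiplication by the nontrivial Z_2 element

def on_edges(ops):
    """Tensor together one single-edge operator per edge (here three edges)."""
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

# Directed 3-cycle with edges ordered (e12, e23, e31) and G = Z_2, no matter.
# For Z_2, left and right multiplication by the nontrivial element both act as X,
# so the gauge transformation of Eq. (15) at each vertex flips its two incident edges.
A1 = on_edges([X, I2, X])   # vertex 1: source of e12, target of e31
A2 = on_edges([X, X, I2])   # vertex 2: source of e23, target of e12
A3 = on_edges([I2, X, X])   # vertex 3: source of e31, target of e23

# Eq. (16): the Z_2 "Haar average" is a two-term sum; Eq. (17): multiply the projectors.
Pi_GI = np.eye(8)
for A in (A1, A2, A3):
    Pi_GI = Pi_GI @ (np.eye(8) + A) / 2.0

print(np.trace(Pi_GI))   # -> 2.0: one gauge-invariant state per Z_2 holonomy sector
```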
In order to work out a useful basis for gauge-invariant states, it is convenient to divide the set \(V\) of all vertices into three types: those strictly in \(r\) (meaning that the vertex and its associated edges are all in \(r\)), those strictly in \(\overline{r}\), and vertices "on the cut" (meaning that the vertex and its associated edges are partly in \(r\) and partly in \(\overline{r}\)). We call these sets \(V_{r}\), \(V_{\overline{r}}\), and \(V_{c}\equiv V/\left(V_{r}\cup V_{\overline{r}}\right)\), respectively. Consequently, the gauge-invariant projector can be decomposed in the following way: \[\Pi_{\text{GI}}=\Pi_{V_{r}}\Pi_{V_{c}}\Pi_{V_{\overline{r}}}, \tag{10}\] where \(\Pi_{V_{i}}\) is defined as the product of individual projections \(\Pi_{v}\) over all vertices \(v\in V_{i}\), for \(i=r,c,\overline{r}\). First, let us discuss a partial gauging of the pre-gauged Hilbert space. Using the tensor decomposition of \(\mathcal{H}=\mathcal{H}_{r}\otimes\mathcal{H}_{\overline{r}}\), we can write \(\widetilde{\mathcal{H}}=\Pi_{\text{GI}}\mathcal{H}\) as \[\widetilde{\mathcal{H}} =\Pi_{V_{r}}\Pi_{V_{c}}\Pi_{V_{\overline{r}}}\left(\mathcal{H}_{ r}\otimes\mathcal{H}_{\overline{r}}\right)\] \[=\Pi_{V_{c}}\left(\left(\Pi_{V_{r}}\mathcal{H}_{r}\right)\otimes \left(\Pi_{V_{\overline{r}}}\mathcal{H}_{\overline{r}}\right)\right). \tag{11}\] We define the two terms in the parentheses as \[\hat{\mathcal{H}}_{r}\equiv\Pi_{V_{r}}\mathcal{H}_{r},\quad\hat{\mathcal{H}}_ {\overline{r}}\equiv\Pi_{V_{\overline{r}}}\mathcal{H}_{\overline{r}}. \tag{12}\] These are "partially gauged" Hilbert spaces, in the sense that states in \(\hat{\mathcal{H}}_{r}\) (\(\hat{\mathcal{H}}_{\overline{r}}\)) are invariant under gauge transformations associated to vertices in \(V_{r}\) (\(V_{\overline{r}}\)), but not so under gauge transformations on the cut. We denote the partially gauged Hilbert space on the full graph as \[\hat{\mathcal{H}}=\hat{\mathcal{H}}_{r}\otimes\hat{\mathcal{H}}_{\overline{r}}. \tag{13}\] As \(\hat{\mathcal{H}}\) tensor factorizes, the algebra of operators on \(\hat{\mathcal{H}}\) also factorizes as \[\hat{\mathcal{A}}=\hat{\mathcal{A}}_{r}\otimes\hat{\mathcal{A}}_{\overline{r}}. \tag{14}\] Now that we have a partially gauged Hilbert space \(\hat{\cal H}\), it remains to impose gauge invariance "on the cut" and obtain the fully gauged Hilbert space \(\widetilde{\cal H}=\Pi_{V_{c}}\hat{\cal H}\). The gauge transformation (15) associated to each vertex \(v_{i}\in V_{c}\) can be decomposed into unitary operators in \(r\) and \(\overline{r}\): \[A_{v_{i}}(g_{i})=A_{v_{i},r}(g_{i})A_{v_{i},\overline{r}}(g_{i}). \tag{16}\] Let \(n\equiv|V_{c}|\) be the number of vertices on the cut. The gauge-invariant projector on the cut \(\Pi_{V_{c}}\) acts by integrating over the gauge transformations associated to the \(n\) vertices in \(V_{c}\): \[\Pi_{V_{c}} =\int dg_{1}\cdots dg_{n}A_{v_{1}}(g_{1})\cdots A_{v_{n}}(g_{n})\] \[=\int dg_{1}\cdots dg_{n}A_{v_{1},r}(g_{1})\cdots A_{v_{n},r}(g_{ n})A_{v_{1},\overline{r}}(g_{1})\cdots A_{v_{n},\overline{r}}(g_{n})\] \[\equiv\int dgA_{r}(g)A_{\overline{r}}(g), \tag{17}\] where we have defined \(A_{r}(g)=\prod_{i=1}^{n}A_{v_{i},r}(g_{i})\) (and similarly for \(A_{\overline{r}}(g)\)), \(g=(g_{1},\cdots,g_{n})\) is a element of \(G^{n}\) (the direct product of \(n\) copies of \(G\) on the cut), and \(dg\) is the Haar measure on \(G^{n}\). Thus \(A_{r}(g)\) is a \(G^{n}\) action on \(\hat{\cal H}_{r}\), and \(\hat{\cal H}_{r}\) can be decomposed into irreps of \(G^{n}\). 
We decompose \(\hat{\cal H}_{r}\) into the following way: \[\hat{\cal H}_{r}\cong\bigoplus_{\alpha,i}\hat{\cal H}_{r}^{\alpha i}, \tag{18}\] where \(\alpha\) as an irreducible representation of \(G^{n}\) can also be thought of as the external tensor product of \(n\) irreps of \(G\), i.e., \(\alpha\) denotes the external tensor product \(\alpha_{1}\boxtimes\alpha_{2}\boxtimes\cdots\boxtimes\alpha_{n}\). Thus, we will sometimes write \(\alpha\) as a tuple of \(G\) irreps \((\alpha_{1},\alpha_{2},\cdots,\alpha_{n})\). The index \(i=1,\cdots,n_{\alpha}\) denotes the multiplicity of the \(\alpha\) irrep. The sum ranges over all \(G^{n}\) irreps but some irreps may appear with zero multiplicity, as in the single vertex Hilbert space (8). From the decomposition (18), we write an orthonormal basis for \(\hat{\cal H}_{r}\) as \(\left\{\left|\alpha ik\right\rangle_{r}\right\}\), where again the first index \(i=1,\cdots,n_{\alpha}\) runs over the irrep multiplicity and the second index \(k=1,\cdots,d_{\alpha}\) labels an orthonormal basis for each \(\hat{\cal H}_{r}^{\alpha i}\). Similarly, we write an orthonormal basis for \(\hat{\cal H}_{\overline{r}}\) as \(\left\{\left|\overline{\beta}j\ell\right\rangle_{\overline{r}}\right\}\), where \(j=1,\cdots,\overline{n}_{\overline{\beta}}\), and \(\overline{n}_{\overline{\beta}}\) is the multiplicity of the \(\overline{\beta}\) irrep on \(\overline{r}\). Explicitly, \(A_{r}(g)\) (\(A_{\overline{r}}(g)\)) acts on the basis states of \(\hat{\cal H}_{r}\) (\(\hat{\cal H}_{\overline{r}}\)) via \[A_{r}(g)\left|\alpha ik\right\rangle_{r} =\sum_{k^{\prime}}D_{k^{\prime}k}^{\alpha}(g)\left|\alpha ik^{ \prime}\right\rangle_{r},\] \[A_{\overline{r}}(g)\left|\overline{\beta}j\ell\right\rangle_{ \overline{r}} =\sum_{\ell^{\prime}}D_{\ell^{\prime}\ell}^{\overline{\beta}}(g) \left|\overline{\beta}j\ell^{\prime}\right\rangle_{\overline{r}}. \tag{19}\] Combining the basis for \(\hat{\mathcal{H}}_{r}\) and for \(\hat{\mathcal{H}}_{\overline{r}}\), we write an orthonormal basis for \(\hat{\mathcal{H}}\) as \(\left\{\left|\alpha ik\right\rangle_{r}\left|\overline{\beta}j\ell\right\rangle_ {\overline{r}}\right\}\). It is worth noting that the multiplicities \(\overline{n}_{\overline{\alpha}}\) on \(\overline{r}\) are generally independent from the multiplicities \(n_{\overline{\alpha}}\) on \(r\); in particular, \(\overline{n}_{\overline{\alpha}}\) could vanish while \(n_{\overline{\alpha}}\) is nonzero, and vice versa. In a sense, we have done as much gauging as we can while keeping the factorization of the Hilbert space between \(r\) and \(\overline{r}\). \(\hat{\mathcal{H}}\) is similar to what is often called the extended Hilbert space [22, 23, 24, 19], which is a choice of Hilbert space into which one can embed gauge-invariant states such that the extended Hilbert space factorizes across the cut. Here we arrive at a similar prescription by restricting from a larger Hilbert space \(\mathcal{H}\). Now we will write a basis of states for the fully gauge-invariant Hilbert space \(\widetilde{\mathcal{H}}\). **Lemma 1**.: The fully gauge-invariant Hilbert space \(\widetilde{\mathcal{H}}=\Pi_{V_{c}}\left(\hat{\mathcal{H}}_{r}\otimes\hat{ \mathcal{H}}_{\overline{r}}\right)\) is given by \[\widetilde{\mathcal{H}}=\left\{\sum_{\alpha ijk}\widetilde{\psi}_{\alpha ij} \left|\alpha ik\right\rangle_{r}\left|\overline{\alpha}jk\right\rangle_{ \overline{r}}:\widetilde{\psi}_{\alpha ij}\in\mathbb{C}\right\}. 
\tag{3.10}\] Proof.: Since we already have a basis for the partially gauged Hilbert space, it suffices to demonstrate the action of \(\Pi_{V_{c}}\) on these basis states, which is given by \[\Pi_{V_{c}}\left|\alpha ik\right\rangle_{r}\left|\overline{\beta}j\ell\right\rangle _{\overline{r}}=\sum_{k^{\prime}\ell^{\prime}}\int dgD^{\alpha}_{k^{\prime}k }(g)D^{\overline{\beta}}_{\ell^{\prime}\ell}(g)\left|\alpha ik^{\prime} \right\rangle_{r}\left|\overline{\beta}j\ell^{\prime}\right\rangle_{ \overline{r}}. \tag{3.11}\] We recall the Schur orthogonality relation for compact groups: \[\int dgD^{\alpha}_{k^{\prime}k}(g)D^{\overline{\beta}}_{\ell^{\prime}\ell}(g )=\frac{\delta_{\alpha\beta}\delta_{k^{\prime}\ell^{\prime}}\delta_{k\ell}}{ d_{\alpha}}, \tag{3.12}\] so that the fully gauge-invariant basis states are \[\Pi_{V_{c}}\left|\alpha ik\right\rangle_{r}\left|\overline{\beta}j\ell \right\rangle_{\overline{r}}=\frac{1}{d_{\alpha}}\delta_{\alpha\beta}\delta_{ k\ell}\sum_{k^{\prime}\ell^{\prime}}\delta_{k^{\prime}\ell^{\prime}}\left| \alpha ik^{\prime}\right\rangle_{r}\left|\overline{\beta}j\ell^{\prime} \right\rangle_{\overline{r}}=\frac{1}{d_{\alpha}}\delta_{\alpha\beta}\delta_{ k\ell}\sum_{k^{\prime}}\left|\alpha ik^{\prime}\right\rangle_{r}\left| \overline{\beta}jk^{\prime}\right\rangle_{\overline{r}}. \tag{3.13}\] Choosing \(\alpha=\beta\) and \(k=\ell\) gives the desired form (3.10). _Remark 1_.: (3.10) immediately implies a natural Hilbert space isomorphism \[\widetilde{\mathcal{H}}\cong\bigoplus_{\alpha}\widetilde{\mathcal{H}}^{ \alpha}_{r}\otimes\widetilde{\mathcal{H}}^{\overline{\alpha}}_{\overline{r}}. \tag{3.14}\] Here \(\widetilde{\mathcal{H}}^{\alpha}_{r}\) denotes a Hilbert space of dimension \(n_{\alpha}\) with orthonormal basis states \(\left|\alpha i\right\rangle_{r}\) transforming in the \(\alpha\) representation of \(G^{n}\), and \(\widetilde{\mathcal{H}}^{\overline{\alpha}}_{\overline{r}}\) similarly denotes a Hilbert space of dimension \(\overline{n}_{\overline{\alpha}}\) with orthonormal basis states \(\left|\overline{\alpha}j\right\rangle_{\overline{r}}\) transforming in the \(\overline{\alpha}\) representation. Note that although irrep labels such as \(\alpha\) appear in the basis states, they are fixed within each Hilbert space \(\widetilde{\mathcal{H}}_{r}^{\alpha}\) or \(\widetilde{\mathcal{H}}_{\overline{r}}^{\overline{\alpha}}\). More explicitly, the natural isomorphism (3.14) maps an arbitrary state of (3.10) in the following way: \[\left|\widetilde{\psi}\right\rangle=\sum_{\alpha ijk}\widetilde{\psi}_{\alpha ij }\left|\alpha ik\right\rangle_{r}\left|\overline{\alpha}jk\right\rangle_{ \overline{r}}\quad\rightarrow\quad\sum_{\alpha ij}\sqrt{d_{\alpha}} \widetilde{\psi}_{\alpha ij}\left|\alpha i\right\rangle_{r}\left|\overline{ \alpha}j\right\rangle_{\overline{r}}. \tag{3.15}\] The \(\sqrt{d_{\alpha}}\) is a crucial factor which ensures that the isomorphism preserves the inner product. Given this decomposition, our next goal will be to define an algebra of gauge-invariant operators on \(r\), which we will call \(\widetilde{\mathcal{A}}_{r}\). Given the lack of factorization of \(\widetilde{\mathcal{H}}\) as indicated by (3.14), we cannot easily write \(\widetilde{\mathcal{A}}_{r}\) as \(\mathcal{B}(\widetilde{\mathcal{H}}_{r})\) for some putative Hilbert space \(\widetilde{\mathcal{H}}_{r}\). Rather, we will use the known algebra of operators on \(\mathcal{H}_{r}\) and \(\hat{\mathcal{H}}_{r}\) to define \(\widetilde{\mathcal{A}}_{r}\). 
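The only group-theoretic input in the proof of Lemma 1 is the Schur orthogonality relation (3.12). As a quick numerical sanity check (illustrative only), the sketch below verifies it for the two-dimensional irrep of \(S_{3}\); this irrep is real, so the conjugate representation appearing in (3.12) coincides with the representation itself, and orthogonality against the inequivalent sign representation gives zero.

```python
import numpy as np
from itertools import product

# Schur orthogonality check for the 2-dimensional irrep of S_3 (a real irrep,
# so the conjugate representation coincides with the representation itself).
c, sn = np.cos(2 * np.pi / 3), np.sin(2 * np.pi / 3)
r = np.array([[c, -sn], [sn, c]])        # rotation by 120 degrees
s = np.array([[1.0, 0.0], [0.0, -1.0]])  # a reflection

group = [np.linalg.matrix_power(r, k) @ m for k in range(3) for m in (np.eye(2), s)]
d_alpha = 2

# (1/|G|) sum_g D_{k'k}(g) D_{l'l}(g)^*  should equal  delta_{k'l'} delta_{kl} / d_alpha.
for kp, k, lp, l in product(range(2), repeat=4):
    avg = np.mean([D[kp, k] * np.conj(D[lp, l]) for D in group])
    expected = (kp == lp) * (k == l) / d_alpha
    assert np.isclose(avg, expected)

# Orthogonality between inequivalent irreps: 2d irrep vs. the 1d sign representation.
sign = [np.linalg.det(D) for D in group]
for kp, k in product(range(2), repeat=2):
    assert np.isclose(np.mean([D[kp, k] * eps for D, eps in zip(group, sign)]), 0.0)
print("Schur orthogonality verified for S_3")
```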
### The gauge-invariant subregion algebra It is tempting to define the algebra of gauge-invariant operators in a subregion \(r\) via restriction of the pre-gauged algebra in that region \[\widetilde{\mathcal{A}}_{r}=\Pi_{\text{GI}}\mathcal{A}_{r}\Pi_{\text{GI}}, \tag{3.16}\] similar to (2.19). There is a second possible description of the gauge-invariant algebra, which is that \(\widetilde{\mathcal{A}}_{r}\) consists of the set of operators \(\{\widetilde{\mathcal{O}}_{r}=\mathcal{O}_{r}\Pi_{\text{GI}}\}\) for all operators \(\mathcal{O}_{r}\in\mathcal{A}_{r}\) which commute with the gauge-invariant projector: \([\mathcal{O}_{r},\Pi_{\text{GI}}]=0\). We will call this algebra \(\widetilde{\mathcal{A}}_{r}^{(1)}\), and the algebra (3.16) defined by conjugation by the gauge-invariant projector \(\widetilde{\mathcal{A}}_{r}^{(2)}\). At first blush it is only obvious that \(\widetilde{\mathcal{A}}_{r}^{(1)}\) is a subset of \(\widetilde{\mathcal{A}}_{r}^{(2)}\), as \[\mathcal{O}_{r}\Pi_{\text{GI}}=\mathcal{O}_{r}\Pi_{\text{GI}}^{2}=\Pi_{\text{ GI}}\mathcal{O}_{r}\Pi_{\text{GI}}\Rightarrow\widetilde{\mathcal{A}}_{r}^{(1)} \subseteq\widetilde{\mathcal{A}}_{r}^{(2)}, \tag{3.17}\] but it is not obvious the two definitions are equivalent. Here we aim to show that. **Lemma 2**.: \(\widetilde{\mathcal{A}}_{r}^{(1)}=\widetilde{\mathcal{A}}_{r}^{(2)}\)_._ Proof.: We again use the group action on the cut \(A(g)=A_{r}(g)A_{\overline{r}}(g)\) and the gauge-invariant projector on the cut \(\Pi_{V_{c}}\) which integrates over the group action: \[\Pi_{V_{c}}=\int dgA(g). \tag{3.18}\] We define an element of \(\widehat{\mathcal{A}}_{r}^{(2)}\) by acting on an arbitrary pre-gauged operator \(\mathcal{O}_{r}\in\mathcal{A}_{r}\) via \[\Pi_{\rm GI}\mathcal{O}_{r}\Pi_{\rm GI} =\Pi_{V_{\overline{r}}}\left(\Pi_{V_{c}}\Pi_{V_{r}}\mathcal{O}_{r} \Pi_{V_{r}}\Pi_{V_{c}}\right)\Pi_{V_{\overline{r}}}\] \[=\left(\Pi_{V_{c}}\Pi_{V_{r}}\mathcal{O}_{r}\Pi_{V_{r}}\Pi_{V_{c} }\right)\Pi_{V_{\overline{r}}}\] \[\equiv\left(\Pi_{V_{c}}\hat{\mathcal{O}}_{r}\Pi_{V_{c}}\right)\Pi _{V_{\overline{r}}} \tag{3.19}\] where \(\hat{\mathcal{O}}_{r}\equiv\Pi_{V_{r}}\mathcal{O}_{r}\Pi_{V_{r}}\in\hat{ \mathcal{A}}_{r}\) is an operator on the partially gauged Hilbert space \(\hat{\mathcal{H}}_{r}\). Conjugation via the gauge-invariant projector on the cut yields \[\Pi_{V_{c}}\hat{\mathcal{O}}_{r}\Pi_{V_{c}}=\int dgdg^{\prime}A(g)\hat{ \mathcal{O}}_{r}A(g^{\prime}). \tag{3.20}\] Using the right-invariance of the Haar measure, we can shift \(g\to g(g^{\prime})^{-1}\) to obtain \[\Pi_{V_{c}}\hat{\mathcal{O}}_{r}\Pi_{V_{c}} =\int dgA(g)\int dg^{\prime}A((g^{\prime})^{-1})\hat{\mathcal{O}} _{r}A(g^{\prime}) \tag{3.21}\] \[=\Pi_{V_{c}}\int dg^{\prime}A((g^{\prime})^{-1})\hat{\mathcal{O}} _{r}A(g^{\prime})\equiv\Pi_{V_{c}}\hat{\mathcal{O}}_{r}^{\prime}, \tag{3.22}\] where \(\hat{\mathcal{O}}_{r}^{\prime}\) is defined by the integral over \(g^{\prime}\). We could equivalently send \(g^{\prime}\to g^{-1}g^{\prime}\) to obtain \[\Pi_{V_{c}}\hat{\mathcal{O}}_{r}\Pi_{V_{c}} =\int dgA(g)\hat{\mathcal{O}}_{r}A(g^{-1})\Pi_{V_{c}}\] \[=\int dgA(g^{-1})\hat{\mathcal{O}}_{r}A(g)\Pi_{V_{c}}=\hat{ \mathcal{O}}_{r}^{\prime}\Pi_{V_{c}} \tag{3.23}\] where we use the fact that the Haar measure is invariant under inversion \(dg\to d(g^{-1})\). This shows \(\hat{\mathcal{O}}_{r}^{\prime}\Pi_{V_{c}}=\Pi_{V_{c}}\hat{\mathcal{O}}_{r}^{\prime}\), so \(\hat{\mathcal{O}}_{r}^{\prime}\) commutes with the gauge-invariant projector on the cut. 
By construction, \(\hat{\mathcal{O}}_{r}^{\prime}\) also commutes with \(\Pi_{V_{r}}\) and \(\Pi_{V_{\overline{r}}}\), so it commutes with \(\Pi_{\rm GI}\). Now we show that \(\hat{\mathcal{O}}_{r}^{\prime}\) is an element of \(\mathcal{A}_{r}\), which is not obvious as \(A(g)\) on the cut acts on both \(r\) and \(\overline{r}\). However, we can write \[\hat{\mathcal{O}}_{r}^{\prime} =\int dgA(g^{-1})\hat{\mathcal{O}}_{r}A(g)=\int dgA_{r}(g^{-1})A_{\overline{r}}(g^{-1})\hat{\mathcal{O}}_{r}A_{\overline{r}}(g)A_{r}(g)\] \[=\int dgA_{r}(g^{-1})\hat{\mathcal{O}}_{r}A_{r}(g), \tag{3.24}\] as \(\hat{\mathcal{O}}_{r}\) commutes with operators in \(\overline{r}\). Thus \(\hat{\mathcal{O}}_{r}^{\prime}\) is in \(\mathcal{A}_{r}\). Combining the above, we can write any element of \(\widetilde{\mathcal{A}}_{r}^{(2)}\) as \[\Pi_{\rm GI}\mathcal{O}_{r}\Pi_{\rm GI}=\hat{\mathcal{O}}_{r}^{\prime}\Pi_{V_{c}}\Pi_{V_{\overline{r}}}=\hat{\mathcal{O}}_{r}^{\prime}\Pi_{V_{r}}\Pi_{V_{c}}\Pi_{V_{\overline{r}}}=\hat{\mathcal{O}}_{r}^{\prime}\Pi_{\rm GI}, \tag{3.25}\] which belongs to \(\widetilde{\mathcal{A}}_{r}^{(1)}\) as \(\hat{\mathcal{O}}_{r}^{\prime}\) is an operator in \(\mathcal{A}_{r}\) that commutes with \(\Pi_{\rm GI}\). Therefore, \(\widetilde{\mathcal{A}}_{r}^{(2)}\subseteq\widetilde{\mathcal{A}}_{r}^{(1)}\). Moreover, as we argued earlier, we have \(\widetilde{\mathcal{A}}_{r}^{(1)}\subseteq\widetilde{\mathcal{A}}_{r}^{(2)}\). Thus, we have shown \(\widetilde{\mathcal{A}}_{r}^{(1)}=\widetilde{\mathcal{A}}_{r}^{(2)}\). _Remark 2_.: It will be more convenient to use \(\widetilde{\mathcal{A}}_{r}^{(1)}\) as our definition of \(\widetilde{\mathcal{A}}_{r}\) in later discussions. We now rewrite it by introducing the following notation. For the rest of the paper, we will denote the subset of operators in an algebra that commute with the gauge-invariant projector with a superscript \(\Pi\); for example, the algebra \(\mathcal{A}^{\Pi}\) is defined by \[\mathcal{A}^{\Pi}\equiv\{\mathcal{O}\in\mathcal{A}:[\mathcal{O},\Pi_{\rm GI}]=0\}. \tag{3.26}\] It is clear that this subset is itself a von Neumann algebra, as it contains the identity, which necessarily commutes with any projector, and is closed under addition, multiplication, and involution10. Similarly, we define the subalgebra \(\mathcal{A}_{r}^{\Pi}\) as Footnote 10: Closure of \(\mathcal{A}^{\Pi}\) under addition and multiplication is obvious, and closure under involution follows from the projector being Hermitian. \[\mathcal{A}_{r}^{\Pi}=\{\mathcal{O}_{r}\in\mathcal{A}_{r}:[\mathcal{O}_{r},\Pi_{\rm GI}]=0\}, \tag{3.27}\] and define \(\hat{\mathcal{A}}_{r}^{\Pi}\) as \[\hat{\mathcal{A}}_{r}^{\Pi}=\{\hat{\mathcal{O}}_{r}\in\hat{\mathcal{A}}_{r}:[\hat{\mathcal{O}}_{r},\Pi_{\rm GI}]=0\}. \tag{3.28}\] So far, we have shown \[\widetilde{\mathcal{A}}_{r}=\mathcal{A}_{r}^{\Pi}\Pi_{\rm GI}. \tag{3.29}\] **Lemma 3**.: \(\mathcal{A}_{r}^{\Pi}\Pi_{\rm GI}=\hat{\mathcal{A}}_{r}^{\Pi}\Pi_{\rm GI}\)_._ Proof.: It is clear that \(\hat{\mathcal{A}}_{r}^{\Pi}\Pi_{\rm GI}\subseteq\mathcal{A}_{r}^{\Pi}\Pi_{\rm GI}\), so we only need to show the opposite inclusion. Consider any operator \(\mathcal{O}_{r}\in\mathcal{A}_{r}^{\Pi}\). 
As this operator commutes with \(\Pi_{\rm GI}\), we can use the decomposition of the gauge-invariant projector to write \(\mathcal{O}_{r}\Pi_{\rm GI}\) as \[\mathcal{O}_{r}\Pi_{\rm GI}=\Pi_{\rm GI}\mathcal{O}_{r}\Pi_{\rm GI}=\Pi_{\rm GI }\left(\Pi_{V_{r}}\mathcal{O}_{r}\Pi_{V_{r}}\right)\Pi_{\rm GI}=\left(\Pi_{V_ {r}}\mathcal{O}_{r}\Pi_{V_{r}}\right)\Pi_{\rm GI}. \tag{3.30}\] Note that \(\Pi_{V_{r}}\mathcal{O}_{r}\Pi_{V_{r}}\) is an operator on \(\hat{\mathcal{H}}_{r}\) that commutes with \(\Pi_{\rm GI}\), so it belongs to \(\hat{\mathcal{A}}_{r}^{\Pi}\). Thus, every element of \(\mathcal{A}_{r}^{\Pi}\Pi_{\rm GI}\) is an element of \(\hat{\mathcal{A}}_{r}^{\Pi}\Pi_{\rm GI}\). This shows the inclusion \(\mathcal{A}_{r}^{\Pi}\Pi_{\rm GI}\subseteq\hat{\mathcal{A}}_{r}^{\Pi}\Pi_{ \rm GI}\), from which we conclude \(\mathcal{A}_{r}^{\Pi}\Pi_{\rm GI}=\hat{\mathcal{A}}_{r}^{\Pi}\Pi_{\rm GI}\). **Corollary 4**.: \(\widetilde{\mathcal{A}}_{r}=\hat{\mathcal{A}}_{r}^{\Pi}\Pi_{\rm GI}\)_._ Using the corollary above, we will now construct a generic operator in \(\widetilde{\mathcal{A}}_{r}\). **Lemma 5**.: \(\widetilde{\mathcal{A}}_{r}\) can be written in the following two forms: \[\widetilde{\mathcal{A}}_{r} =\left\{\hat{\mathcal{O}}_{r}\Pi_{\text{GI}}:\hat{\mathcal{O}}_{r} =\sum_{\alpha ijk}\hat{\mathcal{O}}_{\alpha ij}\left|\alpha ik\right\rangle_{r }\left\langle\alpha jk\right|\otimes\mathbb{1}_{\overline{r}},\,\hat{\mathcal{ O}}_{\alpha ij}\in\mathbb{C}\right\} \tag{3.31}\] \[=\left\{\widetilde{\mathcal{O}}_{r}=\sum_{\alpha ii^{\prime}jk \ell}\widetilde{\mathcal{O}}_{\alpha ij}\left|\alpha ik\right\rangle_{r} \left\langle\alpha j\ell\right|\otimes\left|\overline{\alpha}i^{\prime}k \right\rangle_{\overline{r}}\left\langle\overline{\alpha}i^{\prime}\ell \right|:\widetilde{\mathcal{O}}_{\alpha ij}\in\mathbb{C}\right\}, \tag{3.32}\] with \(\widetilde{\mathcal{O}}_{r}\) in (3.32) identified with \(\hat{\mathcal{O}}_{r}\Pi_{\text{GI}}\) in (3.31) under \(\widetilde{\mathcal{O}}_{\alpha ij}=\hat{\mathcal{O}}_{\alpha ij}/d_{\alpha}\).11 Footnote 11: In a slight abuse of notation, we have referred to the matrix elements of an operator with the same symbol as the operator itself, but with irrep labels and indices such as \(\alpha\), \(i\), and \(j\). We could have referred to \(\hat{\mathcal{O}}_{\alpha ij}\) as \(\hat{\mathcal{O}}_{r,\alpha ij}\) in (3.31), but for simplicity, we will use the former. Proof.: We show this by noting \(\widetilde{\mathcal{A}}_{r}=\hat{\mathcal{A}}_{r}^{\Pi}\Pi_{\text{GI}}\) and constructing a generic operator therein. Recall that \(\left\{\left|\alpha ik\right\rangle_{r}\right\}\) is a basis for \(\hat{\mathcal{H}}_{r}\), so an operator \(\hat{\mathcal{O}}_{r}\in\hat{\mathcal{A}}_{r}\) (not necessarily gauge-invariant) can be written as \[\hat{\mathcal{O}}_{r}=\sum_{\alpha\beta ijk\ell}\hat{\mathcal{O}}_{\alpha \beta ijk\ell}\left|\alpha ik\right\rangle_{r}\left\langle\beta j\ell\right| \otimes\mathbb{1}_{\overline{r}} \tag{3.33}\] with some \(\hat{\mathcal{O}}_{\alpha\beta ijk\ell}\in\mathbb{C}\). Now we require \(\hat{\mathcal{O}}_{r}\in\hat{\mathcal{A}}_{r}^{\Pi}\), so we will try to impose \(\hat{\mathcal{O}}_{r}\Pi_{\text{GI}}=\Pi_{\text{GI}}\hat{\mathcal{O}}_{r}\). 
We find \[\hat{\mathcal{O}}_{r}\Pi_{\text{GI}} =\hat{\mathcal{O}}_{r}\Pi_{V_{r}}\Pi_{V_{\overline{r}}}\Pi_{V_{c}}\] \[=\left(\sum_{\alpha\beta ijk\ell}\hat{\mathcal{O}}_{\alpha\beta ijk\ell}\left|\alpha ik\right\rangle_{r}\left\langle\beta j\ell\right|\otimes\mathbb{1}_{\overline{r}}\right)\Pi_{V_{\overline{r}}}\Pi_{V_{c}}\] \[=\left(\sum_{\alpha\beta\gamma ijk\ell i^{\prime}k^{\prime}}\hat{\mathcal{O}}_{\alpha\beta ijk\ell}\left|\alpha ik\right\rangle_{r}\left\langle\beta j\ell\right|\otimes\left|\overline{\gamma}i^{\prime}k^{\prime}\right\rangle_{\overline{r}}\left\langle\overline{\gamma}i^{\prime}k^{\prime}\right|\right)\Pi_{V_{c}}\] \[=\sum_{\alpha\beta\gamma ijk\ell i^{\prime}k^{\prime}\ell^{\prime}}\hat{\mathcal{O}}_{\alpha\beta ijk\ell}\frac{1}{d_{\beta}}\delta_{\beta\gamma}\delta_{\ell k^{\prime}}\left|\alpha ik\right\rangle_{r}\left\langle\beta j\ell^{\prime}\right|\otimes\left|\overline{\gamma}i^{\prime}k^{\prime}\right\rangle_{\overline{r}}\left\langle\overline{\gamma}i^{\prime}\ell^{\prime}\right|\] \[=\sum_{\alpha\beta ijk\ell i^{\prime}\ell^{\prime}}\hat{\mathcal{O}}_{\alpha\beta ijk\ell}\frac{1}{d_{\beta}}\left|\alpha ik\right\rangle_{r}\left\langle\beta j\ell^{\prime}\right|\otimes\left|\overline{\beta}i^{\prime}\ell\right\rangle_{\overline{r}}\left\langle\overline{\beta}i^{\prime}\ell^{\prime}\right|, \tag{3.34}\] where we have used the basis of \(\hat{\mathcal{H}}_{\overline{r}}\) in going to the third line and used (3.13) in going to the fourth line. We can apply the same procedure to write \(\Pi_{\rm GI}\hat{\mathcal{O}}_{r}\) as \[\Pi_{\rm GI}\hat{\mathcal{O}}_{r} =\Pi_{V_{c}}\sum_{\alpha\beta\gamma ijk\ell i^{\prime}k^{\prime}}\hat{\mathcal{O}}_{\alpha\beta ijk\ell}\ket{\alpha ik}_{r}\bra{\beta j\ell}\otimes\ket{\overline{\gamma}i^{\prime}k^{\prime}}_{\overline{r}}\bra{\overline{\gamma}i^{\prime}k^{\prime}}\] \[=\sum_{\alpha\beta\gamma ijk\ell i^{\prime}k^{\prime}\ell^{\prime}}\hat{\mathcal{O}}_{\alpha\beta ijk\ell}\frac{1}{d_{\alpha}}\delta_{\alpha\gamma}\delta_{kk^{\prime}}\ket{\alpha i\ell^{\prime}}_{r}\bra{\beta j\ell}\otimes\ket{\overline{\gamma}i^{\prime}\ell^{\prime}}_{\overline{r}}\bra{\overline{\gamma}i^{\prime}k^{\prime}}\] \[=\sum_{\alpha\beta ijk\ell i^{\prime}\ell^{\prime}}\hat{\mathcal{O}}_{\alpha\beta ijk\ell}\frac{1}{d_{\alpha}}\ket{\alpha i\ell^{\prime}}_{r}\bra{\beta j\ell}\otimes\ket{\overline{\alpha}i^{\prime}\ell^{\prime}}_{\overline{r}}\bra{\overline{\alpha}i^{\prime}k}. \tag{3.35}\] One way to proceed is to find conditions on \(\hat{\mathcal{O}}_{\alpha\beta ijk\ell}\) such that the two expressions (3.34), (3.35) are equal, but doing this explicitly turns out to be slightly complicated (in cases where the multiplicities \(\overline{n}_{\overline{\alpha}}\), \(\overline{n}_{\overline{\beta}}\) vanish but \(n_{\alpha}\), \(n_{\beta}\) do not). Instead, we will use the equality of (3.34) and (3.35) to directly show that \(\hat{\mathcal{A}}_{r}^{\Pi}\Pi_{\rm GI}\) contains and is contained in the right-hand side of (3.31), which we now define as \[\widetilde{\mathcal{A}}_{r}^{(3)}\equiv\left\{\hat{\mathcal{O}}_{r}\Pi_{\rm GI}:\hat{\mathcal{O}}_{r}=\sum_{\alpha ijk}\hat{\mathcal{O}}_{\alpha ij}\ket{\alpha ik}_{r}\bra{\alpha jk}\otimes\mathbb{1}_{\overline{r}},\,\hat{\mathcal{O}}_{\alpha ij}\in\mathbb{C}\right\}. \tag{3.36}\] First, we show that \(\widetilde{\mathcal{A}}_{r}^{(3)}\) defined by (3.36) is equal to (3.32) as claimed. 
To see this, we simply apply (3.34) to the special case of \(\hat{\mathcal{O}}_{r}=\sum_{\alpha ijk}\hat{\mathcal{O}}_{\alpha ij}\ket{\alpha ik}_{r}\bra{\alpha jk}\otimes\mathbb{1}_{\overline{r}}\) and find \(\hat{\mathcal{O}}_{r}\Pi_{V_{c}}\) to be identical to \(\widetilde{\mathcal{O}}_{r}\) in (3.32) under \(\widetilde{\mathcal{O}}_{\alpha ij}=\hat{\mathcal{O}}_{\alpha ij}/d_{\alpha}\). Moreover, applying (3.35) to this case yields the same operator, so we find that this special \(\hat{\mathcal{O}}_{r}\) commutes with \(\Pi_{\rm GI}\). Thus, \(\widetilde{\mathcal{A}}_{r}^{(3)}\) is contained in \(\hat{\mathcal{A}}_{r}^{\Pi}\Pi_{\rm GI}\). Finally, we will show that \(\hat{\mathcal{A}}_{r}^{\Pi}\Pi_{\rm GI}\) is contained in \(\widetilde{\mathcal{A}}_{r}^{(3)}\). Any \(\hat{\mathcal{O}}_{r}\Pi_{\rm GI}\in\hat{\mathcal{A}}_{r}^{\Pi}\Pi_{\rm GI}\) can be written explicitly as \[\hat{\mathcal{O}}_{r}\Pi_{\rm GI} =\Pi_{\rm GI}\hat{\mathcal{O}}_{r}\Pi_{\rm GI}=\Pi_{V_{c}}\sum_{\alpha\beta ijk\ell i^{\prime}\ell^{\prime}}\hat{\mathcal{O}}_{\alpha\beta ijk\ell}\frac{1}{d_{\beta}}\ket{\alpha ik}_{r}\bra{\beta j\ell^{\prime}}\otimes\ket{\overline{\beta}i^{\prime}\ell}_{\overline{r}}\bra{\overline{\beta}i^{\prime}\ell^{\prime}}\] \[=\sum_{\alpha\beta ijk\ell i^{\prime}k^{\prime}\ell^{\prime}}\hat{\mathcal{O}}_{\alpha\beta ijk\ell}\frac{1}{d_{\alpha}d_{\beta}}\delta_{\alpha\beta}\delta_{k\ell}\ket{\alpha ik^{\prime}}_{r}\bra{\beta j\ell^{\prime}}\otimes\ket{\overline{\beta}i^{\prime}k^{\prime}}_{\overline{r}}\bra{\overline{\beta}i^{\prime}\ell^{\prime}}\] \[=\sum_{\alpha ijk\ell i^{\prime}k^{\prime}}\hat{\mathcal{O}}_{\alpha\alpha ijk^{\prime}k^{\prime}}\frac{1}{d_{\alpha}^{2}}\ket{\alpha ik}_{r}\bra{\alpha j\ell}\otimes\ket{\overline{\alpha}i^{\prime}k}_{\overline{r}}\bra{\overline{\alpha}i^{\prime}\ell}, \tag{3.37}\] which is identical to \(\widetilde{\mathcal{O}}_{r}\) in (3.32) under \(\widetilde{\mathcal{O}}_{\alpha ij}=\sum_{k^{\prime}}\hat{\mathcal{O}}_{\alpha\alpha ijk^{\prime}k^{\prime}}/d_{\alpha}^{2}\), and thus belongs to \(\widetilde{\mathcal{A}}_{r}^{(3)}\). Combining the above results, we conclude \(\widetilde{\mathcal{A}}_{r}=\widetilde{\mathcal{A}}_{r}^{(3)}\). After all of this machinery, it is clear that one is justified in writing the algebra \(\widetilde{\mathcal{A}}_{r}\) of gauge-invariant operators on a subregion \(r\) as a restriction via \(\Pi_{\rm GI}\) of the pre-gauged algebra \(\mathcal{A}_{r}\) on \(r\). Crucially, however, \(\widetilde{\mathcal{A}}_{r}\) is _not_ a subalgebra of \(\mathcal{A}_{r}\), as is obvious from the nontrivial action of (3.32) on \(\hat{\mathcal{H}}_{\overline{r}}\). This is manifest from the fact that \(\Pi_{\mathrm{GI}}\) is an element of \(\mathcal{A}\), not of \(\mathcal{A}_{r}\), and so the projection takes one out of the pre-gauged subregion algebra \(\mathcal{A}_{r}\).

### The center of the algebra

For spatial subregions the following inclusion is obvious: \[\mathcal{A}_{\overline{r}}\subseteq\left(\mathcal{A}_{r}\right)^{\prime}, \tag{3.38}\] as causally disconnected operators must commute. Here \(\left(\mathcal{A}_{r}\right)^{\prime}\) denotes the commutant of \(\mathcal{A}_{r}\). 
Haag duality is the saturation of the above bound: \[\mathcal{A}_{\overline{r}}=\left(\mathcal{A}_{r}\right)^{\prime}, \tag{3.39}\] that is, the commutant of the algebra of operators in a subregion is equal to the algebra of operators in the complement region.12 Footnote 12: There are counterexamples to Haag duality in quantum field theories with global or gauge symmetries; see for example [25]. In our model, Haag duality certainly holds for the pre-gauged algebras, but does it also hold for the gauge-invariant algebras? We will now show that it does, i.e., \[\widetilde{\mathcal{A}}_{\overline{r}}=\left(\widetilde{\mathcal{A}}_{r}\right)^{\prime}. \tag{3.40}\] **Proposition 6**.: The Hilbert space isomorphism (3.14) induces the following isomorphisms between algebras: \[\widetilde{\mathcal{A}}_{r}\cong\bigoplus_{\alpha}\widetilde{\mathcal{A}}_{r}^{\alpha}\otimes\mathbb{1}_{\overline{r}}^{\overline{\alpha}},\qquad\widetilde{\mathcal{A}}_{\overline{r}}\cong\bigoplus_{\alpha}\mathbb{1}_{r}^{\alpha}\otimes\widetilde{\mathcal{A}}_{\overline{r}}^{\overline{\alpha}}, \tag{3.41}\] where \(\widetilde{\mathcal{A}}_{r}^{\alpha}\equiv\mathcal{B}(\widetilde{\mathcal{H}}_{r}^{\alpha})\), the algebra of bounded operators on \(\widetilde{\mathcal{H}}_{r}^{\alpha}\), and similarly we define \(\widetilde{\mathcal{A}}_{\overline{r}}^{\overline{\alpha}}\equiv\mathcal{B}(\widetilde{\mathcal{H}}_{\overline{r}}^{\overline{\alpha}})\). Moreover, \(\mathbb{1}_{r}^{\alpha}\), \(\mathbb{1}_{\overline{r}}^{\overline{\alpha}}\) denote the identity operators on \(\widetilde{\mathcal{H}}_{r}^{\alpha}\), \(\widetilde{\mathcal{H}}_{\overline{r}}^{\overline{\alpha}}\), respectively. Proof.: Recall from (3.14) that \(\widetilde{\mathcal{H}}\) is isomorphic to a direct sum of factorizing Hilbert spaces: \[\widetilde{\mathcal{H}}\cong\bigoplus_{\alpha}\widetilde{\mathcal{H}}_{r}^{\alpha}\otimes\widetilde{\mathcal{H}}_{\overline{r}}^{\overline{\alpha}}; \tag{3.42}\] where the two sides are identified under the natural isomorphism (3.15), which we reproduce here: \[\left|\widetilde{\psi}\right\rangle=\sum_{\alpha ijk}\widetilde{\psi}_{\alpha ij}\left|\alpha ik\right\rangle_{r}\left|\overline{\alpha}jk\right\rangle_{\overline{r}}\quad\rightarrow\quad\sum_{\alpha ij}\sqrt{d_{\alpha}}\widetilde{\psi}_{\alpha ij}\left|\alpha i\right\rangle_{r}\left|\overline{\alpha}j\right\rangle_{\overline{r}}. \tag{3.43}\] We now apply this isomorphism to our algebra \(\widetilde{\mathcal{A}}_{r}\). Consider a general element of \(\widetilde{\mathcal{A}}_{r}\) defined via (3.32). Under (3.43), this element becomes \[\sum_{\alpha ii^{\prime}jk\ell}\widetilde{\mathcal{O}}_{\alpha ij}\left|\alpha ik\right\rangle_{r}\left\langle\alpha j\ell\right|\otimes\left|\overline{\alpha}i^{\prime}k\right\rangle_{\overline{r}}\left\langle\overline{\alpha}i^{\prime}\ell\right| \rightarrow\sum_{\alpha ii^{\prime}j}d_{\alpha}\widetilde{\mathcal{O}}_{\alpha ij}\left|\alpha i\right\rangle_{r}\left\langle\alpha j\right|\otimes\left|\overline{\alpha}i^{\prime}\right\rangle_{\overline{r}}\left\langle\overline{\alpha}i^{\prime}\right|\] \[=\sum_{\alpha ij}d_{\alpha}\widetilde{\mathcal{O}}_{\alpha ij}\left|\alpha i\right\rangle_{r}\left\langle\alpha j\right|\otimes\mathbb{1}_{\overline{r}}^{\overline{\alpha}}, \tag{3.44}\] which is an element of \(\widetilde{\mathcal{A}}_{r}^{\alpha}\otimes\mathbb{1}_{\overline{r}}^{\overline{\alpha}}\). Thus, we have demonstrated the isomorphism for \(\widetilde{\mathcal{A}}_{r}\) in (3.41). 
The isomorphism for \(\widetilde{\mathcal{A}}_{\overline{r}}\) follows from a similar argument. **Corollary 7**.: \(\widetilde{\mathcal{A}}_{r}\) obeys Haag duality, such that \(\left(\widetilde{\mathcal{A}}_{r}\right)^{\prime}=\widetilde{\mathcal{A}}_{\overline{r}}\), where the commutant is defined with respect to the full gauge-invariant algebra \(\widetilde{\mathcal{A}}\). Proof.: This immediately follows from the algebra isomorphisms (3.41) and \[\left(\bigoplus_{\alpha}\widetilde{\mathcal{A}}_{r}^{\alpha}\otimes\mathbb{1}_{\overline{r}}^{\overline{\alpha}}\right)^{\prime}=\bigoplus_{\alpha}\left(\widetilde{\mathcal{A}}_{r}^{\alpha}\otimes\mathbb{1}_{\overline{r}}^{\overline{\alpha}}\right)^{\prime}=\bigoplus_{\alpha}\mathbb{1}_{r}^{\alpha}\otimes\widetilde{\mathcal{A}}_{\overline{r}}^{\overline{\alpha}}. \tag{3.45}\] The center of an algebra is defined to be the intersection of the algebra with its commutant. As our gauge-invariant subalgebra \(\widetilde{\mathcal{A}}_{r}\) obeys Haag duality, the center is \[\widetilde{\mathcal{Z}}_{r}=\widetilde{\mathcal{A}}_{r}\cap\widetilde{\mathcal{A}}_{r}^{\prime}=\widetilde{\mathcal{A}}_{r}\cap\widetilde{\mathcal{A}}_{\overline{r}}. \tag{3.46}\] **Lemma 8**.: The center \(\widetilde{\mathcal{Z}}_{r}\) is \[\widetilde{\mathcal{Z}}_{r}=\left\{\sum_{\alpha}z_{\alpha}\widetilde{P}^{\alpha}:z_{\alpha}\in\mathbb{C}\right\}, \tag{3.47}\] where \(\widetilde{P}^{\alpha}\) are mutually orthogonal projections defined via \[\widetilde{P}^{\alpha}=\frac{1}{d_{\alpha}}\sum_{ijk\ell}\left|\alpha ik\right\rangle_{r}\left\langle\alpha i\ell\right|\otimes\left|\overline{\alpha}jk\right\rangle_{\overline{r}}\left\langle\overline{\alpha}j\ell\right|. \tag{3.48}\] Proof.: Under the algebra isomorphisms (3.41) for \(\widetilde{\mathcal{A}}_{r}\) and \(\widetilde{\mathcal{A}}_{\overline{r}}\), we can immediately identify the center as \[\widetilde{\mathcal{Z}}_{r}\cong\bigoplus_{\alpha}\mathbb{C}\left(\mathbb{1}_{r}^{\alpha}\otimes\mathbb{1}_{\overline{r}}^{\overline{\alpha}}\right). \tag{3.49}\] That is, the center \(\widetilde{\mathcal{Z}}_{r}\) is the direct sum of complex multiples of the identity within each superselection sector \(\alpha\). We can write the identity in a superselection sector as \[\mathbb{1}_{r}^{\alpha}\otimes\mathbb{1}_{\overline{r}}^{\overline{\alpha}}=\sum_{ij}\left|\alpha i\right\rangle_{r}\left\langle\alpha i\right|\otimes\left|\overline{\alpha}j\right\rangle_{\overline{r}}\left\langle\overline{\alpha}j\right|, \tag{3.50}\] and examine the pullback of these operators under the natural isomorphism (3.43) to find the corresponding operators in \(\widetilde{\mathcal{A}}\). We obtain 
These operators can alternatively be written as \[\widetilde{P}^{\alpha}=\left(\hat{P}_{r}^{\alpha}\otimes\mathbb{1}_{\overline {r}}\right)\Pi_{\text{GI}}=\left(\mathbb{1}_{r}\otimes\hat{P}_{\overline{r}}^ {\overline{\alpha}}\right)\Pi_{\text{GI}}, \tag{3.52}\] where \(\hat{P}_{r}^{\alpha}\) and \(\hat{P}_{\overline{r}}^{\overline{\alpha}}\) are orthogonal projections in \(\hat{\mathcal{H}}_{r}\) and \(\hat{\mathcal{H}}_{\overline{r}}\), respectively: \[\hat{P}_{r}^{\alpha}=\sum_{ik}\left|\alpha ik\right\rangle_{r}\left\langle \alpha ik\right|,\quad\hat{P}_{\overline{r}}^{\overline{\alpha}}=\sum_{ik} \left|\overline{\alpha}ik\right\rangle_{\overline{r}}\left\langle\overline{ \alpha}ik\right|. \tag{3.53}\] One can show the \(\widetilde{P}^{\alpha}\) are orthogonal and idempotent such that \(\widetilde{P}^{\alpha}\widetilde{P}^{\beta}=\delta_{\alpha\beta}\widetilde{P} ^{\alpha}\). ### Traces in \(\mathcal{A}_{r}\) and \(\widetilde{\mathcal{A}}_{r}\) We now define traces in our von Neumann algebras. When an algebra is \(\mathcal{B}(\mathcal{H})\) for some Hilbert space \(\mathcal{H}\), we can simply identify the minimal projections as projections onto a pure state in \(\mathcal{H}\), and the trace is the usual trace of a square matrix. Our algebras are not always of this form; an example is \(\widetilde{\mathcal{A}}_{r}\). Therefore, we will first identify the minimal projections, which are then used to define a normalized trace on the algebra. In particular, for \(\widetilde{\mathcal{A}}_{r}\) our task is to find the minimal projections \(\widetilde{P}_{r}\) in \(\widetilde{\mathcal{A}}_{r}\) and use them to define a "rescaled" trace \(\widetilde{\text{Tr}}_{r}\) which satisfies \[\widetilde{\text{Tr}}_{r}\widetilde{P}_{r}=1. \tag{3.54}\] Let us first discuss the case that we understand well: that of \(\mathcal{A}_{r}\), and by extension \(\hat{\mathcal{A}}_{r}\). As \(\mathcal{A}_{r}=\mathcal{B}(\mathcal{H}_{r})\), the minimal projections are projections onto a pure state in \(\mathcal{H}_{r}\), and we define the trace \(\text{Tr}_{r}\) in \(\mathcal{A}_{r}\) such that the minimal projections have trace \(1\). As \(\hat{\mathcal{A}}_{r}=\mathcal{B}(\hat{\mathcal{H}}_{r})\), we proceed similarly. Recall that the basis states of \(\hat{\mathcal{H}}_{r}\) are \(\left\{\left|\alpha ik\right\rangle_{r}\right\}\), and so we define the trace \(\hat{\text{Tr}}_{r}\) in \(\hat{\mathcal{A}}_{r}\) via \[\hat{\text{Tr}}_{r}\left|\alpha ik\right\rangle_{r}\left\langle\alpha ik \right|=1. \tag{3.55}\] As the minimal projections in \(\hat{\mathcal{A}}_{r}\) are also minimal projections in \(\mathcal{A}_{r}\), the two traces agree (on \(\hat{\mathcal{A}}_{r}\)): \[\text{Tr}_{r}=\hat{\text{Tr}}_{r}, \tag{3.56}\] so we will use only \(\mathrm{Tr}_{r}\) (not \(\hat{\mathrm{Tr}}_{r}\)) moving forward. Now consider \(\widetilde{\mathcal{A}}_{r}\). Although \(\widetilde{\mathcal{A}}_{r}\) is not the algebra of all bounded operators on a Hilbert space, the algebra isomorphism (3.41) shows that we can write it as a direct sum of algebras for which we can easily identify minimal projections. In particular, the pullback of minimal projections onto pure states \(\ket{\alpha i}_{r}\in\widetilde{\mathcal{H}}_{r}^{\alpha}\) under the natural isomorphism (3.43) gives minimal projections in \(\widetilde{\mathcal{A}}_{r}\). 
Thus, we write these minimal projections \(\widetilde{P}_{r}^{\alpha i}\in\widetilde{\mathcal{A}}_{r}\) as \[\widetilde{P}_{r}^{\alpha i}=\frac{1}{d_{\alpha}}\sum_{jk\ell}\ket{\alpha ik}_{r}\bra{\alpha i\ell}\otimes\ket{\overline{\alpha}jk}_{\overline{r}}\bra{\overline{\alpha}j\ell} \tag{3.57}\] for all non-empty sectors \(\alpha\), defined as those with nonzero \(n_{\alpha}\), \(\overline{n}_{\overline{\alpha}}\). If \(n_{\alpha}\) vanishes, the index \(i\) above has an empty range, and if \(\overline{n}_{\overline{\alpha}}\) vanishes, \(\widetilde{P}_{r}^{\alpha i}\) vanishes due to the empty sum over \(j\) in (3.57). We can alternatively write \(\widetilde{P}_{r}^{\alpha i}\) as \[\widetilde{P}_{r}^{\alpha i}=\hat{P}_{r}^{\alpha i}\Pi_{\mathrm{GI}}, \tag{3.58}\] where the projections \(\hat{P}_{r}^{\alpha i}\) are defined similarly to (3.53): \[\hat{P}_{r}^{\alpha i}\equiv\sum_{k}\ket{\alpha ik}_{r}\bra{\alpha ik}\otimes 1_{\overline{r}}. \tag{3.59}\] Although we already argued that \(\widetilde{P}_{r}^{\alpha i}\) are minimal projections using the natural isomorphism, we now show it more directly. **Lemma 9**.: The projections \(\widetilde{P}_{r}^{\alpha i}\) (for non-empty sectors \(\alpha\)) are minimal projections in \(\widetilde{\mathcal{A}}_{r}\). Proof.: We recall that minimal projections are nonzero and have the property that any subprojection \(\widetilde{Q}_{r}\) of \(\widetilde{P}_{r}^{\alpha i}\) is either zero or \(\widetilde{P}_{r}^{\alpha i}\). As an element of \(\widetilde{\mathcal{A}}_{r}\), \(\widetilde{Q}_{r}\) must be of the form \[\widetilde{Q}_{r}=\sum_{\beta jj^{\prime}k}\hat{Q}_{\beta jj^{\prime}}\left(\ket{\beta jk}_{r}\bra{\beta j^{\prime}k}\otimes 1_{\overline{r}}\right)\Pi_{\mathrm{GI}} \tag{3.60}\] with complex coefficients \(\hat{Q}_{\beta jj^{\prime}}\). The subprojection \(\widetilde{Q}_{r}\) is left fixed under conjugation via \(\widetilde{P}_{r}^{\alpha i}\), so we have \[\widetilde{Q}_{r}=\widetilde{P}_{r}^{\alpha i}\widetilde{Q}_{r}\widetilde{P}_{r}^{\alpha i}=\sum_{k}\hat{Q}_{\alpha ii}\left(\ket{\alpha ik}_{r}\bra{\alpha ik}\otimes 1_{\overline{r}}\right)\Pi_{\mathrm{GI}}=\hat{Q}_{\alpha ii}\widetilde{P}_{r}^{\alpha i}. \tag{3.61}\] Additionally imposing \(\widetilde{Q}_{r}^{2}=\widetilde{Q}_{r}\), we find \[\widetilde{Q}_{r}^{2}=\hat{Q}_{\alpha ii}^{2}\widetilde{P}_{r}^{\alpha i}=\hat{Q}_{\alpha ii}\widetilde{P}_{r}^{\alpha i}. \tag{3.62}\] Unless \(\widetilde{Q}_{r}\) is zero, we obtain \(\hat{Q}_{\alpha ii}=1\) and thus \(\widetilde{Q}_{r}=\widetilde{P}_{r}^{\alpha i}\). So \(\widetilde{P}_{r}^{\alpha i}\) (for a non-empty sector \(\alpha\)) is indeed a minimal projection. Therefore, we define the trace \(\widetilde{\mathrm{Tr}}_{r}\) in \(\widetilde{\mathcal{A}}_{r}\) by imposing \[\widetilde{\mathrm{Tr}}_{r}\widetilde{P}_{r}^{\alpha i}=1 \tag{3.63}\] for every non-empty sector \(\alpha\) and every \(i=1,\cdots,n_{\alpha}\). How do we understand this trace acting on a general operator in \(\widetilde{\mathcal{A}}_{r}\)? Such an operator can always be written in the form (3.31): \[\widetilde{\mathcal{O}}_{r}=\hat{\mathcal{O}}_{r}\Pi_{\mathrm{GI}},\quad\hat{\mathcal{O}}_{r}=\sum_{\alpha ijk}\hat{\mathcal{O}}_{\alpha ij}\left|\alpha ik\right\rangle_{r}\left\langle\alpha jk\right|\otimes 1_{\overline{r}}. 
\tag{3.64}\] Taking the trace using \(\widetilde{\mathrm{Tr}}_{r}\), we find \[\widetilde{\mathrm{Tr}}_{r}\widetilde{\mathcal{O}}_{r}=\widetilde{\mathrm{Tr }}_{r}\sum_{\alpha i}\hat{\mathcal{O}}_{\alpha ii}\widetilde{P}_{r}^{\alpha i }=\sum_{\alpha i}\hat{\mathcal{O}}_{\alpha ii}. \tag{3.65}\] If we were to take the trace \(\mathrm{Tr}_{r}\) of the corresponding \(\hat{\mathcal{O}}_{r}\), we would instead find \[\mathrm{Tr}_{r}\,\hat{\mathcal{O}}_{r}=\mathrm{Tr}\sum_{\alpha i}\hat{ \mathcal{O}}_{\alpha ii}\hat{P}_{r}^{\alpha i}=\sum_{\alpha i}d_{\alpha}\hat{ \mathcal{O}}_{\alpha ii}. \tag{3.66}\] Thus, it is tempting to relate the trace \(\widetilde{\mathrm{Tr}}_{r}\) to \(\mathrm{Tr}_{r}\) using an appropriate rescaling by \(1/d_{\alpha}\) in each sector. A more precise version of this statement is the following: for any operator \(\widetilde{\mathcal{O}}_{r}^{\alpha}\in\widetilde{\mathcal{A}}_{r}\) that acts only in the \(\alpha\) sector such that it can be written as \[\widetilde{\mathcal{O}}_{r}^{\alpha}=\hat{\mathcal{O}}_{r}^{\alpha}\Pi_{ \mathrm{GI}},\quad\hat{\mathcal{O}}_{r}^{\alpha}=\sum_{ijk}\hat{\mathcal{O}} _{\alpha ij}\left|\alpha ik\right\rangle_{r}\left\langle\alpha jk\right| \otimes 1_{\overline{r}}, \tag{3.67}\] i.e., with no sum over \(\alpha\), the two traces are related by \[\widetilde{\mathrm{Tr}}_{r}\widetilde{\mathcal{O}}_{r}^{\alpha}=\frac{1}{d_{ \alpha}}\,\mathrm{Tr}_{r}\,\hat{\mathcal{O}}_{r}^{\alpha}. \tag{3.68}\] Summing both sides over \(\alpha\) recovers (3.65) and (3.66). ### Reduced states Our ultimate goal is to relate the von Neumann entropies for the same gauge-invariant state \(\widetilde{\rho}\) in \(\widetilde{\mathcal{H}}\) on two different subalgebras: \(\mathcal{A}_{r}\) and \(\widetilde{\mathcal{A}}_{r}\). The first thing to note is that, when we consider the full graph (instead of restricting to a subregion \(r\)), we have \(\mathcal{A}=\mathcal{B}(\mathcal{H})\) and \(\widetilde{\mathcal{A}}=\mathcal{B}(\widetilde{\mathcal{H}})\) where \(\widetilde{\mathcal{H}}\) is a subspace of \(\mathcal{H}\) so minimal projections in \(\widetilde{\mathcal{A}}\) are also minimal projections in \(\mathcal{A}\), and the trace \(\widetilde{\mathrm{Tr}}\) in \(\widetilde{\mathcal{A}}\) therefore agrees with the trace \(\mathrm{Tr}\) in \(\mathcal{A}\) when acting on gauge-invariant states. Hence, a gauge-invariant state \(\widetilde{\rho}\) on the full graph that is properly normalized under the \(\widetilde{\mathrm{Tr}}\) trace is also properly normalized under the \(\mathrm{Tr}\) trace, and can therefore be viewed as a properly normalized state \(\rho=\widetilde{\rho}\) in \(\mathcal{H}\) (albeit a special one). Thus, we will use only \(\rho\) (not \(\widetilde{\rho}\)) for notational simplicity in the following discussions. We should still remember that \(\rho\) is a special state that belongs to \(\widetilde{\mathcal{A}}\). The above statements do not hold for reduced states on subregions. In particular, we need to distinguish a properly normalized state \(\rho_{r}\) in \(\mathcal{A}_{r}\) from a properly normalized \(\widetilde{\rho}_{r}\) in \(\widetilde{\mathcal{A}}_{r}\). Now we derive the relation between these two states. Recall that to find \(S(\rho,\mathcal{A}_{r})\) for a general subalgebra \(\mathcal{A}_{r}\subset\mathcal{A}\), we need to find a reduced state \(\rho_{r}\in\mathcal{A}_{r}\) satisfying \[\mathrm{Tr}_{r}(\rho_{r}\mathcal{O}_{r})=\mathrm{Tr}(\rho\mathcal{O}_{r}) \tag{3.69}\] for all \(\mathcal{O}_{r}\in\mathcal{A}_{r}\). 
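As a quick illustration of this defining property (a numerical sketch only, with arbitrary toy dimensions), in the familiar case where the subalgebra consists of all operators on a tensor factor, the state satisfying (3.69) is the ordinary partial trace:

```python
import numpy as np

# Check of the defining property (3.69) in the factorized case:
# for A_r = B(H_r) the reduced state is the ordinary partial trace over H_rbar.
rng = np.random.default_rng(0)
dr, drbar = 2, 3

M = rng.normal(size=(dr * drbar, dr * drbar)) + 1j * rng.normal(size=(dr * drbar, dr * drbar))
rho = M @ M.conj().T
rho /= np.trace(rho)  # a generic density matrix on H_r ⊗ H_rbar

rho_r = np.trace(rho.reshape(dr, drbar, dr, drbar), axis1=1, axis2=3)  # partial trace over rbar

for _ in range(5):
    O_r = rng.normal(size=(dr, dr)) + 1j * rng.normal(size=(dr, dr))
    lhs = np.trace(rho_r @ O_r)
    rhs = np.trace(rho @ np.kron(O_r, np.eye(drbar)))
    assert np.isclose(lhs, rhs)
print("partial trace reproduces Tr_r(rho_r O_r) = Tr(rho (O_r ⊗ 1))")
```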
For our particular \(\mathcal{A}_{r}\) (the pre-gauged algebra on \(r\)), the answer is, of course, \(\rho_{r}=\mathrm{Tr}_{\overline{r}}\,\rho\). Now we work out the reduced state in the subalgebra \(\widetilde{\mathcal{A}}_{r}\). **Lemma 10**.: The reduced state \(\widetilde{\rho}_{r}\in\widetilde{\mathcal{A}}_{r}\) satisfying \[\widetilde{\mathrm{Tr}}_{r}(\widetilde{\rho}_{r}\widetilde{\mathcal{O}}_{r}) =\mathrm{Tr}\Big{(}\rho\widetilde{\mathcal{O}}_{r}\Big{)} \tag{3.70}\] for all \(\widetilde{\mathcal{O}}_{r}\in\widetilde{\mathcal{A}}_{r}\) is of the form \[\widetilde{\rho}_{r}=\hat{\rho}_{r}\Pi_{\mathrm{GI}},\quad\hat{\rho}_{r}= \sum_{\alpha ijk}\hat{\rho}_{\alpha ij}\left|\alpha ik\right\rangle_{r}\left \langle\alpha jk\right|\otimes 1_{\overline{r}}, \tag{3.71}\] with \(\hat{\rho}_{\alpha ij}=d_{\alpha}\rho_{\alpha ij}\), where \(\rho_{\alpha ij}\) is defined by \[\rho_{r}=\sum_{\alpha ijk}\rho_{\alpha ij}\left|\alpha ik\right\rangle_{r} \left\langle\alpha jk\right|. \tag{3.72}\] Proof.: A general gauge-invariant state \(\rho\in\widetilde{\mathcal{A}}\) can be written as \[\rho=\sum_{\alpha\beta ijk^{\prime}j^{\prime}k^{\prime}}\rho_{\alpha\beta ii ^{\prime}jj^{\prime}}\left|\alpha ik\right\rangle_{r}\left|\overline{\alpha} jk\right\rangle_{\overline{r}}\left\langle\beta^{\prime}k^{\prime} \right|_{r}\left\langle\overline{\beta}j^{\prime}k^{\prime}\right|_{\overline{ r}} \tag{3.73}\] using the basis states for \(\widetilde{\mathcal{H}}\). Tracing over \(\overline{r}\), we find \[\rho_{r}=\mathrm{Tr}_{\overline{r}}\,\rho =\sum_{\alpha\beta ijki^{\prime}j^{\prime}k^{\prime}}\rho_{\alpha \beta ii^{\prime}jj^{\prime}}\left\langle\overline{\beta}j^{\prime}k^{\prime} |\overline{\alpha}jk\right\rangle_{\overline{r}}\left|\alpha i^{\prime}k^{ \prime}\right\rangle_{r}\left\langle\beta ik\right|\] \[=\sum_{\alpha ii^{\prime}jk}\rho_{\alpha\alpha ii^{\prime}jj} \left|\alpha ik\right\rangle_{r}\left\langle\alpha^{\prime}k\right|. \tag{3.74}\] This verifies (3.72) and determines \(\rho_{aij}\). Now recall that as an element of \(\widetilde{\mathcal{A}}_{r}\), \(\widetilde{\rho}_{r}\) must be of the form (3.71) with some complex coefficients \(\hat{\rho}_{\alpha ij}\). It remains to determine what they are from (3.70). In order to impose it, we define the following basis for \(\widetilde{\mathcal{A}}_{r}\): \[\widetilde{\mathcal{O}}_{r}^{\alpha ij}=\hat{\mathcal{O}}_{r}^{\alpha ij}\Pi_{ \text{GI}},\quad\hat{\mathcal{O}}_{r}^{\alpha ij}=\sum_{k}\left|\alpha ik \right\rangle_{r}\left\langle\alpha jk\right|\otimes\mathbbm{1}_{\overline{r}}, \tag{3.75}\] such that we can rewrite the reduced gauge-invariant density matrix as \[\widetilde{\rho}_{r}=\sum_{\alpha ij}\hat{\rho}_{\alpha ij}\widetilde{ \mathcal{O}}_{\alpha ij}. \tag{3.76}\] Note that the basis elements \(\widetilde{\mathcal{O}}_{r}^{\alpha ij}\) and their corresponding basis elements \(\hat{\mathcal{O}}_{r}^{\alpha ij}\in\hat{\mathcal{A}}_{r}\) obey the following relations: \[\widetilde{\mathcal{O}}_{r}^{\alpha ij}\widetilde{\mathcal{O}}_{r }^{\beta i^{\prime}j^{\prime}}=\delta_{\alpha\beta}\delta_{i^{\prime}j} \widetilde{\mathcal{O}}_{r}^{\alpha ij^{\prime}},\quad\widetilde{\text{Tr}}_ {r}\widetilde{\mathcal{O}}_{r}^{\alpha ij}=\delta_{ij} \tag{3.77}\] \[\hat{\mathcal{O}}_{r}^{\alpha ij}\hat{\mathcal{O}}_{r}^{\beta i ^{\prime}j^{\prime}}=\delta_{\alpha\beta}\delta_{i^{\prime}j}\hat{\mathcal{O} }_{r}^{\alpha ij^{\prime}},\quad\text{Tr}_{r}\,\hat{\mathcal{O}}_{r}^{\alpha ij }=d_{\alpha}\delta_{ij}. 
\tag{3.78}\] From these relations we can check both sides of (3.70) for \(\widetilde{\mathcal{O}}_{r}\) set to one of the basis elements \(\widetilde{\mathcal{O}}_{r}^{\alpha ij}\). The trace in the gauge-invariant algebra becomes \[\widetilde{\text{Tr}}_{r}\left(\widetilde{\rho}_{r}\widetilde{\mathcal{O}}_{r}^{\alpha ij}\right)=\widetilde{\text{Tr}}_{r}\left(\sum_{\beta i^{\prime}j^{\prime}}\hat{\rho}_{\beta i^{\prime}j^{\prime}}\widetilde{\mathcal{O}}_{r}^{\beta i^{\prime}j^{\prime}}\widetilde{\mathcal{O}}_{r}^{\alpha ij}\right)=\sum_{\beta i^{\prime}j^{\prime}}\hat{\rho}_{\beta i^{\prime}j^{\prime}}\delta_{\alpha\beta}\delta_{i^{\prime}j}\delta_{ij^{\prime}}=\hat{\rho}_{\alpha ji}. \tag{3.79}\] We need to equate this with the trace in the pre-gauged algebra, which we begin to evaluate by simplifying to the trace in \(\mathcal{A}_{r}\). We have \[\text{Tr}\Big{(}\rho\widetilde{\mathcal{O}}_{r}^{\alpha ij}\Big{)}=\text{Tr}\Big{(}\rho\hat{\mathcal{O}}_{r}^{\alpha ij}\Pi_{\text{GI}}\Big{)}=\text{Tr}\Big{(}\Pi_{\text{GI}}\rho\hat{\mathcal{O}}_{r}^{\alpha ij}\Big{)}=\text{Tr}\Big{(}\rho\hat{\mathcal{O}}_{r}^{\alpha ij}\Big{)}=\text{Tr}_{r}(\rho_{r}\hat{\mathcal{O}}_{r}^{\alpha ij}), \tag{3.80}\] where we have used the cyclicity of the trace, the gauge invariance of \(\rho\), and the fact that \(\hat{\mathcal{O}}_{r}^{\alpha ij}\in\mathcal{A}_{r}\). We further simplify this and obtain \[\text{Tr}_{r}(\rho_{r}\hat{\mathcal{O}}_{r}^{\alpha ij})=\text{Tr}_{r}\left(\sum_{\beta i^{\prime}j^{\prime}}\rho_{\beta i^{\prime}j^{\prime}}\hat{\mathcal{O}}_{r}^{\beta i^{\prime}j^{\prime}}\hat{\mathcal{O}}_{r}^{\alpha ij}\right)=\sum_{\beta i^{\prime}j^{\prime}}d_{\alpha}\rho_{\beta i^{\prime}j^{\prime}}\delta_{\alpha\beta}\delta_{i^{\prime}j}\delta_{ij^{\prime}}=d_{\alpha}\rho_{\alpha ji}. \tag{3.81}\] Thus we identify the reduced density matrix \(\widetilde{\rho}_{r}\in\widetilde{\mathcal{A}}_{r}\) as a density matrix of the form (3.71) with \[\hat{\rho}_{\alpha ij}=d_{\alpha}\rho_{\alpha ij}. \tag{3.82}\]

## 4 Entropies in the Gauged Random Tensor Network

Having written down the reduced states in \(\mathcal{A}_{r}\) and \(\widetilde{\mathcal{A}}_{r}\), we are now ready to compute the von Neumann entropies with respect to the two algebras. As we will see, the difference between the two entropies in the gauged random tensor network is precisely accounted for by an additional contribution to the area operator in the non-trivial center \(\widetilde{\mathcal{Z}}_{r}\).

### Entanglement entropy

From (3.71) and (3.72), we proceed by defining the reduced states projected onto a superselection sector \(\alpha\): \[\rho_{r}^{\alpha}=\sum_{ijk}\rho_{\alpha ij}|\alpha ik\rangle_{r}\langle\alpha jk|,\qquad\hat{\rho}_{r}^{\alpha}=\sum_{ijk}\hat{\rho}_{\alpha ij}|\alpha ik\rangle_{r}\langle\alpha jk|=d_{\alpha}\rho_{r}^{\alpha}. \tag{4.1}\] Note that these density matrices are not properly normalized with respect to their appropriate traces. The reduced states (3.71) and (3.72) can be written as a direct sum over representations: \[\rho_{r}=\bigoplus_{\alpha}\rho_{r}^{\alpha},\qquad\widetilde{\rho}_{r}=\bigoplus_{\alpha}\hat{\rho}_{r}^{\alpha}\Pi_{\text{GI}}. \tag{4.2}\] Furthermore, functions of the reduced states are superselected in the same way. 
In particular, \[\rho_{r}\log\rho_{r}=\bigoplus_{\alpha}\rho_{r}^{\alpha}\log\rho_{r}^{\alpha},\qquad\widetilde{\rho}_{r}\log\widetilde{\rho}_{r}=\bigoplus_{\alpha}(\hat{\rho}_{r}^{\alpha}\log\hat{\rho}_{r}^{\alpha})\Pi_{\text{GI}}, \tag{4.3}\] where we used the fact that \([\hat{\rho}_{r}^{\alpha},\Pi_{\text{GI}}]=0\). We are now ready to compute the subregion entropies (in the bulk). The von Neumann entropy of \(\rho\) with respect to \(\mathcal{A}_{r}\) is simply given by \[S(\rho,\mathcal{A}_{r})=-\operatorname{Tr}_{r}\rho_{r}\log\rho_{r}=-\sum_{\alpha}\operatorname{Tr}_{r}\rho_{r}^{\alpha}\log\rho_{r}^{\alpha}. \tag{4.4}\] On the other hand, using the relation between the traces (3.68), we can write the von Neumann entropy with respect to \(\widetilde{\mathcal{A}}_{r}\) as \[S(\rho,\widetilde{\mathcal{A}}_{r})=-\widetilde{\operatorname{Tr}_{r}}\widetilde{\rho}_{r}\log\widetilde{\rho}_{r}=-\sum_{\alpha}d_{\alpha}^{-1}\operatorname{Tr}_{r}\hat{\rho}_{r}^{\alpha}\log\hat{\rho}_{r}^{\alpha}. \tag{4.5}\] Using \(\hat{\rho}_{r}^{\alpha}=d_{\alpha}\rho_{r}^{\alpha}\) and \(\operatorname{Tr}_{r}\rho_{r}^{\alpha}=\widetilde{\operatorname{Tr}_{r}}\widetilde{\rho}_{r}^{\alpha}\), we can rewrite each term in the sum as \[d_{\alpha}^{-1}\operatorname{Tr}_{r}\hat{\rho}_{r}^{\alpha}\log\hat{\rho}_{r}^{\alpha} =\operatorname{Tr}_{r}\rho_{r}^{\alpha}\log\rho_{r}^{\alpha}+\operatorname{Tr}_{r}\rho_{r}^{\alpha}\log d_{\alpha}\] \[=\operatorname{Tr}_{r}\rho_{r}^{\alpha}\log\rho_{r}^{\alpha}+\widetilde{\operatorname{Tr}_{r}}\widetilde{\rho}_{r}^{\alpha}\log d_{\alpha}. \tag{4.6}\] The von Neumann entropy with respect to \(\widetilde{\mathcal{A}}_{r}\) can thus be written as \[S(\rho,\widetilde{\mathcal{A}}_{r}) =-\sum_{\alpha}\Big{(}\mathrm{Tr}_{r}\,\rho_{r}^{\alpha}\log\rho_{r}^{\alpha}+\widetilde{\mathrm{Tr}}_{r}\widetilde{\rho}_{r}^{\alpha}\log d_{\alpha}\Big{)}\] \[=S(\rho,\mathcal{A}_{r})-\widetilde{\mathrm{Tr}}_{r}\left(\widetilde{\rho}_{r}\Delta\widetilde{A}\right), \tag{4.7}\] where we have defined a new "extra area operator" via \[\Delta\widetilde{A}\equiv\bigoplus_{\alpha}\widetilde{P}^{\alpha}\log d_{\alpha}. \tag{4.8}\] The projections \(\widetilde{P}^{\alpha}\) are precisely the projections (3.48) which generate the center \(\widetilde{\mathcal{Z}}_{r}\), so \(\Delta\widetilde{A}\) is manifestly an operator in the center. We have now arrived at our final relation between the entropies with respect to \(\mathcal{A}_{r}\) and \(\widetilde{\mathcal{A}}_{r}\), \[S(\rho,\mathcal{A}_{r})=S(\rho,\widetilde{\mathcal{A}}_{r})+\widetilde{\mathrm{Tr}}_{r}\left(\widetilde{\rho}_{r}\Delta\widetilde{A}\right), \tag{4.9}\] which we now use in our two-layer gauged RTN defined in Section 2. In particular, we would like to derive an FLM formula relating the boundary entropy with the gauged bulk entropy \(S(\rho,\widetilde{\mathcal{A}}_{r})\). Recall that when we feed any bulk state \(\rho\) in the pre-gauged algebra \(\mathcal{A}\) into the RTN, the entropy \(S(R)\) of the resulting boundary state on a boundary subregion \(R\) satisfies an FLM formula: \[S(R)=|\gamma_{R}|\log D+S(\rho,\mathcal{A}_{r}), \tag{4.10}\] where the bulk subregion \(r\) is chosen to be the entanglement wedge between \(R\) and its minimal surface \(\gamma_{R}\). 
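As a consistency check on the relation derived above between the two bulk entropies, the following sketch (with arbitrary toy sector data, not tied to any particular graph or group) builds a block-diagonal reduced state of the form (4.1)-(4.2), computes the pre-gauged entropy directly from its spectrum, and verifies that it exceeds the gauge-invariant entropy by exactly the expectation value of the extra area operator. The area term is simply \(\sum_{\alpha}p_{\alpha}\log d_{\alpha}\), where \(p_{\alpha}\) is the probability of the \(\alpha\) superselection sector.

```python
import numpy as np
from scipy.linalg import block_diag

def vn_entropy(mat):
    """-Tr[m log m] of a positive matrix (zero eigenvalues ignored)."""
    evals = np.linalg.eigvalsh(mat)
    evals = evals[evals > 1e-12]
    return -np.sum(evals * np.log(evals))

rng = np.random.default_rng(1)

# Toy sector data: irrep dimensions d_alpha and multiplicities n_alpha.
dims  = [1, 2, 3]   # d_alpha
mults = [2, 3, 1]   # n_alpha

# Random positive blocks M_alpha (the multiplicity-space part of rho_r in each
# sector), normalized so that sum_alpha d_alpha Tr M_alpha = 1.
blocks = []
for n in mults:
    A = rng.normal(size=(n, n))
    blocks.append(A @ A.T)
norm = sum(d * np.trace(M) for d, M in zip(dims, blocks))
blocks = [M / norm for M in blocks]

# In each sector rho_r acts as M_alpha ⊗ 1_{d_alpha} on the basis |alpha i k>.
rho_r = block_diag(*[np.kron(M, np.eye(d)) for d, M in zip(dims, blocks)])

S_pregauged = vn_entropy(rho_r)                                   # S(rho, A_r), ordinary trace
S_gauged = sum(vn_entropy(d * M) for d, M in zip(dims, blocks))   # S(rho, Ã_r), rescaled trace
area_term = sum(d * np.trace(M) * np.log(d) for d, M in zip(dims, blocks))  # <Delta Ã>

assert np.isclose(S_pregauged, S_gauged + area_term)
print(S_pregauged, S_gauged, area_term)
```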
Now specializing to a gauge-invariant bulk state \(\rho\in\widetilde{\mathcal{A}}\) and using (4.9), we find that the boundary entropy can now be written as a new FLM formula: \[S(R)=\widetilde{\mathrm{Tr}}_{r}\Big{(}\widetilde{\rho}_{r}\widetilde{A}\Big{)}+S\left(\rho,\widetilde{\mathcal{A}}_{r}\right), \tag{4.11}\] where the full area operator \(\widetilde{A}\) is \[\widetilde{A}\,=\,|\gamma_{R}|\log D\,+\,\bigoplus_{\alpha}\widetilde{P}^{\alpha}\log d_{\alpha}\,=\,|\gamma_{R}|\log D\ +\bigoplus_{\alpha_{1},\cdots,\alpha_{n}}\widetilde{P}^{(\alpha_{1},\cdots,\alpha_{n})}\sum_{i=1}^{n}\log d_{\alpha_{i}}. \tag{4.12}\] Again, we sum over all irreps \(\alpha=(\alpha_{1},\cdots,\alpha_{n})\) of \(G^{n}\) acting on the cut, although some \(\alpha\) sectors may be empty (i.e., \(n_{\alpha}\) or \(\overline{n}_{\overline{\alpha}}\) is zero) in which case \(\widetilde{P}^{\alpha}\) vanishes. This is our main result. We note that this area operator looks like what arises in a superposition of a stack of standard RTNs with probabilities determined by the projections \(\widetilde{P}^{\alpha}\) and with bond dimensions augmented by \(d_{\alpha_{i}}\).

### Renyi entropy and Renyi mutual information

As discussed in Section 2, one can modify the entanglement structure of the links in the standard RTN to obtain a non-flat Renyi spectrum for boundary states. However, this is not enough to reproduce the properties of holographic Renyi entropies on general boundary subregions. In particular, it fails to account for the lack of backreaction, displayed in the tensor network as a lack of (Renyi) correlation between disconnected boundary subregions when the RT surface is in a disconnected phase. This problem becomes clear when one calculates the Renyi mutual information between two such boundary subregions \(R_{1}\) and \(R_{2}\), defined as13 Footnote 13: The Rényi index \(n\) should not be confused with the number of vertices on the cut \(n=|V_{c}|\). \[I_{n}(R_{1}:R_{2})\equiv S_{n}(R_{1})+S_{n}(R_{2})-S_{n}(R_{1}\cup R_{2}). \tag{4.13}\] As the area operator in the original RTN is a \(c\)-number, using (1.3) we find that the area operator contribution cancels out in \(I_{n}(R_{1}:R_{2})\) for all \(n\) (as long as the minimal surface \(\gamma_{R}\) is in a disconnected phase), leaving the boundary mutual information equal to the bulk mutual information: \[I_{n}(R_{1}:R_{2})=I_{n}(r_{1}:r_{2},\mathcal{A}_{r_{1}r_{2}}). \tag{4.14}\] This implies that, if one wants a contribution to the Renyi mutual information of the same order as the area, that is \(\mathcal{O}(\log D)\), one must input by hand a highly entangled bulk state. Doing this is unsatisfying and quite arbitrary. We will now see that our gauged RTN solves this problem in a natural way, due to our nontrivial area operator. In general, the presence of a nontrivial area operator will lead to a nontrivial, \(n\)-dependent boundary Renyi mutual information, even for states with vanishing bulk Renyi mutual information. To see how this is realized in the gauged RTN, we will study a simple example shown in Figure 3, where the top layer is disconnected but the bottom layer is connected.14 We allow the bond dimensions in the bottom layer to be different for different links, and in fact design them so that the minimal surfaces associated with \(R_{1}\), \(R_{2}\), and their union \(R_{1}\cup R_{2}\) are fixed as we vary the Renyi index \(n\) at \(\mathcal{O}(1)\) values. 
We will feed in a gauge-invariant bulk state \(\rho\) with the following reduced state on \(r_{1}\cup r_{2}\): Footnote 14: This connection is unnecessary to prove our point, as the internal leg connecting \(r_{1}\) and \(r_{2}\) never contributes to the area term, but it is more intuitively satisfying to discuss a connected spatial slice for the purposes of demonstrating backreaction. \[\rho_{r_{1}r_{2}}=\sum_{\alpha\beta}(d_{\alpha}d_{\beta})^{-1}P(\alpha,\beta)\sum_{k\ell}\left|\alpha ik\right\rangle_{r_{1}}\left|\beta j\ell\right\rangle_{r_{2}}\left\langle\alpha ik\right|_{r_{1}}\left\langle\beta j\ell\right|_{r_{2}}, \tag{4.15}\] for some particular choice of \(i\), \(j\). This state has classical correlations between \(r_{1}\) and \(r_{2}\) as described by a probability distribution \(P(\alpha,\beta)\), but has no quantum correlations. For simplicity, we consider the following distribution \(P(\alpha,\beta)\) that has support on only two superselection sectors \(\alpha_{1}\), \(\alpha_{2}\) on \(r_{1}\) and only two sectors \(\beta_{1}\), \(\beta_{2}\) on \(r_{2}\): \[P(\alpha_{1},\beta_{1})=p,\quad P(\alpha_{2},\beta_{1})=P(\alpha_{1},\beta_{2})=p^{\prime},\quad P(\alpha_{2},\beta_{2})=p^{\prime\prime}, \tag{4.16}\] subject to the constraint \(p+2p^{\prime}+p^{\prime\prime}=1\). The Renyi entropy of \(\rho\) in the pre-gauged algebra \({\cal A}_{r_{1}r_{2}}\) is defined as \[S_{n}(\rho,{\cal A}_{r_{1}r_{2}})\equiv\frac{1}{1-n}\log\left({\rm Tr}_{r_{1}r_{2}}\,\rho_{r_{1}r_{2}}^{n}\right). \tag{4.17}\] Figure 3: A simple gauged RTN in which we compute the Rényi mutual information between \(R_{1}\) and \(R_{2}\). The input from the top layer lives on four edges of a disconnected graph \(G\), as we choose to have no matter on any of the vertices. In the bottom layer, the thick legs have a bond dimension much larger than that of the thin legs, such that the minimal surfaces for the three boundary regions \(R_{1}\), \(R_{2}\), and \(R_{1}\cup R_{2}\) only involve the light internal legs. Consequently, the associated bulk regions will be \(r_{1}\), \(r_{2}\), and \(r_{1}\cup r_{2}\). Using our \(\rho_{r_{1}r_{2}}\), we find \[S_{n}(\rho,\mathcal{A}_{r_{1}r_{2}})=\frac{1}{1-n}\log \bigg{(}d_{\alpha_{1}}d_{\beta_{1}}\left(\frac{p}{d_{\alpha_{1}}d_{\beta_{1}}}\right)^{n}+d_{\alpha_{2}}d_{\beta_{1}}\left(\frac{p^{\prime}}{d_{\alpha_{2}}d_{\beta_{1}}}\right)^{n}\] \[+d_{\alpha_{1}}d_{\beta_{2}}\left(\frac{p^{\prime}}{d_{\alpha_{1}}d_{\beta_{2}}}\right)^{n}+d_{\alpha_{2}}d_{\beta_{2}}\left(\frac{p^{\prime\prime}}{d_{\alpha_{2}}d_{\beta_{2}}}\right)^{n}\bigg{)}. \tag{4.18}\] We can also compute the reduced density matrices on \(r_{1}\) and \(r_{2}\), as well as their corresponding Renyi entropies in the pre-gauged algebra. 
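Before turning to the reduced density matrices, here is a small numerical cross-check of the closed form above (a sketch with arbitrary toy dimensions and probabilities, not the values used for the plots below): in each \((\alpha,\beta)\) block the state is proportional to the identity, so its spectrum, and hence its Renyi entropy, can be compared directly against the formula.

```python
import numpy as np

# Toy check of the Renyi entropy of the classically correlated state above.
d = {"a1": 2, "a2": 3}        # d_{alpha_1}, d_{alpha_2}
e = {"b1": 2, "b2": 4}        # d_{beta_1},  d_{beta_2}
p, pp, ppp = 0.7, 0.1, 0.1    # P(a1,b1)=p, P(a2,b1)=P(a1,b2)=p', P(a2,b2)=p''
P = {("a1", "b1"): p, ("a2", "b1"): pp, ("a1", "b2"): pp, ("a2", "b2"): ppp}
assert np.isclose(sum(P.values()), 1.0)

# In each (alpha, beta) block the state is P(alpha,beta)/(d_alpha d_beta) times
# the identity on a (d_alpha d_beta)-dimensional space, so the spectrum is known.
spectrum = np.concatenate([
    np.full(d[a] * e[b], P[a, b] / (d[a] * e[b])) for (a, b) in P
])

def renyi(evals, n):
    return np.log(np.sum(evals ** n)) / (1 - n)

def renyi_closed_form(n):
    return np.log(sum(d[a] * e[b] * (P[a, b] / (d[a] * e[b])) ** n for (a, b) in P)) / (1 - n)

for n in [0.5, 2, 3, 10]:
    assert np.isclose(renyi(spectrum, n), renyi_closed_form(n))
print("closed-form Renyi entropy matches the explicit spectrum")
```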
We find the reduced density matrices to be \[\rho_{r_{1}} =\sum_{k=1}^{d_{\alpha_{1}}}d_{\alpha_{1}}^{-1}(p+p^{\prime})\left|\alpha_{1}ik\right\rangle_{r_{1}}\left\langle\alpha_{1}ik\right|+\sum_{k^{\prime}=1}^{d_{\alpha_{2}}}d_{\alpha_{2}}^{-1}(p^{\prime}+p^{\prime\prime})\left|\alpha_{2}ik^{\prime}\right\rangle_{r_{1}}\left\langle\alpha_{2}ik^{\prime}\right|,\] \[\rho_{r_{2}} =\sum_{k=1}^{d_{\beta_{1}}}d_{\beta_{1}}^{-1}(p+p^{\prime})\left|\beta_{1}jk\right\rangle_{r_{2}}\left\langle\beta_{1}jk\right|+\sum_{k^{\prime}=1}^{d_{\beta_{2}}}d_{\beta_{2}}^{-1}(p^{\prime}+p^{\prime\prime})\left|\beta_{2}jk^{\prime}\right\rangle_{r_{2}}\left\langle\beta_{2}jk^{\prime}\right|, \tag{4.19}\] and the bulk Renyi entropies are \[S_{n}(\rho,\mathcal{A}_{r_{1}}) =\frac{1}{1-n}\log\left(d_{\alpha_{1}}\left(\frac{p+p^{\prime}}{d_{\alpha_{1}}}\right)^{n}+d_{\alpha_{2}}\left(\frac{p^{\prime}+p^{\prime\prime}}{d_{\alpha_{2}}}\right)^{n}\right)\] \[S_{n}(\rho,\mathcal{A}_{r_{2}}) =\frac{1}{1-n}\log\left(d_{\beta_{1}}\left(\frac{p+p^{\prime}}{d_{\beta_{1}}}\right)^{n}+d_{\beta_{2}}\left(\frac{p^{\prime}+p^{\prime\prime}}{d_{\beta_{2}}}\right)^{n}\right). \tag{4.20}\] In the gauge-invariant algebra, the dependence on irrep dimensions drops out and the Renyi entropies become purely Shannon terms: \[S_{n}(\rho,\widetilde{\mathcal{A}}_{r_{1}}) =S_{n}(\rho,\widetilde{\mathcal{A}}_{r_{2}})=\frac{1}{1-n}\log\left(\left(p+p^{\prime}\right)^{n}+\left(p^{\prime}+p^{\prime\prime}\right)^{n}\right)\] \[S_{n}(\rho,\widetilde{\mathcal{A}}_{r_{1}r_{2}}) =\frac{1}{1-n}\log\left(p^{n}+2(p^{\prime})^{n}+(p^{\prime\prime})^{n}\right), \tag{4.21}\] which we choose to be parametrically suppressed relative to the Renyi entropies in the pre-gauged algebra. When the sum inside the logarithm is dominated by one term, we can approximate it using \[\log\left(\sum_{i}x_{i}\right)\approx\log\Big{(}\max_{i}\left\{x_{i}\right\}\Big{)}. \tag{4.22}\] To simplify our calculation, we will enter a parameter regime where all three (pre-gauged) Renyi entropies satisfy the approximation above and have phase transitions. First consider \(S_{n}(\rho,\mathcal{A}_{r_{1}})\). We take \(d_{\alpha_{1}}>d_{\alpha_{2}}\). The two terms in the sum are equal at some critical \(n_{*}\), given by \[\left(\frac{p+p^{\prime}}{p^{\prime}+p^{\prime\prime}}\right)^{n_{*}}=\left(\frac{d_{\alpha_{1}}}{d_{\alpha_{2}}}\right)^{n_{*}-1}\quad\Rightarrow\quad\frac{n_{*}}{n_{*}-1}=\frac{\log\left(\frac{d_{\alpha_{1}}}{d_{\alpha_{2}}}\right)}{\log\left(\frac{p+p^{\prime}}{p^{\prime}+p^{\prime\prime}}\right)}. \tag{4.23}\] Thus, in order to have a phase transition at \(n_{*}>1\) we require \[\log\left(\frac{d_{\alpha_{1}}}{d_{\alpha_{2}}}\right)>\log\left(\frac{p+p^{\prime}}{p^{\prime}+p^{\prime\prime}}\right). \tag{4.24}\] The width of this transition is controlled by the corrections to the approximation (4.22). This depends on the curvature of \(S_{n}(\rho,{\cal A}_{r_{1}})\) at \(n_{*}\); explicitly we can diagnose this with the following quantity: \[\frac{d^{2}}{dn^{2}}(1-n)S_{n}(\rho,{\cal A}_{r_{1}})\bigg{|}_{n=n_{*}}=\frac{1}{4}\left(\log\frac{d_{\alpha_{1}}(p^{\prime}+p^{\prime\prime})}{d_{\alpha_{2}}(p+p^{\prime})}\right)^{2}. \tag{4.25}\] For fixed \(n_{*}\), this quantity increases with increasing \(d_{\alpha_{1}}/d_{\alpha_{2}}\), so we should make this ratio large for a sharp transition. A simple way to ensure the previous conditions is the following: \[\frac{d_{\alpha_{1}}}{d_{\alpha_{2}}}\equiv q\gg 1,\quad p\gg p^{\prime},\quad p^{\prime}\gg p^{\prime\prime}. 
\tag{102}\] Furthermore, we impose \[\frac{d_{\alpha_{1}}}{d_{\alpha_{2}}}=\frac{d_{\beta_{1}}}{d_{\beta_{2}}}=q, \tag{103}\] which forces the phase transitions in \(S_{n}(\rho,{\cal A}_{r_{1}})\) and \(S_{n}(\rho,{\cal A}_{r_{2}})\) to occur at the same critical \(n_{*}\). Now let us examine the phase transition in \(S_{n}(\rho,{\cal A}_{r_{1}r_{2}})\). In the limit of sharp transitions we have \[S_{n}(\rho,{\cal A}_{r_{1}r_{2}})\approx\frac{1}{1-n}\log\left(\max\left\{ \frac{p^{n}}{(d_{\alpha_{1}}d_{\beta_{1}})^{n-1}},\frac{(p^{\prime})^{n}}{(d_{ \alpha_{2}}d_{\beta_{1}})^{n-1}},\frac{(p^{\prime})^{n}}{(d_{\alpha_{1}}d_{ \beta_{2}})^{n-1}},\frac{(p^{\prime\prime})^{n}}{(d_{\alpha_{2}}d_{\beta_{2}}) ^{n-1}}\right\}\right). \tag{104}\] For simplicity, we will choose \[\frac{p}{p^{\prime}}>\frac{p^{\prime}}{p^{\prime\prime}}\gg 1. \tag{105}\] In this case, we find that \(S_{n}(\rho,{\cal A}_{r_{1}r_{2}})\) has a phase transition occurring at a critical \(n_{c}\) determined by \[\frac{n_{c}}{n_{c}-1}=\frac{\log(q^{2})}{\log\left(\frac{p}{p^{\prime\prime}} \right)}=\frac{\log(q^{2})}{\log\left(\frac{p}{p^{\prime}}\frac{p^{\prime}}{p^ {\prime\prime}}\right)} \tag{106}\] which satisfies \(1<n_{c}<n_{*}\). We now combine the above results to find the (pre-gauged) Renyi mutual information \[I_{n}(r_{1}:r_{2},{\cal A}_{r_{1}r_{2}})\equiv S_{n}(\rho,{\cal A}_{r_{1}})+S _{n}(\rho,{\cal A}_{r_{2}})-S_{n}(\rho,{\cal A}_{r_{1}r_{2}}). \tag{107}\] We find the following phases: \[I_{n}(r_{1}:r_{2},\mathcal{A}_{r_{1}r_{2}})\approx\begin{cases}0&n<n_{c},\\ \log\left(q^{2}\right)+\frac{n}{1-n}\log\left(\frac{(p+p^{\prime})^{2}}{p^{ \prime\prime}}\right)&n_{c}<n<n_{*},\\ \frac{n}{1-n}\log\left(\frac{(p^{\prime}+p^{\prime\prime})^{2}}{p^{\prime \prime}}\right)&n_{*}<n.\end{cases} \tag{102}\] Now we rewrite the boundary Renyi mutual information (101) as \[S_{n}(R_{1}:R_{2})=\underbrace{I_{n}(r_{1}:r_{2},\mathcal{A}_{r_{1}r_{2}})-I_ {n}(r_{1}:r_{2},\widetilde{\mathcal{A}}_{r_{1}r_{2}})}_{\text{area contribution}}+\underbrace{I_{n}(r_{1}:r_{2},\widetilde{\mathcal{A}}_{r_{1}r_{2}})}_ {\text{bulk matter contribution}}\, \tag{103}\] where the contribution of the nontrivial area operator to the boundary Renyi mutual information is identified with the difference of the bulk Renyi mutual information in the two algebras. As stated previously, \(I_{n}(r_{1}:r_{2},\widetilde{\mathcal{A}}_{r_{1}r_{2}})\) is suppressed relative to \(I_{n}(r_{1}:r_{2},\mathcal{A}_{r_{1}r_{2}})\), so this model implements phase transitions in the boundary Renyi mutual information without a large bulk matter contribution (in the gauge-invariant algebra). We plot these two phase transitions for an example in Figure 4. This is a proof of concept showing that adding bulk gauge symmetries to the RTN in this manner allows the boundary Renyi mutual information to be nontrivial and \(n\)-dependent, even for states with small bulk Renyi mutual information (in the gauge-invariant algebra). In our simple example here, the minimal surface does not shift--i.e. it is the same for all \(n\)--but there is no obstruction to writing a more complicated example in which the location of the minimal surface changes with \(n\) due to the nontrivial area operator. ## 5 Discussion and Outlook In this work, we have presented a modification of the random tensor network which allows us to reproduce known features of semiclassical holographic states. We discuss some open questions and possible future directions below. 
We have presented a toy model which, for simple choices of bulk input state, exhibits sharp phase transitions in the Renyi entropy and Renyi mutual information. With a sufficiently tuned set of probabilities and irrep dimensions, one could engineer a smooth varying Renyi entropy that matches with, for example, the correct one-interval CFT\({}_{2}\) Renyi entropy (5). It would be an even more complicated task to reproduce the correct Renyi entropy for multiple intervals in the CFT [26; 27]. The bulk algebras that we encountered in our model are type I von Neumann algebras. This is in contrast to the type II von Neumann algebras for gravity constructed using the crossed product [28; 29; 30]. A "type I approximation" to the crossed product was recently studied in [31]. It is thus tempting to incorporate the crossed product and the resultant birth of a type II algebra into the tensor network toy models of holography. Our gauge-invariant subregion algebras generally have nontrivial centers. On the other hand, a prescription was given in [32] to construct gauge-invariant subregion algebras with trivial centers in lattice gauge theory. This prescription involves adding operators to the algebra that we do not include, so it does not contradict our results in any way. Here we have implemented a graph version of the lattice gauge theory construction along the lines of Kogut and Susskind, but crucially without dynamics, due to the lack of a Hamiltonian. Because of this, our construction does not have anything more to say about time evolution in tensor networks than previous models. It would be interesting to understand how to incorporate a Hamiltonian and the associated time evolution into tensor networks. It would also be interesting to study the commutators of intersecting area operators in our gauged RTN, which in standard AdS/CFT do not commute [33]. Figure 4: Phase transitions in the Rényi mutual information. Here we set \(q=10^{50}\), \(p^{\prime}=10^{-16}\), and \(p^{\prime\prime}=10^{-24}\). We plot the dominant contribution to the Rényi mutual information in the three phases (dashed) as well as the fully analytic interpolating function (solid). ## Acknowledgements We thank Chris Akers, Horacio Casini, David Grabovsky, Daniel Harlow, Kristan Jensen, Don Marolf, and Pratik Rath for interesting discussions. This material is based upon work supported by the Air Force Office of Scientific Research under Award Number FA9550-19-1-0360. This material is also based upon work supported by the U.S. Department of Energy, Office of Science, Office of High Energy Physics, under Award Number DE-SC0011702. SAM would like to thank the Centro de Ciencias de Benasque Pedro Pascal for their hospitality while a portion of this work was completed.
Tensor networks are useful toy models for understanding the entanglement structure of holographic states and for the reconstruction of bulk operators within the entanglement wedge. However, they are restricted to so-called "fixed-area states" with flat entanglement spectra, and this restriction limits their usefulness for understanding general features of holographic entanglement. Here we construct a deformation of the random tensor network that involves a gauge theory on a general graph; the gauge theory feeds gauge-invariant states into the random tensor network. This model satisfies a quantum-corrected Ryu-Takayanagi formula with a nontrivial area operator living in the center of the gauge-invariant algebra. Moreover, from this area operator, the Rényi entropies and the Rényi mutual information
2309.09705
Synth-AC: Enhancing Audio Captioning with Synthetic Supervision
Data-driven approaches hold promise for audio captioning. However, the development of audio captioning methods can be biased due to the limited availability and quality of text-audio data. This paper proposes a SynthAC framework, which leverages recent advances in audio generative models and commonly available text corpus to create synthetic text-audio pairs, thereby enhancing text-audio representation. Specifically, the text-to-audio generation model, i.e., AudioLDM, is used to generate synthetic audio signals with captions from an image captioning dataset. Our SynthAC expands the availability of well-annotated captions from the text-vision domain to audio captioning, thus enhancing text-audio representation by learning relations within synthetic text-audio pairs. Experiments demonstrate that our SynthAC framework can benefit audio captioning models by incorporating well-annotated text corpus from the text-vision domain, offering a promising solution to the challenge caused by data scarcity. Furthermore, SynthAC can be easily adapted to various state-of-the-art methods, leading to substantial performance improvements.
Feiyang Xiao, Qiaoxi Zhu, Jian Guan, Xubo Liu, Haohe Liu, Kejia Zhang, Wenwu Wang
2023-09-18T12:17:16
http://arxiv.org/abs/2309.09705v1
# Synth-ac: Enhancing Audio Captioning with Synthetic Supervision ###### Abstract Data-driven approaches hold promise for audio captioning. However, the development of audio captioning methods can be biased due to the limited availability and quality of text-audio data. This paper proposes a SynthAC framework, which leverages recent advances in audio generative models and commonly available text corpus to create synthetic text-audio pairs, thereby enhancing text-audio representation. Specifically, the text-to-audio generation model, i.e., AudioLDM, is used to generate synthetic audio signals with captions from an image captioning dataset. Our SynthAC expands the availability of well-annotated captions from the text-vision domain to audio captioning, thus enhancing text-audio representation by learning relations within synthetic text-audio pairs. Experiments demonstrate that our SynthAC framework can benefit audio captioning models by incorporating well-annotated text corpus from the text-vision domain, offering a promising solution to the challenge caused by data scarcity. Furthermore, SynthAC can be easily adapted to various state-of-the-art methods, leading to substantial performance improvements. Feiyang Xiao\({}^{1}\), Qiaoxi Zhu\({}^{2}\), Jian Guan\({}^{1}\)1, Xubo Liu\({}^{3}\), Haohe Liu\({}^{3}\), Kejia Zhang\({}^{1}\), Wenwu Wang\({}^{3}\)\({}^{1}\)College of Computer Science and Technology, Harbin Engineering University, Harbin, 150001, China \({}^{2}\)Centre for Audio, Acoustics and Vibration, University of Technology Sydney, Ultimo, NSW, Australia \({}^{3}\)Centre for Vision Speech and Signal Processing, University of Surrey, Guildford, GU2 7XH, UK Multimodal learning, text-audio representation, audio captioning, text-to-audio generation Footnote 1: [https://openai.com/blog/chatgpt/](https://openai.com/blog/chatgpt/) ## 1 Introduction Multimodal text-audio learning can be considered as an imitation of the hearing and natural language understanding ability of human beings [1], which is helpful to down-stream tasks, such as audio captioning, for describing the content of the audio signal by natural language captions [2]. In audio captioning, data-driven deep learning approaches are widely used for learning the relation between audio and text information [3]. Nonetheless, the performance of these methods could be limited by the scarcity of reliable text-audio data [4, 5], due to the significant challenges in capturing and annotating such data. To address the data scarcity issue of audio captioning, efforts have been made to directly increase the number of text-audio pairs by using external datasets [6, 7, 8]. For example, NetEase adopted a large-scale additional training dataset and significantly improved audio captioning performance [6]. However, such large-scale curated datasets are generally non-public, barring their use in research and real-world applications. Recently, the large language model, e.g., ChatGPT1, has been utilised to generate pseudo captions for audio datasets [7, 8]. In [7], a novel audio captioning dataset, i.e., WavCaps, was created by utilising ChatGPT to convert weakly-labelled audio tags into fluent captions, which highly increases the number of text-audio data pairs and improves the performance of various down-stream tasks including audio captioning. In addition, a ChatGPT-based mixup strategy [8] improves the fluency and precision of the predicted captions by generating new captions from randomly paired existing captions in the Clotho dataset [2]. 
Footnote 1: [https://openai.com/blog/chatgpt/](https://openai.com/blog/chatgpt/) Audio captioning is an inherently multimodal audio-language task, while recent advances often focus on enhancing audio datasets with pseudo-textual captions. Although annotated audio-text data is limited at scale, text corpus is nearly infinite to access on the web. Intuitively, an open question is raised: _can we augment text corpus with pseudo-acoustic signals_? The key to this question is to take advantage of recent achievements in text-to-audio generation [9]. By leveraging text-to-audio generation models, we can seamlessly produce synthetic audio data for large-scale textual datasets. Text-to-audio generated synthetic data provide advantages such as having flexible control over the audio concepts and linguistic diversity. Despite its great potential, to the best of our knowledge, leveraging synthetic audio data for improving audio captioning has never been studied in the literature. In this paper, we propose SynthAC, a semi-supervised framework that leverages synthetic supervision from text datasets to enhance the performance of audio captioning systems. Specifically, we consider the text data from the image captioning dataset COCO [10], where the well-annotated image captions describing the visual scenes have latent relations to the semantics described by audio captions used in AudioCaps dataset [11], as shown in Table 1. In the SynthAC framework, we first generate synthetic audio data for image captions using the state-of-the-art audio generation model i.e., AudioLDM [9]. The synthetic audio signals with paired image captions are then used to augment the specialized audio captioning dataset (i.e., AudioCaps [11]). Then, the augmented dataset is used to train the audio captioning model in a semi-supervised setting, and to enhance text-audio representation. We experiment on the audio captioning dataset i.e. AudioCaps [11] with two state-of-the-art models, i.e., GraphAC [12] and P-Transformer [13]. Results show that SynthAC could substantially improve audio captioning performance with the variable scale of real audio-text data. Furthermore, we demonstrate that SynthAC performs on par with off-the-shelf audio captioning systems using less than half of the real data, indicating the great potential of SynthAC to mitigate the data scarcity issue in audio captioning. The generated synthetic text-audio data and caption examples are available at: [https://github.com/LittleFlyingSheep/SynthAC](https://github.com/LittleFlyingSheep/SynthAC). ## 2 Proposed Synth-AC Method This section presents the proposed audio captioning method SynthAC in detail, with the overall framework shown in Figure 1. The well-annotated image captions (e.g. from the COCO dataset [10]) are employed as the condition (input) of the latent diffusion model used in AudioLDM [9] for audio generation, in order to scale up the text-audio dataset AudioCaps. Then, these scaled data are employed to train the audio captioning model (i.e. GraphAC [12]) to enhance text-audio representation learning and improve audio captioning performance. In addition to GraphAC, the proposed SynthAC framework is also adapted to other audio captioning models. ### Text-Audio Data Synthesis with Image Captions To obtain synthetic text-audio pairs, we use the image caption \(\mathbf{t}_{\text{img}}\) as inputs to the pretrained AudioLDM model \(G(\cdot)\) to generate the audio signal \[\mathbf{s}_{\text{syn}}=G(\mathbf{t}_{\text{img}}). 
\tag{1}\] This allows us to obtain the synthetic text-audio pair \(\mathbf{d}_{\text{syn}}=(\mathbf{t}_{\text{img}},\mathbf{s}_{\text{syn}})\). Thus, we can form a synthetic dataset \(D_{\text{syn}}\) composed of all the image captions selected from the image caption dataset, and the corresponding audio clips generated with the text-to-audio (TTA) model. Note that the AudioLDM model we used is pretrained on AudioSet [14], AudioCaps [11], FreeSound2 and BBC Sound Effects3 datasets, which is capable in establishing the relation between the text descriptions and acoustic scenes and events for audio generation. Therefore, we can use such a model to generate the synthetic audio signal, with the image caption as the condition, utilising the implicit relation between audio and visual scene. Footnote 2: [https://freesound.org/](https://freesound.org/) Footnote 3: [https://sound-effects.bbcrewind.co.uk/search](https://sound-effects.bbcrewind.co.uk/search) ### Audio Captioning Model Training We employ GraphAC [12] developed in our recent work as the audio captioning model in the proposed SynthAC framework, denoted as Synth-GraphAC. The GraphAC method employs an encoder-decoder structure, where the audio encoder is used to extract the audio feature and the text decoder is used to predict captions from the audio feature. The encoder introduces a graph attention module to capture the contextual temporal information in the audio feature extracted by a PANNs module [15]. The text decoder uses a two-layer transformer module with a Word2Vec model [16] for caption prediction from the audio feature. With the synthetic data \(D_{\text{syn}}\) generated with the method \begin{table} \begin{tabular}{c|l} \hline \hline Image Caption in COCO [10] & Audio Caption in AudioCaps [11] \\ \hline A black **car** is near someone riding & A man talking and a **car** passing \\ a bike & by loudly \\ \hline **A barking dog** looks over a ledge & **Dog barking** and growing \\ lined with Christmas lights & **A cat sleeping** on a rock near a bike & **A cat sleeping** and snores \\ \hline \hline \end{tabular} \end{table} Table 1: Examples to show the latent relation between image captions and audio captions. Figure 1: The proposed SynthAC framework includes two stages. (1) text-audio data synthesis with image captions, where synthetic text-audio data pairs are generated from well-annotated image captions via a text-to-audio model (i.e., AudioLDM), and (2) audio captioning model training, where the audio captioning model (i.e., GraphAC [12]) is trained with the synthetic text-audio pairs enriched training data. discussed earlier, we can augment the ground truth dataset \(D_{\text{gt}}\), e.g. AudioCaps, and obtain the augmented training set \[D_{\text{T}}=D_{\text{syn}}\cup D_{\text{gt}}, \tag{2}\] where \(D_{\text{T}}\) denotes the augmented training set, where each text-audio pair is denoted as \(\mathbf{d}=(\mathbf{t},\mathbf{s})\), with \(\mathbf{t}\) being the caption and \(\mathbf{s}\) being the corresponding audio signal. During model training, an input audio \(\mathbf{s}\) is fed into the audio captioning model to generate a predicted caption, \[\mathbf{\hat{t}}=AC(\mathbf{s}), \tag{3}\] where \(AC(\cdot)\) denotes the audio captioning model, i.e., GraphAC, and \(\mathbf{\hat{t}}\) denotes the predicted caption. Then the cross entropy (CE) loss function with label smoothing [12, 13] is used to optimize the audio captioning model \[\mathcal{L}=\text{CE}(\mathbf{\hat{t}},\mathbf{t}). 
\tag{4}\] With the synthetic data to enhance the text-audio representation, our proposed SynthAC can further improve the performance of the audio captioning models. In addition to GraphAC, the proposed SynthAC method can be easily adapted to different audio captioning models to improve their performance. In the following experiment, another audio captioning model, i.e., P-Transformer [13] is also employed in the proposed SynthAC framework, denoted as Synth-P-Transformer, to demonstrate the effectiveness of our proposed method further, as detailed in Section 3.2. ## 3 Experiments ### Experimental Setup **Dataset:** For text-audio data synthesis, we employ the well-annotated image captions from the widely used text-visual application dataset, i.e., COCO [10], which provides 414,113 high quality manually annotated image captions to describe visual scenes. Here, 25,000 individual image captions are randomly selected from the COCO dataset, which are used as prompts of the AudioLDM model to obtain a total number of 25,000 synthetic text-audio pairs to enhance the text-audio representation for model training. For audio captioning model training, we employ the widely used audio captioning dataset, i.e., AudioCaps [11], as the ground truth dataset. The development and validation splits of AudioCaps with 51,744 text-audio pairs are combined for model training, following [12, 13, 17], and the evaluation split is used for evaluation. The sampling rate of audio signals is 16kHz, as the default setting of AudioLDM. **Implementation Details:** In data synthesis stage, we employ the "audioldm-l-full" version of AudioLDM model4 for synthetic audio generation. The length of synthetic audio is 10 seconds, consistent with those in AudioCaps. During the model training, the batch size is set as 16 for both SynthGraphAC and Synth-P-Transformer. The AdamW optimiser [18] with a learning rate of 0.001 is used for model training. SpecAugment is used to enhance the generalisation for audio captioning models following [12, 13, 17]. Footnote 4: [https://github.com/haoheliu/AudioLDM](https://github.com/haoheliu/AudioLDM) **Evaluation Metrics:** For performance evaluation, BLEU\({}_{n}\), ROUGE\({}_{l}\), METEOR, CIDE\({}_{r}\), SPICE, SPIDE\({}_{r}\) and SPIDE\({}_{r}\)-FL, are employed as evaluation metrics in our experiments, following [2, 19]. BLEU\({}_{n}\), ROUGE\({}_{l}\) and METEOR measure the matching degree between the prediction and ground truth caption on word level [13]. CIDE\({}_{r}\) measures the fluency of the caption [20]. SPICE measures the semantic proposition between the predicted caption and the reference caption [21]. SPIDE\({}_{r}\) is the average value of the CIDE\({}_{r}\) and SPICE metrics, which balances the measure of the fluency and the semantic information in the caption evaluation [22]. SPIDE\({}_{r}\)-FL is a recently proposed metric introducing fluency error based penalty for improving the robustness of the evaluation [23]. ### Effect of the Proposed SynthAC To show the effectiveness of the proposed SynthAC, we conduct experiments to compare the proposed method with audio captioning methods only trained with the ground truth dataset, including GPT-Similar [24], TopDown-AlignedAtt [11], P-Transformer [13], GraphAC [12] and P-LocalAFT [17]. GPT-Similar is a typical captioning method using similar caption retrieval to predict caption. TopDown-AlignedAtt is the study that proposed AudioCaps [11]. P-Transformer, GraphAC and P-LocalAFT are state-of-the-art methods in audio captioning. 
To validate the proposed method, P-Transformer and GraphAC are employed as the captioning model in the SynthAC framework, denoted as Synth-P-Transformer and Synth-GraphAC, respectively. Table 2 shows that, with the enhanced text-audio relations by the synthetic data, the proposed SynthAC framework can improve the audio captioning models' performance, which can be observed from the comparison between SynthGraphAC and GraphAC, as well as the comparison between Synth-P-Transformer and P-Transformer. In addition, the examples of the predicted captions using P-Transformer and Synth-P-Transformer are provided in Table 3 to show the improvement on captioning results with our SynthAC. From Table 3, we can see that the "blades running" is wrongly interpreted as "gun fires rapidly" by P-Transformer, whereas Synth-P-Transformer precisely predicts this concept with the enhanced text-audio representation, as illustrated in example 1. For example 2, Synth-P-Transformer precisely describes the contextual information (i.e., "followed by"), and obtains exactly the same predicted caption as the reference. Regarding example 3, the acoustic event "spraying" is missed in the prediction of P-Transformer but predicted correctly by Synth-P-Transformer. The improved predictions can be also seen in terms of the word precision metric BLEU\({}_{n}\), the fluency metric CIDE\({}_{r}\), and the kernel semantic metric SPIDE\({}_{r}\)-FL, as shown in Table 2. Meanwhile, the proposed SynthAC based methods also outperform other methods, demonstrating the effectiveness of the proposed SynthAC framework. Furthermore, the comparison results show that, with our proposed SynthAC framework, we can use the well-annotated captions from the text-vision multimodal domain to enhance the text-audio representation learning in multimodal audio captioning and reduce the cost of obtaining text-audio data. ### Performance Evaluation with Different Amounts of Real Data We evaluate the proposed SynthAC (i.e., Synth-P-Transformer) by using different amounts of ground truth data for model training, i.e. 12.5%, 25%, 37.5% and 50% of AudioCaps dataset, respectively. We also compare the performance of the model, trained with and without synthetic data. The results are shown in Table 4. Table 4 shows that the proposed SynthAC can significantly improve the captioning performance across the different amounts of ground truth data used for model training, especially with a very limited amount of data, i.e., 12.5% of ground truth. With the enhanced text-audio relation by adopting our proposed SynthAC framework, Synth-P-Transformer outperforms the GPT-Similar with the complete training data in Table 2. In addition, the Synth-P-Transformer using only 37.5% ground truth data has better captioning performance than the P-Transformer with the complete training data in Table 2. The results further demonstrate the effectiveness of our proposed SynthAC framework, which provides a solution for audio captioning with only limited text-audio data pairs. Moreover, the above results verify that properly using well-annotated textual information from text corpus in a multimodal learning task (i.e., text-vision domain) can benefit another multimodal learning task (i.e., audio captioning) suffering from data scarcity. 
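Before concluding, the two-stage recipe evaluated above can be summarized in a short sketch. The function and variable names below are illustrative placeholders, not the actual AudioLDM or GraphAC interfaces; the default of 25,000 matches the number of image captions sampled in this work.

```python
import random

def build_synthac_training_set(image_captions, ground_truth_pairs, tta_model, n_syn=25000):
    """Stage 1 plus Eq. (2): create synthetic (caption, audio) pairs from image captions
    with a text-to-audio model, then merge them with the real text-audio pairs."""
    selected = random.sample(image_captions, min(n_syn, len(image_captions)))
    d_syn = [(t, tta_model(t)) for t in selected]      # Eq. (1): s_syn = G(t_img)
    d_train = list(ground_truth_pairs) + d_syn         # Eq. (2): D_T = D_syn U D_gt
    random.shuffle(d_train)
    return d_train

# Toy stand-ins so the sketch runs on its own; a real setup would pass AudioLDM
# as `tta_model` and the AudioCaps pairs as `ground_truth_pairs`.
fake_tta = lambda caption: f"<10 s waveform conditioned on: {caption}>"
coco_captions = ["A barking dog looks over a ledge", "A black car is near someone riding a bike"]
audiocaps_pairs = [("Dog barking and growling", "<real waveform>")]

print(build_synthac_training_set(coco_captions, audiocaps_pairs, fake_tta, n_syn=2))
```

Stage two then trains the captioning model (GraphAC or P-Transformer) on the merged set with the label-smoothed cross-entropy loss of Eqs. (3)-(4).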
## 4 Conclusion We have presented an audio captioning framework with synthetic supervision, leveraging well-annotated text-vision captions in the image captioning dataset and text-to-audio generation model to enhance the learning of text-audio representation and improve audio captioning performance. Experiments show the proposed method's effectiveness, which offers improved performance compared to the baseline methods, can be easily adapted to various state-of-the-art methods with substantial performance improvements, and can maintain performance with a much-reduced amount of actual text-audio data, offering a promising solution to the challenge of data scarcity. \begin{table} \begin{tabular}{c c c c c c} \hline \hline Real Data & Synthetic & CIDE\({}_{r}\) & SPICE & SPIDE\({}_{r}\) & SPIDE\({}_{r}\) & SPIDE\({}_{r}\)-FL \\ \hline \multirow{2}{*}{12.5\%} & ✗ & 49.9 & 14.3 & 32.1 & 29.9 \\ & ✓ & **55.2** & **15.1** & **35.2** & **33.1** \\ \hline \multirow{2}{*}{25.0\%} & ✗ & 56.1 & 13.6 & 34.8 & 33.8 \\ & ✓ & **58.4** & **15.5** & **36.9** & **35.0** \\ \hline \multirow{2}{*}{37.5\%} & ✗ & 58.1 & 15.1 & 36.6 & 35.3 \\ & ✓ & **61.1** & **15.9** & **38.5** & **37.6** \\ \hline \multirow{2}{*}{50.0\%} & ✗ & 57.6 & 16.2 & 36.9 & 34.4 \\ & ✓ & **63.8** & **16.7** & **40.2** & **38.3** \\ \hline \hline \end{tabular} \end{table} Table 4: Performance of Synth-P-Transformer trained with different amounts of the ground truth dataset (AudioCaps) and with or without synthetic data. \begin{table} \begin{tabular}{c c c c c c c c c c c} \hline \hline Model & BLEU\({}_{1}\) & BLEU\({}_{2}\) & BLEU\({}_{3}\) & BLEU\({}_{4}\) & ROUGE\({}_{l}\) & METEOR & CIDE\({}_{r}\) & SPICE & SPIDE\({}_{r}\) & SPIDE\({}_{r}\)-FL \\ \hline GPT-Similar [24] & 63.8 & 45.8 & 31.8 & 20.4 & 43.4 & 19.9 & 50.3 & 13.9 & 32.1 & - \\ TopDown-AlignedAtt [11] & 61.4 & 44.6 & 31.7 & 21.9 & 45.0 & 20.3 & 59.3 & 14.4 & 36.9 & - \\ P-LocalAFT [17] & 66.0 & 47.9 & 34.6 & 24.6 & 46.4 & 22.3 & 64.1 & 16.6 & 40.4 & 40.0 \\ \hline P-Transformer [13] & 53.4 & 38.9 & 27.1 & 18.0 & 44.2 & 21.5 & 57.7 & 16.6 & 37.1 & 35.9 \\ **Synth-P-Transformer** & **67.7** & **49.9** & **36.0** & **25.1** & **46.8** & **22.7** & **63.9** & **16.7** & **40.3** & **39.4** \\ \hline GraphAC [12] & 64.5 & 47.8 & 34.3 & 23.7 & 46.1 & **22.4** & 64.4 & **16.7** & 40.5 & 39.3 \\ **Synth-GraphAC** & **66.5** & **48.7** & **35.2** & **24.7** & **46.4** & **22.4** & **65.6** & 16.5 & **41.0** & **40.4** \\ \hline \hline \end{tabular} \end{table} Table 2: Performance comparison on the evaluation split of AudioCaps. 
\begin{table} \begin{tabular}{c|c|c} \hline \hline Model & Audio Caption \\ \hline \multirow{2}{*}{1} & Reference caption & A helicopter **blades running** \\ \cline{2-2} & P-Transformer [13] & A helicopter machine _gun fires_ \\ \cline{2-2} & **Synth-P-Transformer** & Helicopter **blades spinning** \\ \hline \multirow{2}{*}{2} & Reference & A man talking **followed by** a toilet \\ & **P-Transformer** [13] & A man speaking a toilet flushing \\ \cline{2-2} & **Synth-P-Transformer** & A man speaks **followed by** a toilet \\ \cline{2-2} & **Synth-P-Transformer** & A man speaks **followed by** a toilet \\ \cline{2-2} & **Reference caption** & A woman speaks with some rattling and some **spraying** \\ \hline \multirow{2}{*}{3} & P-Transformer [13] & An adult female is speaking \\ \cline{2-2} & **Synth-P-Transformer** & A woman speaking followed by **spraying** \\ \hline \hline \end{tabular} \end{table} Table 3: Illustration for audio captioning with or without SynthAC framework.
Data-driven approaches hold promise for audio captioning. However, the development of audio captioning methods can be biased due to the limited availability and quality of text-audio data. This paper proposes the SynthAC framework, which leverages recent advances in audio generative models and a commonly available text corpus to create synthetic text-audio pairs, thereby enhancing text-audio representation. Specifically, the text-to-audio generation model AudioLDM is used to generate synthetic audio signals from the captions of an image captioning dataset. SynthAC expands the availability of well-annotated captions from the text-vision domain to audio captioning, enhancing text-audio representation by learning the relations within synthetic text-audio pairs. Experiments show that, to overcome the challenge posed by data scarcity, the SynthAC framework brings well-annotated text corpora from the text-vision domain into audio captioning models
2307.16389
STL: A Signed and Truncated Logarithm Activation Function for Neural Networks
Activation functions play an essential role in neural networks. They provide the non-linearity for the networks. Therefore, their properties are important for neural networks' accuracy and running performance. In this paper, we present a novel signed and truncated logarithm function as activation function. The proposed activation function has significantly better mathematical properties, such as being odd function, monotone, differentiable, having unbounded value range, and a continuous nonzero gradient. These properties make it an excellent choice as an activation function. We compare it with other well-known activation functions in several well-known neural networks. The results confirm that it is the state-of-the-art. The suggested activation function can be applied in a large range of neural networks where activation functions are necessary.
Yuanhao Gong
2023-07-31T03:41:14
http://arxiv.org/abs/2307.16389v1
# STL: A Signed and Truncated Logarithm Activation Function for Neural Networks ###### Abstract Activation functions play an essential role in neural networks. They provide the non-linearity for the networks. Therefore, their properties are important for neural networks' accuracy and running performance. In this paper, we present a novel signed and truncated logarithm function as activation function. The proposed activation function has significantly better mathematical properties, such as being odd function, monotone, differentiable, having unbounded value range, and a continuous nonzero gradient. These properties make it an excellent choice as an activation function. We compare it with other well-known activation functions in several well-known neural networks. The results confirm that it is the state-of-the-art. The suggested activation function can be applied in a large range of neural networks where activation functions are necessary. activation function, log function, STL, neural network, continuous. ## I Introduction Activation functions are a vital component of neural networks that play a crucial role in their success. They help to introduce non-linearity into the output of a neural network and make it capable of modeling complex relationships between inputs and outputs. Without activation functions, a neural network would be limited to linear operations, and complex patterns would be challenging to identify. Their introduced non-linearity empowers it to learn intricate patterns that would be otherwise impossible to learn. There are several activation functions available, such as Sigmoid and ReLU, each with its unique strengths and weaknesses. By selecting the appropriate activation function for the neural network being developed, designers can optimize the network's performance and ensure its success. For instance, the \(sigmoid\) function is ideal for binary classification tasks since it returns a value between 0 and 1, which makes it easy to interpret. In contrast, the \(ReLU\) function is highly efficient at handling sparse input data, making it particularly useful in deep neural networks [1]. Similarly, the \(tanh\) function produces output values between -1 and 1, making it an excellent choice for data that has negative values. More details are in Section I-A. It is crucial to note that selecting the right activation function is not a one-size-fits-all approach. The choice of the activation function depends on several factors, such as the type of data being used, the specific requirements of the neural network, and the nature of the problem being solved. By carefully selecting the appropriate activation function, designers can enhance the neural network's performance, making it more accurate and efficient in identifying complex patterns. ### _Related Work_ There are some well-known activation functions available. The famous \(Sigmoid\) function is defined as \[f_{1}(x)=\frac{1}{1+\exp(-x)}\,, \tag{1}\] whose values are bounded in \((0,1)\). The popular \(ReLU\) is defined as \[f_{2}(x)=\max(x,0)\,, \tag{2}\] whose values are bounded in \((0,+\infty)\). It has some variants such as \(PReLU\), which is defined as \[f_{3}(x)=\left\{\begin{array}{ll}x,&\mathrm{when}\,x>0\\ \alpha x,&\mathrm{else}\end{array}\right.\,, \tag{3}\] whose values are not bounded. Another variant \(ELU\) is \[f_{4}(x)=\left\{\begin{array}{ll}x,&\mathrm{when}\,x>0\\ \alpha(e^{x}-1),&\mathrm{else}\end{array}\right.\,. 
\tag{4}\] Another activation \(swish\) is defined as \[f_{5}(x)=x*Sigmoid(x)\,, \tag{5}\] which has a negative lower bound. The \(\tanh\) function is \[f_{6}(x)=\frac{e^{x}-e^{-x}}{e^{x}+e^{-x}}\,, \tag{6}\] whose values are in \((-1,1)\). The \(softsign\) function is \[f_{7}(x)=\frac{x}{|x|+1}\,, \tag{7}\] whose values are also in \((-1,1)\). The well-known \(softmax\) is \[f_{8}(x_{i})=\frac{e^{x_{i}}}{\sum_{i}e^{x_{i}}}\,, \tag{8}\] which is frequently used in classification tasks. Very recently, \(f_{8}\) has been shown to lead to numerical issues in [2]. An activation function in [3] named the natural logarithm rectified linear unit (NLeLU) is defined as \[f_{9}(x)=\log(\alpha*\max(x,0)+1)\,, \tag{9}\] which is non-negative. Recently, an activation function named Serf was defined as [4] \[f_{10}(x)=x*\mathrm{erf}(\log(e^{x}+1))\,, \tag{10}\] where \(\mathrm{erf}\) is the error function. Fig. 1: The proposed activation function (left) and its gradient (right) with parameter \(\alpha=1\). This activation function has several desired properties such as being odd, monotone, differentiable and having a positive gradient. Its gradient function has a \(\pi\) shape and is always positive. There are more activation functions than we can list here, and researchers might design their own activation functions for specific tasks. In the following subsection, we analyze the mathematical properties of these functions and discuss their limitations. ### _Analysis_ As mentioned, these activation functions provide the non-linearity for the neural networks. Therefore, their behavior is fundamentally important for the networks. There are two types of activation functions: one is centered at the origin, \(f(0)=0\), and the other satisfies \(f(x)\geq 0\). They correspond to different tasks such as regression and classification. In general, a good activation function has to satisfy several properties, and we list some of them below. #### II-B1 odd function We believe that the activation function should be an odd function, \(f(-x)=-f(x)\) and \(f(0)=0\). From a mathematical point of view, it is strange to prefer positive values over negative values, and such a preference might implicitly bias the learning system. (This bias should be distinguished from the bias term in the linear transformation.) To eliminate such bias, it is better to have an odd function as an activation function. #### II-B2 monotone function We also believe that the activation function should be monotone and, in most cases, non-decreasing. Such a property preserves the order of the inputs; in other words, a larger input is non-linearly mapped to a larger output. This order-preserving property is desirable, and a monotone function is usually a bijective mapping, which means the output does not lose information from the input. One example is illustrated in Fig. 2. Be aware that the distance between two inputs is non-linearly scaled in the output. #### II-B3 differentiable Another property that an activation function should have is differentiability. With such a property, the gradient of the activation function is continuous, so the gradient function has no dramatic change in a small neighborhood. The continuity of the gradient function guarantees numerical stability when performing the back-propagation algorithm. #### II-B4 unbounded value The value of an activation function should fully fill the interval \((-\infty,+\infty)\).
In contrast, a function with bounded values such as \(softsign\) yields only a small difference when two inputs are both large. For example, \(softsign(1000)\approx 1\approx softsign(2000)\), although \(1000\) and \(2000\) have a significant numerical difference. In other words, the \(softsign\) activation function cannot distinguish the two inputs \(1000\) and \(2000\), showing its limitation. A similar thing happens for two negative inputs in the \(ReLU\) function. #### II-B5 continuous gradient On the other hand, the gradient of the activation function should be continuous and nonzero. A zero gradient (also known as a vanishing gradient) is problematic when the back-propagation algorithm is performed. In line with the monotone property, we expect the gradient to be continuous. The continuity guarantees that there is no dramatic change in a small neighborhood, which helps improve the numerical stability of the neural networks. #### II-B6 computation efficiency The computation efficiency must be taken into account because the activation function and its gradient are evaluated many times during the training and inference of neural networks. Therefore, the running time of the networks can be affected by the activation function's computation efficiency. We consider these six properties as desired properties of activation functions, and we will evaluate previous activation functions in these six aspects. ### _Motivation and Contribution_ Previous activation functions cannot satisfy all of the above six aspects. This motivates us to construct a novel function that satisfies these rules. Our contributions include the following * we present a novel activation function, which is odd, monotone, differentiable, has unbounded values and bounded gradients. The function and its gradient can be efficiently evaluated. * we analyze the properties of this function and argue why it is preferred as an activation function. * we numerically confirm that it performs better than others for many well-known neural networks. ## II Signed and Truncated Logarithm Function In this section, we present an activation function that fully satisfies the above six preferred properties. More specifically, we define an activation function as \[f_{our}(x)=\left\{\begin{array}{ll}\alpha x,&\mathrm{when}\,|x|\leq 1\\ \alpha\delta(x)(\log(|x|)+1),&\mathrm{else}\end{array}\right.\,, \tag{11}\] where \(\delta(x)\) is the sign of \(x\) (if \(x>0\), \(\delta(x)=1\); if \(x<0\), \(\delta(x)=-1\)). The scalar parameter \(\alpha>0\) is usually set to \(1\). Since the activation function is usually followed by a linear transformation, the value of \(\alpha\) is not important; it will be automatically adjusted by the learnable parameters. The \(\log\) function is truncated when \(|x|\leq 1\) because its gradient increases dramatically in that interval. Such truncation avoids the numerical issue around the origin. Fig. 2: A monotone function is a bijective function and thus the output \(f(x)\) has a unique corresponding input \(x_{1}\). In contrast, a non-bijective function might lose such uniqueness (\(f(x_{2})=f(x_{3})\)). The gradient of this signed and truncated function is \[f^{\prime}_{our}(x)=\left\{\begin{array}{ll}\alpha,&\mathrm{when}\left|x\right|\leq 1\\ \frac{\alpha}{\left|x\right|},&\mathrm{else}\end{array}\right.. \tag{12}\] It follows that \(0<f^{\prime}_{our}(x)\leq\alpha\). Therefore, the gradient never vanishes. The gradient \(f^{\prime}_{our}(x)\) is also continuous, indicating its numerical stability. The continuity means that there is no dramatic change in a small neighborhood. We name this function the Signed and Truncated Logarithm (STL) function, and we set the scale parameter \(\alpha=1\) by default. This function and its gradient are illustrated in Fig. 1. ### _Mathematical Properties_ The proposed STL satisfies the six preferred properties from the previous section. * It is not difficult to show that STL is odd, \(f_{our}(-x)=-f_{our}(x)\). Such a property guarantees that there is no bias from the activation function itself. * STL is increasing (monotone) because its gradient is always positive. Therefore, it is a bijective mapping. The monotone bijective mapping is important from an information point of view: no information is collapsed or generated, and the relative order of the inputs is preserved in the output. * STL is differentiable. Such smoothness guarantees that there is no dramatic change in a small neighborhood. * STL has an unbounded value range. This means that STL can still distinguish two input values even when they are large. * Its gradient has a bounded value range. The nonzero-gradient property guarantees that the vanishing gradient issue is not caused by the activation function itself, improving the networks' numerical stability. * STL and its gradient can be efficiently computed. ### _Computation Properties_ Another advantage of the proposed STL is its numerical efficiency: STL and its gradient can be efficiently computed. For example, when \(\left|x\right|>1\), STL can be evaluated via the \(\log_{2}\) function \[f_{our}=\alpha\delta(x)(\frac{\log_{2}(\left|x\right|)}{\log_{2}(e)}+1)=\beta\log_{2}(\left|x\right|)+\alpha\delta(x)\,, \tag{13}\] where \(\beta=\alpha\delta(x)/\log_{2}(e)\) is a signed constant. The \(\log_{2}\) function has a fast approximation scheme, thanks to the binary representation of \(x\). More specifically, a 32-bit float number \(x\) can be represented by a sign bit \(s\), an 8-bit exponent \(E\) and the fraction \(V\), \[x=(-1)^{s}*2^{E-127}*(1+V)\,. \tag{14}\] Therefore, \(\log_{2}(x)\) (since \(x>0\) we have \(s=0\)) is \[\begin{split}\log_{2}(x)&=\log_{2}(1)+E-127+\log_{2}(1+V)\\ &=E-127+\log_{2}(1+V)\,.\end{split} \tag{15}\] The exponent \(E\) is easy to extract. We only need to approximate the values of \(\log_{2}(x)\) where \(1\leq x<2\) via polynomials, for example, \((-0.344845x+2.024658)x-1.674873\), and we can also store these values in a lookup table to further accelerate the computation. Moreover, the gradient of STL can also be efficiently computed via Eq. 12. The gradient decreases as \(\left|x\right|\) increases, and the product \(\left|x\right|*f^{\prime}_{our}(x)=\alpha\) is constant when \(\left|x\right|>1\). ### _The Scale Parameter \(\alpha\)_ If we fix the scale parameter \(\alpha=1\), it does not affect the performance of the neural network much, because the activation function is usually followed by a linear transformation which can absorb this scale parameter. Therefore, the value of this parameter is not so important in practice, and it is usually set to \(1\) for simplicity. If we instead let \(0<\alpha<1\), we can prove that the proposed STL is a contraction mapping (\(\left|f^{\prime}_{our}(x)\right|<1\)), which guarantees the existence of a unique fixed point of the activation function. A fixed point is further related to equilibrium-state modeling in machine learning (in practice, usually the zero gradient of the loss function).
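As a minimal, illustrative NumPy sketch of Eqs. (11)-(15) (not the authors' implementation), the following code evaluates STL, checks the analytic gradient of Eq. (12) against finite differences, and mimics the bit-level \(\log_{2}\) idea with `np.frexp` together with the quadratic polynomial quoted above:

```python
import numpy as np

def stl(x, alpha=1.0):
    """Signed and truncated logarithm activation, Eq. (11)."""
    ax = np.maximum(np.abs(x), 1.0)      # clamp so the log branch is always well defined
    return np.where(np.abs(x) <= 1.0, alpha * x, alpha * np.sign(x) * (np.log(ax) + 1.0))

def stl_grad(x, alpha=1.0):
    """Analytic gradient, Eq. (12): alpha on [-1, 1], alpha/|x| outside."""
    return np.where(np.abs(x) <= 1.0, alpha, alpha / np.maximum(np.abs(x), 1.0))

def fast_log2(x):
    """Rough log2 via the float's binary representation (Eqs. (14)-(15)):
    exponent read off with frexp, mantissa handled by the quoted quadratic."""
    m, e = np.frexp(x)                   # x = m * 2**e with m in [0.5, 1)
    m, e = 2.0 * m, e - 1.0              # rescale so the mantissa lies in [1, 2)
    return e + (-0.344845 * m + 2.024658) * m - 1.674873

x = np.array([-100.0, -2.0, -0.3, 0.4, 3.0, 500.0])
num_grad = (stl(x + 1e-6) - stl(x - 1e-6)) / 2e-6
print(np.max(np.abs(num_grad - stl_grad(x))))                      # matches Eq. (12)
print(np.max(np.abs(fast_log2(np.abs(x)) - np.log2(np.abs(x)))))   # error around 5e-3
```

A production implementation would typically apply the same idea at the bit level in C or CUDA, but the size of the polynomial approximation error (a few times \(10^{-3}\)) is already visible in this sketch.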
### _Comparison with Others_ We compared the proposed STL with other activation functions; the result is summarized in Table I. In terms of the six desired properties, STL satisfies all of them, showing its advantages as an activation function. There might be other desired properties for an activation function, but as far as we know, these six properties are appropriate and can be used as metrics for evaluating activation functions. ## III Experiments In this section, we numerically show the advantage of the proposed STL activation function in several well-known neural networks on the CIFAR datasets. We compare STL with ReLU [1], Mish [5] and Serf [4]. In the future, we will compare STL with more activation functions. ### _Cifar-10_ On CIFAR-10, we compared the proposed STL with ReLU, Mish and Serf in different neural networks, including SqueezeNet, Resnet-50, WideResnet-50-2, ShuffleNet-v2, ResNeXt-50, Inception-v3, DenseNet-121, MobileNet-v2, and EfficientNet-B0. In these networks, we only change the activation function. The top-1 % accuracy values are shown in Table II. The results confirm that the proposed STL establishes a new state-of-the-art activation function. ### _Cifar-100_ On CIFAR-100, we compared the proposed STL with ReLU, Mish and Serf in different neural networks, including Resnet-164, WideResnet-28-10, DenseNet-40-12, and Inception-v3. In these networks, we only change the activation function. The top-1 % accuracy values are shown in Table III. The results confirm that the proposed STL establishes a new state-of-the-art activation function. ### _Running Time_ We sampled 20,000 random numbers in the interval \([-10000,10000]\) and evaluated them with \(ReLU\), \(softsign\) and STL, respectively. We compare STL with these two functions because they have low computation cost. The running times for these functions are 0.0054, 0.0054 and 0.0060 seconds, respectively. The proposed STL is implemented with the naive \(\log_{2}\) function, without the numerical acceleration in Eq. 15, which could further boost the computational performance. The \(\log_{2}\) function is available in most programming languages such as Python, C++ and MATLAB. Therefore, the proposed STL function can be easily implemented in these languages. ## IV Conclusion In this paper, we propose a new activation function for neural networks named the signed and truncated logarithm (STL) function. We have found that this function has several advantages over other activation functions. Firstly, the STL function is odd, which means that it does not introduce any bias by itself. This is important because it ensures that the neural network is not skewed towards one direction or the other. Secondly, the function is monotonic, meaning that the relative order of the input is preserved in the output. This property is important in image recognition and classification, as well as in other applications such as speech recognition and natural language processing. Thirdly, the function is differentiable, which indicates that it is numerically stable. This means that it can be used in a wide range of applications without encountering numerical instability issues. The differentiability property allows for efficient training of neural networks. Fourthly, the function has an unbounded value range, which means that it can distinguish between two very large inputs. This property is particularly useful in applications where the input values can vary widely, such as in financial modeling or scientific simulations.
Fifthly, the function has a continuous gradient, which keeps gradient computations stable. This property is important in deep neural networks, where the gradients can become unstable during training. The continuous gradient of the STL function helps ensure that the gradients remain stable, leading to faster and more stable convergence during training. Finally, the STL function and its gradient can be efficiently computed in common programming languages. This makes it easy to implement the function in existing neural network frameworks and to use it in a wide range of applications. In conclusion, the STL function has significant advantages over other activation functions in neural networks. Its properties of oddness, monotonicity, differentiability, unbounded value range, continuous gradient, and computational efficiency make it an excellent choice for a wide range of applications. We believe that the STL function will play an important role in neural networks and machine learning [6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35].
Activation functions play an important role in neural networks, providing the networks with non-linearity. Their properties are therefore important for the accuracy and running performance of neural networks. This paper proposes a new signed and truncated logarithm function as an activation function. The proposed activation function has excellent mathematical properties, such as being an odd function, monotone, differentiable, having an unbounded value range, and a continuous nonzero gradient. These properties make it an excellent choice as an activation function. We compared it with other well-known activation functions in several well-known neural networks, and the results confirm that it is the state-of-the-art activation function. The proposed activation function can be applied in a wide range of neural networks where activation functions are necessary.
2305.00392
Bayesian Inference of Supernova Neutrino Spectra with Multiple Detectors
We implement the Bayesian inference to retrieve energy spectra of all neutrinos from a galactic core-collapse supernova (CCSN). To achieve high statistics and full sensitivity to all flavours of neutrinos, we adopt a combination of several reaction channels from different large-scale neutrino observatories, namely inverse beta decay on proton and elastic scattering on electron from Hyper-Kamiokande (Hyper-K), charged current absorption on Argon from Deep Underground Neutrino Experiment (DUNE) and coherent elastic scattering on Lead from RES-NOVA. Assuming no neutrino oscillation or specific oscillation models, we obtain mock data for each channel through Poisson processes with the predictions, for a typical source distance of 10 kpc in our Galaxy, and then evaluate the probability distributions for all spectral parameters of theoretical neutrino spectrum model with Bayes' theorem. Although the results for either the electron-neutrinos or electron-antineutrinos reserve relatively large uncertainties (according to the neutrino mass hierarchy), a precision of a few percent (i.e., $\pm 1 \% \sim \pm 4 \%$ at a credible interval of $2 \sigma$) is achieved for primary spectral parameters (e.g., mean energy and total emitted energy) of other neutrino species. Moreover, the correlation coefficients between different parameters are computed as well and interesting patterns are found. Especially, the mixing-induced correlations are sensitive to the neutrino mass hierarchy, which potentially makes it a brand new probe to determine the neutrino mass hierarchy in the detection of galactic supernova neutrinos. Finally, we discuss the origin of such correlation patterns and perspectives for further improvement on our results.
Xu-Run Huang, Chuan-Le Sun, Lie-Wen Chen, Jun Gao
2023-04-30T05:26:21
http://arxiv.org/abs/2305.00392v2
# Bayesian Inference of Supernova Neutrino Spectra with Multiple Detectors ###### Abstract We implement the Bayesian inference to retrieve energy spectra of all neutrinos from a galactic core-collapse supernova (CCSN). To achieve high statistics and full sensitivity to all flavours of neutrinos, we adopt a combination of several reaction channels from different large-scale neutrino observatories, namely inverse beta decay on proton and elastic scattering on electron from Hyper-Kamiokande (Hyper-K), charged current absorption on Argon from Deep Underground Neutrino Experiment (DUNE) and coherent elastic scattering on Lead from RES-NOVA. Assuming no neutrino oscillation or specific oscillation models, we obtain mock data for each channel through Poisson processes with the predictions, for a typical source distance of 10 kpc in our Galaxy, and then evaluate the probability distributions for all spectral parameters of theoretical neutrino spectrum model with Bayes' theorem. Although the results for either the electron-neutrinos or electron-antineutrinos reserve relatively large uncertainties (according to the neutrino mass hierarchy), a precision of a few percent (i.e., \(\pm 1\%\sim\pm 4\%\) at a credible interval of \(2\sigma\)) is achieved for primary spectral parameters (e.g., mean energy and total emitted energy) of other neutrino species. Moreover, the correlation coefficients between different parameters are computed as well and interesting patterns are found. Especially, the mixing-induced correlations are sensitive to the neutrino mass hierarchy, which potentially makes it a brand new probe to determine the neutrino mass hierarchy in the detection of galactic supernova neutrinos. Finally, we discuss the origin of such correlation patterns and perspectives for further improvement on our results. a,b]Xu-Run Huang, a,b]Chuan-Le Sun, a]Lie-Wen Chen, a]Jun Gao a]School of Physics and Astronomy, Shanghai Key Laboratory for Particle Physics and Cosmology, and Key Laboratory for Particle Astrophysics and Cosmology (MOE), Shanghai Jiao Tong University, Shanghai 200240, China b]Department of Physics, The Chinese University of Hong Kong, Shatin, N.T., Hong Kong S.A.R., China [email protected] [email protected] [email protected] [email protected] supernova neutrinos, Bayesian analysis ## 1 Introduction The epochal detection of neutrino signals of SN 1987A, deriving from the Large Magellanic Cloud (\(\sim 50\) kpc), revealed the veil of multi-messenger era of astrophysics. Although only about two dozen neutrinos from this transient were caught by three lucky detectors, namely Kamiokande II [1], Irvine-Michigan-Brookhaven (IMB) [2] and Baksan [3], this detection renders to us the first glimpse into the collapsing core of a dying massive star. After that, various analyses, based on such sparse data, confirmed the outline of stellar core collapse and meanwhile imposed constraints on elusive properties of neutrinos [4; 5; 6; 7; 8; 9]. Three decades after that landmark, extraordinary progresses have been made among the modelling of stellar core collapse [10; 11; 12; 13; 14; 15], neutrino physics [16] and neutrino detection [17; 18; 19]. That is, millions of neutrinos will be detected with unprecedentedly high precision in modern neutrino observatories if the next galactic CCSN exploded at a typical distance of \(\sim 10\) kpc (approximately the distance between the centre of the Milky Way and our Solar System) [20; 21]. 
Such detection will promise, with no doubt, a much vaster playground for investigating meaningful topics in both CCSN physics and neutrino physics [20; 21; 22] (also other potentially interesting topics beyond these domains [23]). Modern hydrodynamic codes are now capable of performing successful simulations of the collapse and explosion of massive stars [24; 25; 26; 12]. They enrich our understanding of the explosion mechanism and characteristics of the related neutrino emission [22; 15]. However, a direct confirmation of those models is still missing and thus highly anticipated. Multiple neutrino detectors are currently in operation and scrutinizing the cosmos, or expected to operate in the future. Furthermore, some of them can promise unprecedentedly high statistics if the target is not too far, including water-based Cherenkov detectors (Hyper-Kamiokande [27], IceCube [28]), liquid scintillator detectors (JUNO [29], THIAE [30]), liquid argon time projection chambers (DUNE [31; 32; 33]), Pb-based cryogenic detectors (RES-NOVA [34; 35]) and so on. Although it is too complicated to predict when the next CCSN will occur in the vicinity, a rate of \(1.63\pm 0.46\) CCSN/100 y is obtained for the Milky Way and galaxies in the Local Group [36]. So, it could be promising to anticipate at least one galactic CCSN during the missions of those contemporary or next-generation detectors. Such a prospect has attracted quite some attentions on how to maximize the scientific return from such detection in the communities of astrophysics and particle physics. Among them, reconstructing the energy spectrum of neutrinos is significant for physics but demanding for the amount and quality of data. Attributing to the relatively strong interaction and low requirement on detector construction, inverse beta decay on proton (IBD-p) has become the most widely-utilised reaction channel in large-scale neutrino detectors [27; 29; 30]. This literally promises a good sensitivity to electron-antineutrinos. Elastic scattering on electron and charged current reaction on nuclei (e.g. \({}^{12}\)C [29], \({}^{16}\)O [27] and \({}^{40}\)Ar [32; 33]) offer the approaches to catch electron-neutrinos. Previous works have shown that a reasonable precision can be achievable in the measurement of supernova \(\nu_{e}\) spectrum [37; 38]. Now, the last task is presented as achieving sufficient sensitivity to heavy flavour neutrinos which can only undergo neutral current processes in such low-energy region. Therefore, elastic scattering on proton (pES) in scintillator detectors has been naturally proposed as an available access to heavy flavour part of supernova neutrinos [39]. Nevertheless, the RES-NOVA project, recently proposed in ref. [34] with the primary mission of detecting supernova neutrinos, promises high statistics via the coherent elastic neutrino-nucleus scattering on Lead. Note that different species of heavy flavour neutrinos are generally indistinguishable from each other since none of their charged companions would emerge with sufficiently large amount in stellar core collapse 1. However, a synergy of reaction channels is indispensable for extracting flavour-depending information (e.g. the collection of IBD-p, elastic scattering on electron/proton and charged/neutral current reactions on nuclei [40; 41; 42; 43; 44; 45]). Footnote 1: Thus, \(\nu_{x}\) is commonly used to denote one species of heavy flavour neutrinos and so do we. 
When \(\nu_{x}\) and \(\bar{\nu}_{x}\) appear simultaneously, they indicate particles and anti-particles, respectively. In terms of methodology, previous efforts can be schematically divided into two categories: statistical approaches and unfolding processes. Based on certain templates, statistical analysis extracts signals from noisy data with high efficiency, and has thus been widely adopted [37; 38; 40; 41; 42]. In such analyses, the profiles of neutrino fluxes are commonly depicted by the sophisticated Garching formula [46], which has been proven to be well compatible with high-resolution simulations [47]. To some extent, this simple fit encapsulates our current sophistication in the modelling of stellar core collapse. However, the heavy dependence on this analytic formula may potentially discard some important features of the real signals. Unfolding methods [43; 44; 45] are capable of alleviating this drawback, since they do not rely on any analytical formulas. But the shortcomings of such methods are even more severe. Aside from their complexity, the spectral inversion with the response matrix is an ill-posed problem, which means that small errors or noise can easily lead to artificial patterns in the results [45]. So, the pragmatic strategy is to implement these two approaches complementarily in the analysis of supernova neutrinos. They both offer meaningful information, only in different manners. In this work, we employ Bayesian statistics to perform such evaluations. In the last decades, the Bayesian method [48] has been proven to be a powerful tool for problems involving uncertainty, including gravitational wave astronomy [49], relativistic heavy-ion collisions [50], astrophysics and cosmology [51; 52], and fields of human activity beyond fundamental physics (e.g. Bayesian networks). Notably, it has already been introduced to the analysis of neutrino signals from SN 1987A [9]. In this paper, we demonstrate the use of Bayes' theorem to evaluate the spectral parameters for all flavours of neutrinos from a galactic CCSN. At the source, we adopt the time-integrated spectra for each type of neutrino from a long-term axisymmetric core-collapse simulation reported in ref. [38]. Then, the simple adiabatic conversion in the CCSN [53] is applied here to account for the inevitable oscillation effects, including the cases of normal mass hierarchy (NH) and inverted mass hierarchy (IH). We also show the results with no oscillation effects. However, any other neutrino conversion models can also be implemented in principle. As to the detection, we attempt to simultaneously obtain high statistics and full sensitivities to all types of neutrinos by taking advantage of three large-scale neutrino observatories, namely Hyper-K, DUNE and RES-NOVA. It should also be mentioned that the pES channel in JUNO is capable of performing flavour-blind detection with high energy resolution. However, it is reported that the reconstructed \(\nu_{e}\) and \(\nu_{x}\) spectra suffer from a substantial threshold-induced systematic bias, caused by the pES channel's insensitivity to neutrinos with energy below 20 MeV [44]. Note that the peak is usually located at \(\sim 10\) MeV in the spectrum of supernova neutrinos. Instead, the proposed 1 keV threshold for nuclear recoil energy in RES-NOVA offers flavour-blind sensitivity to neutrinos with energy above \(\sim 10\) MeV [34]. Detailed configurations of these detectors will be discussed later. 
The fast event-rate calculation tool, _SNOwGLoBES_2, is employed to compute count rates for the channels in Hyper-K and DUNE, while that for RES-NOVA is done with a code developed by ourselves 3. In section 2, we review the detector characteristics and generate the mock data for further analysis. Aside from the detector responses, noise from Poisson processes is also included in the mock data. In section 3, we demonstrate how the spectral parameters are estimated from the mock data via Bayes' theorem, and present the numerical results. Finally, we conclude in section 4. Footnote 2: _SNOwGLoBES_ provides detector responses to many reaction channels (see e.g. ref. [17] for details) and it is available at [https://webhome.phy.duke.edu/~schol/snowglobes/](https://webhome.phy.duke.edu/~schol/snowglobes/). Footnote 3: This code and _SNOwGLoBES_ have been integrated in our Bayesian code. ## 2 Supernova neutrinos in detectors Before getting into the details of the Bayesian analysis, we summarise the features of the detectors employed in this work and the characteristics of supernova neutrinos. Since no experimental data is available up to now, we calculate the number of expected events in each energy bin for each channel, based on the neutrino fluxes from numerical simulation, and then extract the number of events for analysis from a Poisson distribution with the expected count as the mean value. How we consider the neutrino oscillation effects is also presented in this section. ### Detector configurations The primary reaction channels for the selected detectors, namely IBD-p in Hyper-K, charged current reaction on Argon (vAr(CC)) in DUNE and neutral current scattering on Lead (vPb(NC)) in RES-NOVA, are adopted in this study to provide sensitivities to \(\bar{\nu}_{e}\), \(\nu_{e}\) and \(\nu\), respectively. We also include elastic scattering on electrons (eES) in Hyper-K, in order to further enhance the sensitivity of this collection to \(\nu_{e}\) and \(\nu_{x}\). Note that the eES channel has different cross sections for each type of neutrino, i.e., \(\sigma_{\nu_{e}}>\sigma_{\bar{\nu}_{e}}>\sigma_{\nu_{x}}\)4. It is also interesting to mention that neutral current scattering on Argon in DUNE can potentially offer good sensitivity to \(\nu_{x}\), though it has not yet been fully studied [33]. Footnote 4: Strictly speaking, \(\sigma_{\nu_{x}}\) is slightly greater than \(\sigma_{\bar{\nu}_{x}}\) (see figure 2 in ref. [17]). Hyper-K is a next-generation water-based Cherenkov detector which is scheduled to start data-taking in 2027 [54]. Its primary missions include precision measurements of neutrino oscillations, searches for proton decay and observations of astrophysical neutrinos [27]. In this study, we employ two reaction channels in Hyper-K, namely the IBD-p (\(\bar{\nu}_{e}+p\to e^{+}+n\)) and eES (\(\nu+e^{-}\rightarrow\nu+e^{-}\)). Electrons and positrons are produced in these reactions and emit Cherenkov light as they travel through the ultra-pure water. Then, the events can be reconstructed by collecting those Cherenkov photons via photomultiplier tubes (PMTs). Currently, the reconstruction of IBD-p events is well established. Meanwhile, eES events can also be separated from IBD-p signals, to some extent, thanks to their different angular dependence. Furthermore, it is reported that the neutron tagging efficiency can be improved substantially through the addition of gadolinium (e.g., an efficiency of \(\sim 90\%\) in a gadolinium-loaded Super-K) [37]. 
That is, the tagging efficiency for the two reaction channels is expected to be promising, since the possibility of gadolinium loading has already been considered in the design report of Hyper-K 5. Here we simply assume full tagging efficiency for the two reactions. On the other hand, according to the design report, the fully configured Hyper-K detector consists of two tanks, each of which contains 258 kton of ultra-pure water. The designed fiducial mass for each tank reaches 187 kton. Therefore, a total fiducial mass of 374 kton for Hyper-K has been adopted in some previous works (see, e.g., ref. [38; 41]). However, the realistic fiducial mass for one tank can exceed this designed scale and reach 220 kton in the detection of supernova neutrinos, because the short duration of the supernova neutrino signal allows localization in time and the neglect of the low energy radioactive background [27]. We thus consider one tank with a fiducial mass of 220 kton, following the scale also adopted in ref. [45]. That is, only half of the capability of Hyper-K is under evaluation in this study. As to the detector response, we adopt the same smearing matrix and post-smearing efficiency as those of Super-K I (or III, IV), which are provided in _SNOwGLoBES_. Its response corresponds to the assumption of 40% PMT coverage. Footnote 5: The project of loading gadolinium into Super-K has already been approved, and this will provide a template for further application in Hyper-K. See ref. [27] for more details. DUNE [31; 32] will consist of four time projection chambers which contain 70 kton of liquid argon in total. The nominal fiducial mass is 40 kton, and we also adopt this value in this study. However, in principle the available mass may exceed this value when studying supernova neutrinos, just like the case of Hyper-K. The primary goals for DUNE include precision measurements of neutrino oscillation parameters and searches for new physics. Among currently operating and planned neutrino detectors, DUNE will bring unique sensitivity to \(\nu_{e}\) with energies down to \(\sim 5\) MeV via the vAr(CC) reaction (\(\nu_{e}+^{40}\mathrm{Ar}\to e^{-}+^{40}\mathrm{K}^{*}\)). When such reactions happen, short electron tracks will be created and recorded, potentially along with gamma-rays, in the chambers. DUNE will also have excellent time resolution, which ensures its capability of precisely depicting the neutrino burst time profile if the source is close enough. For instance, it is possible to identify the neutrino "trapping notch", which emerges as a consequence of neutrino trapping in the dense collapsing core and typically has a width of \(1-2\) ms, for the closest CCSNe (a few kpc) [33]. Moreover, in the galactic supernova neutrino detection landscape with DUNE, one of the most interesting topics is that the mass hierarchy problem in neutrino oscillations can be decisively resolved by the detection of the neutronization burst, which is almost entirely composed of \(\nu_{e}\) when produced [53]. The above works also adopted _SNOwGLoBES_ in their studies. This is quite convenient for us, since the configuration of DUNE has already been provided as well. RES-NOVA [34; 35] is a newly proposed experiment with the primary aim of hunting neutrinos from CCSNe. 
It intends to achieve a flavour-blind measurement of supernova neutrinos with a low energy threshold, high energy resolution and high statistics, by taking advantage of the large coherent elastic scattering cross sections between MeV neutrinos and Pb nuclei, the ultrahigh radiopurity of archaeological Pb and modern cryogenic detector technologies. This innovative project carries the ambition of providing a \(5\sigma\) sensitivity to supernova bursts up to Andromeda. However, the detailed configuration has not been settled yet. In this work, we consider a simple realisation of RN-3 in ref. [34], which is constructed with pure Pb crystals and has a detector mass of 465 ton. It will have a 1 keV energy threshold and a 0.2 keV resolution for nuclear recoil energy. This means that RES-NOVA could be sensitive to neutrinos with energies down to \(\sim 10~{}\mathrm{MeV}\). When neutrinos arrive at the detector, they can undergo the vPb(NC) process (\(\nu+\mathrm{Pb}\rightarrow\nu+\mathrm{Pb}\)). After that, the target nucleus gains a recoil energy of the order of a few keV, and billions of phonons are then created in the absorber and act as information carriers. Such an experimental strategy can make full use of the entire energy deposited in the detector and lead to excellent energy reconstruction. However, unlike the previous detectors, the configuration of RES-NOVA is currently absent in _SNOwGLoBES_. We calculate the event rates following our previous works (i.e., ref. [23, 55]). The averaged neutron skin of Pb nuclei is fixed at the experimental value of \({}^{208}\)Pb, namely \(R_{n}-R_{p}=0.283\pm 0.071~{}\mathrm{fm}\) from PREX-II [56]. Furthermore, in order to properly account for the threshold effect, we adopt the following acceptance efficiency function: \[A(x)=\frac{a}{1+\mathrm{e}^{-k(x-x_{0})}}, \tag{1}\] where the values of the parameters are taken as \(a=1,k=5,x_{0}=1.5\). Such arrangements ensure that the detection efficiency rises swiftly from \(\sim 0\%\) to around 100% as the nuclear recoil energy goes from 1 keV to 2 keV, and approaches 100% asymptotically above 2 keV. In fact, this function derives from the acceptance efficiency of the COHERENT experiment [57, 58], and can also reproduce a structure similar to the reconstruction efficiency function of DUNE [33], just with different parameters. Note that this efficiency represents a conservative estimate and the real one is yet to be determined. ### Neutrino spectra and oscillations State-of-the-art stellar evolution theory indicates that dying massive stars undergo violent core collapse at their end, generating an outward-propagating shock-wave that expels their mantles, and explode as spectacular CCSNe which can appear as luminous as their host galaxies. In such explosions, about 99% of the released gravitational potential energy (\(\sim 10^{53}~{}\mathrm{erg}\)) is liberated through neutrino emission. Moreover, the evolutionary histories of the dense core are imprinted in both the temporal structures and the energy spectra of the neutrino emission. Note that the neutrinos can still deliver information out of the collapsing core, even if no electromagnetic signal is emitted due to the formation of a black hole in a failed CCSN. 
The detailed characteristics of neutrino emission depend not only on the properties of the progenitor star (e.g., mass, compactness and so on [59, 60]), but also on the nuclear equation of state of the neutron star, which still remains largely uncertain [61, 62, 63]. Apart from that, our current understanding of the spectral structure of supernova neutrinos is primarily obtained from numerical simulations, due to the lack of experimental data. According to detailed investigations on supernova neutrino spectra [46, 47], the instantaneous spectrum for each type of neutrino generally follows the quasi-thermal distribution (also called the Garching formula), which can be presented as \[f_{\nu}(E_{\nu})=\mathcal{A}\left(\frac{E_{\nu}}{\langle E_{\nu}\rangle}\right)^{\alpha}\exp\left[-(\alpha+1)\frac{E_{\nu}}{\langle E_{\nu}\rangle}\right]. \tag{2}\] Here, \(E_{\nu}\) and \(\langle E_{\nu}\rangle\) are the energy and average energy of the neutrino in units of MeV, respectively; \(\mathcal{A}=\frac{(\alpha+1)^{\alpha+1}}{\langle E_{\nu}\rangle\,\Gamma(\alpha+1)}\) is the normalization factor with \(\Gamma\) being the gamma function; and \(\alpha\) characterises the amount of spectral pinching (with larger values leading to stronger suppression of the high energy tail). \(\alpha\) can be determined from the energy moments of the distribution, e.g., via the relation \[\frac{\left\langle E_{\nu}^{2}\right\rangle}{\langle E_{\nu}\rangle^{2}}=\frac{2+\alpha}{1+\alpha}. \tag{3}\] Actually, eq. (2) has also commonly been adopted to describe the time-integrated spectra in previous studies [38; 39; 40; 41; 42; 64], and we do the same. Now, assuming no neutrino oscillation, the flux on the Earth can be expressed as \[\Phi(E_{\nu})=\frac{1}{4\pi d^{2}}\frac{\mathcal{E}_{\nu}}{\langle E_{\nu}\rangle}f_{\nu}(E_{\nu}), \tag{4}\] where \(d\) is the distance to the source, and \(\mathcal{E}_{\nu}\) denotes the total energy emitted through a specific species of neutrinos. The spectral parameters for the source, adopted in this work, are given in table 1. It should be mentioned that the progenitor model, used to generate these parameters in the simulation, is expected to explode as one of the most common type II supernovae (see ref. [38] for more details). Now, the predicted event rate for each channel can be calculated. For Hyper-K and DUNE, we set a uniform grid of 100 energy bins to cover the energy range of \(0.25-100.00\) MeV 6 and drop the first several bins to approximately obtain a threshold of \(5~{}\mathrm{MeV}\). For RES-NOVA, we also set a uniform energy grid with a bin width of \(0.2~{}\mathrm{keV}\), starting from the threshold of \(1~{}\mathrm{keV}\) 7. We have also tested a non-uniform grid scheme, i.e., the adaptive energy-gridding technique 8 (see ref. [45]), and the results of the analysis turn out to be almost the same as those of the current grid scheme. With the predictions in hand, the mock data can now be generated: given the predicted number of events \(N_{pd}\) in a bin, the corresponding mock number \(N_{md}\) is extracted from a Poisson distribution with \(N_{pd}\) as the mean value 9. 
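To make the spectral model concrete, the following minimal Python sketch (not the authors' code) evaluates the Garching spectrum of eq. (2) and the fluence of eq. (4) for the \(\bar{\nu}_{e}\) parameters of table 1, and then draws one Poisson realisation of mock counts. The `EFFECTIVE_EXPOSURE` factor is a purely hypothetical stand-in for the chain of cross sections and detector responses that _SNOwGLoBES_ and the authors' RES-NOVA code handle in the actual analysis.

```python
# Minimal sketch: Garching spectrum, fluence at Earth and Poisson mock data,
# assuming no oscillations and the nu_e-bar parameters of table 1.
import numpy as np
from scipy.special import gamma as gamma_fn

MEV_TO_ERG = 1.602176634e-6   # 1 MeV in erg
KPC_TO_CM = 3.0857e21         # 1 kpc in cm

def garching_spectrum(E, alpha, E_mean):
    """Normalised quasi-thermal spectrum f_nu(E) of eq. (2), in 1/MeV."""
    A = (alpha + 1.0) ** (alpha + 1.0) / (E_mean * gamma_fn(alpha + 1.0))
    return A * (E / E_mean) ** alpha * np.exp(-(alpha + 1.0) * E / E_mean)

def fluence_at_earth(E, alpha, E_mean, E_tot_erg, d_kpc=10.0):
    """Time-integrated fluence of eq. (4), in neutrinos per cm^2 per MeV."""
    d_cm = d_kpc * KPC_TO_CM
    n_emitted = E_tot_erg / (E_mean * MEV_TO_ERG)   # total number of emitted neutrinos
    return n_emitted * garching_spectrum(E, alpha, E_mean) / (4.0 * np.pi * d_cm ** 2)

# Parameters for nu_e-bar from table 1: alpha, <E> in MeV, total emitted energy in erg.
alpha, E_mean, E_tot = 3.28, 16.3, 6.44e52

# Energy grid: 100 uniform bins over 0.25-100 MeV, as described in the text.
edges = np.linspace(0.25, 100.0, 101)
centres = 0.5 * (edges[:-1] + edges[1:])
fluence_per_bin = fluence_at_earth(centres, alpha, E_mean, E_tot) * np.diff(edges)

# Hypothetical effective factor standing in for targets x cross section x efficiency;
# in the paper this conversion is done channel by channel with SNOwGLoBES.
EFFECTIVE_EXPOSURE = 3.0e9
N_pd = fluence_per_bin * EFFECTIVE_EXPOSURE   # predicted counts per bin
N_md = np.random.poisson(N_pd)                # one mock-data realisation
```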
\begin{table} \begin{tabular}{l l l l} \hline \(\nu\) & \(\alpha_{\nu}\) & \(\langle E_{\nu}\rangle\) [MeV] & \(\mathcal{E}_{\nu}\) [\(10^{52}~{}\mathrm{erg}\)] \\ \hline \(\nu_{e}\) & 2.67 & 14.1 & 7.70 \\ \(\bar{\nu}_{e}\) & 3.28 & 16.3 & 6.44 \\ \(\nu_{x}\) & 2.20 & 17.2 & 5.88 \\ \hline \end{tabular} \end{table} Table 1: Spectral parameters for the time-integrated spectra of supernova neutrino fluxes (see table 1 in ref. [38]). The results are shown in figure 1. Figure 1: Predicted events and mock data for each reaction channel in Hyper-K, DUNE and RES-NOVA. \(E_{\nu}\) and \(E_{r}\) are the reconstructed neutrino energy and nuclear recoil energy, respectively. The source is assumed to be located at a typical distance, i.e., \(d=10\) kpc, and no oscillation effect is under evaluation. The caveat is that such a treatment means that the mock data are extracted from one simulated measurement. So, it is inevitable that the information reflected by the data may deviate from that of the original source due to the Poisson processes. Only high statistics can alleviate such deviations. However, this is also the situation faced by real measurements. Flavour transitions are also inevitable for supernova neutrinos. These messengers are primarily produced in the dense core of a dying star, penetrate through the thick stellar mantle and ultimately arrive in detectors on the Earth. Various conditions, encountered in this long journey, lead to complex transition patterns, e.g., adiabatic/non-adiabatic transitions, self-induced transitions and earth matter effects [20, 53, 65]. Since this work is not meant to dig into the details of flavour conversion, we focus on the adiabatic transition associated with smoothly-varying matter potentials in supernovae, for simplicity. On the other hand, the three-flavour neutrino mixing framework has been well established experimentally thanks to tremendous experimental efforts over the past few decades. So we can describe the flavour transitions in supernovae with proper formulas under specific assumptions. However, there still exist two unknowns up to now in this scenario, i.e., the mass hierarchy and the complex phase associated with CP violation. For the latter, previous works have shown that it will not cause sizeable modifications to the signals of supernova neutrinos [66, 67]. But the former is crucial to the flavour composition of supernova neutrinos in detectors. And that necessitates the consideration of both NH and IH in this work. Assuming the adiabatic Mikheyev-Smirnov-Wolfenstein (MSW) model, in the case of NH, the observed fluxes (\(\Phi_{\nu}\)) are composed of the original fluxes (\(\Phi_{\nu}^{0}\)) in the following forms [53]: \[\Phi_{\nu_{e}} =\Phi_{\nu_{x}}^{0} \text{(NH)}, \tag{5}\] \[\Phi_{\bar{\nu}_{e}} =\cos^{2}\theta_{12}\Phi_{\bar{\nu}_{e}}^{0}+\sin^{2}\theta_{12}\Phi_{\bar{\nu}_{x}}^{0} \text{(NH)}, \tag{6}\] where \(\theta_{12}\) is the mixing angle with the value \(\sin^{2}\theta_{12}=0.307\pm 0.013\)[68]. In the case of IH, the formulas are rearranged as [53] \[\Phi_{\nu_{e}} =\sin^{2}\theta_{12}\Phi_{\nu_{e}}^{0}+\cos^{2}\theta_{12}\Phi_{\nu_{x}}^{0} \text{(IH)}, \tag{7}\] \[\Phi_{\bar{\nu}_{e}} =\Phi_{\bar{\nu}_{x}}^{0} \text{(IH)}. \tag{8}\] And the total flux is conserved in both cases, satisfying \[\Phi_{\nu_{e}}+\Phi_{\bar{\nu}_{e}}+4\Phi_{\nu_{x}}=\Phi_{\nu_{e}}^{0}+\Phi_{\bar{\nu}_{e}}^{0}+4\Phi_{\nu_{x}}^{0} \text{(NH\&IH)}. \tag{9}\] 
Here \(\Phi_{\nu_{x}}\) and \(\Phi_{\bar{\nu}_{x}}\) represent the fluxes of neutrinos and anti-neutrinos with heavy flavours, respectively, and are each equal to one quarter of the total heavy flavour flux. In the data analyses, we do not distinguish between them. From the above expressions, one can see that in the NH case, the \(\nu_{e}\) component is ultimately coming from the original \(\nu_{x}\) component while the \(\bar{\nu}_{e}\) flavour is only partially transformed. In the IH case, the transformation is almost reversed, i.e., the \(\bar{\nu}_{e}\) flavour is now fully transformed while the \(\nu_{e}\) component is partially transformed. Note that, rather than being a simple reversal, the extents of the partial transformations are different in the two cases. The oscillation effects on the prediction of each reaction channel are shown in figure 2. Figure 2: Predicted events in each reaction channel under inverted mass hierarchy (IH) or normal mass hierarchy (NH). \(E_{\nu}\) (\(E_{r}\)) denotes the reconstructed neutrino energy (nuclear recoil energy). The distance is assumed to be 10 kpc. The mock data for each case can be extracted with the same strategy as in figure 1 and are not shown here. As one can see, it is clear that the predicted energy spectra for the different mass hierarchies diverge from each other in the flavour-sensitive reaction channels, including IBD-p and eES in Hyper-K and vAr(CC) in DUNE, while they totally overlap with each other in the flavour-blind reaction channel, i.e., vPb(NC) in RES-NOVA. It is also interesting to mention that the different gaps between IH and NH in IBD-p and vAr(CC) reflect the different extents of the partial transformations. For the mock data used in the final analysis, we conduct the same extractions, only including all those ingredients this time. ## 3 Bayesian inference and numerical results Now the data analysis can be performed by applying Bayesian inference to the mock data generated in the previous section. We first briefly describe the basic ideas of Bayesian inference and the prior arrangements of our analysis, and then present the numerical results together with some discussion. ### Basic ideas Bayesian statistics is fundamentally different from conventional frequentist statistics. In Bayesian probability theory, probability is treated as a subjective concept which depends on our state of knowledge, instead of the objective limit of the relative frequency of an outcome. So, it is allowed to be updated on the basis of new information, which can be collected, e.g., by conducting experiments. With a full understanding of the issue under investigation, in principle the Bayesian probability will arrive at a stable value. The basic logical rule which allows us to do such updating is Bayes' theorem, which can be presented as \[P(\theta|D)\propto P(D|\theta)P(\theta). \tag{3.1}\] In the case of parameter estimation, \(\theta\) and \(D\) represent the model parameters to be estimated, collectively, and the dataset relevant to the model, respectively. The quantity to be evaluated is the posterior probability, \(P(\theta|D)\), which stands for the probability of \(\theta\) given the new dataset \(D\). \(P(\theta)\) is the prior probability which quantifies our beliefs on \(\theta\) before the inclusion of new conditions. The likelihood function \(P(D|\theta)\) is a mathematical function of \(\theta\) for a fixed dataset \(D\) (also denoted by \(\mathcal{L}(\theta;D)\)). 
It quantifies the probability of observing \(D\) given a specific parameter \(\theta\). In this framework, the main task of inference reduces to calculating the posterior probability distribution, once the expressions for the prior and the likelihood are settled. Note that a proper realization of the prior probability is quite helpful in the analysis of a less informative dataset, but has little impact when the dataset is sufficiently informative. In this work, since the Garching formula is adopted to describe the time-integrated spectra of supernova neutrinos, we get 9 model parameters, i.e., \[\vec{\theta}=(\alpha_{\nu_{e}},\alpha_{\bar{\nu}_{e}},\alpha_{\nu_{x}},\left<E_{\nu_{e}}\right>,\left<E_{\bar{\nu}_{e}}\right>,\left<E_{\nu_{x}}\right>,\mathcal{E}_{\nu_{e}},\mathcal{E}_{\bar{\nu}_{e}},\mathcal{E}_{\nu_{x}}). \tag{3.2}\] The realisation of \(P(\vec{\theta})\) could be nontrivial. Generally speaking, the posterior distribution of a previous inference can act as the prior distribution of a new inference with new information. However, this is not the case in this study, due to the highly limited information provided by the measurement of SN 1987A. Up to now, our knowledge on this issue is primarily obtained from various simulations. In detail, the values of \(\alpha\) usually vary with time in the range of \(2\lesssim\alpha\lesssim 4\)[46, 47, 69]. For \(\left<E_{\nu}\right>\), a magnitude of \(\sim 10\) MeV appears in almost all simulations and is also confirmed by the observation of SN 1987A. Furthermore, a neutrino energy hierarchy emerges as \(\left<E_{\nu_{e}}\right><\left<E_{\bar{\nu}_{e}}\right>\lesssim\left<E_{\nu_{x}}\right>\) in simulations [11, 69]. For \(\mathcal{E}_{\nu}\), both simulations and SN 1987A indicate that the total energy released via neutrinos should lie in the vicinity of \(3\times 10^{53}\) erg. And the ansatz of energy equipartition among the different flavours of neutrinos has also been found to be roughly valid in simulations. Based on the above statements, we quantify the prior knowledge with 9 independent Gaussian functions associated with the 9 spectral parameters, i.e., \[\log P(\theta>0)=-\frac{(\theta-\mu)^{2}}{2\sigma^{2}}+\mathrm{constant}, \tag{3.3}\] where we exclude the non-physical negative regions. The relevant Gaussian parameters are given in table 2. It must be emphasized here that, with such arrangements, we do not intend to mean that the spectral parameters of neutrinos from the next galactic CCSN would follow these distributions. Rather, they express the belief that we are quite confident that \(\theta\) will lie within \(\mu\pm\sigma\), very sure that it will lie within \(\mu\pm 2\sigma\) and almost certain that it will lie within \(\mu\pm 3\sigma\). Values far beyond these regions are still possible but just not likely to happen, since that would break the current theoretical framework. Such priors cover the parameter spaces used in the previous analysis [41] within their \(3\sigma\) regions, and meanwhile accommodate strong deviations from the expected values. However, it should be noted again that the posterior will eventually be dominated by the data, instead of the choice of priors, when the dataset is informative enough. As a confirmation, we also conduct the analysis with flat priors, and the comparison is shown in appendix A. 
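As an illustration, the truncated Gaussian log-prior of eq. (3.3) is simple enough to sketch directly; the following snippet (a sketch, not the authors' implementation) assumes the central values and widths listed in table 2 below, with the parameters ordered as in eq. (3.2).

```python
# Minimal sketch of the log-prior of eq. (3.3), truncated to theta > 0.
import numpy as np

# (mu, sigma) per parameter, ordered as in eq. (3.2):
# alpha_nue, alpha_anue, alpha_nux, <E_nue>, <E_anue>, <E_nux> [MeV],
# and the total energies E_nue, E_anue, E_nux [10^52 erg].
MU    = np.array([3.0, 3.0, 3.0, 12.0, 14.0, 16.0, 5.0, 5.0, 5.0])
SIGMA = np.array([1.0, 1.0, 1.0,  4.0,  4.0,  4.0, 5/3, 5/3, 5/3])

def log_prior(theta):
    """log P(theta) up to an additive constant; -inf outside the physical region."""
    theta = np.asarray(theta, dtype=float)
    if np.any(theta <= 0.0):
        return -np.inf
    return -0.5 * np.sum(((theta - MU) / SIGMA) ** 2)
```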
\begin{table} \begin{tabular}{c c c c c c} \hline & \(\alpha_{\nu}\) & \(\left<E_{\nu_{e}}\right>\) & \(\left<E_{\bar{\nu}_{e}}\right>\) & \(\left<E_{\nu_{x}}\right>\) & \(\mathcal{E}_{\nu}\left[10^{52}\text{ erg}\right]\) \\ \hline \(\mu\) & 3 & 12 & 14 & 16 & 5 \\ \(\sigma\) & 1 & 4 & 4 & 4 & 5/3 \\ \hline \end{tabular} \end{table} Table 2: The parameters of the Gaussian distributions in the priors. \(\mu\) and \(\sigma\) represent the central values and standard deviations, respectively. The \(\left<E_{\nu}\right>\) values are in units of MeV. The dataset consists of a series of energy bins and the associated numbers of events, and we conduct the analysis with the following binned likelihood: \[\mathcal{L}_{\zeta}(\vec{\theta};D)=\prod_{i=1}^{\mathrm{N_{bin}}}\frac{\lambda_{i}^{n_{i}}}{n_{i}!}\mathrm{e}^{-\lambda_{i}}, \tag{3.4}\] for the reaction channel \(\zeta\), where \(\mathrm{N_{bin}}\) is the number of energy bins, and \(\lambda_{i}\) and \(n_{i}\) represent the numbers of events in the \(i\)th bin in the predictions and in the mock data, respectively. \(\lambda_{i}\) is a function of \(\vec{\theta}\), while \(n_{i}\) belongs to \(D\). Such a Poisson distribution is also adopted in previous studies [41; 42]. Now, the eventual likelihood is simply expressed as \[\mathcal{L}(\vec{\theta};D)=\prod_{\zeta\in\ all\ exp.}\mathcal{L}_{\zeta}(\vec{\theta};D), \tag{3.5}\] after combining all the reaction channels. Other potentially useful reaction channels can also be considered via this formula in the future. Furthermore, eq. (3.4) can be replaced in future studies with a more elaborate likelihood that thoroughly accounts for other uncertainties present in realistic measurements. The calculation of the posterior distribution used to be the most complicated part of Bayesian inference. However, powerful methods and tools are now available to ease this task. In this work, we implement the ensemble sampler tool, _emcee_\({}^{\ 10}\)[70], to sample the 9-dimensional posterior distribution. The _emcee_ package is a Python implementation of the Metropolis-Hastings (M-H) algorithm \({}^{\ 11}\), which has already been adopted in many published projects in the astrophysics literature. The caveat, which derives from the M-H algorithm, is that the samples initially generated in the chain can be heavily influenced by the choice of starting point in parameter space, due to the inevitable correlation among neighbouring samples. So, in practice the initial part is excluded. In this study, we simply drop the initial 200 samples in each chain to obtain stable sets of samples. ### Demonstration Finally, we perform the analysis and show the numerical results in this section. As a start, we test the capability of our method in the case with no oscillation effects. So as to obtain an appropriate determination of the posterior distribution, we draw \(10^{6}\) samples, and calculate the distribution of each parameter by marginalizing over the other parameters, following the law of total probability. Then, the cases of different oscillation models are evaluated through the same processes. Among them, \(d=10\) kpc is adopted as the default source distance. In practice, it is possible for this distance to be much smaller, e.g., for the nearby core collapse supernova candidates reported in [71], including the famous Betelgeuse. Generally speaking, a smaller distance means higher statistics and thus better precision, as long as the neutrino flux is not intense enough to cause signal pile-up in the detector. 
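Before turning to the results, the inference machinery described above (the binned Poisson likelihood of eqs. (3.4) and (3.5), the Gaussian prior of eq. (3.3) and ensemble sampling with _emcee_ including the burn-in cut) can be sketched as follows. This is only a schematic illustration, not the authors' code: `predicted_counts` is a hypothetical stand-in for the full forward model (source fluxes folded with the response of each detector channel), and the walker and step numbers are arbitrary choices.

```python
# Minimal sketch of the posterior sampling with emcee.
import numpy as np
import emcee

# Prior parameters repeated from the previous sketch so this block stands alone.
MU    = np.array([3.0, 3.0, 3.0, 12.0, 14.0, 16.0, 5.0, 5.0, 5.0])
SIGMA = np.array([1.0, 1.0, 1.0,  4.0,  4.0,  4.0, 5/3, 5/3, 5/3])

def log_prior(theta):
    theta = np.asarray(theta, dtype=float)
    if np.any(theta <= 0.0):
        return -np.inf
    return -0.5 * np.sum(((theta - MU) / SIGMA) ** 2)

def log_likelihood(theta, mock_counts, predicted_counts):
    """Sum over channels and bins of n_i*log(lambda_i) - lambda_i (constants dropped)."""
    total = 0.0
    for channel, n in mock_counts.items():          # mock_counts: {channel: counts array}
        lam = predicted_counts(theta, channel)      # hypothetical forward model
        if np.any(lam <= 0.0):
            return -np.inf
        total += np.sum(n * np.log(lam) - lam)
    return total

def log_posterior(theta, mock_counts, predicted_counts):
    lp = log_prior(theta)
    if not np.isfinite(lp):
        return -np.inf
    return lp + log_likelihood(theta, mock_counts, predicted_counts)

# Hypothetical usage: 32 walkers in the 9-dimensional space, started near the
# prior centres; the first 200 samples of each chain are discarded as burn-in.
# nwalkers, ndim = 32, 9
# p0 = MU + 1e-2 * np.random.randn(nwalkers, ndim)
# sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior,
#                                 args=(mock_counts, predicted_counts))
# sampler.run_mcmc(p0, 5000, progress=True)
# samples = sampler.get_chain(discard=200, flat=True)
# corr = np.corrcoef(samples.T)   # correlation coefficients, cf. figure 6
```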
Properly dealing with the effects of signal pile-up for a very close source is, however, a topic for future work. As a test, we only estimate the distance effect for the case of \(d=5\) kpc here. Figure 3: Posterior distributions for the no oscillation case. The Gaussian prior distributions are used here. Plots on the diagonal show the posterior distribution of the corresponding parameter after marginalization over the other parameters, and the off-diagonal ones show the correlations between them. Contours in the off-diagonal plots indicate the \(1\sigma\), \(2\sigma\) and \(3\sigma\) credible regions, respectively. The blue lines mark the parameter values used to generate the mock data in this analysis. #### 3.2.1 No oscillation Figure 3 shows the posterior distributions when no neutrino oscillation is considered 12. The 1-dimensional (1-D) distributions for all spectral parameters are plotted on the diagonal. We also present the representative values of these 1-D distributions, i.e., the maximum _a posteriori_ (MAP) estimate and the \(2\sigma\) credible intervals in the highest posterior density scheme, in table 3. As one can see, the three parameters of the \(\bar{\nu}_{e}\) flux are constrained quite well in this analysis. In detail, the \(2\sigma\) symmetrized fractional uncertainties 13 reach \(\pm 2.8\%\), \(\pm 0.8\%\) and \(\pm 0.9\%\) for \(\alpha\), \(\langle E\rangle\) and \(\mathcal{E}\), respectively. Such high precision is primarily attributed to the ultra-high statistics provided by the IBD-p channel in Hyper-K, as can be seen in figure 1(a). Meanwhile, the sensitivity to \(\nu_{e}\) mainly derives from the vAr(CC) reaction in DUNE and the eES channel in Hyper-K. Modest uncertainties are also achieved, namely \(\pm 12.4\%\), \(\pm 4.3\%\) and \(\pm 4.9\%\). However, the precision for the \(\nu_{x}\) flux is relatively poor and the fractional uncertainties are only obtained as \(\pm 33.4\%\), \(\pm 10.7\%\) and \(\pm 10.9\%\). The vPb(NC) reaction in RES-NOVA provides the primary sensitivity to \(\nu_{x}\) and also yields a total number of events even larger than the sum of those from the eES channel in Hyper-K and the vAr(CC) channel in DUNE. However, the fact is that \(\sim 1/3\) of the signals in RES-NOVA come from the \(\nu_{e}\) and \(\bar{\nu}_{e}\) fluxes. That is, the information on \(\nu_{x}\) from RES-NOVA is actually contaminated. Nevertheless, higher statistics will further improve the accuracy, e.g., enlarging the fiducial mass of RES-NOVA by a factor of 10 improves the accuracy by \(\sim 50\%\) in our test. On the other hand, because the heavy Pb nuclei strongly suppress the nuclear recoil energy, a threshold of 1 keV in nuclear recoil energy makes RES-NOVA sensitive only to neutrinos with energy above \(\sim 10~{}\mathrm{MeV}\). Such a threshold, although quite low among detectors of the same category, is nevertheless not low enough for a precision measurement of the \(\nu_{x}\) spectrum of supernova neutrinos, since the information below and even around the peak is lost. Such loss naturally jeopardizes the precision extraction of information related to the spectral shape. 14 Footnote 10: [https://emcee.readthedocs.io/en/stable/index.html](https://emcee.readthedocs.io/en/stable/index.html). Footnote 11: The M-H algorithm is the most commonly used Markov chain Monte Carlo algorithm (see ref. [48, 70] for more details). Footnote 12: _corner_ is used to plot such diagrams [72]. 
Footnote 13: Indeed, asymmetries appear among these 1-D distributions in figure 3 and also in table 3, and will also show up in those of the other cases. For simplicity, the symmetrized fractional uncertainties are calculated by averaging the positive and negative uncertainties relative to the most probable value, here and in what follows. Footnote 14: In the test analyses, we assume a \(\sim 6~{}\mathrm{MeV}\) threshold of neutrino energy (i.e., a \(0.4~{}\mathrm{keV}\) threshold of nuclear recoil energy) for RES-NOVA, and the accuracy for \(\nu_{x}\) is improved by a factor of \(1/4\). The neutral current scatterings on \({}^{16}\mathrm{O}\) in Hyper-K can also provide information on the low energy region (e.g., \(\sim 400\) events in the energy range of \(5\sim 10~{}\mathrm{MeV}\)). The inclusion of this reaction also leads to a moderate improvement (\(\sim 25\%\)) on the accuracy of \(\alpha_{\nu_{x}}\). On the other hand, the off-diagonal plots suggest the correlations between parameters. Generally speaking, it is quite noticeable that significant correlations universally appear among the parameters of the same neutrino species, and only among them. Furthermore, these correlations show characteristic features for each neutrino species, i.e., a strong positive correlation between \(\alpha\) and \(\langle E\rangle\), both of which determine the shape of the spectrum, and noteworthy negative correlations between \(\mathcal{E}\) and each of these spectral shape parameters. Such correlation patterns are primarily embedded in the parameterization of the neutrino spectrum (see eq. (2)) and in eq. (4). It is also potentially interesting to mention that such correlations are the weakest for the \(\bar{\nu}_{e}\) flavour while those of the others are comparable 15. The distance effect is tested here. For a closer source with \(d=5\) kpc, the higher statistics in the data lead to better accuracy on the reconstructed spectral parameters, while having almost no effect on the correlations among these parameters. In detail, the symmetrized fractional uncertainties are updated to \(\pm 6.4\%\), \(\pm 2.3\%\) and \(\pm 2.6\%\) for the \(\nu_{e}\) flavour, \(\pm 1.5\%\), \(\pm 0.4\%\) and \(\pm 0.5\%\) for the \(\bar{\nu}_{e}\) part and \(\pm 20.1\%\), \(\pm 6.3\%\) and \(\pm 5.8\%\) for the \(\nu_{x}\) component. Note that, for comparison, these percentages are calculated with the new \(2\sigma\) credible intervals (i.e., for \(d=5\) kpc) and the most probable values of the previous case (i.e., for \(d=10\) kpc). Such treatment is also applied in similar comparisons hereafter. In short, the accuracies are universally enhanced by \(40\%\sim 50\%\) among all parameters in this test. #### 3.2.2 Flavour conversions Figure 4 displays the posterior distributions when the oscillation effects are considered under the assumption of NH. The representative values, corresponding to the distributions on the diagonal, are also given in table 3. Still, the best results are obtained for the \(\bar{\nu}_{e}\) flavour, for the same reason as in the case without oscillation effects. Numerically speaking, the symmetrized fractional uncertainties are \(\pm 5.6\%\), \(\pm 1.6\%\) and \(\pm 2.1\%\) for \(\alpha\), \(\langle E\rangle\) and \(\mathcal{E}\), respectively, at a credible level of \(2\sigma\). They become slightly worse, due to the partial conversion in eq. (6). In this flavour conversion mode, the \(\nu_{e}\) events and \(\sim 30\%\) of the \(\bar{\nu}_{e}\) events in the detectors now carry the information on the original \(\nu_{x}\) component. 
Thus, the results for the \(\nu_{x}\) component are much better after combining information from all four channels. The uncertainties are \(\pm 10.5\%\), \(\pm 3.8\%\) and \(\pm 4.2\%\), even slightly better than the \(\nu_{e}\) results in the case of no oscillation. In contrast, the precision for \(\nu_{e}\) is now rather poor, with uncertainties of only \(\pm 45.1\%\), \(\pm 15.2\%\) and \(\pm 24.4\%\). \begin{table} \begin{tabular}{c c c c c c c c c c c} \hline \multirow{2}{*}{Osc} & \multirow{2}{*}{estimate} & \multicolumn{3}{c}{\(\alpha\)} & \multicolumn{3}{c}{\(\langle E\rangle\) [MeV]} & \multicolumn{3}{c}{\(\mathcal{E}\) [\(10^{52}\) erg]} \\ \cline{3-11} & & \(\nu_{e}\) & \(\bar{\nu}_{e}\) & \(\nu_{x}\) & \(\nu_{e}\) & \(\bar{\nu}_{e}\) & \(\nu_{x}\) & \(\nu_{e}\) & \(\bar{\nu}_{e}\) & \(\nu_{x}\) \\ \hline \multirow{4}{*}{NO} & MAP & 2.83 & 3.25 & 2.93 & 14.37 & 16.26 & 17.88 & 7.71 & 6.45 & 5.63 \\ & \(2\sigma^{-}\) & -0.38 & -0.10 & -1.00 & -0.63 & -0.14 & -1.81 & -0.41 & -0.07 & -0.46 \\ & \(2\sigma^{+}\) & +0.32 & +0.08 & +0.96 & +0.61 & +0.12 & +2.02 & +0.34 & +0.05 & +0.77 \\ & \% & 12.4 & 2.8 & 33.4 & 4.3 & 0.8 & 10.7 & 4.9 & 0.9 & 10.9 \\ \hline \multirow{4}{*}{NH} & MAP & 3.48 & 3.12 & 2.37 & 13.84 & 16.09 & 17.45 & 7.95 & 6.46 & 5.86 \\ & \(2\sigma^{-}\) & -1.33 & -0.16 & -0.25 & -1.94 & -0.25 & -0.60 & -1.72 & -0.13 & +0.24 \\ & \(2\sigma^{+}\) & +1.81 & +0.19 & +0.25 & +2.27 & +0.28 & +0.73 & +2.16 & +0.14 & +0.25 \\ & \% & 45.1 & 5.6 & 10.5 & 15.2 & 1.6 & 3.8 & 24.4 & 2.1 & 4.2 \\ \hline \multirow{4}{*}{IH} & MAP & 3.41 & 3.85 & 2.18 & 15.04 & 16.03 & 17.17 & 7.77 & 6.58 & 5.89 \\ & \(2\sigma^{-}\) & -0.96 & -1.74 & -0.07 & -1.39 & -2.95 & -0.18 & -0.86 & -1.46 & -0.06 \\ \cline{1-1} & \(2\sigma^{+}\) & +1.23 & +1.57 & +0.07 & +1.43 & +2.03 & +0.15 & +0.84 & +1.88 & +0.05 \\ \cline{1-1} & \% & 32.1 & 43.0 & 3.2 & 9.4 & 15.5 & 1.0 & 10.9 & 25.4 & 0.9 \\ \hline \end{tabular} \end{table} Table 3: The representative values of the 1-D posterior distributions. NO indicates the case without neutrino oscillation, while NH (IH) represents the case of normal (inverted) mass hierarchy. Gaussian priors are adopted in all cases. The rows denoted MAP give the most probable values of the posteriors, while \((2\sigma^{-},2\sigma^{+})\) show the relative credible intervals at the \(2\sigma\) level of probability. The % rows give the corresponding symmetrized fractional uncertainties. This is because all the information on the \(\nu_{e}\) flavour is extracted from the data of the vPb(NC) reactions in RES-NOVA, and only \(\sim 1/6\) of these data originate from the original \(\nu_{e}\) component. Note that the deviation between the posterior and prior distributions for \(\alpha_{\nu_{e}}\) is rather small, which means the result draws too much information from the prior, instead of the data. It indicates that the constraint on \(\alpha_{\nu_{e}}\) is actually quite limited in this case. The numerical results for the IH conversion are illustrated in figure 5 and table 3. In this conversion mode, the neutrino signals in all reaction channels are mainly coming from the original \(\nu_{x}\) component (see eq. (7) and eq. (8)), which naturally leads to a promising precision for this part. That is, the symmetrized fractional uncertainties are obtained as \(\pm 3.2\%\), \(\pm 1.0\%\) and \(\pm 0.9\%\) for the three parameters, respectively. It should be mentioned that the \(\nu_{x}\) components account for \(\sim 2/3\) of the total neutrinos. 
Figure 4: The same as figure 3, but the oscillation effects with normal mass hierarchy are under evaluation. Hence, it is quite significant to achieve such a high precision in the measurement of this part. However, the price is large uncertainties on the measurements of the other components. The corresponding symmetrized fractional uncertainties are \(\pm 32.1\%\), \(\pm 9.4\%\) and \(\pm 10.9\%\) for the \(\nu_{e}\) flavour, and \(\pm 43.0\%\), \(\pm 15.5\%\) and \(\pm 25.4\%\) for the \(\bar{\nu}_{e}\) part. The situation of the \(\bar{\nu}_{e}\) part is quite similar to that of the \(\nu_{e}\) flavour in the NH conversion. Similarly, the caveat is that the prior distribution provides too much information in the evaluation of \(\alpha_{\bar{\nu}_{e}}\), just like the case of \(\alpha_{\nu_{e}}\) in the NH conversion. Aside from the diagonals, the off-diagonal plots in figure 4 and figure 5 portray the correlations between parameters as 2-dimensional distributions. So as to quantify these correlations, the matrices of correlation coefficients, namely \(\mathbf{V}^{\rm NH}\) and \(\mathbf{V}^{\rm IH}\), are calculated and shown in figure 6, where the value at a given position of a matrix corresponds to the distribution at the same position in the posterior charts. Figure 5: The same as figure 3, but the oscillation effects with inverted mass hierarchy are under evaluation. Apparently, the correlations among the three parameters of one specific species remain the same and, more specifically, another universal hierarchy among the three correlation coefficients emerges as \(|\rho(\alpha,\langle E\rangle)|>|\rho(\langle E\rangle\,,\mathcal{E})|>|\rho(\alpha,\mathcal{E})|\). Such patterns are still controlled by the spectral formalism. On the other hand, different correlation patterns appear between the different oscillation models. In the case of NH, moderate correlations exist among the spectral parameters of the \(\bar{\nu}_{e}\) and \(\nu_{x}\) components. That is, the spectral shape parameters, \(\alpha\) and \(\langle E\rangle\), of the \(\bar{\nu}_{e}\) flux have negative correlations with the corresponding parameters of the \(\nu_{x}\) flux, and so do the total energy parameters, \(\mathcal{E}\). This can be expected from the mixing of these two components, as described in eq. (6). As a consequence, more complicated correlation patterns stem from the two categories of correlations mentioned above (see figure 6(a) and figure 4 for more details). However, it turns out that no such correlations are seen in the case of IH, even though the mixing of the \(\nu_{e}\) and \(\nu_{x}\) components does exist, i.e., in eq. (7). The absence here is ascribed to the different sensitivities to the \(\nu_{e}\) and \(\bar{\nu}_{e}\) species in our detector configurations 16. Such a difference between NH and IH can potentially act as another smoking gun to determine the mass hierarchy in the measurement of the next galactic CCSN 17, although we postpone further estimates to future work. Footnote 16: As a test, the exchange of parameters between the \(\nu_{e}\) flavour and the \(\bar{\nu}_{e}\) component is evaluated again, and the mixing-induced correlations are still missing for IH while clear for NH. The effect is that these correlations become relatively weaker in the NH mode. We also swap the values of \(\sin^{2}\theta_{12}\) and \(\cos^{2}\theta_{12}\), and only see some mild effects on the correlation coefficients (even weaker than in the previous case). 
Footnote 17: When analysing the data with the NH template, a dataset generated with IH will show even stronger mixing-induced correlations than a dataset generated with NH (e.g., the correlation coefficients between \(\alpha_{\bar{\nu}_{e}}\) and \(\langle E_{\nu_{x}}\rangle\) (\(\alpha_{\bar{\nu}_{e}}\) and \(\mathcal{E}_{\nu_{x}}\)) in the two cases are \(-0.78\) vs \(-0.46\) (\(0.64\) vs \(0.30\)), although the impacts on different coefficients can be different). If the analyses are conducted with the IH/NO templates, we see no manifest signals or only rather weak trends. Again, we check the results for \(d=5\) kpc. The correlation patterns for both NH and IH are still robust, with only modest enhancements found in the spectral-induced correlation coefficients of the \(\nu_{e}\) (\(\bar{\nu}_{e}\)) flavour in the NH (IH) conversion. As to the accuracies of the reconstructed parameters, universal improvements of \(40\%\sim 50\%\) are again obtained for the \(\bar{\nu}_{e}\) and \(\nu_{x}\) components in the case of NH, and for the \(\nu_{e}\) and \(\nu_{x}\) components in the case of IH. Figure 6: The matrices of correlation coefficients for NH and IH. Nevertheless, different parameters of the \(\nu_{e}\) component in the NH conversion show different sensitivities to the change of source distance. That is, the accuracy for \(\mathcal{E}_{\nu_{e}}\) is increased by \(\sim 45\%\) in this test, while that for \(\langle E_{\nu_{e}}\rangle\) is only enhanced by \(\sim 15\%\), and the improvement on \(\alpha_{\nu_{e}}\) turns out to be rather weak (\(\sim 4\%\)). The situation is similar for the \(\bar{\nu}_{e}\) flavour in the IH conversion. So the measurement of \(\alpha_{\nu_{e}}\) (\(\alpha_{\bar{\nu}_{e}}\)) in the NH (IH) conversion deserves further investigation. ## 4 Conclusions In this paper, we present the retrieval of the energy spectra of all flavours of supernova neutrinos with Bayesian inference by combining data from multiple detectors. When selecting reaction channels, the collection of the IBD-p and eES reactions in Hyper-K, vAr(CC) in DUNE and vPb(NC) in RES-NOVA is employed in consideration of flavour sensitivity and data statistics. Before analysing the mock data, we quantify the prior knowledge on the energy spectra of supernova neutrinos with modified Gaussian functions. Then, using a Poisson likelihood, we sample the posterior distribution, which has 9 degrees of freedom, and extract the probability distribution of each parameter. Furthermore, the correlation coefficients among the parameters are also estimated and discussed. Assuming a typical source distance (i.e. \(d=10\) kpc) in our Galaxy, our results show that the average energy and individual emitted energy can be determined with an accuracy of a few percent in the normal (inverted) mass hierarchy, except for the \(\nu_{e}\) (\(\bar{\nu}_{e}\)). Especially, those for the heavy flavour neutrinos are reconstructed with a 1% precision under the oscillation effect of the inverted mass hierarchy. The spectral pinching of \(\bar{\nu}_{e}\) (\(\nu_{x}\)) can also be measured to a few percent precision in the normal (inverted) mass hierarchy. In contrast, that of \(\nu_{e}\) (\(\bar{\nu}_{e}\)) is hardly extractable from the data, accordingly. Nevertheless, based on the overall accuracy inferred here, it is interesting to mention that the precise determination of the neutron skin of Lead should be promising through nearby galactic supernova neutrino detection in RES-NOVA, as proposed in our previous work [23]. 
For future studies, an effective way to enhance the capability of our method is to further improve the flavour-blind sensitivity of the channel collection (e.g. higher statistics or extra sensitivity to neutrinos with energies below 10 MeV). For instance, the neutral current scatterings on \({}^{16}\)O in Hyper-K can provide valuable information in the low energy region (i.e., \(5\sim 10\) MeV), while the pES reaction in JUNO and neutral current scattering on Ar (\(\nu+\text{Ar}\rightarrow\nu+\text{Ar}^{*}\)) in DUNE (if available) will offer more events in the relatively higher energy range. It is also worth mentioning that next-generation large-scale dark matter detectors will render complementary information in such studies (see, e.g., refs. [73, 74]). Furthermore, our analyses indicate that there exist two categories of correlations among the parameters: spectral-induced correlations and mixing-induced correlations. The former are encoded in the formalism of the neutrino flux, while the latter derive from the complementary effects of neutrino mixing and detector configurations. Such correlations potentially offer new ways to extract information from the data more efficiently, via specific combinations of spectral parameters. It is also possible to solve the mass hierarchy problem by analysing the mixing-induced correlations. However, more realistic oscillation models should be included for real observations, e.g., non-adiabatic oscillations, collective oscillations and the Earth matter effect. The investigation of these issues is left to future work. ## Appendix A Flat prior We replace the Gaussian distributions (see, e.g., eq. (3.3)) with flat distributions, whose parameter spaces are restricted to the \(3\sigma\) regions of the Gaussian distributions, in the analysis. Considering no neutrino oscillation, the results are presented in table 4 and figure 7. Generally speaking, the posterior distributions are quite similar to those of the Gaussian priors (see, e.g., table 3 and figure 3). The results for the \(\bar{\nu}_{e}\) flavour remain almost the same, due to the highly informative dataset offered by the IBD-p reaction in Hyper-K. The influence on the extraction of the \(\nu_{e}\) part is also tiny, i.e., only an increase of \(\sim 0.3\%\) on the \(2\sigma\) symmetrized fractional uncertainty. However, such a replacement shows a relatively noticeable impact on the retrieval of the \(\nu_{x}\) component, namely an increase of 10.1% on \(\alpha\) and an enlargement of \(\sim 2.5\%\) on \(\langle E\rangle\) and \(\mathcal{E}\). Such consequences are totally reasonable, and confirm the previous statement that the more informative the dataset is, the less dependence the posterior will show on the prior. Note that these priors can be further updated according to future developments in the modelling of stellar core collapse. ## Acknowledgments We are grateful to Ming-chung Chu for useful comments. X.-R. Huang acknowledges support from Shanghai Jiao Tong University via the Fellowship of Outstanding PhD Graduates. This work was supported in part by the National Natural Science Foundation of China under Grant Nos. 12235010 and 11625521, and the National SKA Program of China No. 2020SKA0120300. Note added. The data and code underlying this article will be shared on reasonable request.
We implement Bayesian inference to extract the energy spectra of all species of neutrinos from a galactic core-collapse supernova (CCSN). To achieve high statistics and full sensitivity to all neutrino flavours, we combine reaction channels from different large-scale neutrino observatories, namely inverse beta decay in Hyper-Kamiokande (Hyper-K), charged-current absorption on argon in the Deep Underground Neutrino Experiment (DUNE), and coherent elastic scattering on lead in RES-NOVA. Assuming a source at a typical galactic distance of 10 kpc, we generate mock data for each channel through Poisson processes based on the predictions, and then use Bayes' theorem to evaluate the probability distributions of all spectral parameters of the theoretical neutrino spectrum model.
2307.00064
Situated Cameras, Situated Knowledges: Towards an Egocentric Epistemology for Computer Vision
In her influential 1988 paper, Situated Knowledges, Donna Haraway uses vision and perspective as a metaphor to discuss scientific knowledge. Today, egocentric computer vision discusses many of the same issues, except in a literal vision context. In this short position paper, we collapse that metaphor, and explore the interactions between feminist epistemology and egocentric CV as "Egocentric Epistemology." Using this framework, we argue for the use of qualitative, human-centric methods as a complement to performance benchmarks, to center both the literal and metaphorical perspective of human crowd workers in CV.
Samuel Goree, David Crandall
2023-06-30T18:07:26
http://arxiv.org/abs/2307.00064v1
# Situated Cameras, Situated Knowledges: Towards an Egocentric Epistemology for Computer Vision ###### Abstract In her influential 1988 paper, Situated Knowledges, Donna Haraway uses vision and perspective as a metaphor to discuss scientific knowledge. Today, egocentric computer vision discusses many of the same issues, except in a literal vision context. In this short position paper, we collapse that metaphor, and explore the interactions between feminist epistemology and egocentric CV as "Egocentric Epistemology." Using this framework, we argue for the use of qualitative, human-centric methods as a complement to performance benchmarks, to center both the literal and metaphorical perspective of human crowd workers in CV. ## 1 Introduction In Computer Vision (CV), egocentric vision is meant very literally: processing imagery taken from a camera on a human [7], robot [19], animal [18] or car [24]. This mode of data collection creates many challenging scientific and engineering problems, particularly because unlike photography-based CV, where the camera is outside the scene and imagery is taken deliberately, egocentric data comes from a camera within the scene, captured automatically [1, 11]. In this paper, however, we consider egocentric vision more metaphorically and think about its theoretical foundations, particularly its _epistemology_ -- its theory of knowledge and knowing. We believe that ongoing crises regarding bias, unfairness and injustice throughout artificial intelligence [20] motivate critical epistemological investigation. In other words, we may be able to avoid problems related to bias, unfairness and injustice by developing a more robust theory of what we know about the functionality of our models and algorithms and how we know it. We believe there is a natural theoretical match for egocentric CV in feminist1 epistemology, particularly Donna Haraway's theory of situated knowledges [14]. This match is interesting because Haraway uses vision as a metaphor to talk about science. By collapsing this metaphor, we arrive at a natural fit for the technical reality of egocentric vision: just as egocentric vision moves the camera into the scene, an egocentric epistemology would move evaluation into the world of our participants, yielding epistemic power to them. Footnote 1: By feminist, we refer to the philosophical tradition which emerges from feminist studies. We are not discussing sexism in computer vision. ## 2 _Situated Knowledges_ In an influential 1988 paper [14], Donna Haraway discusses scientific objectivity and its fraught relationship with feminism. Many of the central questions of feminist theory involve questioning scientific "facts" -- particularly those about women and their inferiority to men. Some feminists cite these conflicts as justification to throw out scientific inquiry itself as biased, but Haraway disagrees. She does not want a feminist critique of science to serve as "one more excuse for [feminists] not learning any post-Newtonian physics." Instead, she seeks to find a feminist way of thinking which admits both real scientific knowledge as well as arguments against sexist findings. To reconcile these perspectives, Haraway employs vision as a metaphor. She observes that science, when it separates a "view" of the world from the way that it was captured, performs a "god trick" -- pretending that an observer's limited view can actually see everything from an omniscient god's-eye view. 
But all vision -- human, animal or machine -- is actually situated, limited and partial. Figure 1: Just as egocentric CV places the camera in the scene, we propose placing evaluation among the research participants. We cannot see distant stars, bacteria or atoms as they truly are; we can only see them as they are captured by cameras, telescopes or other sensors and processed through data analysis systems to produce images designed specifically for our eyes. In other words, humans and our hybrid technological-biological vision systems are always part of the universe observing itself. Haraway argues that acknowledging the embodied and situated reality of our vision does not make the science we do on it subjective or relative, but actually makes it _more objective_ because we acknowledge the reality of our knowledge production. In Haraway's words: _"Infinite vision is an illusion, a god trick...We need to learn in our bodies, endowed with primate color and stereoscopic vision, how to attach the objective to our theoretical and political scanners...Objectivity turns out to be about particular and specific embodiment and definitely not about the false vision promising transcendence of all limits and responsibility. The moral is simple: only partial perspective promises objective vision. All Western cultural narratives about objectivity are allegories of the ideologies governing the relations of what we call mind and body, distance and responsibility. Feminist objectivity is about limited location and situated knowledge, not about transcendence and splitting of subject and object. It allows us to become answerable for what we learn how to see."_ Haraway's position is not an attack on scientists, who typically acknowledge the limitations of their instruments and methods. Rather it is a critique of industry, government and the public, who perform god tricks when they treat the findings of scientists as completely true, detached from the limitations of their research methods. ## 3 Two God Tricks in CV Traditional CV engages in two different kinds of god tricks. First, it treats sets of images as objective recordings of reality, detached from the cameras and photographers who take them and the researchers who assemble them. Second, it treats its knowledge about the performance of models and algorithms as objective truth, separate from the data and methods which allow us to evaluate them. The combination of these two practices has had horrible consequences with respect to bias and injustice in CV. Algorithms which are purported to be objective and neutral reflect a dominant American cultural perspective which exists both in their training data, as well as in the academic-industrial research system which produces the models and algorithms [8]. Many CV systems have substantial limitations: they are only ever approximately correct, only have limited knowledge of the world and faithfully reproduce the biases, both good and bad, of their training data [5]. While researchers often identify these limitations, the science enterprise and broader public have been conditioned by futurological interpretations of science fiction to treat these systems as simultaneously human-like [16] and omniscient, indulging the dreams of the military intelligence-industrial complex [23]. Unfortunately, in pursuit of research funding and employment, CV researchers are not incentivized to acknowledge the limitations of their work [13]. 
The problem is not the existence of limitations, but the way the research system performs a god trick and transforms algorithmic tools which provide situated, uncertain knowledge about the world into arbiters of objective truth. For example, a system designed to recognize faces which systematically misidentifies nonwhite and nonmale faces [5] is not inherently harmful. Rather it could become harmful when its high accuracy is equated with objectively good performance, justifying its use in production systems. ### Egocentric Vision Avoids the First God Trick Like Haraway believes there is a way to criticize bias in science without rejecting scientific knowledge, we believe there is a way to criticize bias in CV without rejecting knowledge about technical performance. Egocentric CV is naturally suited towards this reconciliation because it avoids the first god trick: egocentric images are messy: the cameras shake and scenes are often partially obscured. The images usually contain hands [2] and sometimes include other observers who have their own cameras [10]. In contrast, images taken by human photographers usually come from outside the scene they depict. They are well-framed, with objects un-occluded. The photographer can control exposure time and focal length to best represent the scene [17]. Counter-intuitively, egocentric images are often more objective, less authored, views of a scene because they avoid the god trick of the photograph. They depict the world more like it appears to a particular human, not the way a photographer believes it should be depicted. But egocentric vision still takes part in the second god trick. We treat evaluations using quantitative metrics on benchmark datasets as true, a view from above which provides objective evaluation of the relative strengths and weaknesses of our models. However, our performance metrics are more like a photographers' camera: they are designed by CV researchers, sometimes the same researchers designing the models under evaluation. Those researchers make numerous decisions regarding the collection and curation of the data, and define what good performance means, with their own external goals and applications in mind. This approach is not objective, but that is not a bad thing! Just as there is no digital image without a camera or sensor to capture it, there is no problem statement or dataset without a human author and underlying motive. To some extent that is good -- our motives for proposing vision problems ground them in reality and make their solutions valuable. But the task definition overwrites the normative perspective of the person wearing the camera with that of the researchers. ### Avoiding the Second God Trick In the same way that egocentric vision avoids the first god trick by situating cameras on human bodies, it can avoid the second god trick by situating the evaluation process in human experience. Human evaluation has precedent in an earlier period of CV research before the rise of large datasets, when research papers would demonstrate effectiveness by showing sample image results [13]. But rather than leave evaluation to the judgments of conference reviewers, we propose collecting evaluations from the same people who provide our training data. To envision what an egocentric epistemology might look like, we can look to the social sciences. Methods grounded in this tradition are typically qualitative and ethnographic, studying humans by entering their social worlds. 
These methods are not new to computing: feminist and poststructuralist approaches are being explored in HCI [3] and in data science [9]. However, we cannot follow HCI and center the user in CV because our algorithms might be used in many different user-facing applications, or in applications without human users at all. So rather than center the user, we advocate for centering the crowd workers who collect our data. To envision what this approach would look like, we turn to secondary literature which operationalizes Haraway's theory. Bhavnani [4] proposes three criteria: 1. Reinsertion: Does the research method portray the participants as passive and powerless, or does it recast them as active agents? 2. Micropolities: Does the research engage with the political relationships between researcher and participant? 3. Difference: Does the research engage with differences in perspective between participants? We can apply these principles to CV evaluation. For example, in the context of activity recognition on an Epic Kitchens-style dataset [7] we could evaluate several activity detectors by returning to the initial participants captured on video, demonstrate various activity recognition models, discuss how they interpret the participants' behaviors and select a "state of the art." This fits Bhavnani's criteria: 1. Reinsertion: Treat crowd workers as active agents in the research work, rather than passive data producers. 2. Micropolities: Acknowledge the power differential between researchers and crowd workers, and yield governance over the modeling process to those workers. 3. Difference: When participants disagree with each other or with researchers about evaluation, it can teach us about the nuances of the task, and is not just noise. Of course, going back to the same initial crowd workers every time we propose a new model architecture is impractical. Such an approach does not scale with the fast pace of vision research. CV researchers are also not social scientists, and should not have to retrain to carry out our own research. But returning to participants once or twice after collecting data can be a valuable supervisory signal, and counteract the unequal power dynamics involved in data collection in CV [8, 22]. We encourage collaborating with scholars who have qualitative research training when working with human subjects to remain systematic and impartial. While this kind of research is qualitative, it can be highly rigorous. Sometimes participants might say, "well that looks wonderful!" or "no, I don't think computers should do that." While these are legitimate answers, they are ultimately unhelpful. Forcing participants to choose between two alternatives is also unhelpful: we want to know the problems with those alternatives and why they matter. We will often find by talking to people that the problem we are trying to address is not a helpful problem to solve, or that a less technically demanding method is actually more helpful. These are productive research findings! While they do not generate scientific advances in our modeling work, they help us avoid wasting time, effort and GPU-hours on unhelpful problems. Counter-intuitively, by taking part in seemingly inefficient, human-centric research, we can actually make our technical work more impactful, and teach our students to think critically about modeling tasks. 
## 4 An Example: Aesthetic Quality Assessment Driven by these concerns, we have recently investigated adopting an egocentric epistemology in the evaluation of a non-egocentric CV problem: aesthetic quality assessment (AQA), or the task of determining whether a photograph is of high or low aesthetic quality. Over the past decades, benchmarked performance on the Aesthetic Visual Analysis (AVA) dataset [21] has increased steadily, but it remains uncertain whether these models detect aesthetic quality or confounding stylistic factors [12]. To ground evaluation for this problem in the aesthetic experience of human participants, we designed a smartphone-based camera interface without a shutter button. Photos are taken when the response of an AQA model exceeds an adaptive threshold. In ongoing research, we are conducting a series of semi-structured user testing sessions with our interface and four candidate models, inspired by feature and model architectures used over the history of AQA. Participants explore the similarities and differences between models while walking around a public space, seeing which models capture pictures of which objects and scenes. At the end, we review the photos taken by each model, and ask participants to evaluate the models' strengths and weaknesses. These sessions have been highly informative, revealing the biases of each model towards specific patterns, objects or compositions. Further, these sessions have allowed users to give critical feedback on the problem formulation and its assumptions. These aspects are often left unquestioned except by coauthors and peer reviewers. Counter-intuitively, by engaging in human-centric evaluation, we end up with an approach to evaluation which is less limited by the views of CV researchers. ## 5 Conclusion In this position paper, we have discussed Donna Haraway's theory of situated objectivity. We have identified two ways that CV uses "god tricks" to pretend to see the world from nowhere. Egocentric cameras avoid one but fall victim to the other, so we propose an egocentric epistemology which returns evaluation to the site of data collection. Adopting human-centric methods in egocentric vision is only a first step towards a foundation for CV which pre-empts problems related to bias and injustice. No amount of talking to human participants matters unless we believe and act on what they have to say. Additionally, if we only engage with certain kinds of participants (such as the WEIRD participants of psychology [15]), our knowledge will be limited to the perspectives of the people we approach. Giving our participants evaluation power over the modeling process is a step towards counteracting larger problems related to power and surveillance in CV [6], but developing a model using ethical research methods does not guarantee that applications of that model are ethical. Still, we encourage further discussion of power dynamics in CV and ways that we can adjust our research methods to preempt future issues related to bias and injustice in our discipline.
In her influential 1988 paper "Situated Knowledges," Donna Haraway uses vision and perspective as a metaphor to discuss scientific knowledge. Today, egocentric computer vision grapples with many of the same issues, except in a literal vision context. In this short position paper, we collapse that metaphor and explore the interactions between feminist epistemology and egocentric CV as an "egocentric epistemology." Using this framework, we argue for the use of qualitative, human-centric methods as a complement to performance benchmarks, centering the perspective of human crowd workers.
2309.08194
Some remarks on the Cauchy Problem for Schrödinger type equations in Gelfand-Shilov spaces
We consider the Cauchy problem for Schr\"odinger type operators. Under a suitable decay assumption on the imaginary part of the first order coefficients we prove well-posedness of the Cauchy problem in Gelfand-Shilov classes. We also discuss the optimality of our result through some examples.
Alexandre Arias Junior
2023-09-15T06:51:34
http://arxiv.org/abs/2309.08194v1
# Some remarks on the Cauchy problem for Schrodinger type equations in Gelfand-Shilov spaces

###### Abstract.

We consider the Cauchy problem for Schrodinger type operators. Under a suitable decay assumption on the imaginary part of the first order coefficients we prove well-posedness of the Cauchy problem in Gelfand-Shilov classes. We also discuss the optimality of our result through some examples. _2010 Mathematics Subject Classification_: 35G10, 35S05, 35B65, 46F05 _Keywords and phrases_: Schrodinger type equations, Gevrey spaces, Gelfand-Shilov spaces, well-posedness

## 1. **Introduction and results**

We consider the Cauchy problem \[\begin{cases}Su(t,x)=f(t,x),\quad t\in[0,T],x\in\mathbb{R}^{n},\\ u(0,x)=g(x),\qquad x\in\mathbb{R}^{n},\end{cases} \tag{1.1}\] for the operator \[S=D_{t}-\Delta_{x}+\sum_{j=1}^{n}a_{j}(t,x)D_{x_{j}}+b(t,x),\quad t\in[0,T],x\in\mathbb{R}^{n},\] where \(a_{j}(t,x)\in C([0,T];\mathcal{B}^{\infty}(\mathbb{R}^{n}))\), \(j=1,\ldots,n\), and \(\mathcal{B}^{\infty}(\mathbb{R}^{n})\) denotes the space of all complex-valued smooth functions which are bounded in \(\mathbb{R}^{n}\) together with all their derivatives. The operator \(S\) is a typical example of a non-Kowalewskian operator which is not parabolic, and it is known in the literature as a Schrodinger type operator. It is well known that if all the coefficients \(a_{j}\) are real-valued, then the Cauchy problem (1.1) is well-posed in \(L^{2}(\mathbb{R}^{n})\) and in the \(H^{m}(\mathbb{R}^{n})\) spaces, where \(H^{m}(\mathbb{R}^{n})\) stands for the standard \(L^{2}\)-based Sobolev spaces. On the other hand, if some coefficient \(a_{j}\) has a non-identically null imaginary part, then \(L^{2}(\mathbb{R}^{n})\) or \(H^{\infty}(\mathbb{R}^{n}):=\cap_{m\in\mathbb{R}}H^{m}(\mathbb{R}^{n})\) well-posedness may fail. Indeed, when all \(a_{j}(t,x)=a_{j}(x)\), in [5] we find that the inequality \[\sup_{x\in\mathbb{R}^{n},\omega\in S^{n-1}}\left|\int_{0}^{\rho}\sum_{j=1}^{n}Im\,a_{j}(x+\omega\theta)\omega_{j}d\theta\right|\leq M\log(1+\rho)+N,\quad\forall\,\rho\geq 0, \tag{1.2}\] is a necessary condition for well-posedness in \(H^{\infty}(\mathbb{R}^{n})\). Setting \(M=0\) in (1.2) we obtain a necessary condition for well-posedness in \(L^{2}(\mathbb{R}^{n})\), see [9]. **Remark 1**.: _When \(n=1\) the inequality (1.2) turns out to be also a sufficient condition for \(H^{\infty}\) well-posedness, see [6]. Besides, if \(M=0\) and \(n=1\), then (1.2) is a sufficient condition for \(L^{2}\) well-posedness, see [9]._ Imposing the following decay condition on the imaginary parts of the coefficients \(a_{j}\) \[|Im\,a_{j}(t,x)|\leq C\langle x\rangle^{-\sigma},\,j=1,\ldots,n,\quad\text{where }\langle x\rangle^{2}:=1+|x|^{2}, \tag{1.3}\] for some \(\sigma\in(0,1)\) and \(C>0\), in view of the aforementioned necessary condition, in general neither \(L^{2}\) nor \(H^{\infty}\) well-posedness for the problem (1.1) can be expected. In fact, this type of decay assumption leads one to investigate the problem in suitable Gevrey classes of functions. For a given \(\theta>1\) we shall denote by \(G^{\theta}(\mathbb{R}^{n})\) the set of all smooth functions \(f(x)\) such that \[\sup_{x\in\mathbb{R}^{n},\beta\in\mathbb{N}_{0}^{n}}|\partial_{x}^{\beta}f(x)|h^{-|\beta|}\beta!^{-\theta}<\infty,\] for some \(h>0\). The space \(G^{\theta}(\mathbb{R}^{n})\) is commonly referred to as the space of uniform Gevrey functions of order \(\theta\). 
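To see concretely how a decay of the form (1.3) conflicts with (1.2), consider the following purely illustrative check (not taken from [5]): let \(n=1\) and \(Im\,a_{1}(x)=\langle x\rangle^{-\sigma}\) with \(\sigma\in(0,1)\). Taking \(x=0\) and \(\omega=1\) in (1.2) we find, for \(\rho\geq 1\),
\[\int_{0}^{\rho}\langle\theta\rangle^{-\sigma}\,d\theta\geq 2^{-\frac{\sigma}{2}}\int_{1}^{\rho}\theta^{-\sigma}\,d\theta=\frac{2^{-\frac{\sigma}{2}}}{1-\sigma}\left(\rho^{1-\sigma}-1\right),\]
which grows like \(\rho^{1-\sigma}\) and therefore cannot be dominated by \(M\log(1+\rho)+N\). This is exactly the situation in which the Gevrey and Gelfand-Shilov framework described below becomes the natural setting.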
Next, for \(\rho>0\) we consider \[H^{0}_{\rho;\theta}(\mathbb{R}^{n}):=\{u\in\mathscr{S}^{\prime}(\mathbb{R}^{n}):e^{\rho\langle D\rangle^{\frac{1}{\theta}}}u\in L^{2}(\mathbb{R}^{n})\},\] where \(\mathscr{S}^{\prime}(\mathbb{R}^{n})\) stands for the space of tempered distributions and \(e^{\rho\langle D\rangle^{\frac{1}{\theta}}}\) denotes the Fourier multiplier with symbol \(e^{\rho\langle\xi\rangle^{\frac{1}{\theta}}}\). The space \(H^{0}_{\rho;\theta}(\mathbb{R}^{n})\) endowed with the inner product \[\langle u,v\rangle_{H^{0}_{\rho;\theta}}:=\langle e^{\rho\langle D\rangle^{\frac{1}{\theta}}}u,e^{\rho\langle D\rangle^{\frac{1}{\theta}}}v\rangle_{L^{2}},\quad u,v\in H^{0}_{\rho;\theta}(\mathbb{R}^{n}),\] is a Hilbert space and we denote the induced norm by \(\|\cdot\|_{H^{0}_{\rho;\theta}}\). We obviously have \(H^{0}_{\rho;\theta}(\mathbb{R}^{n})\subset H^{\infty}(\mathbb{R}^{n})\) and it is possible to prove that \(H^{0}_{\rho;\theta}(\mathbb{R}^{n})\subset G^{\theta}(\mathbb{R}^{n})\). The functional setting where the Gevrey well-posedness results take place is given by \[H^{\infty}_{\theta}(\mathbb{R}^{n})=\bigcup_{\rho>0}H^{0}_{\rho;\theta}(\mathbb{R}^{n}).\] **Definition 1** (\(H^{\infty}_{\theta}\) well-posedness).: _We say that the Cauchy problem (1.1) is \(H^{\infty}_{\theta}\) well-posed when for any given \(\rho>0\) there exist \(\tilde{\rho}>0\) and a constant \(C:=C(\rho,T)>0\) such that for all \(f\in C([0,T];H^{0}_{\rho;\theta}(\mathbb{R}^{n}))\) and \(g\in H^{0}_{\rho;\theta}(\mathbb{R}^{n})\) there exists a unique solution \(u\in C^{1}([0,T];H^{0}_{\tilde{\rho};\theta}(\mathbb{R}^{n}))\) and the following energy inequality holds_ \[\|u(t,\cdot)\|^{2}_{H^{0}_{\tilde{\rho};\theta}}\leq C\left\{\|g\|^{2}_{H^{0}_{\rho;\theta}}+\int_{0}^{t}\|f(\tau,\cdot)\|^{2}_{H^{0}_{\rho;\theta}}d\tau\right\}. \tag{1.4}\] In [7] (see Theorem 1.1) the authors proved the following result. **Theorem 1**.: _Assume that the coefficients \(a_{j}\) and \(b\) satisfy_ \[|\partial_{x}^{\beta}a_{j}(t,x)|+|\partial_{x}^{\beta}b(t,x)|\leq CA^{|\beta|}\beta!^{\theta_{0}},\quad t\in[0,T],x\in\mathbb{R}^{n},\beta\in\mathbb{N}_{0}^{n},\] _for some constants \(C,A>0\) and for some \(\theta_{0}>1\). Assume moreover that condition (1.3) is fulfilled for some \(\sigma\in(0,1)\). Then, if \(\theta\in[\theta_{0},\frac{1}{1-\sigma})\) the Cauchy problem (1.1) is \(H^{\infty}_{\theta}\) well-posed._ In the limit case \(\theta=\frac{1}{1-\sigma}\) local in time \(H^{\infty}_{\theta}\) well-posedness for (1.1) holds. We point out that the upper bound \(\frac{1}{1-\sigma}\) for the indices \(\theta\) in the above theorem is sharp. Indeed, if we consider the model operator in one space dimension \[M=D_{t}+D_{x}^{2}+i\langle x\rangle^{-\sigma}D_{x}, \tag{1.5}\] then Theorem 2 of [3] implies \(H^{\infty}_{\theta}\) ill-posedness for the Cauchy problem associated with (1.5) when \(\theta>\frac{1}{1-\sigma}\). In this paper we are interested in the well-posedness of (1.1) in the so-called Gelfand-Shilov spaces. These spaces were introduced by I.M. Gelfand and G.E. Shilov in [4] as part of a larger class called spaces of type \(\mathcal{S}\). More precisely, for \(\theta>1\) and \(s\geq 1\) we denote by \(\mathcal{S}^{\theta}_{s}(\mathbb{R}^{n})\) the space of all smooth functions \(f(x)\) such that \[\sup_{x\in\mathbb{R}^{n},\beta\in\mathbb{N}_{0}^{n}}|\partial_{x}^{\beta}f(x)|C^{-|\beta|}\beta!^{-\theta}e^{c|x|^{\frac{1}{s}}}<\infty,\] for some \(C,c>0\). 
So, the space \(\mathcal{S}^{\theta}_{s}(\mathbb{R}^{n})\) can be viewed as the space of all uniform Gevrey functions of order \(\theta\) which, together with all their derivatives, decay as \(e^{-c|x|^{\frac{1}{s}}}\) at infinity. It is convenient to describe the Gelfand-Shilov classes in terms of a suitable scale of weighted Sobolev spaces. For \(\rho=(\rho_{1},\rho_{2})\), \(\rho_{1}>0\), \(\rho_{2}>0\) we consider \[H^{0}_{\rho;\theta,s}(\mathbb{R}^{n})=\{u\in\mathscr{S}^{\prime}(\mathbb{R}^{n}):e^{\rho_{2}\langle x\rangle^{\frac{1}{s}}}e^{\rho_{1}\langle D\rangle^{\frac{1}{\theta}}}u\in L^{2}(\mathbb{R}^{n})\}.\] The space \(H^{0}_{\rho;\theta,s}(\mathbb{R}^{n})\) is known as a Gelfand-Shilov Sobolev space and defines a Hilbert space when endowed with the following inner product \[\langle u,v\rangle_{H^{0}_{\rho;\theta,s}}:=\langle e^{\rho_{2}\langle x\rangle^{\frac{1}{s}}}e^{\rho_{1}\langle D\rangle^{\frac{1}{\theta}}}u,e^{\rho_{2}\langle x\rangle^{\frac{1}{s}}}e^{\rho_{1}\langle D\rangle^{\frac{1}{\theta}}}v\rangle_{L^{2}},\quad u,v\in H^{0}_{\rho;\theta,s}(\mathbb{R}^{n}),\] and the induced norm is denoted by \(\|\cdot\|_{H^{0}_{\rho;\theta,s}}\). Then we express the Gelfand-Shilov classes as \[\mathcal{S}^{\theta}_{s}(\mathbb{R}^{n})=\bigcup_{\rho_{1},\rho_{2}>0}H^{0}_{\rho;\theta,s}(\mathbb{R}^{n}).\] Now we precisely define what we mean by \(\mathcal{S}^{\theta}_{s}\) well-posedness. **Definition 2** (\(\mathcal{S}^{\theta}_{s}\) well-posedness).: _We say that the Cauchy problem (1.1) is \(\mathcal{S}^{\theta}_{s}\) well-posed when for any given \(\rho=(\rho_{1},\rho_{2}),\rho_{1},\rho_{2}>0\) there exist \(\tilde{\rho}=(\tilde{\rho}_{1},\tilde{\rho}_{2}),\tilde{\rho}_{1},\tilde{\rho}_{2}>0\) and a constant \(C:=C(\rho,T)>0\) such that for all \(f\in C([0,T];H^{0}_{\rho;\theta,s}(\mathbb{R}^{n}))\) and \(g\in H^{0}_{\rho;\theta,s}(\mathbb{R}^{n})\) there exists a unique solution \(u\in C^{1}([0,T];H^{0}_{\tilde{\rho};\theta,s}(\mathbb{R}^{n}))\) and the following energy inequality holds_ \[\|u(t,\cdot)\|^{2}_{H^{0}_{\tilde{\rho};\theta,s}}\leq C\left\{\|g\|^{2}_{H^{0}_{\rho;\theta,s}}+\int_{0}^{t}\|f(\tau,\cdot)\|^{2}_{H^{0}_{\rho;\theta,s}}d\tau\right\}. \tag{1.6}\] The main result of this manuscript reads as follows. **Theorem 2**.: _Assume that the coefficients \(a_{j}\) and \(b\) satisfy_ \[|\partial_{x}^{\beta}a_{j}(t,x)|+|\partial_{x}^{\beta}b(t,x)|\leq CA^{|\beta|}\beta!^{\theta_{0}},\quad t\in[0,T],x\in\mathbb{R}^{n},\beta\in\mathbb{N}^{n}_{0},\] _for some constants \(C,A>0\) and for some \(\theta_{0}>1\). Assume moreover that condition (1.3) is fulfilled for some \(\sigma\in(0,1)\). Then, if \(\theta\in[\theta_{0},\min\{\frac{1}{1-\sigma},s\})\) the Cauchy problem (1.1) is \(\mathcal{S}^{\theta}_{s}\) well-posed._ **Remark 2**.: _In the limit case \(\theta=\min\{\frac{1}{1-\sigma},s\}\) we obtain local in time \(\mathcal{S}^{\theta}_{s}\) well-posedness._ Now we discuss the constraints on the parameters \(s,\theta\) and \(\sigma\) in Theorem 2 by considering some particular cases. For the model operator (1.5) we state the second main result of this paper. **Proposition 1**.: _If the Cauchy problem for the operator \(M\) given by (1.5) is \(\mathcal{S}^{\theta}_{s}\) well-posed then \(\max\{\frac{1}{\theta},\frac{1}{s}\}\geq 1-\sigma\)._ From Proposition 1 it follows that when \(s\geq\theta\) we cannot allow \(\theta>\frac{1}{1-\sigma}\) in Theorem 2. Now we investigate the case \(s<\theta\). 
For this latter case we consider the Cauchy problem for the Schrodinger operator in one space dimension \[\begin{cases}\{D_{t}+D_{x}^{2}\}u(t,x)=0,\quad t\in[0,T],x\in\mathbb{R},\\ u(0,x)=g(x),\qquad x\in\mathbb{R},\end{cases} \tag{1.7}\] where \(g\in\mathcal{S}^{\theta}_{s}(\mathbb{R})\). The solution of (1.7) can be written explicitly as \[u_{g}(t,x)=\int_{\mathbb{R}}e^{i\xi x}e^{-i\xi^{2}t}\widehat{g}(\xi)\widehat{d}\xi, \tag{1.8}\] where \(\widehat{d}\xi:=(2\pi)^{-1}d\xi\) and \(\widehat{g}\) denotes the Fourier transform of \(g\). The Gevrey regularity of \(u_{g}\) is given by the index \(\theta\), while integration by parts and customary computations give that \(u_{g}\) decays as \(e^{-c|x|^{\frac{1}{\max\{s,\theta\}}}}\) as \(|x|\to\infty\). So, we conclude that \(u_{g}(t,\cdot)\in\mathcal{S}^{\theta}_{\max\{s,\theta\}}(\mathbb{R})\). Thus, we obtain that the Cauchy problem (1.7) is \(\mathcal{S}^{\theta}_{s}\) well-posed when \(s\geq\theta\). Notice that the Fourier transform in \(x\) of \(u_{g}(t,x)\) is given by \(\widehat{u}_{g}(t,\xi)=e^{-i\xi^{2}t}\widehat{g}(\xi)\) and from Theorem 6.1.2 of [10] we have (for any fixed \(t>0\)) \[g\in\mathcal{S}^{\theta}_{s}(\mathbb{R})\iff\widehat{g}\in\mathcal{S}^{s}_{\theta}(\mathbb{R})\quad\text{and}\quad u_{g}(t,x)\in\mathcal{S}^{\theta}_{s}(\mathbb{R})\iff e^{-i\xi^{2}t}\widehat{g}(\xi)\in\mathcal{S}^{s}_{\theta}(\mathbb{R}).\] Hence, if the Cauchy problem (1.7) is well-posed in \(\mathcal{S}^{\theta}_{s}(\mathbb{R})\) then for any fixed \(t>0\) the function \(h(\xi)=e^{-it\xi^{2}}\) defines a multiplier in \(\mathcal{S}^{s}_{\theta}(\mathbb{R})\). The last result of this paper, Proposition 2 below, implies \(\mathcal{S}^{\theta}_{s}\) ill-posedness of (1.7) provided that \(1\leq s<\theta\). As a consequence, we conclude that \(s<\theta\) is not allowed in Theorem 2. **Proposition 2**.: _Let \(1\leq s<\theta\). For any fixed \(t>0\) the function \(h(\xi)=e^{-it\xi^{2}}\) does not define a multiplier in \(\mathcal{S}^{s}_{\theta}(\mathbb{R})\)._ **Remark 3**.: _Another consequence of Proposition 2 is that if \(g\in\mathcal{S}^{\theta}_{s}(\mathbb{R})\) with \(1\leq s<\theta\), then the solution of (1.7) may present a weaker decay at infinity with respect to that of the initial datum._ **Remark 4**.: _In [2], under slightly different assumptions on the coefficients \(a_{j}\) and \(b\), the authors studied the problem (1.1) in Gelfand-Shilov type spaces. They proved that if the initial data belong to a suitable Gelfand-Shilov class, then there exists a solution with a possible exponential growth as \(|x|\to\infty\). There, no well-posedness results in \(\mathcal{S}^{\theta}_{s}\) were achieved._

## 2. Proof of Theorem 2

We begin by performing the conjugation of the operator \(S\) by the exponential \(e^{\delta\langle x\rangle^{\frac{1}{s}}}\), where \(\delta>0\). 
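Before doing so, we record for the reader's convenience the elementary bounds on the weight that will be used below (they follow by direct differentiation and are not part of the original computation):
\[\partial_{x_{j}}\langle x\rangle^{\frac{1}{s}}=\frac{1}{s}\,x_{j}\langle x\rangle^{\frac{1}{s}-2},\qquad|\partial_{x_{j}}\langle x\rangle^{\frac{1}{s}}|\leq\langle x\rangle^{\frac{1}{s}-1}=\langle x\rangle^{-(1-\frac{1}{s})},\]
and, more generally, \(|\partial_{x}^{\beta}\langle x\rangle^{\frac{1}{s}}|\leq C^{|\beta|+1}\beta!\,\langle x\rangle^{\frac{1}{s}-|\beta|}\) for every \(\beta\in\mathbb{N}_{0}^{n}\). In particular, all the terms produced by the conjugation are bounded together with all their derivatives, and the first bound is responsible for the decay rate \(\langle x\rangle^{-(1-\frac{1}{s})}\) appearing in (2.6) below.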
For any \(j=1,\ldots,n\), we have \[e^{\delta\langle x\rangle^{\frac{1}{s}}}\,D_{x_{j}}\,e^{-\delta\langle x\rangle^{\frac{1}{s}}}=D_{x_{j}}-\delta D_{x_{j}}\langle x\rangle^{\frac{1}{s}},\] \[e^{\delta\langle x\rangle^{\frac{1}{s}}}\,\partial_{x_{j}}^{2}\,e^{-\delta\langle x\rangle^{\frac{1}{s}}}=\partial_{x_{j}}^{2}-2\delta\partial_{x_{j}}\langle x\rangle^{\frac{1}{s}}\partial_{x_{j}}-\delta\partial_{x_{j}}^{2}\langle x\rangle^{\frac{1}{s}}+(\delta\partial_{x_{j}}\langle x\rangle^{\frac{1}{s}})^{2}.\] Hence \[e^{\delta\langle x\rangle^{\frac{1}{s}}}\Delta_{x}e^{-\delta\langle x\rangle^{\frac{1}{s}}}=\Delta_{x}-i2\delta\sum_{j=1}^{n}\partial_{x_{j}}\langle x\rangle^{\frac{1}{s}}D_{x_{j}}-\delta\Delta_{x}\langle x\rangle^{\frac{1}{s}}+\delta^{2}\sum_{j=1}^{n}(\partial_{x_{j}}\langle x\rangle^{\frac{1}{s}})^{2}, \tag{2.1}\] \[e^{\delta\langle x\rangle^{\frac{1}{s}}}\left\{\sum_{j=1}^{n}a_{j}(t,x)D_{x_{j}}\right\}e^{-\delta\langle x\rangle^{\frac{1}{s}}}=\sum_{j=1}^{n}a_{j}(t,x)D_{x_{j}}-\delta\sum_{j=1}^{n}a_{j}(t,x)D_{x_{j}}\langle x\rangle^{\frac{1}{s}}. \tag{2.2}\] Therefore, from (2.1) and (2.2) we get \[S_{\delta}:=e^{\delta\langle x\rangle^{\frac{1}{s}}}Se^{-\delta\langle x\rangle^{\frac{1}{s}}}=D_{t}-\Delta_{x}+\sum_{j=1}^{n}a_{j,\delta}(t,x)D_{x_{j}}+b_{\delta}(t,x), \tag{2.3}\] where \[a_{j,\delta}(t,x):=a_{j}(t,x)+i2\delta\partial_{x_{j}}\langle x\rangle^{\frac{1}{s}},\quad j=1,\ldots,n, \tag{2.4}\] \[b_{\delta}(t,x):=b(t,x)+\delta\Delta_{x}\langle x\rangle^{\frac{1}{s}}-\delta^{2}\sum_{j=1}^{n}(\partial_{x_{j}}\langle x\rangle^{\frac{1}{s}})^{2}-\delta\sum_{j=1}^{n}a_{j}(t,x)D_{x_{j}}\langle x\rangle^{\frac{1}{s}}. \tag{2.5}\] Note that the operator \(S_{\delta}\) possesses Gevrey regular coefficients of order \(\theta_{0}\). Besides, the imaginary parts of the first order coefficients satisfy \[|Im\,a_{j,\delta}(t,x)|\leq|Im\,a_{j}(t,x)|+2\delta|\partial_{x_{j}}\langle x\rangle^{\frac{1}{s}}|\leq C\langle x\rangle^{-\min\{\sigma,1-\frac{1}{s}\}}, \tag{2.6}\] for some positive constant \(C>0\) depending on \(\delta,s\) and the imaginary parts of the coefficients \(a_{j}\), \(j=1,\ldots,n\). So, Theorem 1 implies that the auxiliary Cauchy problem \[\begin{cases}S_{\delta}u(t,x)=e^{\delta\langle x\rangle^{\frac{1}{s}}}f(t,x),\quad t\in[0,T],x\in\mathbb{R}^{n},\\ u(0,x)=e^{\delta\langle x\rangle^{\frac{1}{s}}}g(x),\qquad x\in\mathbb{R}^{n},\end{cases} \tag{2.7}\] is \(H^{\infty}_{\theta}\) well-posed, provided that \(\theta\in[\theta_{0},\min\{\frac{1}{1-\sigma},s\})\). Given those preliminaries we are finally ready to prove Theorem 2. Proof.: Let \(\rho=(\rho_{1},\rho_{2})\), with \(\rho_{1},\rho_{2}>0\), and assume that the initial data for (1.1) satisfy \[g\in H^{0}_{\rho;\theta,s}(\mathbb{R}^{n})\quad\text{and}\quad f\in C([0,T];H^{0}_{\rho;\theta,s}(\mathbb{R}^{n})).\] Then, for any \(\delta\in(0,\rho_{2})\) we obtain \[g_{\delta}:=e^{\delta\langle x\rangle^{\frac{1}{s}}}g\in H^{0}_{(\rho_{1},\rho_{2}-\delta);\theta,s}(\mathbb{R}^{n})\quad\text{and}\quad f_{\delta}:=e^{\delta\langle x\rangle^{\frac{1}{s}}}f\in C([0,T];H^{0}_{(\rho_{1},\rho_{2}-\delta);\theta,s}(\mathbb{R}^{n})).\] Since \(\rho_{2}-\delta>0\) we have \(g_{\delta}\in H^{0}_{\rho_{1};\theta}(\mathbb{R}^{n})\) and \(f_{\delta}\in C([0,T];H^{0}_{\rho_{1};\theta}(\mathbb{R}^{n}))\). Under the assumption \(\theta\in[\theta_{0},\min\{\frac{1}{1-\sigma},s\})\) the auxiliary Cauchy problem (2.7) is \(H^{\infty}_{\theta}\) well-posed. 
So, there exists a unique solution \(v\in C^{1}([0,T];H^{0}_{\tilde{\rho}_{1};\theta}(\mathbb{R}^{n}))\) of (2.7) with initial data \(g_{\delta}\) and \(f_{\delta}\) and the solution \(v\) satisfies \[\|v(t,\cdot)\|^{2}_{H^{0}_{\tilde{\rho}_{1};\theta}}\leq C(\rho_{1},T)\left\{\|g_{\delta}\|^{2}_{H^{0}_{\rho_{1};\theta}}+\int_{0}^{t}\|f_{\delta}(\tau,\cdot)\|^{2}_{H^{0}_{\rho_{1};\theta}}d\tau\right\}. \tag{2.8}\] Next we define \(u(t,x)=e^{-\delta\langle x\rangle^{\frac{1}{s}}}v(t,x)\). Then \(u\in C^{1}([0,T];H^{0}_{(\tilde{\rho}_{1},\delta);\theta,s}(\mathbb{R}^{n}))\) and \(u\) solves the Cauchy problem (1.1). Moreover, in view of (2.8) we get \[\|u(t,\cdot)\|^{2}_{H^{0}_{(\tilde{\rho}_{1},\delta);\theta,s}} \leq C(\delta)\|v(t,\cdot)\|^{2}_{H^{0}_{\tilde{\rho}_{1};\theta}}\] \[\leq C(\rho_{1},\delta,T)\left\{\|g_{\delta}\|^{2}_{H^{0}_{\rho_{1};\theta}}+\int_{0}^{t}\|f_{\delta}(\tau,\cdot)\|^{2}_{H^{0}_{\rho_{1};\theta}}d\tau\right\}\] \[\leq C(\rho_{1},\delta,T)\left\{\|g\|^{2}_{H^{0}_{(\rho_{1},\delta);\theta,s}}+\int_{0}^{t}\|f(\tau,\cdot)\|^{2}_{H^{0}_{(\rho_{1},\delta);\theta,s}}d\tau\right\}\] \[\leq C(\rho_{1},\delta,T)\left\{\|g\|^{2}_{H^{0}_{\rho;\theta,s}}+\int_{0}^{t}\|f(\tau,\cdot)\|^{2}_{H^{0}_{\rho;\theta,s}}d\tau\right\},\] for some positive constant \(C(\rho_{1},\delta,T)\) depending on \(\rho_{1}\), \(\delta<\rho_{2}\) and \(T\). In order to conclude the uniqueness of the solution, let \(u_{j}\in C^{1}([0,T];H^{0}_{(\tilde{\rho}_{1},\delta);\theta,s}(\mathbb{R}^{n}))\), \(j=1,2\), with \(\delta<\rho_{2}\), be two solutions of the problem (1.1). Then, for any \(\tilde{\delta}\in(0,\delta)\), \(e^{\tilde{\delta}\langle x\rangle^{\frac{1}{s}}}u_{j}\), \(j=1,2\), are solutions of the Cauchy problem (2.7), with \(\delta\) replaced by \(\tilde{\delta}\), and belong to \(C^{1}([0,T];H^{0}_{\tilde{\rho}_{1};\theta}(\mathbb{R}^{n}))\). From the \(H^{\infty}_{\theta}\) well-posedness of (2.7) we conclude \(e^{\tilde{\delta}\langle x\rangle^{\frac{1}{s}}}u_{1}=e^{\tilde{\delta}\langle x\rangle^{\frac{1}{s}}}u_{2}\), that is, \(u_{1}=u_{2}\).

## 3. Proof of Proposition 1

In this section we deal with the Cauchy problem for the model operator \(M\) given by (1.5), that is, for \(t\in[0,T]\) and \(x\in\mathbb{R}\), \[\begin{cases}\{D_{t}+D_{x}^{2}+i\langle x\rangle^{-\sigma}D_{x}\}u(t,x)=0,\\ u(0,x)=g(x),\end{cases} \tag{3.1}\] where \(\sigma>0\). Following an argument inspired by [1] and [3] we shall prove that if the Cauchy problem (3.1) is \(\mathcal{S}^{\theta}_{s}\) well-posed, then the assumption \(\max\{\frac{1}{\theta},\frac{1}{s}\}<1-\sigma\) leads to a contradiction. Before proceeding with the proof we need to establish some notation and state some results. 
As usual, for any \(m\in\mathbb{R}\), \(S^{m}_{0,0}(\mathbb{R})\) stands for the space of all functions \(p\in C^{\infty}(\mathbb{R}^{2})\) such that for any \(\alpha,\beta\in\mathbb{N}_{0}\) there exists \(C_{\alpha,\beta}>0\) such that \[|\partial_{\xi}^{\alpha}\partial_{x}^{\beta}p(x,\xi)|\leq C_{\alpha,\beta}\langle\xi\rangle^{m}.\] The topology in \(S^{m}_{0,0}(\mathbb{R})\) is induced by the following family of seminorms \[|p|_{\ell}^{(m)}:=\max_{\alpha\leq\ell,\beta\leq\ell}\sup_{x,\xi\in\mathbb{R}}|\partial_{\xi}^{\alpha}\partial_{x}^{\beta}p(x,\xi)|\langle\xi\rangle^{-m},\quad p\in S^{m}_{0,0}(\mathbb{R}),\,\ell\in\mathbb{N}_{0}.\] We associate to every symbol \(p\in S^{m}_{0,0}(\mathbb{R})\) the continuous operator \(p(x,D):\mathscr{S}(\mathbb{R})\to\mathscr{S}(\mathbb{R})\) (Schwartz space of rapidly decreasing functions), known as a pseudodifferential operator, defined by \[p(x,D)u(x)=\int e^{i\xi x}p(x,\xi)\widehat{u}(\xi)\widehat{d}\xi,\quad u\in\mathscr{S}(\mathbb{R}).\] The next result gives the action of operators coming from symbols in \(S^{m}_{0,0}(\mathbb{R})\) on the standard Sobolev spaces \(H^{s}(\mathbb{R})\), \(s\in\mathbb{R}\). For a proof we refer the reader to Theorem 1.6 on page 224 of [8]. **Theorem 3**.: _[Calderon-Vaillancourt] Let \(p\in S^{m}_{0,0}(\mathbb{R})\). Then for any real number \(s\in\mathbb{R}\) there exist \(\ell:=\ell(s,m)\in\mathbb{N}_{0}\) and \(C:=C_{s,m}>0\) such that_ \[\|p(x,D)u\|_{H^{s}(\mathbb{R})}\leq C|p|_{\ell}^{(m)}\|u\|_{H^{s+m}(\mathbb{R})},\quad\forall\,u\in H^{s+m}(\mathbb{R}).\] _Besides, when \(m=s=0\) we can replace \(|p|_{\ell}^{(m)}\) by_ \[\max_{\alpha,\beta\leq 2}\sup_{x,\xi\in\mathbb{R}}|\partial_{\xi}^{\alpha}\partial_{x}^{\beta}p(x,\xi)|.\] Now we consider the algebra properties of \(S^{m}_{0,0}(\mathbb{R})\) with respect to the composition of operators. Let \(p_{j}\in S^{m_{j}}_{0,0}(\mathbb{R})\), \(j=1,2\), and define \[q(x,\xi) =Os-\iint e^{-iy\eta}p_{1}(x,\xi+\eta)p_{2}(x+y,\xi)dyd\eta\] \[=\lim_{\varepsilon\to 0}\iint e^{-iy\eta}p_{1}(x,\xi+\eta)p_{2}(x+y,\xi)e^{-\varepsilon^{2}y^{2}}e^{-\varepsilon^{2}\eta^{2}}dyd\eta. \tag{3.2}\] We often write \(q(x,\xi)=p_{1}(x,\xi)\circ p_{2}(x,\xi)\). Then we have the following theorem (for a proof see Lemma 2.4 on page 69 and Theorem 1.4 on page 223 of [8]). **Theorem 4**.: _Let \(p_{j}\in S^{m_{j}}_{0,0}(\mathbb{R})\), \(j=1,2\), and consider \(q\) defined by (3.2). Then \(q\in S^{m_{1}+m_{2}}_{0,0}(\mathbb{R})\) and \(q(x,D)=p_{1}(x,D)p_{2}(x,D)\). Moreover, the symbol \(q\) has the following asymptotic expansion_ \[q(x,\xi)=\sum_{\alpha<N}\frac{1}{\alpha!}\partial_{\xi}^{\alpha}p_{1}(x,\xi)D_{x}^{\alpha}p_{2}(x,\xi)+r_{N}(x,\xi), \tag{3.3}\] _where_ \[r_{N}(x,\xi)=N\int_{0}^{1}\frac{(1-\theta)^{N-1}}{N!}\,Os-\iint e^{-iy\eta}\partial_{\xi}^{N}p_{1}(x,\xi+\theta\eta)D_{x}^{N}p_{2}(x+y,\xi)dyd\eta\,d\theta,\] _and the seminorms of \(r_{N}\) may be estimated in the following way: for any \(\ell_{0}\in\mathbb{N}_{0}\) there exists \(\ell_{1}:=\ell_{1}(\ell_{0})\in\mathbb{N}_{0}\) such that_ \[|r_{N}|_{\ell_{0}}^{(m_{1}+m_{2})}\leq C_{\ell_{0}}|\partial_{\xi}^{N}p_{1}|_{\ell_{1}}^{(m_{1})}|\partial_{x}^{N}p_{2}|_{\ell_{1}}^{(m_{2})}.\] The last theorem that we recall is the so-called sharp Garding inequality. To state this result we need to define the standard Hormander classes of symbols. 
We say that \(p\in S^{m}(\mathbb{R}^{2})\) if \(p\in C^{\infty}(\mathbb{R}^{2})\) and for any \(\alpha,\beta\in\mathbb{N}_{0}\) there exists \(C_{\alpha,\beta}>0\) such that \[|\partial_{\xi}^{\alpha}\partial_{x}^{\beta}p(x,\xi)|\leq C_{\alpha,\beta}\langle\xi\rangle^{m-\alpha}. \tag{3.4}\] **Theorem 5**.: _[sharp Garding inequality] Let \(p\in S^{m}(\mathbb{R}^{2})\). If \(Re\,p(x,\xi)\geq 0\) for all \(x,\xi\in\mathbb{R}\) then there exists a constant \(C>0\), depending on a finite number of the constants \(C_{\alpha,\beta}\) in (3.4), such that_ \[Re\,\langle p(x,D)u,u\rangle_{L^{2}}\geq-C\|u\|^{2}_{H^{\frac{m-1}{2}}},\quad u\in\mathscr{S}(\mathbb{R}).\] Now we return to the proof of Proposition 1. As we mentioned before, the idea is to argue by contradiction. We then start by defining the main ingredients needed to reach the desired contradiction. Let \(\phi\in G^{\theta}(\mathbb{R})\) be determined by \[\widehat{\phi}(\xi)=e^{-\rho_{0}\langle\xi\rangle^{\frac{1}{\theta}}}, \tag{3.5}\] for some \(\rho_{0}>0\). We claim that such \(\phi\) belongs to \(\mathcal{S}_{1}^{\theta}(\mathbb{R})\). Indeed, in view of Proposition 6.1.7 of [10] we only need to prove that \[\phi(x)=\int e^{i\xi x}\widehat{\phi}(\xi)d\xi\] satisfies \(|\phi(x)|\leq Ce^{-c|x|}\) for some positive constants \(C,c>0\). To verify that, we integrate by parts to get \[x^{\beta}\phi(x)=(-1)^{\beta}\int e^{i\xi x}D_{\xi}^{\beta}e^{-\rho_{0}\langle\xi\rangle^{\frac{1}{\theta}}}d\xi.\] Then, the Faa di Bruno formula and customary factorial inequalities imply \[|x^{\beta}\phi(x)| \leq\int\sum_{j=1}^{\beta}\frac{e^{-\rho_{0}\langle\xi\rangle^{\frac{1}{\theta}}}}{j!}\sum_{\begin{subarray}{c}\beta_{1}+\cdots+\beta_{j}=\beta\\ \beta_{\ell}\geq 1\end{subarray}}\frac{\beta!}{\beta_{1}!\ldots\beta_{j}!}\prod_{\ell=1}^{j}|\rho_{0}\partial_{\xi}^{\beta_{\ell}}\langle\xi\rangle^{\frac{1}{\theta}}|d\xi\] \[\leq C_{\rho_{0}}^{\beta}\beta!,\quad\forall\,x\in\mathbb{R},\,\beta\in\mathbb{N}_{0},\] which allows us to conclude our claim. Hence, there exists \(\rho=(\rho_{1},\rho_{2})\), \(\rho_{1},\rho_{2}>0\) such that \(\phi\in H^{0}_{\rho;\theta,s}(\mathbb{R})\) for all \(s\geq 1\). Next we take an arbitrary sequence \((\sigma_{k})\) of positive real numbers such that \(\sigma_{k}\to\infty\), as \(k\to\infty\), and then we define \[\phi_{k}(x)=e^{-\rho_{2}4^{\frac{1}{s}}\sigma_{k}^{\frac{1}{s}}}\phi(x-4\sigma_{k}),\quad k\in\mathbb{N}_{0}.\] So, \(\phi_{k}\in H^{0}_{\rho;\theta,s}(\mathbb{R})\) for all \(k\) and we have the following estimate: \[\|\phi_{k}\|_{H^{0}_{\rho;\theta,s}}^{2} =\int e^{2\rho_{2}\langle x\rangle^{\frac{1}{s}}}|e^{\rho_{1}\langle D\rangle^{\frac{1}{\theta}}}\phi_{k}(x)|^{2}dx\] \[=\int e^{2\rho_{2}\langle x+4\sigma_{k}\rangle^{\frac{1}{s}}}e^{-2\rho_{2}4^{\frac{1}{s}}\sigma_{k}^{\frac{1}{s}}}|e^{\rho_{1}\langle D\rangle^{\frac{1}{\theta}}}\phi(x)|^{2}dx\] \[\leq\int e^{2\rho_{2}\langle x\rangle^{\frac{1}{s}}}|e^{\rho_{1}\langle D\rangle^{\frac{1}{\theta}}}\phi(x)|^{2}dx\] \[=\|\phi\|_{H^{0}_{\rho;\theta,s}}^{2}=\,\text{constant}.\] That is, the sequence \((\|\phi_{k}\|_{H^{0}_{\rho;\theta,s}})\) is uniformly bounded in \(k\). 
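The inequality in the third line above rests on the following elementary estimate, recorded here for the reader's convenience (it is not part of the original argument):
\[\langle x+4\sigma_{k}\rangle^{\frac{1}{s}}\leq\big(\langle x\rangle+4\sigma_{k}\big)^{\frac{1}{s}}\leq\langle x\rangle^{\frac{1}{s}}+(4\sigma_{k})^{\frac{1}{s}}=\langle x\rangle^{\frac{1}{s}}+4^{\frac{1}{s}}\sigma_{k}^{\frac{1}{s}},\]
where we used that \(\langle x+y\rangle\leq\langle x\rangle+|y|\) and that \(t\mapsto t^{\frac{1}{s}}\) is subadditive on \([0,\infty)\) since \(\frac{1}{s}\leq 1\). Multiplying by \(2\rho_{2}\) and exponentiating gives \(e^{2\rho_{2}\langle x+4\sigma_{k}\rangle^{\frac{1}{s}}}e^{-2\rho_{2}4^{\frac{1}{s}}\sigma_{k}^{\frac{1}{s}}}\leq e^{2\rho_{2}\langle x\rangle^{\frac{1}{s}}}\), which is exactly the normalization built into the definition of \(\phi_{k}\).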
If the problem (3.1) is \(\mathcal{S}^{\theta}_{s}\) well-posed, for every \(k\in\mathbb{N}_{0}\) let \(u_{k}\in C^{1}([0,T];H^{0}_{\tilde{\rho};\theta,s}(\mathbb{R}))\), \(\tilde{\rho}=(\tilde{\rho}_{1},\tilde{\rho}_{2})\) with \(\tilde{\rho}_{1},\tilde{\rho}_{2}>0\), be the solution of (3.1) with initial datum \(\phi_{k}\). From the energy inequality we get \[\|u_{k}(t,\cdot)\|_{L^{2}}\leq\|u_{k}(t,\cdot)\|_{H^{0}_{\tilde{\rho};\theta,s}}\leq C_{T,\rho}\|\phi_{k}\|_{H^{0}_{\rho;\theta,s}}\leq C_{T,\rho}\|\phi\|_{H^{0}_{\rho;\theta,s}}. \tag{3.6}\] Hence, we conclude that the sequence \((\|u_{k}(t)\|_{L^{2}})\) is uniformly bounded with respect to both \(k\in\mathbb{N}_{0}\) and \(t\in[0,T]\). For \(\theta_{h}>1\) close to \(1\) we consider a Gevrey cutoff function \(h\in G^{\theta_{h}}_{0}(\mathbb{R})\) such that \[h(x)=\begin{cases}1,\ |x|\leq\frac{1}{2},\\ 0,\ |x|\geq 1,\end{cases}\] \(\widehat{h}(0)>0\) and \(\widehat{h}(\xi)\geq 0\) for all \(\xi\in\mathbb{R}\). We also consider the sequence of symbols \[w_{k}(x,\xi)=h\left(\frac{x-4\sigma_{k}}{\sigma_{k}}\right)h\left(\frac{\xi-\sigma_{k}}{\frac{1}{4}\sigma_{k}}\right). \tag{3.7}\] Finally, for \(\lambda\in(0,1)\) to be chosen later, \(\theta_{1}>\theta_{h}\) (still close to \(1\)) and \[N_{k}:=\lfloor\sigma_{k}^{\frac{\lambda}{\theta_{1}}}\rfloor=\max\{a\in\mathbb{N}_{0}:a\leq\sigma_{k}^{\frac{\lambda}{\theta_{1}}}\} \tag{3.8}\] we introduce the energy \[E_{k}(t)=\sum_{\alpha\leq N_{k},\beta\leq N_{k}}\frac{1}{(\alpha!\beta!)^{\theta_{1}}}\|w_{k}^{(\alpha\beta)}(x,D)u_{k}(t,x)\|_{L^{2}}=\sum_{\alpha\leq N_{k},\beta\leq N_{k}}E_{k,\alpha,\beta}(t), \tag{3.9}\] where \[w_{k}^{(\alpha\beta)}(x,\xi)=h^{(\alpha)}\left(\frac{x-4\sigma_{k}}{\sigma_{k}}\right)h^{(\beta)}\left(\frac{\xi-\sigma_{k}}{\frac{1}{4}\sigma_{k}}\right).\] **Remark 5**.: _We have \(w_{k}(x,\xi)\in S^{0}_{0,0}(\mathbb{R})\) and the following estimate_ \[|\partial_{x}^{\nu}\partial_{\xi}^{\gamma}w_{k}^{(\alpha\beta)}(x,\xi)|_{\ell}^{(0)}\leq C^{\alpha+\beta+\gamma+\nu+\ell+1}(\alpha!\beta!\gamma!\nu!\ell!^{2})^{\theta_{h}}\sigma_{k}^{-\gamma}\sigma_{k}^{-\nu}, \tag{3.10}\] _for some constant \(C>0\) independent of \(k,\alpha,\beta,\nu\) and \(\gamma\). We also remark that on the support of \(w_{k}\) it holds_ \[\frac{3\sigma_{k}}{4}\leq\xi\leq\frac{5\sigma_{k}}{4}\quad\text{and}\quad 3\sigma_{k}\leq x\leq 5\sigma_{k}.\] _Thus, on the support of \(w_{k}\), \(\xi\) is comparable with \(\sigma_{k}\) and \(x\) is comparable with \(\sigma_{k}\)._ The following two propositions will play a key role in our proof. **Proposition 3**.: _Let the Cauchy problem (3.1) be \(\mathcal{S}^{\theta}_{s}\) well-posed. We then have for all \(t\in[0,T]\) and \(k\in\mathbb{N}_{0}\)_ \[E_{k,\alpha,\beta}(t)\leq C^{\alpha+\beta+1}\{\alpha!\beta!\}^{\theta_{h}-\theta_{1}}, \tag{3.11}\] \[E_{k}(t)\leq C, \tag{3.12}\] _for some positive constant \(C>0\) independent of \(k\) and \(t\)._ Proof.: The Calderon-Vaillancourt theorem, (3.6) and (3.10) imply \[\|w_{k}^{(\alpha\beta)}(x,D)u_{k}\|_{L^{2}} \leq C\|u_{k}\|_{L^{2}}\max_{\gamma,\nu\leq 2}\sup_{x,\xi\in\mathbb{R}}|\partial^{\gamma}_{\xi}\partial^{\nu}_{x}w_{k}^{(\alpha\beta)}(x,\xi)|\] \[\leq C^{\alpha+\beta+1}\{\alpha!\beta!\}^{\theta_{h}}.\] Hence we immediately see that (3.11) holds from the definition of \(E_{k,\alpha,\beta}\) (cf. (3.9)). 
To obtain (3.12) we recall that \(\theta_{1}>\theta_{h}\) and therefore we obtain \[E_{k}(t) =\sum_{\alpha\leq N_{k},\beta\leq N_{k}}E_{k,\alpha,\beta}(t)\] \[<C\underbrace{\sum_{\alpha,\beta<\infty}C^{\alpha+\beta}\{\alpha!\beta!\}^{\theta_{h}-\theta_{1}}}_{=\text{constant}}.\] **Proposition 4**.: _Let the Cauchy problem (3.1) be \(\mathcal{S}^{\theta}_{s}\) well-posed. We then have for all \(t\in[0,T]\) and all \(k\) sufficiently large_ \[\partial_{t}E_{k}(t)\geq\left[c_{1}\sigma_{k}^{1-\sigma}-C\sigma_{k}^{\lambda}\right]E_{k}(t)-C^{N_{k}+1}\sigma_{k}^{C-cN_{k}},\] _for some positive constants \(C,c,c_{1}>0\) independent of \(k\) and \(t\)._ The proof of Proposition 4 is quite technical and long, so, in order to bring the reader as quickly as possible to the proof of Proposition 1, we postpone it to Subsection 3.1. Given all those preliminaries we are finally ready to prove Proposition 1. Proof.: Let us denote by \[A_{k}:=c_{1}\sigma_{k}^{1-\sigma}-C\sigma_{k}^{\lambda},\quad R_{k}=C^{N_{k}+1}\sigma_{k}^{C-cN_{k}}.\] If the parameter \(\lambda<1-\sigma\), since \(\sigma_{k}\to\infty\), we obtain for all \(k\) sufficiently large \[A_{k}\geq\frac{c_{1}}{2}\sigma_{k}^{1-\sigma}.\] In what follows we shall always consider \(k\) sufficiently large. Applying Gronwall's inequality to Proposition 4 we obtain \[E_{k}(t)\geq e^{A_{k}t}\Bigg{\{}E_{k}(0)-R_{k}\int_{0}^{t}e^{-A_{k}\tau}d\tau\Bigg{\}}.\] Hence, for any fixed \(T^{*}\in(0,T]\), \[E_{k}(T^{*})\geq e^{T^{*}\frac{c_{1}}{2}\sigma_{k}^{1-\sigma}}\Bigg{\{}E_{k}(0)-T^{*}R_{k}\Bigg{\}}. \tag{3.13}\] Now we estimate the terms \(R_{k}\) and \(E_{k}(0)\). Recalling that \(N_{k}=\lfloor\sigma_{k}^{\frac{\lambda}{\theta_{1}}}\rfloor\) we may estimate \(R_{k}\) from above in the following way \[R_{k}\leq Ce^{-c\sigma_{k}^{\frac{\lambda}{\theta_{1}}}}. \tag{3.14}\] To estimate \(E_{k}(0)\), denoting by \(\mathcal{F}\) the Fourier transformation, we first write \[E_{k}(0) \geq\|w_{k}(x,D)\phi_{k}\|_{L^{2}(\mathbb{R}_{x})}=\left\|h\left(\frac{x-4\sigma_{k}}{\sigma_{k}}\right)h\left(\frac{D-\sigma_{k}}{\frac{1}{4}\sigma_{k}}\right)\phi_{k}\right\|_{L^{2}(\mathbb{R}_{x})}\] \[=e^{-\rho_{2}4^{\frac{1}{s}}\sigma_{k}^{\frac{1}{s}}}\left\|h\left(\frac{x-4\sigma_{k}}{\sigma_{k}}\right)h\left(\frac{D-\sigma_{k}}{\frac{1}{4}\sigma_{k}}\right)\phi(x-4\sigma_{k})\right\|_{L^{2}(\mathbb{R}_{x})}\] \[=e^{-\rho_{2}4^{\frac{1}{s}}\sigma_{k}^{\frac{1}{s}}}\left\|\mathcal{F}\left[h\left(\frac{x-4\sigma_{k}}{\sigma_{k}}\right)\right](\xi)*h\left(\frac{\xi-\sigma_{k}}{\frac{1}{4}\sigma_{k}}\right)e^{-i\xi 4\sigma_{k}}\widehat{\phi}(\xi)\right\|_{L^{2}(\mathbb{R}_{\xi})}\] \[=e^{-\rho_{2}4^{\frac{1}{s}}\sigma_{k}^{\frac{1}{s}}}\sigma_{k}\left\|e^{-4i\sigma_{k}\xi}\,\widehat{h}(\sigma_{k}\xi)*h\left(\frac{\xi-\sigma_{k}}{\frac{1}{4}\sigma_{k}}\right)e^{-4i\sigma_{k}\xi}\widehat{\phi}(\xi)\right\|_{L^{2}(\mathbb{R}_{\xi})}.\] Thus \[E_{k}^{2}(0)\geq e^{-2\rho_{2}4^{\frac{1}{s}}\sigma_{k}^{\frac{1}{s}}}\sigma_{k}^{2}\int_{\mathbb{R}_{\xi}}\left|\int_{\mathbb{R}_{\eta}}\widehat{h}(\sigma_{k}(\xi-\eta))h\left(\frac{\eta-\sigma_{k}}{\frac{1}{4}\sigma_{k}}\right)\widehat{\phi}(\eta)d\eta\right|^{2}d\xi.\] Since \(\widehat{h}(0)>0\) and \(\widehat{h}(\xi)\geq 0\) for all \(\xi\in\mathbb{R}\), we get an estimate from below for \(E_{k}^{2}(0)\) by restricting the integration domain. 
Indeed, we set \[G_{1,k} =\left[\sigma_{k}-\frac{\sigma_{k}}{8},\sigma_{k}-\frac{\sigma_{k}}{8}+\sigma_{k}^{-2}\right]\bigcup\left[\sigma_{k}+\frac{\sigma_{k}}{8}-\sigma_{k}^{-2},\sigma_{k}+\frac{\sigma_{k}}{8}\right],\] \[G_{2,k} =\left[\sigma_{k}-\frac{\sigma_{k}}{8}-\sigma_{k}^{-2},\sigma_{k}-\frac{\sigma_{k}}{8}+\sigma_{k}^{-2}\right]\bigcup\left[\sigma_{k}+\frac{\sigma_{k}}{8}-\sigma_{k}^{-2},\sigma_{k}+\frac{\sigma_{k}}{8}+\sigma_{k}^{-2}\right].\] Then \(|\eta-\sigma_{k}|\leq 8^{-1}\sigma_{k}\) for all \(\eta\in G_{1,k}\), and \(\sigma_{k}|\xi-\eta|\leq 2\sigma_{k}^{-1}\) whenever \(\eta\in G_{1,k}\) and \(\xi\in G_{2,k}\) belong to corresponding components of the above unions. So, \[E_{k}^{2}(0) \geq Ce^{-2\rho_{2}4^{\frac{1}{s}}\sigma_{k}^{\frac{1}{s}}}\sigma_{k}^{2}e^{-2c_{\rho_{0}}\sigma_{k}^{\frac{1}{\theta}}}\int_{G_{2,k}}\left|\int_{G_{1,k}}d\eta\right|^{2}d\xi\] \[=C\sigma_{k}^{-4}e^{-2\rho_{2}4^{\frac{1}{s}}\sigma_{k}^{\frac{1}{s}}}e^{-2c_{\rho_{0}}\sigma_{k}^{\frac{1}{\theta}}}.\] In this way we get the following estimate \[E_{k}(0)\geq C\sigma_{k}^{-2}e^{-\rho_{2}4^{\frac{1}{s}}\sigma_{k}^{\frac{1}{s}}}e^{-c_{\rho_{0}}\sigma_{k}^{\frac{1}{\theta}}}. \tag{3.15}\] From (3.13), (3.14) and (3.15) we conclude \[E_{k}(T^{*})\geq Ce^{T^{*}\frac{c_{1}}{2}\sigma_{k}^{1-\sigma}}\left[\sigma_{k}^{-2}e^{-\rho_{2}4^{\frac{1}{s}}\sigma_{k}^{\frac{1}{s}}}e^{-c_{\rho_{0}}\sigma_{k}^{\frac{1}{\theta}}}-T^{*}e^{-c\sigma_{k}^{\frac{\lambda}{\theta_{1}}}}\right],\quad\text{for all }T^{*}\in(0,T], \tag{3.16}\] provided that \(\lambda<1-\sigma\) and \(k\) is sufficiently large. Now assume by contradiction that \(\max\{\frac{1}{\theta},\frac{1}{s}\}<1-\sigma\). Then we set \(\lambda=\max\{\frac{1}{\theta},\frac{1}{s}\}+\tilde{\varepsilon}\) with \(\tilde{\varepsilon}>0\) small so that \(\lambda<1-\sigma\). After that, we take \(\theta_{1}\) very close to \(1\) to get \(\frac{\lambda}{\theta_{1}}>\max\{\frac{1}{\theta},\frac{1}{s}\}\). Hence (3.16) implies \[E_{k}(T^{*})\geq C_{2}e^{\tilde{c}_{0}\sigma_{k}^{1-\sigma}}e^{-\tilde{c}_{\rho_{0},\rho_{2}}\sigma_{k}^{\max\{\frac{1}{\theta},\frac{1}{s}\}}}.\] Thus \[E_{k}(T^{*})\geq C_{3}e^{\frac{\tilde{c}_{0}}{2}\sigma_{k}^{1-\sigma}}.\] The latter inequality yields a contradiction, because Proposition 3 provides a uniform upper bound for \(E_{k}(t)\) for all \(t\) and \(k\). 
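For completeness, we record the Gronwall step used in the proof above (a routine verification, added for the reader's convenience): from Proposition 4,
\[\partial_{t}\big(e^{-A_{k}t}E_{k}(t)\big)=e^{-A_{k}t}\big(\partial_{t}E_{k}(t)-A_{k}E_{k}(t)\big)\geq-R_{k}e^{-A_{k}t},\]
and integrating from \(0\) to \(t\) gives \(e^{-A_{k}t}E_{k}(t)\geq E_{k}(0)-R_{k}\int_{0}^{t}e^{-A_{k}\tau}d\tau\), which is the inequality preceding (3.13). For the values of \(k\) for which the bracket is nonnegative (which, by (3.14) and (3.15), is the case for \(k\) large), one may replace \(A_{k}\) by its lower bound \(\frac{c_{1}}{2}\sigma_{k}^{1-\sigma}\) and \(\int_{0}^{T^{*}}e^{-A_{k}\tau}d\tau\) by \(T^{*}\), which gives (3.13).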
### 3.1. Proof of Proposition 4

For the sake of brevity we denote \[v_{k}^{(\alpha\beta)}(t,x)=w_{k}^{(\alpha\beta)}(x,D)u_{k}(t,x).\] Then we have (here \(S:=D_{t}+D_{x}^{2}+i\langle x\rangle^{-\sigma}D_{x}\) denotes the operator in (3.1)) \[Sv_{k}^{(\alpha\beta)}=Sw_{k}^{(\alpha\beta)}u_{k}=w_{k}^{(\alpha\beta)}\underbrace{Su_{k}}_{=0}+[S,w_{k}^{(\alpha\beta)}]u_{k}=[S,w_{k}^{(\alpha\beta)}]u_{k}=:f_{k}^{(\alpha\beta)}.\] To obtain an estimate from below for \(\partial_{t}E_{k}\) we observe \[\|v_{k}^{(\alpha\beta)}\|_{L^{2}(\mathbb{R})}\partial_{t}\|v_{k}^{(\alpha\beta)}\|_{L^{2}(\mathbb{R})} =\frac{1}{2}\partial_{t}\{\|v_{k}^{(\alpha\beta)}\|_{L^{2}(\mathbb{R})}^{2}\}=\operatorname{Re}\left\langle\partial_{t}v_{k}^{(\alpha\beta)},v_{k}^{(\alpha\beta)}\right\rangle_{L^{2}(\mathbb{R})} \tag{3.17}\] \[=\operatorname{Re}\left\langle iSv_{k}^{(\alpha\beta)},v_{k}^{(\alpha\beta)}\right\rangle_{L^{2}(\mathbb{R})}+\operatorname{Re}\left\langle\left\langle x\right\rangle^{-\sigma}D_{x}v_{k}^{(\alpha\beta)},v_{k}^{(\alpha\beta)}\right\rangle_{L^{2}(\mathbb{R})}\] \[\geq-\|f_{k}^{(\alpha\beta)}\|_{L^{2}(\mathbb{R})}\|v_{k}^{(\alpha\beta)}\|_{L^{2}(\mathbb{R})}+\operatorname{Re}\left\langle\left\langle x\right\rangle^{-\sigma}D_{x}v_{k}^{(\alpha\beta)},v_{k}^{(\alpha\beta)}\right\rangle_{L^{2}(\mathbb{R})}.\] Next, our idea is to estimate \(\operatorname{Re}\left\langle\left\langle x\right\rangle^{-\sigma}D_{x}v_{k}^{(\alpha\beta)},v_{k}^{(\alpha\beta)}\right\rangle_{L^{2}(\mathbb{R})}\) from below and \(\|f_{k}^{(\alpha\beta)}\|_{L^{2}(\mathbb{R})}\) from above. In the following we shall derive such estimates. From now on we denote by \(C\) a possibly large positive constant which does not depend on \(\alpha,\beta,N_{k}\) and \(k\).

#### 3.1.1. Estimate from below for \(Re\left\langle\left\langle x\right\rangle^{-\sigma}D_{x}v_{k}^{(\alpha\beta)},v_{k}^{(\alpha\beta)}\right\rangle_{L^{2}(\mathbb{R})}\)

We first consider the following cut-off functions \[\chi_{k}(\xi)=h\left(\frac{\xi-\sigma_{k}}{\frac{3}{4}\sigma_{k}}\right),\quad\psi_{k}(x)=h\left(\frac{x-4\sigma_{k}}{3\sigma_{k}}\right). \tag{3.18}\] On the support of \(\psi_{k}(x)\chi_{k}(\xi)\) we have for all \(k\in\mathbb{N}_{0}\) \[\frac{\sigma_{k}}{4}\leq\xi\leq\frac{7\sigma_{k}}{4}\quad\text{and}\quad\sigma_{k}\leq x\leq 7\sigma_{k}.\] Therefore \[\xi\langle x\rangle^{-\sigma}\geq\frac{7^{-\sigma}}{4}\langle\sigma_{k}\rangle^{-\sigma}\sigma_{k},\] for every \((x,\xi)\in\operatorname{supp}\psi_{k}(x)\chi_{k}(\xi)\). Denoting \(c_{0}=\frac{7^{-\sigma}}{4}\), we decompose the symbol of \(\langle x\rangle^{-\sigma}D_{x}\) as follows \[\langle x\rangle^{-\sigma}\xi =c_{0}\langle\sigma_{k}\rangle^{-\sigma}\sigma_{k}+\langle x\rangle^{-\sigma}\xi-c_{0}\langle\sigma_{k}\rangle^{-\sigma}\sigma_{k}\] \[=c_{0}\langle\sigma_{k}\rangle^{-\sigma}\sigma_{k}+\{\langle x\rangle^{-\sigma}\xi-c_{0}\langle\sigma_{k}\rangle^{-\sigma}\sigma_{k}\}\psi_{k}(x)\chi_{k}(\xi)\] \[+\{\langle x\rangle^{-\sigma}\xi-c_{0}\langle\sigma_{k}\rangle^{-\sigma}\sigma_{k}\}\{1-\psi_{k}(x)\chi_{k}(\xi)\}\] \[=I_{1,k}+I_{2,k}(x,\xi)+I_{3,k}(x,\xi).\] In the following we explain how to estimate \(Re\left\langle I_{\ell,k}(x,D)v_{k}^{(\alpha\beta)},v_{k}^{(\alpha\beta)}\right\rangle_{L^{2}(\mathbb{R})}\), \(\ell=1,2,3\).

- \(Re\left\langle I_{1,k}v_{k}^{(\alpha\beta)},v_{k}^{(\alpha\beta)}\right\rangle_{L^{2}(\mathbb{R})}\). 
For \(I_{1,k}\) we simply have \[\operatorname{Re}\left\langle I_{1,k}v_{k}^{(\alpha\beta)},v_{k}^{(\alpha\beta)}\right\rangle_{L^{2}(\mathbb{R})} =c_{0}\langle\sigma_{k}\rangle^{-\sigma}\sigma_{k}\|v_{k}^{(\alpha\beta)}\|_{L^{2}(\mathbb{R})}^{2} \tag{3.19}\] \[\geq c_{0}2^{-\frac{\sigma}{2}}\sigma_{k}^{1-\sigma}\|v_{k}^{(\alpha\beta)}\|_{L^{2}(\mathbb{R})}^{2}.\]

- \(Re\left\langle I_{2,k}(x,D)v_{k}^{(\alpha\beta)},v_{k}^{(\alpha\beta)}\right\rangle_{L^{2}(\mathbb{R})}\). We have that \(I_{2,k}\) belongs to \(S^{1}(\mathbb{R}^{2})\) with uniform symbol estimates with respect to \(k\). Indeed, since \(x\) and \(\xi\) are comparable with \(\sigma_{k}\) on the support of \(\psi_{k}(x)\chi_{k}(\xi)\) we obtain \[|\partial_{\xi}^{\gamma}\partial_{x}^{\nu}I_{2,k}(x,\xi)| \leq\sum_{\begin{subarray}{c}\gamma_{1}+\gamma_{2}=\gamma\\ \nu_{1}+\nu_{2}=\nu\end{subarray}}\frac{\gamma!\nu!}{\gamma_{1}!\gamma_{2}!\nu_{1}!\nu_{2}!}|\partial_{\xi}^{\gamma_{1}}\partial_{x}^{\nu_{1}}\{\langle x\rangle^{-\sigma}\xi-c_{0}\langle\sigma_{k}\rangle^{-\sigma}\sigma_{k}\}||\partial_{x}^{\nu_{2}}\psi_{k}(x)\partial_{\xi}^{\gamma_{2}}\chi_{k}(\xi)|\] \[\leq\sum_{\begin{subarray}{c}\gamma_{1}+\gamma_{2}=\gamma\\ \nu_{1}+\nu_{2}=\nu\end{subarray}}\frac{\gamma!\nu!}{\gamma_{1}!\gamma_{2}!\nu_{1}!\nu_{2}!}C^{\gamma_{1}+\nu_{1}+1}\gamma_{1}!\nu_{1}!\langle\xi\rangle^{1-\gamma_{1}}\langle x\rangle^{-\sigma-\nu_{1}}C^{\gamma_{2}+\nu_{2}+1}\{\gamma_{2}!\nu_{2}!\}^{\theta_{h}}\sigma_{k}^{-\gamma_{2}}\sigma_{k}^{-\nu_{2}}\] \[\leq C^{\gamma+\nu+1}\{\gamma!\nu!\}^{\theta_{h}}\langle\xi\rangle^{1-\gamma}\langle x\rangle^{-\sigma-\nu}.\] Besides, from the choice of \(c_{0}\) we easily conclude that \(I_{2,k}(x,\xi)\geq 0\) for all \(x,\xi\in\mathbb{R}\). The sharp Garding inequality then gives \[Re\left\langle I_{2,k}(x,D)v_{k}^{(\alpha\beta)},v_{k}^{(\alpha\beta)}\right\rangle_{L^{2}(\mathbb{R})}\geq-C\|v_{k}^{(\alpha\beta)}\|_{L^{2}(\mathbb{R})}^{2}. \tag{3.20}\]

- \(Re\left\langle I_{3,k}(x,D)v_{k}^{(\alpha\beta)},v_{k}^{(\alpha\beta)}\right\rangle_{L^{2}(\mathbb{R})}\). 
Since the supports of \(w_{k}^{(\alpha\beta)}(x,\xi)\) and of \(1-\psi_{k}(x)\chi_{k}(\xi)\) are disjoint, applying formula (3.3) for \(N:=N_{k}\) (cf. (3.8)) we may write \[I_{3,k}(x,\xi)\circ w_{k}^{(\alpha\beta)}(x,\xi)=R_{k}^{(\alpha\beta)}(x,\xi),\] where \[R_{k}^{(\alpha\beta)}(x,\xi)=N_{k}\int_{0}^{1}\frac{(1-\theta)^{N_{k}-1}}{N_{k}!}\,Os-\iint e^{-iy\eta}\partial_{\xi}^{N_{k}}I_{3,k}(x,\xi+\theta\eta)D_{x}^{N_{k}}w_{k}^{(\alpha\beta)}(x+y,\xi)dyd\eta\,d\theta.\] The seminorms of \(R_{k}^{(\alpha\beta)}\) can be estimated in the following way: for every \(\ell_{0}\in\mathbb{N}_{0}\) there exists \(\ell_{1}=\ell_{1}(\ell_{0})\) such that \[|R_{k}^{(\alpha\beta)}|_{\ell_{0}}^{(0)}\leq C(\ell_{0})\frac{N_{k}}{N_{k}!}|\partial_{\xi}^{N_{k}}I_{3,k}|_{\ell_{1}}^{(0)}|\partial_{x}^{N_{k}}w_{k}^{(\alpha\beta)}|_{\ell_{1}}^{(0)}.\] From Remark 5 we get \[|\partial_{x}^{N_{k}}w_{k}^{(\alpha\beta)}|_{\ell_{1}}^{(0)}\leq C^{\ell_{1}+\alpha+\beta+N_{k}+1}\ell_{1}!^{2\theta_{h}}\{\alpha!\beta!N_{k}!\}^{\theta_{h}}\sigma_{k}^{-N_{k}}.\] On the other hand, since there is no harm in assuming \(N_{k}\geq 2\) (because \(\sigma_{k}\to+\infty\)), we have \[\partial_{\xi}^{N_{k}}I_{3,k}(x,\xi)=-\sum_{\begin{subarray}{c}N_{1}+N_{2}=N_{k}\\ N_{1}\leq 1\end{subarray}}\frac{N_{k}!}{N_{1}!N_{2}!}\partial_{\xi}^{N_{1}}\{\langle x\rangle^{-\sigma}\xi-c_{0}\langle\sigma_{k}\rangle^{-\sigma}\sigma_{k}\}\psi_{k}(x)\partial_{\xi}^{N_{2}}\chi_{k}(\xi),\] hence \[|\partial_{\xi}^{N_{k}}I_{3,k}(x,\xi)|_{\ell_{1}}^{(0)}\leq C^{\ell_{1}+N_{k}+1}\ell_{1}!^{2\theta_{h}}N_{k}!^{\theta_{h}}\sigma_{k}^{1-N_{k}}.\] Therefore \[|R_{k}^{(\alpha\beta)}|_{\ell_{0}}^{(0)}\leq C^{\alpha+\beta+N_{k}+1}\{\alpha!\beta!\}^{\theta_{h}}N_{k}!^{2\theta_{h}-1}\sigma_{k}^{1-2N_{k}},\] which allows us to conclude \[\|I_{3,k}(x,D)v_{k}^{(\alpha\beta)}\|_{L^{2}(\mathbb{R})}\leq C^{\alpha+\beta+N_{k}+1}\{\alpha!\beta!\}^{\theta_{h}}N_{k}!^{2\theta_{h}-1}\sigma_{k}^{1-2N_{k}}\|u_{k}\|_{L^{2}(\mathbb{R})}.\] So, from (3.6) we get \[Re\left\langle I_{3,k}(x,D)v_{k}^{(\alpha\beta)},v_{k}^{(\alpha\beta)}\right\rangle_{L^{2}(\mathbb{R})}\geq-C^{\alpha+\beta+N_{k}+1}\{\alpha!\beta!\}^{\theta_{h}}N_{k}!^{2\theta_{h}-1}\sigma_{k}^{1-2N_{k}}\|v_{k}^{(\alpha\beta)}\|_{L^{2}(\mathbb{R})}. \tag{3.21}\] From (3.19), (3.20) and (3.21) and using that \(\sigma_{k}\to+\infty\) we obtain the following lemma. **Lemma 1**.: _Let the Cauchy problem (3.1) be \(\mathcal{S}^{\theta}_{s}\) well-posed. Then for all \(k\) sufficiently large it holds_ \[Re\,\langle\langle x\rangle^{-\sigma}D_{x}v^{(\alpha\beta)}_{k},v^{(\alpha\beta)}_{k}\rangle_{L^{2}(\mathbb{R})} \geq c_{1}\sigma_{k}^{1-\sigma}\|v^{(\alpha\beta)}_{k}\|_{L^{2}(\mathbb{R})}^{2}\] \[-C^{\alpha+\beta+N_{k}+1}\{\alpha!\beta!\}^{\theta_{h}}N_{k}!^{2\theta_{h}-1}\sigma_{k}^{1-2N_{k}}\|v^{(\alpha\beta)}_{k}\|_{L^{2}(\mathbb{R})},\] _for some \(c_{1}>0\) independent of \(k,\alpha,\beta\) and \(N_{k}\)._

#### 3.1.2. Estimate from above for \(f^{(\alpha\beta)}_{k}\)

We start by recalling that \[f^{(\alpha\beta)}_{k}=[S,w^{(\alpha\beta)}_{k}]u_{k}=[D_{t}+D_{x}^{2},w^{(\alpha\beta)}_{k}]u_{k}+[i\langle x\rangle^{-\sigma}D_{x},w^{(\alpha\beta)}_{k}]u_{k}.\] In the sequel we explain how to estimate the above brackets, but first we need the following lemma. **Lemma 2**.: _Let the Cauchy problem (3.1) be \(\mathcal{S}^{\theta}_{s}\) well-posed. Then, for any \(N=1,2,3\ldots\) we have_ \[\|D_{x}v^{(\alpha\beta)}_{k}\|_{L^{2}(\mathbb{R})}\leq C\sigma_{k}\|v^{(\alpha\beta)}_{k}\|_{L^{2}(\mathbb{R})}+C^{\alpha+\beta+N+1}\{\alpha!\beta!
\}^{\theta_{h}}N!^{2\theta_{h}-1}\sigma_{k}^{1-2N}.\] Proof.: We decompose \(D_{x}\) as \[D_{x}=\chi_{k}(D)D_{x}+(1-\chi_{k})(D)D_{x}.\] Then, since \(\xi\) is comparable with \(\sigma_{k}\) on the support of \(\chi_{k}\), Calderon-Vaillancourt implies \[\|\chi_{k}(D)D_{x}v^{(\alpha\beta)}_{k}\|_{L^{2}(\mathbb{R})}\leq C\sigma_{k} \|v^{(\alpha\beta)}_{k}\|_{L^{2}(\mathbb{R})}.\] On the other hand, since the supports of \(w^{(\alpha\beta)}_{k}\) and \(1-\chi_{k}\) are disjoint, we may write \[(1-\chi_{k})(\xi)\xi\circ w^{(\alpha\beta)}_{k}(x,\xi)=r^{(\alpha\beta)}_{k,N }(x,\xi),\] where \[|r^{(\alpha\beta)}_{k,N}|_{\ell_{0}}^{0} \leq C\frac{N}{N!}|\partial^{N}_{\xi}\{(1-\chi_{k}(\xi))\xi\}|_{ \ell_{1}}^{0}|\partial^{N}_{x}w^{(\alpha\beta)}_{k}|_{\ell_{1}}^{0}\] \[\leq C^{\alpha+\beta+N+1}\{\alpha!\beta!\}^{\theta_{h}}N!^{2 \theta_{h}-1}\sigma_{k}^{1-2N}.\] -\([D_{t}+D_{x}^{2},w^{(\alpha\beta)}_{k}]u_{k}\). We have \[[D_{t}+D_{x}^{2},w^{(\alpha\beta)}_{k}] =2D_{x}w^{(\alpha\beta)}_{k}D_{x}+D_{x}^{2}w^{(\alpha\beta)}_{k}\] \[=2D_{x}\circ D_{x}w^{(\alpha\beta)}_{k}-D_{x}^{2}w^{(\alpha\beta)}_ {k}\] \[=-2i\sigma_{k}^{-1}D_{x}\circ w^{((\alpha+1)\beta)}_{k}+\sigma_{k }^{-2}w^{((\alpha+2)\beta)}_{k}\] Thus, applying Lemma 2 with \(N=N_{k}\), we get \[\|[D_{t}+D_{x}^{2},w^{(\alpha\beta)}_{k}]u_{k}\|_{L^{2}(\mathbb{R })} \leq C\|v^{((\alpha+1)\beta)}_{k}\|_{L^{2}(\mathbb{R})}+C\sigma_{k }^{-2}\|v^{((\alpha+2)\beta)}_{k}\|_{L^{2}(\mathbb{R})}\] \[+C^{\alpha+\beta+N_{k}+1}(\alpha!\beta!)^{\theta_{h}}N_{k}!^{2 \theta_{h}-1}\sigma_{k}^{1-2N_{k}}. \tag{3.22}\] -\([i\langle x\rangle^{-\sigma}D_{x},w^{(\alpha\beta)}_{k}]u_{k}\). We first observe that \[[i\langle x\rangle^{-\sigma}D_{x},w^{(\alpha\beta)}_{k}]=i\langle x\rangle^{- \sigma}D_{x}w^{(\alpha\beta)}_{k} -\sum_{1\leq\gamma\leq N_{k}-1}\frac{i}{\gamma!}D_{x}^{\gamma}\langle x \rangle^{-\sigma}\partial^{\gamma}_{\xi}w^{(\alpha\beta)}_{k}(x,D)D_{x}+r^{( \alpha\beta)}_{k}(x,D),\] where \[r^{(\alpha\beta)}_{k}(x,\xi)=-iN_{k}\int_{0}^{1}\frac{(1-\theta)^{N_{k}-1}}{N_ {k}!}Os-\iint\limits_{\xi}e^{-iy\eta}\partial^{N_{k}}_{\xi}w^{(\alpha\beta)}_{k }(x,\xi+\theta\eta)D_{x}^{N_{k}}\langle x+y\rangle^{-\sigma}\xi dyd\eta\,d\theta.\] To estimate \(r_{k}^{(\alpha\beta)}\) we need to use the support properties of \(w_{k}^{(\alpha\beta)}\). 
Then we write \[\xi=(\xi+\theta\eta)-\theta\eta\] in order to use integration by parts to get \[Os-\iint e^{-iy\eta}\partial_{\xi}^{N_{k}}w_{k}^{(\alpha\beta)}(x,\xi+ \theta\eta)D_{x}^{N_{k}}\langle x+y\rangle^{-\sigma}\xi dyd\eta\] \[=Os-\iint e^{-iy\eta}(\xi+\theta\eta)\partial_{\xi}^{N_{k}}w_{k}^ {(\alpha\beta)}(x,\xi+\theta\eta)D_{x}^{N_{k}}\langle x+y\rangle^{-\sigma}\xi dyd\eta\] \[-\theta\,Os-\iint e^{-iy\eta}\partial_{\xi}^{N_{k}}w_{k}^{( \alpha\beta)}(x,\xi+\theta\eta)D_{x}^{N_{k}+1}\langle x+y\rangle^{-\sigma}\xi dyd\eta.\] Hence we may estimate the seminorms of \(r_{k}^{(\alpha\beta)}\) in the following way \[|r_{k}^{(\alpha\beta)}|_{\ell_{0}}^{0} \leq\frac{C^{N_{k}+1}}{N_{k}!}\left\{|\xi\partial_{\xi}^{N_{k}}w _{k}^{(\alpha\beta)}|_{\ell_{1}}^{(0)}|D_{x}^{N_{k}}\langle x\rangle^{-\sigma }|_{\ell_{1}}^{(0)}+|\partial_{\xi}^{N_{k}}w_{k}^{(\alpha\beta)}|_{\ell_{1}}^{ (0)}|D_{x}^{N_{k}+1}\langle x\rangle^{-\sigma}|_{\ell_{1}}^{(0)}\right\}\] \[\leq C^{\alpha+\beta+N_{k}+1}\{\alpha!\beta!\}^{\theta_{h}}N_{k}!^ {2\theta_{h}-1}\sigma_{k}^{1-N_{k}},\] which allows us to conclude (using Calderon-Vaillancourt and (3.6)) \[\|r_{k,N}^{(\alpha\beta)}(x,D)u_{k}\|_{L^{2}}\leq C^{\alpha+\beta+N_{k}+1}\{ \alpha!\beta!\}^{\theta_{h}}N_{k}!^{2\theta_{h}-1}\sigma_{k}^{1-N_{k}}. \tag{3.23}\] Now we consider the remaining terms of the commutator. We have \[\sum_{\gamma=1}^{N_{k}-1}\frac{1}{\gamma!}D_{x}^{\gamma}\langle x \rangle^{-\sigma}\partial_{\xi}^{\gamma}w_{k}^{(\alpha\beta)}(x,D)D_{x}\] \[=\sum_{\gamma=1}^{N_{k}-1}\frac{1}{\gamma!}D_{x}^{\gamma}\langle x \rangle^{-\sigma}D_{x}\circ\partial_{\xi}^{\gamma}w_{k}^{(\alpha\beta)}(x,D)- \sum_{\gamma=1}^{N_{k}-1}\frac{1}{\gamma!}D_{x}^{\gamma}\langle x\rangle^{- \sigma}(D_{x}\partial_{\xi}^{\gamma}w)_{k}^{(\alpha\beta)}(x,D)\] \[=\sum_{\gamma=1}^{N_{k}-1}\frac{1}{\gamma!}4^{\gamma}\sigma_{k}^{ -\gamma}D_{x}^{\gamma}\langle x\rangle^{-\sigma}D_{x}\circ w_{k}^{(\alpha( \beta+\gamma))}(x,D)+i\sum_{\gamma=1}^{N_{k}-1}\frac{1}{\gamma!}4^{\gamma} \sigma_{k}^{-1-\gamma}D_{x}^{\gamma}\langle x\rangle^{-\sigma}w_{k}^{((\alpha+ 1)(\beta+\gamma))}(x,D)\] Using the support of \(D_{x}v_{k}^{(\alpha(\beta+\gamma))}\) and that \(|D_{x}^{\gamma}\langle x\rangle^{-\sigma}|\leq C^{\gamma+1}\gamma!\langle x \rangle^{-\sigma-\gamma}\) we get \[\|D_{x}^{\gamma}\langle x\rangle^{-\sigma}D_{x}v_{k}^{(\alpha(\beta+\gamma))} \|_{L^{2}(\mathbb{R})}\leq C^{\gamma+1}\gamma!\sigma_{k}^{-(\sigma+\gamma)}\|D _{x}v_{k}^{(\alpha(\beta+\gamma))}\|_{L^{2}(\mathbb{R})}.\] Then, applying Lemma 2 with \(N=N_{k}-\gamma\) we obtain \[\|D_{x}^{\gamma}\langle x\rangle^{-\sigma}D_{x}v_{k}^{(\alpha( \beta+\gamma))}\|_{L^{2}(\mathbb{R})}\leq C^{\gamma+1}\gamma!\sigma_{k}^{1-( \sigma+\gamma)}\|v_{k}^{(\alpha(\beta+\gamma))}\|_{L^{2}(\mathbb{R})}\] \[\qquad\qquad+C^{\gamma+1}\gamma!\sigma_{k}^{-(\sigma+\gamma)}C^{ \alpha+\beta+N_{k}}\{\alpha!(\beta+\gamma)!\}^{\theta_{h}}(N_{k}-\gamma)!^{2 \theta_{h}-1}\sigma_{k}^{1-2(N_{k}-\gamma)}.\] On the other hand, using the support of \(v_{k}^{((\alpha+1)(\beta+\gamma))}\) we infer \[\|D_{x}^{\gamma}\langle x\rangle^{-\sigma}v_{k}^{((\alpha+1)(\beta+\gamma))} \|_{L^{2}(\mathbb{R})}\leq C^{\gamma+1}\gamma!\sigma_{k}^{-(\sigma+\gamma)}\|v _{k}^{((\alpha+1)(\beta+\gamma))}\|_{L^{2}(\mathbb{R})}.\] Hence \[\bigg{\|}\sum_{\gamma=1}^{N_{k}-1}\frac{1}{\gamma!}D_{x}^{\gamma} \langle x\rangle^{-\sigma}\partial_{\xi}^{\gamma}w_{k}^{(\alpha\beta)}(x,D)D_{x} u_{k}\bigg{\|}_{L^{2}(\mathbb{R})}\leq C\sum_{\gamma=1}^{N_{k}-1}C^{\gamma} 
\sigma_{k}^{1-\sigma-2\gamma}\|v_{k}^{(\alpha(\beta+\gamma))}\|_{L^{2}(\mathbb{ R})}\\ +C\sum_{\gamma=1}^{N_{k}-1}C^{\gamma}\sigma_{k}^{-(1+\sigma+2 \gamma)}\|v_{k}^{((\alpha+1)(\beta+\gamma))}\|_{L^{2}(\mathbb{R})}+C^{\alpha+ \beta+N_{k}+1}\{\alpha!\beta!\}^{\theta_{h}}N_{k}!^{2\theta_{h}-1}\sigma_{k}^ {1-\sigma-2N_{k}}. \tag{3.24}\] For the last term we have \[\|\langle x\rangle^{-\sigma}D_{x}v_{k}^{((\alpha+1)\beta)}\|_{L^{2}(\mathbb{R} )}\leq C\sigma_{k}^{-(1+\sigma)}\|v_{k}^{((\alpha+1)\beta)}\|_{L^{2}(\mathbb{ R})}. \tag{3.25}\] From (3.22), (3.23), (3.24) and (3.25) we obtain the following lemma. **Lemma 3**.: _Let the Cauchy problem (3.1) be \(\mathcal{S}_{s}^{\theta}\) well-posed. We then have_ \[\|f_{k}^{(\alpha\beta)}\|_{L^{2}(\mathbb{R})} \leq C\|v_{k}^{((\alpha+1)\beta)}\|_{L^{2}(\mathbb{R})}+C\sigma_{ k}^{-2}\|v_{k}^{((\alpha+2)\beta)}\|_{L^{2}(\mathbb{R})}\] \[+C\sum_{\gamma=1}^{N_{k}-1}C^{\gamma}\sigma_{k}^{1-\sigma-2\gamma }\|v_{k}^{(\alpha(\beta+\gamma))}\|_{L^{2}(\mathbb{R})}+C\sum_{\gamma=1}^{N_{k }-1}C^{\gamma}\sigma_{k}^{-(1+\sigma+2\gamma)}\|v_{k}^{((\alpha+1)(\beta+ \gamma))}\|_{L^{2}(\mathbb{R})}\] \[+C^{\alpha+\beta+N_{k}+1}\{\alpha!\beta!\}^{\theta_{h}}N_{k}!^{2 \theta_{h}-1}\sigma_{k}^{1-N_{k}}.\] We are finally ready to prove Proposition 4. Proof.: From (3.17) and Lemmas 1 and 3 we have \[\partial_{t}\|v_{k}^{(\alpha\beta)}\|_{L^{2}} \geq c_{1}\sigma_{k}^{1-\sigma}\|v_{k}^{(\alpha\beta)}\|_{L^{2}( \mathbb{R})}-C\|v_{k}^{((\alpha+1)\beta)}\|_{L^{2}(\mathbb{R})}-C\sigma_{k}^{- 2}\|v_{k}^{((\alpha+2)\beta)}\|_{L^{2}(\mathbb{R})}\] \[-C\sum_{\gamma=1}^{N_{k}-1}C^{\gamma}\sigma_{k}^{1-\sigma-2\gamma }\|v_{k}^{(\alpha(\beta+\gamma))}\|_{L^{2}(\mathbb{R})}-C\sum_{\gamma=1}^{N_{ k}-1}C^{\gamma}\sigma_{k}^{-(1+\sigma+2\gamma)}\|v_{k}^{((\alpha+1)(\beta+ \gamma))}\|_{L^{2}(\mathbb{R})}\] \[-C^{\alpha+\beta+N_{k}+1}\{\alpha!\beta!\}^{\theta_{h}}N_{k}!^{2 \theta_{h}-1}\sigma_{k}^{1-N_{k}}.\] Therefore \[\partial_{t}E_{k}(t)=\sum_{\alpha\leq N_{k},\beta\leq N_{k}}\frac {1}{(\alpha!\beta!)^{\theta_{1}}}\partial_{t}\|v_{k}^{(\alpha\beta)}(t,x)\|_{L ^{2}}\] \[\geq\sum_{\alpha\leq N+k,\beta\leq N_{k}}\frac{1}{(\alpha!\beta! )^{\theta_{1}}}c_{1}\sigma_{k}^{1-\sigma}\|v_{k}^{(\alpha\beta)}\|_{L^{2}}\] \[-C\sum_{\alpha\leq N_{k},\beta\leq N_{k}}\frac{1}{(\alpha!\beta! )^{\theta_{1}}}\|v_{k}^{((\alpha+1)\beta)}\|_{L^{2}}-C\sigma_{k}^{-2}\sum_{ \alpha\leq N_{k},\beta\leq N_{k}}\frac{1}{(\alpha!\beta!)^{\theta_{1}}}\|v_{ k}^{((\alpha+2)\beta)}\|_{L^{2}}\] \[-C\sum_{\alpha\leq N_{k},\beta\leq N_{k}}\frac{1}{(\alpha!\beta! )^{\theta_{1}}}\left[\sum_{\gamma=1}^{N_{k}-1}C^{\gamma}\sigma_{k}^{1-\sigma-2 \gamma}\|v_{k}^{(\alpha(\beta+\gamma))}\|_{L^{2}}+\sum_{\gamma=1}^{N_{k}-1}C^ {\gamma}\sigma_{k}^{-(1+\sigma+2\gamma)}\|v_{k}^{((\alpha+1)(\beta+\gamma))} \|_{L^{2}}\right]\] \[-C\sum_{\alpha\leq N_{k},\beta\leq N_{k}}\frac{1}{(\alpha!\beta! )^{\theta_{1}}}C^{\alpha+\beta+N_{k}}(\alpha!\beta!)^{\theta_{h}}N_{k}!^{2 \theta_{h}-1}\sigma_{k}^{1-N_{k}}.\] Now we discuss how to treat the terms appearing in the above summation. For the first one we simply have \[\sum_{\alpha\leq N_{k},\beta\leq N_{k}}\frac{1}{(\alpha!\beta!)^{\theta_{1}}}c_{ 1}\sigma_{k}^{1-\sigma}\|v_{k}^{(\alpha\beta)}\|_{L^{2}(\mathbb{R})}=c_{1} \sigma_{k}^{1-\sigma}E_{k}(t). \tag{3.26}\] For the second one we proceed as follows \[C\sum_{\alpha\leq N_{k},\beta\leq N_{k}}\frac{1}{(\alpha!\beta! 
)^{\theta_{1}}}\|v_{k}^{((\alpha+1)\beta)}\|_{L^{2}(\mathbb{R})} =C\sum_{\alpha\leq N_{k},\beta\leq N_{k}}(\alpha+1)^{\theta_{1}}E_ {k,\alpha+1,\beta}\] \[\leq CN_{k}^{\theta_{1}}\bigg{\{}E_{k}+\sum_{\beta\leq N_{k}}E_{k,N_{k}+1,\beta}\bigg{\}}.\] From (3.11) we conclude \[E_{k,\alpha+1,\beta}\leq C^{\alpha+\beta+1}\{(\alpha+1)!\beta!\}^{\theta_{h}- \theta_{1}},\] so we obtain \[C\sum_{\alpha\leq N_{k},\beta\leq N_{k}}\frac{1}{(\alpha!\beta!)^{\theta_{1}}} \|v_{k}^{((\alpha+1)\beta)}\|_{L^{2}(\mathbb{R})}\leq CN^{\theta_{1}}\bigg{\{} E_{k}+C^{N_{k}}N_{k}!^{\theta_{h}-\theta_{1}}\sum_{\beta<\infty}C^{\beta}\beta!^{ \theta_{h}-\theta_{1}}\bigg{\}}.\] Recalling that \(N_{k}:=\lfloor\sigma_{k}^{\frac{\lambda}{\theta_{1}}}\rfloor\), the inequality \(N_{k}^{N_{k}}\leq e^{N_{k}}N_{k}!\) implies \[C\sum_{\alpha\leq N_{k},\beta\leq N_{k}}\frac{1}{(\alpha!\beta!)^{\theta_{1}} }\|v_{k}^{((\alpha+1)\beta)}\|_{L^{2}(\mathbb{R})}\leq C\sigma_{k}^{\lambda}E _{k}+C^{N_{k}+1}\sigma_{k}^{\lambda+\frac{\lambda(\theta_{h}-\theta_{1})}{ \theta_{1}}N_{k}}.\] Hence \[C\sum_{\alpha\leq N_{k},\beta\leq N_{k}}\frac{1}{(\alpha!\beta!)^{\theta_{1}} }\|v_{k}^{((\alpha+1)\beta)}\|_{L^{2}(\mathbb{R})}\leq C\sigma_{k}^{\lambda}E _{k}+C^{N_{k}+1}\sigma_{k}^{C-cN_{k}}. \tag{3.27}\] Analogously \[C\sigma_{k}^{-2}\sum_{\alpha\leq N_{k},\beta\leq N_{k}}\frac{1}{(\alpha! \beta!)^{\theta_{1}}}\|v_{k}^{((\alpha+2)\beta)}\|_{L^{2}}\leq C\sigma_{k}^{2 (\lambda-1)}E_{k}(t)+C^{N_{k}+1}\sigma_{k}^{C-cN_{k}}. \tag{3.28}\] For the next term we first note \[\sum_{\alpha\leq N_{k},\beta\leq N_{k}}\frac{1}{(\alpha!\beta!)^{ \theta_{1}}}\sum_{\gamma=1}^{N_{k}-1}C^{\gamma}\sigma_{k}^{1-\sigma-2\gamma} \|v_{k}^{(\alpha(\beta+\gamma))}\|_{L^{2}}\] \[=\sum_{\begin{subarray}{c}\alpha,\beta\leq N_{k}\\ 1\leq\gamma\leq N_{k}-1\end{subarray}}\left\{\sum_{\beta+\gamma\leq N_{k}}+ \sum_{\beta+\gamma>N_{k}}\right\}C^{\gamma}\sigma_{k}^{1-\sigma-2\gamma}\frac {(\beta+\gamma)!^{\theta_{1}}}{\beta!^{\theta_{1}}}E_{k,\alpha,\beta+\gamma}.\] Now, since \[\frac{(\beta+\gamma)!}{\beta!}\leq(\beta+\gamma)^{\gamma}\leq N_{k}^{\gamma} \leq(\sigma_{k}^{\frac{\lambda}{\theta_{1}}})^{\gamma},\quad\text{provided that}\,\beta+\gamma\leq N_{k},\] \(\lambda\in(0,1)\) and \(\gamma\geq 1\), for \(k\) large so that \(C\sigma_{k}^{\lambda-1}<1\) we get \[\sum_{\begin{subarray}{c}\alpha,\beta\leq N_{k}\\ 1\leq\gamma\leq N_{k}-1\end{subarray}}\sum_{\beta+\gamma\leq N_{k}}C^{\gamma} \sigma_{k}^{1-\sigma-2\gamma}\frac{(\beta+\gamma)!^{\theta_{1}}}{\beta!^{ \theta_{1}}}E_{k,\alpha,\beta+\gamma}\leq\sum_{\begin{subarray}{c}\alpha, \beta\leq N_{k}\\ 1\leq\gamma\leq N_{k}-1\end{subarray}}\sum_{\beta+\gamma\leq N_{k}}(\sigma_{k}^ {\lambda-1}C)^{\gamma}\sigma_{k}^{1-\sigma-\gamma}E_{k,\alpha,\beta+\gamma} \leq E_{k}.\] In the situation where \((\beta+\gamma)>N_{k}\) we use (3.11) to conclude \[C^{\gamma}\sigma_{k}^{1-\sigma-2\gamma}\frac{(\beta+\gamma)!^{ \theta_{1}}}{\beta!^{\theta_{1}}}E_{k,\alpha,\beta+\gamma} \leq C^{\gamma}\sigma_{k}^{1-\sigma-2\gamma}\frac{(\beta+\gamma)! 
^{\theta_{1}}}{\beta!^{\theta_{1}}}C^{\alpha+\beta+\gamma+1}\alpha!^{\theta_{ \hbar}-\theta_{1}}(\beta+\gamma)!^{\theta_{\hbar}-\theta_{1}}\] \[\leq C^{N_{k}+1}N_{k}!^{\theta_{\hbar}-\theta_{1}}C^{\gamma} \sigma_{k}^{1-\sigma-2\gamma-\lambda\gamma}C^{\alpha}\alpha!^{\theta_{\hbar}- \theta_{1}}.\] Hence we have \[\sum_{\alpha\leq N_{k},\beta\leq N_{k}}\frac{1}{(\alpha!\beta!)^{\theta_{1}}} \sum_{\gamma=1}^{N_{k}-1}C^{\gamma}\sigma_{k}^{1-\sigma-2\gamma}\|v_{k}^{( \alpha(\beta+\gamma))}\|_{L^{2}}\leq E_{k}+C^{N_{k}+1}\sigma_{k}^{C-cN_{k}}. \tag{3.29}\] Analogously \[\sum_{\alpha\leq N_{k},\beta\leq N_{k}}\frac{1}{(\alpha!\beta!)^{\theta_{1}}} \sum_{\gamma=1}^{N_{k}-1}C^{\gamma}\sigma_{k}^{-(1+\sigma+2\gamma)}\|v_{k}^{( (\alpha+1)(\beta+\gamma))}\|_{L^{2}}\leq E_{k}+C^{N_{k}+1}\sigma_{k}^{C-cN_{k}}. \tag{3.30}\] For the last term, using the definition of \(N_{k}\) and that \(\theta_{1}>\theta_{\hbar}\) we easily conclude \[\sum_{\alpha\leq N_{k},\beta\leq N_{k}}\frac{1}{(\alpha!\beta!)^{\theta_{1}}}C ^{\alpha+\beta+N_{k}+1}(\alpha!\beta!)^{\theta_{\hbar}}N_{k}!^{2\theta_{\hbar} -1}\sigma_{k}^{1-N_{k}}\leq C^{N_{k}+1}\sigma_{k}^{C-cN_{k}}. \tag{3.31}\] Finally, associating (3.26),(3.27),(3.28), (3.29), (3.30) and (3.31) and using that \(\sigma_{k}\to\infty\) we close the proof of Proposition 4. ## 4. Proof of Proposition 2 Proof.: For a fixed \(t>0\) consider the function \[h(\xi)=e^{-it\xi^{2}},\quad\xi\in\mathbb{R}.\] We shall prove that \(h(\xi)\) does not define a multiplier in the space \(\mathcal{S}^{s}_{\theta}(\mathbb{R})\) when \(1\leq s<\theta\). Notice that we have the following formula for the derivatives of \(h\) (cf. [11] Eq. 6.3): \[\partial_{\xi}^{\alpha}h(\xi)=h(\xi)\underbrace{(-2it\xi)^{\alpha}\sum_{m=0}^ {\lfloor\frac{\alpha}{2}\rfloor}\frac{1}{(-4it)^{m}}\,\frac{\alpha!}{m!(\alpha- 2m)!}\,\xi^{-2m}}_{=:P_{\alpha}(\xi)}. \tag{4.1}\] Next we consider the function \[f(\xi)=e^{-\langle\xi\rangle^{\frac{1}{\theta}}},\quad\xi\in\mathbb{R}.\] Of course we have \(f\in\mathcal{S}^{1}_{\theta}(\mathbb{R})(\subset\mathcal{S}^{s}_{\theta}( \mathbb{R}))\) and Faa di Bruno formula implies \[|\partial_{\xi}^{\alpha}f(\xi)|\leq B^{\alpha}\alpha!f(\xi),\quad\xi\in \mathbb{R},\alpha\in\mathbb{N}_{0},\] for some \(B>0\). Suppose by contradiction that \(h\) defines a multiplier in \(\mathcal{S}^{s}_{\theta}(\mathbb{R})\). Then, enlarging \(B>0\) if necessary, \[|\partial_{\xi}^{\alpha}\{h(\xi)f(\xi)\}|\leq AB^{\alpha}\alpha!^{s}e^{-a \langle\xi\rangle^{\frac{1}{\theta}}},\quad\xi\in\mathbb{R},\,\alpha\in \mathbb{N}_{0}, \tag{4.2}\] for some \(A,a>0\). Now, we shall prove by induction on \(\alpha\in\mathbb{N}_{0}\) that \[|\{\partial_{\xi}^{\alpha}h(\xi)\}f(\xi)|\leq A(2B)^{\alpha}\alpha!^{s}e^{-a \langle\xi\rangle^{\frac{1}{\theta}}},\quad\xi\in\mathbb{R},\,\alpha\in \mathbb{N}_{0}. \tag{4.3}\] The case \(\alpha=0\) follows from (4.2). 
To prove for a general \(\alpha\geq 1\) we first write \[\{\partial_{\xi}^{\alpha}h(\xi)\}f(\xi)=\partial_{\xi}^{\alpha}\{h(\xi)f(\xi)\}- \sum_{\stackrel{{\alpha^{\prime}\prec\alpha^{\prime\prime}=\alpha} }{{\alpha^{\prime\prime}\geq 1}}}\frac{\alpha!}{\alpha^{\prime}!\alpha^{\prime \prime}!}\partial_{\xi}^{\alpha^{\prime}}h(\xi)\partial_{\xi}^{\alpha^{\prime \prime}}f(\xi)\] and then we use the induction hyporesis to get \((\alpha^{\prime}<\alpha)\) \[|\{\partial_{\xi}^{\alpha}h(\xi)\}f(\xi)| \leq AB^{\alpha}\alpha!^{s}e^{-a\langle\xi\rangle^{\frac{1}{ \theta}}}+\sum_{\stackrel{{\alpha^{\prime}+\alpha^{\prime\prime} =\alpha}}{{\alpha^{\prime\prime}\geq 1}}}\frac{\alpha!}{\alpha^{\prime}! \alpha^{\prime\prime}!}|\{\partial_{\xi}^{\alpha^{\prime}}h(\xi)\}f(\xi)|B^{ \alpha^{\prime\prime}}\alpha^{\prime\prime}!\] \[\leq AB^{\alpha}\alpha!^{s}e^{-a\langle\xi\rangle^{\frac{1}{ \theta}}}+\sum_{\stackrel{{\alpha^{\prime}+\alpha^{\prime\prime} =\alpha}}{{\alpha^{\prime\prime}\geq 1}}}\frac{\alpha!}{\alpha^{\prime}! \alpha^{\prime\prime}!}A(2B)^{\alpha^{\prime}}\alpha^{\prime\prime}!^{s}e^{- a\langle\xi\rangle^{\frac{1}{\theta}}}B^{\alpha^{\prime\prime}}\alpha^{\prime \prime}!\] \[=AB^{\alpha}\alpha!^{s}e^{-a\langle\xi\rangle^{\frac{1}{\theta}}} \left(1+\sum_{\stackrel{{\alpha^{\prime}+\alpha^{\prime\prime} =\alpha}}{{\alpha^{\prime\prime}\geq 1}}}\frac{\alpha!}{\alpha^{\prime}! \alpha^{\prime\prime}!}\frac{2^{\alpha^{\prime}}\alpha^{\prime\prime}!^{s} \alpha^{\prime\prime}!}{\alpha!^{s}}\right).\] Since \(s\geq 1\) we obtain \[1+\sum_{\stackrel{{\alpha^{\prime}+\alpha^{\prime\prime}=\alpha} }{{\alpha^{\prime\prime}\geq 1}}}\frac{\alpha!}{\alpha^{\prime}!\alpha^{\prime \prime}!}\frac{2^{\alpha^{\prime}}\alpha^{\prime\prime}!^{s}\alpha^{\prime \prime}!}{\alpha!^{s}}\leq 1+\sum_{\stackrel{{\alpha^{\prime}+\alpha^{\prime \prime}=\alpha}}{{\alpha^{\prime\prime}\geq 1}}}2^{\alpha^{\prime}}=1+\sum_{ \alpha^{\prime}=0}^{\alpha-1}2^{\alpha^{\prime}}=1+2^{\alpha}-1=2^{\alpha},\] which gives (4.3). In the sequel our idea is to prove that the inequality (4.3) does not hold for the sequence \(\xi_{\alpha}=t^{-\frac{1}{2}}\alpha^{\theta}\), \(\alpha\in\mathbb{N}_{0}\), provided that \(\alpha\) is large enough. First we use (4.1) to write \[|\{\partial_{\xi}^{\alpha}h\}(t^{-\frac{1}{2}}\alpha^{\theta})f(t^{-\frac{1}{2 }}\alpha^{\theta})|=|h(t^{-\frac{1}{2}}\alpha^{\theta})P_{\alpha}(t^{-\frac{1} {2}}\alpha^{\theta})f(t^{-\frac{1}{2}}\alpha^{\theta})|=|P_{\alpha}(t^{-\frac{1 }{2}}\alpha^{\theta})|e^{-(t^{-\frac{1}{2}}\alpha^{\theta})^{\frac{1}{\theta}}}.\] Since we are interested in \(\alpha\) large, there is no harm in assume \(t^{-\frac{1}{2}}\alpha^{\theta}>1\), hence \[|\{\partial_{\xi}^{\alpha}h\}(t^{-\frac{1}{2}}\alpha^{\theta})f(t^{-\frac{1}{2 }}\alpha^{\theta})|\geq\{e^{-(2t^{-1})^{\frac{1}{2\theta}}}\}^{\alpha}|P_{ \alpha}(t^{-\frac{1}{2}}\alpha^{\theta})|.\] The next step is to obtain an estimate from below for \(|P_{\alpha}(t^{-\frac{1}{2}}\alpha^{\theta})|\). 
Let us then study the expression of \(P_{\alpha}(t^{-\frac{1}{2}}\alpha^{\theta})\): \[P_{\alpha}(t^{-\frac{1}{2}}\alpha^{\theta}) =(-2it^{\frac{1}{2}}\alpha^{\theta})^{\alpha}\,\sum_{m=0}^{\lfloor \frac{\alpha}{2}\rfloor}\frac{1}{(-4i)^{m}}\,\frac{\alpha!}{m!(\alpha-2m)!}\, \alpha^{-2\theta m}\] \[=(-2it^{\frac{1}{2}}\alpha^{\theta})^{\alpha}\left\{\sum_{ \stackrel{{ m=0}}{{m\,\rm even}}}^{\lfloor\frac{\alpha}{2}\rfloor} \underbrace{\frac{1}{(4i)^{m}}\,\frac{\alpha!}{m!(\alpha-2m)!}\,\alpha^{-2 \theta m}}_{\in\mathbb{R}}+i\sum_{\stackrel{{ m=0}}{{m\,\rm odd}}}^{ \lfloor\frac{\alpha}{2}\rfloor}\underbrace{\frac{1}{-4^{m}i^{m-1}}\,\frac{ \alpha!}{m!(\alpha-2m)!}\,\alpha^{-2\theta m}}_{\in\mathbb{R}}\right\}.\] Therefore \[|P_{\alpha}(t^{-\frac{1}{2}}\alpha^{\theta})|\geq(2t^{\frac{1}{2}})^{\alpha} \alpha^{\theta\alpha}\left|\sum_{\stackrel{{ m=0}}{{m\,\rm even}}}^{\lfloor\frac{\alpha}{2}\rfloor}\frac{1}{(4i)^{m}}\, \frac{\alpha!}{m!(\alpha-2m)!}\,\alpha^{-2\theta m}\right|.\] Next we claim that \[\left|\sum_{m=0\atop m\,\text{even}}^{\lfloor\frac{\alpha}{2}\rfloor}\frac{1}{(4i) ^{m}}\,\frac{\alpha!}{m!(\alpha-2m)!}\,\alpha^{-2\theta m}\right|\geq\frac{3}{4}. \tag{4.4}\] Indeed, first observe that the sequence \[a_{m,\alpha}:=\frac{\alpha!}{m!(\alpha-2m)!},\quad m=0,1,\ldots,\lfloor\alpha/2 \rfloor,\] is strictly decreasing: for any \(0\leq m\leq\lfloor\alpha/2\rfloor-1\) we have (\(\theta>1\)) \[a_{m,\alpha}>a_{m+1,\alpha} \iff\frac{\alpha!}{m!(\alpha-2m)!}\,\alpha^{-2\theta m}>\frac{ \alpha!}{(m+1)!(\alpha-2(m+1))!}\,\alpha^{-2\theta(m+1)}\] \[\iff\underbrace{(m+1)}_{\geq 1}\underbrace{\frac{\alpha^{ \theta}}{\alpha-2m}\frac{\alpha^{\theta}}{\alpha-2m-1}}_{>1}>1.\] We also have \(a_{0,\alpha}=1\) and \(a_{m,\alpha}\in(0,1]\). Thus we conclude \[\frac{1}{4^{m}}a_{m,\alpha}-\frac{1}{4^{m+2}}a_{m+2,\alpha}\geq\left(\frac{ 1}{4^{m}}-\frac{1}{4^{m+2}}\right)a_{m,\alpha}\geq 0.\] Hence \[\sum_{m=0\atop m\,\text{even}}^{\lfloor\frac{\alpha}{2}\rfloor}\frac{1}{(4i) ^{m}}\,\frac{\alpha!}{m!(\alpha-2m)!}\,\alpha^{-2\theta m}=\underbrace{1- \frac{1}{4^{2}}a_{2,\alpha}}_{\geq\frac{3}{4}}+\underbrace{\frac{1}{4^{4}}a_{ 4,\alpha}-\frac{1}{4^{6}}a_{6,\alpha}+\cdots}_{\geq 0}.\] Finally, from (4.4) we get \[|P_{\alpha}(t^{-\frac{1}{2}}\alpha^{\theta})|\geq\frac{3}{4}\{2t^{\frac{1}{2} }e^{-(2t^{-1})^{\frac{1}{2\theta}}}\}^{\alpha}\alpha^{\theta\alpha}. \tag{4.5}\] Associating (4.3) with (4.5) we obtain for all \(\alpha\) large \[\frac{3}{4}\{2t^{\frac{1}{2}}e^{-(2t^{-1})^{\frac{1}{2\theta}}}\}^{\alpha} \alpha^{\theta\alpha}\leq A(2B)^{\alpha}\alpha!^{s}e^{-a(t^{-\frac{1}{2}} \alpha^{\theta})\frac{1}{\theta}},\] which is a contradiction because we are assuming \(1\leq s<\theta\). In this way \(h(\xi)f(\xi)\) cannot belong to \(\mathcal{S}_{\theta}^{s}(\mathbb{R})\). In particular, \(h\) does not define a multiplier in \(\mathcal{S}_{\theta}^{s}(\mathbb{R})\), when \(1\leq s<\theta\).
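For clarity, the contradiction in the last step can be spelled out using only the elementary bound \(\alpha!\leq\alpha^{\alpha}\) and the identity \((t^{-\frac{1}{2}}\alpha^{\theta})^{\frac{1}{\theta}}=t^{-\frac{1}{2\theta}}\alpha\). Setting \(c_{t}:=2t^{\frac{1}{2}}e^{-(2t^{-1})^{\frac{1}{2\theta}}}>0\), the displayed inequality would give, for all large \(\alpha\),
\[\frac{3}{4}\,c_{t}^{\alpha}\,\alpha^{\theta\alpha}\leq A(2B)^{\alpha}\alpha!^{s}e^{-at^{-\frac{1}{2\theta}}\alpha}\leq A(2B)^{\alpha}\alpha^{s\alpha},\]
hence, taking \(\alpha\)-th roots, \(\alpha^{\theta-s}\leq\big(\tfrac{4A}{3}\big)^{\frac{1}{\alpha}}\tfrac{2B}{c_{t}}\); the right-hand side stays bounded as \(\alpha\to\infty\), which is impossible since \(\theta-s>0\).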
``` We consider the Cauchy problem for Schrödinger-type operators. Under suitable decay assumptions on the imaginary part of the coefficient of the first-order term, we prove well-posedness of the Cauchy problem in Gelfand-Shilov classes. We also discuss the optimality of our results through several examples. ```
2305.01503
NewsPanda: Media Monitoring for Timely Conservation Action
Non-governmental organizations for environmental conservation have a significant interest in monitoring conservation-related media and getting timely updates about infrastructure construction projects as they may cause massive impact to key conservation areas. Such monitoring, however, is difficult and time-consuming. We introduce NewsPanda, a toolkit which automatically detects and analyzes online articles related to environmental conservation and infrastructure construction. We fine-tune a BERT-based model using active learning methods and noise correction algorithms to identify articles that are relevant to conservation and infrastructure construction. For the identified articles, we perform further analysis, extracting keywords and finding potentially related sources. NewsPanda has been successfully deployed by the World Wide Fund for Nature teams in the UK, India, and Nepal since February 2022. It currently monitors over 80,000 websites and 1,074 conservation sites across India and Nepal, saving more than 30 hours of human efforts weekly. We have now scaled it up to cover 60,000 conservation sites globally.
Sedrick Scott Keh, Zheyuan Ryan Shi, David J. Patterson, Nirmal Bhagabati, Karun Dewan, Areendran Gopala, Pablo Izquierdo, Debojyoti Mallick, Ambika Sharma, Pooja Shrestha, Fei Fang
2023-04-30T07:15:29
http://arxiv.org/abs/2305.01503v1
# NewsPanda: Media Monitoring for Timely Conservation Action ###### Abstract Non-governmental organizations for environmental conservation have a significant interest in monitoring conservation-related media and getting timely updates about infrastructure construction projects as they may cause massive impact to key conservation areas. Such monitoring, however, is difficult and time-consuming. We introduce NewsPanda, a toolkit which automatically detects and analyzes online articles related to environmental conservation and infrastructure construction. We fine-tune a BERT-based model using active learning methods and noise correction algorithms to identify articles that are relevant to conservation and infrastructure construction. For the identified articles, we perform further analysis, extracting keywords and finding potentially related sources. NewsPanda has been successfully deployed by the World Wide Fund for Nature teams in the UK, India, and Nepal since February 2022. It currently monitors over 80,000 websites and 1,074 conservation sites across India and Nepal, saving more than 30 hours of human efforts weekly. We have now scaled it up to cover 60,000 conservation sites globally. ## 1 Introduction Massive floods, poaching, waste pollution - every week, new threats impacting our environment come to light. Each of these can cause a long chain of negative impacts if not addressed. As such, monitoring these conservation-related events is of great importance for non-governmental organizations (NGOs) focused on environmental conservation such as the World Wide Fund for Nature (WWF) to take timely action and participate in relevant conversations. In addition to conservation as a whole, many NGOs are particularly interested in monitoring news on certain subtopics. One such area is the ongoing or upcoming infrastructure projects such as roads, railways, and pipelines. These are usually more long-term and actionable than events like disasters or animal activity which occur in the past or present (hence limiting intervention impact). Conservation NGOs such as WWF play a key role in advocating for more sustainable infrastructure development. Early detection and engagement of these projects could shift infrastructure planning towards more environmentally sustainable outcomes while benefiting the people that the projects intend to serve. However, information about conservation-related events and infrastructure plans threatening critical habitats is scattered across numerous sources and comes in different forms. NGOs typically learn of such information through word-of-mouth or a handful of news outlets that they check manually. This process is both time-consuming and ineffective, and it can potentially fail to capture critical information in a timely manner, leaving these NGOs out of key conversations during early or ongoing stages of these developments. To fill this gap, we develop NewsPanda, a natural language processing (NLP) toolkit to automatically detect and analyze news and government articles describing threats to conservation areas. NewsPanda has five main components, which we detail in Section 3. At the core of NewsPanda is a classification module built using a BERT-based language model, which we fine-tune to classify whether articles are relevant to conservation and to infrastructure. Developing such a tool in the conservation nonprofit setting poses several unique challenges. First, labeling data is expensive. 
We propose an active learning-based method to Figure 1: Top: Current costly and time-consuming information gathering pipeline at NGOs. Bottom: NewsPanda automates multiple steps in the pipeline, enabling humans to perform the more critical tasks (analysis and action). selectively acquire labels on the most critical data points. Second, the data labels could be noisy since labeling for relevance is ultimately a subjective judgement, even if we fix a labeling rubric. We adopt a noise reduction algorithm [3] to improve our model's performance. NewsPanda was developed as a collaboration between WWF and Carnegie Mellon University (CMU). It has been successfully deployed since February 2022 and has been used by the WWF teams in the UK, India, and Nepal to monitor developments in conservation sites. The entire pipeline runs on a weekly basis, scraping and classifying relevant news articles regarding conservation and infrastructure construction related events that occurred in the past week. These articles are then visualized in WWF's GIS systems for the field teams to investigate. We also share some results through social media for the benefit of the broader civil society. Through the deployment of NewsPanda, the WWF teams have been able to save over 30 hours weekly on collecting news, which allows us at WWF to instead focus on analyzing the news and taking actions (Figure 1) 1. Footnote 1: We are happy to work with interested researchers and nonprofits on sharing our code and data. ## 2 Related Work News Monitoring SystemsAlthough there is a rich literature on news information extraction in general domains [1, 13] as well as some specific applications [15, 16], there has been hardly any media monitoring tool for environmental conservation and infrastructure construction. Directly using generic media monitoring tools often lead to unsatisfactory results that are not localized enough to be actionable for a specific conservation site or not relevant enough to be reliable. As a result, conservation NGOs still use a manual process to collect articles. The only work on conservation news monitoring that we are aware of is a preliminary attempt by Hosseini and Coll Ardanuy [1] that apply BERT to classify news articles. Compared to that, with NewsPanda we provide a classification module with algorithmic contributions to address challenges in using the tool in the nonprofit context, a full end-to-end information extraction and processing pipeline, and most importantly, results and lessons learned from a large scale deployment of the tool. This is the first comprehensive and actionable media monitoring tool for conservation and infrastructure. NLP for Conservation & InfrastructureOutside of news monitoring, NLP tools have been used for various applications in conservation and infrastructure. Some analyze the relevant news articles for general insights on conservation reporting [20] or study their spread and impact [21]. These studies are descriptive in nature and orthogonal to our work. The few studies that take the civil society stakeholder's perspective are focused on different links in the process from us. Luccioni, Baylor, and Duchene [1] use BERT-based models to analyze corporate environment sustainability reports. Boutilier and Bahr [1] explore mining-related texts to analyze the social license of a particular project. They target different problems from us. They assume a relevant text is readily available and try to extract meaningful insights from it. 
On the other hand, we work on identifying that relevant text from thousands of irrelevant texts in the first place and leave the insight extraction to professional organizations like WWF that have been doing that for years. ## 3 NewsPanda Overview NewsPanda toolkit consists of five modules as illustrated below and in Figure 1(a). During pilot study and deployment (Section 8), this entire pipeline is run on a weekly basis. 1. **Information Retrieval Module**: We use the NewsAPI scraper [10] with the names of conservation sites taken from a curated list of conservation areas. 2. **Relevance Classification Module**: We classify articles along two dimensions, namely _Conservation Relevance_ and _Infrastructure Relevance_, through a large pretrained language model fine-tuned with our collected dataset. Details of this model are explained in Section 5. 3. **Article Postprocessing Module**: The article postprocessing module has 3 parts: a keyword extractor which extracts keywords, an event extractor which extracts event trends, and a geolocator which provides location coordinates. We discuss these features in Section 6. 4. **Visualization Module**: After the relevant articles are identified, we visualize them in our GIS system at WWF, which we can further analyze and act upon (Section 8). 5. **Social Media Module**: In parallel to the visualization module, another downstream application for NewsPanda is WildlifeNewsIndia, 2 a Twitter bot we built from NewsPanda that shares weekly relevant conservation-related articles on social media (Section 8). Footnote 2: [https://twitter.com/WildlifeNewsIND](https://twitter.com/WildlifeNewsIND) ## 4 Dataset We use two main datasets for developing NewsPanda. First, we use an existing corpus (WHS-Corp) by Hosseini and Coll Ardanuy [1] consisting of articles scraped using World Heritage Sites as keywords and labelled by domain experts. Second, we scrape and label our own corpus (InfraCorp), which is a more focused, timely, and fine-grained upgrade over WHS-Corp. The datasets differ in terms of the locations of the conservation sites used, as well as the time frame of the articles. ### WHS-Corp Dataset WHS-Corp contains over 44,000 articles from 2,974 different sources covering 224 World Heritage Sites globally. Scraping was done using NewsAPI's Python library from a list of curated conservation sites of interest. Besides the title and content, it also contains metadata such as the publication site, the author, and the date of publication. Articles in WHS-Corp span from January 2018 to October 2019. After these articles were gathered, a subset of 928 articles were sampled and manually annotated for _Conservation Relevance_ by domain experts familiar with conservation. _Conservation Relevance_ denotes whether an article discusses threats or impacts to wildlife and environment conservation in general, e.g. poaching, forest development, natural disasters. We use this labelled dataset for training our model. ### InfraCorp Dataset As opposed to WHS-Corp which focuses on global conservation sites, InfraCorp specifically focuses on conservation sites in India and Nepal. The InfraCorp corpus contains 4,137 articles (150 for Nepal and 3,987 for India) from 1,074 conservation sites across the two countries. All articles were taken in the two-year span from November 2019 to November 2021. We use NewsAPI to search for the official names of the conservation sites, or alternative and/or local names for the sites as recorded at WWF. 
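To make the retrieval step concrete, a minimal sketch using the NewsAPI Python client is given below; the API key, site list, date window, and stored fields are placeholders rather than the exact deployment configuration.

```python
from newsapi import NewsApiClient

# Placeholder key and site list; the deployed list is WWF's curated directory of sites and aliases.
api = NewsApiClient(api_key="YOUR_NEWSAPI_KEY")
site_names = ["Chitwan National Park", "Kaziranga National Park"]

def fetch_articles(site, start="2019-11-01", end="2021-11-30"):
    """Return title/description/content and metadata for one search term."""
    resp = api.get_everything(q=f'"{site}"', from_param=start, to=end,
                              language="en", sort_by="publishedAt", page_size=100)
    return [{
        "site": site,
        "title": a.get("title"),
        "description": a.get("description"),
        "content": a.get("content"),
        "source": (a.get("source") or {}).get("name"),
        "published_at": a.get("publishedAt"),
        "url": a.get("url"),
    } for a in resp.get("articles", [])]

corpus = [row for name in site_names for row in fetch_articles(name)]
```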
Given the data availability as well as the annotator capacity of the local domain experts from India and Nepal, we labeled all 150 articles from Nepal and only 1,000 articles from India. Annotation for InfraCorp was done along two dimensions: _Conservation Relevance_ and _Infrastructure Relevance_. _Conservation Relevance_ is similar to the one described for WHS-Corp in Section 4.1. Among the articles which were labelled as positive for _Conservation Relevance_, we further categorize whether it is relevant to infrastructure. This covers issues such as new roads in forested areas and construction projects near national parks. Each article was annotated by two domain experts, one from WWF UK, and another from either WWF India or WWF Nepal. We provided the annotators with a descriptive rubric for labeling in each dimension, as well as concrete examples of edge cases. The following was one such example in our instructions: Articles describing tourism or wildlife or natural beauty of a national park, but without talking about environmental impacts or threats to wildlife and conservation, do not count as positive for _Conservation Relevance_. Where the two sets of labels disagree, the authors closely inspect the articles and decide on the final labels. ## 5 Relevance Classification Module We highlight the structure of our NewsPanda classification module and other key techniques used during training. ### Classification Model The backbone of the NewsPanda classification model is a BERT model [10] with a linear classification head. BERT is a Transformer-based language model trained using masked language modelling and next sentence prediction objectives on large-scale corpora of books and articles. This large-scale pretraining, as well as its ability to effectively encode context, leads to superior performance on a wide variety of tasks. We adapt BERT to the domain of conservation and infrastructure, and we fine-tune it to perform news article classification. In Section 7, we explore different variants of the BERT model (such as RoBERTa). One key change we make to the BERT model is that in the final linear head after the main BERT layers, instead of only considering the BERT vector outputs, we also incorporate other features, namely sentiment analysis and topic modelling, as shown in Figure 1(b). We hypothesize that including these additional features will provide the model with more useful information that will help classify whether or not a particular article is relevant to infrastructure or conservation. For instance, if an article has topic vectors that align with other articles covering forest habitats, but it has an overwhelmingly positive sentiment, then we may suspect that it could be a tourism-related feature article instead of a conservation-related news article (which are often more neutral or negative in terms of sentiment). For sentiment analysis, we extract the sentence polarity scores of the article title, its description, and its content, giving us three sentiment scores per article. This is done on a scale of \(-1.0\) to \(+1.0\), with \(-1.0\) representing the most negative score and \(+1.0\) representing the most positive score. Sentiment analysis was done using the textblob package [1]. Meanwhile, for topic extraction, we consider the entire training corpora of WHS-Corp and InfraCorp, and train a Latent Dirichlet Allocation (LDA) model to identify topic clusters. 
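A minimal sketch of this auxiliary feature construction, assuming textblob for polarity and scikit-learn for the LDA model (the vocabulary size and other settings here are illustrative; the topic count and implementation details are described next):

```python
import numpy as np
from textblob import TextBlob
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

def sentiment_features(article):
    # Three polarity scores in [-1, 1]: title, description, content.
    return np.array([TextBlob(article.get(f) or "").sentiment.polarity
                     for f in ("title", "description", "content")])

def fit_topic_model(train_texts, n_topics=50):
    # Fit once on the combined WHS-Corp + InfraCorp training text.
    vectorizer = CountVectorizer(max_features=20000, stop_words="english")
    counts = vectorizer.fit_transform(train_texts)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    lda.fit(counts)
    return vectorizer, lda

def extra_features(vectorizer, lda, article):
    text = " ".join(article.get(f) or "" for f in ("title", "description", "content"))
    topic_vec = lda.transform(vectorizer.transform([text]))[0]  # topic proportions
    # These are concatenated with the BERT output before the classification head.
    return np.concatenate([sentiment_features(article), topic_vec])
```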
We use 50 topics for the LDA model and Figure 2: NewsPanda pipeline (1(a)) and model diagram for conservation and infrastructure relevance classifiers (1(b)). implemented it using scikit-learn [11]. Lastly, for the main BERT model, we concatenate the title, description, and content of each article, and we use this concatenated text as input to our classifier. For cases where the article is missing certain features (e.g. no description), we simply supply an empty string for that feature. The vectors from the three steps (i.e. BERT model, sentiment analysis, topic modelling) are then concatenated, and this final vector is used as the input to the final classification head to generate a binary prediction. Specific implementation settings and other hyperparameters can be found in Section 7.1. ### Active Learning Annotating a dataset is costly. In curating our InfraCorp dataset, we need to be mindful of which specific articles to label in order for our model to learn most efficiently. For this selection process, we first fine-tune a pretrained RoBERTa-base model on the existing WHS-Corp dataset, based on the _Classification Relevance_. To make this preliminary model as close to our final model as possible, we also incorporate the topic modelling and sentiment analysis features, as shown in Figure 2b. Because this is only a preliminary model, we forego doing extensive hyperparameter tuning and decided to just select a setting that worked recently well: with a learning rate of 1e-5, batch size of 16, and training for 10 epochs, we were able to get an F-score of 0.61 on WHS-Corp. Using this trained model, we then generate _Classification Relevance_ predictions for all articles in the InfraCorp corpus, together with the corresponding softmax scores. We treat these softmax scores as a measure for the classification confidence of the model: if the softmax is close to 0 or close to 1, then it means that the model is very certain with its prediction, while if the softmax is close to 0.5, then it means the model is unsure with its prediction. We then select 300 articles which our model is least confident about. We hypothesize that selecting these "difficult" rows will have the greatest impact on model performance. We call this active learning-based dataset InfraCorp-A. To verify the effectiveness of active learning, we also randomly sample 300 articles to label, which we call InfraCorp-R. We will later evaluate how this compares with the actively selected dataset on a randomly selected test set of 400 samples in our ablation study (Section 7.3). ### Noisy Label Correction Our dataset is labelled by two sets of domain expert annotators from WWF. Although we provided detailed criteria for labelling each article, there is always room for some subjectivity in the process. This resulted in the two sets of labels not agreeing with each other on over \(10\%\) of the data points. Although, as mentioned in Section 4.2, we did manage to obtain the "ground truth" label for a small subset of InfraCorp for model evaluation purposes, doing that for every single article is prohibitively expensive - much more expensive than the (not cheap) process of having either annotator providing a (noisy) label. Therefore, in order for **NewS-Panda** to work well once deployed, we need to be able to learn well from the potentially noisy labels only. More formally, let \(x_{n}\) be the embedding of an article along with its sentiment and topic modeling vectors as described in Section 5.1. 
Let \(y_{n}\) be the true label of this article. The task is to make an accurate prediction on the dataset \(\{(x_{n},y_{n}):n=1\dots N\}\) when we only have access to the noisy data \(\{(x_{n},\tilde{y}_{n}):n=1\dots N\}\) where \(\tilde{y}_{n}\) is the label that we get from either of the two annotators, and the true labels \(y_{n}\) are the final labels that we decide on after resolving conflicts. To address this challenge, we adapt the CORES\({}^{2}\) loss [13] noise correction algorithm, which is an extension of the earlier peer loss [15]. Peer loss frames the task of learning from noisy labels as a peer prediction problem. In practice, the loss for each \((x_{n},y_{n})\) data point can be calculated using the standard cross entropy loss with \((x_{n},y_{n})\), modified with a loss calculated using a randomly sampled input \(x_{n_{1}}\) and an _independently_ randomly sampled label \(y_{n_{2}}\). That is, we have \[\ell_{\text{\tiny{PEER}}}(f(x_{n}),\tilde{y}_{n}):=\ell(f(x_{n}),\tilde{y}_{n })-\alpha\cdot\ell(f(x_{n_{1}}),\tilde{y}_{n_{2}})\] where \(\alpha>0\) is a tunable parameter. Meanwhile, CORES\({}^{2}\) replaces the random sampling from peer loss with a confidence regularizer defined as follows: \[\ell_{\text{\tiny{CORES}}}(f(x_{n}),\tilde{y}_{n}):=\ell(f(x_{n}),\tilde{y}_{ n})-\beta\cdot\mathbb{E}_{\tilde{Y}_{|\tilde{D}}}[\ell(f(x_{n}),\tilde{Y})]\] where \(\tilde{D}\) is the dataset, \(\tilde{Y}\) is a noisy label, and \(\beta>0\) is a tunable parameter. Following cheng2021explaining, we calculate this confidence regularizer term using an estimate of the noise prior probability. We test both peer loss and CORES\({}^{2}\) loss, and report results in our ablation study (Section 7.3). ## 6 Article Postprocessing Module Once the relevant articles are identified using the model, we then perform a few post-processing steps to extract key information and make them easier to analyze and visualize. ### Keyword Extractor Keywords are important, as they allow the easy summarization, categorization, and grouping of news articles. Furthermore, we also use these keywords as hashtags in our social media module (Section 8). To extract keywords, we use an extensive list of conservation-related keywords maintained at WWF and search the article for exact matches. In addition, we also use Named Entity Recognition systems to extract the salient words in each article. To perform this, we use a BERT-based model trained on the CoNLL 2003 Named Entity Recognition dataset [13]. The keywords extracted using these two methods are then concatenated to form the final set of keywords. ### Event Extractor To track the progress of infrastructure projects, it is often not enough to just view a single article in isolation. Rather, news regarding these projects often builds up over a period of weeks or months. To help provide this context, we create an automated event extractor, which leverages our InfraCorp dataset, including both the labelled articles as well as the unlabelled articles. Given a new article \(a\), our goal is to find past articles \(P_{a}\) which are closely related to \(a\). We first gather all previous articles which are from the same conservation site. Next, we create a graph \(G_{a}\), where each article is a node, and two nodes share an edge if the corresponding articles share \(\geq k\) common keywords (from Section 6.1). Here, \(k\) is an adjustable parameter depending on how loosely connected we want \(G_{a}\) to be. For our data, we use \(k=3\). 
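A minimal sketch of the keyword extraction (Section 6.1) and of this keyword-overlap graph is shown below; the curated keyword list is a tiny placeholder, the NER checkpoint is a public stand-in for the CoNLL-2003 BERT model mentioned above, and the use of networkx is our illustrative choice rather than the deployed implementation.

```python
import networkx as nx
from transformers import pipeline

# Placeholder keyword list; WWF maintains a much larger curated one.
CONSERVATION_KEYWORDS = {"poaching", "deforestation", "tiger", "wetland", "highway"}
# A public CoNLL-2003 NER checkpoint used here as a stand-in.
ner = pipeline("ner", model="dslim/bert-base-NER", aggregation_strategy="simple")

def extract_keywords(text):
    keywords = {w for w in CONSERVATION_KEYWORDS if w in text.lower()}
    keywords |= {ent["word"].lower() for ent in ner(text[:2000])}
    return keywords

def build_event_graph(articles, k=3):
    # articles: dicts with "id" and "text", all from the same conservation site.
    g = nx.Graph()
    kw = {a["id"]: extract_keywords(a["text"]) for a in articles}
    g.add_nodes_from(kw)
    ids = sorted(kw)
    for pos, i in enumerate(ids):
        for j in ids[pos + 1:]:
            if len(kw[i] & kw[j]) >= k:  # edge iff >= k shared keywords
                g.add_edge(i, j)
    return g
```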
Once the graph \(G_{a}\) is constructed, we then define an "event" to be the maximal clique containing \(a\), and we report all such events. A sample chain of events is shown in Figure 3. ### Geolocation To aid with visualization (Section 8), we perform geolocation on the classified news articles, based on the search terms used to retrieve them. To extract latitude and longitude coordinates, we leverage an extensive directory of conservation sites from WWF, and we use the directory to map conservation sites to their corresponding coordinates. If the directory contains no match, we geolocate using the geopy package. ## 7 Experiments and Results Here, we discuss results of our in-lab experiments and ablation studies to verify our hypotheses. Results from real-world deployment are discussed in the succeeding section. ### Experiment Settings BaselinesWe compare the performance of our **NewsPanda** model with the following baselines: 1. [leftmargin=*] 2. **Keyword model**: We consider a naive model that checks for the count of certain keywords. We curate two sets of "conservation-related keywords" and "infrastructure-related keywords". If an article contains more than \(k\) "conservation-related keywords", then it is considered to be relevant to conservation (likewise for infrastructure). 3. **RNN-based models**: We tokenize each article, then pass the embedding to RNN models, where the hidden state of the last layer is used as input to the final classification layer. We use two types of RNN models, namely GRUs [1] and LSTMs [1]. 4. **BERT-based models**: We fine-tune a pretrained BERTbase [1] and RoBERTa-base model [12], where we add a classification head after the final layer to perform relevance classification. Evaluation MetricsSince our task is binary classification, we measure the accuracy, precision, recall, and F1-score. For precision, recall, and F1, we consider only the scores of the positive class. All metrics are calculated separately for _Conservation Relevance_ and _Infrastructure Relevance_. DataFor _Conservation Relevance_, we train on the InfraCorp dataset (consisting of both InfraCorp-A and InfraCorp-R), as well as the WHS-Corp dataset. For _Infrastructure Relevance_, since WHS-Corp does not contain infrastructure labels, we only train using InfraCorp. We split the training data into an 80-20 training-validation split. For evaluation, we use the test split of InfraCorp for both _Conservation Relevance_ and _Infrastructure Relevance_. Implementation SettingsFor the GRU/LSTM, we use a batch size of 128, hidden size of 128, and dropout of 0.2. We train for 10 epochs with a learning rate of 1e-4. Meanwhile, for BERT, RoBERTa, and **NewsPanda**, we train for 10 epochs with batch size 4 and learning rate 1e-5. We use RoBERTa for the backbone model of **NewsPanda**. Model selection is done by considering the best validation F1-score. ### Results and Analysis Experimental results are shown in Tables 0(a) and 0(b). We observe that indeed, adding the sentiment analysis and topic modelling features, as well as the CORES\({}^{2}\) loss for noisy label correction, aids in predictions for both _Conservation Relevance_ and _Infrastructure Relevance_, providing an improvement over both BERT-base and RoBERTa-base. Our data is quite imbalanced: \(>\)80% of the articles are not relevant. This manifests itself in the discrepancies between accuracy and F1-score. 
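A toy computation (synthetic labels, not our data) makes the point concrete: a classifier that rarely predicts the positive class can look strong on accuracy while its positive-class F1 collapses.

```python
from sklearn.metrics import accuracy_score, f1_score

# Synthetic example: 85 irrelevant (0) and 15 relevant (1) articles.
y_true = [0] * 85 + [1] * 15
# A model that recovers only 3 of the 15 relevant articles and adds 2 false alarms.
y_pred = [0] * 83 + [1] * 2 + [1] * 3 + [0] * 12

print(accuracy_score(y_true, y_pred))         # 0.86
print(f1_score(y_true, y_pred, pos_label=1))  # 0.30
```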
We observe, for example, that the naive keyword model has very high accuracy scores but \begin{table} \begin{tabular}{c|c|c|c|c} \hline **Model** & **Acc.** & **P** & **R** & **F1** \\ \hline \hline Keyword & 0.820 & 0.317 & 0.634 & 0.423 \\ LSTM & 0.711 & 0.495 & 0.511 & 0.504 \\ GRU & 0.729 & 0.422 & 0.505 & 0.475 \\ BERT & 0.860 & 0.708 & 0.704 & 0.706 \\ RoBERTa & 0.867 & 0.705 & 0.743 & 0.721 \\ **NewsPanda** & **0.877** & **0.729** & **0.801** & **0.744** \\ \hline \multicolumn{5}{c}{(a) Scores for _Conservation Relevance_} \\ \hline **Model** & **Acc.** & **P** & **R** & **F1** \\ \hline \hline Keyword & **0.947** & 0.250 & 0.455 & 0.323 \\ LSTM & 0.908 & 0.566 & 0.537 & 0.554 \\ GRU & 0.895 & 0.544 & 0.557 & 0.553 \\ BERT & 0.922 & 0.840 & 0.745 & 0.771 \\ RoBERTa & 0.916 & 0.794 & 0.809 & 0.799 \\ **NewsPanda** & 0.941 & **0.880** & **0.821** & **0.850** \\ \hline \end{tabular} \end{table} Table 1: Average scores for _Conservation Relevance_ (Table 0(a)) and _Infrastructure Relevance_ (Table 0(b)), taken over 10 random seeds. Figure 3: Example of events selected by the Event Extractor (Section 6.2) by date. The progression of the project is highlighted by the phrases in red underline. very low F1-scores, which indicates that it predicts a lot of zeros (hence the high accuracy), but is not able to predict the relevant articles well. The RNN-based models (LSTM and GRU) seem to perform relatively poorly, achieving an F1-score of around 0.5. This could also be attributed to the data imbalance, since these RNN-based models are generally not as robust to imbalanced datasets. In contrast, the BERT and RoBERTa models perform quite well, with F1-scores \(>\)0.7 for conservation and \(>\)0.75 for infrastructure, and precision/recall scores also around that range. This indicates that these transformer-based models are able to generalize quite well and successfully capture the notions of _Conservation Relevance_ and _Infrastructure Relevance_. Lastly, **NewS****Panda** offers significant improvement over the RoBERTa-base model (F1 t-test \(p\)-value \(=0.018\) for conservation and \(0.033\) for infrastructure), showing the positive effects of incorporating information such as the emotion and topics over simply considering the article text in isolation. ### Ablation Study #### Active Learning We compare the effect with training on actively-sampled data (InfraCorp-A) and randomly-sampled data (InfraCorp-R). Each of these datasets contain 300 India articles, as detailed in Section 5.2 and 4.2. We append these articles to the existing WHS-Corp to create the final data for training. We use the RoBERTa model for these experiments. Results are shown in Table 2. For both InfraCorp-A and InfraCorp-R, we see an improvement over just using WHS-Corp. Indeed, training with more data will result in better performance, regardless of how the data is sampled. We also observe that adding actively sampled data results in a larger improvement than adding randomly sampled data across all metrics (F1 t-test \(p\)-value \(=0.004\)). This verifies the effectiveness of our hypothesized confidence-based data selection for annotation. #### Noisy Label Correction We examine the effect of the noise correction methods outlined in Section 5.3, by comparing the effect of using peer loss, CORES\({}^{2}\) loss, and standard cross entropy loss. Based on InfraCorp, we use the labels supplied by one of the two annotators for the training set, and the final calibrated labels for the test set. 
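For reference, a minimal PyTorch-style sketch of how the two regularized objectives from Section 5.3 can be computed per batch is given below; the base loss is cross entropy, and the noise-prior term for CORES\({}^{2}\) is simplified to a fixed estimated label distribution, which is an approximation of, not a substitute for, the full estimation procedure of Cheng et al.

```python
import torch
import torch.nn.functional as F

def peer_loss(logits, labels, alpha=0.05):
    # logits: [N, C], labels: LongTensor [N] of noisy labels.
    base = F.cross_entropy(logits, labels)
    # Peer term: model outputs and labels permuted independently within the batch.
    perm_logits = logits[torch.randperm(len(logits))]
    perm_labels = labels[torch.randperm(len(labels))]
    return base - alpha * F.cross_entropy(perm_logits, perm_labels)

def cores_loss(logits, labels, label_prior, beta=0.05):
    # label_prior: [C] estimated noisy-label distribution (e.g. empirical frequencies).
    base = F.cross_entropy(logits, labels)
    log_probs = F.log_softmax(logits, dim=-1)
    # Confidence regularizer: expected cross entropy under the noisy-label prior.
    reg = -(label_prior.unsqueeze(0) * log_probs).sum(dim=-1).mean()
    return base - beta * reg
```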
Hyperparameter search was done for both peer loss and CORES\({}^{2}\) loss to find the optimal values of \(\alpha=0.05\) and \(\beta=0.05\). We trained for 20 epochs with a learning rate of 2e-5. From Table 3, we observe that for accuracy and precision, all three losses perform very similarly, with peer loss performing the highest by a small margin. For recall and F1, peer loss and the standard loss perform at comparable levels, while CORES\({}^{2}\) loss performs better than both (F1 t-test \(p\)-value \(=0.001\)). This is likely because the confidence regularizer used in CORES\({}^{2}\) works better than the random sampling used by peer loss. Both peer and CORES\({}^{2}\) loss might work even better if we had more training data than the current 600 in InfraCorp. In the end, given the positive results of CORES\({}^{2}\), we used it in our **NewS****Panda** model. ## 8 Deployment and Impact **NewS****Panda** has been used at WWF since February 2022. We describe the deployment, results, and lessons learned. ### Pilot Study The first stage of **NewS****Panda** deployment, which is the pilot study, started in February 2022 and ran for around one month. Every week, the CMU team scraped the news articles and ran the entire **NewS****Panda** pipeline, forwarding the outputs to the WWF teams to examine and provide feedback. During this pilot phase, the WWF and CMU teams identified a range of operational and technical issues in the initial version of **NewS****Panda**. First, in order for **NewS****Panda** to fit into the established workflow of WWF, it needs to be integrated into its GIS system. During the pilot, we realized that it is crucial to add the geolocation of each article (Section 6.3) and format the model output according to the specifications of the GIS platform used at WWF. Figure 4 shows how **NewS****Panda**'s results get integrated into the GIS system, with the red areas being the locations where we identify a relevant article. We also discovered that while **NewS****A**I has a good collection of global news sources, it fails to include some relevant sources in the local context. With the suggestions from the WWF team, we incorporated additional sources that often yield relevant local articles. One such site is Parivesh, which contains proposals of infrastructure projects in India. Finally, we found that some conservation sites' names often lead to 0 results, while other terms were too general and yielded hundreds of results, almost all of which were irrelevant, leading to inefficiencies. We set a lower and upper threshold, and filter out search terms outside the thresholds. ### Deployment Results After we resolved the above issues, we proceeded with the actual deployment. The procedure was similar to the pilot phase, except that at this phase, the focus is to evaluate the performance of **NewS****Panda**. The WWF teams closely inspected the model predictions each week and provided ground truth labels for each article. The label feedback allowed the CMU team to retrain the model regularly. This \begin{table} \begin{tabular}{c|c|c|c|c} \hline **Dataset** & **Acc.** & **P** & **R** & **F1** \\ \hline \hline WHS-Corp & 0.911 & 0.585 & 0.585 & 0.586 \\ \hline WHS+Inf.Corp-A & **0.921** & **0.600** & **0.774** & **0.670** \\ WHS+Inf.Corp-R & 0.916 & 0.586 & 0.696 & 0.637 \\ \hline \end{tabular} \end{table} Table 2: Evaluation scores for _Conservation Relevance_ for InfraCorp-A compared with InfraCorp-R, averaged over 10 random seeds. 
\begin{table} \begin{tabular}{c|c|c|c|c} \hline **Noisy Label** & **Acc.** & **P** & **R** & **F1** \\ **Correction** & & & & \\ \hline \hline None & 0.907 & 0.566 & 0.441 & 0.497 \\ \hline Peer Loss & **0.911** & **0.591** & 0.465 & 0.509 \\ CORES\({}^{2}\) & 0.908 & 0.584 & **0.551** & **0.553** \\ \hline \end{tabular} \end{table} Table 3: Evaluation scores for _Conservation Relevance_ for two noise correction methods, over 10 random seeds. stage ran from March 2022 to July 2022. Table 4 shows the aggregated results over 5 months of evaluation results from WWF India, Nepal, and UK. WWF UK labeled the first half of the deployment for all locations and India/Nepal labeled the second half for news articles in their respective countries. Overall, **NewsPanda** continued to show great performance in _Conservation Relevance_ during real-world deployment. Across all evaluations, the precision scores are consistently high, indicating that almost all of the articles reported by **NewsPanda** are indeed relevant. We intentionally tuned the model towards this direction - when almost everything that the model flagged is relevant, it would greatly help with establishing the trust in the model at the early stage of deployment. As we continue developing the model, we aim to improve the model towards achieving higher recall, to be able to capture more relevant articles. On the other hand, on _Infrastructure Relevance_ for India, the model's performance was worse than the offline experiments. Upon further inspection, we discovered that the majority of mistakes were in fact only 2-4 original pieces of news that were paraphrased by various news sources into 20-40 articles. Since there are only a few _Infrastructure Relevance_ positive articles to start with, this had a big impact on the model performance. Meanwhile, such phenomenon did not occur in our offline experiments because there we randomly sampled news from a large corpus for labeling. Aside from overall metrics, we also highlight individual success stories. Figure 4(right) shows a concrete example where **NewsPanda** made a difference. In early August, 2022, **NewsPanda** detected a new project of Ikhala Block Boundary Kishtwar to Lopara Road and highlighted it in the WWF GIS system. Upon further investigation by WWF staff, it is found that the project would divert 5.9 hectares of forest land. More importantly, WWF found that the project was still at its pre-proposal stage. This means WWF would be able to take early action and possibly participate in relevant conversations. Such stories are happening frequently since the deployment of **NewsPanda**. Using the tool's outputs integrated into our internal GIS systems, the WWF staff are continuously coordinating with our field teams to examine the status and report on relevant projects and areas. ### Qualitative and Quantitative Comparison with Current Practice Prior to **NewsPanda**, WWF had already been monitoring media for conservation-related articles (Figure 1). However, most of these efforts were not very structured or logged. It is thus difficult to draw head-to-head comparisons between **NewsPanda** and WWF's existing approach. That said, we still provide qualitative and quantitative evidence supporting the merit of **NewsPanda** over the current practice. Two months into the deployment, the CMU team carried out semi-structured interviews with their WWF colleagues who have been using **NewsPanda** outputs in their work. 
The purpose was to understand how WWF teams liked the toolkit and to elicit possible suggestions for improvement. Some quotes from the interviews are as follows. "You're giving us a bunch of articles... over 50 articles a week. We had two interns who spend 2-3 days a week on this and would only give us seven to ten articles. So there is a huge bump in efficiency right there in itself." "The data that you're sharing give a global perspective. It is very useful to understand the upcoming projects or mitigation measures that are being adopted on a global scale. So it helps us be informed." This improvement in news collection also helped with the downstream task - infrastructure impact assessment. "It took us maybe a month to do analyses of three or four infrastructure projects. With **NewsPanda**, we can send (stakeholders) 20 or 30 reports in a month." \begin{table} \begin{tabular}{c|c c c|c c c} \hline & \multicolumn{3}{c}{**Conservation**} & \multicolumn{3}{c}{**Infrastructure**} \\ & P & R & F1 & P & R & F1 \\ \hline \hline India & 0.849 & 0.605 & 0.706 & 0.462 & 0.250 & 0.324 \\ Nepal & 0.895 & 0.917 & 0.906 & 0.923 & 0.308 & 0.462 \\ UK & 0.879 & 0.823 & 0.850 & 1.000 & 0.455 & 0.625 \\ \hline \end{tabular} \end{table} Table 4: Aggregated scores of NewsPanda on weekly articles from March 2022 to July 2022. Figure 4: Left: The highlighted red areas indicate clusters of articles found by our model. Right: The WWF GIS system, where each relevant article is shown on the map with its corresponding key details. The micro-level improvement in this single task has also resulted in macro-level organizational change: "It's also a transition in their (WWF staff) job function. They will not just be doing data hunting. They are qualifying themselves to be data analysts." The WWF Nepal team has been putting together weekly news digests for conservation sites in Nepal. Although this dataset is small and has no negative labels, this is the only quantitative comparison between **NewsPanda** and current practice we can make. We find that our model is able to identify 62% of the articles in the news digest. This is a relatively good performance as we had extremely limited articles (only 150) about Nepali conservation sites to train the model. ### Sustainable Deployment and Broader Impact Encouraged by the success of **NewsPanda** at the initial stages, we are working to scale it to more sites and permanently deploy **NewsPanda** as part of the WWF computing infrastructure. We have been collecting news articles for over 60,000 sites globally and applying our trained model to classify them on a weekly basis since April 2022. Because the main model has already been trained, we no longer need extensive data labeling for evaluation. Instead, we only need a small subset for model update and fine-tuning purposes. We are currently investigating the ability **NewsPanda** to generalize to new locations and new languages given only a few (or even zero) domain-specific training points. We are also shifting our system to a cloud server to be owned and maintained by the WWF team, rather than the CMU team, to ensure sustainable deployment. The CMU team will continue to provide support and tutorials to help WWF eventually grow in-house capability of sustaining the project. Much as this project was a collaboration between WWF and CMU, **NewsPanda** could also be valuable to the broader civil society. Thus, we also developed a social media module in the form of a Twitter bot called **WildlifeNewsIndia**. 
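The posting step itself is lightweight; a minimal sketch using the tweepy client is shown below (credentials, helper names, and the hashtag format are placeholders), while the selection of articles and hashtags is described next.

```python
import tweepy

# Placeholder credentials; real values come from the Twitter developer portal.
client = tweepy.Client(consumer_key="...", consumer_secret="...",
                       access_token="...", access_token_secret="...")

def tweet_article(article, keywords, max_tags=3):
    # Build hashtags from the keywords produced by the keyword extractor.
    tags = " ".join("#" + k.title().replace(" ", "") for k in sorted(keywords)[:max_tags])
    text = f"{article['title']} {article['url']} {tags}"
    client.create_tweet(text=text[:280])
```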
The bot periodically tweets a selected set of the identified relevant articles. In addition to tweeting links to articles, we also use the keywords from **NewsPanda**'s keyword extractor (Section 6.1) to generate salient hashtags. A sample tweet is shown in Figure 5. Currently, **WildlifeNewsIndia** is focused on conservation-related articles in India. As we continue working on this project, we hope to scale this to a global level, so that any organization or individual interested in conservation can benefit from the tool. ### Lessons Learned This 1.5-year-long (and counting) collaboration has yielded many valuable lessons for both WWF and CMU. We have already mentioned some of those in earlier sections. We highlight two more generalizable lessons below. Problem identification is an iterative process and rapid prototyping helps surface unforeseen needs. The event extractor in Section 6.2 was not initially part of the agenda: without a prototype of the model readily available, it was difficult for WWF to realize what could be done with it. However, after several iterations of communication and exploring results, the need to track the development related to a single project/location became clear to us. This was made possible by rapid prototyping, where the CMU team used viable algorithms that may not be optimal but are quick to implement to demonstrate the possibilities of the toolkit. It is the various "not-so-AI" components that realize the promise of an AI for nonprofit project on the ground. While the classification module in Section 5 is the engine of **NewsPanda**, the postprocessing module in Section 6 and the visualization module in Figure 4 are key in getting the information into a consumable format, and ultimately in securing buy-in at WWF. Each of the latter two modules requires at least as much engineering effort and careful design as the classification module. We call on future AI for nonprofit projects to pay enough attention to all the infrastructure around the AI part, in order to deliver the real impact that we hoped for. ## 9 Conclusion In this paper, we designed and deployed **NewsPanda**, a toolkit for extracting, classifying, and analyzing articles related to conservation and infrastructure. We showed empirically that our **NewsPanda** model classifies better than baseline methods for both _Conservation Relevance_ and _Infrastructure Relevance_. We also presented quantitative and qualitative evaluations of our system in the real world as well as its impact on WWF teams in the UK, India, and Nepal. Currently **NewsPanda** mainly focuses on a few countries, and we are expanding it to a global scale. However, incorporating additional conservation sites is just the beginning. To do it right, we also need to cover more languages and more local media sources. This is especially important for the Global South, as many high-impact local developments might never reach international news outlets. The ability to capture these local sources, especially if they are not written in English, is something we are currently working on. We are currently starting with articles written in the Nepali language. Initial experiments with a multilingual version of **NewsPanda** have shown good generalization when given only a few Nepali articles for training. With this multilingual model, we hope to further expand to cover a wider array of languages. Figure 5: Sample tweet of **WildlifeNewsIndia** ## Acknowledgements We thank Mr. 
Pramod Neupane, Consultant-Sustainable Infrastructure at the World Bank, for all his support during the initial phase of the project, which included project conceptualization, data curation, funding acquisition, project administration for WWF Nepal, and resource allocation at WWF Nepal. We also thank the communications team at WWF Nepal for providing the weekly news links. This work was supported in part by a Google AI for Social Good award, NSF grant IIS-2046640, a Siebel Scholarship, and a Carnegie Mellon Presidential Fellowship.
NGOs for environmental conservation are interested in monitoring conservation-related media and obtaining the latest updates on infrastructure construction projects, because these projects can have a major impact on critical conservation areas. Carrying out such monitoring is difficult and time-consuming. We introduce NewsPanda, a tool that automatically detects online articles and analyzes them with respect to environmental conservation and infrastructure construction. Using active learning methods and noise correction algorithms, we identify texts that are highly relevant to conservation and infrastructure construction. For the identified texts, we perform further analysis, extracting keywords and identifying relevant sources. NewsPanda has been successfully deployed by the WWF teams in the UK, India, and Nepal since February 2022. It currently covers 80,000 websites and 1,074 conservation
2301.07485
Image Embedding for Denoising Generative Models
Denoising Diffusion models are gaining increasing popularity in the field of generative modeling for several reasons, including the simple and stable training, the excellent generative quality, and the solid probabilistic foundation. In this article, we address the problem of {\em embedding} an image into the latent space of Denoising Diffusion Models, that is finding a suitable ``noisy'' image whose denoising results in the original image. We particularly focus on Denoising Diffusion Implicit Models due to the deterministic nature of their reverse diffusion process. As a side result of our investigation, we gain a deeper insight into the structure of the latent space of diffusion models, opening interesting perspectives on its exploration, the definition of semantic trajectories, and the manipulation/conditioning of encodings for editing purposes. A particularly interesting property highlighted by our research, which is also characteristic of this class of generative models, is the independence of the latent representation from the networks implementing the reverse diffusion process. In other words, a common seed passed to different networks (each trained on the same dataset), eventually results in identical images.
Andrea Asperti, Davide Evangelista, Samuele Marro, Fabio Merizzi
2022-12-30T17:56:07
http://arxiv.org/abs/2301.07485v1
# Image Embedding for Denoising Generative Models ###### Abstract Denoising Diffusion models are gaining increasing popularity in the field of generative modeling for several reasons, including the simple and stable training, the excellent generative quality, and the solid probabilistic foundation. In this article, we address the problem of _embedding_ an image into the latent space of Denoising Diffusion Models, that is finding a suitable "noisy" image whose denoising results in the original image. We particularly focus on Denoising Diffusion Implicit Models due to the deterministic nature of their reverse diffusion process. As a side result of our investigation, we gain a deeper insight into the structure of the latent space of diffusion models, opening interesting perspectives on its exploration, the definition of semantic trajectories, and the manipulation/conditioning of encodings for editing purposes. A particularly interesting property highlighted by our research, which is also characteristic of this class of generative models, is the independence of the latent representation from the networks implementing the reverse diffusion process. In other words, a common seed passed to different networks (each trained on the same dataset), eventually results in identical images. **Keywords:** Denoising Diffusion Models, Generative Models, Embedding, Latent Space, Representation Learning ## 1 Introduction Denoising Diffusion Models (DDM) [1] are rapidly emerging as the new state-of-the-art technology in the field of deep generative modeling, challenging the role held so far by Generative Adversarial Networks [2]. The impressive text-to-image generation capability shown by models like DALL-E2 [3] and Imagen [4], recently extended to videos in [5], clearly demonstrated the strengths of this technique, including excellent image synthesis quality, good sampling diversity, ease and flexibility of conditioning, stable training, and good scalability. In very rough terms, a diffusion model trains a single network to denoise images with a parametric amount of noise, and generates images by iteratively denoising pure random noise. This latter process is traditionally called _reverse diffusion_ since it is meant to "invert" the _direct diffusion_ process, which consists of adding noise. In the important case of Implicit Diffusion models [6], reverse diffusion is deterministic, but obviously not injective: many noisy images can be denoised to a single common result. Let us call \(emb(x)\) (embedding of \(x\)) the set of points whose reverse diffusion generates \(x\). The problems we are interested in are investigating the shape of \(emb(x)\) (e.g. is it a connected, convex space?), finding a "canonical" element in it (i.e. a sort of center of gravity) and, in case such a canonical element exists, finding an efficient way to compute it. This would allow us to embed an arbitrary image into the "latent" space of a diffusion model, providing functionality similar to GAN-recoders (see Section 2), or to encoders in the case of Variational AutoEncoders ([7],[8]). Since reverse diffusion is the "inversion" of the diffusion process, it might be natural to expect \(emb(x)\) to be composed of noisy versions of \(x\), and that the canonical element we are looking for could be \(x\) itself. This is not the case: indeed, \(x\) does not seem to belong to \(emb(x)\). Figure 1 details some examples of the output obtained by using the image itself as input to the reverse diffusion process. 
Figure 1: Examples of faces obtained using the image itself as input to the reverse diffusion process. The input signal is clearly far too strong. Since the input signal is clearly too strong, we may be tempted to reduce it using a multiplicative factor equal to the minimum signal rate used to train the denoising network (0.02 in our case), or a similarly low value. Examples of results are shown in Figure 2. Although some macroscopic aspects of the original image like orientation and illumination are roughly preserved, most of the information is not embedded in these seeds: scaling does not result in a reasonable embedding. We also attempted to inject some additional noise into the initial seed, hoping to obtain a more entropic signal that is similar to the typical input of the reverse diffusion process, but this merely resulted in a less deterministic output. Therefore, the embedding problem is both far from trivial and very interesting. Understanding the embedding would give us a better grasp of the reverse diffusion process, as well as a deeper, semantic insight into the structure of its latent space. Our approaches to the embedding problem are discussed in Section 5. Overall, we find that we can obtain pretty good results by directly training a Neural Network to compute a kind of "canonical" seed (see Figure 3). Figure 3: CelebA: examples of face embeddings (second row) and reconstructions (third row). No cherry-picking. Figure 2: Examples of faces obtained using a weak version of the image itself as input signal. The first row shows the original image, while the second shows its weak version, which has been scaled by the minimum signal rate used to train the denoising network (0.02). This weaker image constitutes the initial seed. In the following four rows, we see the reconstructions obtained through reverse diffusion from the initial seed and progressively stronger versions of it, varying the signal rate between 0.02 and 0.08. The reconstruction quality is very high, with an MSE of around 0.0015 in the case of CelebA [9]. More detailed values are provided in Section 5.2. A typical application of the embedding process consists in transforming a signal into an element of the data manifold sufficiently close to it (the same principle behind denoising autoencoders). An amusing application is the reification of artistic portraits, as exemplified in Figure 4. Another interesting possibility is that of making sketchy modifications to an image (a portrait, or a photograph) and delegating to the embedder-generator pair the burden of integrating them into the original picture in a satisfactory way (see Figure 5). Figure 4: Reification of portraits. The portrait is first embedded into the latent space, and then pulled back into the data manifold. Figure 5: Scribbles over David’s Napoleon. ### Structure of the Work The article is structured as follows. In Section 2, we discuss related works, mostly focusing on the embedding problem for Generative Adversarial Networks. Section 3 is devoted to formally introducing the notion of Denoising Diffusion Models, in addition to the deterministic variant of Denoising Diffusion Implicit Models we are particularly interested in. In the same section, we also discuss an intuitive interpretation of denoising diffusion models in terms of a "gravitational analogy" (Section 3.2), which drove many of our investigations and plays an important role in understanding the structure of datapoint embeddings. 
A major consequence of this interpretation, which to the best of our knowledge has never been pointed out before, is the _invariance of the latent space with respect to different models_: a given seed, passed as input to different models, always produces the same image. In Section 4, we provide architectural details about our implementation of the Denoising Diffusion model. Our methodology to address the embedding problem is discussed in Section 5. Two main approaches have been considered, one based on a gradient descent technique, which allows us to synthesize large clouds of different seeds in the embedding space of specific data points (Section 5.1), and another one based on training a neural network to compute a single "canonical" seed for the given image: essentially, a sort of encoder (Section 5.2). Conclusions and future works are discussed in Section 6. **Code**. The source code of the experiments described in this work is freely available at the GitHub repository [https://github.com/asperti/Embedding-in-Diffusion-Models](https://github.com/asperti/Embedding-in-Diffusion-Models), along with links to weights for pre-trained models. ## 2 Related Works The embedding problem has been extensively investigated in the case of Generative Adversarial Networks (GANs) [10]. Similarly to Denoising Generative Models, GANs lack a direct encoding process of the original input sample into the latent space. Several approaches to inversion have been investigated [11; 12; 13; 14], mostly with the purpose of editing. The most common approaches are based on synthesis of the latent encoding via gradient descent techniques [15], or by training a suitable neural network to produce encodings able to reconstruct the original input with sufficient approximation. While the former technique generally tends to achieve better reconstruction errors, the latter has faster inference times and can take advantage of the fact that, since a GAN produces an infinite stream of training data, over-fitting is much less likely. Hybrid methods combining both techniques have also been explored [16; 17]. Recent works have mostly focused on the inversion of the popular StyleGAN and its successors [18; 19; 20], building on previous work with a variety of inversion structures and minimization objectives, or aiming to generalize/transfer to arbitrary datasets [21; 22; 23; 24; 25]. As we already mentioned, the typical application of the embedding is for exploration of the latent space, either for disentanglement purposes or in view of editing; the two issues are in fact tightly intertwined, since knowledge about semantically meaningful directions (e.g. color, pose, shape) can be exploited to tweak an image with the desired features. For instance, InterFaceGAN [26] uses regression techniques to find a hyperplane in the latent space whose normal vector allows for a gradual modification of the feature. Further work based on this idea searches for these directions as an iterative or an optimization problem [27] and also extends it to controllable walks in the latent space [28]. In the same vein, [29] studies the feature space of the U-Net bottleneck of the diffusion model, finding that it can be used as an alternative latent space with highly semantic directions. Another important application of embeddings is for the comparison of the latent space of different generative models [30]: having the possibility to embed the same image in different spaces allows us to create a supervised dataset suitable for learning direct mappings from one space to another. 
In the realm of diffusion models, much work has been done on the refinement of the reverse diffusion process [2; 31; 32], but relatively little attention has been so far devoted to its inversion. DALL-E2 [3] relies on a form of image embedding, but this is a pre-trained contrastive model not learnt as the inversion of the generative task. An additional difference with respect to our work is that we are also interested in investigating and understanding the _structure_ of the embedding cloud for each image since it could highlight the organization of the latent space and the sampling process. Finally, in the context of text-conditioned generative models, interesting attempts to invert not just the image but a user-provided concept have been investigated in [33]. The concept is represented as a new pseudo-word in the model's vocabulary, which can be then used as part of a prompt (e.g. "a flower in the style of \(S_{*}\)", where \(S_{*}\) refers to an image). The mapping is achieved by optimizing the conditioning vector in order to minimize the reconstruction error (similarly to the technique described in Section 5.1). A similar approach is used in [34]. ## 3 Denoising Diffusion Models In this section, we provide a general overview of diffusion models from a mathematical perspective. Consider a distribution \(q(x_{0})\) generating the data. Generative models aim to find a parameter vector \(\theta\) such that the distribution \(p_{\theta}(x_{0})\), parameterized by a neural network, approximates \(q(x_{0})\) as accurately as possible. In Denoising Diffusion Probabilistic Models (DDPM) [1], the generative distribution \(p_{\theta}(x_{0})\) is assumed to have the form \[p_{\theta}(x_{0})=\int p_{\theta}(x_{0:T})dx_{1:T} \tag{1}\] for a given time range horizon \(T>0\), where \[p_{\theta}(x_{0:T})=p_{\theta}(x_{T})\prod_{t=1}^{T}p_{\theta}(x_{t-1}|x_{t}) \tag{2}\] with \(p_{\theta}(x_{T})=\mathcal{N}(x_{T}|0;\ I)\) and \(p_{\theta}(x_{t-1}|x_{t})=\mathcal{N}(x_{t-1}|\mu_{\theta}(x_{t},\alpha_{t}); \ \sigma_{t}^{2}I)\). Similarly, the diffusion model \(q(x_{0:T})\) is considered to be a Markov chain of the form \[q(x_{t}|x_{t-1})=\mathcal{N}\bigg{(}x_{t}\bigg{|}\sqrt{\frac{\alpha_{t}}{ \alpha_{t-1}}}x_{t-1};\ \Big{(}1-\frac{\alpha_{t}}{\alpha_{t-1}}\Big{)}\cdot I \bigg{)} \tag{3}\] with \(\{\alpha_{t}\}_{t\in[0,T]}\) being a decreasing sequence in the interval \([0,1]\). The parameters of the generative model \(p_{\theta}(x_{0})\) are then trained to fit \(q(x_{0})\) by minimizing the negative Evidence Lower BOund (ELBO) loss, defined as \[\mathcal{L}(\theta)=-\mathbb{E}_{q(x_{0:T})}[\log p_{\theta}(x_{0:T})-\log q(x _{1:T})]. \tag{4}\] The ELBO loss can be rewritten in a computable form by noticing that, as a consequence of Bayes' Theorem, \(q(x_{t-1}|x_{t},x_{0})=\mathcal{N}(x_{t-1}|\tilde{\mu}(x_{t},x_{0});\ \sigma_{q}^{2}\cdot I)\). Consequently, \[\mathcal{L}(\theta)=\sum_{t=1}^{T}\gamma_{t}\mathbb{E}_{q(x_{t}|x_{0})}\Big{[} \|\mu_{\theta}(x_{t},\alpha_{t})-\tilde{\mu}(x_{t},x_{0})\|_{2}^{2}\Big{]} \tag{5}\] which can be interpreted as the weighted mean squared error between the reconstructed image from \(p_{\theta}(x_{t}|x_{0})\) and the true image obtained by the reverse diffusion process \(q(x_{t-1}|x_{t},x_{0})\) for each time \(t\). 
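To connect this objective with the training procedure used in practice (Algorithm 1 in Section 4), it is useful to recall the standard reparameterization of [1]; the following is a compact restatement in the notation above, with the per-step weights \(\gamma_{t}\) dropped as in the usual "simplified" loss: \[x_{t}=\sqrt{\alpha_{t}}x_{0}+\sqrt{1-\alpha_{t}}\,\epsilon,\qquad\epsilon\sim\mathcal{N}(0;\ I),\] \[\mathcal{L}_{\text{simple}}(\theta)=\sum_{t=1}^{T}\mathbb{E}_{x_{0},\,\epsilon}\Big[\|\epsilon-\epsilon_{\theta}(\sqrt{\alpha_{t}}x_{0}+\sqrt{1-\alpha_{t}}\,\epsilon,\alpha_{t})\|_{2}^{2}\Big].\] Up to the weights \(\gamma_{t}\), minimizing this mean squared error on the predicted noise is equivalent to minimizing Equation (5), and it is exactly the quantity optimized at each gradient step of Algorithm 1.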
In [6], the authors considered a non-Markovian reverse diffusion process (also called inference distribution) \[q_{\sigma}(x_{1:T}|x_{0})=q_{\sigma}(x_{T}|x_{0})\prod_{t=2}^{T}q_{\sigma}(x_{t-1}|x_{t},x_{0}) \tag{6}\] where \(q_{\sigma}(x_{T}|x_{0})=\mathcal{N}(x_{T}|\sqrt{\alpha_{T}}x_{0},(1-\alpha_{T})\cdot I)\), and \[q_{\sigma}(x_{t-1}|x_{t},x_{0})=\mathcal{N}\Big(x_{t-1}\Big|\mu_{\sigma_{t}}(x_{0},\alpha_{t-1});\ \sigma_{t}^{2}\cdot I\Big) \tag{7}\] with \[\mu_{\sigma_{t}}(x_{0},\alpha_{t-1})=\sqrt{\alpha_{t-1}}x_{0}+\sqrt{1-\alpha_{t-1}-\sigma_{t}^{2}}\cdot\frac{x_{t}-\sqrt{\alpha_{t}}x_{0}}{\sqrt{1-\alpha_{t}}}. \tag{8}\] This construction implies that the forward process is no longer Markovian, since it depends both on the starting point \(x_{0}\) and on \(x_{t-1}\). Moreover, [6] proved that, with this choice of \(q_{\sigma}(x_{1:T}|x_{0})\), the marginal distribution \(q_{\sigma}(x_{t}|x_{0})=\mathcal{N}(x_{t}|\sqrt{\alpha_{t}}x_{0};\ (1-\alpha_{t})\cdot I)\) recovers the same marginals as in DDPM, which implies that \(x_{t}\) can be diffused from \(x_{0}\) and \(\alpha_{t}\) by generating a realization of normally distributed noise \(\epsilon_{t}\sim\mathcal{N}(\epsilon_{t}|0;\ I)\) and defining \[x_{t}=\sqrt{\alpha_{t}}x_{0}+\sqrt{1-\alpha_{t}}\epsilon_{t}. \tag{9}\] Note that when in Equation (7) \(\sigma_{t}=0\), the reverse diffusion \(q_{\sigma}(x_{t-1}|x_{t},x_{0})\) becomes deterministic. With such a choice of \(\sigma_{t}\), the resulting model is named Denoising Diffusion Implicit Models (DDIM) by the authors in [6]. Interestingly, in DDIM, the parameters of the generative model \(p_{\theta}(x_{t-1}|x_{t})\) can be simply optimized by training a neural network \(\epsilon_{\theta}^{(t)}(x_{t},\alpha_{t})\) to map a given \(x_{t}\) to an estimate of the noise \(\epsilon_{t}\) added to \(x_{0}\) to construct \(x_{t}\) as in (9). Consequently, \(p_{\theta}(x_{t-1}|x_{t})\) becomes a Dirac delta \(\delta_{f_{\theta}^{(t)}}\), where \[f_{\theta}^{(t)}(x_{t},\alpha_{t})=\frac{x_{t}-\sqrt{1-\alpha_{t}}\epsilon_{\theta}^{(t)}(x_{t},\alpha_{t})}{\sqrt{\alpha_{t}}}. \tag{10}\] Intuitively, the network in (10) is just a denoiser that takes as input the noisy image \(x_{t}\) and the variance of the noise \(\alpha_{t}\) and returns an estimate of the denoised solution \(x_{0}\). In DDIM, one can generate new data by first considering random Gaussian noise \(x_{T}\sim p_{\theta}(x_{T})\) with \(\alpha_{T}\approx 0\). Then, \(x_{T}\) is processed by \(f_{\theta}^{(T)}(x_{T},\alpha_{T})\) to generate an estimation of \(x_{0}\), which is then corrupted again by the reverse diffusion \(q(x_{T-1}|x_{T},f_{\theta}^{(T)}(x_{T},\alpha_{T}))\). This process is repeated until a new datum \(x_{0}\) is generated by \(f_{\theta}^{(1)}(x_{1},\alpha_{1})\). The sampling procedure of DDIM generates a trajectory \(\{x_{T},x_{T-1},\ldots,x_{0}\}\) in the image space. In [35; 36] the authors found that the (stochastic) mapping from \(x_{T}\) to \(x_{0}\) in DDPM follows a score-based stochastic differential equation (SDE), where the dynamics are governed by terms related to the gradient of the ground-truth probability distribution from which the true data is generated. The sampling procedure for DDIM can be obtained by discretizing the deterministic _probability flow_ [35] associated with this dynamics. Consequently, training a DDIM model leads to an approximation of the score function of the ground-truth distribution. 
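To make the deterministic sampling loop concrete, the following is a minimal NumPy-style sketch of the DDIM reverse process described above, with \(\sigma_{t}=0\). The function and variable names (`eps_model`, `alphas`) and the convention \(\alpha_{0}=1\) are our own illustrative assumptions; this is a sketch, not the code of the accompanying repository.

```python
import numpy as np

def ddim_sample(eps_model, alphas, shape):
    """Deterministic DDIM sampling (sigma_t = 0), following Eqs. (7)-(10).

    eps_model(x_t, alpha_t): returns an estimate of the noise contained in x_t.
    alphas: decreasing sequence alpha_1, ..., alpha_T in (0, 1], with alpha_T close to 0.
    """
    T = len(alphas)
    x_t = np.random.randn(*shape)                        # x_T: pure Gaussian noise
    for t in range(T, 0, -1):
        alpha_t = alphas[t - 1]
        alpha_prev = alphas[t - 2] if t > 1 else 1.0     # convention: alpha_0 = 1 (no noise)
        eps = eps_model(x_t, alpha_t)
        # Eq. (10): current estimate of the clean image x_0
        x0_hat = (x_t - np.sqrt(1.0 - alpha_t) * eps) / np.sqrt(alpha_t)
        # Eq. (8) with sigma_t = 0: deterministic step towards noise level t-1
        x_t = np.sqrt(alpha_prev) * x0_hat + np.sqrt(1.0 - alpha_prev) * eps
    return x_t                                           # the generated sample x_0
```

Since every step is deterministic, re-running the loop from the same \(x_{T}\) always produces the same output, which is the property exploited in Section 3.2 to compare different models on identical seeds.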
### The Diffusion Schedule An important aspect in implementing diffusion models is the choice of the diffusion noise \(\{\alpha_{t}\}_{t=1}^{T}\), defining the mean and the variance of \(q(x_{t}|x_{0})\). In [1], the authors showed that the diffusion process \(q(x_{t}|x_{0})\) converges to a normal distribution if and only if \(\alpha_{T}\approx 0\). Moreover, to improve the generation quality, \(\alpha_{t}\) has to be chosen such that it slowly decays to \(0\). The specific choice for the sequence \(\alpha_{t}\) defines the so-called _diffusion schedule_. In [1], the authors proposed to use linear or quadratic schedules. This choice was criticized in [31; 37] since it exhibits too steep a decrease during the first time steps, causing difficulties during generation for the neural network model. To remedy this situation, alternative scheduling functions with a gentler decrease have been proposed in the literature, such as the _cosine_ or _continuous cosine_ schedule. The behavior of all these functions is compared in Figure 6. The quantity of noise added by each schedule is also represented in Figure 7, where a single image is injected with increasing noise according to the given schedule. It is not hard to see that the cosine and the continuous cosine schedules exhibit a more uniform transition between the original image and the pure noise. ### The Gravitational Analogy Similarly to other generative models, developing an intuition of the actual behavior of diffusion models (and of the mapping from a latent encoding to its visible outcome) can be challenging. In this section, we propose a simple gravitational analogy that we found extremely useful to get an intuitive grasp of these models, and which suggested to us some interesting conjectures about the actual shape of the embedding clouds for each object. Simply stated, the idea is the following. You should think of the datapoints as bodies with a gravitational attraction. Regions of the space where the data manifold has high probability are equivalent to regions with high density. The denoising model essentially learns the gravitational map induced over the full space: any single point of the space gets mapped to the point where it would naturally "land" if subject to the "attraction" of the data manifold. In more explicit terms, _any_ point \(z\) of the space can be seen as a noisy version of _any_ point \(x\) in the dataset. The "attraction" exerted by \(x\) on \(z\) (i.e. the loss) is directly proportional to their distance, usually an absolute or quadratic error. However, the probability of training the network to reconstruct \(x\) from \(z\) follows a Gaussian distribution \(\mathcal{N}(z|x;\ \sigma_{z}\cdot I)\), with \(\sigma_{z}\) depending on the denoising step. Hence, the _weighted attraction_ exerted by \(x\) on \(z\) at each step is \[\mathcal{N}(z|x;\ \sigma_{z}\cdot I)\cdot\|x-z\| \tag{11}\] To get a grasp of the phenomenon, in Figure 8 we compare the gravitational law for a body \(x\) with the _weighted attraction_ reported in Equation (11), under the assumption that the variance \(\sigma\) has to be compared with the radius of the body (with constant density, for simplicity). According to the gravitational analogy, the embedding space \(emb(x)\) of each datapoint \(x\) should essentially coincide with the set of points in the space corresponding to trajectories ending in \(x\). We can study this hypothesis on synthetic datasets. 
In Figure 9 we show the gravitational map for the well-known "circles" (a) and "two moons" (c) datasets; examples of embeddings are given in panels (b) and (d). From the pictures, it is clear in which way the model "fills the space", that is, associating to each datapoint \(x\) all "trajectories" landing in \(x\). The trajectories are almost straight and oriented along directions orthogonal to the data manifold. We believe that this behavior can be formally understood by exploiting the dynamics of the trajectories introduced in [35], as mentioned in Section 3. We aim to deeply investigate those aspects in a future work. Figure 8: Gravitational analogy. The orange line is the usual gravitational law for a body with radius \(1\) and constant density. The blue line is a weighted loss \(\mathcal{N}(z|x;\ 1)\cdot\|x-z\|_{1}\). The two lines have been rescaled to have an equal integral. The most striking consequence of the "gravitational" interpretation is, however, the independence of the latent encoding from the neural network or its training: the gravitational map only depends on the data manifold and it is unique, so distinct networks or different trainings of the same network, if successful, should eventually end up with the same results. This seems miraculous: if we pick a random seed in an almost immense space, and pass it as input to two diffusion (deterministic) models for the same dataset, they should generate essentially _identical_ images. This can be easily experimentally verified. In Figure 10 we show images generated from two different models trained over CIFAR10 starting from the same set of seeds: they are practically identical. The fact that the same encoding works for different models seems to be peculiar to this kind of generative model. In [30], it was observed that we can essentially pass from the latent space of one generative model to that of another with a simple _linear map_: however, an identity or even a permutation of latent variables does not usually suffice1. Footnote 1: It remains to be checked if imposing a spatial structure to the latent space of GANs and VAEs is enough to induce uniqueness in that case too. We plan to investigate this issue in a forthcoming work. While the uniqueness of the latent space is, in our opinion, a major discovery, it is not the main focus of this article, and we plan to conduct a more exhaustive and principled investigation of this property in future works. ## 4 Denoising Architecture The pseudocodes explaining training and sampling for diffusion models are respectively given in Algorithms 1 and 2 below. Figure 9: Gravitational map and embeddings for the “circles” (a,b) and “two moons” (c,d) datasets. Datapoints are in blue. We consider a dense grid of seeds in the latent space, depicted in green. To visualize the maps (a) and (c) we draw an arrow pointing from each seed to the corresponding point generated by reverse diffusion (in red). To visualize embeddings (b) and (d) we consider a set of elements in the datasets, and for each element \(x\) we consider all points in the grid generating a sample \(\hat{x}\) sufficiently close to \(x\). 
```
repeat
    \(x_{0}\sim q(x_{0})\)
    \(t\sim\mathrm{Uniform}(1,\ldots,T)\)
    \(\epsilon\sim\mathcal{N}(0;\;I)\)
    Take a gradient descent step on \(\|\epsilon-\epsilon_{\theta}(\sqrt{\alpha_{t}}\,x_{0}+\sqrt{1-\alpha_{t}}\,\epsilon,\ \alpha_{t})\|^{2}\)
until converged
```
**Algorithm 1** Training As explained in Section 3, they are iterative algorithms; the only trainable component is the denoising network \(\epsilon_{\theta}(x_{t},\alpha_{t})\), which takes as input a noisy image \(x_{t}\) and a noise variance \(\alpha_{t}\), and tries to estimate the noise present in the image. This model is trained as a traditional denoising network, taking a sample \(x_{0}\) from the dataset, corrupting it with the expected amount of random noise, and trying to identify the noise in the noisy image. As a denoising network, it is quite standard to consider a conditional variant of the U-Net. This is a very popular network architecture originally proposed for semantic segmentation [38] and subsequently applied to a variety of image manipulation tasks. Figure 10: Uniqueness of the generative model. Different diffusion models generate essentially identical images when fed with the same seed. The two models have dissimilar architectures, with a higher number of parameters for the first row and lower for the second. The training sets are CIFAR10 (top), MNIST (middle), and Oxford Flowers 102 (bottom); seeds have been randomly generated. In general, the network is structured with a downsample sequence of layers followed by an upsample sequence, with skip connections added between the layers of the same size. To improve the sensitivity of the network to the noise variance, \(\alpha_{t}\) is taken as input, which is then embedded using an ad-hoc sinusoidal transformation by splitting the value into a set of frequencies, in a way similar to positional encodings in Transformers [39]. The embedded noise variance is then vectorized and concatenated to the noisy images along the channel axes before being passed to the U-Net. This can be done for each convolution block separately, or just at the starting layer; we adopted the latter solution due to its simplicity and the fact that it does not seem to entail any loss in performance. Having worked with a variety of datasets, we used slightly different implementations of the previously described model. The U-Net is usually parameterized by specifying the number of downsampling blocks, and the number of channels for each block; the upsampling structure is symmetric. The spatial dimension does not need to be specified, since it is inferred from the input. Therefore, the whole structure of a U-Net is essentially encoded in a single list such as [32; 64; 96; 128] jointly expressing the number of downsampling blocks (4, in this case), and the respective number of channels (usually increasing as we decrease the spatial dimension). For our experiments, we have mainly worked with two basic architectures, mostly adopting [32; 64; 96; 128] for simple datasets such as MNIST or Fashion MNIST, and using more complex structures such as [48; 96; 192; 384] for CIFAR10 or CelebA. We also used different U-Net variants to extensively test the independence of the latent encoding discussed in Section 3.2. 
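As an illustration of the noise-variance conditioning described earlier in this section, the following is a minimal sketch of a sinusoidal embedding of \(\alpha_{t}\); the embedding size and frequency range are our own illustrative choices rather than the values used in the accompanying repository.

```python
import numpy as np

def sinusoidal_embedding(alpha_t, embedding_dim=32, min_freq=1.0, max_freq=1000.0):
    """Embed the scalar noise variance alpha_t into a vector of sines and cosines,
    in a way similar to positional encodings in Transformers."""
    # Geometrically spaced frequencies spanning several orders of magnitude.
    freqs = np.exp(np.linspace(np.log(min_freq), np.log(max_freq), embedding_dim // 2))
    angles = 2.0 * np.pi * freqs * alpha_t
    return np.concatenate([np.sin(angles), np.cos(angles)])

# The resulting vector is broadcast over the spatial dimensions and concatenated
# to the noisy image along the channel axis before entering the U-Net.
print(sinusoidal_embedding(0.5).shape)  # (32,)
```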
## 5 Embedding We experimented with several different approaches for the embedding task. The most effective ones have been the direct synthesis through gradient descent, and the training of ad-hoc neural networks. Both techniques have interesting aspects worth discussing. The gradient descent technique is intrinsically non-deterministic, producing a varied set of "noisy" versions of a given image \(x\), all able to reconstruct \(x\) via reverse diffusion. The investigation of this set allows us to draw interesting conclusions on the shape of \(emb(x)\). Gradient descent is, however, pretty slow. A direct network can be trained to compute a single element inside \(emb(x)\). Interestingly enough, this single element seems to be very close to the _average_ of all noisy versions of \(x\) synthesized by the previous technique, suggesting evidence of its "canonical" nature. The two techniques will be detailed in the following subsections. ### Gradient Descent Synthesis In Section 3.2, we computed the shape of embeddings for a few synthetic datasets by defining a dense grid of points in the latent space and looking for their final mapping through the reverse denoising process. Unfortunately, the number of points composing the grid grows exponentially in the number of features, and the technique does not scale to more complex datasets. A viable alternative is the gradient descent approach, where we synthesize inputs starting from random noise, using the distance from a given target image as the objective function. Generation usually requires several thousand steps, but it can be done in parallel on a batch of inputs. This allows us to compute, within a reasonable time, a sufficiently large number of samples in \(emb(x)\) for any given \(x\) (Figure 11). Having a full cloud of data, we can use standard techniques like PCA to investigate its shape, as well as to study how the image changes when moving along the components of the cloud (see Section 5.1.1). A first interesting observation is that \(emb(x)\) seems to be a convex region of the latent space. In Figure 12 we show images obtained by reverse diffusion from 100 random _linear combinations_ of seeds belonging to the embedding of the image on the left: all of them result in very similar reconstructions of the starting image. Figure 11: Examples of seeds in the latent space. The image on the left is the original. On the right, we see 5 different seeds and their corresponding generations through the reverse diffusion process. Figure 12: Linear combination of seeds. Given the original image (1 and 3) we compute by gradient descent a large cloud of seeds in its embedding. Then, we compute 100 _internal points_, as random linear combinations of the given seeds with coefficients summing to 1. Images 2 and 4 contain the results of these linear combinations. All generated images are similar to each other and are very close to the original image. Therefore, all internal points seem to belong to the embedding. Due to the convexity of the space, its mean also belongs to it. In Figure 13 we see the reconstructions obtained by considering as seed the average of a progressive number of seeds. The resulting images stabilize soon, although the result is slightly more blurry compared to using a single seed. The seeds on the borders of \(emb(x)\) seem to provide slightly better reconstructions than internal points (which makes the quest for a "canonical", high-quality seed even more challenging). 
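The seed-synthesis loop described at the beginning of this subsection can be sketched as follows. This is a minimal illustration under our own assumptions: `reverse_diffusion` stands for a differentiable implementation of the deterministic sampler, and the optimizer, learning rate, and number of steps are placeholders rather than the settings actually used for our experiments.

```python
import torch

def synthesize_seeds(reverse_diffusion, target, n_seeds=64, steps=3000, lr=0.01):
    """Synthesize a cloud of seeds z whose reverse diffusion reconstructs `target`.

    reverse_diffusion: differentiable deterministic generator (DDIM sampler).
    target: tensor of shape (C, H, W), the image to be embedded.
    """
    z = torch.randn(n_seeds, *target.shape, requires_grad=True)
    optimizer = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        recon = reverse_diffusion(z)                      # decode the whole batch of seeds
        loss = ((recon - target.unsqueeze(0)) ** 2).mean()
        loss.backward()                                   # gradients w.r.t. the seeds only
        optimizer.step()
    return z.detach()
```

Each random initialization converges to a different point of \(emb(x)\), so a single run over a batch already yields a cloud of seeds like the ones analyzed below.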
#### PCA Decomposition Principal Component Analysis allows us to fit an ellipsoid over the cloud of datapoints, providing a major tool for investigating the actual shape of embeddings. According to the "gravitational" intuition exposed in Section 3.2, \(emb(x)\) should be elongated along directions orthogonal to the data manifold: moving along those directions should not significantly influence generation, which should instead be highly affected by movements along minor components. Moreover, since the data manifold is likely oriented along a relatively small number of directions (due to the low dimensionality of the manifold), we expect that most PCA components in each cloud will be orthogonal to the manifold, and have relatively high eigenvalues. Figure 13: Progressive averaging in CIFAR10 and CelebA. The first row shows seeds computed as the mean of a progressive number of seeds in \(emb(x)\); the second row shows their respective output through the reverse denoising process. The output is very similar. Additionally, observe that the original image becomes identifiable in the mean. For instance, in the case of the clouds of seeds for CIFAR10, eigenvalues along all 3072 components typically span between 0.0001 and 4. We observe significant modifications only moving along the minor components of the clouds: in fact, they provide the shortest way to leave the embedding space of a given point. However, as soon as we leave the embedding space of \(x\) we should enter the embedding space of some "adjacent" point \(x^{\prime}\). In other words, the minor components should define directions _inside_ the data manifold, and possibly have a "semantical" (likely entangled) interpretation. ### Embedding Networks The second approach consists in training a neural network to directly compute a sort of "canonical" embedding for each image of the data manifold. The network takes as input an image \(x\) and produces a seed \(z_{x}\in emb(x)\); the loss function used to train the network is simply the distance between \(x\) and the result \(\hat{x}\) of the denoising process starting from \(z_{x}\). We tested several different networks; metrics relative to the most significant architectures are reported in Table 1. \begin{table} \begin{tabular}{c|c|c|c|c|c|c} **Network** & **Params** & \multicolumn{5}{c}{**MSE**} \\ & & MNIST & Fashion MNIST & CIFAR10 & Oxford Flowers & CelebA \\ \hline \hline layers: 1 conv. \(5\times 5\) & 78 & .00704 & .0152 & .0303 & .0372 & .0189 \\ \hline layers: 3 conv. \(5\times 5\), channels: 16-16-out & 7,233 & .00271 & .00523 & .0090 & .0194 & .0101 \\ \hline layers: 3 conv. \(5\times 5\), channels: 64-64-out & 105,729 & .00206 & .00454 & .0061 & .0153 & .00829 \\ \hline layers: 2 conv. \(5\times 5\) + 3 conv. \(3\times 3\) & & & & & & \\ \end{tabular} \end{table} Table 1: Mean squared reconstruction error (MSE) of the tested Embedding Networks. A visual comparison of the behavior of the different networks is given in Figure 15, relative to CIFAR10. More examples on CelebA are given below. We started our investigation with a very simple network: a single convolution with a \(5\times 5\) kernel. The reason for this choice is that, according to the discussion in the introduction and the visualization of the mean element of the embedding clouds in Figure 13, we expected the latent encoding to be similar to a suitably rescaled version of the source image. The results on a simple dataset like MNIST confirmed this hypothesis, but on more complex ones like CIFAR10 this does not seem to be the case, as exemplified in Figure 15. We then progressively improved the models' architectures by increasing their depth and channel dimensions, with the latter typically being the most effective way to improve their performance. 
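The training objective shared by all of these Embedding Networks can be sketched as follows; this is a minimal illustration with hypothetical names (`embedder`, `reverse_diffusion`), where the pre-trained reverse-diffusion generator is kept frozen and only the embedder's parameters are updated.

```python
import torch

def train_embedder(embedder, reverse_diffusion, dataloader, epochs=10, lr=1e-4):
    """Train a network mapping an image x to a seed z_x in emb(x).

    The loss is the distance between x and the image obtained by running
    the (frozen) deterministic reverse diffusion starting from z_x.
    """
    optimizer = torch.optim.Adam(embedder.parameters(), lr=lr)
    for _ in range(epochs):
        for x in dataloader:
            optimizer.zero_grad()
            z = embedder(x)                      # candidate "canonical" seed
            x_hat = reverse_diffusion(z)         # decode with the frozen generator
            loss = torch.mean((x_hat - x) ** 2)  # MSE, as reported in Table 1
            loss.backward()
            optimizer.step()                     # updates the embedder only
    return embedder
```

The reconstruction error is backpropagated through the frozen generator into the embedder, mirroring the way an encoder is trained against a fixed decoder.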
In the end, the best results were obtained with a U-Net architecture that is practically identical to the denoising network. Many additional experiments have been performed, including autoencoders, residual networks, inception modules, and variants with different padding modalities or regularizations. However, they did not prove to be particularly effective and were thus dropped from our discussion. In Figure 16, we show some examples of embeddings and relative reconstructions in the case of the CelebA dataset. The quality of the reconstruction is definitely high, with just a slight blurriness. Figure 15: Visual comparison with different Embedding Networks. We consider a set of test images from CIFAR10 (first row) and compute the embedding with one of the Embedding Networks of Table 1. We then use the embeddings to generate the corresponding images (remaining rows). There are two possible justifications for the tiny inaccuracy of this result: it could either be a fault of the generator, which is unable to create the requested images (as is frequently the case with Generative Adversarial Networks [30]), or it could be a fault of the Embedding Network, which is unable to compute the correct seed. To better investigate the issue, we performed two experiments. First, we restricted the reconstruction to images produced by the generator: in this case, if the Embedding Network works well, it should be able to reconstruct almost perfect images. Second, we tried to improve the seeds computed by the Embedding Network through gradient descent, looking for better candidates. We report the result of the first experiment in Figure 17. While the reconstruction is qualitatively accurate, we can also confirm the effectiveness in a more analytical way. In Table 2 we compare the mean squared error of the reconstruction starting from original CelebA images versus generated data: the latter is noticeably smaller. Figure 16: Embedding examples for the CelebA dataset. The first row contains the original examples, the second the synthesized latent seed, and the third the reconstructed image. Reconstruction is very good, with just a slight blurriness. Figure 17: Embedding examples on generated images. In this case, we start from images created by the generator (first row) and re-embed them inside the latent space (second row) using the Embedding Network. In the third row, we show the reconstruction, which is almost perfect. This could be either explained by a deficiency of the generator, or just by the fact that generated images are “simpler”, and hence can be more easily embedded than real ones. The fact that embedding works better for generated images is, however, not conclusive: it could either be explained by a deficiency of the generator, unable to generate all images in the CelebA dataset, or just by the fact that generated images are "simpler" than real ones (observe the well-known patinated look, which is typical of most generative models) and hence more easily embeddable. Even the results of the second experiment are not easily deciphered. From a visual point of view, refining the embedding through gradient descent does not produce remarkable results, as exemplified in Figure 18. However, numerically, we see an improvement from an MSE of 0.00147 to an MSE of 0.00058, which seems to suggest some margin of improvement for the embedding network. 
\begin{table} \begin{tabular}{c|c} **Source Images** & **MSE** \\ \hline \hline Dataset & 0.00147 \\ \hline Generated & 0.00074 \\ \end{tabular} \end{table} Table 2: Reconstruction error. In the first case, images are taken from the CelebA dataset; in the second case, they have been generated through the reverse diffusion process. The mean squared error (MSE) was computed over 1000 examples. Both experiments achieve a small reconstruction error, although the second one is even smaller. Figure 18: Gradient descent fine-tuning. The seeds obtained through the embedding network (second row) are refined through gradient descent (fourth row). The respective resulting reconstructions are depicted in rows 3 and 5. The improvement is almost imperceptible. In conclusion, both the generator and the embedder can likely still be improved. However, a really interesting research direction seems to be the possibility of modifying the latent representation to improve the realism of the resulting image, even if possibly not in the direction of the original. Therefore, a basic embedder, even if not fully accurate, could still provide the starting point for very interesting manipulations. ## 6 Conclusions In this article we addressed the problem of embedding data into the latent space of Deterministic Diffusion models, providing functionality similar to the encoder in a Variational Autoencoder, or the so-called _recoder_ for Generative Adversarial Networks. The main source of complexity when inverting a diffusion model is the non-injective nature of the generator: for each sample \(x\), there exists a cloud of elements \(z\) able to generate \(x\). We call this set the embedding of \(x\), denoted as \(emb(x)\). We performed a deep investigation of the typical shape of \(emb(x)\), which suggests that embeddings are usually orthogonal to the data manifold. These studies point to a sort of gravitational interpretation of the reverse diffusion process, according to which the space is progressively collapsing over the data manifold. From this perspective, \(emb(x)\) is just the set of all trajectories in the space ending in \(x\). We tested our interpretation on both low- and high-dimensional datasets, highlighting a quite amazing result: the latent space of a DDIM generator does not significantly depend on the specific generative model, but just on the data manifold. In other words, passing the same seed as input to different DDIMs will result in almost identical outputs. In order to compute embeddings, we considered both a gradient descent approach and the definition and training of specific Embedding Networks. We showed that, among all the architectures we tested, a U-Net obtained the best results, achieving a high-quality reconstruction from both a quantitative and qualitative point of view. Embedding networks have a lot of interesting applications, largely exemplified in the introduction. More generally, the simplicity and ease of use of Embedding Networks open a wide range of fascinating perspectives about the exploration of semantic trajectories in the latent space, the disentanglement of the different aspects of variation, and the possibility of data editing. We thus hope that our results, by expanding the current understanding of generative models, can guide future research efforts. ### Conflict of Interest On behalf of all authors, the corresponding author states that there is no conflict of interest.
Denoising Diffusion models are gaining popularity in the field of generative modeling for several reasons, including simple and stable training, excellent generative quality, and a solid probabilistic foundation. In this paper, we address the problem of embedding an image into the latent space of Denoising Diffusion models, that is, appropriately choosing a "noisy" image that leads back to the original image. We focus in particular on Denoising Diffusion Implicit Models, given the deterministic nature of their reverse diffusion process. As a result of our investigation, we gain a deeper understanding of the structure of the latent space of Denoising Diffusion models, which opens new possibilities for its exploration, the definition of semantic trajectories, and the manipulation/conditioning of encodings for editing purposes. A property highlighted by our research, and characteristic of this class of generative models, is that the latent representation does not depend on the network implementing the reverse diffusion process. In other words, given the same dataset
2304.04752
A Practitioner's Guide to Bayesian Inference in Pharmacometrics using Pumas
This paper provides a comprehensive tutorial for Bayesian practitioners in pharmacometrics using Pumas workflows. We start by giving a brief motivation of Bayesian inference for pharmacometrics highlighting limitations in existing software that Pumas addresses. We then follow by a description of all the steps of a standard Bayesian workflow for pharmacometrics using code snippets and examples. This includes: model definition, prior selection, sampling from the posterior, prior and posterior simulations and predictions, counter-factual simulations and predictions, convergence diagnostics, visual predictive checks, and finally model comparison with cross-validation. Finally, the background and intuition behind many advanced concepts in Bayesian statistics are explained in simple language. This includes many important ideas and precautions that users need to keep in mind when performing Bayesian analysis. Many of the algorithms, codes, and ideas presented in this paper are highly applicable to clinical research and statistical learning at large but we chose to focus our discussions on pharmacometrics in this paper to have a narrower scope in mind and given the nature of Pumas as a software primarily for pharmacometricians.
Mohamed Tarek, Jose Storopoli, Casey Davis, Chris Elrod, Julius Krumbiegel, Chris Rackauckas, Vijay Ivaturi
2023-03-31T04:00:53
http://arxiv.org/abs/2304.04752v1
# A Practitioner's Guide to Bayesian Inference in Pharmacometrics using Pumas ###### Abstract This paper provides a comprehensive tutorial for Bayesian practitioners in pharmacometrics using Pumas workflows. We start by giving a brief motivation of Bayesian inference for pharmacometrics highlighting limitations in existing software that Pumas addresses. We then follow by a description of all the steps of a standard Bayesian workflow for pharmacometrics using code snippets and examples. This includes: model definition, prior selection, sampling from the posterior, prior and posterior simulations and predictions, counter-factual simulations and predictions, convergence diagnostics, visual predictive checks, and finally model comparison with cross-validation. Finally, the background and intuition behind many advanced concepts in Bayesian statistics are explained in simple language. This includes many important ideas and precautions that users need to keep in mind when performing Bayesian analysis. Many of the algorithms, codes, and ideas presented in this paper are highly applicable to clinical research and statistical learning at large but we chose to focus our discussions on pharmacometrics in this paper to have a narrower scope in mind and given the nature of Pumas as a software primarily for pharmacometricians. **Keywords:** Bayesian inference, statistical software, pharmacometrics, workflow ###### Contents * 1 Motivation * 2 Introduction * 2.1 Pharmacometrics Workflow * 2.2 Data * 2.3 Models * 2.4 Algorithms * 2.5 Software * 2.6 Limitations of Other Software * 2.7 Related Works and Contributions * 2.8 Paper Layout and Reading Plan * 3 Bayesian Workflow in Pumas * 3.1 Defining a Model * 3.2 Prior Simulations and Predictions * 3.3 Fitting a Model * 3.4 Numerical Errors and Debugging * 3.5 Updating the Posterior with New Data * 3.6 Basic Summary Statistics * 3.7 How Many Samples are Needed? * 3.8 Diagnostic Plots * 3.9 More Diagnostics * 3.10 What if the Chains Are Not Converging? * 3.11 Advanced Posterior Queries * 3.12 Posterior Plots * 3.13 Posterior Simulations and Predictions * 3.14 Visual Predictive Checks and Simulation Plots * 3.15 Simulation Queries * 3.16 Non-Compartmental Analysis (NCA) Parameters * 3.17 Crossvalidation and Expected Log Predictive Density * 3.18 Information Criteria * 4 Background and Intuition * 4.1 Notation * 4.2 Bayesian Statistics * 4.3 Prior Selection * 4.4 Markov Chain Monte Carlo (MCMC) Intuition * 4.5 No-U-Turn Sampler (NUTS) Algorithm * 4.6 Basic Summary Statistics * 4.7 Convergence * 4.8 Crossvalidation and Model Selection * 5 Example Models * 6 Conclusion * 7 Acknowledgements ## 1 Motivation Fully Bayesian approaches have become important tools in a pharmacometrician's toolbox (Lee and Gobburu, 2011; McDade et al, 2022) because they enable the rigorous and flexible quantification of uncertainty in all of the model's parameters as well as the use of knowledge from previous similar studies, which have applications in rare disease drug approvals, pediatric studies, precision dosing and adaptive trial design. The Bayesian workflow implemented in Pumas (Rackauckas et al, 2020) was designed to be flexible and easily accessible using an intuitive, clean syntax. We hope that this paper can be a good learning resource for any pharmacometrician interested in learning about and using fully Bayesian methods. 
## 2 Introduction In this section, we discuss the need for a fully Bayesian approach in pharmacometrics and describe where it fits in the spectrum of methods typically used in the field. The standard models, data, algorithms, workflows, and software used in pharmacometrics will be briefly presented to set the context for future discussions. Finally, the main contributions of Pumas and the layout of this paper will be summarized. ### Pharmacometrics Workflow Pharmacometrics is a field of study that includes various quantitative analysis techniques used to understand interactions between drugs and patients. A typical pharmacometrics workflow involves: 1. Prepare analysis-ready datasets from clinical trial source datasets, 2. Exploratory data analysis and scientific reasoning of observed data via summaries and plots, 3. Developing parametric models for disease-drug-patient interaction, 4. Fitting the models' parameters to the data and estimating uncertainty in the parameters, 5. Diagnosing the models' fits and evaluating the quality of their fits, 6. Comparing models and selecting the best model, and 7. Using the best model to predict/simulate alternative scenarios for existing or new patients or to answer key drug development questions. ### Data The data used in pharmacometrics typically includes: 1. Patient covariates, e.g. age or sex, sometimes including time-varying covariates, 2. Drug dosing regimen and administration schedule for each patient, and 3. Observed response for each patient, e.g. the measured drug concentration in the blood and/or some clinical endpoints, typically observed at multiple time points. ### Models The kinds of models used in pharmacometrics are typically: 1. Structural models to capture the observed response, e.g, dynamic-based models where the pharmacokinetics are modeled using ordinary differential equations (ODEs) 2. Covariate models predicting the observed response conditional on the covariates, and 3. Hierarchical models with population-level parameters and patient-specific parameters,1 Footnote 1: In this paper, we use the terminology “population-level” parameters and “patient-specific” (or subject-specific) parameters instead of “fixed effects” and “random effects” which are more common in pharmacometrics. This is because the definition of fixed and random effects in the statistical literature is ambiguous (see page 21 in Gelman (2005)), and in Bayesian modeling, every parameter is modeled using a random variable, which further increases the ambiguity. These are not mutually exclusive. For example, a covariate-based model can also be a hierarchical model with a dynamic-based ODE component. The following showcases a classic model of Theophylline dynamics via a 1-compartment pharmacokinetic (PK) oral absorption model. The model describes the dynamics of drug absorption into the bloodstream and its clearance. Initially, the drug is assumed to be administered as a bolus into a single depot compartment, e.g. when it's first ingested into the gut. The drug then gradually enters the bloodstream with an absorption rate \(\mathrm{Ka}\times\mathrm{Depot}\) where Depot is the amount of drug remaining in the gut. The central compartment represents the drug amount absorbed in the bloodstream over time. The drug is also assumed to be cleared from the central compartment by a rate \(\frac{\text{CL}}{V}\times\text{Central}\), where V is the volume of distribution of this compartment. 
The model has population parameters (\(\boldsymbol{\theta},\boldsymbol{\Omega}\), \(\sigma\)) and patient-specific parameters \(\boldsymbol{\eta}_{i}\). \(\boldsymbol{\theta}\) is a vector of 4 numbers, \(\boldsymbol{\Omega}\) is a \(3\times 3\) positive definite covariance matrix and \(\sigma\) is a standard deviation parameter for the error model. In this model, each patient \(i\) has weight and sex covariates: \[Z_{i}=\begin{bmatrix}\text{wt}_{i},\\ \text{sex}_{i},\end{bmatrix}, \tag{1}\] where \(\text{wt}_{i}\) is a positive real number and \(\text{sex}_{i}\) is an indicator variable which is 0 for males and 1 for females, has individual coefficients: \[\begin{bmatrix}\text{Ka}\\ \text{CL}\\ V\end{bmatrix}=\begin{bmatrix}\theta_{1}e^{\eta_{i,1}}\\ \theta_{2}(\frac{\text{wt}_{i}}{70})^{0.75}\theta_{4}^{\text{sex}_{i}}e^{\eta _{i,2}}\\ \theta_{3}e^{\eta_{i,3}}\end{bmatrix}, \tag{2}\] has internal dynamics: \[\frac{d[\text{Depot}]}{dt} =-\text{Ka}[\text{Depot}]\] \[\frac{d[\text{Central}]}{dt} =\text{Ka}[\text{Depot}]-\frac{\text{CL}}{V}[\text{Central}],\] where Depot and Central are the depot and central compartments, and has normally distributed residual error in the observed drug concentration conc, also known as the error model: \[\text{conc}\sim\text{Normal}\left(\frac{\text{Central}}{V},\sigma\right).\] ### Algorithms In this section, we briefly present an overview of the various algorithms commonly used to fit models to data in pharmacometrics. #### 2.4.1 Marginal Maximum Likelihood Estimation (MLE) When fitting parametric models to data in pharmacometrics, marginal MLE algorithms are the most popular. The patient-specific parameters are marginalized and the marginal likelihood is maximized. There are two families of algorithms to do MLE with the marginal likelihood: 1. Approximate integration of the conditional likelihood (Wang, 2007) which includes: 1. Laplace method, and 2. First order conditional estimation (FOCE). 2. Stochastic approximation expectation maximization (SAEM) (Delyon et al, 1999; Kuhn and Lavielle, 2005). #### 2.4.2 Marginal Maximum-a-Posteriori (MAP) Estimation Marginal MAP is an alternative to marginal MLE that also gives a point estimate for the population parameters but instead of maximizing the marginal likelihood, it maximizes the product of the prior probability of the population parameters and the marginal likelihood. #### 2.4.3 Fully Bayesian Analysis Marginal likelihood maximization can give us, as a by-product, the conditional posterior of the patient-specific parameters \(\eta_{i}\) given the optimal population parameters, or an approximation of it. However, a fully Bayesian approach can give us samples from the joint posterior of all the population parameters and patient-specific parameters simultaneously. The sampling from the joint posterior is typically done using an MCMC algorithm such as the No-U-Turn sampler (NUTS) (Hoffman and Gelman, 2014; Betancourt, 2017) algorithm which is a variant of the Hamiltonian Monte Carlo (HMC) (Neal, 2011) MCMC algorithm. We will cover Bayesian notation and algorithms in more detail in later sections of the paper. Besides the ability to sample from the joint posterior and easily simulate from the posterior uncertainty, a fully Bayesian approach allows modellers to: 1. Incorporate domain knowledge and insights from previous studies using prior distributions. 2. 
Quantify the so-called epistemic uncertainty, which is the uncertainty in the model parameters' values in cases where the model is non-identifiable2 and/or there are not many data points available. Footnote 2: if a parameter is unidentifiable, that means their values cannot be uniquely determined from the available data. The above are advantages of Bayesian analysis which the non-Bayesian workflow typically doesn't have a satisfactory answer for. Bayesian inference as a paradigm uses the established theory of probability to rigorously quantify the uncertainty in the parameter values with fewer assumptions about the model and data. Bayesian workflows empower the analyst to use, when available, domain knowledge to quantify the epistemic uncertainty of model parameters, thus providing immense flexibility in analysis pipelines. Since Bayesian analysis is a conceptual replacement for bootstrapping, the performance of Bayesian inference should be compared to that of bootstrapping rather than that of a single model fit using Laplace/FOCE/SAEM. This is important to set the users' expectations because Bayesian inference will typically be one or more orders of magnitude slower than running a single model fit with Laplace/FOCE/SAEM. ### Software A number of software implement all the classic MLE-based analysis workflows, e.g. NONMEM (Beal et al, 2009), Monolix, and Pumas (Rackauckas et al, 2020). For fully Bayesian analysis, a few options exist including: Pumas, Stan (Carpenter et al, 2015), Torsten (Margossian et al, 2022), BUGS (Goudie et al, 2020), Turing.jl (Ge et al, 2018), PyMC (Salvatier et al, 2015), Pyro (Bingham et al, 2018) and NONMEM's relatively new Bayesian module. In Bayesian statistics, Stan has grown to be the de-facto standard software for Bayesian analysis when only continuous parameters exist, which is typically the case in pharmacometric models. Stan has been tested and used by many statisticians working in different fields and with different kinds of models over many years. Of all the above software for Bayesian analysis, Stan has the largest community and most mature ecosystem of R packages for a full Bayesian analysis workflow. ### Limitations of Other Software The main limitation of generic probabilistic programming languages (PPLs), such as Stan or Turing.jl, in pharmacometrics (PMx) is that they are not tailored to PMx users and the kinds of models and data found in PMx. For instance, as of the time of this writing, parallelism over subjects is difficult to write in Stan3. Footnote 3: In order to parallelize over subjects, you need to use the _reduce_sun_ function in the likelihood which is not a trivial task. For examples on using the _reduce_sun_ function, you can refer to stanpmx.github.io. Torsten tries to bridge the gap between PMx users and the Stan software, e.g. by making it easier to define dosing regimens and simplifying parallelism over subjects using its group solver. For more on the Torsten group solver, you can refer to metrumresearchgroup.github.io/Torsten/function/ode-pop/. However, the APIs of both Stan and Torsten do not provide the best user experience for PMx users because they are based on C++, an arguably difficult-to-use, non-interactive programming language. Additionally, as of the time of this writing, it is unnecessarily difficult to define dynamics initial conditions in Torsten that depend on the model's parameters, while it is easier to do that in the parent PPL Stan. 
More broadly, once you assume a particular model structure, there are numerous computational optimizations, e.g. automatic parallelism and automatic ODE linearity detection, that can be employed which we do in Pumas, but which are difficult to do in other more generic software. The second limitation of other software is that they do not support other non-Bayesian workflows that are common in PMx which means that users with a PMx background need to translate their models from other software to the PPL to compare the Bayesian and non-Bayesian results. This is often a time consuming and error-prone process. ### Related Works and Contributions This paper is not the first attempt to describe a Bayesian workflow in the context of pharmacometrics. Works like Wakefield et al (1999) and Lunn et al (2002) were early attempts using the BUGS software (Goudie et al, 2020). More recently, the Torsten paper (Margossian et al, 2022) and Bayesian workflow paper (Gelman et al, 2020) are also excellent resources with a focus on using Stan (Carpenter et al, 2015). Gelman et al (2020) is particularly focused on the workflow aspect of developing Bayesian models and performing analysis and diagnostics, and while it is not customized for pharmacometrics, the Torsten paper (Margossian et al, 2022) attempts to fill this gap. There is also Elmokadem et al (2023) which provides a guide to using Stan/Torsten or Turing.jl (Ge et al, 2018) for Physiologically-based pharmacokinetic (PBPK) models. Given the above excellent resources, we identify the two main contributions of this paper as: 1. [leftmargin=*] 2. Offering a complete pharmacometrics-oriented Bayesian workflow including scripts and examples of common models using the Pumas software. 3. Offering an intuitive explanation of various Bayesian statistical and computational concepts that enable the fine tuning of algorithm's options and making sense of software outputs without being too technical. We believe the Pumas Bayesian workflow implementation addresses many of the issues with other software by: 1. [leftmargin=*] 2. Providing a complete, user-friendly Bayesian analysis workflow that includes: 1. [label=()] 2. model and dosing regimen definition, 3. MCMC sampling, 4. diagnostics, 5. simulation, 6. counter-factual predictions, and 7. customizable cross-validation, using a few lines of code; 3. Using the same user-friendly, compact model syntax for hierarchical, dynamics-based models used in both Bayesian and non-Bayesian analyses so there is no need to translate models; 4. Using Julia as the core programming language which is a fast, interactive, and easy-to-use language, providing an excellent user experience; 5. Automating the definition and computational optimization of PMx models, e.g. using automatic parallelism over subjects and automatic ODE linearity and stiffness detection, delivering high performance with a neat syntax; and 6. Using the open-source, modular implementation, AdvancedHMC.jl (Xu et al, 2020), of the same MCMC algorithm that Stan uses, which is also used in Turing.jl (Ge et al, 2018)). ### Paper Layout and Reading Plan The rest of this paper is organized as follows. In Section 3, we describe the Bayesian workflow in Pumas. This should help users who are already familiar with the Bayesian workflow and theory to get started quickly using Pumas. 
In Section 4, we then a take a step back and give a reasonably comprehensive, intuition-oriented introduction to all of the main concepts and ideas invoked in the Bayesian workflow including the intuition behind the algorithm used and general points of caution when selecting priors. This section can help users make sense of highly technical concepts using mostly English and some light mathematics which can be useful to make informed decisions when using Pumas. Finally, Section 5 contains a number of common example models and Section 6 includes some additional reading material and resources for current and future Pumas users. This paper is meant to serve both as a tutorial paper for users of Pumas and as an accessible resource for pharmacometricians learning Bayesian inference. If you are an experienced Bayesian and would like to learn how to use Pumas, then you can read Section 3 only and skip Section 4. If you are still learning Bayesian theory, we recommend reading at least some parts of Section 4 first. You can then use Table 1 to switch to reading the relevant parts in Section 3 if you prefer interweaving theory and code. For more focused tutorials and scripts, you can also check the Pumas tutorials website (tutorials.pumas.ai). ## 3 Bayesian Workflow in Pumas A general Bayesian analysis workflow was presented in Gelman et al (2020) which summarizes many of the current best practices in Bayesian statistics. There are many similarities between the general Bayesian workflow and the standard pharmacometrics workflow based on marginal MLE. There are a number of differences between them worth highlighting though: 1. [leftmargin=*] 2. The Bayesian workflow focuses on what a modeller can do when MCMC sampling fails for a model. This could be failure because the sampler is taking too long to complete its sampling, the sampler is often triggering numerical errors,or the samples are failing to pass the diagnostic tests after sampling. Things like simplifying the model or using less or more data can help pinpoint the issue and improve sampling. Much of this is also applicable if MLE/MAP optimization fails but optimization failing is less common than MCMC failing with one of its many failure modes. 2. The Bayesian workflow includes prior selection diagnostics, e.g. prior predictive checks. Marginal MLE optimization does not use priors on population parameters, and there are well-established standard priors often used for patient-specific parameters so this is also less of an issue outside the Bayesian context. 3. The general Bayesian workflow mentions using the model and MCMC samples to make predictions, but it naturally does not specify what kinds of predictions which is very much context-specific. In pharmacometrics, we are typically interested in identifying the effect of using different dose amounts for new or existing patients or in predicting/simulating future responses given past data. In this section, we will borrow components from the general Bayesian workflow, customize it for pharmacometrics, and present the Pumas syntax that can help a pharmacometrician follow the current best practices when performing Bayesian analysis. The syntax presented here is using Pumas 2.4 (to be released in June 2023), but Pumas 2.3 has most of the workflow implemented already. For updated syntax, options, and features, please refer to the Pumas documentation (docs.pumas.ai). ### Defining a Model #### Overview In Pumas you can define models using the @model macro, which is composed of model blocks. 
\begin{table} \begin{tabular}{|c|c|c|} \hline **Theme** & **Section 3** & **Section 4** \\ \hline Model definition and prior choice & 3.1 & 4.1, 4.2, 4.3 \\ \hline Prior and posterior predictions and simulations & 3.2, 3.11-3.16 & 4.4 \\ \hline Inference algorithm and options & 3.3-3.6 & 4.5-4.6 \\ \hline Convergence and diagnostics & 3.7-3.10 & 4.7 \\ \hline Cross-validation and model selection & 3.17-3.18 & 4.8 \\ \hline \end{tabular} \end{table} Table 1: This table shows the relevant groups of subsections from Section 3 and Section 4.

We will cover eight model blocks in this tutorial:

1. @param: where we define the population parameters of our model, along with their priors.
2. @random: where we define the subject-specific parameters of our model, along with their priors. This is an optional block which can be dropped for single-subject models.
3. @covariates: where we declare the subject's covariates. This is an optional block which can be dropped for covariate-free models.
4. @pre: where we do all sorts of pre-computations and other statistical transformations, e.g. calculating the individual PK parameters.
5. @dcp: where we can define dose control parameters (DCPs) in the model. This is an optional block which can be dropped if no DCPs exist in the model.
6. @dynamics: where we define all of our dynamics, e.g. the system of ODEs that governs the relationship between the PK/PD compartments. This is an optional block which can be dropped for models without ODEs.
7. @derived: where we define our observed response's distribution used to calculate the likelihood.
8. @observed: where we define additional variables to be computed during simulation but not fitting, e.g. a non-compartmental analysis (NCA) based on the concentration curve can be performed here. This is an optional block.

With these blocks, you can code almost all PK/PKPD/PBPK models in Pumas. We'll cover the functionality of all of these model blocks.

First, the @param block is where we include all of the population parameters of our model. We begin by specifying a begin... end block where we insert one parameter per line. For each parameter, we give a prior with the \(\sim\) operator followed by a distribution. Listing 1 shows an example of an @param block with three parameters and their priors. tvcl has a LogNormal prior with log-scale mean log(2.5) and log-scale standard deviation 1, tvvc has a positive-constrained Normal prior with mean 70 and standard deviation 10, and \(\sigma\) has an Exponential prior with scale 3. Footnote 4: To write LaTeX symbols like \(\sigma\) and \(\theta\) in Pumas, you can write the LaTeX form, e.g. \sigma, followed by the Tab key.

```
@param begin
    tvcl ~ LogNormal(log(2.5), 1)
    tvvc ~ Constrained(Normal(70, 10); lower = 0)
    σ ~ Exponential(3)
end
```
Listing 1: @param block example

Next, the @random block holds all of the subject-specific parameters and their priors. Similar to the @param block, we also begin by specifying a begin... end block where we insert one parameter per line, and each parameter is assigned a prior with the \(\sim\) operator followed by a distribution. In Listing 2, we have an example of an @random block with a single parameter \(\eta\) which has an MvNormal (multivariate normal) prior with mean 0 and identity covariance matrix.

```
@random begin
    η ~ MvNormal([1 0; 0 1])
end
```
Listing 2: @random block example

The @covariates block is used to specify the subject's covariates.
This is only a declaration block that follows the same approach by declaring one covariate per line inside the begin... end statements. You can find an example in Listing 3 where we are declaring two subject covariates: WT for the subject's weight, and \(\mathtt{SEX}\) for the subject's sex. ``` 1covariatesbegin 2WT 3SEX 4end ``` Listing 3: @covariates block example We continue with the @pre block, where any pre-computations or statistical transformations necessary for our model are done. In this block, we can use all of the parameters and covariates declared in the previous blocks. The approach is similar to the other blocks so far: one pre-computation or transformation per line inside the begin... end statements. Listing 4 provides an example of the @pre block, in which we compute the individual PK parameters for each subject. ``` 1@prebegin 2CL=tvel*exp(\(\eta\)[1])*(WT/70)^0.75 ``` Listing 4: @pre block example The fifth block, @dynamics, is where we specify the ODE system that governs the dynamics of our model. In this block, we declare one ODE per line inside begin... end statements. On the left-hand side (LHS) of the equation is the derivative of the compartment, i.e. the rate of change. We are free to give each compartment any name. On the LHS the compartment name is followed with a'(prime) operator to denote the rate of change in that compartment. The prime operator is an intuitive way to declare the derivative of the compartment. On the right-hand side (RHS) of the equation is some combination of the compartments and the individual parameters specified in the @pre block. In the example in Listing 5, we specify the dynamics of the Theophylline model presented in section 2 with parameters \(\mathtt{Ka}\), \(\mathtt{CL}\) and \(\mathtt{Vc}\).. ``` 1@dynamicsbegin 2Depot'=-Ka*Depot 3Central'=Ka*Depot-CL/Vc*Central 4end ``` Listing 5: @dynamics block example Pumas supports automatic ODE linearity and stiffness detection. So even linear ODEs can be written in the above readable syntax with no performance loss. Additionally if ODE stiffness is detected, Pumas will switch to a stiff ODE solver as appropriate. The @derived block is where we define our likelihood term. We can use two types of assignments in this block: the deterministic assignment = and the probabilistic assignment \(\sim\). For the deterministic assignments, the RHS is a deterministic quantity, whereas for the probabilistic assignment the RHS is a probabilistic quantity represented as a distribution. In Listing 6, we have two variables being defined5. The first is cp defined using the deterministic assignment, and the second is the conc defined using the probabilistic assignment while also using cp as one of the parameters of the log-normal distribution. In this example, conc is observed and is called the dependent variable. We define the distribution of conc to be a log-normal distribution with log-scale mean log cp and log-scale standard deviation \(\sigma\) (from our @param block). ``` 1@derivedbegin [email protected]/Vc 3conc\(\sim\)@.LogNormal(log(cp),\(\sigma\)) 4end ``` Listing 6: @derived block example The @observed block can be used to compute additional quantities such as the NCA parameters from the concentration curve. 
``` 1@observedbegin 2nca:=@ncacp 3auc=NCA.auc(nca) 4cmax=NCA.cmax(nca) 5end ``` Listing 7: @observed block example In addition to the above blocks, there are also more blocks available for: * @init for initializing the dynamics manually, or * @vars for defining short-hand notation variables for use in the @dynamics and @derived blocks. For more on the various components of a Pumas model, please refer to the Pumas documentation (docs.pumas.ai). #### Example: PK Model For illustration purposes, consider the following 1-compartment model with first-order absorption: \[\text{Depot}^{\prime} =-\text{Ka}\cdot\text{Depot}\] \[\text{Central}^{\prime} =\text{Ka}\cdot\text{Depot}-\frac{\text{CL}}{V_{C}}\cdot\text{ Central}\] where CL is the elimination clearance from the Central compartment; \(V_{C}\) is the volume of the Central compartment; and Ka is absorption rate constant. If we had one subject only, this model can be coded in Pumas using all the blocks we've seen except for the @random which is not necessary for single-subject models. Listing 8 shows the code for this model. This is a complete Bayesian Pumas model. We are specifying the parameters along with their priors in the @param block. We only have a single subject, so there is no need for the inclusion of a @random block, which in turn makes the PK individual parameters defined in the @pre block being the same as the population parameters. In the @dynamics block, we are declaring two ODEs that govern our model dynamics. The dynamics have 2 compartments named: Depot and Central. Finally, in the @derived block, we calculate cp as a function of the Central compartment divided by the PK parameter Vc with a deterministic assignment, and we define our observed response conc as following a log-normal distribution with log-scale mean log cp and log-scale standard deviation \(\sigma\). The model in Listing 8 is a single-subject model, but most of the time we have multiple subjects in the data and need to define a population model. This can be accomplished with the addition of a @random block to define the subject-specific parameters \(\eta\). We can then use the population and subject-specific parameters together to define the individual PK parameters in the @pre block. We chose to assign a Gaussian prior distribution to \(\eta\)s with a covariance matrix parameterized using correlations and standard deviations as explained in 4.3.3. The model in Listing 9 is an example of such parameterization. It builds upon the previous single-subject PK model by adding 2 more population parameters: a correlation matrix \(C\) and a vector of standard deviations \(\omega\) in the @param block. The \(C\) correlation matrix has a Cholesky-parameterized LKJ prior (the recommended prior) and the \(\omega\) vector of standard deviations has a positive-constrained multivariate-normal distribution with a diagonal covariance matrix with pre-truncation mean equal to 0 and variances equal to 0.4\({}^{2}\). We build the \(\eta\)s in the @random block by using a multivariate Gaussian prior with a covariance matrix built from the correlation and standard deviations using the Pumas' function cor2cov. Finally, in the @pre block we defined the individual PK parameters as a transformation of the population and the subject-specific parameters combined. #### Selecting Prior Distributions Choosing a new prior for a parameter can be a daunting task. 
In general, there is no one prior that fits all cases and it might be a good practice to follow a previous similar study's priors where a good reference can be found. However, if you are faced with the task of choosing good prior distributions for an all-new model, it will generally be a multi-step process consisting of: 1. **Deciding the support of the prior**. The support of the prior distribution must match the domain of the parameter. For example, different priors can be used for positive parameters than those for parameters between 0 and 1. Table 2 can help narrow down the list of options available based on the domain of the parameter. Figure 8: PK 1-compartment single-subject model example 2. **Deciding the center of the prior**, e.g. mean, median or mode. 3. **Deciding the strength of the prior**. This is often controlled by a standard deviation or scale parameter in the corresponding distribution function. A small standard deviation or scale parameter implies low uncertainty in the parameter value which leads to a stronger prior. A large standard deviation or scale parameter implies high uncertainty in the parameter value which leads to a weaker prior. It is recommended that each prior distribution that is considered to be used should be assessed carefully before using it. This will ensure that the strength of the prior reflects your confidence level in the parameter values. Refer to the discussion in Section 4.3 on prior selection for more details. 4. **Deciding the shape of the prior**. Some distributions are left-skewed, others are right skewed and some are symmetric. Some have heavier tails than others, e.g. the student's T-distribution is known for its heavier tail compared to a normal distribution. The shape of the probability density function (PDF) should reflect knowledge about the parameter value prior to observing the data. When selecting new priors, besides the discussion in Section 4.3, you may also find the tips and recommendations in github.com/stan-dev/stan/wiki/Prior-Choice-Recommendations to be useful. For a more advanced discussion of various prior choices, also see Simpson et al (2014). For univariate distributions, you can plot the distribution's probability density curve using the PumasPlots.lines6 function, e.g: Footnote 6: which is a part of the PumasUtilities package ``` 1usingPumasUtilities 2dist=Normal(0.0,1.0) 3PumasPlots.lines(dist) ``` For multivariate and matrix-variate distributions, you can use the rand, mean and var functions to sample from the distribution and make sense of the values' distributions. For example: ``` 1dist=LKJ(3,1.0) 2x=rand(dist,100)#100samples 3mean(x)#mean 4var(x)#element-wisevariance ``` The following is a description of some of the most popular prior distributions available in Pumas: 1. Normal(\(\mu\), \(\sigma\)): univariate normal distributions with support \((-\infty,\infty)\), mean \(\mu\) and standard deviation \(\sigma\). 2. LogNormal(\(\mu\), \(\sigma\)): univariate log normal distribution with support \((0,\infty)\) and a log-scale mean \(\mu\) and log-scale standard deviation \(\sigma\). 3. 
MwNormal(\(\mu\), \(\Sigma\)): multivariate normal distribution with mean vector \(\mu\) and covariance matrix \begin{table} \begin{tabular}{|c|l|} \hline **Support** & **Distributions** \\ \hline \((0,1)\) & Beta, KSOneSided, NoncentralBeta, LogitNormal \\ \hline \((0,\infty)\) & BetaPrime, Chi, Chisq, Erlang, Exponential, FDist, Frechet, Gamma, InverseGamma, InverseGaussian, Kolmogorov, LogNormal, NoncentralChisq, NoncentralF, Rayleigh, Weibull \\ \hline \((-\infty,\infty)\) & Cauchy, Gumbel, Laplace, Logistic, Normal, NormalCanon, NormalInverseGaussian, PGeneralizedGaussian, TDist \\ \hline Real vectors & MvNormal \\ \hline Positive vectors & MvLogNormal \\ \hline Positive definite matrices & Wishart, InverseWishart \\ \hline Correlation matrices & LKJ, LKJCholesky \\ \hline Other & Constrained, truncated, LocationScale, Uniform, Arcsine, Biweight, Cosine, Epanechnikov, Semicircle, SymTriangularDist, Triweight, Pareto, GeneralizedPareto, GeneralizedExtremeValue, Levy \\ \hline \end{tabular} \end{table} Table 2: The table shows some of the most popular prior distributions available in Pumas and their corresponding support domains. You can learn more about each distribution using? followed by the distribution name in the Pumas command line prompt. \(\Sigma\). The matrix \(\Sigma\) can also be a diagonal matrix, e.g. Diagonal([1.0, 1.0]). You can also pass \(\Sigma\) alone as a matrix, e.g. MvNormal(\(\Sigma\)), and the means will be assumed to be 0. 4. MvLogNormal(\(\mu\), \(\Sigma\)): a multivariate log-normal distribution over positive vectors with log-scale mean vector \(\mu\) and log-scale covariance matrix \(\Sigma\) as defined in the MvNormal case above. 5. Cauchy(\(\mu\), \(\sigma\)): a univariate Cauchy distribution with support \((-\infty,\infty)\), location \(\mu\), and scale \(\sigma\). 6. Constrained(dist, lower = l, upper = u): a constrained prior distribution with a fixed support (l, u) and a fixed base distribution dist that could be any univariate or multivariate distribution. lower and upper set the lower and upper bounds on the random variables' support, respectively, defaulting to -Inf \((-\infty)\) and Inf \((\infty)\), respectively. When dist is a univariate distribution, lower and upper should be scalars. When constraining multivariate distributions, lower and upper can be vectors or scalars. If set to a scalar, the same bound will be used for all random variables. There is also a truncated distribution which is different from Constrained in that it allows the base distribution to be a function of the model's parameters but truncated only supports univariate base distributions. In general, it's recommended to use Constrained in the @param block and truncated in the @random and @derived blocks. Examples: * Constrained(Normal(0.0, 1.0), lower = 0.0) is a half normal distribution. * Constrained(Cauchy(0.0, 1.0), lower = 0.0) is a half Cauchy distribution. * Constrained(MvNormal([0.0, 0.0], [1.0 0.0; 0.0 1.0]), lower = 0.0) is a constrained multivariate normal distribution. The init keyword argument can also be set to specify the initial value of the parameter, e.g. Constrained(Normal(), lower = 0.0, init = 1.0) 7. truncated(dist, lower, upper): similar to Constrained with fixed lower and upper bounds lower and upper, respectively, and a base distribution dist. Setting upper is optional and it defaults to Inf \((\infty)\) when not set. 
In truncated, the base distribution dist is allowed to depend on the model's parameters and the normalization constant is computed in every log probability evaluation. However, the lower and upper bounds must be fixed constants and truncated only supports univariate base distribution. Examples: truncated(Normal(0, 1), 0.0, Inf) is a half normal distribution. truncated(Cauchy(), 0.0, Inf) is a half Cauchy distribution. truncated(Normal(), -Inf, 0.0) is a negative half normal distribution. 8. Uniform(\(l\), \(u\)): a univariate uniform distribution with lower and upper bounds \(l\) and \(u\) respectively. 9. LKJ(\(d\), \(\eta\)): a matrix-variate LKJ prior over correlation matrices of size \(d\times d\). \(\eta\) is the positive shape parameter of the LKJ prior. Decreasing \(\eta\) results in samples with correlations closer to \(\pm 1\). There is also LKJCholesky which is semantically identical to LKJ but has some advantages. See below. 10. LKJCholesky(\(d\), \(\eta\)): a Cholesky-factorized version of the LKJ distribution where the matrix sampled is in factorized form. This is recommended over LKJ for use inside the model for performance reasons. 11. Wishart(\(\nu\), \(S\)): a matrix-variate Wishart distribution over \(d\times d\) positive definite matrices with \(\nu\) degrees of freedom and a positive definite \(S\) scale matrix. 12. InverseWishart(\(\nu\), \(\Psi\)): a matrix-variate inverse Wishart distribution over \(d\times d\) positive definite matrices with \(\nu\) degrees of freedom and a positive definite scale matrix \(\Psi\). 13. Beta(\(\alpha\), \(\beta\)): a univariate Beta distribution with support from 0 to 1 and shape parameters \(\alpha\) and \(\beta\). 14. Gamma(\(\alpha\), \(\theta\)): a univariate Gamma distribution over positive numbers with shape parameter \(\alpha\) and scale \(\theta\). 15. Logistic(\(\mu\), \(\theta\)): a univariate logistic distribution with support \((-\infty,\infty)\), location \(\mu\) and scale \(\theta\). 16. LogitNormal(\(\mu\), \(\sigma\)): a univariate logit normal distribution with support \((0,1)\) and a base normal distribution with mean \(\mu\) and standard deviation \(\sigma\). 17. \(\texttt{TDist}(\nu)\colon\) a univariate Student's T distribution with support \((-\infty,\infty)\), \(\nu\) degrees of freedom and mean 0. To change the mean of the T distribution, you can use a LocationScale distribution (shown below). 18. LocationScale(\(\mu\), \(\sigma\), \(d\)): a scaled and translated univariate distribution with a base distribution \(d\). The base distribution's random variable is first scaled by \(\sigma\) and then translated by \(\mu\). Example: LocationScale(1.0, 2.0, TDist(2)) is a scaled and translated Student's \(t\) distribution. The mean of the LocationScale distribution is \(\mu+\sigma\times\text{mean(d)}\) and the standard deviation is \(\sigma\times\text{std(d)}\). 19. Laplace(\(\mu\), \(\sigma\)): a univariate Laplace distribution with support \((-\infty,\infty)\), location \(\mu\) and scale \(\sigma\). 20. Exponential(\(\theta\)): a univariate exponential distribution with support \((0,\infty)\) and scale \(\theta\). 21. (Improper) flat priors: instead of using a distribution, one can specify a domain instead such as a VectorDomain for vector parameters, PSDDomain for positive definite parameters or CorrDomain for correlation matrix parameters. Those domains are treated in Pumas as flat priors. If the domain is open, this would be an improper prior. For more about domains, see the Pumas documentation (docs.pumas.ai). 
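Putting the model blocks and prior distributions above together, the following is a hedged sketch of how the single-subject, 1-compartment model described in Section 3.1.2 could be written. It mirrors the listings shown earlier, but the tv-prefixed parameter names and the specific prior values are illustrative assumptions, not a reproduction of the paper's exact Listing 8.

```
# A sketch of a complete single-subject Bayesian Pumas model (illustrative priors).
using Pumas

pk_1cmp = @model begin
    @param begin
        tvka ~ LogNormal(log(1.5), 1)                  # absorption rate constant Ka
        tvcl ~ LogNormal(log(2.5), 1)                  # clearance CL
        tvvc ~ Constrained(Normal(70, 10); lower = 0)  # central volume Vc
        σ    ~ Exponential(3)                          # residual error scale
    end
    @pre begin
        # Single subject: individual parameters equal the population parameters.
        Ka = tvka
        CL = tvcl
        Vc = tvvc
    end
    @dynamics begin
        Depot'   = -Ka * Depot
        Central' =  Ka * Depot - CL / Vc * Central
    end
    @derived begin
        cp   = @. Central / Vc
        conc ~ @. LogNormal(log(cp), σ)
    end
end
```

Adding an @random block with an MvNormal prior on \(\eta\), and using it in @pre, would turn this sketch into a population model along the lines of the paper's Listing 9.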
### Prior Simulations and Predictions After defining a model with the priors, a prior predictive check may be run to check how close the prior predictions or simulations are to the real data. Prior simulations can be run using the simobs function passing in the model, model, and subject/population, data, as arguments. ``` 1sims=simobs(model,data;samples=100) ``` Listing 10: Prior simulation The samples keyword argument specifies the number of samples to simulate from the prior distributions for each subject in data. In the simobs function, there's also a keyword argument: simulate_error. If it is set to true (the default value), Pumas will sample from the response's error model in the @derived block (aka simulation), otherwise it will return the expected value of the error distribution (aka prediction). The latter is equivalent to using the predict function. There are various statistics and queries that can be run given the simulation results. These are explained in Section 3.15. The simulation results can also be plotted using a visual predictive check (VPC) as explained in Section 3.14. When the simulations are from the prior model, the VPC is usually called a prior predictive check. ### Fitting a Model Now that we have a model, we need to fit it using Pumas. This is the role of the fit function which takes four positional arguments: 1. A Pumas model 2. A Pumas population 3. The initial parameter estimates 4. A fitting algorithm The fitting algorithm in an MLE setting can be an instance of FOCE or Laplace for example. However to run Bayesian inference using MCMC instead, it can be set to an instance of either: * BayesMCMC * MarginalMCMC MarginalMCMC samples from the marginal posterior by integrating out the subject-specific parameter first, whereas BayesMCMC samples from the joint posterior. MarginalMCMC can be much faster than BayesMCMC in some cases but it is still experimental and will be improved in the future. The options in BayesMCMC and MarginalMCMC are passed when constructing an instance of the algorithm using keyword arguments, e.g BayesMCMC(nsamples=2000). The main options that can be set in both BayesMCMC and MarginalMCMC are: * target_accept: target acceptance ratio for the NUTS algorithm, defaults to 0.8 * nsamples: number of Markov Chain Monte Carlo (MCMC) samples to generate, defaults to 2000 * nadapts: number of adaptation steps in the NUTS algorithm, defaults to 1000 * nchains: number of MCMC chains to sample, defaults to 4 * ess_per_chain: target effective sample size (ESS) per chain, sampling terminates if the target is reached, defaults to nsamples * check_every: the number of samples after which the ESS per chain is checked * time_limit: a time limit for sampling in seconds, sampling terminates if the time limit is reached, defaults to Inf (which is \(\infty\) in Julia) * ensemblealg: can be set to EnsembleSerial() for serial sampling, EnsembleThreads() for multi-threaded sampling or EnsembleDistributed() for multi-processing (aka distributed parallelism) sampling. By default parallelism over both chains and subjects will be turned on if enough threads/processes are available. * parallel_chains: can be set to true or false. If set to false, the chains will not be sampled in parallel. If set to true, the chains will be sampled in parallel using either multi-threading or multi-processing depending on the value of ensemblealg. The default value is true. * parallel_subjects: can be set to true or false. 
If set to false, the log probability computation will not be parallelized. This is preferred when the number of subjects is small. If set to true, the log probability computation will be parallelized over the subjects using either multi-threading or multi-processing depending on the value of ensemblealg. The default value is true if enough threads/processes are available to do both parallelism over chains and subjects.
* rng: the random number generator used.
* diffeq_options: a NamedTuple of all the differential equations solver's options, e.g. diffeq_options = (alg = Rodas5(),) can be used to force Pumas to use the stiff ODE solver Rodas5 instead of relying on the automatic stiffness detection and auto-switching behaviour of Pumas.
* constantcoef: a NamedTuple of the parameters to be fixed during sampling. This can be used to sample from conditional posteriors fixing some parameters to specific values, e.g. constantcoef = (\(\sigma\) = 0.1,) fixes the \(\sigma\) parameter to 0.1 and samples from the posterior of the remaining parameters conditional on \(\sigma\).

The MarginalMCMC algorithm also has a keyword argument marginal_alg which defaults to LaplaceI() but can also be FOCE() or FO(). By default, both BayesMCMC and MarginalMCMC will run 4 Markov chains in parallel, using the remaining computing resources to parallelize the computations across subjects. By default, 2,000 MCMC iterations will be run, using the first 1,000 samples of each chain as burn-in. Pumas does not automatically discard the burn-in samples; the user needs to call the discard function to discard them. If you are using Pumas 2.4, you can use Pumas.truncate instead of discard.

Listing 11 shows a Bayesian Pumas model fitting example. We save the result to res and call discard on it with the keyword argument burnin set to 1,000 samples. This outputs a truncated fit by discarding the first 1,000 samples per chain. Note that in Julia, 1_000 and 1000 are equivalent; the underscore is only a digit separator.

```
res = fit(model, pop, iparams, BayesMCMC(nsamples = 2_000, nadapts = 1_000))
tres = discard(res; burnin = 1_000)
```
Listing 11: Fitting a Bayesian model in Pumas

You can also pass a ratio keyword argument to the discard function to drop (1 - ratio) \(\times 100\%\) of the samples. This is known as thinning and it works by selecting 1 sample from every 1/ratio samples in each chain. Generally speaking, thinning is discouraged in the final analysis because it leads to some loss of information. However, in the initial exploratory phase, when many exploratory simulations/predictions are run, thinning may be desirable to iterate faster. Another example is in Listing 12 where we are using MarginalMCMC.

```
res = fit(model, pop, iparams, MarginalMCMC(nsamples = 100, nadapts = 10))
tres = discard(res; burnin = 10)
```
Listing 12: Fitting a Bayesian model in Pumas with custom arguments

When fitting the model using BayesMCMC or MarginalMCMC, you will be able to view the progress of the sampler using live progress information displayed as shown in Figure 1. When using multi-threading or distributed parallelism, an interval or ratio is displayed for each field instead of a single value. The following is a description of the most important fields displayed in Figure 1:
* iterations refers to how many MCMC iterations are completed, including both the adaptation and sampling phases.
* n_steps is the number of time steps taken in the last proposal. If this is too large, the NUTS sampler will be very slow and inefficient.
For more on the number of steps, see Section 4.5.6. * is_accept is true if the last proposal was accepted and false otherwise. For more on proposals and the NUTS algorithm, see Section 4.5. * acceptance_rate refers to the average acceptance rate of all the past proposals. This should converge to a similar value as the target_accept option set after the adaptation phase. For more on the acceptance ratio, see Section 4.5.5. * log_density refers to the log joint probability of the parameters and observations. If this is monotonically increasing late during the sampling phase of the fit, this is a sign that the sampler likely didn't converge to the area(s) of Figure 1: Live progress information displayed during sampling using Pumas. Figure 2: An example of the display of the Bayesian fit result, output from the fit function in Pumas. high posterior probability mass during adaptation and the chains likely would not converge. For more on signs of lack of convergence, see Section 4.7 and for more on monotonically increasing log densities (aka optimization behaviour), see Section 4.5.5. * tree_depth is the maximum tree depth reached when generating the last proposal. For more on this, see Section 4.5.6. * step_size is the time step size in the NUTS algorithm which is adapted during the adaptation phase and fixed during the sampling phase. For more on the step size and its connection to the target acceptance ratio, see Section 4.5.5. * is_adapt is true during the adaptation phase and false during the sampling phase. An example of the result of the fit function is shown in Figure 2. A number of summary statistics for the population parameters are displayed automatically. You can also use other Pumas functions to query specific summary statistics programmatically, rather than only in display. For more on summary statistics functions, see Sections 3.6 and 3.11. ### Numerical Errors and Debugging #### 3.4.1 Numerical Instability in the Model Each evaluation of the log-likelihood at specific parameters values \((\eta,\theta)\) involves a full evaluation of the model, including the structural model (@pre block), numerically solving the differential equations (@dynamics block), and the computing of the likelihood (@derived block). In order to perform effective Bayesian inference, one needs to ensure that all the model blocks are numerically stable and do not lead to Inf, -Inf or NaN values. Numerical instability can result from many causes but some usual suspects include: 1. Dividing by a very small number or 0, e.g. if a parameter is in the denominator and is allowed to be 0 during the fitting. 2. Calling the exponential function with a large exponent, e.g. due to bad initial parameter values. 3. Some observations may have 0 or approximately 0 probability according to your model at a specific \((\eta,\theta)\). For example, if a Bernoulli response distribution was used and the probability parameter of the Bernoulli distribution was exactly 0 (or 1), when a 1 (or 0) observation exists in the data. 4. The ODE solver is failing to converge to a solution because the dynamics are not stable for a particular choice of extreme \((\eta,\theta)\). 5. The response's distribution has 0 standard deviation, e.g. when using a proportional error model and the concentration drops to 0. 6. Taking log or square root of a negative parameter. Initial parameter values that cause numerical errors are often more important to watch out for than intermediate bad values during the fitting. 
This is because bad intermediate models will be rejected automatically when they lead to numerical errors. When this happens, one may see the following warning occur:

```
Warning: The current proposal will be rejected due to numerical error(s).
isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
```

This warning is not necessarily a bad thing or a failure of the estimation process. What it represents is that the numerical estimation of \(p(\eta,\theta|D)\) has failed for a particular run. In many cases, this warning is benign and expected from the Bayesian estimation process, e.g. it could be that extreme parameters of an ODE model led to unstable dynamics causing the simulator to diverge. The MCMC stepping process will recover from this by rejecting the step and proposing new \((\eta,\theta)\) values to try. Thus, if one only sees a few of these warnings during the estimation process, there may be nothing to worry about. However, if excessive warnings are displayed, then this could mean that many steps are being rejected, causing the MCMC process to not effectively explore the posterior and potentially leading to a lack of convergence. One further indication of this is if the stepping process also displays the additional warning:

```
NaN dt detected. Likely a NaN value in the state, parameters, or derivative value caused this outcome.
```

This warning implies that the ODE solver failed. It is shown because the calculation of the initial ODE time step \(dt\), which is the first part of the ODE solver process, resulted in NaN. This is usually caused by NaN or Inf values in the ODE parameters. If this is the case, it may be a good idea to investigate whether the individual parameters in the @pre block have reasonable values or not. One quick way to do this is to instrument the @model definition to print out the current values that are being used in the simulation process. For example, the line

```
Vc = θ[3] * exp(η[2])
```

can be changed to

```
Vc = @pumasdebug θ[3] * exp(η[2])
```

which will print out the value that is calculated at every step. Using these printouts, one can directly see the values of the ODE parameters being used in the @dynamics block. @pumasdebug is a Pumas 2.4 feature which is not available prior to that. Some of the most common issues found through this form of debugging are due to incorrectly set parameter bounds. For example, a parameter defined by dividing by a fixed or random effect may take a (nearly) infinite value if the denominator is close to zero. The fix is to ensure that a lower bound is appropriately defined for the effect causing the (near) 0 denominator.

#### Numerical Instability in the ODE Solver

If the ODE parameters seem to be realistic candidates that are being rejected, then one may need to ensure that the ODE solver process is appropriate for the equation at hand. Any of the following warnings is usually a sign that the ODE solver is failing for algorithmic reasons rather than because of NaN or Inf values in the parameters:

```
Warning: Interrupted. Larger maxiters is needed.
Warning: dt(x) <= dtmin(y) at t=r. Aborting.
Warning: Instability detected. Aborting.
```

For debugging such situations, it can be helpful to recreate the ODE solve with the exact parameters generated by @pre and directly call the ODE solver functions from DifferentialEquations.jl (Rackauckas and Nie, 2017). However, some common guidance is:
* Reduce the tolerances. Generally, a very robust set of tolerances (at the cost of some performance) is abstol=1e-12, reltol=1e-12.
This can be set as part of the diffeq_options keyword argument in the sampling algorithm, e.g. BayesMCMC(diffeq_options = (abstol = 1e-12, reltol = 1e-12,)).
* Change the ODE solver to a method specifically designed for stiff differential equations. A common choice is Rodas5P(). Once again, this can be set using the diffeq_options keyword argument, e.g. BayesMCMC(diffeq_options = (alg = Rodas5P(),)).

One common reason for numerical failures of ODE solvers is a property known as stiffness in the ODE. Stiffness is difficult to rigorously define but can loosely be described as large time-scale differences in the rates of change in an ODE, for example, if one state has a derivative of 1 while another has a derivative of 10\({}^{9}\). This can lead to difficulties in the ODE solver and, by consequence, in the simulation and Bayesian inference process. Pumas, by default, uses an ODE solver which switches between a less robust but faster method for non-stiff ODEs and a more robust but slower method for stiff ODEs. However, this default behaviour can at times be less stable than requiring all steps to use a stiff ODE solver, hence the second recommendation to manually switch the ODE solver.

### Updating the Posterior with New Data

There are algorithms that can efficiently update the posterior samples given new observations per subject or new subjects, such as sequential Monte Carlo. As of the time of this writing, Pumas does not implement these algorithms. To update the posterior given new data for existing subjects or new subjects, you would currently have to refit the model to the entire dataset. Alternatively, you can approximate the posterior samples using a tractable distribution family, e.g. a multivariate Gaussian distribution, and refit the model to the new data only, using the posterior approximation as the prior distribution. In future releases of Pumas, we intend to implement such efficient methods for updating the posterior samples. Please refer to the Pumas documentation (docs.pumas.ai) for a list of the latest features.

### Basic Summary Statistics

To query a number of basic summary statistics for the population parameters, you can use:

```
summarystats(tres)
```

where tres is the result from fit or discard. This will output the sample mean, sample standard deviation, Monte Carlo standard error (MCSE), effective sample size (ESS), \(\hat{R}\) and ESS per second. To get the same summary statistics for the subject-specific parameters of the \(i^{th}\) subject, you can use the subject keyword argument:

```
summarystats(tres, subject = i)
```

### How Many Samples are Needed?

The number of samples needed to accurately estimate a quantity depends on the quantity of interest. For instance, to estimate the probability of rare events or some extreme quantiles, you will need many more samples than are needed to estimate the mean of the posterior. By default, Pumas will generate 4 chains with 2000 samples per chain, 1000 of which will be used for adaptation. Depending on what you are trying to estimate, you may need to run the sampler for longer and check that the result does not significantly change as you increase the number of samples in your chains. More concretely, an ESS of 400 was recommended as a good target in Vehtari et al (2019). With the default 4 chains Pumas runs, this is an ESS of 100 per chain. The same ESS recommendations were also reported in old and recent editions of Gelman et al (2013a).
While this is just a general guideline and it doesn't apply to extreme quantile estimation or estimating probabilities of rare events, it can be a good initial target to aim for. The ess_per_chain option discussed in Section 3.3 can be used to set a target ESS per chain. Beside a target ESS, one should also ensure that the \(\hat{R}\) diagnostic is less than 1.1. It is even better if it were less than 1.01 as recommended in Vehtari et al (2019). Estimating the ESS, \(\hat{R}\) and MCSE for the purpose of estimating different quantities other than the posterior mean, e.g. parameter quantiles, is currently not implemented in Pumas but it is an upcoming feature. This can give users more insights into the estimation accuracy of their MCMC samples. ### Diagnostic Plots There are several diagnostic plots that help you identify lack of convergence including: trace plot, cumulative mean plot, and auto-correlation plot. All of the diagnostic plots require the loading of the PumasUtilities package first: ``` usingPumasUtilities ``` Assume \(\ttres\) is the output from fit or discard. #### 3.8.1 Trace Plot The trace plot of a parameter shows the value of the parameter in each iteration of the MCMC algorithm. A good trace plot is one that: * is noisy, not an increasing or decreasing line. * has a fixed mean. * has a fixed variance. * shows all chains overlapping with each other, also known as chain mixing7. Footnote 7: Chain mixing refers to the case when different chains include samples from the same regions in the posterior as opposed to each chain including samples from a separate region of the posterior. You can plot trace plots with the function trace_plot, e.g: ``` trace_plot(trees;parameters=[:tvcl]) ``` Figure 3 shows the resulting trace plot for the parameter tvcl. You can add more parameter names to the parameters keyword argument, e.g. parameters = [:tvcl, :tvvc] to plot more parameters. As you can see the trace plot shown has many of the desired properties of a good trace plot. Figure 3: Example of a trace plot. When the parameters keyword argument is not specified, all the population parameters' trace plots will be displayed. To plot the trace plot of the subject-specific parameters of a group of subjects, you can set the subjects keyword argument instead of setting the parameters keyword argument, e.g: ``` trace_plot(tres;subjects=[i,2]) ``` See the Pumas documentation (docs.pumas.ai) for more details and examples. #### 3.8.2 Cumulative Mean Plot The cumulative mean plot of a parameter shows the mean of the parameter value in each MCMC chain up to a certain iteration. An MCMC chain converging to a stationary posterior distribution should have the cumulative mean of each parameter converge to a fixed value. Furthermore, all the chains should be converging to the same mean for a given parameter, the posterior mean. If the cumulative mean curve is not converging or the chains are converging to different means, this is a sign of non-convergence. You can plot a cumulative mean plot for the population parameter tvcl using: ``` cummean_plot(tres;parameters=[:tvcl]) ``` Figure 4 shows the resulting trace plot for the parameter tvcl. Much like in the trace plot, you can add more parameter names to the parameters keyword argument or leave it out completely which will plot all the population-level parameters. Similarly, the same plot can be plotted for the subject-specific parameters using the subjects keyword argument instead of the parameters keyword argument. 
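As a small usage sketch tying together the plots discussed so far, the following assumes tres is the output of fit or discard and that tvcl and tvvc are parameters in the model; the specific parameter names and subject indices are illustrative.

```
# Hedged sketch of the convergence plots shown above (illustrative names).
using PumasUtilities

trace_plot(tres; parameters = [:tvcl, :tvvc])    # check mixing and stationarity
cummean_plot(tres; parameters = [:tvcl, :tvvc])  # cumulative means should stabilize
trace_plot(tres; subjects = [1, 2])              # subject-specific parameter traces
```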
Figure 4: Example of a cumulative mean plot.

#### 3.8.3 Auto-correlation Plot

MCMC chains are prone to auto-correlation between the samples because each sample in the chain is a noisy function of the previous sample. The auto-correlation plot shows the correlation between every sample with index \(s\) and the corresponding sample with index s + lag for all s \(\in\) 1:N-lag, where N is the total number of samples. For each value of lag, we can compute a correlation measure between the samples and their lag-steps-ahead counterparts. The correlation is usually a value between 0 and 1 but can sometimes be between -1 and 0 as well. The auto-correlation plot shows the lag on the x-axis and the correlation value on the y-axis. For well-behaving MCMC chains, the correlation gets closer to 0 as lag increases. This means that there is less and less correlation between any 2 samples further away from each other. The value of lag at which the correlation becomes close to 0 can be used to guide the thinning of the MCMC samples to extract mostly independent samples from the auto-correlated samples. The discard function can be used to perform thinning with the ratio keyword set to 1 / lag for an appropriate value of lag.

```
discard(tres; ratio = 1/lag)
```

That said, generally speaking, thinning is usually discouraged in the final analysis because it leads to some loss of information. However, in the initial exploratory phase, when many exploratory simulations/predictions are run, thinning may be desirable to iterate faster. You can plot an auto-correlation plot for the population parameter tvcl using:

```
autocor_plot(tres; parameters = [:tvcl])
```

Figure 5 shows the resulting auto-correlation plot for the parameter tvcl. Much like in the trace plot, you can add more parameter names to the parameters keyword argument or leave it out completely, which will plot all the population-level parameters. Similarly, the same plot can be made for the subject-specific parameters using the subjects keyword argument instead of the parameters keyword argument.

### More Diagnostics

A number of other diagnostics exist to help you identify:

* When the MCMC algorithm hasn't converged, or
* How many samples to throw away as burn-in.

In general, we recommend running these diagnostics after removing the adaptation steps using the discard function. Some of the diagnostics we present here can then tell you how many more samples to remove as burn-in after removing the adaptation steps. The discard function can be used again on its own output to remove the additional samples as burn-in.

#### 3.9.1 Geweke Diagnostic

```
gewekediag(tres; subject = nothing, first = 0.1, last = 0.5)
```

The above function computes the Geweke diagnostic (Geweke, 1991) for each chain, outputting a p-value per parameter. tres is the output from fit or discard and the remaining keyword arguments have the default values shown above. If the subject keyword argument is set to nothing (the default value) or left out, the chains diagnosed are those of the population parameters. If subject is set to an integer index, the chains diagnosed are those of the subject-specific parameters corresponding to the subject with the input index.
The Geweke diagnostic compares the sample means of two disjoint sub-chains \(X_{1}\) and \(X_{2}\) of the entire chain using a normal difference of means hypothesis test where the null and alternative hypotheses are defined as: \[H_{0} :\mu_{1}=\mu_{2}\] \[H_{1} :\mu_{1}\neq\mu_{2}\] where \(\mu_{1}\) and \(\mu_{2}\) are the population means. The first sub-chain \(X_{1}\) is taken as the first (first * 100)% of the samples in the chain, where first is a keyword argument defaulting to 0.1. The second sub-chain \(X_{2}\) is taken as the last (last * 100)% of the samples in the chain, where last is a keyword argument defaulting to 0.5. The test statistic used is: \[z_{0}=(\bar{x}_{1}-\bar{x}_{2})\Big{/}\sqrt{s_{1}^{2}+s_{2}^{2}}\] where \(\bar{x}_{1}\) and \(\bar{x}_{2}\) are the sample means of \(X_{1}\) and \(X_{2}\) respectively, and \(s_{1}\) and \(s_{2}\) are the Markov Chain standard error (MCSE) estimates of \(X_{1}\) and \(X_{2}\) respectively. Auto-correlation is assumed within the samples of each individual sub-chain, but the samples in \(X_{1}\) are assumed to be independent of the samples in \(X_{2}\). The p-value output is an estimate of \(P(|z|>|z_{0}|)\), where \(z\) is a standard normally distributed random variable. Low p-values indicate one of the following: * The first and last parts of the chain are sampled from distributions with different means, i.e. non-convergence, * The need to discard some initial samples as burn-in, or * The need to run the sampling for longer due to lack of samples or high auto-correlation. High p-values (desirable) indicate the inability to conclude that the means of the first and last parts of the chain are different with statistical significance. However, this alone does not guarantee convergence to a fixed posterior distribution because: * Either the standard deviations or higher moments of \(X_{1}\) and \(X_{2}\) may be different, or * The independence assumption between \(X_{1}\) and \(X_{2}\) may not be satisfied when high auto-correlation exists. #### 3.9.2 Heidelberger and Welch diagnostic ``` 1heideldiag(tres;subject = nothing, alpha = 0.05, eps = 0.1, start = 1) ``` The above function computes the Heidelberger and Welch diagnostic (Heidelberger and Welch, 1983) for each chain. If the subject keyword argument is set to nothing (default value) or left out, Figure 5: Example of an auto-correlation plot. the chains diagnosed will be those of the population parameters. If subject is set to an integer index, the chains diagnosed will be those of the subject-specific parameters corresponding to the subject with the input index. The output of this function is a dataframe whose columns are explained below. Intuitively, the Heidelberger diagnostic attempts to: * Identify a cutoff point for the initial transient phase for each parameter, after which the samples can be assumed to come from a steady-state distribution. The initial transient phase can be removed as a burn-in. The cutoff point for each parameter is given in the burnin column of the output dataframe. * Estimate the relative confidence interval for the mean of the steady-state posterior distribution of each parameter, assuming such steady-state distribution exists in the samples. The relative confidence interval is computed by dividing the lower and upper bounds of the confidence interval by the mean value of the parameter. A large confidence interval implies either the lack of convergence to a stationary distribution or the lack of samples. 
Half the relative confidence interval is given in the halfwidth column of the output dataframe. The test column will be true (1) if the halfwidth is less than the input target eps (default is 0.1) and false (0) otherwise. Note that parameters with a mean value close to 0 can have erroneously large relative confidence intervals because of the division by the mean. The test value can therefore be expected to be false (0) for those parameters without concluding a lack of convergence.
* Quantify the extent to which the distribution of the samples is stationary using statistical testing. The returned p-value, shown in the pvalue column of the output dataframe, can be considered a measure of mean stationarity. A p-value lower than the input threshold alpha (default is 0.05) implies a lack of stationarity of the mean, i.e. the posterior samples did not converge to a steady-state distribution with a fixed mean.

The Heidelberger diagnostic only tests for the mean of the distribution. Therefore, much like other diagnostics, it can only be used to detect the lack of convergence and not to prove convergence. In other words, even if all the numbers seem normal, one cannot conclude that the chain converged to a stationary distribution or that it converged to the true posterior.

### What if the Chains Are Not Converging?

If the chains seem to not be converging, there are things you can try to help your Markov chains converge:

* Lower the target acceptance ratio from the default 0.8.
* Re-parameterize your model to have less parameter dependence.
* Fix some parameter values to known good values, e.g. values obtained by _maximum-a-posteriori_ (MAP) optimization.
* Initialize the sampling from good parameter values.
* Use a stronger prior around suspected good parameter values.
* Simplify your model, e.g. using simpler dynamics.
* Try the marginal MCMC algorithm MarginalMCMC instead of the full joint MCMC algorithm BayesMCMC.

### Advanced Posterior Queries

#### 3.11.1 Summary Statistics

After you fit your Bayesian Pumas model, there are a number of functions and plots you can call on the output of the fit or discard functions. Often you want to execute posterior queries. Besides the basic summary statistics that one can get using the summarystats function as discussed in Section 3.6, one can also compute more advanced statistics based on the posterior. A common advanced posterior query is the probability that a certain parameter \(\theta\) is higher than 0, which can be written as an expectation problem:
\[\mathrm{E}[\theta>0\mid\mathrm{data}]\]
You can compute this using the mean function with a convenient do operator. Listing 13 shows an example of a posterior query using the do operator where we are testing if the parameter tvcl is higher than 0. It outputs a valid probability estimate, i.e. \(\in[0,1]\).
```
mean(tres) do p
    p.tvcl > 0
end
```
Listing 13: Example of a posterior query with the do operator

Instead of mean, one can also use var to compute the variance, or use cov and cor to compute the covariance and correlation matrices, respectively, if multiple outputs are returned from the do block. Listing 14 shows an example where the correlation matrix between the tvcl and tvvc parameters is estimated using the posterior samples.
```
cor(tres) do p
    [p.tvcl, p.tvvc]
end
```
Listing 14: Posterior queries from multiple outputs

Note that any transformation of the parameters can be done in the do block.
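As a further hedged sketch using the same do-block pattern (the bounds 0.5 and 2.0 are made up purely for illustration), you can estimate the posterior probability that tvcl falls inside a given interval:
```
mean(tres) do p
    0.5 < p.tvcl < 2.0   # hypothetical bounds, for illustration only
end
```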
For example, we can get the mean value of the lower triangular Cholesky factor of a correlation matrix parameter C using the code in Listing 15.
```
mean(tres) do p
    getchol(p.C).L
end
```
Listing 15: Mean Cholesky factor of correlation matrix

This is sometimes needed to compare the results of Pumas and Stan because Stan's equivalent to LKJCholesky reports the lower triangular factors in the MCMC samples instead of the actual correlation matrices, which Pumas reports. To compute summary statistics of the subject-specific parameters of subject \(i\) instead of the population parameters, you can use the subject keyword argument as shown in Listing 16.
```
mean(tres, subject = i) do p
    p.ηstd
end
```
Listing 16: Mean subject-specific parameters

#### 3.11.2 Quantiles

You can query the estimate of the posterior quantiles for population-level or subject-specific parameters using the quantile function:
```
quantile(tres)
```
This will display the 2.5%, 25%, 50%, 75%, and 97.5% quantiles of all the population-level parameters by default. To display the quantiles of the subject-specific parameters of subject \(i\) instead, you can use the subject keyword argument as such:
```
quantile(tres, subject = i)
```
To change the quantiles computed, you can also manually input the desired quantiles using the q keyword argument. For example, the following returns the 10% and 90% quantiles of the subject-specific parameters of subject \(i\):
```
quantile(tres, subject = i, q = [0.1, 0.9])
```

#### 3.11.3 Credible Intervals

A credible interval is an interval containing a pre-specified probability mass in the posterior distribution. For instance, an estimate of the 95% credible interval is any interval such that at least 95% of the posterior samples obtained with MCMC lie in that interval. Naively, one can use the interval from the 2.5% quantile to the 97.5% quantile of a parameter as a 95% credible interval. This can be obtained using the quantile function as shown in Section 3.11.2. However, less naively, one may be interested in the smallest interval that includes at least 95% of the posterior mass. This is commonly known as the highest probability density interval (HPDI). To get the HPDI which contains a fraction \((1-\text{a})\) of the samples for each population parameter, you can use:
```
hpd(tres, alpha = a)
```
To get the same interval for the subject-specific parameters of subject \(i\), you can use:
```
hpd(tres, alpha = a, subject = i)
```

### Posterior Plots

There are a number of plots that you can use to visualize the posterior distribution. In this section, we'll cover plots related to the parameter estimates: density plots, ridgeline plots and corner plots.

#### 3.12.1 Density Plot

A density plot shows a smoothed version of the histogram of a parameter value, giving an approximate probability density function for the marginal posterior of each parameter. This helps us visualize the shape of the marginal posterior of each parameter. If you run multiple Markov chains, the plot will show overlapping densities, one per chain, distinguished by different colors. You can plot density plots with the function density_plot. Listing 17 and Figure 6 show the code and the resulting density plot, respectively. If you do not specify which parameter you want with the optional keyword argument parameters, the plot will output multiple density plots faceted automatically. parameters accepts a vector of parameters.
```
density_plot(pk_lcmp_fit; parameters = [:tvcl])
```
Listing 17: Example of a density plot

#### 3.12.2 Ridgeline Plot

Another common posterior plot is the ridgeline plot, which outputs a single density summarizing all of the sampled chains along with relevant statistical information about your parameter. The information that it outputs is the mean, median, 10% and 90% quantiles, along with the 95% and 80% highest posterior density intervals (HPDI). You can plot ridgeline plots with the function ridgeline_plot, which has a similar syntax to density_plot. Listing 18 and Figure 7 show the code and the resulting ridgeline plot, respectively.
```
ridgeline_plot(pk_lcmp_fit; parameters = [:tvcl])
```
Listing 18: Example of a ridgeline plot

#### 3.12.3 Corner Plot

The corner plot showcases pairwise scatter plots between different parameters along with marginal histograms in a well-organized template. This can be used to investigate high correlations between parameter values, which can be a source of convergence issues for the MCMC sampler. Listing 19 shows the code for a corner plot for the parameters tvq and tvcl. The output is in Figure 8.
```
corner_plot(tres, parameters = [:tvq, :tvcl])
```
Listing 19: Example of a corner plot

Figure 6: Example of a density plot.

Figure 7: Example of a ridgeline plot.

Figure 8: Example of a corner plot.

### Posterior Simulations and Predictions

#### 3.13.1 Existing Subjects

You can simulate new responses for existing subjects using the parameter values sampled from the posterior stored in the MCMC result. This is not to be confused with prior predictive simulations which use parameter values sampled from the priors. The simobs function can be used to do this:
To multiply the dose of subject i by 3 given the data frame df where the dose amount is located in the first row of the :amt field, you can do: ``` 1subjdf=copy(df[df.id.==i,:]) 2subjdf[1,:amt]=subjdf[1,:amt]*3.0 3new_subj=read_pumas(subjdf)[1] ``` For more on data frame wrangling, please refer to the Pumas documentation (docs.pumas.ai) or tutorials (tutorials.pumas.ai). After defining the new subject new_subj, you can call the following method of simobs: ``` 1simobs(tres,new_subj,subject=i,samples=100) ``` Setting the subject keyword argument to the index i will trigger the use of the MCMC samples for subject i's parameters while using the dose, covariates and time points from new_subj. Note that the index i refers to the index of the subject in the training population passed to the fit function which may not match the ID field of the subject. To simulate for an actually new subject new_subj, where the subject-specfic parameters are sampled from the prior distribution and the population parameters are sampled from the posterior, you can drop the subject keyword argument: ``` 1simobs(tres,new_subj,samples=100) ``` ### Visual Predictive Checks and Simulation Plots A visual predictive check (VPC) of simulations in which the parameter values were sampled from the prior distributions is commonly known as the prior predictive check. Similarly, a VPC of simulations in which the parameter values were sampled from the posterior distribution is commonly known as the posterior predictive check. After calling a prior or posterior simobs call, the result simulation object sims can be passed to the vpc function to compute all of the quantiles necessary for a VPC plot. The function also accepts a categorical variable to stratify the VPC results with the keyword argument stratify_by. The VPC result can be plotted with the vpc_plot function. Listing 20 show the code for running a VPC from simulations. ``` 1vpc_res=vpc(sims) 2vpc_plot(vpc_res) ``` Listing 20: Visual predictive check Figure 9 is an example of an output of such code. To plot the simulated quantile median lines, you can set the keyword argument simquantile_medians=true, e.g: ``` 1vpc_res=vpc(sims) 2vpc_plot(vpc_res,simquantile_medians=true) ``` The resulting plot will look like Figure 10. To further display the observations as points on the VPC plot, you can set the keyword argument observations = true, e.g: ``` 1vpc_res=vpc(sims) 2vpc_plot(vpc_res,simquantile_medians=true,observations=true) ``` The resulting plot will look like Figure 11. For more on the many VPC options available, including changing the covariate or stratification, you can refer to the Pumas documentation (docs. pumas.ai). Instead of a full VPC plot, you can also just plot the simulated responses and observations without the VPC colour bands using the sim_plot function, e.g: ``` 1simplot(sims) ``` An example output is shown in Figure 12. ### Simulation Queries The output of a simobs call stores the simulated observations but also all the intermediate values computed such as: the parameter values used, individual coefficients, dose control parameters, covariates, differential equation solution, etc. There are a number of post-processing operations you can do on the simulation output to compute various queries and summary statistics based on the simulations. The postprocess function is a powerful tool that allows you to make various queries using the simulation results. There are multiple ways to use the postprocess function. 
The first way to use the postprocess function is to extract all of the information stored in the simulation result in the form of a vector of named tuples. Each named tuple has all the intermediate values evaluated when simulating 1 run. Let sims be the output of any simobs operation. Listing 21 shows how to extract all the simulation's intermediate results.
```
generated = postprocess(sims)
```
Listing 21: Extract intermediate values

If obstimes was set when calling simobs, the time-dependent variables will be evaluated at the time points in obstimes instead.

The second way to use postprocess is by passing in a post-processing function. The post-processing function can be used to:

* Transform the simulated quantities, or
* Compare the simulated quantities to the observations.

We use the do syntax here which is short for passing in a function as the first argument to postprocess. The do syntax to pass in a post-processing function is shown in Listing 22.
```
postprocess(sims) do gen, obs
    ...
end
```
Listing 22: Compare generated quantities and observations

where gen is the named tuple of all generated quantities from 1 simulation run and obs is the named tuple of observations. For instance, to query the ratio of simulated observations conc that are higher than the observed quantity conc at the observations' time points, you can use the code in Listing 23. This is sometimes called the Bayesian p-value, which is expected to be around 0.5.
```
postprocess(sims) do gen, obs
    sum(gen.conc .> obs.conc) / length(gen.conc)
end
```
Listing 23: Bayesian p-value per simulation

gen.conc is the simulated vector of conc whose length is the same as the number of observation time points. obs.conc is the observation vector conc. gen.conc .> obs.conc returns a vector of true/false, with one element for each time point. The sum of this vector gives the number of time points where the simulation was higher than the observation. Dividing by the number of time points gives the ratio. When using postprocess in this way, the output is always a vector of the query results, one number for each simulation. In the query function body, you can choose to use only gen or only obs but the header must always have both gen and obs.

The third way to use the postprocess function is to compute summary statistics of the simulated quantities or of functions of the simulated quantities. Summary statistics can be computed by passing a statistic function as the stat keyword argument. For example, in order to estimate the probability that a simulated value is higher than an observation, you can use the code in Listing 24.
```
postprocess(sims, stat = mean) do gen, obs
    gen.conc .> obs.conc
end
```
Listing 24: Mean Bayesian p-value

This function will do 2 things:

1. Concatenate the query results (e.g. gen.conc .> obs.conc) from all the simulation runs into a single vector.
2. Compute the mean value of the combined vector.

Alternatively, you can use the mean function to do the same thing without using the keyword argument. Listing 25 will call the postprocess function under the hood.
```
mean(sims) do gen, obs
    gen.conc .> obs.conc
end
```
Listing 25: Mean Bayesian p-value using the mean function

The result of this operation will be a scalar equal to the mean value of the _concatenated_ vector of queries. In order to get the probability that the simulated quantity is higher than the observation _for each time point_, you can call the mean function externally as shown in Listing 26.
```
generated = postprocess(sims) do gen, obs
    gen.conc .> obs.conc
end
mean(generated)
```
Listing 26: Per-time-point Bayesian p-value using the mean function externally

This returns a vector of probabilities of the same length as the number of time points, without concatenating all the queries together. To compute a summary statistic of all the generated quantities, you can also use the code in Listing 27.
```
postprocess(sims, stat = mean)
```
Listing 27: Mean generated quantities without specifying a post-processing function

This is also equivalent to the shorter version in Listing 28.
```
mean(sims)
```
Listing 28: Shorter equivalent of Listing 27

Besides mean, you can also use any of the following summary statistic functions in the same way:

* std for element-wise standard deviation
* var for element-wise variance
* cor for correlation between multiple quantities
* cov for covariance between multiple quantities

These functions can be passed in as the stat keyword argument to postprocess or they can be used in the short form, e.g.:
```
generated = postprocess(sims, stat = std) do gen, obs
    ...
end
std(sims) do gen, obs
    ...
end
std(sims)

generated = postprocess(sims, stat = var) do gen, obs
    ...
end
var(sims) do gen, obs
    ...
end
var(sims)
```
The cor and cov statistics are unique in that they require a post-processing function which outputs a vector. For example, to estimate the correlation between the CL and Vc parameters in a 1-compartment model, you can use either of the following:
```
postprocess(sims, stat = cor) do gen, obs
    [gen.CL[1], gen.Vc[1]]
end

cor(sims) do gen, obs
    [gen.CL[1], gen.Vc[1]]
end
```
Note that gen.CL is a vector of simulated CL values for all the time points. But since the value is constant across time, we can use the first element gen.CL[1] only. cov can be used instead of cor to compute the covariance matrix. The output of this operation is either a correlation or a covariance matrix.

### Non-Compartmental Analysis (NCA) Parameters

You can easily integrate non-compartmental analysis (NCA) parameters such as the area under the curve (auc) and maximum concentration (cmax) in the simulation using the @observed block in the Pumas model, e.g:
```
@derived begin
    cp = @. Central / Vc
    conc ~ @. LogNormal(log(cp), σ)
end
@observed begin
    nca = @nca cp
    auc = NCA.auc(nca)
    cmax = NCA.cmax(nca)
end
```
The auc and cmax will now get stored in the output of simobs and can be queried using the various summary statistics queries. For example, the following estimates the probability that auc is greater than 20 and cmax is less than 30, given the simulations sims (the output from simobs).
```
mean(sims) do gen, obs
    gen.auc > 20 && gen.cmax < 30
end
```
For more on NCA integration and parameters, please refer to the Pumas documentation (docs.pumas.ai).

### Crossvalidation and Expected Log Predictive Density

Crossvalidation is a technique for evaluating a model's predictive accuracy on unseen data, aka out-of-sample data. This is done by systematically leaving some data out from the training data8 when performing Bayesian inference, followed by an evaluation of the average predictive accuracy of the MCMC samples using the data that was left out. Each iteration of the crossvalidation routine leaves a different subset of the data out of training and uses some or all of it for evaluating the prediction accuracy. The metric used for evaluating the prediction accuracy is typically the (conditional or marginal) log likelihood of each of the MCMC samples given the unseen data. The predictive performance metric is then averaged out across the iterations of the crossvalidation routine.
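As a hedged sketch of the quantity being estimated (the notation below is ours and is not produced by Pumas), a common form of the leave-one-out estimator of the expected log predictive density is:
\[\widehat{\mathrm{elpd}}=\sum_{k=1}^{K}\log\left(\frac{1}{S}\sum_{s=1}^{S}p\big(y_{k}\mid\vartheta^{(s)}_{-k}\big)\right)\]
where \(y_{k}\) is the \(k^{th}\) left-out data point (an observation or a subject, depending on how the data is split), \(\vartheta^{(s)}_{-k}\) is the \(s^{th}\) of \(S\) MCMC samples obtained without using \(y_{k}\), and \(p(y_{k}\mid\vartheta^{(s)}_{-k})\) is the conditional or marginal likelihood described above.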
Footnote 8: data used during the Bayesian inference to get samples from the posterior You can do almost all types of crossvalidation for hierarchical models using Pumas. The main function that performs crossvalidation in Pumas is the crossvalidate function. There are 2 inputs to crossvalidate: 1. The MCMC result from fit or discard. We will call this tres. 2. The crossvalidation algorithm. Let's call it cv_method. The call syntax is cv_res = crossvalidate(res, cv_method). To estimate the expected log predictive density (ELPD) given cv_res, you can use the elpd function: elpd(cv_res). There are 2 families of the crossvalidation algorithms available in Pumas: 1. Resampling-based CV which performs the MCMC again for each data point or subset left out. This algorithm is constructed using the ExactCrossvalidation struct. 2. PSIS-based CV (Vehtari et al, 2015) which uses the importance sampling and weight smoothing approach to avoid the need for resampling. This algorithm is constructed using the PSISCrossvalidation struct. The constructor for a resampling-based CV method is: ``` cv_method=ExactCrossvalidation(;split_method,split_by,ensembles&=EnsembleThreads()) ``` where split_method and split_by are keyword arguments that must be set and ensemblealg defaults to the use of multi-threading. This defines an instance of the ExactCrossvalidation algorithm for crossvalidation. In this algorithm, the fitting is re-run while leaving out a subset of the data each time. The way by which the data is split between training and validation sets is determined using the keyword arguments split_method and split_by. The split_method argument can be of any of the following types: 1. LeaveK for leave-K-out crossvalidation 2. KFold for K-fold crossvalidation 3. LeaveFutureK for leaving K future points at a time The split_by keyword argument can be of any of the following types: 1. BySubject for leaving out subjects 2. ByObservation for leaving out individual observations per subject Each of the data-splitting methods will be discussed in the next subsection. Similar to the resampling-based crossvalidation, the constructor for a PSIS-CV method is: ``` cv_method=PSISCrossvalidation(;split_method,split_by,pareto_shape_threshold=0.7) ``` where split_method and split_by are keyword arguments that must be set. This defines an instance of the PSISCrossvalidation algorithm for crossvalidation. The split_method and split_by keyword arguments are similar to the resampling-based CV case. The pareto_shape_threshold = 0.7 keyword argument will result in the removal of any CV run that leads to a Pareto shape parameter more than 0.7 when computing the expected log predictive density (ELPD) estimate. This can be useful to avoid a few bad runs rendering the whole PSIS-CV method useless. Ideally one would re-run the inference for the subset of CV runs where the shape parameter exceeded the threshold but this is not implemented in Pumas yet as of the time of this writing. Follow the Pumas documentation (docs.pumas.ai) for updates on the latest features available. #### 3.17.1 Leave-K-Out The constructor for the leave-K-out data splitting algorithm is: ``` split_method=LeaveK(;K=5,shuffle=false,rng=nothing) ``` In this algorithm, the data is split multiple times into 2 disjoint groups, each time starting from the full data set. The 2 groups are typically called training and validation subsets, where the validation subset has K data points. 
In the next iteration, the whole data set is re-split using another disjoint subset of K data points as the validation subset. This process is done repeatedly until almost each data point has shown up in 1 and only 1 validation subset. The data is typically a vector of some sort, e.g. observations or subjects, and the splittings are order-dependent. Before performing the splittings, you can randomly shuffle the data vector by setting the shuffle keyword argument to true (default is false) getting rid of the sensitivity to the original order of the data. You can additionally pass an optional pseudo-random number generator rng to control the pseudo-randomness for reproducibility. Assume some dummy original data ["A", "B", "C", "D"] which resembles the subjects or observations. Leave-one-out splitting without shuffling results in the data splittings shown in Table 3. where each data point shows once and only once in a validation subset. Leave-2-out splitting without shuffling results in the data splittings shown in Table 4. #### 3.17.2 K-Fold The constructor for the K-fold data splitting algorithm is: ``` split_method=KFold(;K=5,shuffle=false,rng=nothing) ``` In this algorithm, the data is split K times into 2 disjoint groups, each time starting from the full data set. The 2 groups are typically called training and validation subsets, where the validation subset has floor(N / K)9 data points, N being the total number of data points. In the next iteration, the whole data set is re-split using another disjoint validation subset of floor(N / K) different points, disjoint from the previous validation subsets. This process is done iteratively until almost each data point has shown up in 1 and only 1 validation subset. If N is divisible by K, each point will show up in 1 and only 1 validation subset. Otherwise, the remaining points will be part of the training subset for all the splittings and will not show up in any validation subset. Footnote 9: floor is a function that rounds down to an integer. The data is typically a vector of some sort, e.g. observations or subjects, and the splittings are order-dependent. Before performing the splittings, you can randomly shuffle the data vector by setting the shuffle keyword argument to true (default is false) getting rid of the sensitivity to the original order of the data. You can additionally pass an optional pseudo-random number generator rng to control the pseudo-randomness for reproducibility. Assume some dummy original data ["A", "B", "C", "D"] which resembles the subjects or observations. 4-fold splitting without shuffling results in the data splittings shown in Table 5, where each data point showed once and only once in a validation subset. 2-fold splitting without shuffling results in the data splittings shown in Table 6. #### 3.17.3 Leave-Future-K The constructor for the leave-future-K data splitting algorithm is: ``` split_method=LeaveFutureK(;K=1,minimum=2) ``` In this algorithm, the data is assumed to be a time series. The goal is to split the data into "past" and "future". Using this algorithm, the data is split multiple times into 3 disjoint groups where the third group is discarded, each time starting from the full data set. The first 2 groups are typically called the past/training subset and the future/validation subset, where the future validation subset has K future data points. In the next iteration, the whole data set is then re-split using another disjoint subset of K data points as the future validation subset. 
This process is done iteratively starting from the full data set and moving backward in time until the training subset has less than a pre-set minimum number of points remaining. Using this method, each data point can show up in at most 1 future validation subset. The default values of K and minimum are 1 and 2 respectively. Assume the original data is ["A", "B", "C", "D", "E", "F"]. Leave-1-future-out splitting with minimum=2 results in the data splittings shown in Table 7 where the remaining points are discarded: Leave-2-future-out splitting with minimum=2 results in the data splittings shown in Table 8. \begin{table} \begin{tabular}{c|c} \hline Training subset & Validation subset \\ \hline ["A", "B"] & ["C", "D"] \\ \hline ["C", "D"] & ["A", "B"] \\ \hline \end{tabular} \end{table} Table 6: 2-fold splitting. \begin{table} \begin{tabular}{c|c} \hline Training subset & Validation subset \\ \hline ["A", "B", "C"] & ["D"] \\ \hline ["A", "B", "D"] & ["C"] \\ \hline ["A", "C", "D"] & ["B"] \\ \hline ["B", "C", "D"] & ["A"] \\ \hline \end{tabular} \end{table} Table 3: Leave-one-out splitting. \begin{table} \begin{tabular}{c|c} \hline Training subset & Validation subset \\ \hline ["A", "B"] & ["C", "D"] \\ \hline ["A", "B", "D"] & ["C"] \\ \hline ["A", "C", "D"] & ["B"] \\ \hline ["B", "C", "D"] & ["A"] \\ \hline \end{tabular} \end{table} Table 5: 4-fold splittings. #### 3.17.4 Subject-based Splitting The constructor for the subject-based splitting method is: split_by = BySubject(; marginal = LaplaceI()) Using this method, each subject is treated as a single data point. The predictive log-likelihood computed for each subject can be either the marginal log-likelihood or conditional log-likelihood. This method has one keyword argument, marginal. If marginal is set to nothing, the predictive log-likelihood computed for each subject is the conditional log-likelihood using the typical values for the parameters. Otherwise, the predictive log-likelihood computed for each subject is the marginal log-likelihood using marginal as the marginalization algorithm. The default value of marginal is LaplaceI() which uses the Laplace method to integrate out the subject-specific parameters. Other alternatives include: FOCE() and FO(). #### 3.17.5 Observation-based Splitting The constructor for the observation-based splitting method is: split_by = ByObservation(; allsubjects = true) Using this method, each observation or collection of observations is treated as a single data point. When computing predictive log-likelihood using this method, the predictive log-likelihood computed is the conditional log-likelihood of one or more observations for one or more subjects. This method has one keyword argument, allsubjects. If allsubjects is set to true (the default value), the \(i^{th}\) observation of each subject are all grouped together into a single data point. This assumes all subjects have the same number of observations. If allsubjects is set to false, then each observation for each subject is its individual data point. Assume there are 2 subjects and 3 observations per subject. When using split_method = LeaveK(K = 1) as the splitting method together with split_by = ByObservation(allsubjects = false), the training and validation splittings are shown in Table 9. On the other hand, if allsubjects is set to true, the training and validation splittings are shown in Table 10. #### 3.17.6 Examples Assume there are 5 subjects and 10 observations per subject and that res is the result of the fit or discard function. 
The following are some of the combinations in which the above inputs can be used:

* Leave-one-observation-out cross-validation, leaving 1 observation out for all the subjects at a time. allsubjects = true means that the same observation index is removed for all the subjects, e.g. the 10th observation of every subject is used for validation in the first run, then the 9th observation is used for validation in the second run, etc.
```
split_method = LeaveK(K = 1)
split_by = ByObservation(allsubjects = true)
cv_method = ExactCrossvalidation(; split_method = split_method, split_by = split_by, ensemblealg = EnsembleThreads())
cv_res = crossvalidate(res, cv_method)
```

\begin{table}
\begin{tabular}{l|l}
\hline
Past subset & Future subset \\
\hline
[“A”, “B”, “C”, “D”, “E”] & [“F”] \\
\hline
[“A”, “B”, “C”, “D”] & [“E”] \\
\hline
[“A”, “B”, “C”] & [“D”] \\
\hline
[“A”, “B”] & [“C”] \\
\hline
\end{tabular}
\end{table}
Table 7: Leave-1-future-out splittings.

\begin{table}
\begin{tabular}{l|l}
\hline
Past subset & Future subset \\
\hline
[“A”, “B”, “C”, “D”] & [“E”, “F”] \\
\hline
[“A”, “B”] & [“C”, “D”] \\
\hline
\end{tabular}
\end{table}
Table 8: Leave-2-future-out splittings.

\begin{table}
\begin{tabular}{l|l}
\hline
Training subset & Validation subset \\
\hline
Subj 1 (obs 1, 2, 3), subj 2 (obs 1, 2) & Subj 2 (obs 3) \\
\hline
Subj 1 (obs 1, 2, 3), subj 2 (obs 1, 3) & Subj 2 (obs 2) \\
\hline
Subj 1 (obs 1, 2, 3), subj 2 (obs 2, 3) & Subj 2 (obs 1) \\
\hline
Subj 1 (obs 1, 2), subj 2 (obs 1, 2, 3) & Subj 1 (obs 3) \\
\hline
Subj 1 (obs 1, 3), subj 2 (obs 1, 2, 3) & Subj 1 (obs 2) \\
\hline
Subj 1 (obs 2, 3), subj 2 (obs 1, 2, 3) & Subj 1 (obs 1) \\
\hline
\end{tabular}
\end{table}
Table 9: Training and validation splits using split_method = LeaveK(K = 1) and split_by = ByObservation(allsubjects = false).

\begin{table}
\begin{tabular}{l|l}
\hline
Training subset & Validation subset \\
\hline
Subj 1 (obs 1, 2), subj 2 (obs 1, 2) & Subj 1 (obs 3), subj 2 (obs 3) \\
\hline
Subj 1 (obs 1, 3), subj 2 (obs 1, 3) & Subj 1 (obs 2), subj 2 (obs 2) \\
\hline
Subj 1 (obs 2, 3), subj 2 (obs 2, 3) & Subj 1 (obs 1), subj 2 (obs 1) \\
\hline
\end{tabular}
\end{table}
Table 10: Training and validation splits using split_method = LeaveK(K = 1) and split_by = ByObservation(allsubjects = true).

### Information Criteria

The ELPD model evaluation metric computed from the crossvalidation output is theoretically similar to the so-called Widely Applicable Information Criterion (WAIC) (Vehtari et al, 2015). More precisely, -2 times the ELPD is comparable to the WAIC. A higher ELPD is better and a lower WAIC is better.
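As an additional hedged sketch (not one of the original examples), the same res object could be crossvalidated with the PSIS-based algorithm, leaving one subject out at a time, and the ELPD estimated from the result:
```
cv_method = PSISCrossvalidation(; split_method = LeaveK(K = 1), split_by = BySubject())
cv_res = crossvalidate(res, cv_method)
elpd(cv_res)
```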
When the ELPD is estimated using the PSIS leave-one-out (LOO) crossvalidation method, -2 times the ELPD estimate is sometimes known as the LOOIC. Besides the ELPD estimates, Pumas also supports common information criteria (Burnham and Anderson, 2002), such as the Akaike information criterion (AIC), the corrected AIC (AICc), the Bayesian information criterion (BIC), and the WAIC. To estimate these, we first need to compute the pointwise log-likelihoods by some definition of "pointwise". To do this, you can call:
```
pl = loglikelihood(tres; split_method, split_by)
```
where tres is the output of fit or discard and split_method and split_by are the keyword arguments explained in Section 3.17 which define what a point is and which log-likelihood to compute. To calculate the information criteria using the pointwise log-likelihoods, you can then use any of the following functions:
```
Pumas.aic(pl)
Pumas.aicc(pl)
Pumas.bic(pl)
Pumas.waic(pl)
```

## 4 Background and Intuition

In this section, the notation used and the mathematical background of Bayesian inference in pharmacometrics will be presented. We include a brief introduction of Bayesian statistics and where it's useful in pharmacometrics, followed by an intuitive explanation of Markov Chain Monte Carlo (MCMC) and the No-U-Turn sampler (NUTS) algorithm (Hoffman and Gelman, 2014; Betancourt, 2017). We then discuss the intuition and some of the math behind prior selection, MCMC convergence diagnostics, and cross-validation and model selection. This section should prepare the readers for using the Bayesian workflow in Pumas or any standard Bayesian analysis tool by giving them _working knowledge_ of highly technical concepts, using intuition and light mathematics.

### Notation

For convenience of notation, for the rest of this paper we use:

1. \(\theta\) **to refer to all the population-level parameters including all of \((\theta,\Omega,\sigma)\)**.
2. \(\eta\) to refer to the patient-specific parameters for all the subjects, where \(\eta_{i}\) refers to the subject-specific parameters of subject \(i\).
3. \(x\) to refer to the covariates for all the subjects, where \(x_{i}\) refers to the subject-specific covariates of subject \(i\). To be more rigorous, \(x_{i}\) also includes all the time points at which the observations were made for subject \(i\).
4. \(y\) to refer to the observed response for all the subjects, where \(y_{i}\) refers to the subject-specific response of subject \(i\).
5. \(p(A=\alpha\mid B=\beta)\) to denote the probability density/mass of the random variable \(A\) taking the value \(\alpha\) conditional on the variable \(B\) taking the value \(\beta\). \(B=\beta\) can generally be replaced with multiple variables, e.g. \(p(A=\alpha\mid B=\beta,C=c)\). If \(A\) is a continuous random variable, \(p(A=\alpha\mid B=\beta)\) refers to the probability _density_ of \(A=\alpha\) conditioned on \(B=\beta\). If \(A\) is a discrete random variable, \(p(A=\alpha\mid B=\beta)\) refers to the probability _mass_ of \(A=\alpha\) conditioned on \(B=\beta\). When \(\alpha\) and/or \(\beta\) are dropped, they can be replaced by the value \(A\) and/or \(B\) respectively, e.g. \(p(A=A\mid B=B)\), to be understood from the context. This is a slight abuse of notation using the same symbol \(A\)/\(B\) to refer to both the random variable and the specific value in its support but this is common in probability theory.
6.
\(p(A,B\mid C)\) to denote \(p(A\mid B,C)\times p(B\mid C)\) which is equal to \(p(B\mid A,C)\times p(A\mid C)\) which could be the product of probability densities and/or masses depending on the supports of \(A\) and \(B\). 7. \(D\) to refer to all the observed data including both \(x\) and \(y\). 8. \(p(y_{i}\mid x_{i},\eta_{i}\theta)\) to denote the _conditional_ likelihood of \((\eta_{i},\theta)\) given subject \(i\)'s observations \((x_{i},y_{i})\). Recall that the likelihood is a function of the parameters given the data but it is also the probability of observing the data given the model's parameters. 9. \(p(y\mid x,\eta,\theta)\) to denote the _conditional_ likelihood of \((\eta,\theta)\) given all the subjects' observations \((x,y)\). Given the hierarchical nature of pharmacometric models, this is equal to \(\prod_{i}p(y_{i}\mid x_{i},\eta_{i}\theta)\). 10. \(p(y_{i}\mid x_{i},\theta)\) to denote the _marginal_ likelihood of \(\theta\) after marginalizing \(\eta_{i}\) given subject \(i\)'s observations \((x_{i},y_{i})\). This is equal to \(\int p(y_{i}\mid x_{i},\eta_{i}\theta)\cdot p(\eta_{i}\mid\theta)d\eta_{i}\). 11. \(p(y\mid x,\theta)\) to denote the _marginal_ likelihood of \(\theta\) given all the subjects' observations \((x,y)\). Given the hierarchical nature of pharmacometric models, this is equal to \(\prod_{i}p(y_{i}\mid x_{i},\theta)\). Some additional assumptions to keep in mind are that: 1. \(y_{i}\) may not be a scalar, instead, it could and often is a subject-specific time series response or multiple such time series responses. 2. \(\eta_{i}\) is not generally a scalar, it can be composed of multiple subject-specific parameters with a different prior distribution assigned to each parameter. 3. \(x_{i}\) is not generally a scalar, it can be multiple time-independent values or a combination of time-independent values and some time series. It also includes all the time points at which its corresponding \(y_{i}\) is observed. 4. \(p(A\mid B)\) will be used in equations to denote the probability density/mass function, but in text it may be used to also refer to the distribution itself as an object/concept, depending on the context. Figure 13 shows the typical model structure in pharmacometrics using the above notation when there are 3 subjects in the population. Additionally, Figure 14 shows a dummy Pumas model highlighting where each variable in Figure 13 is defined. Figure 14: A dummy Pumas model showing where each variable in Figure 13 is defined. Figure 13: Schematic of the hierarchical structure of models typically used in pharmacometrics when there are only 3 subjects in the population. The schematic can be trivially extended to more subjects. See the notation section (4.1) to understand the notations. ### Bayesian Statistics Bayesian Statistics is the use of **Bayes theorem** as the procedure to estimate parameters of interest or unobserved data (Gelman et al, 2013a). Bayes' theorem, named after Thomas Bayes10, tells us how to "invert" conditional probabilities going from \(p(B\mid A,C)\) to \(p(A\mid B,C)\) where \(C\) is optional: Footnote 10: **Thomas Bayes** (1701 - 1761) was a statistician, philosopher, and Presbyterian minister who is known for formulating a specific case of the theorem that bears his name: Bayes’ theorem. Bayes never published what would become his most famous accomplishment; his notes were edited and published posthumously by his friend **Richard Price**. 
The theorem's official name is **Bayes-Price-Laplace**, because **Bayes** was the first to discover it, **Price** got hold of his notes, transcribed them into mathematical notation, and read them to the Royal Society of London, and **Laplace** independently rediscovered the theorem, without prior contact with either, at the end of the 18th century in France while using probability for statistical inference with census data in the Napoleonic era.

\[p(A\mid B,C)=\frac{p(A\mid C)\cdot p(B\mid A,C)}{p(B\mid C)} \tag{3}\]

In the context of statistics, Bayes' rule can be used to calculate the probability that each hypothesis is true given the observations. Assume we have 10 hypotheses \(H_{1},\ldots,H_{10}\) where each has a prior probability \(p(\text{truth}=H_{i})\). We can use Bayes' rule to calculate the posterior probability \(p(\text{truth}=H_{i}\mid\text{data})\) for each hypothesis \(H_{i}\) using:
\[p(\text{truth}=H_{i}\mid\text{data})=\frac{p(\text{data}\mid\text{truth}=H_{i})\cdot p(\text{truth}=H_{i})}{p(\text{data})} \tag{4}\]
where the denominator can be written as
\[p(\text{data})=\sum_{i=1}^{10}p(\text{data}\mid\text{truth}=H_{i})\cdot p(\text{truth}=H_{i})\]
which is the sum of the likelihoods of all the hypotheses, \(p(\text{data}\mid\text{truth}=H_{i})\), weighted by their respective prior probabilities \(p(\text{truth}=H_{i})\). While the denominator has a profound statistical meaning, it can also be viewed pragmatically as a normalization constant chosen such that \(\sum_{i=1}^{10}p(\text{truth}=H_{i}\mid\text{data})=1\). Since the denominator is the sum of the numerator terms for all \(i\), the sum of the resulting posterior probabilities is guaranteed to be 1.

\(p(\text{truth}=H_{i})\) describes what is commonly known as the prior probability of a hypothesis. This can encode the modeller's domain knowledge, giving unequal probabilities to different hypotheses upfront prior to observing any data. Alternatively, assigning equal probability to each hypothesis can also be done. Given multiple hypotheses and some data, the hypothesis with the highest probability given the observed data \(p(\text{truth}=H_{i}\mid\text{data})\) is the most plausible one. A hypothesis is typically a combination of a model and parameter values for the model's parameters.

In the pharmacometrics context, let each set of parameter values (\(\eta\), \(\theta\)) given a specific model be a hypothesis. In this case, we have a continuum of hypotheses rather than a discrete set of hypotheses. Assuming a single model which we condition upon by putting it on the right-hand-side of the \(\mid\), and using pharmacometrics notation, Bayes' theorem for the hypotheses continuum can be written as:
\[p(\eta,\theta\mid x,\text{model},y)=\frac{p(y\mid x,\text{model},\eta,\theta)\cdot p(\eta,\theta\mid x,\text{model})}{p(y\mid x,\text{model})} \tag{5}\]
where \(A\) in the general form is replaced by \((\eta,\theta)\), \(B\) is \(y\) and \(C\) is \((x,\text{model})\)11. Note that we conditioned on \(x\) everywhere in the above equation because we are generally not interested in modelling the probability of \(x\) per se, but rather we are interested in the probability of \(y\) given \(x\) (\(y\mid x\)).

Footnote 11: We can alternatively make the model part of \(A\) instead of \(C\) when model selection is relevant but we don't consider this case for simplicity.
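As a small, made-up numeric illustration of Eq. 4 with only two hypotheses: suppose \(p(\text{truth}=H_{1})=p(\text{truth}=H_{2})=0.5\), \(p(\text{data}\mid\text{truth}=H_{1})=0.2\) and \(p(\text{data}\mid\text{truth}=H_{2})=0.1\). Then \(p(\text{data})=0.5\cdot 0.2+0.5\cdot 0.1=0.15\), so \(p(\text{truth}=H_{1}\mid\text{data})=0.1/0.15\approx 0.67\) and \(p(\text{truth}=H_{2}\mid\text{data})=0.05/0.15\approx 0.33\), which sum to 1 as expected.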
Also note that \(p(\eta,\theta\mid x,\text{model})\), known as the prior probability, can be replaced by \(p(\eta,\theta\mid\text{model})\) since the prior typically doesn't depend on the covariates in pharmacometrics. In Pumas syntax, this means that the covariates from the @covariates block are not used in the prior specification in the @param or @random blocks. When only a single model is considered, so we can drop it from the equations, and the prior is independent of the covariates, Bayes' theorem simplifies to: \[p(\eta,\theta\mid x,y)=\frac{p(y\mid x,\eta,\theta)\cdot p(\eta,\theta)}{p(y\mid x)} \tag{6}\] To further simplify the notation, we denote \((x,y)\) by \(D\), \(p(y\mid x)\) by \(p(D)\) and \(p(y\mid x,\eta,\theta)\) by \(p(D\mid\eta,\theta)\)12. This gives us the more standard looking Bayes' theorem: Footnote 12: This is a slight abuse of notation because we chose to put \(D\) on the left-hand-side even though it includes \(x\) which is on the right-hand-side. \[p(\eta,\theta\mid D)=\frac{p(D\mid\eta,\theta)\cdot p(\eta,\theta)}{p(D)} \tag{7}\] Pharmacometric models typically describe some data generating process from which we can simulate synthetic data \(y\) given: 1) a set of covariates \(x\), 2) a specific model, and 3) a set of parameters \((\eta,\theta)\). Such a model describes a probability distribution for the response \(y\), \(p(D\mid\eta,\theta)\). This makes computing \(p(D\mid\eta,\theta)\) computationally straightforward. Similarly, the prior probability \(p(\eta,\theta)\) is typically defined in terms of standard probability distributions with known and computationally tractable probability density or mass functions. The main issue when trying to apply Bayes' theorem in practice is the calculation of the denominator term \(p(D)=p(y\mid x)\). When all the parameters are continuous, this can be written as an integral: \[p(D)=\int\int p(D\mid\eta,\theta)\cdot p(\eta,\theta)\,d\eta d\theta \tag{8}\] This is known as the marginal probability of the data (also known as the evidence or normalization constant) which is the weighted average of the conditional probabilities \(p(D\mid\eta,\theta)\) given all possible values of \((\eta,\theta)\) weighted by their prior probabilities. \(p(D\mid\eta,\theta)\) is typically known as the conditional likelihood and \(p(\eta,\theta\mid D)\) is known as the posterior probability of \((\eta,\theta)\) after observing \(D\). To better make sense of \(p(D)\), it's helpful to bring back the conditioning on the model and think of \(p(D\mid\) model) as the marginal _likelihood_13 of the model after integrating out all the population and subject-specific parameters. Footnote 13: Note that in statistics in general, \(p(D\mid\theta)\) is the probability of \(D\) given \(\theta\) and the _likelihood_ of \(\theta\) given \(D\). The computation of the high dimensional integral over \((\eta,\theta)\) is intractable in the general case. But why do we need the posterior probability? Often we are more interested in making predictions and using the posterior probability to weigh all the likely hypotheses when making predictions. Assume \(\hat{y}\) is either: 1. The unknown response of a new subject given the subject's known covariates \(\hat{x}\), or 2. The unknown partial response (e.g. at future time points) of an existing subject with a previously observed response that is part of \(D\) and some known covariates \(\hat{x}\). The covariates \(\hat{x}\) include the classic pharmaceu covariates, e.g. 
age and weight, but also include the time points at which the response \(\hat{y}\) is defined if it is a time series. One can write the average prediction for \(\hat{y}\) (using the posterior probability as weights) as: \[E[y\mid x=\hat{x},D]=\int\hat{y}\times p(y=\hat{y}\mid x=\hat{x},D)\,d\hat{y} \tag{9}\] where \(p(y=\hat{y}\mid x=\hat{x},D)\) is defined as: \[p(y=\hat{y}\mid x=\hat{x},D)=\] \[\int\int p(y=\hat{y}\mid x=\hat{x},\eta,\theta)\times p(\eta, \theta\mid D)\,d\eta d\theta \tag{10}\] where \(p(\eta,\theta\mid D)\)14 is the posterior probability and \(D\) refers to all the previously seen data \((x,y)\) excluding \(\hat{x}\). There are 2 problems with the above integration: Footnote 14: To use the product rule for probabilities in the standard way, we should have used \(p(\eta,\theta\mid\hat{x},D)\) instead but \(\hat{x}\) doesn’t contribute to the posterior given that \(\hat{y}\) is not observed yet, so the 2 terms are equal. 1. We cannot evaluate \(p(\eta,\theta\mid D)\) using Eq 7 because computing \(p(D)\) using Eq 8 requires a high dimensional integral which is computationally intractable. 2. Even if we are able to estimate \(p(D)\), computing \(p(y=\hat{y}\mid x=\hat{x},D)\) using Eq 10 requires another high dimensional integral over \((\eta,\theta)\). Both problems are related to the inability to tractably compute high dimensional integrals. When some or all of the parameters are discrete, the corresponding integrals become summations instead. The summation is still intractable when the parameters' dimension is high because there is a combinatorial growth of the possible combinations of values of \(\eta,\theta\) as their dimension increases. In pharmacometrics, parameters are usually continuous so we focus on continuous parameters for the rest of this paper. ### Prior Selection In this section, the main focus is on understanding how to choose good priors. We will attempt to answer the following questions: 1. When and how are priors useful? 2. When and how are priors harmful? The specifics of which priors are available in Pumas, their parameters, and how to define them can be found in the workflow section (Section 3). #### 4.3.1 Overview A prior distribution over a parameter in Bayesian statistics represents the state of belief in the values of a parameter _prior_ to observing any data. For instance, a univariate Gaussian prior distribution of \(N(0,2.0)\) with mean 0 and standard deviation 2, when used on a scalar parameter, means that we think this parameter has a probability mass of \(\approx 99.7\%\) of being between -6 and 6. More generally, a prior distribution with probability density function \(p(x)\) means that we think the parameter has a probability mass of \(\int_{a}^{b}p(x)dx\) (area under the curve) of being between \(a\) and \(b\). Once data is observed, the prior distribution is _updated_ using the observations and the likelihood values to arrive at the _posterior_ distribution. The posterior distribution represents the state of belief in the values of a parameter _after_ the data has been observed. If even more data is collected, in theory we can use the old posterior as the new prior distribution in the analysis of the new data only15. In practice, because the posterior distribution typically doesn't have a closed form solution, it can be tricky to use it as a prior in any new analysis. Therefore, a full analysis using all of the old and new data together with the _old priors_ may have to be performed. 
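As a minimal sketch (using Distributions.jl directly, outside of any Pumas model) of how the probability-mass statement above can be checked numerically:
```
using Distributions

# Mass of a Normal(0, 2) prior between -6 and 6; approximately 0.997 (the "3 sigma" rule)
cdf(Normal(0, 2), 6) - cdf(Normal(0, 2), -6)
```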
Footnote 15: We can't use the data twice for the same analysis. If the old data was already used to define the prior in the new analysis, then only the new data should be used in the new analysis.

Footnote 16: There are algorithms for updating the posterior samples from a previous study given some new data but we don't cover these here in this paper.

In some ways, the prior distribution is analogous to the domain of a parameter in the non-Bayesian workflow and the posterior distribution is like the maximum likelihood estimate. Before observing data, we can only speak of the domain of the parameter since we don't know its value. After observing data, we can fit the model to the data and identify the best fitting parameter values. In non-Bayesian analyses, we typically choose a domain for each parameter to help the optimization algorithm reach reasonable parameter values without wasting time trying values outside of the reasonable domains. For instance, we know that any covariance matrix parameter has to be positive definite, so the optimization algorithm should never have to explore values for a covariance matrix parameter that violate this constraint. Prior distributions generalize this idea by not only having an underlying domain implied by the distribution (known as the support of the distribution) but also by allowing the specification of differential preference for certain parameter values over others.

The ability to specify preference in parameter values is a powerful tool that can also be very dangerous if used in a wrong way. When used right, it can allow domain knowledge and results from similar previous studies to be reused in the current study. But when used wrong, it can be used to mask bad science and unethical behaviour using sophisticated math that is hard to scrutinize.

#### 4.3.2 Good, Bad and Harmless Priors

Loosely speaking, priors can be categorized into 2 categories:

1. Strong priors, i.e. informative priors, and
2. Weak priors, i.e. weakly informative priors.

These are loose categories because in reality, only the relative strength of a prior compared to the likelihood is significant. Recall that the joint probability used to drive the MCMC sampling is given by:
\[p(D,\eta,\theta)=p(D\mid\eta,\theta)\times p(\eta,\theta) \tag{11}\]
which is the product of the prior probability and likelihood value. If a lot of data exist, the likelihood term will dominate most priors and the posterior distribution will be mostly reflective of the parameter values that fit the data well. If not enough data exist, the prior and its choice become more significant since it's possible some choices of the prior will lead it to dominate the above term. When the prior dominates the likelihood, the posterior distribution will be only a slight shift of the prior distribution towards the parameter values that _actually_ fit the data well. In these cases, the danger of abusing the prior in analyses is higher and we will discuss scenarios where that can happen in this section.

Harmless priors are priors that largely mimic the purpose of specifying parameter domains in non-Bayesian workflows. These priors have very little to no preference between parameter values and they can be easily overpowered by even a few data points. For instance, a uniform distribution over \([0,10^{10}]\) for a positive parameter can be considered a harmless prior.
This prior only encodes the domain of the parameter without favouring certain values over others in this particular parameterization. Good and bad priors share one thing in common: they are both "_informative_". More precisely, good priors are informative and bad priors are mis-informative. Table 11 summarizes the various types of informative and mis-informative priors. A good informative prior is one that has a good scientific basis and doesn't contradict the data. Given Table 11, there are 2 general ways to define sound priors: 1. Define a weakly informative prior18 and ensure that the conclusion of the study does not change in the limit as the prior gets weaker and weaker. This is a case where we are letting the data speak for itself without imposing our own bias on the results. In this case, we are only using Bayesian inference for its probabilistic soundness and ability to quantify the total uncertainty in the parameters and response even when the model is non-identifiable and/or we only have a few data points. Footnote 18: Very weak priors are sometimes called diffuse priors. 2. Use similar previous studies to guide the prior choice and test that it doesn't contradict the new data. In this case, more justification of the prior distribution is necessary and a proof that the prior doesn't contradict the data is required. One way to show that the prior doesn't contradict the data is to start with no observations at all and with a weakened version of the strong prior, e.g. by increasing the standard deviation. You can then incrementally make the prior stronger again (e.g. by decreasing the standard deviation back) until it reaches the desired level of strength, followed by incrementally adding the (previously removed) observations back to the analysis. If doing so and re-running the analysis at every increment shows a consistent trend in all of the significant statistics (e.g. the probability that the drug is effective), then the prior's strength is aligned with the story told by the data. This can be done using a sequence of prior simulations (when all the data is removed) followed by a combination of MCMC runs and posterior simulations (when the data is added back). For more methods for detecting prior-data conflicts and model mis-informativeness, the readers are referred to Kallioinen et al (2021). Another simple way to detect the potential conflict between data and informative priors is to simulate from the following distribution: \[\begin{split}(\eta,\theta)&\sim p(\eta,\theta)\\ y&\sim p(y\mid\eta,\theta)\end{split} \tag{12}\] where \(p(\eta,\theta)\) is the prior distribution. You can then do a simulation or VPC plot checking the consistency of the data and prior simulations. If the prior is weakly informative or nearly non-informative and we have a lot of data such that the likelihood dominates the prior, prior simulations that are inconsistent with the data in a VPC plot can be ignored. However if the prior is informative, it is based on previous studies and there are not enough data points in the new study to fully dominate the prior, the VPC plot of the prior simulations next to the data can be a useful diagnostic to inspect. Refer to Section 3.2 for how to do this in Pumas. When using a weakly informative prior, you may be inclined to use such simulation plots to select a good prior with a good (but not necessarily tight) coverage of the data. In general, any fine-tuning of the prior based on the data is frowned upon. 
Such fine-tuning is discouraged because we would then be using the same data twice, once to fine-tune the prior and once to update the prior to get the posterior. This can result in under-estimating the posterior's variance, i.e. over-confident posteriors, which in turn leads to over-confident posterior predictions. This is analogous to replicating some or all of the data points in your data in a frequentist workflow, which under-estimates the confidence intervals and standard errors. In cases where sampling fails due to numerical errors, an overly weak prior may be a cause. In this case, one may try changing the prior, e.g. truncating its support to reasonable bounds, to be more consistent with the data. However, only minimal such changes can be allowed in the final analysis and a good post-sampling sensitivity analysis study would be needed to ensure that the conclusion of the study is not locally sensitive to the prior.

\begin{table} \begin{tabular}{|c|c|l|} \hline **Scientific Basis** & **Contradicts Data?** & **Comment** \\ \hline None & Yes & This is cheating. Using a strong prior that contradicts and over-powers the data with no scientific basis can be used to cook any results we desire. For instance, a drug can be ineffective but if the strong prior says it's effective with a very high probability and not enough data exists to counter that strong wrong prior, the conclusion of the "analysis" will be that the drug is effective. \\ \hline None & No & This is bad science. Using a strong prior that's consistent with the data but that over-powers the likelihood with no scientific basis can lead to over-confident predictions and premature conclusions using fewer data points than necessary. This has a similar effect to artificially replicating the data multiple times to artificially reduce the confidence interval in the non-Bayesian workflow. \\ \hline Previous studies & Yes & This is a sign that dis-similar previous studies were used to guide the prior choice and that the prior choice should be revised because it contradicts the new data and can possibly bias the results of the analysis. \\ \hline Previous studies & No & When used with care, results from previous studies (e.g. an approximation of its parameters' posterior distribution) can be used to guide the selection of good informative priors for a new similar study. These priors should not contradict the new data collected but may help us terminate the new study early using a smaller sample size than otherwise possible. Positively concluding a study early, after it's become clear that a drug is effective given all the information available, means that more patients can have access to _truly effective_ drugs earlier. This is especially important for rare disease drug development where collecting more data in a study often means adding years to the clinical trial duration. This is a use case that requires more regulatory and industry agreement on best practices for defining informative prior distributions in such studies with the goal of benefiting the patients. \\ \hline \end{tabular} \end{table} Table 11: The table shows a description of a number of ways to choose **informative prior distributions**. Only the last case is a good use of informative priors. **This table is only applicable to informative priors** that may dominate the likelihood, since weakly informative priors that are dominated by the likelihood typically don't matter as much.
More generally, numerical errors are often a sign that the model is too sensitive to some of the parameter values which may imply a structural problem in the model itself. In this case, one should also consider better fixes than simply truncating the priors, e.g. by simplifying the model or re-parameterizing it to avoid numerical instability. The Bayesian workflow paper by Gelman et al (2020) has excellent recommendations and case studies to take heed from when diagnosing failing MCMC runs. #### Correlation vs Covariance When defining models that have a covariance matrix parameter (e.g. the covariance parameter of the multivariate normal prior distribution typically used for subject-specific parameters in pharmacometrics), one is always faced with the following 2 equivalent parameterizations: 1. Use a covariance matrix parameter \(\Omega\), or 2. Use a vector of standard deviations \(\omega\) and a correlation matrix parameter \(C\). One can easily recover \(\Omega\) from \((\omega,C)\) and vice versa. Let \(D_{\omega}\) be the diagonal matrix whose elements are \(\omega\), \(\omega_{i}\) be the \(i^{th}\) element in \(\omega\), and \(\Omega[i,i]\) be the \(i^{th}\) diagonal element of \(\Omega\). The relationships between \(C\), \(\omega\) and \(\Omega\) is given by: \[\Omega =D_{\omega}\times C\times D_{\omega} \tag{13}\] \[C =D_{\omega}^{-1}\times\Omega\times D_{\omega}^{-1}\] \[\omega_{i} =\sqrt{\Omega[i,i]}\] In the non-Bayesian context, the 2 parameterizations are equivalent. However in Bayesian analysis, one should define a prior distribution on the parameters. Since prior distributions are supposed to encode the domain knowledge and state of belief about the values of the parameters, using more intuitive/interpretable parameterizations is generally recommended to better make sense of the prior distributions used. For this reason, some people prefer to use the second parameterization with separate standard deviations vector and a correlation matrix since they are more interpretable. We saw examples of prior distributions that can be used for standard deviation, correlation and covariance matrix parameters in Section 3. ### Markov Chain Monte Carlo (MCMC) Intuition In this section, the use of MCMC will be motivated showing how MCMC can be used to bypass the need for high dimensional integrals (discussed in Section 4.2) for all practical purposes. #### 4.4.1 Inference Markov Chain Monte Carlo (MCMC) (Brooks et al, 2011) bypasses the need to solve the numerical integration problem by sampling19 from the posterior probability distribution \(p(\eta,\theta\mid D)\) directly using only the tractable numerator in Eq 7. This numerator is sometimes called the _joint_ probability since it is just \(p(D,\eta,\theta)=p(D\mid\eta,\theta)\cdot p(\eta,\theta)\). Note the difference between the terms with and without conditioning \(\mid\). Having samples from the posterior allows us to estimate quantities such as: Footnote 19: A sample of a random variable or a sample from its distribution is an instantiation of said random variable. For instance, \([0,1,0,0,1,1]\) are 6 samples from a Bernoulli distribution whose support is the set \(\{0,1\}\). The support of a distribution is the domain of the random variable. For example, the set \(\{0,1\}\) is the support of the Bernoulli distribution and the set of all real numbers is the support of the normal distribution. 
A sample from the posterior distribution can be interpreted as the likely parameter values that could have generated the data that we observed. The term _sample_ can sometimes be used to refer to multiple such samples, to be understood from the context. \[p(f(\eta,\theta)>0\mid D) \tag{14}\] for an arbitrary function \(f\) using a simple count by checking all the samples from the posterior and counting how many satisfy the particular conditions of interest. The ratio of samples satisfying the condition \(f(\eta,\theta)>0\) is the unbiased estimate of the above probability. More concretely, this probability could be \(p(\theta_{i}>0\mid D)\) where \(\theta_{i}\) corresponds to the effect size of an experiment treatment arm compared to the placebo arm. #### 4.4.2 Prediction Besides estimating probabilities of events using the posterior samples, the posterior samples can also be used to make predictions. The functions for performing posterior predictions in Pumas were presented in Section 3.13. Assume we have \(N\) samples from the posterior \(p(\eta,\theta\mid D)\): \[\{(\eta^{(j)},\theta^{(j)}):j\in 1\ldots N\}\] Recall the intractable term in Eq 9 was \(p(y=\hat{y}\mid x=\hat{x},D)\) which can be written as: \[\int\int p(y=\hat{y}\mid x=\hat{x},\eta,\theta)\times p(\eta,\theta \mid D)\,d\eta d\theta\approx\\ \frac{1}{N}\sum_{j=1}^{N}p(y=\hat{y}\mid x=\hat{x},\eta=\eta^{(j)},\theta=\theta^{(j)})\] where \(p(y=\hat{y}\mid x=\hat{x},\eta=\eta^{(j)},\theta=\theta^{(j)})\) is the conditional probability of \(\hat{y}\mid\hat{x}\) evaluated using the given parameter values. Using the samples, the intractable integral was therefore reduced to a tractable average of \(N\) terms since we can easily evaluate \(p(\hat{y}\mid\hat{x},\eta^{(j)},\theta^{(j)})\) given \((\hat{x},\hat{y},\eta^{(j)},\theta^{(j)})\) and the model. The expectation term in Eq 9 can therefore be approximated by Eq 15, where \(E[y\mid x=\hat{x},\eta^{(j)},\theta^{(j)}]\) is just the mean value of the conditional distribution \(p(y=\hat{y}\mid x=\hat{x},\eta=\eta^{(j)},\theta=\theta^{(j)})\) which can be evaluated from a single run of the model. In Pumas syntax, this is the output of the predict function (or the output of simobs with simulate_error = false) with parameters \((\eta^{(j)},\theta^{(j)})\), and covariates \(\hat{x}\). More generally, one can estimate the expectation of an **arbitrary function**\(g(\eta,\theta)\) with respect to the posterior distribution using: \[E[g(\eta,\theta)\mid D] =\int\int g(\eta,\theta)\times p(\eta,\theta\mid D)\,d\eta d\theta\] \[\approx\frac{1}{N}\sum_{j=1}^{N}g(\eta^{(j)},\theta^{(j)})\] For instance, \(g\) could be computing some NCA parameters (Section 3.16) based on the model's prediction or computing any other deterministic quantity that is a deterministic function of the parameters. When defining \(g\) as \(g(\eta^{\prime},\theta^{\prime})=E[y\mid x=\hat{x},\eta=\eta^{\prime},\theta= \theta^{\prime}]\), we recover the prediction special case. And when defining \(g\) as \(g(\eta,\theta)=\mathbb{1}_{f(\eta,\theta)>0}\) for another arbitrary function \(f\), where \(\mathbb{1}_{f(\eta,\theta)>0}\) is the indicator function that is \(1\) when the condition \(f(\eta,\theta)>0\) is satisfied and \(0\) otherwise, we recover the special case of estimating the probability \(p(f(\eta,\theta)>0\mid D)\). In other words, samples from the posterior are almost everything you may ever need to estimate all the quantities of interest needed to make decisions. 
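As a minimal illustration of these sample-based estimates (plain Julia, with made-up draws standing in for real MCMC output; the parameter names and the function \(g\) are purely illustrative), the probability in Eq 14 and the expectation of an arbitrary \(g\) both reduce to simple averages over the posterior draws:

```julia
# Estimating Eq 14-style probabilities and posterior expectations by simple
# averaging over posterior draws. `draws` is a stand-in for real MCMC output;
# the parameter names θ and η and the function g are illustrative only.
using Statistics

draws = [(θ = randn() + 0.3, η = randn()) for _ in 1:4000]   # fake posterior draws

# p(θ > 0 | D): the fraction of draws satisfying the condition.
p_effective = mean(d -> d.θ > 0, draws)

# E[g(η, θ) | D] for an arbitrary deterministic g of the parameters.
g(d) = exp(d.θ + d.η)
expected_g = mean(g, draws)

println("p(θ > 0 | D) ≈ ", p_effective)
println("E[g | D]     ≈ ", expected_g)
```

With real output, `draws` would be replaced by the sampled values of \((\eta,\theta)\), but the post-processing stays the same.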
For completeness, Eq 15 spells out the approximation of the posterior mean prediction referenced above: \[\begin{split}E[y\mid x=\hat{x},D]&=\int\hat{y}\times p(y=\hat{y}\mid x=\hat{x},D)\,d\hat{y}\\ &\approx\int\hat{y}\times\Bigg(\frac{1}{N}\sum_{j=1}^{N}p(y=\hat{y}\mid x=\hat{x},\eta=\eta^{(j)},\theta=\theta^{(j)})\Bigg)d\hat{y}\\ &=\frac{1}{N}\sum_{j=1}^{N}\Bigg(\int\hat{y}\times p(y=\hat{y}\mid x=\hat{x},\eta=\eta^{(j)},\theta=\theta^{(j)})d\hat{y}\Bigg)\\ &=\frac{1}{N}\sum_{j=1}^{N}E[y\mid x=\hat{x},\eta=\eta^{(j)},\theta=\theta^{(j)}]\end{split} \tag{15}\]

So how do we obtain such samples without computing the intractable integrals? We use MCMC.

#### 4.4.3 Simulation

We saw how given \(N\) samples from the posterior, we can compute the average prediction \(E[g(\eta,\theta)\mid D]\approx\frac{1}{N}\sum_{j=1}^{N}g(\eta^{(j)},\theta^{(j)})\) for any particular choice of the function \(g\). Alternatively, you can also obtain a distribution of predictions: \[\{g(\eta^{(j)},\theta^{(j)})\text{ for }j\in 1\ldots N\} \tag{16}\] where \(g(\eta^{\prime},\theta^{\prime})=E[y\mid x=\hat{x},\eta=\eta^{\prime},\theta=\theta^{\prime}]\). This is the MCMC approximation of the distribution of \(g(\eta,\theta)\) where \((\eta,\theta)\sim p(\eta,\theta\mid D)\). For the above choice of \(g\), this distribution of predictions is typically known as the posterior predictive distribution. When \((\eta,\theta)\) are sampled from the prior instead, the distribution of \(g(\eta,\theta)\), for the above choice of \(g\), is known as the prior predictive distribution. Besides sampling predictions, or more generally deterministic functions of the parameters \((\eta,\theta)\), one may also sample from the following distribution of \(\hat{y}\): \[(\eta,\theta) \sim p(\eta,\theta\mid D)\] \[\hat{y} \sim p(y=\hat{y}\mid x=\hat{x},\eta,\theta)\] In Pumas syntax, this is the output of the simobs function using the posterior parameter values and covariates \(\hat{x}\). Alternatively, \((\eta,\theta)\) may be sampled from their prior distributions instead or just fixed to particular _ground truth_ values. These prior/posterior/ground truth simulations can be used to do any of the following:

1. Generate synthetic data to test the MCMC algorithm on, before using the real data20.
2. Identify extremely poor choices of priors to minimally guide the selection of priors by inspecting the similarity of the prior simulations and real data, e.g. using a visual predictive check (VPC) plot, also known as a prior predictive check. See Section 4.3 for more details on prior selection.
3. Quantify the quality of model fit by comparing posterior simulations to the real data using a VPC plot, also known as a posterior predictive check, and estimating the so-called Bayesian \(p\)-value.

The code for doing prior simulations and predictions in Pumas was presented in Section 3.2. Similarly, the code for doing posterior simulations and predictions was presented in Section 3.13. Finally, the code for performing VPC, various simulation queries, e.g. the Bayesian \(p\)-value, and NCA was presented in Sections 3.14, 3.11 and 3.16 respectively.

### No-U-Turn Sampler (NUTS) Algorithm

In this section, an intuitive explanation of the No-U-Turn Sampler (NUTS) (Hoffman and Gelman, 2014; Betancourt, 2017) MCMC algorithm will be given. The focus of the explanation will be to develop a strong intuition for how the algorithm works and how to tune its hyper-parameters21.
We don't want to clutter the minds of the readers with equations that can be found in numerous other resources and which are not strictly necessary to be able to effectively use the algorithm. For MCMC beginners or when reading this for the first time, you may skip subsections 4.5.6, 4.5.7 and 4.5.8 without a significant loss of context. Footnote 21: The term _hyper-parameters_ generally refers to any parameters that are not being inferred by the Bayesian inference algorithm and that need to be pre-specified by the user before the Bayesian analysis. These can generally be model hyper-parameters, e.g. the parameters of the prior distributions on the population parameters \(\theta\), or they can be algorithm hyper-parameters such as the settings of the NUTS algorithm. #### 4.5.1 High Level Description MCMC sampling uses a stochastic process (random walk) in the \((\eta,\theta)\) space to collect samples from the posterior \(p(\eta,\theta\mid D)\) in an iterative manner using nothing but "local information available". Figure 15 shows a simple random walk for a 1-dimensional Gaussian random variable. In the \(j^{th}\) iteration of the algorithm, the local information available is basically the numerator in Eq 7 evaluated at a particular value \((\eta=\eta^{j-1},\theta=\theta^{j-1})\) and its gradient22 with respect to \((\eta,\theta)\) both of which can be computed easily. Given that this iterative algorithm only uses information from the previous iteration \(j-1\), it is a so-called Markov chain by definition. The goal of the MCMC family of algorithms is to make it such that the individual steps \(\{(\eta^{(j)},\theta^{(j)}):j\in 1\ldots N\}\) are valid samples from the posterior. In this section, we focus on the so-called No-U-Turn sampler (NUTS) algorithm (Hoffman and Gelman, 2014; Betancourt, 2017) which is a variant of the Hamiltonian Monte Carlo (HMC) (Neal, 2011) family of MCMC algorithms. We will not cover these algorithms in details but we will explain the intuition behind them for you to make sense of their hyper-parameters to be able to informatively tinker with them when needed. Imagine the current position of the sampler \((\eta^{(j-1)},\theta^{(j-1)})\) is a particle in the \((\eta,\theta)\) space. In the NUTS algorithm, the random walk process goes like this23: Footnote 23: The real algorithm includes many performance enhancements which are not discussed here. 1. The imaginary particle \((\eta^{(j-1)},\theta^{(j-1)})\) is given a random imaginary speed in a random direction, i.e. it is given a random imaginary velocity. The velocity is sampled from a multivariate Gaussian distribution. 2. The gradient of the log of the joint probability (log of the numerator in Eq 7) with respect to \((\eta,\theta)\) acts as an imaginary force field locally pushing the imaginary particle towards regions of high (log) prior and/or (log) likelihood and away from regions of low (log) prior and/or (log) likelihood. 3. The imaginary particle's motion is **approximately** simulated using time discretization and an approximate ODE solver24 for a total of \(T\) simulated time steps25 under the influence of the imaginary force field, where each simulated time step is of size \(\epsilon\). This simulation only requires the initial position, initial velocity and being able to calculate the force applied at any arbitrary point \((\eta,\theta)\) which is equivalent to evaluating the gradient of \(\log p(D,\eta,\theta)\) with respect to \((\eta,\theta)\), i.e. 
\(\frac{d\log p(D,\eta,\theta)}{d(\eta,\theta)}\).

Footnote 24: The particle dynamics simulation boils down to simulating a set of ODEs with the gradient of \(\log p(D,\eta,\theta)\) as the driving force. An approximate ODE solver called the leapfrog method is used to do the simulation. The leapfrog method with a large step size is approximate because its solution violates the law of conservation of energy, even though it is so-called volume preserving. However, this is a desirable property in this case and can help fully explore the posterior even with disconnected regions of high probability mass. For the effect of the time step size on the sampling behaviour, see Section 4.5.4.

4. A randomly chosen position \((\eta,\theta)\) on the simulated imaginary trajectory of \(T\) time steps becomes the **proposal**.

5. The proposal is then accepted with a carefully chosen probability to ensure that the random walk gives us correct samples from the posterior. If the proposal is accepted, it becomes the next sample \((\eta^{(j)},\theta^{(j)})\), otherwise the previous value is sampled once more, i.e. \((\eta^{(j)},\theta^{(j)})=(\eta^{(j-1)},\theta^{(j-1)})\).26

Footnote 26: In the state-of-the-art variant of NUTS (Betancourt, 2017), the proposal selection and acceptance/rejection steps are combined into a single step which samples from the imaginary trajectory, which includes the previous particle's position \((\eta^{(j-1)},\theta^{(j-1)})\). Sampling is done in a way that ensures the chosen next position \((\eta^{(j)},\theta^{(j)})\) is a correct sample from the posterior. However, we chose to separate the 2 steps conceptually to aid with the explanation.

The above algorithm is still a random walk but it is biased towards high probability regions because it uses the gradient of the log joint probability to push the particle in the right direction even if it started with a random velocity.

Figure 15: Random walk visualization for a normally distributed random variable (on the x-axis) where the probability density function (PDF) is shown on the y-axis. Firstly a proposal is made, then it is accepted with a specific acceptance probability. When the proposal is for the hiker to climb up, the move is accepted with a probability of 1. When the proposal is for the hiker to climb down, it is accepted with a probability equal to the ratio of PDFs at the 2 positions.

The above sampling algorithm is done in 2 phases: an adaptation phase and a sampling phase. In the adaptation phase, the sampler is adapting its time step size and initial velocity distribution while performing the above sampling procedure. In the sampling phase, the algorithm's hyper-parameters cease to adapt and sampling continues using the same procedure above. It is generally recommended to discard the adaptation steps after sampling as _burn-in_27 as they may not be representative samples from the posterior. In Pumas, this can be done using the discard function as explained in Section 3.3. The number of adaptation steps can also be specified using the nadapts option as shown in Section 3.3.

Footnote 27: Also called _warm-up_.

#### MCMC Visualization

To interactively visualize how an MCMC sampler works, chi-feng.github.io/mcmc-demo/ is a great resource to look at where you can change the target distribution and sampling algorithm to develop an intuition for how MCMC samplers work in different cases.
You can select the "No-U-Turn Sampler" as the algorithm and then change the target distribution to visualize how NUTS behaves when sampling from different posterior distributions. #### Proposal Acceptance While the state-of-the-art variant of NUTS (Betancourt, 2017) does not use an explicit acceptance/rejection test (also known as Metropolis-Hastings test) of a proposal, what it does is analogical to a traditional proposal selection followed by an acceptance/rejection test. For pedagogical reasons, we assume these are 2 separate steps. The acceptance probability of a proposal in step 5 of the algorithm depends on: 1. The prior probability, and 2. The likelihood function (how well the proposal fits the data) A proposal leading to bad predictions that don't fit the data well compared to the previous sample \((\eta^{(j-1)},\theta^{(j-1)})\), or a proposal that is more improbable according to the prior compared to the previous sample is more likely to be rejected. On the other hand, a proposal that fits the data better than the previous sample and/or is more probable according to the prior will be more likely to be accepted. #### Effect of Time Step Size Generally speaking, the larger the time step size in the simulation, the more approximate the ODE solver is and the more exploratory/adventurous the proposals will be which lead to a lower ratio of accepted proposals. On the other hand, smaller step sizes generally lead to less exploratory proposals which are more likely to be accepted increasing the acceptance ratio. The sampler's exploration ability partly comes from the ability of the approximate ODE solver to over-shoot and jump from one area of high posterior probability28 to another when making proposals, thus exploring multiple modes even if there is a 0 probability region between the 2 modes. A zero probability \(p(\eta,\theta\mid D)\) implies zero probability \(p(D,\eta,\theta)\) (Eq 7) which will result in an infinite force pushing the particle away from that region. Therefore exact simulation will never be able to make the jump across such a region, hence the need for approximate solvers and over-shooting. Footnote 28: High posterior probability regions have high joint probability values (the numerator in Eq 7). The joint probability is the product of the prior probability and likelihood. So parameters values with a high prior probability and/or high likelihood will have high joint and posterior probabilities. #### Time Step Size Adaptation and Target Acceptance Ratio In the NUTS algorithm, you don't set the step size yourself. The NUTS algorithm adapts its step size to encourage a certain fraction of the proposals to get accepted on average. This target acceptance ratio is a hyper-parameter of NUTS. In Pumas, you can set the target acceptance ratio using the target_accept option as shown in Section 3.3. A value of 0.99 means that we want to accept 99% of the proposals the sampler makes. This will generally lead to a small distance between the proposal and the current sample since this increases the chance of accepting such a proposal. On the other hand, a target acceptance fraction of 0.1 means that we want to only accept 10% of the proposals made on average. The NUTS algorithm will therefore attempt larger step sizes to ensure it rejects 90% of the proposals. In general, a target acceptance ratio of 0.6-0.8 is recommended to use. The default value used in Pumas is 0.8. In sampling, there is usually a trade-off between exploration and exploitation. 
If the sampler is too adventurous, trying aggressive proposals that are far from the previous sample in each step, the sampler would be more likely to explore the full posterior and not get stuck sampling near a local mode of the posterior. However on the flip side, too much exploration will often lead to many proposal rejections due to the low joint probability \(p(D,\eta,\theta)\) of the data and the adventurous proposals. This can decrease the ratio of the effective sample size (ESS)29 to the total number of samples (also known as relative ESS) since a large number of samples will be mere copies of each other due to rejections. Footnote 29: The ESS is an approximation of the “number of independent samples” generated by a Markov chain, when estimating the posterior mean. A low ESS per sample ratio is caused by high auto-correlation in the MCMC samples and is often a bad indicator. On the other hand if we do less exploration, there are at least 2 possible scenarios: 1. The first scenario is if we initialize the sampler from a mode of the posterior. Making proposals only near the previous sample will ensure that we accept most of the samples since proposals near a mode of the posterior are likely to be good parameter values. This local sampling behavior around known good parameter values is what we call here exploitation. While the samples generated via high exploitation around a mode may not be representative of the whole posterior distribution, they might still give a satisfactory approximation of the posterior predictive distributions, which is to be judged with a VPC plot. 2. The second scenario is if we initialize the sampler from bad parameter values. Bad parameter values and low exploration often lead to optimization-like behavior where the sampler spends a considerable number of iterations moving towards a mode in a noisy fashion. This optimization-like, mode-seeking behavior causes a high auto-correlation in the samples since the sampler is mostly moving in the same direction (towards the mode). A high auto-correlation means a low ESS because the samples would be less independent from each other. 30 Also until the sampler reaches parameter values that actually fit the data well, it's unlikely these samples will lead to a good posterior predictive distribution. This is a fairly common failure mode of MCMC algorithms when the adaptation algorithm fails to find a good step size that properly explores the posterior distribution due to bad initial parameters and the model being too complicated and difficult to optimize, let alone sample from its posterior. In this case, all the samples may look auto-correlated and the step sizes between samples will likely be very small (low exploration). In Pumas, the step size is displayed as part of the live progress information during sampling as shown in Figure 1. It's often helpful to detect such a failure mode early in the sampling and kill the sampling early. Footnote 30: In Pumas, the ESS values of the population parameters are displayed in the default display of the MCMC result as shown in Figure 2. #### 4.5.6 Optional: Number of Time Steps and U-Turns Consider a single direction in the \((\eta,\theta)\) space, e.g. the axis of a particular parameter. For relatively flat regions of the posterior where a lot of the values along this direction are almost equally likely, i.e. 
they all fit the data to the same level and are almost equally probable according to the prior, proposals far away from the current sample may still be accepted most of the time. This is especially likely in the parts of the posterior where the model is (almost) non-identifiable causing high parameter correlations, and the prior is indiscriminate (e.g. due to being a weak prior). On the other hand, regions of the posterior that are heavily concentrated around a mode with a high curvature often require a smaller step size to achieve reasonable acceptance ratios, since proposals that are even slightly far from the current sample may be extremely improbable according to the prior or may lead to very bad predictions. This is especially likely in regions of the posterior where the model is highly sensitive to the parameter values or if the prior is too strongly concentrated around specific parameter values. To account for such variation in curvature along the _same direction_31 in different regions of the posterior, the NUTS algorithm uses a multi-step proposal mechanism with a fixed time step size (determined during the adaptation phase and then fixed) and a dynamic number of time steps (dynamic in both the adaptation and sampling phases). More specifically, the sampler simulates a trajectory of \(T\) time steps before choosing a proposal randomly from this trajectory, where \(T\) is different for each proposal made. The number of time steps \(T\) simulated by NUTS is determined by an incremental simulation of \(T=1+2+4+8+16+\dots\) time steps, where the number of time steps in each incremental simulation is an increasing power of 2. Each incremental simulation can be either:

1. Forward in time starting from the future-most state, or
2. Backward in time starting from the past-most state.

The direction of each incremental simulation is sampled randomly with 0.5 probability assigned to each direction. Table 12 shows an example of the incremental simulations for the particular choice of simulation directions: [Forward, Forward, Reverse, Reverse, Forward, Reverse]. So when do we stop simulating? The NUTS algorithm typically stops simulating when one of the following 4 conditions is met:

1. It finds a so-called U-turn, that is when the sampler begins to move back towards one end of the trajectory from the other end.
2. It reaches a pre-set maximum number of simulation steps.
3. The log prior probability and/or log likelihood drops rapidly in one of the steps, dropping by more than a pre-set threshold.
4. A numerical error occurs.

The third and fourth termination criteria are often called "divergence". After the simulation terminates, a point randomly chosen on the simulated trajectory becomes the next proposal and the search is terminated. _Terminating by finding a U-turn is typically considered a sign of successful exploration._ The number of evaluations of \(\log p(D,\eta,\theta)\) in each NUTS iteration is determined by the length of the simulated trajectory, which is \(\sum_{i=0}^{j-1}2^{i}=2^{j}-1\) if \(j\) incremental simulations were required to find a U-turn32.
Footnote 32: In the efficient implementation of NUTS, once a U-turn is found, the final incremental simulation is interrupted so the number of model evaluations is actually somewhere between \(2^{j-1}\) and \(2^{j}-1\) In the efficient implementations of the NUTS algorithm, a binary tree data structure of depth \(j\) is used to help with the efficient selection of a proposal from all of the states \((\eta,\theta)\) visited during the simulation until a U-turn was found33, without storing all of the states. This is an efficiency enhancement trick but the term _tree depth_ stuck and became synonymous to the number of incremental simulations ran so far, \(j\). In the case where the sampler is unable to find a U-turn even after a pre-specified maximum \(j\) is reached, the sampler terminates the simulation anyways and makes a proposal. The term maximum tree depth is commonly used to refer to the maximum number of incremental simulations \(j\) allowed before having to make a proposal even if no U-turn was found. Footnote 33: The number of states visited excluding the initial state is at most \(2^{j}-1\). Adding the initial state, we have \(2^{j}\) possible states any of which could be the proposal. These can in theory be stored as the leaf nodes of a binary tree of depth \(j\) which has \(2^{j}\) leaf nodes. However in practice, only a subset of such states are stored and the tree idea is used to ensure the proposal can be efficiently chosen at random from all \(2^{j}\) possible states while satisfying the theoretical requirements of a proposal in MCMC, which is often called the detailed balance condition. #### 4.5.7 Optional: Distribution of the Initial Velocity Recall that in each NUTS iteration, we are sampling a random initial velocity for the \((\eta,\theta)\) particle before simulating the dynamics to arrive at a proposal. Hypothetically, assume that we already have samples from the posterior \(p(\eta,\theta\mid D)\). If you were to go back and re-do the sampling using NUTS, how would you sample the initial velocity of the imaginary \((\eta,\theta)\) particle to make sampling as efficient as possible? In general, it would make sense to move faster along directions in the posterior that have a higher variance and slower along directions that have a lower variance. For instance, we can compute the variance along each parameter's axis and sample higher speeds for the parameters that change more, and lower speeds for the parameters that change less. In practice, you can think of different parameters having different scales where 1 parameter may be in the 10s while another one may be in the 1000s. In that case, it makes sense to use different speeds along different directions to more efficiently sample from the posterior distribution. More generally, one may even compute the sample covariance matrix from the (hypothetical) samples available, compute the principal components and sample higher speeds along directions with more variance than the other directions. If we encode how slow we want the particle to go along each direction \(d_{i}\) by a number \(s_{i}\), setting the standard deviation of the speed along this direction to \(1/s_{i}\) can be used to achieve the desired slowness. Assume each \(d_{i}\) is an axis along a specific parameter \(i\) (which could be part of \(\eta\) or \(\theta\)). The distribution of the velocity \(v_{i}\) along \(d_{i}\) can be the following univariate Gaussian: \[v_{i}\sim N(0,(1/s_{i})^{2}) \tag{17}\] with mean \(0\) and standard deviation \(1/s_{i}\). 
This distribution will have us sampling speeds along the direction \(d_{i}\) that are on average inversely proportional to \(s_{i}\). Writing it for all the parameters together, we can write: \[v\sim N(0,M^{-1}) \tag{18}\] where \(M\) is a diagonal matrix with elements \(s_{i}^{2}\) on the diagonal and \(M^{-1}\) is the covariance matrix of the velocity vector \(v\). Using a diagonal \(M\) is equivalent to adapting the speeds' standard deviations along the parameters' axes, while using a dense matrix \(M\) is equivalent to the more general case of adapting the speeds' standard deviations along more optimized directions \(d_{i}\) (e.g. from principal components of the covariance matrix). It turns out that when simulating the "imaginary dynamics" in HMC/NUTS after sampling the initial velocity, the analogous _kinetic energy_ is given by: \[K(v)=v^{T}Mv/2 \tag{19}\] hence the natural inclination to call the above matrix \(M\) a "mass matrix" in the HMC/NUTS literature. Recall that in physics, the kinetic energy of a particle with a scalar speed \(v\) and mass \(m\) is \(\frac{mv^{2}}{2}\). To summarize, directions with a higher "mass" will be explored more slowly than directions with a lower mass. The ideal mass matrix \(M\) is one that approximates the _global_ precision matrix of the posterior distribution, i.e. the inverse of the covariance matrix. Equivalently, the ideal \(M^{-1}\) is one that approximates the global covariance matrix of the posterior. So far we assumed that we have samples from the posterior and are able to adapt the mass matrix manually. In practice, the NUTS algorithm adapts the mass matrix for you during the adaptation phase, and you only need to select the structure of the matrix, e.g. diagonal or dense. For large problems, a diagonal matrix is typically used in practice since the computational cost of using a dense matrix is \(O(D^{3})\), where \(D\) is the total number of parameters in \((\eta,\theta)\) combined. On the other hand, the computational cost of using a diagonal matrix is only \(O(D)\). When we have many subjects in the hierarchical model, \(D\) can be quite large.

\begin{table} \begin{tabular}{|c|c|c|} \hline Increment \(j\) & Simulation Direction & Interval of the Time Steps Simulated after \(j\) Increments \\ \hline 0 & - & \([0,0]\) \\ 1 & Forward & \([0,0+1]=[0,1]\) \\ 2 & Forward & \([0,0+1+2]=[0,3]\) \\ 3 & Reverse & \([0-4,0+1+2]=[-4,3]\) \\ 4 & Reverse & \([0-4-8,0+1+2]=[-12,3]\) \\ 5 & Forward & \([0-4-8,0+1+2+16]=[-12,19]\) \\ 6 & Reverse & \([0-4-8-32,0+1+2+16]=[-44,19]\) \\ \hline \end{tabular} \end{table} Table 12: The table shows the incremental simulations of the NUTS algorithm for \(j\in[1,6]\). Notice how an increasing power of \(2\) is added to the positive direction or subtracted from the negative direction in each increment. The total number of time steps made after increment \(j\) (excluding the initial time point \(t=0\)) is \(1+2+4+8+16+\cdots=2^{0}+2^{1}+2^{2}+\cdots+2^{j-1}=\sum_{i=0}^{j-1}2^{i}=2^{j}-1\). **Check**: \(2^{1}-1=2^{0}=1\), \(2^{2}-1=2^{0}+2^{1}=3\), \(2^{3}-1=2^{0}+2^{1}+2^{2}=7\), etc. Note that the intervals above are of the number of time steps. Each time step has a simulated time step size of \(\epsilon\).

Before we conclude this section, it is important to note that the HMC/NUTS algorithm is typically explained in the literature using a so-called momentum vector \(p=Mv\) while we chose to use the more intuitive velocity vector \(v\) in this paper to explain the intuition behind the algorithm.
The two parameterizations are equivalent but the momentum one is the one typically used in implementations and HMC/NUTS research. When \(v\sim N(0,M^{-1})\), the corresponding distribution of the momentum vector \(p\) is \(N(0,M)\). #### 4.5.8 Optional: Hierarchical Priors and Nuts Consider the following toy model which has no observations: \[\begin{split}\text{log}\omega&\sim\text{Normal}(0,1.5)\\ \eta&\sim\text{Normal}\left(0,(e^{\text{log}\omega})\right) \end{split} \tag{20}\] This is a model with 2 parameters \((\text{log}\omega,\eta)\), a prior distribution on each of them and some exponential dependence between them in that the standard deviation of the prior on \(\eta\) depends exponentially on the value of \(\text{log}\omega\). Figure 16 shows the PDF heatmap of the joint prior distribution of the parameters \(\text{log}\omega\) (y-axis) and \(\eta\) (x-axis). Recall that the NUTS algorithm uses a multi-step trajectory with a fixed time step size in the imaginary dynamics simulation to account for variation in curvature along the same direction. Consider the direction along the x-axis in the figure. The curvature along the x-axis changes depending on where along the y-axis the imaginary \((\text{log}\omega,\eta)\) particle is. Lower values of \(\text{log}\omega\) lead to exponentially higher curvatures (reflected through the tight color band in the heatmap) along the \(\eta\) direction. So if we try to use NUTS to sample from this prior, two bad things can happen: 1. The sampler may adapt its step size to very small values to be able to sample from the lower regions of the prior and it will use a large number of time steps \(T\) to effectively explore the upper regions of the prior. In such cases, more often than not, the maximum number of allowed time steps \(2^{j}-1\) may not be enough to find the U-turn. This will hurt the performance since we will be doing many model evaluations per proposal and we may need multiple steps to fully traverse the upper regions of the prior. 2. The sampler may adapt its step size to values not small enough to sample from the lower regions in the prior. In this case, the sampler may skip sampling from the significant lower part in the prior leading to potential bias in the results. In other words, the above prior may lead to slow and biased NUTS sampling, a clearly terrible outcome. Note that of course we are not trying to sample from the prior using NUTS, because we can just sample directly from the standard distributions in the model using more direct and efficient methods. However, studying the prior's PDF and how it interacts with NUTS can help us understand how NUTS will interact with the posterior when there are a few data points available. Also note that the above model is using an explicit log scale parameterization for the standard deviation for pedagogical purposes. In reality, models may be written directly in terms of the standard deviation \(\omega\) (instead of its log) or more generally the covariance matrix \(\Omega\) for multivariate \(\eta\). However, the above example is still relevant in those cases because implementations of the NUTS algorithm do the log-scale transformation behind the scenes and so NUTS actually samples unconstrained parameters all the time even if the original model's parameters were constrained to a different support. So the same insight we build for the model above is applicable to general hierarchical models when using NUTS. 
Figure 16: The PDF heatmap of the prior distribution of \(\text{log}\omega\) on the y-axis and \(\eta\) on the x-axis. The standard deviation parameters (or more generally the covariance matrix) of the prior on the subject-specific parameters \(\eta_{i}\) (commonly known as the between-subject variability) in pharmacometrics are typically assigned weak priors to avoid bias. This means that they tend to have a wide variability in their value in the posterior distribution unless enough data is collected to precisely identify the value of the parameters. This combination of: 1. Weak priors on the covariance matrix parameters to avoid bias, 2. Not having enough data to precisely identify the parameter values, 3. The dependence between parameters' priors (in the standard deviation) in hierarchical models, 4. The log scale domain transformation of standard deviation and covariance parameters used by NUTS, and 5. The fixed step size used by the NUTS sampler, is an unfortunate but very common combination of factors that can lead to wrong and very slow inference. So what's the solution? One solution is to reparameterize the model as such: \[\begin{split}\text{log}\omega&\sim\text{Normal} (0,1.5)\\ \eta\text{std}&\sim\text{Normal}\left(0,1\right)\\ \eta&=e^{\text{log}\omega}\times\eta\text{std}\end{split} \tag{21}\] This reparameterization de-couples the priors of the parameters and resolves the issue in the PDF heatmap. Note that this model transformation does not change the data generating process. That is if you sample values for \(\eta\text{std}\) and \(\text{log}\omega\), the values of \(\eta\) simulated will be identical to the values simulated from the original model's prior. However, the latter parameterization is more friendly to the NUTS algorithm. In the context of pharmacometrics, the following covariance-based model: \[\begin{split}\theta&\sim p(\theta)\\ \Omega&\sim p(\Omega)\\ \eta_{i}&\sim N(\theta,\Omega)\end{split} \tag{22}\] can be transformed to: \[\begin{split}\theta&\sim p(\theta)\\ \Omega&\sim p(\Omega)\\ \eta\text{std}_{i}&\sim N(0,I)\\ \eta_{i}&=\text{chol}(\Omega)\times\eta\text{std}_{ i}+\theta\end{split} \tag{23}\] where \(\text{chol}(\Omega)\) is the lower triangular Cholesky factor (a generalized square root for matrices) of the covariance matrix \(\Omega\). Similarly, using the standard deviation and correlation matrix parameterization instead, the original model becomes: \[\begin{split}\theta&\sim p(\theta)\\ \omega&\sim p(\omega)\\ C&\sim p(C)\\ \eta_{i}&\sim N(\theta,D_{\omega}\times C\times D_{ \omega})\end{split} \tag{24}\] where \(D_{\omega}\) is the diagonal matrix whose elements are the standard deviations vector \(\omega\). The above correlation-based model can be transformed to de-couple the priors as such: \[\begin{split}\theta&\sim p(\theta)\\ \omega&\sim p(\omega)\\ C&\sim p(C)\\ \eta\text{std}_{i}&\sim N(0,I)\\ \eta_{i}&=D_{\omega}\times\text{chol}(C)\times\eta \text{std}_{i}+\theta\end{split} \tag{25}\] When using Pumas to define these models, a transformation equivalent to the above transformation is done automatically behind the scenes even if you write the model in the coupled way. 34 Footnote 34: This equivalence is in exact arithmetic but when running computation on the computer, floating point arithmetic is done. This means that the results may not be identical depending on how sensitive the model is to round-off errors in the floating point arithmetic. 
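To see concretely that the coupled and de-coupled parameterizations describe the same prior on \(\eta_{i}\), here is a minimal sketch in plain Julia (not Pumas model syntax; Pumas performs an equivalent transformation automatically, as noted above). It assumes Distributions.jl and uses arbitrary illustrative values: \(\Omega\) is built from the standard deviations and correlation matrix as in Eq 13, and \(\eta_{i}\) is drawn both directly (Eq 24) and via the de-coupled Cholesky-based transformation (Eq 25).

```julia
# Checking that the coupled parameterization in Eq 24 and the de-coupled
# ("non-centered") one in Eq 25 describe the same prior on ηᵢ.
# Plain Julia with arbitrary illustrative values, not Pumas model syntax.
using Distributions, LinearAlgebra, Statistics, Random

Random.seed!(1)

θ  = [1.0, 2.0]                       # population means
ω  = [0.3, 0.5]                       # standard deviations
C  = [1.0 0.4; 0.4 1.0]               # correlation matrix
Dω = Diagonal(ω)
Ω  = Matrix(Symmetric(Dω * C * Dω))   # Eq 13: covariance from (ω, C)

n = 100_000

# Coupled: ηᵢ ~ N(θ, Ω)
η_cp = rand(MvNormal(θ, Ω), n)

# De-coupled: ηstdᵢ ~ N(0, I), ηᵢ = Dω * chol(C) * ηstdᵢ + θ
L = cholesky(Symmetric(C)).L
η_ncp = Dω * L * randn(2, n) .+ θ

# The two sets of draws should agree up to Monte Carlo error.
@show mean(η_cp; dims = 2) mean(η_ncp; dims = 2)
@show cov(η_cp; dims = 2) cov(η_ncp; dims = 2)
```

Both sets of draws have the same means and covariances up to Monte Carlo error; only the geometry seen by the sampler differs between the two parameterizations.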
Before we conclude this section, we note that models where the priors on the population and subject-specific parameters are coupled are sometimes called centered parameterization (CP) models in the Bayesian literature, while de-coupling the priors using the transformations discussed above is often called the non-centered parameterization (NCP). These terms can be confusing though so we mostly avoid their use in this paper.

### Basic Summary Statistics

There are a few summary statistics that one can view to assess the convergence of the MCMC chains. These include:

* **Effective Sample Size (ESS)**: an approximation of the "number of independent samples" generated by a Markov chain, when estimating the posterior mean. A low ESS per sample ratio is caused by high auto-correlation in the MCMC samples and is often a bad indicator.
* \(\widehat{R}\) **(Rhat)**: potential scale reduction factor, a metric to measure if the Markov chains have mixed, and, potentially, converged. Chain mixing refers to the case when different chains include samples from the same regions in the posterior as opposed to each chain including samples from a separate region of the posterior.
* **Monte Carlo Standard Error (MCSE)**: the posterior standard deviation divided by the square root of the ESS, which is a measure of the estimation noise in the posterior mean.

The formula for the effective sample size (ESS) when estimating the posterior mean is: \[\widehat{n}_{\text{eff}}=\frac{mn}{1+2\sum_{t=1}^{T}\widehat{\rho}_{t}}\] where \(m\) is the number of Markov chains, \(n\) is the number of samples per Markov chain, and \(\widehat{\rho}_{t}\) is an estimate of the lag-\(t\) auto-correlation. This formula is an approximation of the "number of independent samples" generated by a Markov chain when estimating the mean values of the parameters. Since we don't have a way to recover the true auto-correlation \(\rho\), we instead rely on an estimate \(\widehat{\rho}\). The higher the auto-correlation in the chains, the lower the ESS will be for the same number of MCMC samples. High auto-correlation can result from too many rejections or from optimization-like behaviour where the sampler is moving towards a mode; both of these are signs of lack of convergence. That said, high auto-correlation alone does not prove a lack of convergence, so care must be taken when interpreting the ESS to root-cause why it might be low.

The formula for \(\widehat{R}\) is: \[\widehat{R}=\sqrt{\frac{\widehat{\text{var}}^{+}\left(\psi\mid y\right)}{W}}\] where \(\widehat{\text{var}}^{+}\left(\psi\mid y\right)\) is the Markov chains' pooled sample variance for a certain parameter \(\psi\). We calculate it by using a weighted sum of the within-chain variance \(W\) and the between-chain variance \(B\): \[\widehat{\text{var}}^{+}\left(\psi\mid y\right)=\frac{n-1}{n}W+\frac{1}{n}B\] Intuitively, the value is \(\widehat{R}=1.0\) if all chains have fully converged and mixed. As a heuristic, if \(\widehat{R}>1.1\), you need to worry because the chains have probably not converged adequately.

### Convergence

#### Signs of Lack of Convergence

MCMC has an interesting property that it will asymptotically converge to the target distribution. That means, if time is not a limited resource, it is guaranteed that, regardless of the geometry of the target posterior distribution, MCMC will give you the right answer. However, for all real-world scenarios, time is a limited resource. Different MCMC algorithms, like NUTS, can reduce the sampling (and adaptation) time necessary for convergence to the target distribution.
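One simple way to check for the mixing symptom described above is to compute \(\widehat{R}\) directly from the raw chains. The following is a simplified sketch in plain Julia for a single parameter, following the \(W\)/\(B\) formulas of the previous subsection; robust implementations typically also split chains and apply rank normalization, which this sketch omits.

```julia
# A simplified R̂ computation for a single parameter, following the W/B
# formulas above. `chains` is an (n draws) × (m chains) matrix of samples.
using Statistics

function rhat(chains::AbstractMatrix)
    n, m = size(chains)
    chain_means = vec(mean(chains; dims = 1))
    W = mean(var(c) for c in eachcol(chains))   # within-chain variance
    B = n * var(chain_means)                    # between-chain variance
    varplus = (n - 1) / n * W + B / n           # pooled variance estimate
    return sqrt(varplus / W)
end

good = randn(1000, 4)                  # four well-mixed "chains"
bad  = randn(1000, 4) .+ [0 0 3 3]     # two chains stuck in a different region
@show rhat(good)   # ≈ 1.0
@show rhat(bad)    # ≫ 1.1: the chains have not mixed
```

Comparing chains started from different initial values in this way is exactly the multiple-chain check described below.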
In the ideal scenario, the NUTS sampler will converge to the true posterior and not miss any mode. But can we prove convergence? Unfortunately, this is not easy to prove in general. All the convergence diagnostics are only tests for symptoms of lack of convergence. In other words, even if all the diagnostics look normal, we still can't prove that the sampler converged; we can only fail to find evidence that it didn't. There are some signs of lack of convergence:

* Any of the moments (e.g. the mean or standard deviation) is changing with time. This is diagnosed using stationarity tests by comparing different parts of a single chain to each other.
* Any of the moments is sensitive to the initial parameter values. This is diagnosed using multiple chains by comparing their summary statistics to each other.

While high auto-correlation is not strictly a sign of lack of convergence, samplers with high auto-correlation will require many more samples to reach the same efficiency as another sampler with low auto-correlation. So a low auto-correlation is usually more desirable.

#### 4.7.2 When Does Convergence Matter?

Broadly speaking, there are 2 main classes of models we can use:

1. Causal models, sometimes known as mechanistic models.
2. Black-box models, sometimes known as regression models or machine learning models.

Simply put, causal/mechanistic models are models that make sense in the domain of interest. This means that:

1. All of the variables in the model have a meaning.
2. Each relationship in the model is well thought out, minimal and based on a claimed causal relationship between a subset of the variables.

The goal of mechanistic models is to understand the system of interest. Each model is typically designed to answer questions about some of the parameters in the model. For example in PK models, we typically have absorption and clearance individual parameters. So after fitting the model to the data, one can answer questions about the probability of a specific individual parameter being greater than or less than a certain threshold. Another common example is dose-response models, where we typically have a coefficient that represents the effect of the dose on the disease. If the probability that this parameter is more than 0 (according to the posterior distribution) is higher than a certain threshold, we can claim that this drug is effective. Correct causal/mechanistic models are supposed to be good at both interpolation and extrapolation.35

Footnote 35: Note that causality is never implied by the model alone, instead it is based on the scientist's intuition and understanding of the model and its variables. In other words, models are nothing more than tools that can be used to express some claimed causal relationships between quantities that the scientist has in mind.

On the other end of the spectrum, we have black-box models. These are models commonly characterized by:

1. Many intermediate variables that have no meaning.
2. Dense relationships between all the variables without having a precise reason for each relationship upfront.

These models are often called machine learning models. Think of a linear regression model with polynomial bases up to the 5th order. Simple linear regression models with linear terms only can arguably be in the first class of causal models if the covariates are claimed to cause the response in a linear fashion. But once you get to the 3rd or 4th order polynomial bases, the higher order polynomial terms and their coefficients start losing meaning and the model becomes more black-box.
In Bayesian black-box models, prior distributions are typically arbitrary (e.g. a standard Gaussian) and used only for regularization. The hyper-parameters of the prior distributions can even be optimized for maximum average posterior predictive accuracy36. Footnote 36: Using a validation data set that wasn’t used in the training/inference of the model’s parameters There are many techniques to systematically build complicated black-box models. Some examples include: * Using polynomial series terms as bases, e.g. Taylor polynomial series or the Chebyshev polynomial series * Using Fourier series terms as bases * Using deep neural networks adding neurons and layers as needed * Using a Gaussian process for nonlinear regression These are models that, given enough data for \((x,y)\) and given enough terms in the model, can fit any arbitrary function \(y=f(x)\) without having any causal reasoning or meaning built into the model. They are purely prediction machines that can be used to do interpolation and sometimes very limited extrapolation. The ability of a model class to fit any function \(f\) with a model large enough is sometimes called the universal approximation property which a number of machine learning model classes have Hornik et al (1989). In practice, some models may combine components from causal/mechanistic models and black-box models. For example, a causal model can be used to define which variables depend on which other variables but then the functional form of the dependence can be approximated using a black-box model. Combining mechanistic and black-box models is sometimes known as scientific machine learning (Rackauckas et al, 2021). The reason why we are talking about different types of models here is because the types of diagnostics to use should be consistent with the goal of the analysis you are running. If the goal is to make good predictions, regardless of the model's interpretability, then we can treat the model as a black-box and mostly rely on predictive diagnostics. In this case, good predictions are sufficient even if the model doesn't make sense or if the inference process was imperfect. To give an example, in Bayesian neural networks, extremely crude approximations are often done when inferring the posterior so long as the posterior predictions are consistent with the data (Goan and Fookes, 2020; Izmailov et al, 2019). On the other hand, if the purpose of the analysis is to understand the system and to learn about the values of the parameters in your model because they are significant in and of themselves, then causal/mechanistic models should have been used and extra care must be taken to ensure that we correctly sample from the posterior distribution and that priors were not too strong. ### Crossvalidation and Model Selection In the Bayesian workflow, it is common to evaluate and compare models using their predictive power for out-of-sample data, i.e. data not used for the fitting or inference of the model parameters. One popular model evaluation metric for out-of-sample prediction accuracy is the so-called expected log predictive density (ELPD). Other common model selection criteria include various information criteria (Burnham and Anderson, 2002) such as the Widely Applicable Information Criteria (WAIC). For a discussion of the ELPD as well as other model evaluation criteria, refer to Vehtari and Ojanen (2012); Piironen and Vehtari (2017); Gneiting and Raftery (2007). 
Intuitively, the ELPD is some average measure of predictive accuracy across all posterior samples, averaged over a number of prediction tasks. Let \(\mathcal{M}\) be the pharmacometrics model with parameters \((\eta,\theta)\) that describe the data generating process of the observed data \(y\mid x\). The ELPD is defined as: \[\text{ELPD}=\int\log p(\hat{y}|\hat{x},D,\mathcal{M})\cdot p_{t}(\hat{y}|\hat{ x})d\hat{y}\] where \(\hat{y}\) is unobserved data, e.g. future data points, \(p_{t}(\hat{y}\mid\hat{x})\) is the true data generating distribution of \(\hat{y}\) (unknown in practice) and \(p(\hat{y}|\hat{x},D,\mathcal{M})\) is the posterior predictive density defined as: \[p(\hat{y}|\hat{x},D,\mathcal{M})=\int p(\hat{y}|\hat{x},\eta,\theta,\mathcal{M })\cdot p(\eta,\theta|D,\mathcal{M})d\theta\] where \(p(\eta,\theta|D,\mathcal{M})\) describes the posterior distribution of \((\eta,\theta)\) given the previously observed data \(D\) and the model \(\mathcal{M}\). Since the true data generating distribution is unknown, it is common to approximate the ELPD by an empirical distribution over the observed data. One such estimator is the log pointwise predictive density (lppd). Let \((x_{i},y_{i})\) be the \(i^{th}\) observation by some arbitrary splitting of the data \(D\) (not necessarily by subjects) into \(S\) pieces and let \((\eta^{(j)},\theta^{(j)})\) be the \(j^{th}\) sample draw from the posterior \(p(\eta,\theta|D,\mathcal{M})\), for \(j\in{1,\ldots,N}\). The lppd can be calculated using Equation 26. A shortcoming of the lppd is that it is not representative of predictive accuracy on unseen data, since \((x_{i},y_{i})\) is used both for inference on the posterior and to evaluate the model out-of-sample. #### 4.8.1 Leave-K-Out Crossvalidation Crossvalidation overcomes this problem by ensuring that \((x_{i},y_{i})\) is not used for inference on the posterior when evaluating the out-of-sample performance for \(y_{i}\mid x_{i}\). The simplest way to divide the data into in-sample and out-of-sample subsets is the leave-one-out (loo) crossvalidation where in each outer iteration, one data point is considered out-of-sample and the remaining are in-sample. The leave-one-out, log predictive density (loo-lpd) is defined in Equation 27, where \(D_{-i}\) is all data excluding \((x_{i},y_{i})\) and \((\eta^{(j)}_{-i},\theta^{(j)}_{-i})\) is the \(j^{th}\) sample draw from the posterior \(p(\eta,\theta|D=D_{-i},\mathcal{M})\). This can be generalised to leave \(K\)-out cross validation where \((x_{i},y_{i})\) is interpreted as \(K\) observations, e.g \(K\) subjects or \(K\) drug concentration observations. 
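As a minimal illustration of how the lppd estimator (the last line of Equation 26 below) is computed in practice, here is a sketch in plain Julia with a made-up log-likelihood matrix standing in for real model evaluations on posterior draws; the per-point averages over draws are evaluated stably with a log-sum-exp.

```julia
# Monte Carlo estimate of the lppd (last line of Equation 26 below), given a
# matrix of pointwise log-likelihoods where loglik[i, j] = log p(yᵢ | xᵢ, η⁽ʲ⁾, θ⁽ʲ⁾).
# In a real analysis this matrix would come from evaluating the fitted model
# on each posterior draw; here it is filled with made-up values.
using Statistics

function lppd(loglik::AbstractMatrix)
    S, N = size(loglik)                 # S data pieces, N posterior draws
    total = 0.0
    for i in 1:S
        ll = @view loglik[i, :]
        m = maximum(ll)
        total += m + log(mean(exp.(ll .- m)))   # stable log of the draw-average
    end
    return total / S
end

loglik = -1.0 .- abs.(randn(20, 4000))   # 20 data pieces, 4000 draws (illustrative)
@show lppd(loglik)
```

The loo-lpd in Equation 27 has the same structure, except that the draws used for data piece \(i\) must come from a posterior fitted without that piece, or be re-weighted as in PSIS-LOO-CV, discussed below.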
\[\begin{aligned}\text{lppd}&=\frac{1}{S}\sum_{i=1}^{S}\log p(y=y_{i}|x=x_{i},D,\mathcal{M})\\ &=\frac{1}{S}\sum_{i=1}^{S}\log\int p(y=y_{i}|x=x_{i},\eta,\theta,\mathcal{M})\,p(\eta,\theta|D,\mathcal{M})\,d\eta\,d\theta\\ &\approx\frac{1}{S}\sum_{i=1}^{S}\log\Big(\frac{1}{N}\sum_{j=1}^{N}p(y=y_{i}|x=x_{i},\eta=\eta^{(j)},\theta=\theta^{(j)},\mathcal{M})\Big)\end{aligned}\tag{26}\]

\[\begin{aligned}\text{loo-lpd}&=\frac{1}{S}\sum_{i=1}^{S}\log p(y=y_{i}|x=x_{i},D=D_{-i},\mathcal{M})\\ &=\frac{1}{S}\sum_{i=1}^{S}\log\int p(y=y_{i}|x=x_{i},\eta,\theta,\mathcal{M})\cdot p(\eta,\theta|D=D_{-i},\mathcal{M})\,d\eta\,d\theta\\ &\approx\frac{1}{S}\sum_{i=1}^{S}\log\Big(\frac{1}{N}\sum_{j=1}^{N}p(y=y_{i}|x=x_{i},\eta=\eta_{-i}^{(j)},\theta=\theta_{-i}^{(j)},\mathcal{M})\Big)\end{aligned}\tag{27}\]

#### 4.8.2 Leave-Future-K-Out Crossvalidation

When working with time-series data, it can often be more useful to evaluate models based on their ability to predict future values using nothing but past values for training. This gives rise to another variant of crossvalidation called leave-future-one-out (lfoo) crossvalidation and the lfoo-lpd, which is defined in Equation 28, where \(t\) is the minimum number of data points used for training/inference, \(D_{1:i-1}\) is the past data and \((\eta_{-(i:S)}^{(j)},\theta_{-(i:S)}^{(j)})\) is the \(j^{th}\) sample draw from the posterior \(p(\eta,\theta|D=D_{1:i-1},\mathcal{M})\), which is obtained by excluding the future data \(D_{i:S}\) from the inference.

#### 4.8.3 Crossvalidation for Hierarchical Models

When performing crossvalidation in a hierarchical model, there are multiple ways to measure the predictive power of the model. For instance, in hierarchical pharmacometric modeling the goal is to learn a population model to make predictions on new patients, while simultaneously learning subject-specific models to make future predictions for specific subjects having seen their past response to drugs. These models are useful for dose selection and dose adaptation for new or existing patients, with the objective of maximizing the therapeutic effect while avoiding toxicity.

Depending on the prediction task of interest, one may choose to treat each time observation as a data point or each entire patient/subject as a data point. If the goal is to evaluate the model's ability to predict responses for new patients, leave-one-subject-out crossvalidation should be used. Alternatively, if the goal is to evaluate the model's ability to predict future drug concentrations or any other observable time-dependent quantity the model predicts, then leaving future observations out for each subject makes more sense. This will be called leave-one-observation-out or leave-one-future-observation-out crossvalidation. The choice of what constitutes a point to leave out when doing crossvalidation affects the way the predictive likelihoods are computed:
\[p(y=y_{i}|x=x_{i},\eta=\eta^{(j)},\theta=\theta^{(j)},\mathcal{M})\]
When leaving subjects out, we are often interested in the marginal likelihood of this subject given a posterior sample draw of the population parameters \(\theta\), marginalizing out the subject-specific parameters \(\eta\). Alternatively, the conditional likelihood can also be used for some default or typical values of the subject-specific parameters, e.g. the mode of the prior distribution. To marginalize subject-specific parameters, approximate integration methods such as LaplaceI and FOCE can be used to obtain the marginal likelihood.
On the other hand, when leaving observations out for a single subject, the quantity of interest is often the conditional likelihood given each sample from the joint posterior of the population and subject-specific parameters of the individual subject, given the subject's previous data.

#### Pareto Smoothed Importance Sampling Crossvalidation

Evaluating the loo-lpd or lfoo-lpd is expensive since one needs to draw samples from \(S\) or \(S-t\) different posteriors, e.g. from \(p(\eta,\theta|D=D_{-i},\mathcal{M})\) for the loo-lpd. Typically this will be done by MCMC, e.g. the NUTS algorithm, which in spite of recent progress remains computationally expensive when the number of parameters is large and the curvature of the posterior is uneven along one or more dimensions. One approach to overcome this difficulty is the Pareto-smoothed importance sampling method for leave-one-out crossvalidation (PSIS-LOO-CV) (Vehtari et al, 2015). In PSIS-LOO-CV, MCMC is run only once on the full data. The same samples are then re-used in each outer iteration of CV but with different weights. The weights are determined using importance sampling (IS) by comparing the likelihood with one data point left out to the likelihood of the full dataset. The raw importance weights are then smoothed by fitting a generalized Pareto distribution to their tail. The smoothed weights can then be used to estimate the ELPD contribution of each data point.

Besides the ability to approximate the ELPD, PSIS-LOO-CV also provides a useful diagnostic, which is the shape parameter of the Pareto distribution fitted to the raw weights when leaving out each data point. Data points that, when removed, lead to a large shape parameter are more influential than data points which have a low shape parameter. For highly influential points where the Pareto shape parameter is higher than 0.7, the ELPD contribution of the point can be considered unreliable. In those cases, resampling from the posterior after removing the influential point is recommended.
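As a rough, self-contained illustration of these quantities, the sketch below (plain Julia, not the Pumas API) starts from a matrix of pointwise log-likelihood values evaluated at posterior draws, computes the in-sample lppd of Equation 26, and forms the raw importance-sampling approximation to the leave-one-out predictive densities that PSIS then stabilises by Pareto-smoothing the largest weights. The `loglik` matrix, its dimensions, and all names are assumptions made purely for the example, and the Pareto-smoothing step itself is deliberately omitted.

```
# loglik[i, j] = log p(y_i | x_i, η⁽ʲ⁾, θ⁽ʲ⁾, M) for data point i and posterior draw j.
# Filled with synthetic values here only so that the sketch runs end-to-end.
S, N = 20, 1000
loglik = -0.5 .* randn(S, N) .^ 2 .- 1.0

logsumexp(v) = (m = maximum(v); m + log(sum(exp.(v .- m))))

# In-sample lppd (Equation 26): average log posterior predictive density over the data.
lppd = sum(logsumexp(loglik[i, :]) - log(N) for i in 1:S) / S

# Raw importance-sampling LOO: the log-weight of draw j for point i is -loglik[i, j].
# PSIS additionally replaces the largest raw weights by quantiles of a fitted
# generalized Pareto distribution before this step; that smoothing is not shown here.
function is_loo_pointwise(ll::AbstractVector)
    logw = -ll                                       # raw log importance weights
    return logsumexp(logw .+ ll) - logsumexp(logw)   # self-normalised IS estimate of log p(y_i | D_-i)
end

loo_lpd_approx = sum(is_loo_pointwise(loglik[i, :]) for i in 1:S) / S
```

Because the raw weights can have very heavy tails, this unsmoothed estimate can be unstable; that instability is exactly what the Pareto-smoothing step and its shape-parameter diagnostic are designed to detect and mitigate.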
## 5 Example Models

Listings 29, 30, 31 and 32 are examples of some common models in pharmacometrics.

```
@model begin
  @param begin
    tvcl ~ LogNormal(log(10), 0.25)      # CL
    tvq  ~ LogNormal(log(15), 0.5)       # Q
    tvc  ~ LogNormal(log(35), 0.25)      # V1
    tvp  ~ LogNormal(log(105), 0.5)      # V2
    tvka ~ LogNormal(log(2.5), 1)        # ka
    σ    ~ truncated(Cauchy(), 0, Inf)   # sigma
  end
  @pre begin
    CL = tvcl
    Vc = tvc
    Q  = tvq
    Vp = tvp
    Ka = tvka
  end
  @dynamics begin
    Depot'      = -Ka * Depot
    Central'    =  Ka * Depot - (CL + Q) / Vc * Central + Q / Vp * Peripheral
    Peripheral' =  Q / Vc * Central - Q / Vp * Peripheral
  end
  @derived begin
    cp   := @. Central / Vc
    conc ~ @. LogNormal(log(cp), σ)
  end
end
```
Listing 29: Single Subject PK Model

```
@model begin
  @param begin
    tvcl ~ LogNormal(log(10), 0.25)          # CL
    tvq  ~ LogNormal(log(15), 0.5)           # Q
    tvc  ~ LogNormal(log(35), 0.25)          # V1
    tvp  ~ LogNormal(log(105), 0.5)          # V2
    tvka ~ LogNormal(log(2.5), 1)            # ka
    σ    ~ truncated(Cauchy(0, 5), 0, Inf)   # sigma
    C    ~ LKJCholesky(5, 1.0)
    ω    ~ Constrained(
      MvNormal(zeros(5), Diagonal(0.4^2 * ones(5))),
      lower = zeros(5),
      upper = fill(Inf, 5),
      init = ones(5),
    )
  end
  @random begin
    ηstd ~ MvLogNormal(I(5))
  end
  @pre begin
    η  = ω .* (getchol(C).L * ηstd)
    CL = tvcl * η[1]
    Q  = tvq * η[2]
    Vc = tvc * η[3]
    Vp = tvp * η[4]
    Ka = tvka * η[5]
  end
  @dynamics begin
    Depot'      = -Ka * Depot
    Central'    =  Ka * Depot - (CL + Q) / Vc * Central + Q / Vp * Peripheral
    Peripheral' =  Q / Vc * Central - Q / Vp * Peripheral
  end
  @derived begin
    cp   := @. Central / Vc
    conc ~ @. LogNormal(log(cp), σ)
  end
end
```
Listing 30: Population PK Model

```
@model begin
  @param begin
    # ... (the parameter block of this listing is missing from the source) ...
  end
  @random begin
    ηKa   ~ LogNormal(0.0, sqrt(ω²Ka))
    ηKe   ~ LogNormal(0.0, sqrt(ω²Ka))
    ηVd   ~ LogNormal(0.0, sqrt(ω²Vd))
    ηn    ~ LogNormal(0.0, sqrt(ω²n))
    ηδ    ~ LogNormal(0.0, sqrt(ω²δ))
    ηc    ~ LogNormal(0.0, sqrt(ω²c))
    ηEC50 ~ LogNormal(0.0, sqrt(ω²EC50))
  end
  @pre begin
    # ... (the remainder of this listing is missing from the source) ...
  end
end
```
Listing 31: (the caption and most of this listing are missing from the source; only the @random block survives)

```
@model begin
  @param begin
    λ₁ ~ LogNormal(0.0, 2.0)   # basal hazard
    β  ~ LogNormal(0.0, 2.0)   # fixed effect DOSE
    ω  ~ LogNormal(0.0, 2.0)   # inter-subject variability
  end
  @random begin
    η ~ LogNormal(0.0, ω)
  end
  @covariates DOSE
  @pre begin
    _λ₁ = λ₁ * η          # basal hazard with inter-subject variability
    _λ₀ = _λ₁ * β^DOSE    # total hazard
  end
  @vars begin
    λ = _λ₀
  end
  @dynamics begin
    Λ' = λ
  end
  @derived begin
    dv ~ @. TimeToEvent(λ, Λ)
  end
end
```
Listing 32: Time-To-Event Model

## 6 Conclusion

In this work, we presented a comprehensive Bayesian analysis workflow using Pumas. All the syntax and relevant theory were presented following an intuition-first approach as much as possible, with numerous cross-links. If you are an existing Pumas user and you have further questions, you can reach out to us via the Pumas Discourse platform (discourse.pumas.ai). You can also find more focused tutorials and complete scripts on the Pumas tutorials website (tutorials.pumas.ai) for your continued learning. If, after reading this paper, you would like to read and learn more about Bayesian statistics, the following are some excellent resources that can be used to further your learning:

1. Bayesian Data Analysis (Gelman et al, 2013)
2. Statistical Rethinking: A Bayesian Course with Examples in R and Stan (McElreath, 2020)
3. Regression and Other Stories (Gelman et al, 2020)
4. Data Analysis Using Regression and Multilevel/Hierarchical Models (Gelman and Hill, 2006)
5. Probabilistic Machine Learning: An Introduction (Murphy, 2022)
6. Probability Theory: The Logic of Science (Jaynes, 2003)

## 7 Acknowledgements

We would like to acknowledge all of the reviewers of the early drafts of this work for their valuable feedback. In particular, we would like to thank **Haden Bunn** (Pumas-AI Inc.), **Joga Gobburu** (Pumas-AI Inc. and the University of Maryland Baltimore), **Yoni Nazarathy** (Pumas-AI Inc. and the University of Queensland), **Vaibhav Dixit** (Pumas-AI Inc.), **Russell Tsuchida** (CSIRO Data61), **Anastasios Panagiotelis** (University of Sydney) and **Mutaz Jaber** (Gilead Sciences Inc.).
This paper provides a comprehensive tutorial on Bayesian inference in pharmacometrics using the Pumas workflow. We give a brief motivation for Bayesian inference in pharmacometrics and highlight the limitations of existing software that Pumas addresses. We then describe the steps of a standard Bayesian workflow in pharmacometrics, illustrated with code snippets and examples. These include model definition, prior selection, sampling from the posterior, prior and posterior simulations and predictions, counterfactual simulations and predictions, convergence diagnostics, visual predictive checks, and finally model comparison and crossvalidation. Finally, the background and intuition behind many advanced concepts in Bayesian statistics are explained in simple terms, including important ideas and caveats that users should keep in mind when running a Bayesian analysis. The algorithms, code, ...
2309.04976
AVARS -- Alleviating Unexpected Urban Road Traffic Congestion using UAVs
Reducing unexpected urban traffic congestion caused by en-route events (e.g., road closures, car crashes, etc.) often requires fast and accurate reactions to choose the best-fit traffic signals. Traditional traffic light control systems, such as SCATS and SCOOT, are not efficient as their traffic data provided by induction loops has a low update frequency (i.e., longer than 1 minute). Moreover, the traffic light signal plans used by these systems are selected from a limited set of candidate plans pre-programmed prior to unexpected events' occurrence. Recent research demonstrates that camera-based traffic light systems controlled by deep reinforcement learning (DRL) algorithms are more effective in reducing traffic congestion, in which the cameras can provide high-frequency high-resolution traffic data. However, these systems are costly to deploy in big cities due to the excessive potential upgrades required to road infrastructure. In this paper, we argue that Unmanned Aerial Vehicles (UAVs) can play a crucial role in dealing with unexpected traffic congestion because UAVs with onboard cameras can be economically deployed when and where unexpected congestion occurs. Then, we propose a system called "AVARS" that explores the potential of using UAVs to reduce unexpected urban traffic congestion using DRL-based traffic light signal control. This approach is validated on a widely used open-source traffic simulator with practical UAV settings, including its traffic monitoring ranges and battery lifetime. Our simulation results show that AVARS can effectively recover the unexpected traffic congestion in Dublin, Ireland, back to its original un-congested level within the typical battery life duration of a UAV.
Jiaying Guo, Michael R. Jones, Soufiene Djahel, Shen Wang
2023-09-10T09:40:20
http://arxiv.org/abs/2309.04976v1
# AVARS - Alleviating Unexpected Urban Road Traffic Congestion using UAVs ###### Abstract Reducing unexpected urban traffic congestion caused by en-route events (e.g., road closures, car crashes, etc.) often requires fast and accurate reactions to choose the best-fit traffic signals. Traditional traffic light control systems, such as SCATS and SCOOT, are not efficient as their traffic data provided by induction loops has a low update frequency (i.e., longer than 1 minute). Moreover, the traffic light signal plans used by these systems are selected from a limited set of candidate plans pre-programmed prior to unexpected events' occurrence. Recent research demonstrates that camera-based traffic light systems controlled by deep reinforcement learning (DRL) algorithms are more effective in reducing traffic congestion, in which the cameras can provide high-frequency high-resolution traffic data. However, these systems are costly to deploy in big cities due to the excessive potential upgrades required to road infrastructure. In this paper, we argue that Unmanned Aerial Vehicles (UAVs) can play a crucial role in dealing with unexpected traffic congestion because UAVs with onboard cameras can be economically deployed when and where unexpected congestion occurs. Then, we propose a system called "AVARS" that explores the potential of using UAVs to reduce unexpected urban traffic congestion using DRL-based traffic light signal control. This approach is validated on a widely used open-source traffic simulator with practical UAV settings, including its traffic monitoring ranges and battery lifetime. Our simulation results show that AVARS can effectively recover the unexpected traffic congestion in Dublin, Ireland, back to its original uncongested level within the typical battery life duration of a UAV. UAVs, Deep Reinforcement Learning, Traffic Light Control, Unexpected Congestion ## I Introduction Many governments, policy makers, tech giants and SMEs are joining their efforts to support the vision towards achieving Net-Zero transport by 2050 (e.g., in the UK & Ireland). This requires revolutionary solutions, supported by adequate policy changes, to a number of problems facing the wide adoption and/or deployment of cutting-edge technologies such as transport electrification, and the use of drones to offload ground transport, etc. Achieving the above objective also requires innovative solutions to better control traffic congestion, especially after the occurrence of unexpected events (e.g., car crashes and unplanned road work etc.) on the road. Unexpected (aka "non-recurrent") urban road traffic congestion is caused by such en-route events. It is challenging to accurately predict the traffic impact of such events at any possible time and location due to the lack of such historical data. Therefore, an effective approach to reduce such congestion should react quickly after the event occurrence, rather than predicting the event impact in advance. This fast reaction is effective in reducing additional delays experienced by drivers and the resulting additional emissions. UAVs are renowned for their rapid deployment and efficiency in supporting emergency management (e.g., life-critical services [1], situations where the communication infrastructure is destroyed, subject to cyber attacks or overloaded [2, 3]. UAVs have also been used for road traffic monitoring on urban and highway roads [4]. 
However, the use of UAVs for emergency management in the transportation domain (e.g., reducing the impact of non-recurrent congestion), has not been sufficiently studied in the literature, thus we argue in this paper that UAVs have a great potential to be explored in this context. Additionally, although existing adaptive traffic light control systems (e.g., SCATS [5]) work well for the recurrent urban traffic, they cannot be effective for non-recurrent congestion due to their low-frequency and coarse-grained traffic monitoring detectors (i.e., induction loops collect traffic volumes in minutes). Recent DRL-based traffic light signal control can be promising [6]. However, the convergence of the DRL solution remains uncertain when applied in a fast-changing environment. Moreover, the practical deployment may need excessive hardware and software to upgrade existing systems (i.e., install cameras to all major road intersections and deploy DRL-based software on the road side units and central servers). This makes it unnecessary for non-recurrent congestion reduction which only happens very occasionally. To fill the above-mentioned gaps, we propose a system, dubbed AVARS, which is also open-sourced1 and its main contributions are summarized as follows: Footnote 1: [https://github.com/Guojjyj/AVARS](https://github.com/Guojjyj/AVARS) * **AVARS uses UAVs to reduce non-recurrent congestion, rather than just monitoring normal traffic.** To the best of our knowledge, this is the first work that leverages UAVs to reduce non-recurrent congestion in urban traffic, while most others are for traffic monitoring, which we think is not practical as UAVs can only sustain for less than one hour compared to 24/7 deployed cameras in the fixed location. UAVs can be dispatched to congested intersections in a few minutes and provide high-frequency high-resolution data [7], which are well-suited features for recovering from non-recurrent road congestion within a short duration. * **AVARS is a DRL-based system that contains many practical deployment considerations.** AVARS is carefully designed to achieve the minimum possible upgrade to the existing systems. This includes embedding low-cost wireless communication modules for traffic light controllers only (not vehicles). Thus, it avoids strong assumptions about having full or mix-autonomy traffic with vehicle-to-everything (V2X) communication infrastructures. AVARS also has a practical DRL model (i.e., newly designed state, action, and reward) as it can obtain a more stable convergence under fast-changing road traffic. * **The effectiveness of AVARS is validated on a widely-used simulator with a realistic urban scenario.** The simulation study shows that AVARS can achieve the largest reduction in travel time, fuel consumption, and CO\({}_{2}\) emission, when unexpected congestion occurs in a city center area of Dublin, Ireland. Compared methods include SCATS [5] (no DRL, no UAV), and IntelliLight [6](DRL but no UAV). Additionally, AVARS can recover non-recurrent congestion back to the original traffic within the 30-minute operation of UAVs, which is well within the normal UAV's battery life. If considering more realistic factors impacting battery duration, only 10-minute traffic signal control by AVARS is sufficient to mitigate congestion significantly. 
## II AVARS - System Description ### _Assumptions_ **Communications:** The communications required in our AVARS are only between a UAV and a traffic light controller (i.e., vehicles are not involved) that occur every second (i.e., to be responsive to en-route events). The communication technologies used could be any easy to implement ones such as 4/5G or IEEE 802.11p. In particular, the UAV send the "action" to traffic lights to either switch to the next phase or keep the current phase. Conversely, the traffic light controller can send current traffic light phase back to the UAV as part of current observations (states). This is for the UAV to decide about the recommended action in the next time interval. We assume the communication is entirely reliable without any packet loss and delay. **Computations:** Most computation loads are in the centralized Traffic Management Center (TMC). Firstly, TMC uses simulators to mimic possible road closures to collect data on the traffic impact. This is necessary for AVARS to select which intersections (i.e. likely the most badly influenced by the events) that UAVs must control after the occurrence of road events. Secondly, the pre-training of DRL models also happens in this simulation. Thirdly, after each assigned UAV completes its task, it brings back to TMC the realistic DRL trajectory data (i.e., state, action, reward, next state...) when UAV is operating AVARS. The trajectory data is also used for pre-training our DRL models. In addition to the computation loads at the TMC side, UAVs also need to do the following two computation tasks: extract the traffic information from their cameras; and control the traffic light controllers using a pre-trained DRL policy function with the real-time state information being the input. **UAV Placement:** Upon the receipt of an intersection location (i.e., where traffic events occurred) from the TMC, it is assumed that a UAV can independently transit both safely and efficiently across the environment space. Upon arrival at the intersection location the UAV can maintain both its position and altitude, to deliver a visual representation of the intersection allowing for traffic information extraction. ### _Design objectives_ We design AVARS to meet the following objectives: * **Be practical:** The system should have a practical deployment plan to meet the limitations of UAVs and DRL approaches. The cameras of UAVs have limited monitoring range. This implicates the maximum length of the road for UAV traffic monitoring should be around 220 meters [7]. Moreover, the limited battery life of a UAV requires that the congestion should be effectively alleviated before the energy runs out (e.g., within 40 minutes since a UAV starts to operate [8]). Additionally, the practical requirement of DRL refers to a stable convergence performance within reasonable amount of iterations. A stable DRL approach can potentially save computation resources and be effective in more general scenarios. * **Be effective:** The system should be effective to recover the unexpectedly congested traffic within a limited time duration (i.e., UAV battery lifetime). The design of DRL models in terms of state, action, and reward is of vital importance to achieve this objective. The verification of this objective will be done by comparing the traffic evolution when no road closure occurs, as well as the reduction in travel time, fuel consumption, and CO\({}_{2}\) emissions. 
### _System components_ We clarify the responsibilities of each different AVARS component as follows: * **UAVs**: Each UAV is responsible for data collection (i.e., DRL state for decision making and DRL trajectories for pre-training) and sending actions (i.e., switching or not to the next traffic light phase) according to the learned DRL policy function for controlling a single intersection. * **Traffic lights:** Each traffic light only needs to receive the DRL action from a UAV to set its phase in the next time interval, and reports back to this UAV about the current phase as a part of DRL state. * **TMC**: Based on numerous simulation results, TMC needs to select a number of key intersections for UAVs to control when unexpected events occur. TMC also needs to train the proposed DRL model using DRL trajectories collected after UAVs complete their AVARS field tasks. Unlike most of existing works in the literature [9], our AVARS does not rely on any vehicle equipped with advanced technologies such as autonomous or connected vehicles. This saves a large potential cost to upgrade existing infrastructure and removes strong assumptions on collecting vehicle driving information in recent DRL-based traffic light signal control approaches [10]. ### _System flow_ Suppose a road segment within an urban road network is suddenly closed, due to an incident, leading to queues of vehicles, reduced traffic speed and several affected vehicles needing to reroute to bypass the closed road. This will migrate the traffic demand to several other signalized intersections, which causes unexpected traffic congestion. To efficiently handle such scenarios, AVARS operates as follows (also shown in Fig.1): 1. Once a road is closed, the TMC is informed through existing traffic monitoring technologies. 2. The TMC suggests several signalized intersections for UAVs to control (six intersections highlighted by red circles in the environment of Fig.1). Each UAV is assigned to a designated intersection. For each selected intersection, all of its directly connected roads should be covered by the monitoring range of the UAV camera. 3. When UAVs arrive at their assigned intersections, each UAV starts: a) collecting the traffic information, then converting them to the "DRL state"; b) sending the "DRL action" to adjust the traffic signal according to the pre-trained DRL policy function; and c) calculating the "DRL reward" based on real-time traffic information. These 3 steps iterate every second until the preset operation duration is reached or the UAV enters a low battery state, both triggering a UAV to return to the TMC. Each UAV returns to TMC with the collected DRL trajectory (state, action, reward, new state) acquired during this operation interval. 4. The DRL policy function is updated using newly collected DRL trajectory data on the TMC cloud servers. ### _DRL models_ * State: For a specific intersection that is controlled by a UAV, the state involves the current signal phase of the traffic lights, the occupancy (i.e., the ratio of the total length of vehicles on a specific road to the length of that road) and average vehicle speed on each of the roads directly linked to this intersection. This state information is collected from the UAV's sensors (e.g., camera) and extracted after onboard data processing [11]. 
* Action: The action that each UAV sends to its controlled traffic light is a binary set, in which "1" represents switching to the next phase and "0" means keeping the current phase unchanged for the next time interval.
* Reward: Given the DRL state and action at intersection \(j\), as described in Eq.1, the reward \(r_{j}\) is calculated as the negative value of the maximum road occupancy among all incoming roads \(I\) connected to the intersection \(j\). The reward aims to evacuate traffic on the road that has the highest occupancy as soon as possible. \[r_{j}=-\max(O_{i}),\;i\in I\] (1) The road occupancy \(O_{i}\) for each incoming road is also involved in the UAV state.

To get a stable convergence with a moderate number of iterations, we do not recommend any multi-agent DRL algorithm for AVARS so far [9]. Algorithm 1 shows one complete iteration of the AVARS training process to obtain the DRL policy function taken by UAVs to control traffic lights, in which the DRL algorithm can be either Deep Q-learning (DQN) or Proximal Policy Optimization (PPO) [12]. DQN is commonly used in existing traffic light signal control systems [6, 10], having advantages in discrete action spaces, i.e., a binary set of traffic signal switching in AVARS. The off-policy method is efficient in updating the finite Q values on a one-step temporal difference for a higher reward. However, it is also volatile in a fast-changing environment, especially with complex traffic information, which aggravates training convergence. Unlike DQN, PPO can reduce the large variance of policy updates and facilitate training convergence. PPO can also improve sample efficiency by multiple epochs of mini-batch updates.

### _Summary_

We summarize how the design of AVARS achieves our first objective - _be practical_. Firstly, AVARS avoids large-scale upgrades of hundreds of traffic light controllers by quickly assigning UAVs to very few intersections that are most impacted by the en-route events. Secondly, AVARS uses UAVs' sensors to obtain real-time traffic information, which is much more flexible than embedding induction loops on the road. Thirdly, AVARS does not consider vehicle-related upgrades (e.g., V2X communication capabilities) and requires minimum upgrades of the traffic lights (e.g., 5G or IEEE 802.11p communication technology) to communicate with UAVs instead of the costly wired cable communications used in today's adaptive traffic light systems. Last but not least, our DRL model is simple enough to calculate. Additionally, the heavy-weighted computation for training DRL only occurs in the centralized TMC, which is practical to implement using existing cloud-based services, rather than distributing these computation loads to multiple roadside units or resource-constrained UAVs.

Fig. 1: Illustration of AVARS system flow. The traffic environment is a subnet of Dublin city center road network. The highlighted road is the closed road causing unexpected congestion in the surrounding area. UAVs of AVARS control the six red-circled intersections in this scenario, which are influenced most by the road closure.

## III Evaluation Results and Analysis

### _Unexpected urban traffic congestion scenarios_

The implementation of AVARS is based on FLOW 2, which is a framework integrating RLlib (DRL library) and SUMO (traffic simulator). The testing map is a subnet of Dublin city center road network around the River Liffey, covering approximately 1 square kilometer, as shown in Fig.1.
The scenario is extracted from the open data in [13] to simulate the real-world traffic in Dublin city, and the traffic generation lasts 45 minutes with 1168 vehicles in our experiments. We close a road in the center of the testing scenarios for 30 minutes from the 10th minute of the simulation, at which point the vehicles that would have entered the closed road are required to reroute. The radius of the area managed by the TMC at Dublin city center is 5km, given the maximum flying speed of 80 km/h [11], a UAV is capable of direct flight to any intersection within the control region in 3.75 minutes. However, the direct flight may be limited by the urban environment, hence we assume the start time of UAV control after road closure is 5 minutes. Footnote 2: [https://flow-project.github.io](https://flow-project.github.io) ### _Compared scenarios_ * **Original**: This scenario contains the regular traffic for a given urban region. It is a reference to verify if the unexpected congestion is recovered. The traffic light signals are static in the scenario, operating on a fixed cycle plan. The duration of a signal plan ranges from 90 seconds to 113 seconds for the given heterogeneous intersections. Green light phases last from 27 seconds to 50 seconds. Every time the green light switches, a 3-second yellow is followed to clear the intersection, and some signal plans combine a 5-second all-red light after the yellow light. * **Congestion**: This scenario contains unexpected congestion due to one road closure for the same urban region. The traffic light signals are static, the same as the settings in the Original scenario. * **SCATS [5]**: This scenario contains the simulation of the existing adaptive traffic light controller, SCATS [5], which neither use DRL nor UAVs. The embedded induction loops detect the duration of vehicles passing through the intersection during the green light phase, which can estimate the effective green time. The sum of the ratio of effective green time for each green light phase is the degree of saturation of one signal plan. SCATS provides multiple pre-defined signal plans and selects the signal plan with the minimum degree of saturation [9]. * **IntelliLight [6]**: This scenario contains the DRL-based traffic light signal control model using DQN but not using UAVs. Compared with AVARS, more complicated traffic metrics, such as queue length, number of vehicles, and waiting time of vehicles, are included in state and reward design. Besides, the state is also expanded with vehicles' positions and traffic light phases collected from the image representation. IntelliLight controls the signal switching to reduce queue length and travel delays. ### _Result analysis_ **Is DRL training of AVARS more stable to converge?** As shown in Fig.2, after 150 iterations, the training of AVARS(PPO) terminates at a higher average episode reward and smaller standard deviation (approx. -250 and 99) than the initial stage (approx. -766 and 430), which are calculated over 18 parallel episodes for each iteration. It also demonstrates the continuous positive updates of the trained policy. The DQN training of AVARS gets similar results, while the policy update of DQN is much faster, resulting in earlier convergence. Instead, the reward improvement of IntelliLight shown in Fig.3 varies widely across different DRL algorithms. PPO can get a better policy with a higher average episode reward (about -2860) after training compared to the DQN reward (about -4050). 
However, the standard deviation of either PPO or DQN during training is large, probably due to the complex design of states and rewards to evaluate traffic. Meanwhile, the complex design of IntelliLight(DQN) results in the training time being approximately three times longer than IntelliLight(PPO). Hence, the benefit of AVARS in terms of states and reward design can also be demonstrated by the above experiments on AVARS and IntelliLight using DQN or PPO. AVARS, with the concise state and reward design, can take advantage of DQN or PPO to get an expected policy to recover unexpectedly congested traffic, which is in line with the second system design objective - _effectiveness_. Additionally, we would simplify the following experiments with only AVARS(PPO) and IntelliLight(DQN) because PPO is more stable than DQN in traffic improvement based on IntelliLight, while DQN is the default DRL algorithm for IntelliLight. **Is AVARS effective?** Vehicles generated for the 45 minutes (i.e., 2700 timesteps), so the number of vehicles running in the simulated region tends to gradually increase during this time interval, as illustrated in the Original scenario of Fig.4. The road closure (the Congestion scenario) has resulted in a sharp increase in running vehicles, up to more than 250. However, after UAVs effectively control the selected intersections for approximately 10 minutes, the number of running vehicles decreases to a level lower than the Original scenario at the same timestep. The largest difference between the Original scenario and our proposed method is 61 vehicles around the 2500th timestep. SCATS cannot evidently mitigate the congested traffic, and in this scenario, the number of running vehicles shows only minor deviations from the Congestion scenario. The results of IntelliLight are closely aligned to that of the Original scenario. Hence, Fig.4 illustrates that AVARS can alleviate the unexpected congestion. Furthermore, we analyze the main traffic statistics including average travel time, fuel consumption, and CO\({}_{2}\) emissions (i.e., we refer to these metrics as "main traffic metrics" in the following descriptions), as shown in Table.I. The results are the average of 10 episodes that terminated after all vehicles completed the trip. Our proposed system AVARS produces the best results. Even compared to the Original scenario, where no road closure is in place, all traffic metrics are significantly reduced. By contrast, the negative impact of one critical road closure is evident, with the value of these main traffic metrics more than doubling up to 126%. Congestion causes massive delays, leading to longer travel times. All traffic light signal control methods, SCATS, IntelliLight, and AVARS, can alleviate the unexpected traffic congestion. Although SCATS reduces about 30% on these main traffic metrics over the Congestion scenario, the results are still inferior to that of the Original scenario. The results show that SCATS is not efficient in alleviating unexpected traffic congestion. In comparison, both DRL-based methods achieve a substantial improvement in average travel time, fuel consumption, and CO\({}_{2}\) emissions, while our system AVARS achieves the largest reduction, 67%, 72%, and 72%, respectively. The travel time statistics for all vehicles also demonstrates the benefit of our system for the vehicles influenced by the unexpected congestion, as shown in Table.II. These vehicles have experienced more prolonged travel times due to the congestion. 
In addition to the shortest average travel time, AVARS also has the smallest standard deviation, demonstrating a general reduction in vehicle congestion delays. More than half of the vehicles can finish their trips in about 200 seconds, which is better than the Original scenario. In addition, the longest travel time is decreased from 4599 seconds in the Congestion scenario to 554 seconds. Vehicles can reach their destination earlier, even when travelling longer distances due to rerouting.

Fig. 2: Evolution of the average episode reward during the training process of AVARS, computed over 18 parallel episodes at each iteration. As for AVARS using PPO, the reward ranges from approx. -766 to -250, and the standard deviation decreases from about 430 to 99.

Fig. 3: Evolution of the average episode reward during the training process of IntelliLight, computed over 18 parallel episodes at each iteration. The reward of IntelliLight(DQN) ranges from approx. -6800 to -4050, and the standard deviation decreases from about 3400 to 2200.

Fig. 4: Evolution of the number of running vehicles in the chosen urban region over the simulated timesteps of an episode under AVARS and other compared scenarios. Road closure starts from the 600th timestep and lasts 30 mins. UAVs used in AVARS start to control traffic light signals from the 900th timestep.

**How fast can AVARS alleviate the congestion?** Time periods of 10, 20 and 30 minutes are selected as operational AVARS durations, all shorter than 40 minutes, which represents the typical UAV battery lifetime [8]. Fig.5 suggests that, although the longer the UAV operates the greater the traffic improvement, just 10 to 20 minutes of UAV operation in AVARS should be sufficient to recover the congestion back to its original state.

## IV Conclusion and Future Work

This paper proposes AVARS: a DRL-based traffic light control system using UAVs to reduce unexpected urban road traffic congestion. AVARS takes advantage of UAVs' rapid deployment and rich traffic monitoring capabilities to react to these events in a timely manner. AVARS also supports state-of-the-art DRL methods to deliver efficient control of traffic light signals, without the need for a costly upgrade of traffic light controllers. We demonstrated that AVARS can effectively alleviate congestion and achieve notable reductions in travel time, fuel consumption, and CO\({}_{2}\) emissions in an urban scenario. Future work may include designing an extended version of AVARS in which multiple UAVs cooperate using multi-agent DRL, while addressing its high complexity of cooperation and convergence issues.
Unexpected urban traffic congestion caused by en-route events (e.g., road closures, car crashes, etc.) often requires fast and accurate reactions to choose the best-fit traffic signals. Traditional traffic light control systems such as SCATS and SCOOT are not efficient because the traffic data provided by their induction loops has a low update frequency. Moreover, these systems select their signal plans from a limited set of candidate plans pre-programmed before unexpected events occur. Recent research shows that camera-based traffic light systems controlled by deep reinforcement learning (DRL) algorithms are more effective in reducing traffic congestion. However, these systems are costly to deploy because of the upgrades they require to urban road infrastructure. In this paper, we argue that UAVs (unmanned aerial vehicles) can play a crucial role in dealing with unexpected traffic congestion, because UAVs with onboard cameras can be economically ...
2303.17789
FONT: Flow-guided One-shot Talking Head Generation with Natural Head Motions
One-shot talking head generation has received growing attention in recent years, with various creative and practical applications. An ideal natural and vivid generated talking head video should contain natural head pose changes. However, it is challenging to map head pose sequences from driving audio since there exists a natural gap between audio-visual modalities. In this work, we propose a Flow-guided One-shot model that achieves NaTural head motions(FONT) over generated talking heads. Specifically, the head pose prediction module is designed to generate head pose sequences from the source face and driving audio. We add the random sampling operation and the structural similarity constraint to model the diversity in the one-to-many mapping between audio-visual modality, thus predicting natural head poses. Then we develop a keypoint predictor that produces unsupervised keypoints from the source face, driving audio and pose sequences to describe the facial structure information. Finally, a flow-guided occlusion-aware generator is employed to produce photo-realistic talking head videos from the estimated keypoints and source face. Extensive experimental results prove that FONT generates talking heads with natural head poses and synchronized mouth shapes, outperforming other compared methods.
Jin Liu, Xi Wang, Xiaomeng Fu, Yesheng Chai, Cai Yu, Jiao Dai, Jizhong Han
2023-03-31T03:25:06
http://arxiv.org/abs/2303.17789v1
# Font: Flow-Guided One-Shot Talking Head Generation with Natural Head Motions ###### Abstract One-shot talking head generation has received growing attention in recent years, with various creative and practical applications. An ideal natural and vivid generated talking head video should contain natural head pose changes. However, it is challenging to map head pose sequences from driving audio since there exists a natural gap between audio-visual modalities. In this work, we propose a Flow-guided One-shot model that achieves NaTural head motions(FONT) over generated talking heads. Specifically, we design a probabilistic CVAE-based model to predict head pose sequences from driving audio and source face. Then we develop a keypoint predictor that produces unsupervised keypoints describing the facial structure information from the source face, driving audio and pose sequences. Finally, a flow-guided occlusion-aware generator is employed to produce photo-realistic talking head videos from the estimated keypoints and source face. Extensive experimental results prove that FONT generates talking heads with natural head poses and synchronized mouth shapes, outperforming other compared methods. Jin Liu\({}^{1,2}\), Xi Wang\({}^{1}\), Xiaomeng Fu\({}^{1,2}\), Yesheng Chai\({}^{1}\),Cai Yu\({}^{1,2}\),Jiao Dai\({}^{1*}\),Jizhong Han\({}^{1}\)+\({}^{1}\)Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China \({}^{2}\)School of Cyber Security, University of Chinese Academy of Sciences, Beijing, China Footnote †: Corresponding authors. This research is supported in part by the National Key Research and Development Program of China (No. 2020AAA0140000), and the National Natural Science Foundation of China (No. 61702502). Talking Head Generation, Generative Model, Audio Driven Animation ## 1 Introduction Given one source face and driving audio, one-shot talking head generation aims to synthesize a talking head video with reasonable facial animations corresponding to the driving audio [1]. This task receives growing attention since it can be used in a wide range of multimedia applications, e.g. video dubbing, digital avatar animation and short video creation. Some methods [2, 3] have been proposed to edit the mouth area to achieve lip synchronization. However, they neglect the modeling of head motions, thus generating unnatural talking heads that are far from satisfactory from human observation and practical applications. Therefore, researchers turn to focus on generating talking heads with head pose changes. Recent works [4, 5] choose to introduce an extra auxiliary pose video that guides the head motion changes in the generated talking heads. This formula limits the generalization since it is tedious to find another pose video in one-shot scenario. Hence, some methods try to predict head pose sequence from driving audio. It is challenging to map driving audio signal into head pose sequence, since there exists natural gap between visual and audio modalities. A great many works [6, 1, 7, 8] are proposed to infer head motions from driving audio and source face. However, they neglect the uncertainty in the head pose prediction task and fail to produce natural head poses. In fact, the mapping from driving audio signal to head pose sequence is inherently a one-to-many problem. In real life, people can behave differently in head poses even speaking the same content. 
Previous methods adopt deterministic models like LSTM or MLP to perform the task, which fundamentally ignore the uncertainty lying between audio signals and head poses. Furthermore, the lack of facial structure modeling in their generation process also leads to blurry artifacts and poor lip-sync quality. To solve the above problem, we propose a **F**low-guided **O**ne-shot talking head generation network with **NaTural** head motions (FONT). The overall framework is shown in Fig. 1. The driving pose sequences come from the well-designed head prediction module. Detailedly, a probabilistic CVAE-based network is adopted to generate head pose sequences from driving audio and source face, during which the structural similarity loss is imposed instead of MSE loss. The above operations model the uncertainty and the ambiguous Figure 1: **Overview of the proposed method.** correspondences between audio and head pose modalities, contributing to natural driving head pose sequences. Then inspired by image animation work FOMM [9], we predict unsupervised keypoints from the source face, driving audio and poses to model the facial structure location. Finally, the occlusion-aware flow-guided generator produces motion flow to indicate the local facial texture variance and generates new talking heads with natural head poses. Moreover, to improve the lip-sync quality, a pre-trained lip-sync discriminator is utilized during the training process. Our contributions are as follows: (1) We develop a new flow-guided one-shot talking head generation framework that produces natural head motions. (2) A probabilistic CVAE-based network is designed to generate natural head poses from driving audio and source face. (3) We present a flow-guided occlusion-aware generator to produce keypoint-based motion flow indicating facial structure, thus generating natural talking heads. (4) Extensive experimental results prove that our proposed framework achieves the state-of-the-art level compared to other methods. ## 2 Related Work **One-shot Talking Head Generation.** One-shot talking head generation [10] has long been a significant research topic in the computer vision field. Speech2Vid [11] generates talking faces via an encoder-decoder structure and a refinement module. DAVS [12] and ATVG [2] further improve the quality using disentangled audio-visual representation and external structural information guidance. Wav2Lip [3] applies a pre-trained lip-sync discriminator to improve the generation results. Nevertheless, the above methods merely edit the mouth area and leave other facial regions unchanged, producing unnatural and less realistic talking head videos. Full-frame talking head generation produces new facial areas but also the neck part of the person, together with the background. MakeitTalk [13] predicts content and speaker-aware displacement on facial landmarks to guide the talking face generation process. To improve the realness, some methods focus on talking heads with natural head poses [4, 7]. However, their lack of face structural modeling and the mouth shape constraint causes identity mismatch and poor lip synchronization performance. However, FONT utilizes motion flow as facial structure information and the lip-syn discriminator to solve the above problem. **Head Pose Control.** Since there is no explicit head pose information contained in the driving audio signal, it is challenging to achieve head pose control and generate talking heads with natural head motions. 
Early methods [3, 13] focus on mouth shape accuracy and produce almost still talking heads. Later, PC-AVS [4] first proposes to rely on an auxiliary pose video to obtain head pose guidance. This limits the generalization of the task, since obtaining a long pose video is cumbersome in the one-shot scenario. Several methods turn to inferring pose sequences directly from audio. Audio2Head [7] and AVCT [8] design motion-aware LSTM-based networks to predict head motions, while HDTF [1] utilizes a Multilayer Perceptron to predict head pose coefficients of a morphable face model [14]. However, the correspondence between audio and head poses contains uncertainty, and predicting head poses from audio is actually an ill-posed problem. In real life, people may adopt different poses even when speaking the same content. Hence, instead of utilizing _deterministic_ models like other methods, we choose a _probabilistic_ CVAE-based [15] network to model the uncertainty in pose generation.

## 3 Methodology

The overview of the proposed method is shown in Fig. 1. The driving pose sequence is predicted first. Then the source face \(I_{src}\), driving audio and driving pose sequence are fed into the Keypoint Predictor \(E_{kp}\) to predict unsupervised driving keypoints. Finally, the driving keypoints and the source keypoints from \(I_{src}\) are taken as inputs to the Generator, which produces the final talking head videos.

### Head Pose Prediction Module

To generate talking heads with natural head motions, natural head pose sequences should be predicted first. Different from previous work [6, 7, 8], which adopts deterministic models like LSTMs and traditional GANs to generate pose sequences, we design a VAE-based probabilistic model inspired by CVAE [15]. Pose generation is actually an uncertain, ill-posed problem since people may behave differently when speaking the same corpus. Hence, a probabilistic model is more suitable for this task. The head pose prediction module of FONT is shown in Fig. 2.

Specifically, a 6-dim vector (i.e., 3 for rotation, 1 for scale and 2 for translation) is adopted as the pose representation for each frame. During the training stage, we utilize a paired pose clip \(p_{1:t}\), the corresponding audio \(A_{p}\) and a head image \(I_{p}\) as inputs. They are fed into the encoder to predict mean and standard deviation values, which are later used for re-parametrization. Finally, the sampled data, \(I_{p}\) and \(A_{p}\) are passed into the decoder to predict the pose clip \(\hat{p}_{1:t}\). The face image and audio serve as the condition information to guide the generation of the pose sequence. It is noteworthy that we learn the difference of poses relative to the first frame in \(p_{1:t}\) instead of the pose itself. This setting eliminates the influence of the various initial head poses in different pose clips.

As for the loss constraints, commonly used reconstruction losses like the MSE loss are not suitable for pose generation, since the task is actually an ill-posed one-to-many mapping problem. Therefore, we utilize the Structural Similarity [16] to keep the consistency between the generated and ground truth pose sequences:
\[\mathcal{L}_{\text{SSIM}}=1-\frac{\left(2\mu\hat{\mu}+C_{1}\right)\left(2\,cov+C_{2}\right)}{\left(\mu^{2}+\hat{\mu}^{2}+C_{1}\right)\left(\sigma^{2}+\hat{\sigma}^{2}+C_{2}\right)}. \tag{1}\]
\(\hat{\mu}\) and \(\hat{\sigma}\) are the mean and standard deviation of the generated pose sequence, while \(\mu\) and \(\sigma\) are those of the ground truth pose sequence. \(cov\) is the covariance between the two sequences and \(C_{1}\), \(C_{2}\) are constants to stabilize the division.
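A minimal sketch of this SSIM-based pose loss is given below, written in Julia purely for illustration; it is not the authors' implementation, and the stabilising constants and variable names are assumptions made for the example.

```
using Statistics

# 1 - SSIM between a generated pose sequence p̂ and a ground-truth sequence p (Equation 1).
# C1 and C2 are small stabilising constants; their values here are assumed, not from the paper.
function ssim_pose_loss(p̂::AbstractVector, p::AbstractVector; C1 = 0.01^2, C2 = 0.03^2)
    μ̂, μ   = mean(p̂), mean(p)
    σ̂², σ² = var(p̂; corrected = false), var(p; corrected = false)
    cov    = mean((p̂ .- μ̂) .* (p .- μ))          # covariance between the two sequences
    ssim   = ((2μ * μ̂ + C1) * (2cov + C2)) /
             ((μ^2 + μ̂^2 + C1) * (σ² + σ̂² + C2))
    return 1 - ssim
end

# Example: compare a noisy copy of a pose-difference sequence against the original.
p = cumsum(0.05 .* randn(64))
p̂ = p .+ 0.02 .* randn(64)
loss = ssim_pose_loss(p̂, p)
```

Unlike a pointwise MSE, this loss compares first- and second-order statistics of the two sequences, which is why it tolerates the one-to-many nature of the audio-to-pose mapping better than an exact per-frame match.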
Meanwhile, to guarantee the similarity between the latent space distribution and a Gaussian distribution, we define \(\mathcal{L}_{KL}\) as the KL-divergence between the above two distributions. Furthermore, a discriminator is also adopted to improve the realness of the generated poses.
\[\mathcal{L}_{D}=\log D\left(p_{gt}\right)+\log\left(1-D\left(p\right)\right). \tag{2}\]
The overall loss of pose generation is defined by the combination of \(\mathcal{L}_{\text{SSIM}}\), \(\mathcal{L}_{D}\) and \(\mathcal{L}_{KL}\). During inference, the driving audio is divided into several audio clips. They are fed into the decoder along with \(I_{src}\) and sampled latent data to produce pose clips. Finally, the pose clips are stacked together in chronological order and added to the initial head pose to form the driving pose sequence \(P_{1:n}\).

### Keypoint Predictor

Inspired by the widely used image animation work FOMM [9], we choose to use unsupervised keypoints and their first order dynamics as the structure representation. As illustrated in Fig. 1, the Keypoint Predictor \(E_{kp}\) takes the source face \(I_{src}\), driving audio \(A_{dri}\) and predicted pose sequence \(p_{1:n}\) as inputs. \(E_{kp}\) first utilizes three different encoders to extract the corresponding information. Then the three features are combined and fed into the LSTM-based decoder to recurrently predict the corresponding unsupervised structure representation. At each time step \(t\), the representation contains learned keypoints \(K_{t}\in\mathbb{R}^{N\times 2}\) and the corresponding first order dynamics, i.e. jacobians \(J_{t}\in\mathbb{R}^{N\times 2\times 2}\), which describe the local affine transformation in the neighborhood area around each keypoint. As for the initial source structure representation, the pre-trained keypoint detector \(E_{kd}\) from FOMM [9] is utilized to provide accurate initial keypoints and first order dynamics. The whole procedure is formulated as:
\[\begin{split}(K^{src},\,J^{src})&=E_{kd}(I_{src}),\\ (K^{dri}_{1:n},J^{dri}_{1:n})&=E_{kp}(I_{src},A_{dri},p_{1:n}).\end{split} \tag{3}\]
For the training loss, we regard \(E_{kd}\) as a teacher network and expect \(E_{kp}\) to learn the knowledge of visual structure representation contained in the pre-trained \(E_{kd}\). We further define the motion representation of the corresponding ground truth video frame extracted by \(E_{kd}\) as the supervision, i.e. \((K^{gt},J^{gt})\). Therefore, the loss term of \(E_{kp}\) is as follows:
\[\mathcal{L}_{kp}=\frac{1}{N}\sum_{i=1}^{N}\left(\left\|K^{kp}_{i}-K^{gt}_{i}\right\|_{1}+\left\|J^{kp}_{i}-J^{gt}_{i}\right\|_{1}\right). \tag{4}\]
In this way, the source and driving structure representations are both successfully obtained.

### Flow-guided Generator

As shown in Fig. 3, the flow-guided generator produces the talking head \(I_{G}\) given \(I_{src}\) and the source and driving structure representations. It mainly contains the motion flow predictor, the occlusion net, and the image encoder and decoder. The motion flow predictor first predicts the motion flow \(F\) indicating the variation in each part of the face from source to driving. Then \(I_{src}\) and \(F\) are fed into the Occlusion Net to predict the flow mask and occlusion map. The masked motion flow \(F^{\prime}\) is utilized to warp the feature map \(f\) of \(I_{src}\) to obtain the warped feature \(f^{\prime}\). Finally, the occluded feature is sent to the decoder to produce the talking head \(I_{G}\). In this way, the decoder obtains the source face texture, motion variance and different confidences among the feature map, which all contribute to the accurate generation process. The encoder and decoder consist of several convolutional and up-sampling layers. The occlusion net is based on the hourglass net, while the motion flow predictor relies on the numerical calculation between the two structure representations.

Figure 2: **Overview of head pose prediction module.**

Figure 3: **Overview of the flow-guided generator.**

During training, the perceptual loss is utilized between the generated frame \(I_{G}\) and the ground truth frame \(I_{gt}\):
Finally, the occluded feature is sent to the decoder to produce talking head \(I_{G}\). In this way, the decoder obtain the source face texture, motion variance and different confidences among the feature map, which all contribute to the accurate generation process. The encoder and decoder consists of several convolutional and up-sampling layers. The occlusion net is based on the hourglass net while the motion flow predictor relies on the numerical calculation between two structure representations. During training, the perceptual loss is utilized between the generated frame \(I_{G}\) and ground truth frame \(I_{gt}\): Figure 3: **Overview of the flow-guided generator.** Figure 2: **Overview of head pose prediction module.** \[\mathcal{L}_{per}=\sum_{i=1}^{l}||\operatorname{VGG}_{i}(I_{G})-\operatorname{VGG}_ {i}\left(\boldsymbol{I}_{gt}\right)||_{1}, \tag{5}\] where \(VGG(\cdot)\) denotes the \(i_{th}\) channel feature of the pre-trained VGG network. Furthermore, to improve the lip-sync quality, we adopt a pre-trained discriminator to predict the embedding of corresponding audio and video. The discriminator [3] is trained to judge the synchronization between randomly sampled audio-visual pairs. We adopt the cosine-similarity between audio and video embedding \(a\) and \(v\) extracted by the discriminator as the lip-sync loss to indicate the probability of whether the pair is in-sync. \[\mathcal{L}_{sync}=\frac{v\cdot a}{\max\left(\|v\|_{2}\cdot\|a\|_{2},\epsilon \right)} \tag{6}\] ## 4 Experiments ### Experimental Settings Datasets.We evaluate our method on LRW [17] and HDTF [1] datasets. The LRW dataset contains over 1000 short utterances of each 500 different words and all the videos are extracted from BBC television in the wild. The HDTF dataset is a large in the wild audio-visual dataset that consists of long utterances of over 300 subjects. Implementation Details.The face video frames are cropped to \(256\times 256\) size at 25 FPS and the audio is pre-processed into 16kHz. We compute 28-dim MFCC feature with a window size of 10ms to produce a \(28\times 12\) feature for each video frame. As for training, our method is trained in stages. The generator is trained with keypoint predictor after the latter gets stable results. The ADAM optimizer is adopted with an initial leaning rage as \(2\times 10^{-4}\), which linearly decreases to \(2\times 10^{-5}\). We train our model on 1 Tesla V100 GPU and each part requires 0.5, 2 and 3 days for training respectively. ### Experimental Results Evaluation Metrics.The performance is evaluated on image quality and lip-sync quality. The SSIM [16] and Cumulative Probability of Blur Detection (CPBD) [18] scores are utilized to judge the quality of talking head frames. For lip-sync quality, the Landmark Distance(LMD) and Lip-Sync Error-Confidence(LSE-C) are applied. LMD means the average Euclidean distance between corresponding facial land \begin{table} \begin{tabular}{c c c c c} \hline Method & SSIM \(\uparrow\) & CPBD \(\uparrow\) & LMD \(\downarrow\) & LSE-C \(\uparrow\) \\ \hline Wav2Lip & 0.812 & 0.172 & 5.73 & **7.237** \\ MakeltTalk & 0.796 & 0.161 & 7.13 & 3.141 \\ Audio2Head & 0.743 & 0.168 & 7.34 & 2.135 \\ PC-AVS & 0.778 & 0.185 & 3.93 & 6.420 \\ AVCT & 0.805 & 0.181 & 3.56 & 6.567 \\ Ground Truth & 1.000 & 0.189 & 0.00 & 6.876 \\ Ours & **0.825** & **0.187** & **3.48** & 6.572 \\ \hline \end{tabular} \end{table} Table 1: Quantitative comparisons on LRW dataset. The bold and underlined notations represents the Top-2 results. 
LSE-C is the confidence score of the correspondence between audio and video features extracted from the pre-trained SyncNet [19]. Figure 4: Qualitative comparison with other methods. Figure 5: Large-pose qualitative comparison results. **Quantitative Results.** We choose several state-of-the-art methods for comparison, i.e. Wav2Lip [3], MakeItTalk [13], Audio2Head [7], PC-AVS [4] and AVCT [8]. The frames of each method are generated using their official code. The head poses of Wav2Lip and PC-AVS are fixed since they cannot obtain head poses from audio. The Ground Truth results are also added for better comparison. Detailed results on LRW and HDTF can be found in Table 1 and Table 2. FONT achieves the best performance under most of the evaluation metrics on both datasets. As Wav2Lip merely edits the mouth area, it achieves a better CPBD score on HDTF. Furthermore, as mentioned by PC-AVS [4], the leading LSE-C only means that Wav2Lip is comparable to the ground truth, not better. The LMD score also proves the high-level lip synchronization of our method. Overall, the above results prove that FONT generates high-quality talking heads. **Qualitative Results.** The qualitative comparison results are shown in Fig. 4. All the frames are generated using the same source face and driving audio. They indicate that FONT generates talking heads with natural head motions, accurate mouth shapes and preserved identity information. Specifically, Wav2Lip generates fixed faces and blurry mouth areas. Though MakeItTalk and Audio2Head produce head pose changes, they fail to preserve the lip synchronization corresponding to the driving audio. PC-AVS cannot preserve the identity information of the source face compared to the ground truth. AVCT produces obvious visual artifacts in the background area and sometimes fails to produce an accurate mouth shape. We also show comparison results on large-pose faces, as shown in Fig. 5. Other methods display weird facial shape changes and obvious identity mismatch problems, while FONT generates natural head motions while obtaining high-level image quality. Furthermore, Fig. 6 displays the qualitative results of FONT on the HDTF dataset. It shows the synced video that provides the driving audio and the generated talking head videos under different source faces. The results indicate that FONT produces natural head motions while maintaining high-level lip-sync quality. Please see the dynamic demos in the supplementary materials for better comparison. **Ablation Results.** To evaluate the performance of each component in FONT, we conduct an ablation study on several variants: (1) replace the probabilistic VAE-based model with a deterministic LSTM-based model in head pose generation (**w/o VAE**), (2) replace the SSIM loss with the traditional L1 loss (**w/o \(\mathcal{L}_{\text{SSIM}}\)**) and (3) remove the lip-sync discriminator from the generator (**w/o \(D_{sync}\)**). The results are shown in Table 3. Given that the SSIM loss relates to image pixel accuracy, pose accuracy and image quality become worse when it is removed. As all the variants share basically the same flow-guided generation pattern, most of them achieve similar CPBD scores. The model **w/o \(D_{sync}\)** shows a poor LSE-C score, indicating bad lip synchronization. The models **w/o VAE** and **w/o \(\mathcal{L}_{\text{SSIM}}\)** fail to produce natural head poses, leading to bad LMD scores. Moreover, we show qualitative ablation results in Fig. 7. The red and green rectangles mark the differences between the generated frames. 
The model **w/o VAE** and **w/o \(\mathcal{L}_{\text{SSIM}}\)** fail to produce dynamic natural head motions and tend to produce average still talking heads. Without \(D_{sync}\), the mouth shape accuracy also decreases, as the red rectangle shows. Overall, we see the contribution of each component in FONT. \begin{table} \begin{tabular}{c c c c c} \hline \hline Method & SSIM \(\uparrow\) & CPBD \(\uparrow\) & LMD \(\downarrow\) & LSE-C \(\uparrow\) \\ \hline w/o VAE & 0.746 & 0.160 & 5.48 & 7.18 \\ w/o \(\mathcal{L}_{\text{SSIM}}\) & 0.738 & 0.158 & 6.72 & 6.79 \\ w/o \(D_{sync}\) & 0.752 & 0.166 & 3.46 & 4.28 \\ Ours & **0.789** & **0.169** & **2.69** & **7.22** \\ \hline \hline \end{tabular} \end{table} Table 3: Numerical ablation study results. Figure 6: Qualitative results driven by the same audio and different source faces on HDTF. Figure 7: Qualitative ablation study results. The red and green rectangles indicate the difference of mouth shape and head motion, respectively. ## 5 Conclusion In this paper, we present FONT, a flow-guided one-shot model that generates talking heads with natural head motions. The head pose sequence is first predicted by a well-designed probabilistic VAE-based model. After getting the driving pose sequence, we utilize self-supervised keypoints to predict motion flow as face structure representation from the source face and driving audio. Finally, the occlusion-aware flow-guided generator produces talking heads. Both quantitative and qualitative experiments demonstrate that we obtain talking heads with natural poses and high-level lip-sync quality compared with other methods. For ethical considerations, FONT is intended for the video editing industry and focuses on world-positive use cases and applications. We believe the proper usage of this technique will enhance the development of artificial intelligence research and relevant multimedia applications. To ensure proper use, we will release our codes and contribute to deepfake detection research.
One-shot talking head generation has attracted increasing attention in recent years, and a variety of creative and practical applications have been developed from it. An ideal natural and vivid generated talking head video should contain natural head pose changes. However, mapping a head pose sequence from driving audio is very challenging, because a natural gap exists between the audio and visual modalities. In this work, we propose a Flow-guided One-shot model that generates talking heads with NaTural head motions (FONT). Specifically, the head pose prediction module is designed to generate head pose sequences from the source face and driving audio. We add a random sampling operation and a structural similarity constraint to model the diversity of the one-to-many mapping between the audio and visual modalities, which allows the model to predict natural head poses.
2309.12872
Deep regression learning with optimal loss function
In this paper, we develop a novel efficient and robust nonparametric regression estimator under a feedforward neural network framework. The proposed estimator has several interesting characteristics. First, the loss function is built upon an estimated maximum likelihood function, which integrates the information from the observed data as well as the information from the data structure. Consequently, the resulting estimator has desirable optimal properties, such as efficiency. Second, different from traditional maximum likelihood estimation (MLE), the proposed method avoids specifying the distribution, and hence is flexible to any kind of distribution, such as heavy-tailed, multimodal or heterogeneous distributions. Third, the proposed loss function relies on probabilities rather than direct observations as in least squares, which contributes to the robustness of the proposed estimator. Finally, the proposed loss function involves the nonparametric regression function only. This enables a direct application of existing packages, simplifying the computation and programming. We establish the large-sample property of the proposed estimator in terms of its excess risk and minimax near-optimal rate. The theoretical results demonstrate that the proposed estimator is equivalent to the true MLE, in which the density function is known. Our simulation studies show that the proposed estimator outperforms existing methods in terms of prediction accuracy, efficiency and robustness. In particular, it is comparable to the true MLE, and even improves upon it as the sample size increases. This implies that the adaptive and data-driven loss function built from the estimated density may offer an additional avenue for capturing valuable information. We further apply the proposed method to four real data examples, resulting in significantly reduced out-of-sample prediction errors compared to existing methods.
Xuancheng Wang, Ling Zhou, Huazhen Lin
2023-09-22T13:53:25
http://arxiv.org/abs/2309.12872v1
# Deep regression learning with optimal loss function ###### Abstract Due to powerful function fitting ability and effective training algorithms of neural networks, in this paper, we develop a novel efficient and robust nonparametric regression estimator under a framework of feedforward neural network (FNN). There are several interesting characteristics for the proposed estimator. First, the loss function is built upon an estimated maximum likelihood function, who integrates the information from observed data, as well as the information from data structure. Consequently, the resulting estimator has desirable optimal properties, such as efficiency. Second, different from the traditional maximum likelihood estimation (MLE), we do not require the specification of the distribution, hence the proposed estimator is flexible to any kind of distribution, such as heavy tails, multimodal or heterogeneous distribution. Third, the proposed loss function relies on probabilities rather than direct observations as in least square loss, hence contributes the robustness in the proposed estimator. Finally, the proposed loss function involves nonparametric regression function only. This enables the direct application of the existing packages, and thus the computation and programming are simple. We establish the large sample property of the proposed estimator in terms of its excess risk and minimax near-optimal rate. The theoretical results demonstrate that the proposed estimator is equivalent to the true MLE in which the density function is known. Our simulation studies show that the proposed estimator outperforms the existing methods in terms of prediction accuracy, efficiency and robustness. Particularly, it is comparable to the true MLE, and even gets better as the sample size increases. This implies that the adaptive and data-driven loss function from the estimated density may offer an additional avenue for capturing valuable information. We further apply the proposed method to four real data examples, resulting in significantly reduced out-of-sample prediction errors compared to existing methods. _Keywords:_ Estimated maximum likelihood estimation, feedforward neural network, excess risk, kernel density estimation. Introduction Consider a nonparametric regression model, \[Y = g(\mathbf{X})+\epsilon, \tag{1}\] where \(Y\in\mathbb{R}\) is a response variable, \(\mathbf{X}\in\mathcal{X}\subseteqq\mathbb{R}^{d}\) is a \(d\)-dimensional vector of predictors, \(g:\mathcal{X}\rightarrow\mathbb{R}\) is an unknown regression function, \(\epsilon\) is an error independent of \(\mathbf{X}\). Nonparametric regression is a basic and core problem in statistics and machine learning, where the purpose is estimating the unknown target regression function \(g\) given independent and identically distributed (i.i.d.) samples \(S\equiv\left(\mathbf{X}_{i},Y_{i}\right)_{i=1}^{n}\) with the sample size \(n\). Since the distribution of \(\epsilon\) is unknown, \(g(\cdot)\) is usually estimated based on the least square (LS) criterion, that is, \[\hat{g}=\operatorname*{arg\,min}_{g:\mathbb{R}^{d}\rightarrow\mathbb{R}}\frac{ 1}{n}\sum_{i=1}^{n}\left\{Y_{i}-g(\mathbf{X}_{i})\right\}^{2}. \tag{2}\] Driven by various nonparametric approximation techniques, there is a vast literature on nonparametric regression. 
For example, tree regression (Breiman, 2017), random forests (Breiman, 2001), and nonparametric smoothing methods such as nearest neighbor regression (Cheng, 1984; Devroye et al., 1994), kernel regression (Nadaraya, 1964; Watson, 1964; Hall and Huang, 2001), local polynomial regression (Fan and Gijbels, 2018), spline approximation (Schumaker, 2007) and reproducing kernel regression (Berlinet and Thomas-Agnan, 2011; Lv et al., 2018), among others. Recently, attributed to powerful function fitting ability, well-designed neural network architectures and effective training algorithms and high-performance computing technologies, deep neural network (DNN) with the empirical LS loss function has enjoyed tremendous success in a variety of applications, such as the fields of computer vision, natural language processing, speech recognition, among others. Based on the theoretical results concerning approximation error and stochastic error, with the LS loss, several inspiring works have obtained the minimax near-optimal rate at \(n^{-\frac{2\beta}{2\beta+d}}(\log n)^{s}\) for learning the regression function \(g\) under feedforward neural network (FNN), with the assumption that \(g\) is \(\beta\)-H\(\ddot{o}\)lder smooth. In these works, the response variable or the error term is assumed to be bounded (Gyorfi et al., 2002; Farrell et al., 2021), have finite \(p\)-th moment with \(p>1\)(Kohler and Langer, 2021; Kohler et al., 2022), sub-Gaussian (Bauer and Kohler, 2019; Chen et al., 2019; Schmidt-Hieber, 2019; 2020; Fan and Gu, 2022; Bhattacharya et al., 2023), sub-exponential (Jiao et al., 2021; Yan and Yao, 2023) or have finite variance (Liu et al., 2022). The LS criterion based estimators are mathematically convenient, easily implemented, and efficient when the error \(\epsilon\) is normally distributed. However, as it is expressed in (2), the LS loss is sensitive to large errors, that is, the LS estimator is severely influenced by outliers, resulting in unstable and unreliable estimation. In the era of "big data", data generation mechanism and collection are unmanageable, and thus non-Gaussian noises or outliers are almost inevitable. To address the unstableness, a lot of robust methods based on traditional nonparametric regression techniques have been developed, for example, the kernel M-smoother (Hardle, 1989), median smoothing(Tukey et al., 1977), locally weighted regression (Stone, 1977; Cleveland, 1979), the local least absolute method (Wang and Scott, 1994), quantile regression (Koenker and Bassett Jr, 1978; He et al., 2013; Lv et al., 2018), among others. Recently, within the framework of FNN, several robust methods have been introduced to address non-Gaussian noise problems, and corresponding convergence rates for learning the function \(g\) have also been established. For instance, Lederer (2020); Shen et al. (2021) and Fan et al. (2022) have explored non-asymptotic error bounds of the estimators that minimizing robust loss functions, such as the least-absolute deviation loss (Bassett Jr and Koenker, 1978), Huber loss (Huber, 1973), Cauchy loss and Tukey's biweight loss (Beaton and Tukey, 1974). Particularly, based on a general robust loss function satisfying a Lipschitz continuity, Farrell et al. (2021) have demonstrated the convergence rate \(n^{-\frac{2\beta}{2\beta+d}}(\log n)^{4}\) with the assumption that the response is bounded, which means that heavy-tail error is not applicable. To relax the bounded restriction on the response, Shen et al. (2021b) and Shen et al. 
(2021a) have established the convergence rate \(n^{-\frac{2\beta}{2\beta+d}+1/p}(\log n)^{c}\) under the assumption that the \(p\)-th moment of response is bounded for some \(p>1\). These methods are proposed for improving the robustness, they are sub-optimal in terms of efficiency. This work attempts to provide a loss function which is efficient as well as robust for nonparametric regression estimation within the framework of FNN. It is worth noting that in the least squares (LS) criterion, observations in which the response variable \(Y_{i}\) deviates significantly from the conditional mean \(g(\mathbf{X}_{i})\) play a significant role, which may seem counterintuitive. In fact, when estimating the conditional mean \(g(\cdot)\), observations in which \(Y_{i}\) is closer to \(g(\mathbf{X}_{i})\) are supposed to logically carry more information than those where the response is away from the conditional mean. This can be expressed in terms of probability of observation. Therefore, we propose a loss function based on the estimated likelihood function, which has the form of \[\hat{g}=\operatorname*{arg\,max}_{g:\mathbb{R}^{d}\to\mathbb{R}}\frac{1}{n} \sum_{i=1}^{n}\log\hat{f}(Y_{i}-g(\mathbf{X}_{i})), \tag{3}\] where \(\hat{f}\) is an estimator of the density function of \(\epsilon_{i}=Y_{i}-g(\mathbf{X}_{i})\). We simplify the FNN estimators of \(g(\cdot)\) based on maximizing Estimated log-Likelihood functions (3) by EML-FNN, which is expected to have the desirable optimal properties since we use the density function and leverage the data distribution. In addition, different from the traditional maximum likelihood estimator (MLE) where \(f(\cdot)\) is known, the proposed EML-FNN is flexible as it avoids specifying the error distribution. Moreover, the quasi-likelihood loss (3), which relies on probabilities rather than direct observations as in LSE, contributes the robustness of the proposed EML-FNN. More interesting, in comparison to the MLE where \(f(\cdot)\) is known, the adaptive form via estimating the density \(f(\cdot)\) in (3) proves to be effective in learning the data structure and offers an additional avenue for capturing information. This is supported by our simulation studies. Specifically, Figures 1 to 3 reveal the following results: when \(\varepsilon_{i}\) follows a normal distribution where the LSE is equivalent to the MLE, the EML-FNN performs slightly better than FNN estimators based on Least Square Error (LSE-FNN) for data with a larger sample size (\(n=1024\)). However, when \(\varepsilon_{i}\) deviates from a normal distribution or has heterogeneous variances, the EML-FNN significantly outperforms the LSE-FNN. The enhanced performance may be attributed to the utilization of structural information via the estimated density function \(\hat{f}(\cdot)\). With the explicit form of Nadaraya-Watson kernel estimator for the density function of \(\epsilon_{i}\), we develop a FNN estimation for \(g\) that circumvents estimating the unknown density function, resulting in an objective function that solely involves \(g\). This enables the direct application of existing packages Pytorch (Paszke et al., 2019) and Scikit-learn (Pedregosa et al., 2011) in python, simplifying the computation and programming. We establish the large sample property of \(\hat{g}\) in terms of its excess risk and the minimax rate, which demonstrate that the proposed estimator for \(g\) is equivalent to the one based on (3) when the density function is known. 
As a result, the proposed deep learning approach for \(g\) exhibits the desired optimal properties, such as efficiency (Zhou et al., 2018, 2019). Finally, we employ the proposed method to analyze four real datasets. Table 1 shows that the proposed EML-FNN provides much higher prediction accuracy than the existing methods for each dataset. The paper is structured as follows. In Section 2, we introduce the proposed EML-FNN. In Section 3, we establish the large sample property of \(\hat{g}\) in terms of its excess risk and the minimax rate. Section 4 provides simulation studies to investigate the performance of the proposed method via the comparison with the competing estimation methods. In Section 5, we apply the proposed method to analyze four real data. We conclude the paper with a discussion in Section 6. Technical proofs are included in the Supplementary Material. ## 2 Method We estimate \(g\) under the framework of FNN. In particular, we set \(\mathcal{G}\) to be a function class consisting of ReLU neural networks, that is, \(\mathcal{G}:=\mathcal{G}_{\mathcal{D},\mathcal{U},\mathcal{W},\mathcal{S}, \mathcal{B}}\), where the input data is the predictor \(X\), forming the first layer, and the output is the last layer of the network; Such a network \(\mathcal{G}\) has \(\mathcal{D}\) hidden layers and a total of \((\mathcal{D}+2)\) layers. Denote the width of layer \(j\) by \(d_{j}\), \(j=0,\cdots,\mathcal{D},\mathcal{D}+1\) with \(d_{0}=d\) representing the dimension of the input \(X\), and \(d_{\mathcal{D}+1}=1\) representing the dimension of the response \(Y\). The width \(\mathcal{W}\) is defined as the maximum width among the hidden layers, i.e., \(\mathcal{W}=\max\left(d_{1},...,d_{\mathcal{D}}\right)\). The size \(\mathcal{S}\) is defined as the total number of parameters in the network \(\mathcal{G}\), given by \(\mathcal{S}=\sum_{i=0}^{\mathcal{D}}d_{i+1}\times(d_{i}+1)\); The number of neurons \(\mathcal{U}\) is defined as the total number of computational units in the hidden layers, given by \(\mathcal{U}=\sum_{i=1}^{\mathcal{D}}d_{i}\). Further, we assume every function \(g\in\mathcal{G}\) satisfies \(|g|_{\infty}\leq\mathcal{B}\) with \(\mathcal{B}\) being a positive constant. With \(g\in\mathcal{G}\), \(g\) can be estimated by \[\arg\min_{g\in\mathcal{G}}\left\{\frac{1}{n}\sum_{i=1}^{n}\rho(Y_{i}-g( \boldsymbol{X}_{i}))\right\}, \tag{4}\] where \(\rho(\cdot)\) is a given loss function, for example, the least squares \(\rho(t)=t^{2}\); least absolute criteria \(\rho(t)=|t|\); Huber loss, Cauchy loss, and Tukey's biweight loss, and so on. The LS based estimator is efficient only when the error \(\epsilon\) is normally distributed. The estimators based on robust loss functions such as least absolute, Huber, Cauchy and Tukey's biweight are robust but they are sub-optimal in terms of efficiency. When \(f(\cdot)\) is known, an ideal estimator of \(g\) can be obtained by, \[\hat{g}=\arg\min_{g\in\mathcal{G}}\mathcal{R}_{n}(g):=\arg\min_{g\in\mathcal{G}} \left\{\frac{1}{n}\sum_{i=1}^{n}\left(-\log f(Y_{i}-g(\mathbf{X}_{i}))\right)\right\}. \tag{5}\] However, in reality, \(f\) is usually unknown. To ensure that we do not misspecify the distribution and simultaneously obtain an estimator based on optimal loss, we employ kernel techniques to estimate the density function \(f\). 
That is, \[\hat{f}(z)=\frac{1}{n}\sum_{i=1}^{n}\mathcal{K}_{h}(\epsilon_{i},z), \tag{6}\] where \(\mathcal{K}_{h}(y_{1},y_{2})=K(\frac{y_{1}-y_{2}}{h})/h\), \(h\) is a bandwidth and \(K(\cdot)\) is a kernel function. Replacing \(f(\cdot)\) in (5) with \(\hat{f}\), we estimate \(g\) by, \[\hat{g}=\arg\min_{g\in\mathcal{G}}\hat{\mathcal{R}}_{n}(g)=\arg\min_{g\in \mathcal{G}}n^{-1}\sum_{i=1}^{n}\left(-\log\hat{f}(Y_{i}-g(\mathbf{X}_{i}))\right). \tag{7}\] That is, \[\hat{g}=\arg\min_{g\in\mathcal{G}}\hat{\mathcal{R}}_{n}(g)=\arg\min_{g\in \mathcal{G}}n^{-1}\sum_{i=1}^{n}\left(-\log\frac{1}{n}\sum_{j=1}^{n}\mathcal{ K}_{h}(Y_{j}-g(\mathbf{X}_{j}),Y_{i}-g(\mathbf{X}_{i}))\right). \tag{8}\] Recall that the conventional FNN (Chen et al., 2019; Nakada and Imaizumi, 2019; Schmidt-Hieber, 2020; Kohler and Langer, 2021; Jiao et al., 2021; Kohler et al., 2022; Liu et al., 2022; Fan and Gu, 2022; Yan and Yao, 2023; Bhattacharya et al., 2023) minimizes a least square objective and is sensitive to the data's distribution type and outliers, which leading to the development of the robust FNN (Lederer, 2020; Shen et al., 2021; Fan et al., 2022). However, the enhanced robustness comes at the cost of efficiency. In contrast to existing methods, our approach stands out by utilizing a MLE criterion as the objective function, thereby achieving both efficiency and robustness. In particular, efficiency is attained by fully leveraging the data distribution, robustness is added to our estimator because our proposed loss function relies on probabilities rather than direct observations as in LS. Moreover, the kernel-based estimation approach benefits from the smooth continuity of kernel functions, facilitating gradient calculations and overcoming non-differentiability issues when dealing with densities such as the uniform distribution, mixture distribution, and heteroscedasticity. Finally, the proposed loss function (8) involves \(g\) only. This enables the direct application of packages Pytorch (Paszke et al., 2019) and Scikit-learn (Pedregosa et al., 2011) in python, simplifying the computation and programming. The proposed \(\hat{\mathcal{R}}_{n}(g)\) involves a tuning parameter \(h\). According to the property of kernel approximation, a smaller \(h\) yields a more accurate density approximation but with a larger variance. Fortunately, the summation over individuals mitigates the increased variance caused by a small \(h\). Therefore, when computational feasibility allows, a smaller value of \(h\) is preferred. The conclusion is supported by both our theoretical and numerical results. In practice, we use the Gaussian kernel function and set \(\hat{f}=1e-5\) when \(\hat{f}<1e-5\) because logarithmic transformation is required in the objective function. ## 3 Large sample properties In this section, we establish the large sample property of \(\hat{g}\) in terms of its excess risk, which is defined as the difference between the risks of \(g\) and \(g^{*}\): \[\mathcal{R}(g)-\mathcal{R}(g^{*})=\mathbb{E}\left(-\log f(Y_{i}-g(\mathbf{X}_{i}) )\right)-\mathbb{E}\left(-\log f(Y_{i}-g^{*}(\mathbf{X}_{i}))\right),\] where \(g^{*}\) is defined as \[g^{*}:=\arg\min_{g}\mathcal{R}(g)=\arg\min_{g}\mathbb{E}\left(-\log f\left(Y_ {i}-g(\mathbf{X}_{i})\right)\right).\] The minimizer is taken over the entire space, and thus implies that \(g^{*}\) does not necessarily belong to the set \(\mathcal{G}\). We further define \(g^{*}_{\mathcal{G}}:=\arg\min_{g\in\mathcal{G}}\mathcal{R}(g)\) in the set \(\mathcal{G}\). 
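As a concrete companion to the estimated-likelihood loss (8), the following is a minimal PyTorch sketch with a Gaussian kernel and the \(10^{-5}\) truncation mentioned above. It is our own illustration rather than the EML-FNN package itself; the network size, the fixed bandwidth `h`, and the toy training loop are placeholder choices.

```python
import math
import torch
import torch.nn as nn


def eml_loss(residuals, h):
    """-(1/n) * sum_i log( (1/n) * sum_j K_h(eps_j - eps_i) ), Gaussian kernel."""
    diff = residuals.view(-1, 1) - residuals.view(1, -1)                # (n, n)
    kern = torch.exp(-0.5 * (diff / h) ** 2) / (h * math.sqrt(2 * math.pi))
    f_hat = kern.mean(dim=1).clamp_min(1e-5)                            # \hat f(eps_i)
    return -f_hat.log().mean()


class FNN(nn.Module):
    """Plain ReLU feedforward network g(.; theta) with scalar output."""

    def __init__(self, d, width=256, depth=3):
        super().__init__()
        layers, prev = [], d
        for _ in range(depth):
            layers += [nn.Linear(prev, width), nn.ReLU()]
            prev = width
        layers.append(nn.Linear(prev, 1))
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x).squeeze(-1)


if __name__ == "__main__":
    torch.manual_seed(0)
    n, d = 256, 5
    X = torch.rand(n, d)
    y = X[:, 0] ** 3 + X[:, 1] ** 2 + torch.randn(n)
    model = FNN(d)
    opt = torch.optim.Adam(model.parameters(), lr=3e-4)
    for _ in range(200):
        opt.zero_grad()
        loss = eml_loss(y - model(X), h=0.3)   # h is a placeholder bandwidth
        loss.backward()
        opt.step()
```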
Denote \(f^{(r)}(\cdot)\) to be the \(r\)th derivative of \(f\), and \(f_{\mathbf{x}}(\cdot)\) to be the density function of covariates \(\mathbf{X}\), who is supported on a bounded set, and for simplicity, we assume this bounded set to be \([0,1]^{d}\). In the rest of the paper, the symbol \(c\) denotes a positive constant which may vary across different contexts. The following conditions are required for establishing the rate of the excess risk: 1. Kernel: Let \(U_{r}=\int K(t)t^{r}dt\) and \(v_{r}=\int K^{2}(t)t^{r}dt\). Assume the kernel function \(K(\cdot)\) has a bounded second derivative, \(U_{0}=1\) and \(U_{1}=0\). 2. Bandwidth: \(h\to 0\) and \(nh\to\infty\) as \(n\to\infty\). 3. Density function \(f\): Assume the density function \(f(\cdot)\) has a continuous second derivative and satisfies \(f(\epsilon)>c>0\) for any \(\epsilon\) belonging to the support set of \(f\). 4. Function class for \(g\) and \(g^{*}\): For any function \(g\in\mathcal{G}\) and the true function \(g^{*}\), we assume \(\|g\|_{\infty}<\mathcal{B}\) and \(\|g^{*}\|_{\infty}<\mathcal{B}\). Condition (C1) is a mild condition for the kernel function, which is easy to be satisfied when the kernel function is a symmetric density function. Condition (C2) is the most commonly used assumption for the bandwidth. Condition (C3) requires a lower bound of the density function to avoid tail-related problems. The simulation studies, where the lower bounded condition is not met across all four distributions, demonstrate that the proposed estimator maintains its effectiveness even in scenarios where the condition does not hold. Condition (C4) is a bounded condition for the function class \(\mathcal{G}\) and the true function \(g^{*}\), which is commonly used in Shen et al. (2019); Lu et al. (2021); Chen et al. (2019); Yarotsky (2017). It is noteworthy that, in cases where the explicit depiction of the approximation error for the function class \(\mathcal{G}\) to \(g^{*}\) becomes necessary, an additional condition will be introduced concerning the category to which \(g^{*}\) belongs. This is demonstrated in Corollary 1. Define \(\mathcal{G}|_{\mathbf{x}}:=\{g(\mathbf{x}_{1}),g(\mathbf{x}_{2}),\cdots,g(\mathbf{x}_{n}):g\in \mathcal{G}\}\) for a given sequence \(\mathbf{x}=(\mathbf{x}_{1},\cdots,\mathbf{x}_{n})\) and denote \(\mathcal{N}_{2n}(\delta,\|\cdot\|_{\infty},\mathcal{G}|_{\mathbf{x}})\) to be the covering number of \(\mathcal{G}|_{\mathbf{x}}\) under the norm \(\|\cdot\|_{\infty}\) with radius \(\delta\). Let \(A\preceq B\) represent \(A\leq cB\) for a postive constant \(c\). In the following Theorems 1 and 2, we show the excess risk of the proposed estimator under the true density function and the estimated density function to see how much difference of the proposed estimator from the true MLE estimator, which is defined as \[\hat{g}_{oracle}=\arg\min_{g\in\mathcal{G}}\left\{\frac{1}{n}\sum_{i=1}^{n} \big{(}-\log f(Y_{i}-g(\mathbf{X}_{i}))\big{)}\right\}. 
\tag{9}\] **Theorem 1**.: _Under Conditions (C3) and (C4), we have that, as \(n\to\infty\),_ \[\mathbb{E}\left(\mathcal{R}(\hat{g}_{oracle})-\mathcal{R}(g^{*})\right) \preceq\frac{\log\mathcal{N}_{2n}(n^{-1},\|\cdot\|_{\infty},\mathcal{G}|_{\bm {x}})}{n}+\big{(}\mathcal{R}(g^{*}_{\mathcal{G}})-\mathcal{R}(g^{*})\big{)}.\] Recall that the excess risk of the LS estimator, takes the form: \(\frac{\mathcal{B}^{2}\log 2\mathcal{N}_{2n}(n^{-1},|\cdot|_{\infty},\mathcal{G}|_{ \mathbf{x}})(\log n)^{c}}{n}+\big{(}\mathcal{R}(g^{*}_{\mathcal{G}})-\mathcal{R}( g^{*})\big{)}\) for some positive constant \(c\) with the condition of bounded response (Gyorfi et al., 2002; Farrell et al., 2021) or bounded \(p\)-th moment (Kohler and Langer, 2021; Kohler et al., 2022). For the robust loss considered in Shen et al. (2021), the excess risk has the form of: \(\frac{\lambda_{L}\mathcal{B}\log 2\mathcal{N}_{2n}(n^{-1},|\cdot|_{\infty}, \mathcal{G}|_{\mathbf{x}})(\log n)^{c}}{n^{1-1/p}}+\big{(}\mathcal{R}(g^{*}_{ \mathcal{G}})-\mathcal{R}(g^{*})\big{)}\), where \(p\) represents the bounded \(p\)-th moment of the outcome, and \(\lambda_{L}\) represents the Lipschitz coefficient of robust loss function. Clearly, the oracle estimator \(\hat{g}_{oracle}\) presents a slightly more favorable excess risk bound compared to the OLS estimator, as it lacks the \((\log n)^{c}\) multiplier. Additionally, our estimator converges faster than the robust estimators with a rate of \((\log n)^{c}/n^{1-1/p}\) for robust estimators versus a reduced rate of \(1/n\) for our estimator in estimation error. It is important to highlight that, unlike the requirement of a Lipschitz condition for the robust loss, we instead invoke a lower bound condition (C3) for the density function. The introduction of a lower bound to the density function is helpful to the stability of our estimator. On the other hand, by leveraging the inherent benefits of the density function, our proposed estimator exemplifies a harmonious blend of robustness and efficiency that is crucial for practical applications. **Theorem 2**.: _For the proposed estimator \(\hat{g}\), under conditions (C1)-(C4), we have_ \[\mathbb{E}\left(\mathcal{R}(\hat{g})-\mathcal{R}(g^{*})\right) \preceq \left(\frac{\log\mathcal{N}_{2n}(\frac{1}{n},\|\cdot\|_{\infty}, \mathcal{G}|_{\mathbf{x}})}{n}\right)+\left(\mathcal{R}(g^{*}_{\mathcal{G}})- \mathcal{R}(g^{*})\right)+\left(\|g^{*}_{\mathcal{G}}-g^{*}\|_{\infty}^{2}+h^{ 2}\right).\] Theorems 1 and 2 shows that the upper bounds of the excess risk for both \(\hat{g}_{oracle}\) and \(\hat{g}\) encompass two terms: \(\frac{\log\mathcal{N}2n(\frac{1}{n},\|\cdot\|_{\infty},\mathcal{G}|_{\mathbf{x}})} {n}\) and \(\mathcal{R}(g^{*}_{\mathcal{G}})-\mathcal{R}(g^{*})\), which represent the estimation error of \(\hat{g}\) evaluated at the true density function \(f\), and the approximate bias of the FNN space towards the true function \(g^{*}\), respectively. The disparity in excess risks between \(\hat{g}_{oracle}\) and \(\hat{g}\) is encapsulated in \(\|g^{*}_{\mathcal{G}}-g^{*}\|_{\infty}^{2}+h^{2}\), which describes the error introduced by substituting \(f\) with its kernel estimator \(\hat{f}\). The error implies that utilizing the kernel estimator \(\hat{f}\) in lieu of \(f\) does not introduce additional variance. 
However, it does lead to significant approximation bias when using a larger value of \(h\), thus advocating the preference for a smaller value of \(h\) to mitigate this bias, particularly, the bias is ignorable if \(h^{2}\preceq\frac{\log\mathcal{N}_{2n}(\frac{1}{n},\|\cdot\|_{\infty}, \mathcal{G}|_{\mathbf{x}})}{n}\) and the FNN function closely approximates the true function \(g^{*}\). The former can be satisfied by taking a small \(h\) and the later holds due to powerful function fitting ability of FNN. The simulation studies in Section 4 further confirm the conclusion. With the discussion above, we hence investigate the efficiency of the proposed estimator via that for the oracle estimator \(\hat{g}_{oracle}\). For simplicity, we assume \(g^{*}=g^{*}_{\mathcal{G}}\), that is, the true function belongs to the FNN space. Recall that for \(g\in\mathcal{G}\), we have \[g(x)=\mathbf{W}_{\mathcal{D}}^{\top}\sigma\left(\mathbf{W}_{\mathcal{D}-1}^{\top} \sigma(\mathbf{W}_{\mathcal{D}-2}^{\top}\sigma(\mathbf{W}_{\mathcal{D}-3}\cdots\sigma (\mathbf{W}_{0}^{\top}\mathbf{X}+\mathbf{a}_{0}))+\mathbf{a}_{\mathcal{D}-2})+\mathbf{a}_{ \mathcal{D}-1}\right)+a_{\mathcal{D}},\] where \(\sigma(\cdot)\) is a given activation function and \(\mathbf{W}_{r},\mathbf{a}_{r},r=0,\cdots,\mathcal{D}\) are parameters. Then, we can write \(g(\mathbf{x})=g(\mathbf{x};\mathbf{\theta})\) with \(0,\cdots,\mathcal{D}\). Denote \(g^{*}(\mathbf{x})=g(\mathbf{x};\mathbf{\theta}^{*})\). We can obtain that \(\hat{g}_{oracle}(\mathbf{x})=g(\mathbf{x};\hat{\mathbf{\theta}}_{o})\) with \(\hat{\mathbf{\theta}}_{o}\) satisfying \(\hat{\mathbf{\theta}}_{o}=\arg\min_{\mathbf{\theta}}\left\{\frac{1}{n}\sum_{i=1}^{n} \big{(}-\log f(Y_{i}-g(\mathbf{X}_{i};\mathbf{\theta}))\big{)}\right\}.\) If \(\mathbb{E}\left[\left(\frac{d\log f(Y-g(\mathbf{X};\mathbf{\theta}))}{d\mathbf{\theta}} \right)\left(\frac{d\log f(Y-g(\mathbf{X};\mathbf{\theta}))}{d\mathbf{\theta}}\right)^{ \top}\right]\) is positive definite around \(\mathbf{\theta}^{*}\) and \(\mathbb{E}(\hat{\mathbf{\theta}}_{o})=\mathbf{\theta}^{*}\), we have \(\mathbb{E}(\hat{\mathbf{\theta}}_{o}\hat{\mathbf{\theta}}_{o}^{\top})=n^{-1}\left\{ \mathbb{E}\left[\left(\frac{d\log f(Y-g(\mathbf{X};\mathbf{\theta}))}{d\mathbf{\theta}} \right)\left(\frac{d\log f(Y-g(\mathbf{X};\mathbf{\theta}))}{d\mathbf{\theta}}\right)^{ \top}|_{\mathbf{\theta}=\mathbf{\theta}^{*}}\right]\right\}^{-1}\)(Onzon 2011). Then for any unbiased estimator \(\check{\mathbf{\theta}}\) that \(\mathbb{E}(\check{\mathbf{\theta}})=\mathbf{\theta}^{*}\), based on the Multivariate Cramer-Rao Lower Bound, it holds that \[\mathbb{E}(\check{\mathbf{\theta}}\check{\mathbf{\theta}}^{\top})\succeq n^{-1}\left\{ \mathbb{E}\left[\left(\frac{d\log f(Y-g(\mathbf{X};\mathbf{\theta}))}{d\mathbf{\theta}} \right)\left(\frac{d\log f(Y-g(\mathbf{X};\mathbf{\theta}))}{d\mathbf{\theta}}\right)^{ \top}|_{\mathbf{\theta}=\mathbf{\theta}_{o}}\right]\right\}^{-1}=\mathbb{E}(\hat{\bm {\theta}}_{o}\hat{\mathbf{\theta}}_{o}^{\top}).\] which leads that \(\operatorname{Var}(\check{\mathbf{\theta}})\succeq\operatorname{Var}(\hat{\mathbf{ \theta}}_{o})\), where \(A\succeq B\) represents \(A-B\) is a semi-positive matrix. Combining with the delta method, it holds that \(\operatorname{Var}(\check{g}):=\operatorname{Var}(g(\mathbf{x};\check{\mathbf{\theta}} ))\geq\operatorname{Var}(\hat{g}_{oracle})\). 
From this perspective, we can characterize \(\hat{g}_{oracle}\) as an efficient estimator, while \(\hat{g}\) also possesses such efficiency under certain straightforward conditions, such as \(h^{2}\preceq\frac{\mathcal{S}}{n}\), where \(\mathcal{S}\) is the length of \(\mathbf{\theta}\). Now, we further explore how the excess risk relies on FNN structure, as well as the function class which \(g^{*}\) belongs to. Let \(\beta=s+r\), \(r\in(0,1]\) and \(s=\lfloor\beta\rfloor\in\mathbb{N}_{0}\), where \(\lfloor\beta\rfloor\) denotes the largest integer strictly smaller than \(\beta\) and \(\mathbb{N}_{0}\) denotes the set of non-negative integers. For a finite constant \(B_{0}>0\), the \(H\ddot{o}lder\) class \(\mathcal{H}_{\beta}([0,1]^{d},B_{0})\) is defined as \[\mathcal{H}_{\beta}([0,1]^{d},B_{0})=\{g:[0,1]^{d}\mapsto\mathbb{R},\max_{ \|\alpha\|_{1}<s}\|\partial^{\alpha}g\|_{\infty}\leq B_{0},\max_{\|\alpha\|_{ 1}=s}\sup_{x\neq y}\frac{|\partial^{\alpha}g(x)-\partial^{\alpha}g(y)|}{\|x-y \|_{2}^{r}}\leq B_{0}\}\] where \(\partial^{\alpha}=\partial^{\alpha_{1}}\cdots\partial^{\alpha_{d}}\) with \(\alpha=(\alpha_{1},\cdots,\alpha_{d})^{T}\in\mathbb{N}_{0}^{d}\) and \(\|\alpha\|_{1}=\sum_{i=1}^{d}\alpha_{i}\). Denote \(\lceil a\rceil\) to be the smallest integer no less than \(a\) and \(\mathbb{N}^{+}\) to be the set of positive integers. Based on Lemma 1 of Jiao et al. (2021) for the approximation error in terms of FNN structures and Lemma 2 of Bartlett et al. (2019) for the bounding covering number, we can conclude the following Corollary 1 from Theorem 2: **Corollary 1**.: _Given \(H\ddot{o}lder\) smooth functions \(g^{*}\in\mathcal{H}_{\beta}([0,1]^{d},B_{0})\), for any \(D\in\mathbb{N}^{+}\), \(W\in\mathbb{N}^{+}\), under conditions of Theorem 2, Lemma 1 in Jiao et al. (2021) and Lemma 2 in Bartlett et al. (2019), if the FNN with a ReLU activation function has width \(\mathcal{W}=C_{3}(\lfloor\beta\rfloor+1)^{2}d^{\lfloor\beta\rfloor+1}W\left\lceil \log_{2}(8W)\right\rceil\) and depth \(\mathcal{D}=C_{4}(\lfloor\beta\rfloor+1)^{2}D\left\lceil\log_{2}(8D)\right\rceil\), then_ \[\mathbb{E}\left(\mathcal{R}(\hat{g})-\mathcal{R}(g^{*})\right) \preceq \frac{\mathcal{S}\mathcal{D}\log(\mathcal{S})}{n}+h^{2}+(WD)^{-4 \beta/d}.\] In Corollary 1, the first term comes from the covering number of \(\mathcal{G}\), which is bounded by its VC dimension \(\mathcal{N}_{2n}(\frac{1}{n},|\cdot|_{\infty},\mathcal{G}|_{\mathbf{x}})=O( \mathcal{S}\mathcal{D}\log(\mathcal{S}))\)(Bartlett et al., 2019), where \(\mathcal{S}\) and \(\mathcal{D}\) are the total number of parameters and hidden layers, respectively. The third term follows from the approximation results from Jiao et al. (2021) that \(\left\|g^{*}-g^{*}_{\mathcal{G}}\right\|_{\infty}\leq 18B_{0}(\lfloor\beta \rfloor+1)^{2}d^{\lfloor\beta\rfloor+\max\{\beta,1\}/2}(WD)^{-2\beta/d}\) and \(\mathbb{E}(\mathcal{R}(g^{*}_{\mathcal{G}})-\mathcal{R}(g^{*}))\simeq\|g^{*} _{\mathcal{G}}-g^{*}\|_{\infty}^{2}\) where \(A\simeq B\) represents \(A\preceq B\) and \(B\preceq A\). If given \(\mathcal{S}=\mathcal{O}(n^{\frac{d}{2\beta+d}}\log n)\) and \(\mathcal{D}=\log n\), following Corollary 1 and \(\mathcal{S}=O(\mathcal{W}^{2}\mathcal{D})\), it holds that \(\mathbb{E}\left(\mathcal{R}(\hat{g})-\mathcal{R}(g^{*})\right)\preceq n^{- \frac{2\beta}{2\beta+d}}(\log n)^{3}+h^{2}+n^{-\frac{2\beta}{2\beta+d}}\). Hence, we have the following Corollary 2. 
**Corollary 2**.: _Under the conditions in Corollary 1, if \(h^{2}=O(n^{-\frac{2\beta}{2\beta+d}})\), then_ \[\mathbb{E}\left(\mathcal{R}(\hat{g})-\mathcal{R}(g^{*})\right)\preceq n^{- \frac{2\beta}{2\beta+d}}(\log n)^{3},\] _which is comparable to \(n^{-\frac{2\beta}{2\beta+d}}\), the lower bound of the minimax learning rate of \(g^{*}\)(Stone, 1982), i.e., \(\min_{\hat{g}}\max_{g^{*}\in\mathcal{H}_{\beta}([0,1]^{d},B_{0})}\mathbb{E} \left[\int_{[0,1]^{d}}(\check{g}(\mathbf{x})-g^{*}(\mathbf{x}))^{2}f_{\mathbf{x}}(\mathbf{x})d \mathbf{x}\right]\succeq n^{-\frac{2\beta}{2\beta+d}}\), where \(\check{g}\) is an estimator of \(g^{*}\) based on the data set \(S\) and the expectation is taken with respect to the randomness of \(S\)._ It is interesting to compare several established convergence rates under the framework of FNN. Particularly, using the LS loss for \(g\in\mathcal{H}_{\beta}([0,1]^{d},B_{0})\), Chen et al. (2019); Nakada and Imaizumi (2019); Schmidt-Hieber (2020); Jiao et al. (2021); Liu et al. (2022); Bhattacharya et al. (2023) and Yan and Yao (2023) have obtained upper bound of the minimax learning rate of \(g\) at \(n^{-\frac{2\beta}{2\beta+d}}(\log n)^{s}\), which is nearly minimax optimal (Donoho et al. 1995; Stone 1982); Using a Lipschitz continuous loss function, Farrell et al. (2021) have obtained the convergence rate at \(n^{-\frac{2\beta}{2\beta+d}}(\log n)^{4}\) under a bounded response condition; Shen et al. (2021b) and Shen et al. (2021a) have obtained the convergence rate \(n^{-\frac{2\beta}{2\beta+d}+1/p}(\log n)^{c}\) under the assumption of the bounded \(p\)-th moment for some \(p>1\) to allow heavy-tailed response \(Y\). The severity of heavy tail decreases as \(p\) increases. In particular, if the response \(Y\) is sub-exponentially distributed, \(p=\infty\), the convergence rate achieves the minimax near-optimal rate. Obviously, the proposed EML-FNN also enjoys nearly minimax optimal rate under condition (C3) with the lower bounded restriction on the density function. It seems that, to achieve the optimal convergence rate, a bounded condition on the tail-probability may be essential. In fact, a similar condition also appears for a quantile regression model, that under the assumption that the conditional density of \(Y\) given \(\mathbf{X}\) around \(\tau\)-th quantile is bounded by below, Padilla et al. (2020) have obtained the convergence rate \(n^{-\frac{2\beta}{2\beta+d}}(\log n)^{2}\) for a quantile regression model under the framework of FNN. ## 4 Simulation study We investigate the performance of the proposed EML-FNN (simplified as EML) by comparing it with the least square estimator (LSE) and several robust methods with \(g\) approximated by FNN. We consider four commonly used robust methods: (1) Least absolute deviation method (LAD) with the loss function \(\rho(x)=|x|\); (2) Huber method with the loss function \(\rho(x)=0.5x^{2}I(|x|\leq\zeta)+(\zeta\,|x|-\zeta^{2}/2)I(|x|>\zeta)\) at \(\zeta=1.345\); (3) Cauchy method with the loss function \(\rho(x)=\log\{1+\kappa^{2}x^{2}\}\) at \(\kappa=1\); (4) Tukey's biweight method with the loss function \(\rho(x)=t^{2}[1-\{1-(x/t)^{2}\}^{3}]I(|x|\leq t)/6+t^{2}I(|x|>t)/6\) at \(t=4.685\). We also investigate the effect of bandwidth on our method in Section 4.3. All feedforward network architecture, initial values, and data for training are the same for all methods involved. The computations were implemented via packages Pytorch (Paszke et al., 2019) and Scikit-learn (Pedregosa et al., 2011) in python. 
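For reference, the sketch below writes out these four competing robust losses in PyTorch with the tuning constants given above; it is our own illustration for exposition, not the training code used in the experiments.

```python
import torch


def lad_loss(r):
    # least absolute deviation: rho(x) = |x|
    return r.abs().mean()


def huber_loss(r, zeta=1.345):
    # rho(x) = 0.5 x^2 if |x| <= zeta, else zeta|x| - zeta^2/2
    quad = 0.5 * r ** 2
    lin = zeta * r.abs() - 0.5 * zeta ** 2
    return torch.where(r.abs() <= zeta, quad, lin).mean()


def cauchy_loss(r, kappa=1.0):
    # rho(x) = log(1 + kappa^2 x^2)
    return torch.log1p((kappa * r) ** 2).mean()


def tukey_biweight_loss(r, t=4.685):
    # rho(x) = t^2 [1 - (1 - (x/t)^2)^3] / 6 if |x| <= t, else t^2 / 6
    inside = t ** 2 * (1.0 - (1.0 - (r / t) ** 2) ** 3) / 6.0
    outside = torch.full_like(r, t ** 2 / 6.0)
    return torch.where(r.abs() <= t, inside, outside).mean()
```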
Specifically, we use the network _Net-d5-w256_ with Relu activated functions, which comprises of 3 hidden layers, resulting in a network depth of 5 with the corresponding network widths \((d,256,256,256,1)\). We use the Adam optimization algorithm (Kingma and Ba, 2014) with a learning rate of 0.0003 for network parameters initialized by uniform distribution (He et al., 2015). The coefficients used for calculating the running average of the gradient and the squared gradient are \(\beta=(0.9,0.99)\). We set the training batch size to be equal to the size of the training data \(n\), and train the network for at least 1000 epochs using a dropout rate of 0.01 until the training loss converges or reaches a satisfactory level. To enhance flexibility and simplicity, we adopt a varying bandwidth \(h(\epsilon_{i})=|\max(\epsilon_{i}(v))-\min(\epsilon_{i}(v))|\), where the set \(\epsilon_{i}(v)\) is the neighborhood of \(\epsilon_{i}\) encompassing a proportion \(v\) of the total sample (Loftsgaarden and Quesenberry, 1965). Then the selection of the bandwidth is translated into selecting a value for \(v\) from the interval \((0,1]\). The constrained interval simplifies the process of bandwidth selection. We evaluate the performance of \(\hat{g}\) by the bias, standard deviation (SD) and root mean square error(RMSE), defined as \(bias=\left[\frac{1}{n_{grid}}\sum_{i=1}^{n_{grid}}(E\widehat{g}(z_{i})-g(z_{i }))^{2}\right]^{\frac{1}{2}}\), \(SD=\left[\frac{1}{n_{grid}}\sum_{i=1}^{n_{grid}}E(\widehat{g}(z_{i})-E\widehat {g}(z_{i}))^{2}\right]^{\frac{1}{2}}\), and \(RMSE=\sqrt{bias^{2}+SD^{2}}\), where \(z_{i}(i=1,...,n_{grid})\) are grid points on which \(g(\cdot)\) is evaluated, which firstly randomly generated from the distribution of \(\mathbf{X}\) and then fixed, \(n_{grid}=2048\) is the number of grid points, and \(E\widehat{g}(z_{i})\) is approximated by its sample mean based on 100 replications. ### Data generation Denote \(\mathbf{X}_{i}=(X_{i1},\cdots,X_{id})^{\top}\) with each component of \(\mathbf{X}_{i}\) being _i.i.d._ generated from a uniform distribution \(U(0,1)\). We consider three target functions: (1) \(g_{5}(\mathbf{X}_{i})=x_{i1}^{3}+x_{i2}^{2}+x_{i3}+|x_{i4}|+cos(x_{i5})\); (2) \(g_{10}(\mathbf{X}_{i})=x_{i1}^{3}+x_{i2}^{2}+x_{i3}+|x_{i4}|+cos(x_{i5})+sin(x_{i6}) +e^{x_{i7}}+log(1+x_{i8})+x_{i9}^{\frac{1}{2}}+x_{i10}^{\frac{1}{3}}\); and (3) \(g_{20}(\mathbf{X}_{i})=x_{i1}^{5}+x_{i2}^{4}+x_{i3}^{3}+x_{i4}^{2}+x_{i5}+|x_{i6}| +x_{i7}^{\frac{1}{2}}+x_{i8}^{\frac{1}{3}}+x_{i9}^{\frac{1}{4}}+x_{i10}^{\frac {1}{5}}+|x_{i11}^{3}|+cos(x_{i12})+sin(x_{i13})+cos(x_{i14}^{2})+sin(x_{i15}^ {2})+e^{x_{i16}}+log(1+x_{i17})+e^{x_{i18}^{2}}+log(1+x_{i19}^{2})+log(1+x_{i2 0}^{\frac{1}{2}})\), which are \(p=5,10\) and \(20\)-dimensional functions, respectively, where \(x_{ij}=\mathbf{X}_{i}^{\top}\mathbf{\beta}_{j},j=1,...,20\), \(\mathbf{\beta}_{j}\) is a \(d\)-dimensional vector with \(\mathbf{\beta}_{j}\left[\left((j-1)\times\left\lfloor\frac{d}{p}\right\rfloor+1 \right):\left(j\times\left\lfloor\frac{d}{p}\right\rfloor\right)\right]=\frac{ \left(\mathbf{\gamma}^{\top},\cdots,\mathbf{\gamma}^{\top}\right)}{\left\lfloor\frac{d }{20p}\right\rfloor\times\left\lVert\mathbf{\gamma}\right\rVert_{1}}\) and the remaining components of \(\mathbf{\beta}_{j}\) are \(0\), where \(\mathbf{\gamma}=(1,2,\cdots,20)^{\top}\). In a word, the non-zero elements of \(\mathbf{\beta}_{j}\) are integer values ranging from \(1\) to \(20\) but scaled according to \(L_{1}\) norms. 
We consider the following four distributions for the error \(\epsilon_{i}\): (I) Standard normal distribution: \(\epsilon_{i}\sim\mathcal{N}(0,1)\); (II) Mixture gaussian distribution: \(\epsilon_{i}\sim 0.7\mathcal{N}(0,1)+0.3\mathcal{N}(0,5)\); (III) Student-t distribution: \(\epsilon_{i}\sim t(2)\); and (IV) Heteroscedasticity: \(\epsilon_{i}\sim\mathcal{N}(0,3X_{i1}+4X_{i2})\). We then generate \(Y_{i}\) by \(Y_{i}=g_{p}(\mathbf{X}_{i})+\varepsilon_{i}\) with \(p=5,10,20\), respectively. We set \(n=256,1024\) and consider \(d=100,200,400,500,600,800\) for the above three target functions and four error distributions, respectively. All simulation results with \(100\) replications are presented in Figures 1 to 3. ### Numerical Results From Figures 1 to 3, it is evident that the proposed EML consistently and significantly outperforms the robust-based methods in terms of bias, SD, and RMSE. When the errors follow the normal distribution, the LSE is optimal. In this case, the proposed EML performs comparably to LSE, and both outperform the robust-based methods. This indicates that the loss caused by estimating the density function can be ignored, which aligns with the theoretical findings in Theorem 2. Upon closer examination, we can see that EML even slightly outperforms LSE for normal data as the sample size increases, for instance, when \(n=1024\). This observation implies that the ability of the proposed EML to learn the data structure may become more pronounced as the sample size grows. For non-normal and heteroscedasticity situations, the LSE performs the worst among all the methods and the proposed EML significantly outperforms the LSE. Figures 1 to 3 also show that the performance of all the methods improves with increasing sample sizes or decreasing dimensions. Figure 1: The bar chart of the bias and standard deviation of \(g_{5}\) using six methods for four error distributions with sample sizes \(n=256,1024\) and input dimensions \(d=100,500\), respectively. Figure 2: The bar chart of the bias and standard deviation of \(g_{10}\) using six methods for four error distributions with sample sizes \(n=256,1024\) and input dimensions \(d=200,600\), respectively. ### Effect of the bandwidth Now, we examine the effect of bandwidth on the proposed method. In Figures 4 and 5, we present the bias, SD, and prediction error (PE) of the EML-FNN estimator when the band Figure 3: The bar chart of the bias and standard deviation of \(g_{20}\) using six methods for four error distributions with sample sizes \(n=256,1024\) and input dimensions \(d=400,800\), respectively. widths vary from \(0.2\) to \(0.8\) for \(g_{5}\) under four error distributions given \((n,d)=(1024,500)\). PE is defined as \(PE=\frac{1}{t}\sum_{i=1}^{t}\|g(X_{i}^{test})-Y_{i}^{test}\|^{2}\), where \(\{(X_{i}^{test},Y_{i}^{test}\}_{i=1}^{t}\) represents the test data, which shares the same distribution as the training data. From Figures 4 and 5, we can see that a smaller bandwidth provides a better estimator in terms of bias, SD, and PE, and the proposed EML estimator is robust to variations in bandwidth within a certain range that approaches zero. These findings are consistent with the theoretical result presented in Theorem 2, which indicates that a small bandwidth is favored, and the extra risk is independent of the bandwidth if the bandwidth is appropriately small. Additionally, the comparison of Figures 4 and 5 reveals that the PE is more stable than bias and SD as the bandwidth changes. 
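To make the neighbourhood-proportion bandwidth \(h(\epsilon_{i})=|\max(\epsilon_{i}(v))-\min(\epsilon_{i}(v))|\) concrete, the sketch below gives one possible reading of it: for each residual we take the \(k=\lceil vn\rceil\) nearest residuals and use the range of that neighbourhood as a local bandwidth. This is our own interpretation for illustration and may differ in details from the authors' implementation; in (8), each \(\hat{f}(\epsilon_{i})\) would then use \(h(\epsilon_{i})\) in place of a global bandwidth.

```python
import math
import torch


def local_bandwidth(residuals, v=0.4):
    """Per-observation bandwidths h(eps_i) from the v-proportion neighbourhood."""
    n = residuals.numel()
    k = max(2, math.ceil(v * n))
    dist = (residuals.view(-1, 1) - residuals.view(1, -1)).abs()    # (n, n) pairwise distances
    _, idx = dist.topk(k, dim=1, largest=False)                     # k nearest residuals
    neigh = residuals[idx]                                          # (n, k)
    return neigh.max(dim=1).values - neigh.min(dim=1).values


if __name__ == "__main__":
    eps = torch.randn(100)
    print(local_bandwidth(eps, v=0.4).shape)  # torch.Size([100])
```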
## 5 Real data example We applied our proposed EML-FNN and other competing methods to analyze four real datasets based on the model (1) using the observations \((\mathbf{X}_{i},Y_{i})_{i=1}^{n}\). 1. Boston House Price Dataset. It is available in the scikit-learn library (Pedregosa et al., 2011) and encompasses a total of \(n=506\) observations. The purpose of the analysis is predicting the house price based on the 13 input variables \(\mathbf{X}_{i}\), such as urban crime rates, nitric oxide levels, average number of rooms in a dwelling, weighted distance to central areas, and average owner-occupied house prices, among others. Following Kong and Xia (2012) and Zhou et al. (2019), we employ the logarithm of the median price of owner-occupied residences in units of $1,000 as our response \(Y_{i}\). Figure 5: The bar chart of the mean prediction error (PE) of the proposed EML-FNN for \(g_{5}\) under four error distributions with \((n,d)=(1024,500)\), as the bandwidths vary from 0.2 to 0.8. 2. QSAR aquatic toxicity Dataset. The dataset was provided by Cassotti et al. (2014) and was used to develop quantitative regression QSAR models for predicting acute aquatic toxicity towards Daphnia Magna. It consists of a total of \(n=546\) observations, each has 8 molecular descriptors serving as covariates \(\mathbf{X}_{i}\), including PSA(Tot) (Molecular properties), SAacc (Molecular properties), H-050 (Atom-centred fragments), MLOGP (Molecular properties), RDCHI (Connectivity indices), GATS1p (2D autocorrelations), nN (Constitutional indices), C-040 (Atom-centred fragments). The response variable \(Y_{i}\) is the acute aquatic toxicity, specifically the LC50, which is defined as the concentration causing death in 50% of the test D. magna over a test duration of 48 hours. 3. QSAR fish toxicity Dataset. Another version of the dataset for quantitative regression QSAR models was provided by Cassotti et al. (2015). This dataset includes 908 observations, each observation has 6 input variables (\(\mathbf{X}_{i}\)) including molecular descriptors: MLOGP (molecular properties), CIC0 (information indices), GATS1i (2D autocorrelations), NdssC (atom-type counts), NdsCH ((atom-type counts), SM1_Dz(Z) (2D matrix-based descriptors). The response variable \(Y_{i}\) is the LC50 which is the concentration that causes death in 50% of test fish over a test duration of 96 hours. 4. Temperature forecast Dataset. The dataset was provided by Cho et al. (2020) and aims to correcting bias of next-day maximum and minimum air temperatures forecast from the LDAPS model operated by the Korea Meteorological Administration over Seoul, South Korea. The data consists of summer data spanning from 2013 to 2017. The input data \(\mathbf{X}_{i}\) is largely composed of predictions from the LDAPS model for the subsequent day, in-situ records of present-day maximum and minimum temperatures, and five geographic auxiliary variables. In this dataset, two outputs (\(Y_{i}\)) are featured: next-day maximum and minimum air temperatures. We preprocessed all the datasets by applying Z-score normalization to each predictor variable. Inspired from transfer learning, we employed the widely-used fine-tuning technique to simplify the computation. We initiated the process by training a single network model based on, for example, the Cauchy loss function by employing the methodology outlined in Section 4. Subsequently, we leveraged this trained model as a foundation to train all other models with a learning rate of 0.00003. 
All four datasets were randomly \begin{table} \begin{tabular}{c|c|c|c} \hline & **Boston** & **Aquatic Toxicity** & **Fish Toxicity** \\ & (Pedregosa et al., 2011) & (Cassotti et al., 2014) & (Cassotti et al., 2015) \\ \hline **LS** & 0.1045 & 1.2812 & 2.0153 \\ **LAD** & 0.1054 & 1.2184 & 2.142 \\ **Huber** & 0.1155 & 1.2003 & 2.1403 \\ **Cauchy** & 0.1192 & 1.2697 & 2.1179 \\ **Tukey’s biweight** & 0.1153 & 1.3148 & 2.1436 \\ **EML** & **0.0833** & **1.1497** & **1.8918** \\ \hline & **Temperature Forecast** & & \\ & (Cho et al., 2020) & & \\ \hline **LS** & 10.9303 & 5.5969 & \\ **LAD** & 10.6615 & 5.71 & \\ **Huber** & 10.6054 & 5.3211 & \\ **Cauchy** & 11.4592 & 6.0396 & \\ **Tukey’s biweight** & 11.2879 & 5.2332 & \\ **EML** & **4.4085** & **2.2196** & \\ \hline \end{tabular} \end{table} Table 1: Mean prediction error for four real datasets. split into training and test sets with a ratio of 4:1 to calculate PE. The entire procedure was repeated 50 times, and the average PE were calculated and presented in Table 1. The results in Table 1 clearly demonstrate the significant superiority of our approach over other competing methods in terms of a remarkable improvement in prediction accuracy across all four datasets. Particularly noteworthy is the outstanding performance achieved when applying our proposed EML technique to the Temperature Forecast dataset, where the improvement of the prediction accuracy achieves up to 50%. To understand the underlying reasons behind this improvement, we proceeded to plot the Q-Q plot in Figure 6 on the estimated error distribution for each of four real datasets. From the Q-Q plot in Figure 6, we can see the distribution of Boston house prices quite close to normal distribution. Following this, the toxicity data exhibits relatively closer resemblance to normality, characterized by a limited number of outliers. In contrast, the temperature data diverges substantially from the normal distribution, with a notable prevalence of extreme values. Based on these findings, we can draw the conclusion that the prediction performances, as illustrated in Table 1, are linked to the degree to which the respective distributions adhere to normality. Furthermore, from Tables 1 and Figure 6, we also can see that all the methods exhibit enhanced predictive accuracy when handling datasets that are more similar to a normal distribution. This observation further highlights the influence of distribution characteristics on the resulting estimator and emphasizes the importance of incorporating distribution information into the analysis. ## 6 Concluding Remarks The paper presents an innovative approach to nonparametric regression using FNN. This approach is characterized by its efficiency in both estimation and computation, its adaptability to diverse data distributions, and its robustness in the presence of noise and uncertainty. The key contributions are as follows: (1) Estimation efficiency: The method intro Figure 6: The Q-Q plot of the estimated density function for four real datasets. duces a novel loss function that incorporates not only observed data but also potentially implicit information about the data structure. By integrating this hidden information, the loss function transforms into an estimated maximum likelihood function, resulting in desirable properties such as efficiency. (2) Distribution-free: The method is independent of data distribution assumptions. 
Consequently, the approach adeptly handles data with varying distributional characteristics, such as heavy tails, multimodal distributions, and heterogeneity. (3) Probabilistic Robustness: The loss function is formulated through a probabilistic framework. This probabilistic approach effectively reduce the impact of substantial noise and outliers within the data, thereby enhancing its robustness. (4) Kernel-Based Smoothness: The method leverages the inherent smoothness of kernel functions. This enables the calculation of gradients and addresses challenges related to non-differentiability, when dealing with densities such as uniform distributions, mixture distributions, and cases of heteroscedasticity. (5) Computational efficiency: The proposed loss function exclusively involves the regression function \(g\). This design facilitates the straightforward utilization of existing software packages, simplifying the computational and programming. In summary, the method's capacity to accommodate various data distributions without the need for distributional assumptions renders it versatile and applicable to a wide range of real-world scenarios. By utilizing a reasonably small bandwidth, the proposed estimator is proved to be equivalent to the maximum likelihood estimator (MLE) where the density function is known. Furthermore, it nearly attains the minimax optimal rate, with only an additional logarithmic factor. Its exceptional performance is further exemplified through comprehensive simulation studies and its successful application to four distinct real-world datasets. There are several directions for future research. First, it might be possible to extend our method to a more complicated model, such as the generalized regression model for a discrete response. Secondly, practical scenarios often involve multiple responses that exhibit correlations, as seen in the Temperature Forecast Dataset's maximum and minimum air temperatures. By further modeling inter-response correlations, predictive capabilities could be enhanced. Lastly, it remains our responsibility to consistently enhance the associated software packages, ensuring seamless application. Despite having introduced an efficient and user-friendly package named EML-FNN, continued optimization and refinement are necessary.
This paper develops an efficient and robust nonparametric regression estimator within the framework of feedforward neural networks. The proposed estimator has several appealing features. First, the loss function is based on an estimated maximum likelihood function that combines the information obtained from the observed data with information implied by the data structure, which yields an estimator with desirable optimality properties such as efficiency. Second, unlike the conventional maximum likelihood estimator (MLE), the proposed method avoids specifying the error distribution, and can therefore accommodate all kinds of distributions, including heavy-tailed, multimodal, and heterogeneous ones. Third, the proposed loss function is computed from probabilities rather than from direct observations as in least squares, which gives the proposed estimator its robustness. Finally, the proposed loss function involves only the nonparametric regression function, so existing software packages can be used directly for its implementation.
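For reference, the evaluation protocol used for Table 1 above (random 4:1 train/test splits repeated 50 times, averaging the prediction error) can be sketched as follows. The regressor is a placeholder scikit-learn MLP and mean squared error stands in for the prediction-error metric; neither is the paper's EML-FNN.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

def mean_prediction_error(X, y, n_repeats=50, seed=0):
    """Average test prediction error over repeated random 4:1 splits."""
    rng = np.random.RandomState(seed)
    errors = []
    for _ in range(n_repeats):
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=0.2, random_state=rng.randint(1_000_000))
        model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000)  # placeholder regressor
        model.fit(X_tr, y_tr)
        errors.append(mean_squared_error(y_te, model.predict(X_te)))
    return float(np.mean(errors))
```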
2309.06807
Bayesian uncertainty-weighted loss for improved generalisability on polyp segmentation task
While several previous studies have devised methods for segmentation of polyps, most of these methods are not rigorously assessed on multi-center datasets. Variability due to appearance of polyps from one center to another, difference in endoscopic instrument grades, and acquisition quality result in methods with good performance on in-distribution test data, and poor performance on out-of-distribution or underrepresented samples. Unfair models have serious implications and pose a critical challenge to clinical applications. We adapt an implicit bias mitigation method which leverages Bayesian predictive uncertainties during training to encourage the model to focus on underrepresented sample regions. We demonstrate the potential of this approach to improve generalisability without sacrificing state-of-the-art performance on a challenging multi-center polyp segmentation dataset (PolypGen) with different centers and image modalities.
Rebecca S. Stone, Pedro E. Chavarrias-Solano, Andrew J. Bulpitt, David C. Hogg, Sharib Ali
2023-09-13T08:54:22
http://arxiv.org/abs/2309.06807v2
# Bayesian uncertainty-weighted loss for improved generalisability on polyp segmentation task ###### Abstract While several previous studies have devised methods for segmentation of polyps, most of these methods are not rigorously assessed on multi-center datasets. Variability due to appearance of polyps from one center to another, difference in endoscopic instrument grades, and acquisition quality result in methods with good performance on in-distribution test data, and poor performance on out-of-distribution or underrepresented samples. Unfair models have serious implications and pose a critical challenge to clinical applications. We adapt an implicit bias mitigation method which leverages Bayesian epistemic uncertainties during training to encourage the model to focus on underrepresented sample regions. We demonstrate the potential of this approach to improve generalisability without sacrificing state-of-the-art performance on a challenging multi-center polyp segmentation dataset (PolypGen) with different centers and image modalities. ## 1 Introduction Colorectal cancer (CRC) is the third most common cancer worldwide [27], with early screening and removal of precancerous lesions (colorectal adenomas such as "polyps") suggesting longer survival rates. While surgical removal of polyps (polypectomy) is a standard procedure during colonoscopy, detecting polyps and precisely delineating them, especially sessile serrated adenomas/polyps, is extremely challenging. Over the past decade, advanced computer-aided methods have been developed, and most recently machine learning (ML) methods have been widely explored by several groups. However, the translation of these technologies to clinical settings has still not been fully achieved. One of the main reasons is the generalisability issue with ML methods [2]. Most techniques are built and adapted over carefully curated datasets which may not match the natural occurrences of the scene during colonoscopy. Recent literature demonstrates how intelligent models can be systematically unfair and biased against certain subgroups of populations. In medical imaging, the problem is prevalent across various image modalities and target tasks; for example, models trained for lung disease prediction [25], retinal diagnosis [6], cardiac MR segmentation [23], and skin lesion detection [1, 17] are all subject to biased performance against one or a combination of underrepresented gender, age, socio-economic, and ethnic subgroups. Even under the assumption of an ideal sampling environment, a perfectly balanced dataset does not ensure unbiased performance, as relative quantities are not solely responsible for bias [31, 19]. This, and the scarcity of literature exploring bias mitigation for polyp segmentation in particular, strongly motivate the need for development and evaluation of mitigation methods which work on naturally occurring diverse colonoscopy datasets such as PolypGen [3]. ## 2 Related work Convolutional neural networks have recently driven the advancement of data-driven, deep-learning approaches to polyp segmentation. These methods [18, 34] are widely adapted from the encoder-decoder U-Net [24] architecture. Moreover, to address the problem of different polyp sizes, multi-scale feature pruning methods such as atrous spatial pyramid pooling in DeepLabV3 [8], or high-resolution feature fusion networks like HRNet [28], have been used by several groups for improved polyp segmentation.
For example, MSRFNet [29] uses feature fusion networks between different resolution stages. Recent work on generalisability assessment found that methods trained on specific centers do not tend to generalise well on unseen center data or different naturally occurring modalities such as sequence colonoscopy data [2]. These performance gaps were reported to be large (drops of nearly 20%). Out-of-distribution (OOD) generalisation and bias mitigation are challenging, open problems in the computer vision research community. While in the bias problem formulation, models wrongly correlate one or more spurious (non-core) features with the target task, the out-of-distribution problem states that test data is drawn from a separate distribution than the training data. Some degree of overlap between the two distributions in the latter formulation exists, which likely includes the core features. Regardless of the perspective, the two problems have clear similarities, and both result in unfair models which struggle to generalise for certain sub-populations. In the literature, many works focus on OOD detection, through normal or modified softmax outputs [13], sample uncertainty thresholds from Bayesian, ensemble, or other models [20, 14, 7], and distance measures in feature latent space [12]. Other approaches tackle the more difficult problem of algorithmic mitigation through disentangled representation learning, architectural and learning methods, and methods which optimise for OOD generalisability directly [26]. Similarly, several categories of bias mitigation methods exist. Some methods rely on two or more models, one encouraged to learn the biased correlations of the majority, and the other penalised for learning the correlations of the first [21, 16]. Other approaches modify the objective loss functions to reward learning core rather than spurious features [33, 22], or by neutralising representations to remove learned spurious correlations [10]. Others use data augmentation [6], or explore implicit versions of up-weighting or re-sampling underrepresented samples by discovering sparse areas of the feature space [4] or dynamically identifying samples more likely to be underrepresented [30]. De-biasing methods leveraging Bayesian model uncertainties [15, 5, 30] provide the added benefits of uncertainty estimations which are useful in clinical application for model interpretability and building user confidence. To tackle the generalisability problem for polyp segmentation, we consider the diversity of features in a multi-centre polyp dataset [3]. Our contributions can be listed as: 1) adapting an implicit bias mitigation strategy in [30] from a classification to a segmentation task; 2) evaluating the suitability of this approach on three separate test sets which have been shown to be challenging generalisation problems. Our experiments demonstrate that our method is comparable and in many cases even improves the performance compared to the baseline state-of-the-art segmentation method while decreasing performance discrepancies between different test splits. ## 3 Method The encoder-decoder architecture for semantic segmentation has been widely explored in medical image analysis. In our approach we have used DeepLabV3 [9] as baseline model that has SOTA performance on the PolypGen dataset [3]. We then apply a probabilistic model assuming a Gaussian prior on all trainable weights (both encoder and decoder) that are updated to the posterior using the training dataset. 
For the Bayesian network with parameters \(\boldsymbol{\theta}\), posterior \(p(\boldsymbol{\theta}\;\mid D)\), training data with ground truth segmentation masks \(D=(X,Y)\), and sample \(x_{i}\), the predictive posterior distribution for a given ground truth segmentation mask \(y_{i}\) can be written as: \[p(y_{i}\mid D,x_{i})=\int p(y_{i}\mid\boldsymbol{\theta},x_{i})p(\boldsymbol{ \theta}\mid D)d\boldsymbol{\theta} \tag{1}\] While Monte-Carlo dropout [11] at test-time is a popular approach to approximating this intractable integral, we choose stochastic gradient Monte-Carlo sampling MCMC (SG-MCMC [32]) for a better posterior. Stochastic gradient over mini-batches includes a noise term approximating the gradient over the whole training distribution. Furthermore, the cyclical learning rate schedule introduced in [35] known as cyclical SG-MCMC, or cSG-MCMC, allows for faster convergence and better exploration of the multimodal distributions prevalent in deep neural networks. Larger learning step phases provide a warm restart to the subsequent smaller steps in the sampling phases. The final estimated posterior of the Bayesian network, \(\boldsymbol{\Theta}=\{\boldsymbol{\theta}_{1},...\boldsymbol{\theta}_{M}\}\), consists of \(M\) moments sampled from the posterior taken during the sampling phases of each learning cycle. With functional model \(\boldsymbol{\Phi}\) representing the neural network, the approximate predictive mean \(\mu_{i}\) for one sample \(x_{i}\) is: \[\mu_{i}\approx\frac{1}{M}\sum_{m=1}^{M}\boldsymbol{\Phi}_{\theta_{m}}(x_{i}) \tag{2}\] We can derive a segmentation prediction mask \(\hat{y}_{i}\) from \(\mu_{i}\) by taking the maximum output between the foreground and background channels. The epistemic uncertainty mask corresponding to this prediction (Equation 3) represents the _model uncertainty_ for the predicted segmentation mask, the variance in the predictive distribution for that sample. \[\sigma_{i}\approx\frac{1}{M}\sqrt{\sum_{m=1}^{M}\left(\boldsymbol{\Phi}_{\theta _{m}}(x_{i})-\mu_{i}\right)^{2}} \tag{3}\] We add epistemic uncertainty-weighted sample loss [30] that identifies high-uncertainty sample regions during training. It also scales the pixel-wise contribution of these regions to the loss computation via a simple weighting function (Equation 4). This unreduced cross-entropy loss is then averaged over each image and batch (see Fig. 1). \[\hat{L}(\hat{y}_{i},y_{i})=L_{CE}(\hat{y}_{i},y_{i})*(1.0+\sigma_{i,y_{i}})^{\kappa} \tag{4}\] The shift by a constant (1.0) normalises the values, ensuring that the lowest uncertainty samples are never irrelevant to the loss term. \(\kappa\) is a tunable debiasing parameter; \(\kappa=1\) being a normal weighting, whereas \(\kappa\rightarrow\infty\) increases the importance of high-uncertainty regions. As too large a \(\kappa\) results in degraded performance due to overfitting, the optimal value is determined by validation metrics. Figure 1: Pixel-wise weighting of cross entropy (CE) loss contribution based on epistemic uncertainty maps for each training sample; the model is encouraged to focus on regions for which it is more uncertain. ## 4 Experiments and results ### Dataset and experimental setup PolypGen [3] is an expert-curated polyp segmentation dataset comprising of both single frames and sequence frames (frames sampled at every 10 frames from video) from over 300 unique patients across six different medical centers. 
The natural data collection format is video, from which single frames and sequence data are hand-selected. The single frames are clearer and of better quality, with a polyp of varying size (10k to 40k pixels) in each frame, and potentially contain additional artifacts such as light reflections, blue dye, partial views of instruments, and anatomies such as colon linings and mucosa covered with stool, and air bubbles (Fig. 2). The sequence frames are more challenging and contain more negative samples without a polyp and more severe artifacts, which are a natural occurrence in colonoscopy. Our training set includes 1449 single frames from five centers (C1 to C5) and we evaluate on the three test sets used for generalisability assessment in the literature [2, 3]. The first test dataset has 88 single frames from an unseen center C6 (C6-SIN), and the second has 432 frames from sequence data also from unseen center C6 (C6-SEQ). Here, the first test set (C6-SIN) comprises hand-selected images from the colonoscopy videos while the second test set (C6-SEQ) includes short sequences (every \(10^{th}\) frame of video) mimicking the natural occurrence of the procedure. The third test dataset includes 124 frames but from seen centers C1 - C5; however, these are more challenging as they contain both positive and negative samples with different levels of corruption that are not as present in the curated single-frame training set. Figure 2: Samples from the EndoCV2021 dataset; from (_top_) C1-5 single frames and (_bottom_) C1-5-SEQ; (_top_) highlights the data distribution of each center (C1-C5), which consists of curated frames with well-defined polyps; (_bottom_) demonstrates the variability of sequential data due to the presence of artifacts, occlusions, and polyps with different morphology. As no C6 samples nor sequence data are present in the training data, these test sets present a challenging generalisability problem. 1. Footnote 1: C1-5-SEQ and C6-SEQ data are referred to as DATA3 and DATA4, respectively, in [2] Training was carried out on several IBM Power 9 dual-CPU nodes with 4 NVIDIA V100 GPUs. Validation metrics were used to determine optimal models for all experiments, with hyper-parameters chosen via grid search. Perhaps due to some frames containing very large polyps with high uncertainties, we found that the gradients of Bayesian models with uncertainty-weighted loss (BayDeepLabV3+Unc) occasionally exploded during the second learning cycle, and clipping the absolute gradients at 1.0 for all weights prevented this issue. All Bayesian DeepLabV3+ (BayDeepLabV3+) models had 2 cycles, a cycle length of 550 epochs, noise control parameter \(\alpha=0.9\), and an initial learning rate of 0.1. For BayDeepLabV3+Unc, we found optimal results with de-biasing tuning parameter \(\kappa=3\). Posterior estimates for BayDeepLabV3+ and BayDeepLabV3+Unc included 6 and 4 samples per cycle, respectively. ### Results We use the state-of-the-art deterministic model 2 and checkpoints to evaluate on the three test sets, and compare against the baseline Bayesian model BayDeepLabV3+ and BayDeepLabV3+Unc with uncertainty-weighted loss.
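Before turning to the results, the weighting scheme of Equations (2)-(4) above can be sketched in PyTorch as follows: the predictive mean and epistemic uncertainty are computed from M posterior samples, and the un-reduced cross-entropy of each pixel is scaled by \((1+\sigma)^{\kappa}\). Tensor shapes, function names, and the use of per-pixel softmax probabilities are our assumptions; this is an illustration of the weighting, not the authors' training code.

```python
import torch
import torch.nn.functional as F

def uncertainty_weighted_ce(logits, target, sigma, kappa=3.0):
    """Eq. (4): un-reduced pixel-wise CE scaled by (1 + sigma)^kappa, then averaged.

    logits: (B, C, H, W) current network output; target: (B, H, W) ground-truth mask;
    sigma:  (B, C, H, W) epistemic uncertainty per class from Eq. (3).
    """
    ce = F.cross_entropy(logits, target, reduction="none")        # (B, H, W)
    sigma_true = sigma.gather(1, target.unsqueeze(1)).squeeze(1)  # uncertainty at the true class
    return (ce * (1.0 + sigma_true) ** kappa).mean()

@torch.no_grad()
def predictive_mean_and_uncertainty(models, x):
    """Eqs. (2)-(3): mean softmax output over M posterior samples and its spread."""
    probs = torch.stack([F.softmax(m(x), dim=1) for m in models])  # (M, B, C, H, W)
    mu = probs.mean(dim=0)
    sigma = ((probs - mu) ** 2).sum(dim=0).sqrt() / len(models)    # as written in Eq. (3)
    return mu, sigma
```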
\begin{table} \begin{tabular}{l c c c c c c c} \hline \hline **Dataset** & **Method** & **JAC** & **Dice** & **F2** & **PPV** & **Recall** & **Accuracy** \\ \hline \multirow{4}{*}{C6-SIN} & SOTA & 0.738\(\pm\)0.3 & 0.806\(\pm\)0.3 & 0.795\(\pm\)0.3 & **0.912\(\pm\)0.2** & 0.793\(\pm\)0.3 & 0.979\(\pm\)0.1 \\ & BayDeepLabV3+ & 0.721\(\pm\)0.3 & 0.790\(\pm\)0.3 & **0.809\(\pm\)0.3** & 0.836\(\pm\)0.2 & **0.843\(\pm\)0.3** & **0.977\(\pm\)0.1** \\ & Ours & **0.740\(\pm\)0.3** & **0.810\(\pm\)0.3** & 0.804\(\pm\)0.3 & 0.903\(\pm\)0.1 & 0.806\(\pm\)0.3 & **0.977\(\pm\)0.1** \\ \hline \multirow{4}{*}{C1-5-SEQ} & SOTA & **0.747\(\pm\)0.3** & **0.819\(\pm\)0.3** & **0.828\(\pm\)0.3** & 0.877\(\pm\)0.2 & 0.852\(\pm\)0.3 & 0.960\(\pm\)0.0 \\ & BayDeepLabV3+ & 0.708\(\pm\)0.3 & 0.778\(\pm\)0.3 & 0.805\(\pm\)0.3 & 0.784\(\pm\)0.3 & **0.885\(\pm\)0.2** & **0.963\(\pm\)0.0** \\ & Ours & 0.741\(\pm\)0.3 & 0.810\(\pm\)0.3 & 0.815\(\pm\)0.3 & **0.888\(\pm\)0.2** & 0.836\(\pm\)0.3 & 0.961\(\pm\)0.0 \\ \hline \multirow{4}{*}{C6-SEQ} & SOTA & 0.608\(\pm\)0.4 & 0.676\(\pm\)0.4 & 0.653\(\pm\)0.4 & 0.845\(\pm\)0.3 & 0.719\(\pm\)0.3 & 0.964\(\pm\)0.1 \\ & BayDeepLabV3+ & 0.622\(\pm\)0.4 & 0.682\(\pm\)0.4 & 0.669\(\pm\)0.4 & 0.802\(\pm\)0.3 & **0.764\(\pm\)0.3** & 0.965\(\pm\)0.1 \\ \cline{1-1} & Ours & **0.637\(\pm\)0.4** & **0.697\(\pm\)0.4** & **0.682\(\pm\)0.4** & **0.858\(\pm\)0.3** & 0.741\(\pm\)0.3 & **0.967\(\pm\)0.1** \\ \hline \hline \end{tabular} \end{table} Table 1: Evaluation of the state-of-the-art deterministic DeepLabV3+, BayDeepLabV3+, and our proposed BayDeepLabV3+Unc, showing means and standard deviations across the respective test dataset samples. First and second best results for each metric per dataset are highlighted. We report results for Jaccard index (JAC), Dice coefficient (Dice), F\({}_{\beta}\)-measure with \(\beta\) = 2 (F2), positive predictive value (PPV), recall (Rec), and mean pixel-wise accuracy (Acc). PPV in particular has high clinical value as it indicates a more accurate delineation for the detected polyps. Recall and mean accuracy are less indicative since the majority of each frame is background in the segmentation task and these metrics do not account for false positives. A larger number of false positive predictions can cause inconvenience to endoscopists during the colonoscopic procedure and hence can hinder clinical adoption of methods. Figure 3: Performance gaps of the three models (state-of-the-art deterministic DeepLabV3+, BayDeepLabV3+, and BayDeepLabV3+Unc) between the three different test sets; _(top)_ comparing performance on single vs. sequence frames from out-of-distribution test set C6 (C6-SIN vs. C6-SEQ), and _(bottom)_ sequence frames from C1 - C5 vs. unseen C6 (C1-5-SEQ vs. C6-SEQ). The subtext above bars indicates the percent decrease in performance gap compared to SOTA; a larger percent decrease and shorter vertical bar length indicate better generalisability. Figure 3 illustrates that our approach maintains SOTA performance across most metrics and various test settings, even outperforming it in some cases; simultaneously, the performance gaps between test sets representing different challenging features, (1) image modalities (single vs. sequence frames) and (2) source centers (C1 - C5 vs. C6), are significantly decreased. Simply turning the SOTA model Bayesian improves the model's ability to generalise, yet comes with a sacrifice in performance across metrics and datasets.
Our proposed uncertainty-weighted loss achieves better generalisability without sacrificing performance (also see Table 1). We note performance superiority over SOTA especially on C6-SEQ, with approximately 3% improvement on Dice. We can also observe slight improvements in PPV for the sequence test sets (both held-out data and unseen-centre data). Finally, we note that in clinical applications, the uncertainty maps for samples during inference could be useful for drawing clinicians' attention towards potentially challenging cases, increasing the likelihood of a fairer outcome. ## 5 Conclusion We have motivated the critical problem of model fairness in polyp segmentation on a multi-center dataset, and adapted a Bayesian bias mitigation method to our task. The results on three challenging test sets show strong potential for improving generalisability while maintaining competitive performance across all metrics. Furthermore, the proposed mitigation method is implicit, not requiring comprehensive knowledge of biases or out-of-distribution features in the training data. This is of particular importance in the medical community given the sensitivity and privacy issues limiting collection of annotations and metadata. Our findings are highly relevant to the understudied problem of generalisation across high-variability colonoscopy images, and we anticipate future work will include comparisons with other methods to improve generalisability and an extension to the approach. We also anticipate having access to additional test data for more in-depth analysis of the results. ###### Acknowledgements. R. S. Stone is supported by an Ezra Rabin scholarship.
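For reference, the per-image evaluation metrics reported in Table 1 (Jaccard, Dice, F2, PPV, recall, pixel accuracy) can be computed from binary masks as in the sketch below; the function name, the binarisation convention, and the eps-smoothing are ours.

```python
import numpy as np

def segmentation_metrics(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8) -> dict:
    """Per-image metrics from binary {0, 1} prediction and ground-truth masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    tn = np.logical_and(~pred, ~gt).sum()
    jac = tp / (tp + fp + fn + eps)
    dice = 2 * tp / (2 * tp + fp + fn + eps)
    ppv = tp / (tp + fp + eps)
    rec = tp / (tp + fn + eps)
    acc = (tp + tn) / (tp + tn + fp + fn + eps)
    f2 = 5 * ppv * rec / (4 * ppv + rec + eps)  # F-beta with beta = 2
    return {"JAC": jac, "Dice": dice, "F2": f2, "PPV": ppv, "Recall": rec, "Accuracy": acc}
```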
While several previous studies have devised methods for polyp segmentation, most of these methods have not been rigorously assessed on multi-center datasets. Variability in the appearance of polyps from one center to another, differences in endoscopic instrument grades, and acquisition quality result in methods that perform well on in-distribution test data but poorly on out-of-distribution or underrepresented samples. Unfair models have serious implications and pose a critical challenge to clinical applications. We adapt an implicit bias mitigation method which leverages Bayesian predictive uncertainties during training to encourage the model to focus on underrepresented sample regions. We demonstrate the potential of this approach to improve generalisability without sacrificing state-of-the-art performance on a challenging multi-center polyp segmentation dataset (PolypGen) with different centers and image modalities.
2308.00187
Detecting the Anomalies in LiDAR Pointcloud
LiDAR sensors play an important role in the perception stack of modern autonomous driving systems. Adverse weather conditions such as rain, fog and dust, as well as some (occasional) LiDAR hardware fault may cause the LiDAR to produce pointcloud with abnormal patterns such as scattered noise points and uncommon intensity values. In this paper, we propose a novel approach to detect whether a LiDAR is generating anomalous pointcloud by analyzing the pointcloud characteristics. Specifically, we develop a pointcloud quality metric based on the LiDAR points' spatial and intensity distribution to characterize the noise level of the pointcloud, which relies on pure mathematical analysis and does not require any labeling or training as learning-based methods do. Therefore, the method is scalable and can be quickly deployed either online to improve the autonomy safety by monitoring anomalies in the LiDAR data or offline to perform in-depth study of the LiDAR behavior over large amount of data. The proposed approach is studied with extensive real public road data collected by LiDARs with different scanning mechanisms and laser spectrums, and is proven to be able to effectively handle various known and unknown sources of pointcloud anomaly.
Chiyu Zhang, Ji Han, Yao Zou, Kexin Dong, Yujia Li, Junchun Ding, Xiaoling Han
2023-07-31T22:53:42
http://arxiv.org/abs/2308.00187v1
# Detecting the Anomalies in LiDAR Pointcloud ###### Abstract LiDAR sensors play an important role in the perception stack of modern autonomous driving systems. Adverse weather conditions such as rain, fog and dust, as well as some (occasional) LiDAR hardware faults, may cause the LiDAR to produce pointcloud with abnormal patterns such as scattered noise points and uncommon intensity values. In this paper, we propose a novel approach to detect whether a LiDAR is generating anomalous pointcloud by analyzing the pointcloud characteristics. Specifically, we develop a pointcloud quality metric based on the LiDAR points' spatial and intensity distribution to characterize the noise level of the pointcloud, which relies on pure mathematical analysis and does not require any labeling or training as learning-based methods do. Therefore, the method is scalable and can be quickly deployed either online, to improve autonomy safety by monitoring anomalies in the LiDAR data, or offline, to perform in-depth studies of the LiDAR behavior over large amounts of data. The proposed approach is studied with extensive real public road data collected by LiDARs with different scanning mechanisms and laser spectrums, and is proven to be able to effectively handle various known and unknown sources of pointcloud anomaly. LiDAR, autonomous driving, assisted driving ## I Introduction LiDAR (Light Detection and Ranging) sensors have attracted growing attention from the automotive and autonomous driving industry thanks to their capability of continuously generating high-definition and accurately ranged images (pointcloud) of the surroundings, regardless of the ambient illuminance conditions [1, 2]. As is pointed out in [2], one particular challenge of using LiDARs for perception in autonomous driving is the performance degradation in adverse weather conditions such as rain, fog and dust, where the LiDAR's laser signal may be scattered and/or attenuated, leading to reduced laser power and signal-to-noise ratio (SNR), which may cause the pointcloud to contain random noise points and lower intensity readings [3]. Not only can adverse environmental conditions cause the issues above; defective LiDAR hardware components or unknown random factors may also lead to anomalous pointcloud output. For example, a LiDAR with defective electromagnetic shielding may output extremely noisy pointcloud when strong signal interference sources such as cellular towers are nearby. The goal of this paper is to propose a method to characterize the aforementioned LiDAR pointcloud anomalies, which can benefit autonomous driving system (ADS) safety as well as the ADS development cycle. In terms of increasing the level of automation and ADS safety, a higher-level ADS (level 3+) needs to detect whether the system is within its operation domain and behave correspondingly, according to the Society of Automotive Engineers (SAE) [4]. The ADS operation domain is typically bounded by environmental conditions and system component health, and it is essential that ADS sensors such as LiDARs are able to determine their status and data quality. As for the application in ADS development, the data frames with anomalous LiDAR pointcloud are typically associated with edge cases and long-tail scenarios, which require extra attention yet have a relatively low rate of occurrence in the vast amount of data generated by the autonomous driving fleet.
Having those cases picked out effectively and efficiently helps to save the time and effort required for ADS development. While research on general LiDAR pointcloud anomalies is limited, the topic of LiDAR performance under adverse weather conditions has been studied extensively [5, 6, 7, 8, 9, 10, 11, 12]. Many of the studies focus on the performance degradation of the LiDAR in rain/fog and have developed various quantification methods for aspects such as signal attenuation, visibility range, point density and target reflectance. Some recent studies develop statistics-based learning methods to classify whether a LiDAR is working in adverse weather based on performance degradation metrics [13, 14]. These methods are typically verified through simulation or testing in controlled environments, which may not resemble realistic road conditions well. For example, many controlled environments used to emulate rain, such as the one presented in [13], consist of several static test targets (vehicles, pedestrians, etc.). Such environments cannot produce the water splashes generated by the rolling wheels of other vehicles on the road, which are typically seen and picked up by LiDARs in realistic operations. In addition, it should be noted that many of the commonly studied LiDAR performance degradation aspects do not always lead to safety-critical component or system failure. For example, a LiDAR typically has a reduced visibility range in rain, which only reduces the perception system's capability and does not necessarily disable all the perception functions; on the other hand, even if the LiDAR is operating with its full capability on a sunny day, it may generate a large amount of false positive points due to hardware failure, which are likely to be recognized as objects by the perception system and cause the vehicle to perform a hard brake. In [15] the authors developed a deep-learning-based approach to classify and detect LiDAR pointcloud anomalies. However, there are two major drawbacks to applying deep-learning-based approaches in practical R&D and implementation. First, they require a large amount of annotated LiDAR data frames for training; moreover, the data collection, annotation and training pipeline must be repeated for different LiDAR properties, such as spinning vs. solid state, 905nm vs. 1550nm, or even a change of mounting location, which lengthens the R&D cycle. Second, the real-time computational cost is high and may not be acceptable given the limited onboard computational resources. In this paper, we propose a novel quality metric to quantitatively characterize the general noise-related anomalies in LiDAR pointcloud. To capture the spatially-scattered nature of LiDAR noise points, we adopt the idea of spatial autocorrelation [16], which is widely used in statistical studies, to quantify how 'dispersed' the points are in a frame of LiDAR pointcloud. A factor related to the intensity of the pointcloud is also included in the quality metric to better separate the cases where the LiDAR is in heavy rain or dense fog. The main contribution of the paper is twofold: * First, we developed a general quality metric that is able to capture noise-related anomalies in LiDAR pointcloud regardless of the cause of the anomaly. It is particularly useful in identifying new pointcloud issues with unknown causes or very little prior experience, during both early-stage system validation and large-scale operation.
* Second, the proposed approach does not require a priori data collection, labeling and training and thus can reduce the time and resource consumption for practical implementation. The proposed quality metric is verified with over 10,000 miles of public road data collected by LiDARs with various laser spectrums, scanning mechanisms and mounting locations. The results show that the proposed method is able to identify pointcloud affected not only by adverse weather conditions, but also by uncommon noise sources such as signal interference, road dust, etc. The rest of the paper is organized as follows. We first present the formulation and implementation of the proposed LiDAR pointcloud quality metric in Section II. Section III demonstrates the verification of the proposed method, followed by conclusions in Section IV. ## II Pointcloud Quality Metric In this section, we first showcase some typical scenarios and characteristics of anomalous LiDAR pointcloud, based on which we formulate the pointcloud quality metric. An implementation method utilizing a LiDAR image grid and GPU (graphics processing unit) acceleration is also presented. ### _Anomalous LiDAR Pointcloud_ A LiDAR impacted by adverse weather or hardware component failures may produce anomalous pointcloud with the following typical characteristics: * Randomly and sparsely distributed detections in the 3-dimensional physical space. Signal interference and hardware failure typically affect the LiDAR's signal processing module and generate random and sparse false positives. In adverse weather conditions, this is mainly caused by reflection from water droplets, reflection from scattered laser signals through water/dust, and reduced pointcloud density due to signal attenuation. * Abnormal intensity values. Particularly in rainy and foggy weather, the intensity values are lower than normal due to signal attenuation. Signal interference and hardware failure may lead to either low or excessively high intensity values. A few examples of typical anomalous LiDAR pointcloud we collected during public road testing are shown in Figure 1. All the pointclouds in the figures are colored by intensity values. Points colored blue indicate low intensity values and those colored red represent high intensity values. Figure 1(a) demonstrates one case of LiDAR pointcloud in rain where numerous noise points can be observed at close range within the LiDAR's field of view (FOV). Figure 1(b) shows another case of LiDAR pointcloud in rain. In this case, both the number of points and the intensity values are significantly reduced due to the laser signal being absorbed by the heavy rain. The pointcloud in Figure 1(c) sees much higher intensity values as well as noise points all over the FOV due to an internal component failure inside the LiDAR. The LiDAR whose pointcloud is shown in Figure 1(d) does not have proper electromagnetic shielding and suffers signal interference when passing a cellular signal tower. ### _Pointcloud Quality Metric Formulation_ The proposed pointcloud quality metric consists of two factors to address the two major characteristics of anomalous LiDAR pointcloud shown above. The first factor is a spatial measure to quantify how dispersed the LiDAR points are in the 3-dimensional physical space. The second factor is an intensity measure to capture the abnormal intensity pattern in the LiDAR pointcloud, particularly the lower-than-normal intensity values in adverse weather conditions such as rain and fog.
Fig. 1: Examples of Anomalous LiDAR Pointcloud #### II-B1 Spatial Measure We employ the concept of spatial autocorrelation [16] as a measure of the LiDAR points' level of spatial dispersion. In statistics, spatial autocorrelation is used to describe the overall spatial clustering of a group of data by calculating each data point's correlation with other nearby data points. A low spatial autocorrelation means that the group of data is dispersed, while a high spatial autocorrelation means that the data group is clustered. The underlying idea of using spatial autocorrelation to characterize the LiDAR pointcloud's spatial dispersion/clustering is that if a segment of LiDAR pointcloud data is generated by lasers detecting an actual object, the distance values in the data segment tend to be clustered, since common road objects such as cars and pedestrians typically have large and continuous reflection surfaces. On the other hand, if a LiDAR data segment contains an excessive number of noise points, the distance values in the data segment are more likely dispersed. An illustration of the idea is shown in Figure 2. The example captures the LiDAR pointcloud of a vehicle driving on wet road surfaces with water splash generated at the rear of the vehicle. The LiDAR points from the vehicle (marked red) are well clustered, while the water splash points behind the vehicle (marked green) are dispersed. The spatial autocorrelation of a set of LiDAR points is defined as follows. Given a set of LiDAR points: \[\mathcal{P}=\{p_{i}=(r_{i},\theta_{i},\phi_{i},\gamma_{i})|i=1,2,...N\} \tag{1}\] where \(r_{i},~{}\theta_{i},~{}\phi_{i}\) and \(\gamma_{i}\) represent the distance, azimuth, elevation and intensity of the \(i\)-th LiDAR point, respectively. Then, the spatial autocorrelation of the distance values is defined as: \[I=\left\{\begin{array}{ll}\dfrac{N}{W}\cdot\dfrac{\sum_{i=1}^{N}\sum_{j=1}^{N}w_{ij}(r_{i}-\bar{r})(r_{j}-\bar{r})}{\sum_{i=1}^{N}(r_{i}-\bar{r})^{2}}&N>1\\ -1&N=1\end{array}\right. \tag{2}\] where \(\bar{r}=\frac{1}{N}\sum_{i=1}^{N}r_{i}\) is the average of all distance values in the set of points and \(w_{ij}\) is a pre-defined weight value. For instance, one may consider the correlation of one data point to all other data points in the set with identical weights by defining \(w_{ij}\) as: \[w_{ij}=\left\{\begin{array}{ll}1&i\neq j\\ 0&i=j\end{array}\right. \tag{3}\] Alternatively, \(w_{ij}\) can also be defined based on the inverse angular distance between points \(i\) and \(j\), so that the correlation between closer points has a higher weight: \[w_{ij}=\left\{\begin{array}{ll}\left\|(\theta_{i},\phi_{i})-(\theta_{j},\phi_{j})\right\|^{-2}&i\neq j\\ 0&i=j\end{array}\right. \tag{4}\] \(W=\sum_{i=1}^{N}\sum_{j=1}^{N}w_{ij}\) is the sum of all weights. The spatial autocorrelation is valued between \([-1,~{}1]\), where a value of -1 indicates that the set of points is extremely dispersed in the 3-dimensional physical space and a value of 1 means that the points are well clustered. It should be noted that by the definition above, a set with one isolated point, i.e., \(N=1\), is considered dispersed and has a spatial autocorrelation value of -1. We believe that (2) is a reasonable definition for isolated points since an isolated point is most likely to be treated as a noise point in perception algorithms. The main difference between the spatial autocorrelation and the statistical variance is that the statistical variance only considers the absolute difference between each individual point and the average; thus, it depicts how the data is distributed in the sample space.
The spatial autocorrelation, on the other hand, considers the relation of each individual point to the other points. Sets of data points that have the same statistical variance may not necessarily have the same spatial autocorrelation. This is shown by the two pointcloud examples in Figure 3, where both sets of points have the same range variance. However, the spatial autocorrelation of the pointcloud in case ii is negative while that in case i is positive, indicating that the pointcloud in case ii is more dispersed. In practice, multiple vehicles/objects in the LiDAR field of view can typically generate a pointcloud distribution similar to case i, and noise/false positives may result in a pointcloud distribution which resembles that in case ii. Furthermore, consider the extreme case where only one isolated LiDAR point is present. By definition, the single-point set has a minimum variance of 0. On the other hand, it has the lowest spatial autocorrelation score following definition (2), which aligns with our intention to characterize isolated points as noise points. Therefore, spatial autocorrelation is a more suitable measure for our application than the statistical variance. Fig. 2: Illustration of Clustered and Dispersed LiDAR Pointcloud Fig. 3: Examples of Pointcloud Distribution #### II-B2 Intensity Measure LiDARs with specific laser wavelengths may generate clustered instead of scattered noise points in heavy rain or dense fog. Figure 4 shows an example of this type of noise points. The pointcloud in the figure is captured when the LiDAR encounters heavy rain on the road. In the lower right of the figure there is a sizable cluster of noise points likely generated from reflections of rain droplets, which could be recognized as an object to be avoided by the perception algorithms. Since this particular type of LiDAR noise is typically clustered, it can be hard to characterize using spatial autocorrelation alone, as will be shown in the test results in Section III. However, we have observed that this noise type only occurs when there is a dense layer of laser-absorbing/deflecting matter such as heavy rain, dense fog or intense smog, and the points almost always have extremely low intensity values since they are generated from partial reflection of the laser pulse passing through the matter. Therefore, in addition to the spatial autocorrelation, we also take low intensity values into consideration by adding an intensity weight multiplier to the spatial autocorrelation. The intensity weight multiplier can be formulated from any intensity statistical measure, such as the mean, the standard deviation, or any other metric that can distinguish the abnormally low intensity values. In this paper, we present one formulation of the intensity multiplier based on the average intensity. Let \(\gamma_{ref}\) be a reference intensity value which indicates the nominal LiDAR intensity during normal operation (clear weather, no hardware issues). The reference is a user-defined value which is typically associated with specific LiDAR models from different manufacturers. The reference value can be obtained through statistical analysis of LiDAR data, since the LiDAR intensity during normal operation is typically consistent, with only small fluctuations. Let \(\bar{\gamma}\) be the average intensity of the set of LiDAR points \(\mathcal{P}\).
The intensity weight multiplier \(K_{\gamma}\) is formulated as below: \[K_{\gamma}=\exp\left(k\cdot\frac{\max(0,\gamma_{ref}-\bar{\gamma})}{\gamma_{ref}}\right) \tag{5}\] where \(k\) is a constant scale factor. By definition, a low average intensity leads to a high weight multiplier. The multiplier value is defined as 1 for high average intensities. While some LiDAR hardware failures may lead to a high average intensity in some cases, as shown in Figure 1(c), most of the high average intensity cases are the result of retro-reflective targets, e.g., road signs, occurring at close range and occupying most of the LiDAR pointcloud. Figure 5 shows an example of the average intensity of the pointcloud from one test LiDAR passing a road sign. The average intensity ramps up as the road sign gets closer to the vehicle and produces more points. Once the road sign gets out of the LiDAR's FOV, the average intensity quickly drops back to its nominal value. These cases with high average intensities are irrelevant to the LiDAR data quality yet are very commonly seen, as vehicles pass road signs from time to time. Therefore, we intentionally disregard high average intensities in the definition of the multiplier. Overall, the LiDAR data quality metric is formulated as the multiplication of the intensity weight multiplier and the spatial autocorrelation, \(K_{\gamma}\cdot I\). ### _Implementation_ #### II-C1 LiDAR Image Grid It makes practical sense to calculate the spatial autocorrelation of the LiDAR points in a small local area instead of calculating it for all LiDAR points across the entire FOV at once, since typical objects and other physical features do not occupy the entire LiDAR FOV and the LiDAR points are bound to be scattered when viewed from a global FOV perspective. Furthermore, calculating in a small local area reduces the computational cost, as the spatial autocorrelation is of \(O(N^{2})\) with \(N\) being the size of the pointcloud under consideration. Therefore, in implementation, we first create a LiDAR image grid and calculate the spatial autocorrelation grid by grid. For each LiDAR data frame, we project all the LiDAR points onto an azimuth-elevation image, with each point containing its range and intensity information. The image is then divided into grid cells in both azimuth and elevation directions. An example of such an image grid is shown in Figure 6. Then, for each grid cell, we calculate the weighted spatial autocorrelation of all the distance values of the points in that cell following definitions (2) and (5). The overall quality metric score of the LiDAR data frame is then the sum of the weighted spatial autocorrelation over all grid cells averaged by the number of grid cells: \[s=\frac{1}{VH}\sum_{i=1}^{V}\sum_{j=1}^{H}K_{\gamma,ij}\cdot I_{ij} \tag{6}\] where \(i\) and \(j\) denote the indices of the grid cells, and \(V\) and \(H\) denote the number of grid cells in the elevation and azimuth directions, respectively. Fig. 4: Exemplary LiDAR Noise in Heavy Rain Fig. 5: Average Intensity of LiDAR Passing Road Sign Fig. 6: Example of LiDAR Image Grid #### II-C2 GPU Acceleration By definition, the time complexity to calculate the spatial autocorrelation is \(O(N^{2})\), where \(N\) is the number of points a LiDAR produces in one frame. Therefore, the time cost of calculating the weighted spatial autocorrelation can be too high to meet the real-time constraint, since modern automotive LiDARs can generate up to 100,000 points in a single frame.
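For reference, the per-cell computation behind Equations (2), (5) and (6), using the inverse squared angular distance weights of Equation (4), can be sketched in NumPy as follows; the array layouts, function names and default values are our assumptions, not the authors' implementation.

```python
import numpy as np

def cell_autocorrelation(r, theta, phi):
    """Spatial autocorrelation (Eq. 2) of the range values in one grid cell, using the
    inverse squared angular distance weights of Eq. (4); an isolated point scores -1."""
    n = r.size
    if n <= 1:
        return -1.0
    d = r - r.mean()
    ang = np.stack([theta, phi], axis=1)                  # (n, 2) angular coordinates
    dist2 = ((ang[:, None, :] - ang[None, :, :]) ** 2).sum(-1)
    w = np.where(dist2 > 0, 1.0 / (dist2 + 1e-12), 0.0)   # w_ij, zero on the diagonal
    num = (w * np.outer(d, d)).sum()
    return float(n / (w.sum() + 1e-12) * num / ((d ** 2).sum() + 1e-12))

def intensity_weight(gamma, gamma_ref, k=1.0):
    """Intensity multiplier K_gamma (Eq. 5); a low average intensity inflates the weight."""
    gamma_bar = float(np.mean(gamma))
    return float(np.exp(k * max(0.0, gamma_ref - gamma_bar) / gamma_ref))

def frame_quality_score(cells, gamma_ref, k=1.0):
    """Eq. (6): average of K_gamma * I over the non-empty grid cells of one frame.
    `cells` is an iterable of (ranges, thetas, phis, intensities) arrays, one per cell."""
    scores = [intensity_weight(g, gamma_ref, k) * cell_autocorrelation(r, t, p)
              for r, t, p, g in cells]
    return float(np.mean(scores)) if scores else 0.0
```

With the image grid of Section II-C1, each entry of `cells` would hold the points falling into one azimuth-elevation cell of a frame.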
Applying the implementation based on the LiDAR pointcloud image grid shown above, the computation can be done in parallel for each grid cell, since the spatial autocorrelation of each grid cell is independent of the other grid cells. As GPUs become a more and more viable resource on automotive platforms [17], in this section we propose a GPU-accelerated parallel implementation of the weighted spatial autocorrelation. Figure 7 demonstrates the GPU-accelerated parallel computation structure. For each LiDAR data frame, the pointcloud is first reorganized as an \(m\times n\) 2-D array before being sent to the GPU, where \(m\) and \(n\) are pre-defined parameters based on the LiDAR's FOV and resolution. Note that a LiDAR frame does not necessarily have detections at all entries, and the entries without a valid detection are set to have a range of 0, which will be excluded from the spatial autocorrelation calculation. Given the size of the grid \(V\) and \(H\) as previously defined, the GPU launches \(V\times H\) threads in parallel, and each thread computes the weighted spatial autocorrelation of the LiDAR points within the corresponding grid cell. After all threads finish the computation, the results are sent back to the CPU for the final calculation. ## III Results We collect test data with two different LiDAR models which have different specifications in almost all aspects, from the scanning mechanism to the laser spectrum. Table I lists some of the key parameters of the two LiDAR models. Both LiDARs calculate the distance measurement on a time-of-flight (TOF) basis. Several Navistar International LT625 trucks equipped with both LiDAR models are used for data collection on public roads. All LiDARs are mounted in an exposed manner, i.e., with no windshield or other secondary fascia in front of the LiDAR. Each truck is also equipped with multiple cameras oriented in various directions. The cameras are synchronized with the LiDARs, and the camera images are recorded in addition to the LiDAR data as reference. We have accumulated a total of over 230 unit-hours and 10,000 unit-miles of road data with a combination of conditions covering different aspects, including various times of day such as daytime, nighttime, dusk and dawn, various weather conditions such as clear, rainy and foggy days, and various surroundings such as highway, local road, test track and parking lot. Both LiDARs output pointcloud at a 10Hz rate, leading to a total of over 828k frames of pointcloud data. We calculate the pointcloud quality metric once every second, i.e., once every 10 frames of data. Since the scenarios that produce noise or anomalous LiDAR pointcloud, such as rain and fog, typically last for some time in a continuous manner, we are still able to capture the anomalous LiDAR pointcloud without losing much information while reducing the effort to go through the test dataset. A summary of the dataset is given in Table II. For the rest of this section, we select a few typical scenarios and analyze them in detail to showcase the performance of the proposed method, as well as provide an overview of the method's performance over the entire test dataset. ### _Scenario I: Electromagnetic Interference (EMI)_ In this scenario, one unit of LiDAR 2 with defective EMI shielding passes by a cellular signal tower, generating a large amount of low-intensity noise points.
Fig. 7: Computation of Spatial Autocorrelation Fig. 8: LiDAR Pointcloud with EMI As shown in Figure 8, the noise points are randomly and sparsely distributed over the LiDAR FOV and in general have low intensity values (marked gray together with some points from the road surface). Figure 9(a) shows the proposed LiDAR pointcloud quality metric over time. To demonstrate how the spatial autocorrelation and the intensity weight multiplier contribute to the overall metric respectively, the orange curve shows the spatial autocorrelation over time without the multiplication of the intensity weight, and the blue curve shows the overall quality metric score. Both curves show significant drops for about 10 s, which corresponds to the duration of the EMI effect. In this scenario, the spatial autocorrelation can clearly capture the false positives, and the intensity weight magnifies the gap between the normal and low-quality data frames since the noise points are mostly low-intensity. It should be noted that even for normal data frames, the intensity weight scales the spatial autocorrelation since there are always points with intensity values below the reference intensity. As a comparison, Figure 9(b) shows the averaged range variance of all the grid cells over the same data segment. While the EMI-affected pointcloud does lead to a peak in the range variance, there are other peaks when the pointcloud is normal, and the peak at the EMI effect is not significant enough to distinguish the pointcloud frame. Therefore, the range variance is not a suitable detector for anomalous pointcloud frames. In Figure 10 we showcase two frames of the LiDAR image grid during the scenario, where Figure 10(a) captures an instance without any EMI effect and Figure 10(b) is one exemplary image grid when the EMI effect is in place. The grid cells with red edges have low unweighted spatial autocorrelation values. As can be seen in Figure 10(a), individual grid cells may occasionally have low spatial autocorrelation values, especially at the edge of the FOV or when objects with small reflection surfaces, such as poles and vegetation, appear in the pointcloud. However, they do not lead to a low total quality metric score since the number of such grid cells is generally small. On the other hand, anomalies and noise points generate numerous grid cells with low spatial autocorrelation values, as shown in Figure 10(b). As a result, the overall quality metric of the data frame is low. ### _Scenario II: Rain_ In this scenario, we investigate a trip segment where both LiDAR models are exposed to heavy rain. Figure 11(a) and Figure 11(b) show the pointcloud of LiDAR 1 and 2 in the rain scenario, respectively. A reference camera image is shown in Figure 11(c). As demonstrated in the figures, LiDARs with a 905nm laser wavelength are more likely to see scattered noise points from rain droplets and water splashes; LiDARs with a 1550nm laser wavelength generate pointcloud in which both the point density and the intensity are significantly reduced due to signal absorption, as well as clustered noise points at close range. Figure 12(a) gives both the quality metric score and the unweighted spatial autocorrelation from LiDAR 1 during the test. Due to the scattered pattern of the noise seen by the 905nm LiDAR, even the unweighted spatial autocorrelation can distinguish the rain scenario well, since the scattered noise tends to generate low spatial autocorrelation scores.
Since the 905nm LiDAR's laser signal also gets attenuated in rain, leading to lower-than-normal intensities, including the intensity weight multiplier may further increase the gap between 'normal' pointcloud frames and anomalous pointcloud frames. Figure 12(b) shows the quality metric score and the unweighted spatial autocorrelation from LiDAR 2. It can be seen that the unweighted spatial autocorrelation in general cannot differentiate the pointcloud frames affected by rain, since the points, including noise points, can be well clustered. The intensity weight multiplier in this case effectively helps to characterize the rain data. Fig. 9: Pointcloud Quality Score of Scenario I Over Time Fig. 10: Two Frames of LiDAR Image Grid during Scenario I Fig. 11: Pointcloud and Image in Scenario II Fig. 12: Pointcloud Quality Score of Scenario II Over Time ### _Test Result Overview_ For the LiDAR data frames with low quality metric score outputs, we define a true positive result when there are notable noise/anomalous points in the pointcloud, and a false positive result when no notable noise/anomalous points are found. In this section, we pick the true and false positive cases by finding pointcloud frames whose quality metric score is less than -0.4. It should be noted that the quality metric score threshold is merely a bar to filter out the frames of interest from the large amount of test data, and is not meant to be a threshold for real application. Figures 13(a) and 13(b) show the breakdown of causes that lead to true positive and false positive results, respectively. Among over 82.8k frames of LiDAR pointcloud checked, the number of frames labeled as true positive is about 16k, mainly caused by rain, fog and dust. There are some true positive cases where the noise/anomaly source cannot be identified from the reference camera (unknown noise), which we believe are likely caused by sunlight interference or other reasons. On the other hand, there are about 250 frames in the test dataset labeled as false positive. Typical objects/scenarios that result in false positive pointcloud frames include close vehicles passing by the ego vehicle in adjacent lanes, power lines, and vegetation, which contribute over 95% of the false positive cases. Figure 14 provides examples of pointcloud of close vehicles, power lines and vegetation. Close vehicles are typically only partially detected by the LiDAR due to the LiDAR's limited FOV. The pointcloud from a partially detected vehicle can be random and scarce, depending on which part of the vehicle, and how much of it, is within the FOV. In addition, the LiDAR ranging precision for close vehicles is often degraded due to multi-reflection between the target and ego vehicle bodies, leaving scattered points over the space. The point intensities from close vehicles can also be low due to lasers hitting the smooth vehicle surfaces at large angles of incidence. All the reasons above make the points from close vehicles similar to noise/anomalous points based on our quality metric definition. Power lines and vegetation have small reflective areas and are therefore in general only partially detected, with low signal power reflected back to the LiDAR's laser detector. Therefore, points from power lines are sparsely distributed and have low intensity values, which are close to the characteristics of noise and anomalous points. Figure 15 shows the cumulative distribution of the quality metric scores of all true positive and false positive data frames.
While there are overlaps between the scores of the true positive and those of the false positive data frames, it is clear that the true positive cases in general have lower scores than the false positive cases. In our test dataset, over 75% of the false positive cases have a quality metric score higher than -0.5, while the percentage of the true positive cases that have a score higher than -0.5 is about 25%. Furthermore, while the scores of the true positive cases are distributed over a wide range, it should be noted that the value of the score is related to the severity of the noise/anomaly caused by the noise source, such as rain, fog or dust. Figure 16 demonstrates the pointcloud of various rain and fog scenarios and their corresponding quality metric scores. The noise points generated from rain and fog are manually marked as gray. Sometimes the pointcloud recorded in the presence of rain or fog can have a quality metric score as high as -0.4; however, such frames typically correspond to light rain or fog scenarios and the number of noise points is relatively small. Pointcloud with a large number of noise points and a quality metric score as low as -1.0 or even lower is typically associated with heavy rain or dense fog, which may harm driving safety. Fig. 13: Distribution of Low Score Causes Fig. 14: Pointcloud of Objects Causing False Positives Fig. 15: Cumulative Distribution of True/False Positive Case Scores Fig. 16: Pointcloud of True Positive Cases with Different Quality Scores Therefore, for actual application of the proposed method, one may choose a score threshold which best suits their use case. For instance, to apply the
There is a wide range of potential applications of the work in this paper, such as monitoring the operating condition of an autonomous driving system in real time, or efficiently selecting the data collected in rain/fog from an enormous amount of test data for further analysis.
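To make the frame-selection bar and the score-threshold trade-off discussed above concrete, the following is a minimal sketch in Python. It assumes per-frame quality metric scores have already been computed by the proposed metric and that manual true/false-positive labels exist for the flagged frames; the function names and the synthetic score arrays are illustrative only and are not part of the original evaluation.

```python
import numpy as np

def select_low_score_frames(scores, threshold=-0.4):
    """Return indices of frames whose quality metric score falls below the
    selection bar used to pick frames of interest (-0.4 in this paper)."""
    return np.flatnonzero(np.asarray(scores, dtype=float) < threshold)

def threshold_tradeoff(tp_scores, fp_scores, thresholds):
    """For each candidate threshold, report the fraction of true-positive
    frames kept (score <= threshold) and the fraction of false-positive
    frames filtered out (score > threshold), mirroring Figure 17."""
    tp = np.asarray(tp_scores, dtype=float)
    fp = np.asarray(fp_scores, dtype=float)
    kept = np.array([np.mean(tp <= t) for t in thresholds])
    filtered = np.array([np.mean(fp > t) for t in thresholds])
    return kept, filtered

# Hypothetical usage with made-up score arrays.
rng = np.random.default_rng(0)
tp_scores = rng.uniform(-2.0, -0.4, size=16_000)   # frames with real noise
fp_scores = rng.uniform(-0.8, -0.4, size=250)      # benign frames
kept, filtered = threshold_tradeoff(
    tp_scores, fp_scores, thresholds=np.linspace(-1.5, -0.4, 12))
```

In practice one would sweep the threshold over the observed score range and pick the operating point that matches the intended application, as discussed above.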
LiDAR sensors play an important role in the perception stack of modern autonomous driving systems. Under adverse weather conditions such as rain, fog, and dust, or due to rare LiDAR hardware failures, a LiDAR may produce pointclouds with anomalous patterns such as scattered noise points and points with uncommon intensity values. In this paper, we propose a novel approach that analyzes pointcloud characteristics to detect pointcloud anomalies. Specifically, we develop a pointcloud quality metric based on the spatial and intensity distributions of the LiDAR points to characterize the noise level of the pointcloud; unlike learning-based methods, it uses purely mathematical analysis. The method is therefore scalable and can be used online to monitor anomalies in LiDAR data and improve autonomy safety, or offline for in-depth investigation of LiDAR behavior over large amounts of data. The proposed approach, Li
2301.13396
Study of Optical Networks, 5G, Artificial Intelligence and Their Applications
This paper discusses the application of artificial intelligence (AI) technology in optical communication networks and 5G. It primarily introduces representative applications of AI technology and potential risks of AI technology failure caused by the openness of optical communication networks, and proposes some coping strategies, mainly including modeling AI systems through modularization and miniaturization, combining with traditional classical network modeling and planning methods, and improving the effectiveness and interpretability of AI technology. At the same time, it proposes response strategies based on network protection for the possible failure and attack of AI technology.
Quanda Zhang, Qi Zhang
2023-01-31T04:06:18
http://arxiv.org/abs/2301.13396v1
# Study of Optical Networks, 5G, Artificial Intelligence and Their Applications ###### Abstract This paper discusses the application of artificial intelligence (AI) technology in optical communication networks and 5G. It primarily introduces representative applications of AI technology and potential risks of AI technology failure caused by the openness of optical communication networks, and proposes some coping strategies, mainly including modeling AI systems through modularization and miniaturization, combining with traditional classical network modeling and planning methods, and improving the effectiveness and interpretability of AI technology. At the same time, it proposes response strategies based on network protection for the possible failure and attack of AI technology. AI, 5G, Optical Networks ## I Introduction Artificial intelligence (AI) technology has been used in many fields since very early on, but for many years it did not attract much attention. Only after AlphaGo defeated Chinese and Korean Go players did AI become a research hotspot, and researchers began to apply AI technology in different fields, including optical communication networks. In the past two years, at the Optical Fiber Communication Conference (OFC) in the United States and the European Conference on Optical Communication (ECOC), at least 16 conference topics focused on AI or machine learning (ML) technology. This paper treats AI and ML technologies as the same class of technologies; moreover, although AI technology covers a wide range of methods, the AI technology referred to in this article is mainly neural network technology. AI technology has received widespread attention mainly for the following two reasons. First, AI technology is relatively easy to get started with and use. It models the system in a black-box manner: by learning from a large number of samples, the black box connects its neurons by itself and assigns weights to the connections, without requiring the user to understand why the neurons are connected the way they are or why they carry the current weights. Users only need to provide enough learning samples and increase the number of neurons and hidden layers to improve the prediction accuracy of AI technology. Second, after the AlphaGo event, AI technology has almost been deified; nearly everyone has heard of "artificial intelligence", and in academic circles papers labeled with AI also seem to be easier to publish. This leads to a current phenomenon: for almost all problems, regardless of whether it is suitable or not, AI technology is used for modeling and solving. AI technology is very successful in solving some problems, such as Go and the image and speech recognition scenarios mentioned above, but the successful solution of problems in a certain field does not make AI a "universal method". This paper discusses the application of AI technology in optical communication networks, including the applicability of AI technology in such networks, and raises the potential risks of using AI technology together with some coping strategies.
## II AI Applications in Optical Networks AI technology has been widely used in the literature of optical communication networks [1, 2, 3, 4, 5, 6, 7], and a great deal of research can be found in this area. This paper introduces several representative applications of AI technology in optical communication networks. 1) At the receiving end, digital signal processing combined with AI technology can effectively improve the detection sensitivity of optical signals, improve the performance of the optical fiber transmission system, and improve the spectrum utilization efficiency of the network [7, 8]. 2) An optical network contains a large number of end-to-end optical channels. By taking the relevant parameters of these channels (including transmission rate, modulation format, number of optical fiber links, number of optical amplifiers and their gain, etc.) as input and the signal transmission quality (QoT, quality of transmission) detected at the receiving end as output, and through extensive learning, the QoT of different end-to-end optical channels in the network can be predicted. The QoT is often expressed as the optical signal-to-noise ratio (OSNR) of the channel, and its accurate prediction can reduce the OSNR margin configured for the optical channel, thereby improving the spectrum utilization efficiency of the network [9, 10]. 3) By continuously learning from fault events in the optical network, with faults and their causes as input and output, the causes of faults can be accurately analyzed and diagnosed, and early warning of future failures can be provided [3]. 4) Combined with network security requirements, AI technology can also be used for early warning and identification of network attacks at the optical layer [11]. ## III AI Applications in 5G Communication AI and ML are being utilized in 5G and mmWave communication [12, 13, 14, 15, 16, 17, 18] to enhance performance, reduce costs, and boost efficiency. Applications of AI in this field include network optimization, predictive maintenance, self-organizing networks, traffic prediction, security, resource allocation, network slicing, edge computing, interference management, and spectrum management. These technologies are still in the early stages of development but have the potential to significantly improve the performance, efficiency, and cost-effectiveness of these networks. As AI technologies continue to evolve and become more widely adopted, they will likely play an increasingly important role in the development and deployment of 5G and mmWave communication systems. However, it is also crucial to consider potential risks and challenges such as ensuring privacy, security, and ethical considerations. Applying the same "black-box" AI approach to different application scenarios, however, leads to slack in method innovation and in the analysis of the underlying mechanisms. A very typical example is as follows. Because AI technologies such as deep learning can effectively recognize some image patterns, some researchers have applied this technology to the identification of lesions in different parts of the human body [19, 20]. Based on the same method and process, images of different body parts are fed in one after another, which can produce a large number of so-called "research results" and theses.
Obviously, from the perspective of student training and scientific research, the research skills and professionalism that students acquire in such projects are very limited: the actual work consists only of collecting relevant image data, writing a small amount of Python code, and finally handing the training task over to the graphics processing unit (GPU) to complete, without any in-depth thinking about the methods and mechanisms of the specific research questions. This is clearly not conducive to effective innovation, and it remains impossible (in fact, currently impossible) to grasp what is going on inside the black box.
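As a concrete illustration of the QoT/OSNR prediction application listed in Section II, the following is a minimal sketch, assuming a tabular dataset of per-channel parameters and measured OSNR values; the network architecture, feature encoding, and training data here are our own assumptions, since the paper only describes the inputs and outputs of such a model.

```python
import torch
import torch.nn as nn

# Each optical channel is described by a small feature vector, e.g.
# [transmission rate, modulation-format index, number of fiber links,
#  number of amplifiers, total amplifier gain]; the target is the OSNR
# measured at the receiving end.
class QoTPredictor(nn.Module):
    def __init__(self, n_features=5, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.net(x)

model = QoTPredictor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Dummy batch standing in for historical channel records and measured OSNR.
features = torch.randn(128, 5)
osnr_db = torch.randn(128, 1)
for _ in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(features), osnr_db)
    loss.backward()
    optimizer.step()
```

A deployed model would of course be trained on historical channel records rather than random tensors, and the predicted OSNR could then be used to reduce the configured OSNR margin as described above.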
This paper discusses the application of artificial intelligence (AI) technology to optical communication networks and 5G. It introduces representative applications of AI technology and the possibility of AI technology failure caused by the openness of optical communication networks, and proposes coping strategies for AI technology. The main strategies include building AI systems in a modular and miniaturized way, combining them with traditional network modeling and planning methods, and improving the effectiveness and interpretability of AI technology. At the same time, it also proposes network-protection-based countermeasures against potential failures of and attacks on AI technology.
2308.00193
C-DARL: Contrastive diffusion adversarial representation learning for label-free blood vessel segmentation
Blood vessel segmentation in medical imaging is one of the essential steps for vascular disease diagnosis and interventional planning in a broad spectrum of clinical scenarios in image-based medicine and interventional medicine. Unfortunately, manual annotation of the vessel masks is challenging and resource-intensive due to subtle branches and complex structures. To overcome this issue, this paper presents a self-supervised vessel segmentation method, dubbed the contrastive diffusion adversarial representation learning (C-DARL) model. Our model is composed of a diffusion module and a generation module that learns the distribution of multi-domain blood vessel data by generating synthetic vessel images from diffusion latent. Moreover, we employ contrastive learning through a mask-based contrastive loss so that the model can learn more realistic vessel representations. To validate the efficacy, C-DARL is trained using various vessel datasets, including coronary angiograms, abdominal digital subtraction angiograms, and retinal imaging. Experimental results confirm that our model achieves performance improvement over baseline methods with noise robustness, suggesting the effectiveness of C-DARL for vessel segmentation.
Boah Kim, Yujin Oh, Bradford J. Wood, Ronald M. Summers, Jong Chul Ye
2023-07-31T23:09:01
http://arxiv.org/abs/2308.00193v1
C-DARL: Contrastive diffusion adversarial representation learning for label-free blood vessel segmentation ###### Abstract Blood vessel segmentation in medical imaging is one of the essential steps for vascular disease diagnosis and interventional planning in a broad spectrum of clinical scenarios in image-based medicine and interventional medicine. Unfortunately, manual annotation of the vessel masks is challenging and resource-intensive due to subtle branches and complex structures. To overcome this issue, this paper presents a self-supervised vessel segmentation method, dubbed the contrastive diffusion adversarial representation learning (C-DARL) model. Our model is composed of a diffusion module and a generation module that learns the distribution of multi-domain blood vessel data by generating synthetic vessel images from diffusion latent. Moreover, we employ contrastive learning through a mask-based contrastive loss so that the model can learn more realistic vessel representations. To validate the efficacy, C-DARL is trained using various vessel datasets, including coronary angiograms, abdominal digital subtraction angiograms, and retinal imaging. Experimental results confirm that our model achieves performance improvement over baseline methods with noise robustness, suggesting the effectiveness of C-DARL for vessel segmentation. 1 Footnote 1: This paper extends the work (Kim et al., 2022) presented at the Eleventh International Conference on Learning Representations (ICLR) 2023. ## 1 Introduction Angiography is an invasive or non-invasive exam to visualize blood vessels towards diagnosis or treatment of a wide variety of diseases that impact vascular structures, or where vascular maps provide the roadmap to delivery of therapeutics. For example, to plan therapy and accurately deliver drugs and devices in minimally invasive image-guided therapies, identification, characterization, and quantification of the blood vessels and their branches is a foundational element towards blood flow, endothelial pathology, landmarks, reference points and roadmaps towards tumors or target anatomy (Dehkordi et al., 2011). As manual annotation of vessel masks is time-consuming due to tiny and low-contrast vessel branches (see Figure 1), automatic vessel segmentation methods have been extensively studied to enhance efficiencies and to facilitate large data for training (Delibasis et al., 2010; Jiang et al., 2019; Wu et al., 2019). Classical rule-based vessel segmentation methods utilize various features of vessel images such as geometric models, ordered region growing, and vessel intensity distributions (Lesage et al., 2009; Taghizadeh Dehkordi et al., 2014; Zhao et al., 2019). However, these approaches require complicated preprocessing and manual refinement, posing resource barriers and challenges to practical clinical deployment.
On the other hand, recent learning-based techniques (Nasr-Esfahani et al., 2016; Fan et al., 2018; Wu et al., 2019), which segment blood vessels through neural networks, can generate outputs in real-time, but they require the supervision of large amounts of annotated data. Recently, self-supervised learning methods, which do not require ground-truth vessel masks when training networks, have been extensively studied. For example, Ma et al. (2021) presented an adversarial vessel segmentation method using fractal masks. Kim et al. (2022) proposes a diffusion adversarial representation learning model (DARL) that combines the diffusion model and the adversarial model. Specifically, the DARL model learns the distribution of background images using the denoising diffusion probabilistic model (DDPM) (Ho et al., 2020) so that the vessel structures can be easily discerned in the latent. Accordingly, a subsequent generation module can extract foreground vessel regions. Although this method provides a high-quality vessel segmentation map through single-step inference while alleviating the network training complexity compared to Ma et al. (2021), one of the main limitations of DARL is that it uses both the pre-contrast background and angiography images for network training, which may limit its use in various clinical applications that do not normally provide similar background images (e.g., retinal fundus images). Future training with digital subtraction and raw images might reduce this training variability. This paper extends the concept of DARL and presents a label-free vessel segmentation method that can utilize a variety of blood vessel images in training the model, enabling its generalizability as a vessel segmenter in a variety of clinical applications. Specifically, we design a model that basically follows the overall structure of DARL consisting of the diffusion and generation modules, with customizations. One of the key improvements over DARL comes from our observation that we can still obtain a semantically meaningful segmentation map by omitting the background image input path. In addition, to effectively learn vessel representation, we employ contrastive learning, namely contrastive-DARL (C-DARL), for further improvement (Wu et al., 2018; Chen et al., 2020; He et al., 2020). While DARL is trained to generate vessel masks via adversarial learning using the fractal masks, the input fractal masks and the real blood vessels have intrinsically different shapes and sizes. Accordingly, the contrastive loss function is designed to dissociate the estimated masks of real vessel images and the fractal masks while maximizing the similarity between the estimated masks of fractal-based synthetic vessel images and the fractal. This is achieved by leveraging contrastive unpaired translation (CUT) (Park et al., 2020) that computes the similarity of source and target in a patch-wise manner. Thanks to this simplification of the data preparation without requiring background images, our framework can be trained using multiple domains of two-dimensional (2D) blood vessel representations, such as X-ray angiography or retinal imaging. Experimental results show that our C-DARL model outperforms the comparative methods in vessel segmentation of various datasets. Also, when comparing our model to the DARL, our method achieves consistent improvement both on internal and external test data with respect to the training data.
As the C-DARL provides vessel segmentation maps in real-time (0.176 seconds per frame), this holds great promise as a platform in clinical practices, upon validation of the integrity of the resulting representations. In summary, the contributions of this paper are as follows: * We introduce a label-free vessel segmentation method that can leverage multi-domain blood vessel images without requiring background images and provide vessel masks for those various datasets. * In contrast to the DARL, our proposed model applies contrastive learning in generating vessel segmentation maps, allowing the network to intensively learn vessel representations. * Extensive experimental results demonstrate that the proposed C-DARL is robust across diverse blood vessel data in a variety of clinical applications, and has superior performance with more efficiency than the existing self-supervised methods. ## 2 Related works ### Diffusion model The DDPM (Ho et al., 2020) generates images by converting the Gaussian noise distribution into the data distribution through the Markov chain process. Specifically, for the forward diffusion, the data is corrupted by the noise as follows: \[q(x_{t}|x_{t-1})=\mathcal{N}(x_{t};\sqrt{1-\beta_{t}}x_{t-1},\beta_{t}I), \tag{1}\] where \(\beta_{t}\) is a scalar variance in the range of \([0,1]\). Then, through Markov chain, a noisy image \(x_{t}\) for the data \(x_{0}\) can be computed by: \[q(x_{t}|x_{0})=\mathcal{N}(x_{t};\sqrt{\alpha_{t}}x_{0},(1-\alpha_{t})I), \tag{2}\] where \(\alpha_{t}=\Pi_{s=1}^{t}(1-\beta_{s})\). For this corrupted image, the DDPM learns the reverse diffusion: \[p_{\theta}(x_{t-1}|x_{t})=\mathcal{N}(x_{t-1};\mathbf{\mu}_{\theta}(x_{t},t), \sigma_{t}^{2}I), \tag{3}\] where \(\sigma_{t}\) is a scalar variance and \(\mathbf{\mu}_{\theta}\) is a learnt mean computed by the network \(G_{\epsilon}\): \[\mathbf{\mu}_{\theta}(x_{t},t)=\frac{1}{\sqrt{1-\beta_{t}}}\left(x_{t}-\frac{\beta _{t}}{\sqrt{1-\alpha_{t}}}G_{\epsilon}(x_{t},t)\right). \tag{4}\] Accordingly, one can obtain images from the Gaussian noise using the DDPM through the reverse diffusion process as follows: \[x_{t-1}=\mathbf{\mu}_{\theta}(x_{t},t)+\sigma_{t}z, \tag{5}\] where \(z\sim\mathcal{N}(0,I)\). This DDPM has been successfully adapted to various computer vision tasks including semantic image synthesis (Wang et al., 2022; Huang et al., 2023). Moreover, the potentials of learned representations from DDPM have been revealed through semantic segmentation, which effectively captures semantic features to improve segmentation performance (Baranchuk et al., 2021; Rahman et al., 2023; Brempong et al., 2022). Figure 1: Examples of various blood vessel image domains. Red boxes show magnified vessel structures. ### Self-supervised vessel segmentation using DARL Semantic segmentation problems have been traditionally addressed using supervised learning. Unfortunately, supervised learning performance is largely dependent on a huge amount of labels and associated costs, and limited and skilled resources (Fan et al., 2019; Yang et al., 2019). Recently, self-supervised learning (SSL) methods have been actively investigated for mitigating this issue (Oquab et al., 2023; Melas-Kyriazi et al., 2022). Nonetheless, a naive application of SSL is challenging to accurately extract vessel structure in various blood vessel images, which contain tiny vessel branches within highly interfering background signals. To address this, SSL methods tailored for the vessel segmentation task have been recently developed. 
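As a quick reference for the DDPM machinery of Eqs. (1)-(5), which the diffusion module of C-DARL builds on, the following is a minimal sketch; the beta schedule, tensor shapes, and the `model(x_t, t)` interface are illustrative assumptions rather than the paper's implementation.

```python
import torch

# Linear beta schedule; alpha_bar[t] plays the role of alpha_t = prod_s (1 - beta_s)
# in Eq. (2). The schedule values here are placeholders.
betas = torch.linspace(1e-4, 2e-2, 1000)
alpha_bar = torch.cumprod(1.0 - betas, dim=0)

def forward_diffusion(x0, t):
    """Eq. (2): sample x_t ~ q(x_t | x_0) in closed form."""
    eps = torch.randn_like(x0)
    x_t = alpha_bar[t].sqrt() * x0 + (1.0 - alpha_bar[t]).sqrt() * eps
    return x_t, eps

def reverse_step(model, x_t, t):
    """Eqs. (3)-(5): one ancestral sampling step x_t -> x_{t-1}, where the
    network predicts the noise added during the forward process."""
    eps_hat = model(x_t, t)
    mu = (x_t - betas[t] / (1.0 - alpha_bar[t]).sqrt() * eps_hat) \
        / (1.0 - betas[t]).sqrt()                       # Eq. (4)
    sigma = betas[t].sqrt()                             # one common choice of sigma_t
    z = torch.randn_like(x_t) if t > 0 else torch.zeros_like(x_t)
    return mu + sigma * z                               # Eq. (5)

# Hypothetical usage on a batch of grayscale vessel images.
x0 = torch.randn(2, 1, 256, 256)
x_t, eps = forward_diffusion(x0, t=500)
```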
In particular, Kim et al. (2022) proposes a diffusion-based adversarial vessel segmentation method (DARL) that learns the background signal using the diffusion module, which effectively improves vessel segmentation performance with noise robustness. Specifically, the DARL model estimates vessel segmentation maps through the guidance of semantic image synthesis that incorporates the given pre-contrast background image and fractal vessel masks, as shown in Fig. 2(a). This is motivated by Ma et al. (2021) which synthesizes vessel images by adding the fractal masks to the background images and lets the network estimate vessel masks using the information of fractal-based synthetic vessel images. However, the DARL model has limitations in that 1) it still utilizes the background images as input, and 2) the vessel segmentation is learned through the adversarial loss by regarding the fractal masks as real and the network output as fake even though the fractals are different from the ground-truth vessel masks. ## 3 Method ### Motivation To deal with the aforementioned issues, firstly, we propose a model that eliminates the need for background images, alleviating the constraint of using the angiography dataset, as shown in Fig. 2(b). This is based on the empirical observation that the diffusion module in DARL is effective in learning the sparsity of blood vessel structure but also in estimating the noise that captures the information of the given data distribution and enables diverse image synthesis. Here, the diffusion model can estimate the noise by nulling out the learned image distribution regardless of the existence of vessels in the training data. Accordingly, as long as the vessel structures are sparsely distributed, the vessels can be regarded as outliers and represented in the diffusion module output when generating vessel masks. Therefore, the diffusion module can be trained using vessel images in various domains where vessel-free backgrounds are difficult to obtain. Moreover, to further refine the segmentation accuracy, in the vessel segmentation path, we replace an adversarial loss of DARL with a contrastive loss (Chen et al., 2020; Zhong et al., 2021; Hu et al., 2021; Oh et al., 2022). Specifically, by reflecting the fact that the fractal masks and real vessel masks have different features, we present a mask-based contrastive loss that utilizes the fractals and the estimated vessel masks as negative pairs, while using the fractals and the cyclically estimated segmentation masks as positive pairs. In particular, inspired by contrastive unsupervised translation (CUT) (Park et al., 2020), which maximizes the mutual information of the patches at the same locations in the source and the target images, we employ the CUT-loss to maximize the structural patch similarity between the fractal mask (as a query) and the cyclically synthesized vessel mask (as a positive), and to disassociate the query signals from the segmented mask of the real vessel image (as a negative), allowing the model to learn vessel structure more effectively with no use of any labeled dataset. ### Overall architecture The overall learning flow of C-DARL is illustrated in Fig. 3. The C-DARL model has a generator \(G\) consisting of a diffusion module \(G_{\epsilon}\) and a generation module \(G_{s}\). 
When the diffusion module based on the DDPM estimates a latent feature by learning data distribution for various noisy levels, the generation module generates a vessel mask \(\hat{m}^{s}\) and a synthetic vessel image \(\hat{x}\). Here, the vessel image is generated using the latent of the diffusion module and the fractal mask \(m^{f}\). Then, in the path of vessel segmentation, contrastive learning is applied by using the fractal mask \(m^{f}\) and the estimated vessel masks (\(\hat{m}^{s},\hat{m}^{f}\)), where \(\hat{m}^{f}\) is generated by feeding the synthetic vessel image \(\hat{x}\) into the generator through the cycle pathway. On the other hand, in the path of semantic image synthesis, the synthetic vessel image \(\hat{x}\) and real vessel image \(x\) are fed into a discriminator \(D\) for adversarial learning. As described in Fig. 3, we denote the segmentation path as a _contrastive_ path and the image synthesis path as an _adversarial_ path. In the following, we describe the proposed method in detail. Figure 2: Training pipelines of (a) the DARL model (Kim et al., 2022) and (b) our proposed C-DARL model. \(x^{s}\) and \(x^{b}\) are a real vessel image and a background image, respectively, and \(m^{f}\) denotes a fractal mask. \(\hat{m}^{s}\) is an estimated vessel segmentation mask, and \(\hat{x}^{s}\) is a synthetic vessel image. For the C-DARL, the real vessel image in one of the various domains (e.g. \([v_{a},v_{b},v_{c}]\)) can be fed into the model. #### 3.2.1 Model input Let \(\mathbf{X}=\bigcup_{k=1}^{k=K}X^{k}\) be a group of given blood vessel image datasets with \(K\) different domains where \(X^{k}\) has one or more images of the \(k\)-th domain, i.e. \(X^{k}=\{x^{k_{1}},x^{k_{2}},...,x^{k_{N}}\}\) with \(k_{N}\geq 1\). For each iteration, our model randomly samples one of the multi-domain images \(x_{0}^{k_{s}}\) among the given datasets. Then, for the model input, the image is corrupted by the forward diffusion (2): \[x_{t}=\sqrt{\alpha_{t}}x_{0}^{k_{s}}+\sqrt{1-\alpha_{t}}\epsilon, \tag{6}\] where \(t\) is the noisy level in range of \([0,T]\) and \(\epsilon\sim\mathcal{N}(0,I)\). Using this perturbed image, the diffusion module \(G_{\epsilon}\) learns the vessel image distribution and provides latent information to the generation module \(G_{s}\). #### 3.2.2 Switchable SPADE layers When the diffusion module estimates the latent features, given the latent features concatenated with the perturbed image \(x_{t}\) in channel dimension, the generation module generates vessel segmentation masks and synthetic vessel images simultaneously. This can be performed by the switchable SPADE (S-SPADE) layers proposed in the DARL model. Specifically, as shown in Fig. 4, the generation module consists of \(N\) residual blocks with S-SPADE layers that perform different normalization operations depending on whether or not the semantic layout is given to the model. 
For the feature maps \(e\) from a shared encoder in the generation module, the residual blocks synthesize semantic images through the spatially adaptive normalization (SPADE) (Park et al., 2019) if the semantic mask \(m\) is given, whereas they generate segmentation masks through the instance normalization (InsNorm) (Ulyanov et al., 2016) otherwise: \[e=\begin{cases}\text{SPADE}(e,m),&\text{if semantic mask $m$ is given,}\\ \text{InsNorm}(e),&\text{otherwise.}\end{cases} \tag{7}\] Thus, the contrastive path activates the InsNorm and estimates the vessel masks, and at the same time, the adversarial path, which takes the fractal masks \(m^{f}\), activates the SPADE layer and generates the synthetic images. Also, by sharing all network parameters except for the S-SPADE layers in the two paths of the generation module, our model can synergistically learn vessel representation through semantic image synthesis. This also ensures that the estimated vessel structure from the contrastive path becomes a semantically meaningful negative pair with respect to the synthetic fractal mask. Therefore, although the contrastive path does not need to learn the reverse process from the pure Gaussian noise in that this path estimates the vessel masks of the input image, the corrupted input image is given also to the contrastive path to share the same generation module with SPADE layer. Instead, by setting the maximum noisy level \(T\) to be smaller than that for the adversarial path, we design the model to estimate vessel masks for noisy input images. ### Loss formulation Fig. 5 shows our loss function with training flow. The proposed C-DARL model has three paths in the training phase: (A) the contrastive path estimates the vessel segmentation mask \(\hat{m}^{v}\) that is used to compute a mask-based contrastive loss \(\mathcal{L}_{cont}\), (B) the adversarial path learns data distribution through a diffusion loss \(\mathcal{L}_{diff}\) and generates the vessel image \(\hat{x}\) based on the fractal layout \(m^{f}\) via an adversarial loss \(\mathcal{L}_{adv}\), and (C) the cyclic contrastive path feeds the fractal-based synthetic image into the model and segments the fractal region \(\hat{m}^{f}\) that is utilized in a cycle loss \(\mathcal{L}_{cyc}\) and the contrastive loss \(\mathcal{L}_{cont}\). A detailed description of each loss function is as follows.

Figure 3: Overall training framework of the proposed contrastive diffusion adversarial representation learning model (C-DARL).
Figure 4: Detailed architecture of the generation module \(G_{s}\) that has a shared encoder and \(N\) residual blocks composed of switchable SPADE layers, ReLU, and convolution (Conv) layers. If a semantic layout \(m^{f}\) is given to the generation module, the SPADE layer is activated. Otherwise, the instance normalization (InsNorm) is activated.

#### 3.3.1 Diffusion loss The diffusion loss aims to learn input data distribution through the diffusion module \(G_{\epsilon}\), yielding latent features including meaningful information about the input. For a given vessel image \(x_{0}\), a noise \(\epsilon\sim\mathcal{N}(0,I)\), and a time step \(t\) uniformly sampled from \([0,T]\), by following the DDPM training scheme (Ho et al., 2020), the loss can be represented as: \[\mathcal{L}_{diff}=\mathbb{E}_{x_{0},\epsilon,t}\left[\|G_{\epsilon}(\sqrt{\alpha_{t}}x_{0}+\sqrt{(1-\alpha_{t})}\epsilon,t)-\epsilon\|^{2}\right]
\tag{8}\] As aforementioned, since the input image is perturbed within the full range of noisy levels in the adversarial path compared to the contrastive path, the diffusion loss is calculated only in the adversarial path. #### 3.3.2 Mask-based contrastive loss To address the limitation of not having real vessel masks in our label-free training scheme, we employ a patch-based contrastive learning objective (Park et al., 2020) that maximizes the mutual information of corresponding patches between two images, which utilizes a noise contrastive estimation method (Oord et al., 2018). Here, instead of using the image features from the network encoder, we design a _mask-based_ contrastive loss that leverages the segmentation masks. Specifically, based on the observation that the real blood vessel and the fractal masks have different features of shapes and sizes, for the fractal \(m^{f}\) as a query, we refer to the estimated vessel mask \(\hat{m}^{v}\) in the contrastive path as negatives (See Fig. 3). In contrast, since the cyclic contrastive path estimates the fractal embedded in the synthetic vessel image, we regard the segmented fractal mask \(\hat{m}^{f}\) as a positive of the query. Then, the model is trained for corresponding patches of \(m^{f}\) and \(\hat{m}^{f}\) to be more strongly associated than those of \(m^{f}\) and \(\hat{m}^{v}\) using our contrastive loss. More specifically, for each segmentation mask \(m\in\mathbb{R}^{1\times M\hat{m}}\), we obtain \(R\) different-sized tensors, in which each tensor is obtained by folding the mask as \(m_{r}\in\mathbb{R}^{P,*}\times\frac{R}{\hat{m}^{f}}\) where \(p_{r}\) is a division factor for \(r\in\{1,2,...,R\}\). Then, the tensor is fed into a light-weight network \(H_{r}\) composed of two fully-connected layers, generating a stack of the features \(\{h_{r}(m)\}_{R}=\{H_{1}(m_{1}),H_{2}(m_{2}),...,H_{R}(m_{R})\}\) where \(h_{r}(m)\in\mathbb{R}^{C,\times}\frac{R}{\hat{m}^{f}}\) is the feature with \(C_{r}\) channels for the \(r\)-th tensor. Through this process, we can get the feature stacks of \(\{h_{r}(m^{f})\}_{R}\), \(\{h_{r}(\hat{m}^{f})\}_{R}\), and \(\{h_{r}(\hat{m}^{v})\}_{R}\) for the masks of \(m^{f}\), \(\hat{m}^{f}\), and \(\hat{m}^{f}\), respectively. By randomly selecting \(Q_{r}\) spatial locations in the range of \([0,\frac{H}{p},\frac{W}{p}]\), the contrastive loss is computed by: \[\mathcal{L}_{cont}=\mathbb{E}_{m^{f},\hat{m}^{f},\hat{m}^{f}}\sum_{r=1}^{R} \sum_{q=1}^{Q_{r}}\ell_{Mt}\left(h_{r}^{q}(m^{f}),h_{r}^{q}(\hat{m}^{f}),h_{r} ^{q}(\hat{m}^{r})\right), \tag{9}\] where \(\ell_{Mt}\) is the mutual information using the cross-entropy loss as: \[\ell_{Mt}(u,u^{+},u^{-})=-log\left[\frac{\exp(u\cdot u^{+}/\tau)}{\exp(u \cdot u^{+}/\tau)+\sum_{i}\exp(u\cdot u_{i}^{-}/\tau)}\right], \tag{10}\] where \(\tau\) is a temperature scaling the distances between the query and other positives/negatives. This calculates the probability that the positive patches are selected over the negative patches at specific locations, enabling the model to generate vessel masks not very similar to the fractal masks while learning the mask features. In our experiments, we set \(R=3\), and \((p_{1},p_{2},p_{3})=(4,8,16)\). #### 3.3.3 Adversarial loss In the adversarial path, our model generates the synthetic vessel image using the semantic fractal mask and the noisy features generated by the diffusion module. 
Since real vessel images exist in the training phase, we apply adversarial learning to the model and enable the generator \(G\) to output realistic images while fooling the discriminator \(D\) that distinguishes real and fake images. By employing LSGAN (Mao et al., 2017), the adversarial loss of the generator can be represented as: \[\mathcal{L}_{adv}^{G}=\mathbb{E}_{x_{t},m^{f}}\left[(D(G(x_{t},m^{f}))-1)^{2} \right], \tag{11}\] and the adversarial loss of the discriminator can be written as: \[\mathcal{L}_{adv}^{D}=\frac{1}{2}\mathbb{E}_{x_{0}}\left[(D(x_{0})-1)^{2} \right]+\frac{1}{2}\mathbb{E}_{x_{t},m^{f}}\left[(D(G(x_{t},m^{f}))^{2}\right]. \tag{12}\] Through this loss function, the model is trained to generate the synthetic vessel image \(\hat{x}\) that is indistinguishable from real data \(x\) by the discriminator. #### 3.3.4 Cycle loss Recall that when the fractal-based synthetic vessel image is generated, in the cyclic contrastive path, the proposed model takes this synthetic image and generates the fractal segmentation mask \(\hat{m}^{f}\). Here, in addition to using the mask \(\hat{m}^{f}\) in the contrastive loss, to learn semantic information about blood vessels in the vessel medical data, we utilize the estimated fractal Figure 5: Diagram of loss formulation to train our proposed C-DARL. _Car_ denotes the channel-wise concatenation of two inputs. mask in the cycle loss that computes a distance between the real and fake images. As we handle the mask as an image, the cycle loss can be formulated using \(l_{1}\) norm: \[\mathcal{L}_{cyc}=\mathbb{E}_{s_{u},m^{\prime}}\left[\|G(G(x_{t},m^{\prime}),0)-m ^{\prime}\|_{1}\right]. \tag{13}\] Accordingly, the model can capture vessel information and estimate the mask for vessel-like regions even though there is no ground-truth vessel mask of real data. #### 3.3.5 Full loss formulation Using the diffusion loss, the contrastive loss, the adversarial loss, and the cycle loss, our C-DARL model is trained in an end-to-end learning manner. Hence, the complete loss function of the generator can be defined by: \[loss=\mathcal{L}_{diff}+\lambda_{a}\mathcal{L}_{cont}+\lambda_{\beta}\mathcal{ L}_{adv}^{G}+\lambda_{\gamma}\mathcal{L}_{cyc}, \tag{14}\] where \(\lambda_{a}\), \(\lambda_{\beta}\), and \(\lambda_{\gamma}\) are hyper-parameters that control each loss function. ### Inference Compared to conventional diffusion models, which generate images from pure Gaussian noise through the iterative reverse process, the DARL model provides a segmentation mask in one step in that the model is trained to estimate the mask for a given input image. Similarly, the proposed C-DARL model also generates the mask for a given vessel medical image in a single step. In other words, for a perturbed vessel image \(x_{t}\), our model outputs the vessel segmentation mask \(\hat{m}^{v}\) through the contrastive path. Here, while the model can estimate the mask for any corrupted image with a noise level \(t\in[0,T]\) as shown in Fig. 6, we evaluate our method using the clean image \(x_{0}\) which can be considered as a target image for the diffusion model. We further discuss the segmentation performance according to the noise level in Section 5.2. ## 4 Experiments and results ### Experimental setup #### 4.1.1 Datasets To train our C-DARL model for blood vessel segmentation, we utilize a variety of vessel images from different medical domains, including coronary arteriograms, abdominal pancreatic and hepatic arteriograms, and retinal fundus photography images. 
Also, we test the proposed model using several benchmark datasets of blood vessel segmentation. The number of data used in the training, validation, and test phases is summarized in Table 1, and detailed descriptions and preprocessing methods of each dataset are as follows. _XCAD dataset._ The X-ray angiography coronary artery disease (XCAD) dataset (Ma et al., 2021) contains 1,621 coronary angiography images taken during stent placement. Also, using the fractal module, we synthesize 1,621 fractal masks for model training. For an additional 126 angiograms and the corresponding vessel segmentation masks labeled by experienced radiologists, we use 12 pairs of images and masks for the validation set, and the remaining 114 pairs for the test set. Each image has \(512\times 512\) sized resolution, and we subsample it into \(256\times 256\). In addition, by following the fractal synthesis method in Ma et al. (2021), we generate 1,621 synthetic fractal masks with \(512\times 512\) size, in which the fractals have various shapes and thicknesses. _134XCA dataset._ To test the model for vessel segmentation of coronary angiography images using externally distributed data over the training dataset, we use the 134 XCA dataset (Cervantes-Sanchez et al., 2019) that provides 134 coronary angiograms with ground-truth vessel masks, where the masks are obtained by an expert cardiologist. We resize the images to \(512\times 512\) and extract vessel masks using 4 subsampled \(256\times 256\) images. The full vessel masks are then obtained by un-subsampling the 4 masks into one. _30XCA dataset._ We also utilize the 30XCA dataset (Hao et al., 2020) in the evaluation of coronary angiography segmentation. \begin{table} \begin{tabular}{l l c c c c c c c c} \hline \hline \multicolumn{2}{c}{**Input**} & \multicolumn{3}{c}{**CA**} & \multicolumn{3}{c}{**AA**} & \multicolumn{3}{c}{**RI**} \\ \cline{3-10} & & XCAD & 134xCA & 30xCA & AAIR & UWF & FP & DRIVE & STARE \\ \hline \multirow{2}{*}{**Train**} & Image \(\mathbf{x}\) & 1,621 & - & - & 327 & 451 & 745 & - & - \\ & Fractal mask \(\mathbf{m}^{\prime}\) & 1,621 & - & - & - & - & - & - & - \\ \hline \multirow{2}{*}{**Val**} & Image \(\mathbf{x}\) & 12 & - & - & - & - & - & - & - \\ & Vessel label \(\mathbf{m}^{\prime}\) & 12 & - & - & - & - & - & - & - \\ \hline \multirow{2}{*}{**Test**} & Image \(\mathbf{x}\) & 114 & 134 & 30 & - & - & - & 20 & 20 \\ & Vessel label \(\mathbf{m}^{\prime}\) & 114 & 134 & 30 & - & - & - & 20 & 20 \\ \hline \multicolumn{2}{l}{CA: Coronary angiography, AA: Abdomen anglography, RI: Retinal imaging} \\ \hline \hline \end{tabular} \end{table} Table 1: The number of data used in each training, validation (val), and test phase. Figure 6: Inference of the blood vessel segmentation according to the noisy level \(t\). For given perturbed input images \(x_{t}\), the proposed C-DARL model generates vessel masks \(\hat{m}^{v}\) in a single step. We show an example of the vessel segmentation of the hepatic angiogram. This dataset has 30 X-ray coronary angiography series, and one image for each series and its corresponding vessel segmentation masks annotated by experts are used for the test. The images are processed as same as the 134XCA dataset. AAIR datasetAs one of the multi-domain angiography images, we use 327 abdominal angiography images that were obtained from 42 subjects at the National Institutes of Health Clinical Center. 
The angiography series were obtained with 2 frames per second (fps) during arteriography, embolization or calcium stimulation and show arteries in the abdomen such as the celiac, pancreatic, and hepatic vasculatures. For each scan, we select one frame in which the blood vessels were most visible and manually cropped the region that includes the vessels. We call these collected data the AAIR dataset. In the training phase, we randomly extract image patches with a size of \(256\times 256\). UWF and FP datasetsIn addition to the coronary and abdominal angiograms, our model is trained using retinal images of the ultra-widefield (UWF) data and the fundus photograph (FP) data which are provided by Yoo (2020). The UWF dataset has 451 normal or pathologic retinal images with artifacts, and the FP dataset has 745 retinal images without artifacts. We use these retinal data for training the model. For data processing, we resize each image to \(512\times 512\) and crop the center area to \(256\times 256\). Also, we convert the RGB scale images to grayscale for the data to have one channel. DRIVE datasetTo evaluate the model for retinal imaging data, we use the DRIVE dataset (Staal et al., 2004) that provides 20 retinal images and their vessel segmentation labels. We transform the images from RGB scale to grayscale and resize them to \(768\times 768\). Then we extract non-overlapping patches with a size of \(256\times 256\) and obtain the vessel masks for nonzero areas of those patches. STARE datasetFor the retinal image domain, the model is also tested using the STARE dataset (Hoover et al., 2000). This is composed of 20 retinal images and human-labeled blood vessel segmentation masks. As with the DRIVE dataset, the vessel masks are estimated for given \(256\times 256\) patches from the rescaled grayscale retinal images. #### 4.1.2 Implementation details Our proposed model is implemented using the PyTorch (Paszke et al., 2019) in Python. For the network architectures of the diffusion module and the generation module, we employ the DDPM (Ho et al., 2020) and the SPADE (Park et al., 2019), respectively. The DDPM network has a U-Net-like structure and takes an embedding vector of the time as well as the noise-corrupted image. Also, the SPADE model has a shallow encoder and a decoder with SPADE blocks, in which the SPADE layer is replaced with the S-SPADE layer for our C-DARL model. Also, for the discriminator, we utilize PatchGAN (Isola et al., 2017) to distinguish real and generated fake images in patch levels. We set the range of noisy time steps as [0, 2000] to linearly schedule the noise levels from \(10^{-6}\) to \(10^{-2}\). In the contrastive path, we set the maximum noisy level to 200, allowing the model to perform robust segmentation even on noisy images. The hyperparameters of our loss function are set as \(\lambda_{\alpha}=4\times 10^{-4}\), \(\lambda_{\beta}=0.2\), and \(\lambda_{\gamma}=2\). For the inputs of multi-domain images, as shown in Table 1, we use all images from the XCAD, AAIR, UWF, and FP datasets. The images are rescaled into [-1, 1] and augmented using random horizontal or vertical flips and rotation at 90 degrees. To optimize our model, we adopt the Adam algorithm (Kingma and Ba, 2014) with a learning rate of \(1\times 10^{-5}\). The model is trained for 150 epochs using a single GPU card of Nvidia A100-SXM4-40GB. Then, we test the proposed method using the network weights that achieve the best performance on the validation dataset. 
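To summarize the training setup just described, the sketch below collects the reported hyperparameters and the generator objective of Eq. (14); the dictionary layout and the function wrapper are our own illustration and are not taken from the authors' code, while the numerical values come from the text.

```python
# Hyperparameters reported in Section 4.1.2; the structure of this snippet is
# illustrative and not the authors' implementation.
config = {
    "timesteps": 2000,          # noise levels, betas from 1e-6 to 1e-2 (linear)
    "max_t_contrastive": 200,   # maximum noise level in the contrastive path
    "lambda_alpha": 4e-4,       # weight of the mask-based contrastive loss
    "lambda_beta": 0.2,         # weight of the adversarial loss
    "lambda_gamma": 2.0,        # weight of the cycle loss
    "lr": 1e-5,                 # Adam learning rate
    "epochs": 150,
}

def generator_loss(l_diff, l_cont, l_adv_g, l_cyc, cfg=config):
    """Total generator objective of Eq. (14):
    loss = L_diff + lambda_a * L_cont + lambda_b * L_adv^G + lambda_g * L_cyc."""
    return (l_diff
            + cfg["lambda_alpha"] * l_cont
            + cfg["lambda_beta"] * l_adv_g
            + cfg["lambda_gamma"] * l_cyc)
```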
#### 4.1.3 Evaluation For baseline methods, we adopt several self-supervised approaches that can use the pseudo fractal masks for a fair comparison: Deep Adversarial (DA) (Mahmood et al., 2019), Self-Supervised Vessel Segmentation (SSVS) (Ma et al., 2021), and Diffusion Adversarial Representation Learning (DARL) (Kim et al., 2022). For the models of SSVS and DARL that require background images acquired before contrast agent injection, we use only coronary angiography (CA) data from the XCAD dataset. For the DA method, we train two models: one is to only use the CA data, and the other is to use the same amount of training data as our method. All these baseline methods are trained under identical experimental conditions to the proposed C-DARL. The segmentation performance is evaluated quantitatively using several metrics: Intersection over Union (IoU), Dice similarity coefficient, and Precision. ### Results #### 4.2.1 Comparison study For the comparison study with the existing models, we evaluate the blood vessel segmentation performance on coronary angiography (CA) datasets and retinal imaging (RI) datasets. As \begin{table} \begin{tabular}{l l l l l l} \hline \hline \multirow{2}{*}{**Datect**} & \multirow{2}{*}{**Metric**} & \multirow{2}{*}{**w/ only CA data**} & \multicolumn{2}{c}{**w/ multi-domain data**} \\ \cline{3-6} & & DA & SSVS & DARL & DA & C-DARL \\ \hline \multicolumn{6}{l}{Coronary angiography (CA)} \\ \hline \multirow{3}{*}{XCAD} & IoU & 0.375\({}_{-0.000}\) & 0.410\({}_{-0.005}\) & 0.471\({}_{-0.005}\) & 0.302\({}_{-0.000}\) & **0.498\({}_{-0.000}\)** \\ & Dice & 0.542\({}_{-0.007}\) & 0.575\({}_{-0.001}\) & 0.653\({}_{-0.007}\) & 0.459\({}_{-0.011}\) & **0.661\({}_{-0.007}\)** \\ & Precision & 0.557\({}_{-0.113}\) & 0.590\({}_{-0.107}\) & 0.710\({}_{-0.011}\) & 0.449\({}_{-0.110}\) & **0.750\({}_{-0.010}\)** \\ \hline \multirow{3}{*}{134 XCA} & IoU & 0.257\({}_{-0.113}\) & 0.349\({}_{-0.003}\) & 0.599\({}_{-0.008}\) & 0.286\({}_{-0.002}\) & **0.545\({}_{-0.008}\)** \\ & Dice & 0.384\({}_{-0.350}\) & 0.550\({}_{-0.002}\) & 0.667\({}_{-0.007}\) & 0.407\({}_{-0.010}\) & **0.700\({}_{-0.000}\)** \\ & Precision & 0.490\({}_{-0.127}\) & 0.534\({}_{-0.004}\) & 0.645\({}_{-0.011}\) & 0.379\({}_{-0.22}\) & **0.673\({}_{-0.125}\)** \\ \hline \multirow{3}{*}{30 XCA} & IoU & 0.298\({}_{-0.109}\) & 0.324\({}_{-0.108}\) & 0.472\({}_{-0.111}\) & 0.535\({}_{-0.003}\) & **0.453\({}_{-0.003}\)** \\ & Dice & 0.471\({}_{-0.114}\) & 0.648\({}_{-0.108}\) & 0.572\({}_{-0.125}\) & 0.516\({}_{-0.011}\) & **0.621\({}_{-0.001}\)** \\ & Precision & 0.612\({}_{-0.117}\) & 0.613\({}_{-0.127}\) & 0.729\({}_{-0.112}\) & 0.659\({}_{-0.121}\) & **0.826\({}_{-0.005}\)** \\ \hline \multicolumn{6}{l}{Retinal Imaging (RI)} \\ \hline \multirow{3}{*}{DRIVE} & IoU & 0.108\({}_{-0.003}\) & 0.229\({}_{-0.004}\) & 0.313\({}_{-0.006}\) & 0.336\({}_{-0.000}\) & **0.345\({}_{-0.001}\)** \\ & Dice & 0.304\({}_{-0.002}\) & 0.371\({}_{-0.005}\) & 0.474\({}_{-0.005}\) & 0.501\({}_{-0.000}\) & **0.510\({}_{-0.007}\)** \\ & Precision & 0.472\({}_{-0.005}\) & 0.632\({}_{-0.007}\) & 0.899\({}_{-0.009}\) & 0.687\({}_{-0.002}\) & **0.904\({}_{-0.000}\)** \\ \hline \multirow{3}{*}{STARE} & IoU & 0.217\({}_{-0.107}\) & 0.293\({}_{-0.000}\) & **0.338\({}_{-0.003}\)** & 0.328\({}_{-0.002}\) & 0.367\({}_{-0.11}\) \\ & Dice & 0.351\({}_{-0.004}\) & 0.461\({}_{-0.011}\) & 0.594\({}_{-0.010}\) & 0.520\({}_{-0.148}\) \\ \cline{1-1} & Precision & 0.450\({}_{-0.111}\) & 0.571\({}_{-0.114}\) & 0.671\({}_{-0.102}\) & 0.585\({}_{-0.117}\) & 
**0.736\({}_{-0.179}\)** \\ \hline \hline \end{tabular} \end{table} Table 2: Quantitative evaluation of vessel segmentation performance for the comparison study with the self-supervised methods. reported in Table 2, our model achieves state-of-the-art (SOTA) performance on most of the datasets compared to the baseline methods of self-supervised vessel segmentation. In particular, although the proposed C-DARL model is trained using various domains of vessel images in contrast to the previous model of DARL, our performance is higher than the DARL model. This result indicates that the proposed contrastive learning framework using multi-domain images improves not only the segmentation performance but also the generalization performance on unseen datasets. In Fig. 7, we can observe that the segmentation performance on tiny vessel regions is significantly improved (see red arrows), which demonstrates the advantages of our contrastive learning that effectively differentiates real vessel structure from the pseudo fractal mask signal. Moreover, our C-DARL mitigates false positive artifacts compared to the existing models trained with only the CA dataset (see blue arrows) and shows the most promising precision metrics. This shows the training efficiency of the proposed label-free framework which is capable of incorporating various data distributions with no use of paired background images. #### 4.2.2 Noise corruption study In clinical practice, X-ray angiography images can be degraded by noise due to various factors such as low radiation dose, body mass, organ or patient motion, breathing, or X-ray energy parameters in the data acquisition procedure. Accordingly, we further evaluate the segmentation performance of our C-DARL under noise degradation scenarios. To simulate the noisy images, we add Poisson and Gaussian noises with different levels of \(\lambda\) and \(\sigma\), respectively, to the clean XCAD test data. Then, we extract vessel structures within the noise-degraded images. Fig. 8 and Fig. 9 show the quantitative and qualitative results of the XCAD test data between our model and the baseline methods. The detailed values of evaluation metrics are reported in Table 5 of Supplementary Material. We can observe that our model is robust to noise compared to the existing models even though the performance decreases according to the noise level being stronger. Also, the DARL and C-DARL are the only methods that endure harsh noise corruption with reasonable performance. These results suggest that our proposed diffusion adversarial learning framework is superior in unseen noise distributions since the model is trained with highly perturbed input images through the diffusion module. Figure 7: Visual comparisons of the vessel segmentation results on various vessel datasets. Red and blue arrows indicate remarkable parts of tiny vessel structures and false positive artifacts, respectively. #### 4.2.3 Ablation study Our proposed method is trained by minimizing the objective function (14) which is composed of diffusion loss, contrastive loss, adversarial loss, and cycle loss. To verify the effectiveness of each loss, we conduct an ablation study on the loss function and report the results in Table 3. When our C-DARL model is trained without the diffusion loss (w/o \(\mathcal{L}_{diff}\)), the segmentation performance slightly decreases than the proposed method. 
This suggests that the DDPM network architecture is effective to extract features for given noisy input images but learning data distribution through the diffusion loss, which enhances the performance of semantic image synthesis required to capture vessel structures, gives more improvement in vessel segmentation performance. Also, the ablation method that excludes the contrastive loss (w/o \(\mathcal{L}_{cont}\)) indicates that the proposed mask-based contrastive loss makes the model effectively learn vessel representations without ground-truth labels. In particular, although self-supervised learning can be possible via adversarial learning on the real and fake vessel masks, the experiment in which the contrastive loss is replaced with the adversarial loss (Replace \(\mathcal{L}_{cont}\rightarrow\mathcal{L}_{adv}\)), such as the loss function of the DARL, have 4% lower IoU and Dice scores and 8% lower Precision than the proposed method. This shows that our contrastive loss considering that the input fractal masks have different features from real blood vessels outperforms the adversarial loss regarding the synthetic fractal masks as real masks. For the C-DARL model trained without the adversarial loss (w/o \(\mathcal{L}_{adv}\)), the vessel structures are hardly extracted due to no guidance to learn vessel representations. Since the fake vessel images synthesized using the fractal mask provide vessel-like structure information to the model in the cyclic contrastive path, the adversarial loss is required to generate realistic vessel images. In addition, the model trained without the cycle loss shows much inferior vessel segmentation performance compared to the proposed method, implying that the cycle loss in the cyclic contrastive path enables the model to capture vessel structures of input images. On the other hand, we also study the necessity of input im Figure 8: Quantitative results of vessel segmentation on various noise-corrupted image scenarios. The top row shows the evaluation results according to \(\lambda\) level for Poisson noisy images, and the bottom row shows the results according to \(\sigma\) level for Gaussian noisy images. \begin{table} \begin{tabular}{l c c c} \hline \hline \multirow{2}{*}{**Ablation method**} & \multicolumn{3}{c}{**Metric**} \\ \cline{2-4} & IoU & Dice & Precision \\ \hline w/o \(\mathcal{L}_{diff}\) & 0.483\({}_{\pm 0.004}\) & 0.646\({}_{\pm 0.008}\) & 0.741\({}_{\pm 0.114}\) \\ w/o \(\mathcal{L}_{att}\) & 0.361\({}_{\pm 0.002}\) & 0.527\({}_{\pm 0.007}\) & 0.593\({}_{\pm 0.102}\) \\ w/o \(\mathcal{L}_{adv}\) & 0.004\({}_{\pm 0.005}\) & 0.006\({}_{\pm 0.006}\) & 0.006\({}_{\pm 0.008}\) \\ w/o \(\mathcal{L}_{adv}\) & 0.091\({}_{\pm 0.03}\) & 0.164\({}_{\pm 0.03}\) & 0.107\({}_{\pm 0.005}\) \\ Replace \(\mathcal{L}_{cont}\rightarrow\mathcal{L}_{adv}\) & 0.456\({}_{\pm 0.006}\) & 0.620\({}_{\pm 0.005}\) & 0.669\({}_{\pm 0.134}\) \\ w/o cut before \(G_{s}\) & 0.489\({}_{\pm 0.009}\) & 0.652\({}_{\pm 0.004}\) & 0.717\({}_{\pm 0.115}\) \\ \hline Proposed & **0.498\({}_{\pm 0.008}\)** & **0.661\({}_{\pm 0.007}\)** & **0.750\({}_{\pm 0.108}\)** \\ \hline \hline \end{tabular} \end{table} Table 3: Blood vessel segmentation results for the XCAD test dataset in the ablation study. Figure 9: Visual vessel segmentation results of the noisy corruption study. age concatenation before the generation module. 
We implement the ablation method that excludes the concatenation (w/o cat before \(G_{s}\)), by setting the contrastive path of the generation module to take only noisy vessel images while the adversarial path to take only latent images estimated from the diffusion module. As a result, the segmentation performance is similar to the proposed C-DARL model, but this ablation method shows relatively higher false positives. This indicates that although the two paths of the generation module may require different image information, the latent features from the diffusion module and the noisy vessel images synergistically improve the vessel segmentation performance and their concatenation further simplify the model flows. #### 4.2.4 Hyperparameter setting To investigate the optimal setting of hyper-parameters in our loss function ( 14), we implement our model by adjusting the values for each \(\lambda_{\alpha}\), \(\lambda_{\beta}\), and \(\lambda_{\gamma}\). When one of the parameters is adjusted, we set the other parameters to fixed values. Fig. 10 shows quantitative evaluation results on the validation dataset according to the hyperparameters. When the parameter \(\lambda_{\alpha}\) weighting the contrastive loss increases, the IoU and Dice scores increase and then converge, and the Precision value continues to increase slightly, leading to the achievement of the highest performance when \(\lambda_{\alpha}\) is 4\(\times\)10\({}^{-4}\). In the study of \(\lambda_{\beta}\) that controls the adversarial loss, we can observe that our model hardly captures vessel information if the model does not learn semantic image synthesis with \(\lambda_{\beta}=0\), but \(\lambda_{\beta}\) is between 0.1 and 0.3, the model learns vessel representation with high performance. In particular, when \(\lambda_{\beta}\) is set to 0.2, the model shows the best results in terms of IoU and Precision. Lastly, in the study of \(\lambda_{\gamma}\) for the cycle loss, the model shows the best performance at \(\lambda_{\gamma}=2.0\), and the performance gradually decreases as the parameter increases. By considering these results, we report the performance of our model that is trained by setting the hyperparameters as \(\lambda_{\alpha}=4\times 10^{-4}\), \(\lambda_{\beta}=0.2\), and \(\lambda_{\gamma}=2.0\). ## 5 Discussion ### Training using multi-domain data One of the strengths of our framework is that only vessel images are taken as input without background images before the contrast agent injection so that the model can utilize various vessel medical data, which improves the generalization performance. In order to demonstrate the effect of using multi-domain data in the training phase, we train our C-DARL model with respect to the number of vessel image domains. Table 4 reports the vessel segmentation performance according to the input image datasets. When compared to the model trained only using coronary angiography images from the XCAD dataset, the model additionally leveraging vessel data in different domains improves the segmentation performance with a gain from 3% to 5% for both internal and external test datasets. Also, the model trained using various datasets in three domains, including coronary images from the XCAD dataset, abdomen angiography images from the AAIR dataset, and retinal images from the UWF and FP datasets, achieves the highest performance in most of the test data. 
Future work will study performance on different clinical tasks, both in scenarios where the underlying vessel pathway is of interest (navigation tasks) and in settings where diagnostic surrogates for vessel or endothelial pathology are valuable.

\begin{table} \begin{tabular}{l l c c c} \hline \hline \multirow{2}{*}{**Dataset**} & \multirow{2}{*}{**Metric**} & \multicolumn{3}{c}{**Training dataset**} \\ \cline{3-5} & & CA & CA+AA & CA+AA+RI \\ \hline \multicolumn{5}{l}{Coronary angiography (CA)} \\ \hline \multirow{3}{*}{XCAD} & IoU & 0.466\({}_{\pm 0.027}\) & 0.491\({}_{\pm 0.008}\) & **0.498\({}_{\pm 0.008}\)** \\ & Dice & 0.633\({}_{\pm 0.020}\) & 0.654\({}_{\pm 0.001}\) & **0.661\({}_{\pm 0.007}\)** \\ & Precision & 0.712\({}_{\pm 0.154}\) & 0.745\({}_{\pm 0.111}\) & **0.750\({}_{\pm 0.108}\)** \\ \hline \multirow{3}{*}{134XCA} & IoU & 0.508\({}_{\pm 0.018}\) & 0.521\({}_{\pm 0.008}\) & **0.545\({}_{\pm 0.000}\)** \\ & Dice & 0.668\({}_{\pm 0.007}\) & 0.678\({}_{\pm 0.116}\) & **0.700\({}_{\pm 0.000}\)** \\ & Precision & 0.665\({}_{\pm 0.129}\) & 0.649\({}_{\pm 0.120}\) & **0.673\({}_{\pm 0.123}\)** \\ \hline \multirow{3}{*}{300XA} & IoU & 0.429\({}_{\pm 0.045}\) & **0.464\({}_{\pm 0.007}\)** & 0.453\({}_{\pm 0.001}\) \\ & Dice & 0.598\({}_{\pm 0.053}\) & **0.632\({}_{\pm 0.058}\)** & 0.621\({}_{\pm 0.009}\) \\ & Precision & 0.806\({}_{\pm 0.135}\) & 0.817\({}_{\pm 0.003}\) & **0.826\({}_{\pm 0.003}\)** \\ \hline \multicolumn{5}{l}{Retinal imaging (RI)} \\ \hline \multirow{3}{*}{DRIVE} & IoU & 0.302\({}_{\pm 0.009}\) & 0.314\({}_{\pm 0.005}\) & **0.345\({}_{\pm 0.001}\)** \\ & Dice & 0.462\({}_{\pm 0.058}\) & 0.475\({}_{\pm 0.019}\) & **0.510\({}_{\pm 0.007}\)** \\ & Precision & 0.538\({}_{\pm 0.022}\) & 0.588\({}_{\pm 0.103}\) & **0.904\({}_{\pm 0.000}\)** \\ \hline \multirow{3}{*}{STARE} & IoU & 0.333\({}_{\pm 0.131}\) & 0.342\({}_{\pm 0.117}\) & **0.367\({}_{\pm 0.133}\)** \\ & Dice & 0.484\({}_{\pm 0.157}\) & 0.498\({}_{\pm 0.117}\) & **0.522\({}_{\pm 0.154}\)** \\ & Precision & 0.685\({}_{\pm 0.129}\) & 0.691\({}_{\pm 0.179}\) & **0.736\({}_{\pm 0.179}\)** \\ \hline \multicolumn{5}{l}{CA: Coronary angiography, AA: Abdominal angiography, RI: Retinal imaging} \\ \hline \hline \end{tabular} \end{table} Table 4: Quantitative evaluation results of vessel segmentation performance according to the training image domains.

Figure 10: Study of hyperparameters in the loss function of the C-DARL model. We compare the evaluation results on the validation dataset according to the value of the hyperparameters \(\lambda_{\alpha}\) (top row), \(\lambda_{\beta}\) (middle row), and \(\lambda_{\gamma}\) (bottom row).

### Noise level for inference Our framework has another strong point in providing robust performance under noise corruption scenarios thanks to the proposed diffusion adversarial learning strategy, which leverages both the real vessel image perturbed through the forward diffusion process and the synthesized fake vessel image for learning blood vessel segmentation. We analyze the segmentation performance according to the noise level at a time step \(t\in[0,200]\) that is uniformly sampled with an interval of 20. In Fig. 11, our model achieves reliable vessel segmentation performance until \(t\) reaches 200, which is consistent with Section 4.2.2, where an additional noise corruption study with different types of noise on external datasets is presented. Although the model shows the best segmentation performance at \(t=0\), this inference study with different levels of noise demonstrates the great potential of our model in generalizing to out-of-distribution datasets.
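As a rough illustration of this noise-level study, a vessel image can be perturbed to diffusion time step \(t\) with the standard DDPM forward process; the linear \(\beta\) schedule below is an assumption made for this sketch, not necessarily the schedule used in training:

```python
import numpy as np

def perturb(x0, t, T=1000, beta_start=1e-4, beta_end=2e-2, rng=np.random):
    """Perturb a clean image x0 (roughly in [-1, 1]) to diffusion time step t.

    Uses x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps with a
    linear beta schedule (an assumption made for this sketch).
    """
    betas = np.linspace(beta_start, beta_end, T)
    alpha_bar = np.cumprod(1.0 - betas)[t]
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps

# Mirroring the study above: evaluate segmentation for t in [0, 200], step 20.
# for t in range(0, 201, 20):
#     x_t = perturb(vessel_image, t)
#     mask = model(x_t)   # hypothetical segmentation call
```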
### Abdomen angiography segmentation We also verify the segmentation performance of our model on abdominal angiography to further study its generalization to various domain datasets. As there is no benchmark dataset containing ground-truth vessel labels for the abdominal vessel images, we instead compare qualitative vessel segmentation results on the AAIR data. For a baseline model, we adopt the DARL model, which achieves the second-best performance in the comparison study of Section 4.2.1 (Table 2). Fig. 12 shows the results of blood vessel segmentation for several abdomen angiography images. Compared with the DARL model, the results show that the proposed C-DARL model achieves superior segmentation of vascular regions, including tiny and low-contrast branches. Also, our model shows superior segmentation performance even for blood vessels that are difficult to distinguish from the background structures. This suggests that our model can be applied to a variety of vascular images by improving the capability of learning vessel representations through contrastive learning under the condition of no ground-truth vessel masks. ## 6 Conclusion We propose a novel label-free C-DARL model that achieves reliable blood vessel segmentation performance on multi-domain vessel images. Our model addresses the limitations of the previously introduced DARL by effectively simplifying the two distinct tasks of vessel image generation and vessel segmentation, enabling self-supervised learning of blood vessel features using vessel images from a variety of domains. We further introduce mask-based contrastive learning into the diffusion adversarial learning framework, leveraging both the synthetic fractal masks and the estimated segmentation masks to effectively extract vessel representations under the label-free condition. Extensive experiments conducted on various vessel segmentation benchmarks demonstrate that our proposed C-DARL outperforms existing label-free vessel segmentation methods, offering noise robustness and promising generalization performance across diverse vessel datasets. This performance across a variety of clinical settings for vessel segmentation suggests that our model can provide clinicians with presumptive guidance during the diagnostic process for a variety of organs and diseases, as well as aid in detection, treatment planning, navigation, or reference data augmentation for a variety of clinical interventional and diagnostic tasks, without requiring any labeling process. ## Acknowledgments This research was supported in part by the Intramural Research Program of the National Institutes of Health, Clinical Center, United States, and in part by the National Research Foundation of Korea under Grant NRF-2020R1A2B5B03001980.
The automation of blood vessel segmentation in medical images is one of the key steps in the diagnosis of vascular disease and in intervention planning for image-based and interventional medicine. Unfortunately, manual annotation of vessel masks is difficult and resource-intensive because of the fine branches and complex structures involved. To overcome this problem, this paper proposes a self-supervised vessel segmentation method, the contrastive diffusion adversarial representation learning (C-DARL) model. The model consists of a diffusion module and a generation module, and learns the distribution of multi-domain vessel data by generating synthetic vessel images from the diffusion latent space. Furthermore, by using mask-based contrastive learning, the model can learn more substantial vessel representations. To validate the effectiveness of C-DARL, experiments are conducted on various vessel datasets, including coronary angiography, abdominal angiography, and retinal imaging.
2309.06813
A Strong Blend in the Morning: Studying the Circumgalactic Medium Before Cosmic Noon with Strong, Blended Lyman-$\alpha$ Forest Systems
We study the properties of a new class of circumgalactic medium absorbers identified in the Lyman-$\alpha$ forest: "Strong, Blended Lyman-$\alpha$" (or SBLA) absorption systems. We study SBLAs at $2.4<z<3.1$ in SDSS-IV/eBOSS spectra by their strong extended Lyman-$\alpha$ absorption complexes covering 138 $\,\,{\rm km}\,{\rm s}^{-1}$ with an integrated $\log (N_{HI}/$cm$^{-2}) =16.04\substack{+0.05 \\ -0.06}$ and Doppler parameter $b=18.1 \substack{+0.7 \\ -0.4}\,\,{\rm km}\,{\rm s}^{-1}$. Clustering with the Lyman-$\alpha$ forest provides a large-scale structure bias of $b = 2.34\pm0.06$ and halo mass estimate of $M_h \approx 10^{12}{\rm h^{-1}M_{sol}}$ for our SBLA sample. We measure the ensemble mean column densities of 22 metal features in the SBLA composite spectrum and find that no single-population multiphase model for them is viable. We therefore explore the underlying SBLA population by forward modelling the SBLA absorption distribution. Based on covariance measurements and favoured populations we find that $\approx 25$\% of our SBLAs have stronger metals. Using silicon only we find that our strong metal SBLAs trace gas with a $\log(n_H / $cm$^{-3}) > -2.40$ for $T=10^{3.5}$K and show gas clumping on $<210$ parsec scales. We fit multiphase models to this strong sub-population and find a low ionization phase with $n_H=1$cm$^{-3}$, $T=10^{3.5}$K and $[X/H]=0.8$, an intermediate ionization phase with $\log(n_H / $cm$^{-3}) = -3.05$, $T=10^{3.5}$K and $[X/H]=-0.8$, and a poorly constrained higher ionization phase. We find that the low ionization phase favours cold, dense super-solar metallicity gas with a clumping scale of just 0.009 parsecs.
Sean Morrison, Debopam Som, Matthew M. Pieri, Ignasi Pérez-Ràfols, Michael Blomqvist
2023-09-13T09:04:03
http://arxiv.org/abs/2309.06813v2
# A Strong Blend in the Morning: ###### Abstract We study the properties of a new class of circumgalactic medium absorbers identified in the Lyman-\(\alpha\) forest: "Strong, Blended Lyman-\(\alpha\)" (or SBLA) absorption systems. We study SBLAs at \(2.4<z<3.1\) in SDSS-IV/eBOSS spectra by their strong extended Lyman-\(\alpha\) absorption complexes covering \(138\) km s\({}^{-1}\) with an integrated \(\log(N_{HI}/\mathrm{cm}^{-2})=16.04^{+2.08}_{-0.08}\) and \(b=18.1^{+0.27}_{-1.44}\) km s\({}^{-1}\). Clustering with the Lyman-\(\alpha\) forest provides a large-scale structure bias of \(b=2.34\pm 0.06\) and halo mass estimate of \(M_{h}\approx 10^{12}\)h\({}^{-1}\)M\({}_{\sun}\) for our SBLA sample. We measure the ensemble mean column densities of 22 metal features in the SBLA composite spectrum and find that no single-population multiphase model for them is viable. We therefore explore the underlying SBLA population by forward modelling the SBLA absorption distribution. Based on covariance measurements and favoured populations we find that \(\approx 25\%\) of our SBLAs have stronger metals. Using silicon only we find that our strong metal SBLAs trace gas with a \(\log(n_{H}/\mathrm{cm}^{-3})>-2.45\) for \(T=10^{3.5}\)K and show gas clumping on \(<255\) parsec scales. We fit multiphase models to this strong sub-population and find a low ionization phase with \(n_{H}=1\mathrm{cm}^{-3}\), \(T=10^{3.5}\)K and \([X/H]=0.8\), an intermediate ionization phase with \(\log(n_{H}/\mathrm{cm}^{-3})=-3.35\), \(T=10^{3.5}\)K and \([X/H]=-1.1\), and a poorly constrained higher ionization phase. We find that the low ionization phase traces cold, dense super-solar metallicity gas with a clumping scale of just 0.009 parsecs. keywords: intergalactic medium - quasars: absorption lines - galaxies: formation - galaxies: evolution - galaxies: high-redshift ## 1 Introduction The history of the universe can be thought of as an evolution through a series of distinct epochs: the hot Big Bang, the dark ages, the epoch of the first stars, hydrogen reionization, the galaxy formative era reaching a crescendo when the star formation rate peaks at \(z\approx 2\) (Madau & Dickinson, 2014), and finally a gradual decline in star formation activity (driven in part by dark energy driving the expansion of the universe) leading to the mature galaxies we see today. The star formation rate peak is often known as 'cosmic noon'. The period leading up to that epoch (which we might call the 'cosmic morning') is one of the most active periods in the history of the universe. This is the epoch where gas is actively accreting onto galaxies and fuelling imminent star formation. It is also the epoch where galaxies increasingly respond to star formation and eject outflows into their surrounding environments. The zone where accretion and outflows occur is known as the 'circumgalactic medium' (or CGM); in general, regions outside galaxies are known as the 'intergalactic medium' (or IGM). The cosmic morning is also notable as the epoch where key UV atomic transitions are redshifted into the optical window, allowing us to study them from ground-based observatories in great detail. In particular, the richly informative Lyman-\(\alpha\) (Ly\(\alpha\)) forest is well-studied at \(z>2\), typically towards background quasars (Gunn & Peterson, 1965; Lynds, 1971). This leads to samples of Lyman-\(\alpha\) forest spectra going from a few hundred at \(z<2\) to a few hundred thousand at \(z>2\).
This combination of high observability and high information content is encouraging for developing an understanding of galaxy formation; however, progress has been held back by the fact that at these high redshifts galaxies are faint and so have been observed in much smaller numbers than the active galactic nuclei hosting quasars. Wide-area surveys of galaxies at \(z>2\) are on their way (e.g. HETDEX, Hill et al., 2008 and PFS, Takada et al., 2014) but in the meantime, and in complement to them, we can study galaxies in absorption. The most widely known and accepted way of doing this is to study damped Lyman-\(\alpha\) systems (or DLAs; Wolfe et al., 2005), which are systems with a column density \(N_{HI}>10^{20.3}\)cm\({}^{-2}\) such that ionizing photons cannot penetrate them. These systems are typically easy to identify in absorption through their wide damping wings. A wider category of systems (including DLAs) that do not allow the passage of ionizing photons (i.e. self-shielded) are named Lyman limit systems (or LLSs), which have column densities \(N_{HI}>10^{17.2}\)cm\({}^{-2}\). Partial Lyman limit systems with \(10^{16.2}\)cm\({}^{-2}<N_{HI}<10^{17.2}\)cm\({}^{-2}\) absorb a sufficient fraction of ionizing photons to modify the ionization fractions of the species they host (though the lower boundary of this group is somewhat ill-defined). DLAs are thought to have a particularly strong connection to galaxies since the densities inferred are approximately sufficient to provoke star formation (e.g. Rafelski et al., 2011). LLSs are less clear-cut: sometimes they are thought to be closely associated with galaxies, and in other cases they are thought to trace cold streams of inflowing gas (e.g. Fumagalli et al., 2011). Self-shielded systems have a small covering fraction of the CGM (typically defined as the region within the virial radius of a galaxy-hosting dark matter halo). The overwhelming majority of CGM regions are not detectable as Lyman limit systems but are optically thin with \(10^{14}\)cm\({}^{-2}\leq N_{HI}\lesssim 10^{16}\)cm\({}^{-2}\) (e.g. Fumagalli et al., 2011 and Hummels et al., 2019). Conversely, many of these strong optically thin systems are not CGM systems but instead probe diffuse IGM gas. In other words, these systems are critically important tracers of the CGM but their CGM/IGM classification is challenging. Furthermore, given that lines with \(N_{HI}\gtrsim 10^{14}\)cm\({}^{-2}\) are on the flat part of the curve of growth (e.g. Charlton and Churchill, 2000) and therefore suffer from a degeneracy between column density and line broadening, even the column density itself is a non-trivial measurement. We explore a wider sample of CGM systems that are not optically thick to ionizing photons, nor do they require a challenging estimation of column density for confirmation. The sample requires only that the absorption in the Lyman-\(\alpha\) forest be strong and blended. This population has already been studied in Pieri et al. (2010) and followed up in Pieri et al. (2014) through spectral stacking. We return to this sample with a refined error analysis of the stacked spectra, a formalised measurement of column densities, halo mass constraints and more extensive interpretation, in particular modelling of the underlying metal populations in the stack of spectra.
There are various observational studies of the CGM that provide gas details such as thermal properties, density and metallicity, sometimes with respect to a galactic disk, and often as a function of impact parameter to the likely closest galaxy (e.g. Steidel et al., 2010; Bouche et al., 2013; Werk et al., 2014; Augustin et al., 2018; Qu et al., 2022). SINFONI and MUSE integral field unit data have provided a particular boost to the detail and statistics of CGM observations (e.g. Peroux et al., 2016; Fumagalli et al., 2014; Fossati et al., 2021). Despite the exquisite detail offered by these datasets, an unbiased, large statistical sample of spectra is needed in order to develop a global picture of the CGM. Obtaining such samples with this level of detail remains a distant prospect. Hence, we take a brute force approach. We identify CGM regions as a general population with diverse gas properties studied simultaneously. These samples number in the tens or even hundreds of thousands, recovered from SDSS spectra of \(z>2\) quasars. The selection function is simple (see Section 3) but the challenge resides in the interpretation of this rich but mixed sample. Complexity exists not only in the unresolved phases, but also in the diversity of systems selected. In previous work (Pieri et al., 2010, 2014; Morrison et al., 2021; Yang et al., 2022) the focus has been to interpret the multi-phase properties of a hypothetical mean system that is measured with high precision in the composite spectrum of the ensemble. We revisit these measurements, and go further to study the underlying populations of metal features: both their individual expected populations and the degree of covariance between them. We focus in particular on a strong metal population that we infer, and find signs of metal-rich, high-density, cold gas clumping on remarkably small scales. Much remains to be explored but we offer a framework for studying the CGM in the largest Lyman-\(\alpha\) forest samples. In light of the improved understanding outlined here, we define these absorption systems (initially discovered in Pieri et al., 2010) as a new class of CGM absorption systems defined by both absorption strength and clustering on \(\sim 100\) km s\({}^{-1}\) scales, and we name them 'Strong, Blended Lyman-\(\alpha\)' or SBLA absorption systems. Over the coming decade quasar surveys at \(z>2\) will grow and will increasingly be accompanied by galaxy surveys at the same redshifts, making this statistical population analysis an increasingly powerful tool. This publication is structured as follows. We begin by describing the dataset (including quasar continuum fitting to isolate the foreground transmission spectrum). In Section 3 we describe various ways of selecting SBLAs for different purity and absorption strength before settling on an analysis sample for subsequent sections. We then review the stacking methodology in Section 4 and follow this in Section 5 with a comprehensive end-to-end error analysis of the resulting composite spectrum of SBLAs. In Section 6 we present large-scale structure biases for SBLAs and inferences regarding their halo masses. In Section 7 we begin to explore our results with measurements of the composite H i and metal column densities, the sensitivity to physical conditions and the covariance between metal lines. We then go on to model and constrain the underlying absorber populations and explore the properties of the strong metal population in Section 8.
We follow up with an extensive discussion in Section 9 and conclusions. We also provide appendices on the correlation function methodology used to measure structure bias (Appendix A), details on the error analysis (Appendix B), SBLAs studied in high-resolution spectra (Appendix C), and finally measurements of the covariance between our metal features (Appendix D). ## 2 Data SDSS-IV (Blanton et al., 2017) carried out three spectroscopic surveys using the 2.5-meter Sloan telescope (Gunn et al., 2006) in New Mexico. These surveys included APOGEE-2 (an infrared survey of Milky Way stars), the Extended Baryon Oscillation Spectroscopic Survey (eBOSS; an optical cosmological survey of quasars and galaxies) and MaNGA (an optical IFU survey of \(\sim\)10,000 nearby galaxies). eBOSS, an extension of the SDSS-III (Eisenstein et al., 2011; Dawson et al., 2013) BOSS survey, utilizes the BOSS spectrograph. The BOSS instrument (Smee et al., 2013) employs a twin spectrograph design with each spectrograph separating incoming light into a blue and a red camera. The resulting spectral coverage is over 3450A - 10,400A with a resolving power (\(\lambda/\Delta\lambda\)) ranging from \(\sim\) 1650 (near the blue end) to \(\sim\) 2500 (near the red end). We discard regions with a 100 pixel boxcar smoothed signal-to-noise ratio (S/N) \(<\) 1, in order to exclude from our analysis regions of sharply falling S/N at the blue extreme of SDSS-IV quasar spectra. Pixels flagged by the pipeline, pixels around bright sky lines and the observed Galactic Ca ii H&K absorption lines were also masked throughout our stacking analysis. We use a high redshift quasar sample derived from the final data release of eBOSS quasars (Lyke et al., 2020, hereafter DR16Q) from SDSS-IV Data Release 16 (Ahumada et al., 2020). The spectra of objects targeted as quasars are reduced and calibrated by the SDSS spectroscopic pipeline (Bolton et al., 2012) which also classifies and determines the redshifts of sources automatically. Unlike the quasar catalogues from DR12Q (Paris et al., 2017) and earlier, the additional quasars in DR16Q are primarily selected via the automated pipeline, with a small visually inspected sample for validation. Ensuring the availability of enough Ly\(\alpha\) forest pixels required for an accurate continuum estimate restricts the minimum redshift of our quasar sample to \(z_{em}\geq 2.15\). We also discard DR16Q quasars with median Ly\(\alpha\) forest S/N \(<\) 0.2 pixel\({}^{-1}\) or median S/N \(<\) 0.5 pixel\({}^{-1}\) over 1268 A - 1380 A, given the difficulty of accurately estimating continua for very noisy spectra. Since the presence of BAL troughs contaminates the Ly\(\alpha\) forest with intrinsic quasar absorption and likely affects continuum estimation, we discard quasars flagged as BAL quasars in DR16Q. Pixels which were flagged by the pipeline as problematic during the extraction, flux calibration or sky subtraction process were excluded from our analysis. Spectra of DR16Q quasars with more than 20% of pixels within 1216 \(<\)\(\lambda_{rest}\)\(<\) 1600 A or in the Ly\(\alpha\) forest region flagged as unreliable by the pipeline were discarded. DLAs and their wings (where the estimated flux decrement is \(>\) 5%) in our data were masked using the DLA catalogue internal to the DR16Q catalogue, presented in Chabanier et al. (2022) and based on the Parks et al. (2018) convolutional neural network deep learning algorithm designed to identify and characterise DLAs. Spectra with more than one DLA are entirely discarded throughout our analysis.
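As a compact summary, the quasar-level cuts described in this section can be sketched as a filter of the following form (attribute names are illustrative, not fields of the DR16Q files):

```python
def keep_quasar(q):
    """Illustrative quality cuts on a DR16Q quasar record (attribute names are ours)."""
    if q.z_em < 2.15:                         # need enough Lya forest coverage
        return False
    if q.is_bal:                              # BAL troughs contaminate the forest
        return False
    if q.median_forest_snr < 0.2:             # median per-pixel S/N in the Lya forest
        return False
    if q.median_snr_1268_1380 < 0.5:          # median per-pixel S/N over 1268-1380 A
        return False
    if q.frac_bad_pixels_1216_1600 > 0.2 or q.frac_bad_pixels_forest > 0.2:
        return False                          # too many pipeline-flagged pixels
    if q.n_dla > 1:                           # spectra with more than one DLA are dropped
        return False
    return True
```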
Further steps are taken to prepare the data for the selection of Ly\(\alpha\) systems to be stacked and the spectra to be stacked themselves. Steps taken for the calculation of forest correlation functions are explained separately in Section 6. ### Preparation for Lyman-\(\alpha\) absorber selection We take two approaches for the normalisation of the quasar continua in our stacking analysis. For SBLA detection we follow the method described in Lee et al. (2013) over 1040 - 1185 A in the rest frame. The modified version of the MF-PCA technique presented in Lee et al. (2012) fits the 1216 \(<\)\(\lambda_{rest}\)\(<\) 1600 A region of a quasar spectrum with PCA templates providing a prediction for the continuum shape in the Ly\(\alpha\) forest. The predicted continuum is then re-scaled to match the expected evolution of the Ly\(\alpha\) forest mean flux from Faucher-Giguere et al. (2008). The above definition of the forest region avoids contamination from higher order Lyman series lines and conservatively excludes the quasar proximity zone. We discard any spectrum for which the estimated continuum turns out to be negative. Metal absorption lines are masked using a \(3\sigma\) iterative flagging of outlier pixels redward of the Ly\(\alpha\) forest from a spline fit of the continua. With all the cuts mentioned above, we are left with an analysis sample of 198,405 quasars with a redshift distribution shown in Figure 1 along with the distribution of all \(z\geq 2\) quasars from DR16Q. ### Preparation of spectra to be stacked The mean-flux regulated PCA continua described above provide robust estimates of the Ly\(\alpha\) forest absorption and are therefore well-suited for the search and selection of SBLAs for stacking. However, these continua are limited to \(<\)1600 A in the quasar rest frame and present discontinuities due to the mean-flux regulation process. For spectra to be stacked, we require wide wavelength coverage without discontinuities and so we use spline fitting. We split each spectrum into 25A chunks over the entire observed spectral range and calculate the median flux in each spectral chunk before fitting a cubic spline to these flux nodes. Pixels falling 1\(\sigma\) below the fit within the Ly\(\alpha\) forest, or 2\(\sigma\) below outside the forest, are then rejected and the flux nodes are recalculated, followed by a re-evaluation of the spline fit. This absorption-rejection is iterated until convergence to estimate the quasar continuum. The cubic spline fitting breaks down in regions with large gradients in the continuum, usually near the centres of strong quasar emission lines. We, therefore, mask data around the peaks of emission features commonly seen in quasar spectra before the continuum fitting is performed. In addition, as sharp edges (caused by missing data as a result of masking the emission peaks) can induce instability in the fits using the smooth cubic spline function, we discard a buffer region around the emission line masks. The extents of the masked region (\(\lambda_{mask}\)) and the corresponding buffer (\(\pm\lambda_{buffer}\)), in the quasar rest frame, depend on the typical strength of the emission line concerned and are listed in Table 1 along with the rest frame locations of the emission line centres.
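A minimal sketch of this iterative spline continuum fit, assuming scipy is available; the \(\sigma\) used for rejection here is a global residual scatter, which is a simplification of the per-pixel uncertainties:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def spline_continuum(wave, flux, in_forest, chunk=25.0, max_iter=20):
    """Iteratively fit a cubic-spline continuum through 25 A median-flux nodes.

    Pixels falling 1 sigma below the fit inside the Lya forest (2 sigma outside)
    are rejected and the nodes recomputed until the pixel mask converges.
    """
    keep = np.ones_like(flux, dtype=bool)
    edges = np.arange(wave.min(), wave.max() + chunk, chunk)
    cont = np.ones_like(flux)
    for _ in range(max_iter):
        node_w, node_f = [], []
        for lo, hi in zip(edges[:-1], edges[1:]):
            sel = keep & (wave >= lo) & (wave < hi)
            if sel.sum() > 3:
                node_w.append(wave[sel].mean())
                node_f.append(np.median(flux[sel]))
        cont = CubicSpline(node_w, node_f)(wave)
        sigma = np.std(flux[keep] - cont[keep])        # simplified noise estimate
        thresh = np.where(in_forest, 1.0, 2.0) * sigma
        new_keep = flux > cont - thresh                # reject absorbed pixels
        if np.array_equal(new_keep, keep):
            break
        keep = new_keep
    return cont
```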
Figure 1: Redshift distribution of the 198,405 quasars in our initial sample, shown in black. The thick grey solid curve represents the distribution of all \(z\geq 2\) quasars from DR16Q. Also shown are the 4 samples of SBLAs: FS0 (light green dashed line), FS1 (light red dash-dotted line), FS2 (orange dotted line), and P30 (dashed double-dotted line), as discussed in Section 3.

## 3 Selection of strong, blended Lyman-\(\alpha\) forest absorption systems When analysing the absorption in the Ly\(\alpha\) forest, typically two approaches are taken. One may treat the forest as a series of discrete identifiable systems that can be fit as Voigt profiles, thereby deriving their column densities and thermal and/or turbulent broadening. Alternatively, one may treat the forest as a continually fluctuating gas density field and therefore take each pixel in the spectrum and infer a measurement of gas opacity (the so-called 'fluctuating Gunn-Peterson approximation'). For the former, the assumption is that the gas can be resolved into a discrete set of clouds, which is physically incorrect for the Ly\(\alpha\) forest as a whole but a useful approximation in some conditions. For the latter, it is assumed that line broadening effects are subdominant to the complex density structure as a function of redshift in the Ly\(\alpha\) forest. In this work, we take the second approach, selecting absorption systems based on the measured flux transmission in a spectral bin in the forest, \(F_{\rm Ly\alpha}\). The absorbers in our sample are selected from wavelengths of 1040 A \(<\lambda<\)1185 A in the quasar rest frame. This range was chosen to eliminate the selection of Ly\(\beta\) absorbers and exclude regions of elevated continuum fitting noise from Ly\(\beta\) and O vi emission lines at the blue limit, and absorbers proximate to the quasar (within 7563 \(\,\rm km\,s^{-1}\)) at the red limit. We follow the method of Pieri et al. (2014) (hereafter P14) to generate their three strongest absorber samples, which they argued select CGM systems with varying purity. We limit ourselves to \(2.4<z_{abs}<3.1\) to retain sample homogeneity with varying wavelength. Without this limit there would be limited sample overlap across the composite spectrum (the blue end of the composite would measure exclusively higher redshift SBLAs and the red end would measure exclusively lower redshift SBLAs). Specifically, P14 chose this redshift range to allow simultaneous measurement of both the full Lyman series and Mg ii absorption. We take the main samples explored in P14, using a signal-to-noise per pixel \(>3\) over a 100 pixel boxcar. Of the 198,405 quasars available, 68,525 had forest regions of sufficient quality for the recovery of Strong, Blended Lyman-\(\alpha\) absorbers. These samples are: FS0 with \(-0.05\leq F_{\rm Ly\alpha}<0.05\), FS1 with \(0.05\leq F_{\rm Ly\alpha}<0.15\), and FS2 with \(0.15\leq F_{\rm Ly\alpha}<0.25\). The numbers of systems identified are given in Table 2. This is approximately quadruple the number of SBLAs with respect to P14 (though they were not given this name at the time). We also consider samples defined by their purity as discussed below. All remaining absorbers (after the flagging discussed in the previous section) are assumed to arise due to the Ly\(\alpha\) transition with \(2.4<z<3.1\), and are available for selection. Given the strength of absorbers selected here this is a fair assumption, and in cases where this is not true, the effect is easily controlled for (e.g. the 'shadow Si iii' features discussed in P14). The spectral resolution of the BOSS spectrograph varies from \(R=1560\) at 3700A to \(R=2270\) at 6000A.
For the chosen redshift range the resolution at the wavelength of the selected Ly\(\alpha\) absorption is \(R\approx 1800\) and this is therefore our effective spectral resolution throughout this work. This equates to 167 \(\,\rm km\,s^{-1}\), or 2.4 pixels in the native SDSS wavelength solution. This allows us to rebin the spectra by a factor of 2 before selection of our Ly\(\alpha\) absorbers to reduce noise and improve our selection of absorbers. It has the added benefit of precluding double-counting of absorbers within a single resolution element. This results in the selection of absorbers on velocity scales of \(\sim 138\)\(\,\rm km\,s^{-1}\). Given that Lyman-\(\alpha\) absorbers have a median Doppler parameter of \(b\approx 30\)\(\,\rm km\,s^{-1}\) (and \(\sigma=10\)\(\,\rm km\,s^{-1}\); Hu et al. 1995; Rudie et al. 2012), our absorber selection is a function of both absorber strength and absorber blending. More detail is provided on the meaning of this selection function in P14. One of the key results of P14 was that regions of the Ly\(\alpha\) forest with transmission less than 25% in bins of 138 \(\,\rm km\,s^{-1}\) are typically associated with the CGM of Lyman break galaxies (using Keck HIRES and VLT UVES spectra with nearby Lyman break galaxies). The metal properties in the composite spectra were strongly supportive of this picture. We further reinforce this picture with improved metal measurements, constraints on their halo mass from large-scale clustering, and arguments regarding halo circular velocities in Section 6. Given the weight of evidence that these systems represent a previously unclassified sample of galaxies in absorption, we chose to explicitly define them as a new class and name them "Strong, Blended Lyman-\(\alpha\)" (SBLA) forest absorption systems. The preferred definition here is a noiseless transmitted Ly\(\alpha\) flux \(F_{\rm Ly\alpha}<0.25\) over bins of 138 \(\,\rm km\,s^{-1}\), for consistency with this Lyman break galaxy comparison and comparison with P14. Refinement of this class of SBLAs and/or alternative classes of SBLAs are possible with modifications of transmission or velocity scale. In the arguments that follow, statements regarding purity refer specifically to the successful recovery of this idealised SBLA class. As pointed out in Section 2, DLAs from DR16Q (presented in Chabanier et al. 2022) are masked in our selection of SBLAs; however, no catalogue of Lyman limit systems (LLSs) is available and such systems are therefore potentially among the SBLA sample. As P14 discussed at length, even if one assumes that all LLSs are selected (which is not a given), no more than 3.7% of SBLAs should be an LLS. SBLAs are much more numerous, and this is not surprising in light of simulations (e.g. Hummels et al. 2019) showing that the covering fraction of LLSs (including DLAs) is small compared to that of regions with the integrated column density of \(\approx 10^{16}\rm cm^{-2}\) we find here. The presence of even small numbers of Lyman limit systems can be impactful for our ionization corrections, however, and we return to this topic in Section 7.4 and Section 8.
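For concreteness, the core of the selection described in this section can be sketched as follows; this is a minimal illustration that omits the masking and S/N requirements, and array and function names are ours:

```python
import numpy as np

def select_sblas(flux_norm, f_lo=-0.05, f_hi=0.05):
    """Select SBLA candidates from a continuum-normalised Lya forest segment.

    Pairs of native SDSS pixels are averaged (138 km/s bins) and bins with
    mean transmitted flux in [f_lo, f_hi) are returned; the defaults give the
    FS0 boundaries, while FS1 and FS2 use (0.05, 0.15) and (0.15, 0.25).
    """
    n = len(flux_norm) - len(flux_norm) % 2
    binned = flux_norm[:n].reshape(-1, 2).mean(axis=1)   # rebin by a factor of 2
    selected = np.where((binned >= f_lo) & (binned < f_hi))[0]
    return binned, selected
```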
### Using Ly\(\alpha\) Mocks to Characterise Sample Purity The FS0 sample provides higher purity SBLA selection than FS1 or FS2 (P14). However, we note that there exist sets of systems that do not meet these requirements but have equivalent or better purity compared to subsets of FS0 systems with limiting S/N or flux. For example, systems with \(F_{\rm Ly\alpha}=0.06\) and S/N/A \(=10\) will have a higher SBLA purity than systems with \(F_{\rm Ly\alpha}=0.04\) and S/N/A \(\approx 3\), even though the latter meets the requirements for sample FS0 and the former does not. We have therefore explored the optimal combination of inter-dependent S/N and flux transmission thresholds to obtain a desired limiting purity. We used the official SDSS BOSS Lyman-\(\alpha\) forest mock data-set produced for DR11 (Bautista et al. 2015) without the addition of DLAs (which are masked in our analysis) and metal absorption lines (which are rare, particularly for the strong, blended absorption studied here).

\begin{table} \begin{tabular}{l c c c} \hline Emission Line & \(\lambda_{rest}\) & \(\lambda_{mask}\) & \(\pm\lambda_{buffer}\) \\ & (Å) & (Å) & (Å) \\ \hline Ly\(\beta\) & 1033.03 & 1023 \(-\) 1041 & 5 \\ Ly\(\alpha\) & 1215.67 & 1204 \(-\) 1240 & 10 \\ O i & 1305.42 & 1298 \(-\) 1312 & 5 \\ Si iv & 1396.76 & 1387 \(-\) 1407 & 10 \\ C iv & 1549.06 & 1533 \(-\) 1558 & 10 \\ He ii & 1637.84 & 1630 \(-\) 1645 & 5 \\ C iii & 1908.73 & 1890 \(-\) 1919 & 10 \\ Mg ii & 2798.75 & 2788 \(-\) 2811 & 5 \\ \hline \end{tabular} \end{table} Table 1: Emission line masks and buffer regions used in cubic-spline continuum estimation. All wavelengths listed are in the quasar rest frame.

The signal-to-noise was calculated using a 100-pixel boxcar smoothing of the unbinned data (replicating the selection function in the data), and then was rebinned to match the resolution used in our selection function. We then compared the observed (noise-in) Ly\(\alpha\) flux transmission in the models with the true (noiseless) flux transmission of these systems in various ranges of observed flux transmission and S/N. The purity is the fraction of selected systems that meet the SBLA definition of true (noiseless) flux transmission \(F_{\rm Ly\alpha}<0.25\). We then accept ranges that meet a given purity requirement. We estimated the purity for a grid of S/N/A \(>0.4\) (with step size of 0.2) and \(-0.05\leq\)F\(<0.35\) (with step size of 0.05). The flux and S/N/A of the selected lines in the real data are compared to this grid to give an estimate of the purity of the selection. By building samples in this way we are not limited to the high signal-to-noise data used in P14. Though we focus on FS0 for consistency with P14, we demonstrate here how expanded samples can be prepared. Using this approach, we propose three additional samples defined by their limiting SBLA purities. Noting that the mean purity of the FS0 sample is \(\approx 90\%\), we produce a sample of 90% minimum purity, which we label P90. We do indeed obtain a more optimal sample with both higher mean purity and nearly double the number of SBLAs with sample P90 compared to FS0. We further produce samples with minimum purity of 75% and 30%, labelled P75 and P30 respectively. The numbers and resulting mean purities taken from these mock tests are shown in Table 2. These tests indicate that around 200,000 SBLAs at \(2.4<z<3.1\) are present in the data. Our companion paper, Perez-Rafols et al. (2023), uses a version of our P30 sample without a redshift limit to measure large-scale structure clustering. This provided us with 742,832 SBLAs. Assuming that our inferred purity for P30 is correct for this sample also, we obtain around half a million true SBLAs in our most inclusive sample. This is more than an order of magnitude more CGM systems than our DLA sample.
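The purity estimation on the mocks can be sketched as follows, where `f_obs` is the observed (noise-in) binned flux, `f_true` the noiseless flux and `snr` the local S/N of each mock bin (array and function names are ours):

```python
import numpy as np

def purity_grid(f_obs, f_true, snr, f_edges, snr_edges):
    """Fraction of selected mock bins that are true SBLAs (noiseless F < 0.25),
    evaluated on a grid of observed-flux and S/N cells."""
    purity = np.full((len(f_edges) - 1, len(snr_edges) - 1), np.nan)
    for i, (flo, fhi) in enumerate(zip(f_edges[:-1], f_edges[1:])):
        for j, (slo, shi) in enumerate(zip(snr_edges[:-1], snr_edges[1:])):
            sel = (f_obs >= flo) & (f_obs < fhi) & (snr >= slo) & (snr < shi)
            if sel.sum() > 0:
                purity[i, j] = np.mean(f_true[sel] < 0.25)
    return purity

# e.g. f_edges = np.arange(-0.05, 0.36, 0.05); snr_edges = np.arange(0.4, 10.1, 0.2)
# Grid cells whose purity exceeds a chosen floor (0.90 for P90, 0.75 for P75,
# 0.30 for P30) define which (flux, S/N) combinations enter each sample.
```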
## 4 Stacking Procedure We follow the method originally set out in Pieri et al. (2010) (hereafter P10) and further elaborated in P14 for building composite spectra of Ly\(\alpha\) forest absorbers through the process of stacking SDSS spectra. For every selected Ly\(\alpha\) absorber with redshift \(z_{\alpha}\), the entire continuum fitted quasar spectrum is treated initially as if it were the spectrum of that system alone. In practice, one produces a rest frame spectrum of that absorber by dividing the wavelength array by \((1+z_{\alpha})\). This is done many times for many different selected absorbers (sometimes using the quasar spectra more than once). This ensemble of SBLA rest frame spectra constitutes the stack of spectra to be analysed. Typically one collapses this stack to a single value at every wavelength using some statistic. In P10 and P14 two statistics were applied: the median and the arithmetic mean (though in some circumstances the geometric mean may be the more suitable choice). In Section 8 below we will explore what we can learn from the full population of absorbers and relax the implicit assumption that all systems in a given sample are the same. In this work we will focus on the arithmetic mean with no S/N weighting, for reasons which will become clear in Section 8. Stating this explicitly, we calculate the mean of the stack of spectra (or 'mean stacked spectrum') as \[F_{S}(\lambda_{r})=\sum_{i=1}^{n}F_{i}(\lambda_{r})/n \tag{1}\] where \(\lambda_{r}\) indicates the wavelength in the rest frame of the SBLA system selected and the set of \(i=1,n\) indicates SBLAs that contribute a measurement at the specified rest frame wavelength. Following the method of P10 and P14, in order to calculate the arithmetic mean, we sigma clip the high and low 3% of the stack of spectra to reduce our sensitivity to outliers.
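A minimal sketch of this clipped arithmetic mean, assuming the stack has already been assembled as an array of rest-frame spectra on a common wavelength grid (with NaNs where a system does not contribute):

```python
import numpy as np

def stacked_mean(stack):
    """Arithmetic mean of a stack of rest-frame spectra (n_systems x n_wavelengths),
    clipping the highest and lowest 3% of contributing values at each wavelength."""
    f_s = np.empty(stack.shape[1])
    for k in range(stack.shape[1]):
        col = np.sort(stack[~np.isnan(stack[:, k]), k])
        n_clip = int(0.03 * col.size)
        if n_clip > 0:
            col = col[n_clip: col.size - n_clip]
        f_s[k] = col.mean() if col.size else np.nan
    return f_s
```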
We must also allow for the fact that the overwhelming majority of the absorption in the spectra is not associated with the selected SBLAs. These unassociated absorbers do not generate any absorption features correlated with our selected Ly\(\alpha\), but they do have an impact on the mean stacked spectrum. When a mean stacked spectrum is calculated, a broad suppression of transmitted flux is seen (see Figure 2). Since this absorption is not associated with the selected systems, it is undesirable in the pursuit of a composite absorption spectrum of the selected systems. The stacked flux departs from unity even in regions where Lyman-series and metal features are not expected, despite the fact that each spectrum was continuum normalised before being stacked (Figure 2). These broad flux variations result mainly from the smoothly varying average contribution of uncorrelated absorption. The artefacts of the stacking procedure are unwanted in a composite spectrum of the selected systems but vary smoothly enough that one can distinguish them from absorption features of interest. Since they are analogous to quasar continua, P10 gave these artefacts in the stacked spectra the name 'pseudo-continua'. They argued that the effect of this contamination in the mean stacked spectrum can be best approximated by an additive factor in flux decrement. This is because quasar absorption lines are narrower than the SDSS resolution element and hence would typically be separable lines in perfect spectral resolution. These uncorrelated absorbers are present on either side of the feature of interest and it is reasonable to assume that they will continue through the feature of interest, contributing to the absorption in every pixel of the feature on average without typically occupying the same true, resolved redshift range. In this regime each contributing absorber makes an additive contribution to the flux decrement in a pixel. The alternative regime, where absorption is additive in opacity, leads to a multiplicative correction, but weak absorption features (such as those we measure here) are insensitive to the choice of a multiplicative or additive correction. In light of these two factors we continue under the approximation of additive contaminating absorption.

\begin{table} \begin{tabular}{l c c c c} \hline Sample & F\({}_{\rm lower}\) & F\({}_{\rm upper}\) & \(<\)Purity(\%)\(>\) & Number of SBLAs \\ \hline \hline FS0 & -0.05 & 0.05 & 89 & 42,210 \\ FS1 & 0.05 & 0.15 & 81 & 86,938 \\ FS2 & 0.15 & 0.25 & 55 & 141,544 \\ P30 & -0.05 & 0.25\({}^{a,b}\) & 63 & 335,259 \\ P75 & -0.05 & 0.25\({}^{a}\) & 90 & 124,955 \\ P90 & -0.05 & 0.25\({}^{a}\) & 97 & 74,660 \\ \hline \end{tabular} \({}^{a}\) Hard limit. True maximum is a function of S/N tuned for the desired minimum purity. \({}^{b}\) Redshift-limited version of the Pérez-Rafols et al. (2023) sample. \end{table} Table 2: Possible \(2.4<z<3.1\) SBLA samples, their flux transmission boundaries (in 138 km s\({}^{-1}\) bins) and their purity with respect to true (noiseless) flux transmission of \(F_{\rm Ly\alpha}<0.25\).

We therefore arrive at a composite spectrum of SBLAs by correcting the stacked spectrum using \[F_{C}(\lambda_{r})=F_{S}+(1-P), \tag{2}\] where (again) \(F_{S}\) represents the mean stacked flux and \((1-P)\) represents the flux decrement of the 'pseudo-continuum', which can be estimated by fitting a spline through flux nodes representing this pseudo-continuum. To calculate these nodes we first manually select regions of the stacked spectrum in areas where signal from correlated absorption is not seen and/or expected. Then for each such 'pseudo-continuum patch', we define the corresponding node using the mean of the flux and wavelength values of all stacked pixels within this patch. In estimating the pseudo-continuum we typically use \(\sim\) 10 A wide "patches" of spectrum. However, smaller continuum patches were used in regions crowded by correlated absorption features, while much wider segments were selected for relatively flat regions of the stacked spectrum. Figure 2 shows the pseudo-continuum along with the regions used to estimate it for the mean stacked spectrum corresponding to FS0. The corresponding composite spectrum is shown in Figure 3.

Figure 2: The stacked spectrum of the SBLA system sample FS0 (systems selected with flux in the range \(-0.05\leq F<0.05\)) is plotted with a solid blue curve. The stacked spectrum shows broad continuum variations resulting from uncorrelated absorption. The overlaid orange curve represents this pseudo-continuum. The regions used to estimate the pseudo-continuum are shown as green shaded regions within vertical green dashed lines.

## 5 Improved estimations of measurement uncertainty In this work, we explore a more inclusive treatment of measurement uncertainty than P10 and P14, allowing more reliable fits and more quantitative model comparison. We will initially summarise the previous method in order to expand on our more precise error estimations.
### Quick bootstrap method In P10 and P14 the errors were estimated for the stacked spectrum alone, i.e. prior to the pseudo-continuum normalisation step above. In taking this approach, they did not formally include the potential contribution of the pseudo-continuum normalisation to the uncertainty. Instead they took the conservative choice of scaling the errors generated by the bootstrap method by a factor of root-2, assuming that pseudo-continuum fitting introduced an equal contribution to the uncertainty of the final composite spectrum. Errors in the stacked spectrum were estimated by bootstrapping the stack of spectra. At every wavelength bin in the stack, 100 bootstrap realisations were produced and the error was calculated as the standard deviation of the means calculated from those random realisations. This was performed independently for each bin. In the process of exploring improved estimates of uncertainty in the composite spectrum of Ly\(\alpha\) forest systems, we have learned that 100 realisations is not a sufficient number for precision error estimates. Based on these convergence tests we advocate generating 1,000 realisations to have high confidence of accuracy. See Appendix B for more detail on this choice. ### End-to-end bootstrap method In this work we wish to relax the assumption of P14 that pseudo-continuum fitting introduces an uncertainty in the composite spectrum equal to, but uncorrelated with, the combination of other sources of error. In order to do this, we seek to estimate the errors from the telescope all the way to the final data analysis step of producing a composite spectrum. In order to build an end-to-end error estimation framework we begin by bootstrapping the sample of SBLAs and their accompanying spectra. For each random realisation of the sample, we construct a realisation of the stacked spectrum following the same approach as that in the quick bootstrap method. The key difference is that we do not simply calculate an uncertainty in the stacked spectrum and propagate it forward analytically through the pseudo-continuum normalisation to the composite spectrum. Instead we include this process in the bootstrap analysis by performing the pseudo-continuum fit and normalisation upon each realisation. The patches used to fit the pseudo-continuum of our observed stacked spectrum (as described in Section 4) were applied to each of the bootstrap realisations to obtain spline nodes for a unique pseudo-continuum per realisation. This created an ensemble of 1,000 bootstrapped realisations of the (pseudo-continuum normalised) composite spectrum, \((F_{C})_{i}\), where \(i\) denotes the \(i\)th bootstrap realisation at every wavelength. Finally, the error in the composite flux \(\sigma_{F_{C}}\) is estimated as the standard deviation of the ensemble \((F_{C})_{i}\) at every wavelength. The resulting uncertainties in the composite flux derived using the end-to-end error estimation method are shown in Figure 3 using blue error bars.

Figure 3: Composite spectrum of the SBLA system sample FS0 (systems selected with flux between \(-0.05\leq F<0.05\)) produced using the arithmetic mean statistic. Error bars are shown in blue. Vertical dashed lines indicate identified metal lines and dotted vertical lines denote the locations of the Lyman series. Note the scale of the y-axis in each panel: this is our lowest S/N composite spectrum and yet we measure absorption features with depth as small as 0.0005.
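Schematically, the end-to-end bootstrap can be written as follows, where `pseudo_continuum` stands in for the patch-and-spline fit of Section 4 (names are ours, and the 3% clipping is omitted for brevity):

```python
import numpy as np

def bootstrap_composite_error(stack, pseudo_continuum, n_boot=1000, seed=0):
    """Propagate bootstrap uncertainty through stacking *and* pseudo-continuum
    normalisation: resample systems, rebuild the stacked spectrum, refit the
    pseudo-continuum, and take the per-wavelength scatter of the composites."""
    rng = np.random.default_rng(seed)
    n_sys = stack.shape[0]
    composites = np.empty((n_boot, stack.shape[1]))
    for b in range(n_boot):
        idx = rng.integers(0, n_sys, n_sys)      # resample SBLAs with replacement
        f_s = np.nanmean(stack[idx], axis=0)      # stacked spectrum of this realisation
        p = pseudo_continuum(f_s)                 # refit the pseudo-continuum
        composites[b] = f_s + (1.0 - p)           # equation (2)
    return composites.std(axis=0)
```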
Figure 4 illustrates the end-to-end error estimation mechanism for a region of the stack around the Si ii \(\lambda 1260\) absorption signal. The stack is shown in the top panel of the figure along with a pair of continuum patches on either side of the absorption feature, as well as the pseudo-continuum estimate. This panel also marks the locations of three pixels chosen as examples to illustrate the method: the pixel at the centre of the absorption feature and the pixels at the middle of the continuum patches on the 'blue' and 'red' side of the feature. The panels in the bottom row of Figure 4 show the distribution of realisations for the stacked spectrum (open histogram) and composite spectrum (filled histogram). For convenience of comparison, each distribution is plotted with respect to that distribution's mean (i.e. \(f_{pix,i}=(\tilde{F_{C}})_{i}-\langle\tilde{F_{C}}\rangle\) or \(f_{pix,i}=(\tilde{F_{S}})_{i}-\langle\tilde{F_{S}}\rangle\)). The wavelength for each distribution is indicated by the vertical dot-dash line of matching colour in the top panel. The interval described by the standard deviation of each distribution is indicated using vertical solid lines for the stacked spectrum (\(\pm\sigma_{F_{S}}\)) and vertical dashed lines for the composite spectrum (\(\pm\sigma_{F_{C}}\)). We can further compare the uncertainty derived for the composite spectrum and the stacked spectrum through the ratio \(\epsilon=\sigma_{F_{C}}/\sigma_{F_{S}}\) as a function of wavelength. An \(\epsilon>1\) indicates that uncertainty is increased by the pseudo-continuum fitting, whereas \(\epsilon<1\) indicates that pseudo-continuum fitting is suppressing variance. We again take the example of Si ii 1260A and show \(\epsilon\) as a function of wavelength in Figure 5. As illustrated for Si ii 1260A, line absorption features show an additional uncertainty and the regions between them show variance suppression. The latter is to be expected because the pseudo-continuum fitting suppresses large-scale deviations in uncorrelated absorption by erasing low order modes in the spectra. On the other hand, the absorption features themselves are free to deviate and show the increased uncertainty of interest.

Figure 4: Illustration of the end-to-end error estimation mechanism using a region of the FS0 stack around the Si ii \(\lambda 1260\) absorption feature. **Top row:** The stacked spectrum around the absorption feature centred at 1260Å is shown using a black curve. The shaded grey regions represent a pair of continuum patches on either side of the feature. The pseudo-continuum is also shown using the orange dashed curve. The green, blue and red vertical lines mark the locations of three pixels chosen for illustration: the pixel at the centre of the absorption feature and the pixels at the midpoints of the continuum patches located on the left and right of the feature, respectively. **Bottom row:** Each panel shows the distributions of the stacked and composite flux across all the realisations at one of the pixels marked in the upper panel. The wavelength of each distribution is indicated by their colour and the colour of the dot-dash line in the top panel. The distributions are shown on a linearly shifted flux scale so that the mean of each distribution corresponds to \(f_{pix}=0\).
The stacked flux distribution is shown using an open histogram while the composite flux distribution is shown using a shaded histogram, and their corresponding standard deviations are shown using vertical solid and dashed lines, respectively.

The value of \(\epsilon\) for every measured metal line bin is given in Table 4. The pseudo-continuum normalisation process does increase the uncertainty at the absorption locations, but the increase is smaller than the 41% increase implied by the root-2 assumption of P14. Only C iii (977A) and Si ii (1190A) show a greater than 10% increase in errors, and so overall a more accurate (but less conservative) error estimate would have been to neglect the contribution of pseudo-continuum fitting. We note, however, that the degree of noise suppression in feature-free regions and the degree of noise inflation at absorption feature centres are both dependent on the placement and width of the patches used to generate spline nodes (shown in Figure 2). Therefore we advise caution if using quick bootstraps with these \(\epsilon\) measurements as correction factors, if precise error estimates are needed. The placement of these patches may change if absorption features are broader/narrower than the results presented here, leading to changes in \(\epsilon\). ## 6 Measurement of the SBLA halo mass We cross-correlate the main FS0 sample of SBLAs with the Ly\(\alpha\) forest in order to measure the large-scale structure bias, and constrain the SBLA halo mass. The Ly\(\alpha\) forest is prepared in a distinct way for this analysis using the standard method developed for correlation function analyses, as outlined in our companion paper (Perez-Rafols et al., 2023, hereafter PR22). We summarise the data preparation briefly in Appendix A and refer the reader to that paper for a detailed discussion. Figure 6 shows the measured cross-correlation and the best-fit model. The best fit has \(\chi^{2}=5060.602\) for 4904 degrees of freedom (probability \(p=0.058\)). The best-fit value of the SBLA bias parameter is \[b_{\rm SBLA}=2.34\pm 0.06, \tag{3}\] where the quoted uncertainty only includes the stochastic errors. The recovered \(b_{\rm SBLA}\) value is consistent with that found by PR22. If all SBLAs were sited on halos of a single mass, this mass would be \(\sim 7.8\times 10^{11}\rm h^{-1}M_{\sun}\). However, SBLAs are likely found in halos with a range of masses. Following what Perez-Rafols et al. (2018) proposed for DLAs (see their equations 15 and 16 and their figure 8), a plausible distribution of the SBLA cross-section, \(\Sigma(M_{h})\), is a power law in halo mass, starting with some minimal halo mass: \[\Sigma\left(M_{h}\right)=\Sigma_{0}\left(\frac{M_{h}}{M_{\rm min}}\right)^{-\alpha}\ \left(M_{h}>M_{\rm min}\right). \tag{4}\] Using this cross-section, the mean halo mass is computed as \[\overline{M_{h}}=\frac{\int_{M_{\rm min}}^{\infty}n(M)\Sigma(M)MdM}{\int_{M_{\rm min}}^{\infty}n(M)\Sigma(M)dM}\, \tag{5}\] where \(n\left(M\right)\) is the number density of halos for a given mass. For plausible values of \(\alpha=0.50\), \(0.75\) and \(1.00\) this yields a mean mass of \(1.3\times 10^{12}\rm h^{-1}M_{\sun}\), \(9.4\times 10^{11}\rm h^{-1}M_{\sun}\), and \(7.6\times 10^{11}\rm h^{-1}M_{\sun}\) respectively. We note that a detailed study of this cross-section using simulations is necessary to make more accurate mass estimates, but our findings indicate that SBLAs reside in halos of mass \(\approx 10^{12}\rm h^{-1}M_{\sun}\).
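As an illustration of equations (4) and (5), the cross-section-weighted mean halo mass can be evaluated numerically as below; `n_of_M` is a placeholder for a halo mass function at the relevant redshift (e.g. from an external library) and is an assumption of this sketch, not something provided in this paper:

```python
import numpy as np

def mean_halo_mass(n_of_M, M_min, alpha, M_max=1e16, n_grid=2000):
    """Cross-section-weighted mean halo mass for Sigma(M) ~ (M / M_min)^(-alpha),
    integrating over log M from M_min to M_max (units follow n_of_M)."""
    logM = np.linspace(np.log(M_min), np.log(M_max), n_grid)
    M = np.exp(logM)
    # weight = n(M) * Sigma(M) * dM/dlnM, with Sigma normalisation cancelling in the ratio
    weight = n_of_M(M) * (M / M_min) ** (-alpha) * M
    num = np.trapz(weight * M, logM)
    den = np.trapz(weight, logM)
    return num / den
```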
It is informative to compare this with order of magnitude estimates of the halo mass derived by assuming that the width of the SBLA line blend is driven by the circular velocity of virialised halo gas undergoing collapse. This connection between halo circular velocity, halo virial mass and galaxy populations has been well-explored (e.g. Thoul & Weinberg, 1996). Specifically we apply the relationship between maximal circular velocity and halo mass modelled by Zehavi et al. (2019). Using these relations, we infer that a circular velocity of 138 km s\({}^{-1}\) at \(z\sim 2.4\) leads to a halo mass estimate of \(M_{h}\sim 3\times 10^{11}\rm h^{-1}M_{\sun}\). This value is broadly consistent with our findings from SBLA clustering, supporting our assumption that the blending scale is associated with the halo circular velocity and so the halo mass. This may shed some light on the reason why SBLAs are CGM regions. ## 7 Average SBLA absorption properties As one can see in Figure 3, absorption signal is measurable in the composite spectrum for a wide range of transitions: Lyman-series lines (Ly\(\alpha\) - Ly\(\theta\)) and metal lines (O i, O vi, C ii, C iii, C iv, Si ii, Si iii, Si iv, N v, Fe ii, Al ii, Al iii, and Mg ii), but care must be taken to measure them in a way that is self-consistent and without bias.

Figure 5: The ratio, \(\epsilon\), of the \(1\sigma\) error in the composite flux (\(\sigma_{F_{\rm{C}}}\)) to that of the stacked flux (\(\sigma_{F_{\rm{S}}}\)) for the FS0 sample is plotted over the region around the Si ii \(\lambda\)1260 feature. The shaded grey regions represent a pair of continuum patches on either side of the feature. The vertical lines correspond to the locations of the pixels marked in Figure 4.

Figure 6: Cross-correlation function averaged over the full angular range \(0<|\mu|<1\) for the fitting range \(10<r<80\ h^{-1}\)Mpc. The solid line shows the best-fit model.

Although these features appear to be absorption lines, they are in fact a complex mix of effects that precludes the naive application of standard absorption line analysis methods appropriate for individual spectrum studies. P14 demonstrated that the main difference in interpretation of the 3 potentially CGM-dependent samples (which we have named FS0, FS1 and FS2) was the purity of CGM selection in light of spectral noise, given the large excess of pixels with higher transmission that might pollute the sample. Since FS0 has the lowest transmission, it is the purest of these samples. Hence, in this work directed at understanding CGM properties, we focus on interpreting FS0 sample properties. Throughout this work we only present lines measured with 5\(\sigma\) significance or greater. N v, for example, fails to meet this requirement and is not included in the measurements presented below. ### Line-of-sight integration scale There are two approaches to the measurement of absorption features seen in the composite spectra (as identified in P14): the measurement of the full profile of the feature and the measurement of the central pixel (or, more accurately, resolution element). In order to understand this choice, it is necessary to reflect, briefly, on the elements that give rise to the shape and strength of the features.
The signal present for every absorption feature is a combination of

* the absorption signal directly associated with the selected Ly\(\alpha\) absorption,
* possible associated absorption complexes extending over larger velocities (typically associated with gas flows, often with many components), and
* sensitivity to large-scale structure (including redshift-space distortions) reflected in the well-documented (e.g. Chabanier et al., 2019) fact that Ly\(\alpha\) forest absorption is clustered, leading to potential clustering in associated absorbers also (e.g. Blomqvist et al., 2018).

In large-scale structure terminology the first two points are 'one-halo' terms and the last one is a 'two-halo' term. This two-halo effect is clearly visible in the wide wings of the Ly\(\alpha\) absorption feature extending over several thousand \(\,\mathrm{km\,s^{-1}}\). Since the metal features seen are associated with Ly\(\alpha\), every one must present an analogous (albeit weak) signal due to the clustering of SBLAs. Although this large-scale structure signal is present in the composite, our stacking analysis is poorly adapted to the measurement of large-scale structure since the signal is degenerate with the pseudo-continuum fitting used, and the preferred measurement framework for this signal is the Ly\(\alpha\) forest power spectrum (McDonald et al., 2006). As outlined in Section 3, the selection of SBLAs to be stacked includes clustering and therefore both complexes and large-scale structure. Therefore even the central pixel includes all the above effects to some extent, but limiting ourselves to the measurement of the central pixel sets a common velocity integration scale for absorption measurement. In fact, since the resolution of SDSS is 2.4 pixels, the appropriate common velocity scale is two native SDSS pixels. We therefore take the average of the two native pixels with wavelengths closest to the rest frame wavelength of the transition in question as our analysis pixel. This fixes the integration scale to 138 \(\,\mathrm{km\,s^{-1}}\). This mirrors the Ly\(\alpha\) selection function bin scale, which is also a 2-pixel average (see Section 3). The error estimate for the flux transmission of this double-width wavelength bin is taken as the quadrature sum of the uncertainty for the two pixels in question (a conservative approximation that neglects the fact that errors in neighbouring pixels are correlated due to pipeline and analysis steps such as pseudo-continuum fitting). Hereafter we will use 'central bin' to refer to this 2-pixel average centred around the rest frame wavelength of the transition of interest. In contrast, P14 showed that measuring the full profile of the features leads to a different velocity width for every feature, indicating either varying sensitivity to these effects or the tracing of different extended complexes. Critically, this means that some absorption must be coming from physically different gas. Since the objective of this work is the formal measurement and interpretation of the systems selected, we limit ourselves to the central analysis pixels at the standard rest frame wavelength of each transition. We note, however, that information is present in the composite spectra on the velocity scale of metal complexes and this demands further study if it can be disentangled from large-scale structure.
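A minimal sketch of this central-bin measurement, given a composite spectrum, its uncertainties, and the rest-frame wavelength grid (function and variable names are ours):

```python
import numpy as np

def central_bin(wave_rest, flux_c, err_c, line_wave):
    """Average the two native pixels closest to a transition's rest wavelength
    (a 138 km/s bin) and combine their errors in quadrature, as in the text."""
    order = np.argsort(np.abs(wave_rest - line_wave))
    i, j = np.sort(order[:2])
    f = 0.5 * (flux_c[i] + flux_c[j])
    sigma = np.sqrt(err_c[i] ** 2 + err_c[j] ** 2)
    return f, sigma
```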
### Measuring the H i Column density Here we compare Lyman series line measurements in the composite spectrum with a variety of models in order to constrain the column density and Doppler parameter. As we have stressed throughout this work, our SBLA samples are a blend of unresolved lines contributing to a 138 \(\,\mathrm{km\,s^{-1}}\) central bin. As a result a range of H i column densities are present in each SBLA. While the full range of H i columns contribute to the selection, it is reasonable to presume that a high column density subset dominate the signal in the composite. It is, therefore, natural that the further we climb up the Lyman series, the more we converge on a signal driven by this dominant high-column subset. Here we exploit this expected convergence to jointly constrain the integrated dominant H i column density (\(\mathrm{N_{H\textsc{i}}}\)) of lines in the blend and their typical Doppler parameter (\(b\)). In the following, the results are presented as equivalent widths to follow standard practise, but the measurements are in fact central bin flux decrements (\(1-F_{C}\)) multiplied by the wavelength interval corresponding to the 138 \(\,\mathrm{km\,s^{-1}}\) central bin interval. In effect, the equivalent widths presented are the integrated equivalent widths of all lines contributing to that central bin measurement. We build a grid of model1 equivalent widths for the eight strongest Lyman transitions over the range \(13.0\leq\log\mathrm{N_{H\textsc{i}}}(\mathrm{cm^{-2}})\leq 21.0\) with interval \(\delta\log\mathrm{N_{H\textsc{i}}}(\mathrm{cm^{-2}})=0.01\), and \(5.0\leq b\) (\(\,\mathrm{km\,s^{-1}}\)) \(\leq 50.0\) with interval \(\delta b=0.1\)\(\,\mathrm{km\,s^{-1}}\). These models are built for the composite spectrum wavelength solution and include instrumental broadening of 167 \(\,\mathrm{km\,s^{-1}}\). Footnote 1: Produced using VPFIT 10.0 (Carswell & Webb, 2014) In order to measure the dominant H i contribution, we must determine which of the Lyman series lines should be treated as upper limits progressively, starting with Ly\(\alpha\) and moving up the series until a converged single line solution of satisfactory probability is reached. For each line considered as upper limit, if the model prediction lies 1\(\sigma\) equivalent width error above the measured equivalent width, the line contributes to the total \(\chi^{2}\) for the model and one degree of freedom gets added to the number of degrees of freedom for the model. If the model prediction lies below this threshold, it does not contribute to the total \(\chi^{2}\) and the number of degrees of freedom for the model remain unchanged. This process 'punishes' the overproducing models instead of rejecting them. The probability for each model is calculated based on the total \(\chi^{2}\) and the updated number of degrees of freedom. The best-fit model for a given upper-limit assignment scheme is determined by maximising the probability. The best-fit probabilities, \(N\) and \(b\)-values corresponding to the different upper-limit assignment schemes are compared to determine the number of lowest order Lyman lines assigned to upper limits (\(N_{ul}\)) necessary to achieve a converged probability. The convergence for the FS0 sample is shown in Figure 7. The model that corresponds to the convergence is chosen as the best-fit model for the H i column density and Doppler parameter for that sample. 
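The convergence test can be summarised in a short sketch. In the paper the model equivalent widths come from VPFIT-built Voigt-profile grids that include saturation and the 167 km s\({}^{-1}\) line spread function; the `toy_model_ew` stand-in below uses only the linear curve of growth (and ignores \(b\)), so it illustrates the upper-limit bookkeeping rather than reproducing Table 3. The subtraction of two degrees of freedom for the fitted parameters is an assumption of this sketch.

```python
import numpy as np
from scipy import stats

# (wavelength in Angstrom, oscillator strength), Ly-alpha ... Ly-theta
LYMAN = [(1215.67, 0.4164), (1025.72, 0.07912), (972.54, 0.02900),
         (949.74, 0.01394), (937.80, 0.00780), (930.75, 0.00481),
         (926.23, 0.00318), (923.15, 0.00222)]

def toy_model_ew(logN, b, line):
    wavelength, f_osc = line                    # b is unused in this linear stand-in
    return 8.85e-21 * 10 ** logN * f_osc * wavelength ** 2   # EW in Angstrom

def best_fit(ew_obs, ew_err, n_ul, model_ew=toy_model_ew,
             grid_logN=np.arange(13.0, 21.0, 0.05), grid_b=np.arange(5.0, 50.0, 1.0)):
    """Maximise the fit probability treating the first n_ul Lyman lines as upper limits."""
    best = (None, None, -1.0)
    for logN in grid_logN:
        for b in grid_b:
            chi2, dof = 0.0, 0
            for i, line in enumerate(LYMAN):
                model = model_ew(logN, b, line)
                overproduced = model > ew_obs[i] + ew_err[i]
                if i >= n_ul or overproduced:   # upper limits only "punish" overproduction
                    chi2 += ((model - ew_obs[i]) / ew_err[i]) ** 2
                    dof += 1
            prob = stats.chi2.sf(chi2, max(dof - 2, 1))   # 2 fitted parameters (an assumption)
            if prob > best[2]:
                best = (float(logN), float(b), prob)
    return best

ew_obs = [0.42, 0.30, 0.22, 0.17, 0.13, 0.10, 0.08, 0.06]   # illustrative numbers only
ew_err = [0.01] * 8
print(best_fit(ew_obs, ew_err, n_ul=5))
```

Increasing `n_ul` from zero and watching the returned parameters and probability stabilise mirrors the convergence shown in Figure 7.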
Figure 8 shows the measured equivalent widths (\(W\)) normalised by the oscillator strength (\(f\)) and rest frame wavelength (\(\lambda\)) for each transition for the FS0 sample. Also shown is the best-fit model, along with models for the \(1\sigma\) upper and lower confidence intervals on the dominant H i column density. Note that when plotted this way, unsaturated lines would produce a constant \(W/(f\lambda)\), and so the dominant H i population is only beginning to show unsaturated properties for the highest Lyman series transitions measured. Table 3 shows the fit results for this procedure. The differences in measured column densities between FS0, FS1, and FS2 demonstrate that, along with decreasing purity of the noiseless \(F_{\rm Ly\alpha}<0.25\) selection, higher transmission bands also select lower column densities. The P90, P75 and P30 samples show a similar trend but show a weaker variation in H i column density along with a weaker decline in mean purity. This, combined with the large numbers of systems selected, indicates that these purity cuts do indeed provide more optimal SBLA samples. While we chose to focus on FS0 in order to preserve sample continuity for comparison with previous work, we recommend a transition to such optimised selection in future work. This supports the choice taken in Perez-Rafols et al. (2023) to use the P30 sample.

### Average Metal Column Densities

Unlike the H i measurement above, metal features in the composite are sufficiently weak that several metal transitions are not necessary to establish a reliable column density. However, the combination of line strength and measurement precision means that the small opacity approximation (that the relationship between equivalent width and column density is linear) is inadequate for our needs. Again, given that we lack a large number of metal transitions with a wide dynamic range of transition strengths for each metal species, a suite of model lines (as performed for H i) is not necessary. We instead fit them directly with column density as the only free parameter, treating each feature as fully independent of one another. We assume a Doppler parameter value taken from the H i measurement (see below). We fit the mean of the pair of pixels nearest to the transition wavelength with instrumental broadening set to 167 km s\({}^{-1}\) using VPFIT. Since VPFIT was not designed to reliably assess the uncertainty in the column density from a single pixel at a time, we pass the upper and lower \(1\sigma\) error envelopes through VPFIT for every line to obtain \(N_{min}\) and \(N_{max}\) respectively. The measurements for our main sample (FS0) are given in Table 4. We exclude from our analysis all transitions where there is a significant contribution to the central 138 km s\({}^{-1}\) by the broad wing of a neighbouring feature. In principle, it is possible to fit the superposed features, correct for the profile of the unwanted feature and measure the 138 km s\({}^{-1}\) central core of the desired line, but these blended features are incompatible with the population modelling procedure that follows and so are of limited value.

Figure 8: The best fit H i model (_green solid_ line) and the limiting \(\pm 1\sigma\) allowed models (_orange dashed_ line) compared to Lyman series equivalent width measurements for the FS0 sample. The upper limits reflect the convergence described in the text and illustrated in Figure 7.
\begin{table} \begin{tabular}{l c c c c} \hline \hline Sample & \(\log\mathrm{N_{H\textsc{i}}}\)(cm\({}^{-2}\)) & \(b\)(km s\({}^{-1}\)) & \(N_{ul}\) & Prob \\ \hline \hline FS0 & \(16.04^{+0.06}_{-0.06}\) & \(18.1^{+0.7}_{-0.6}\) & 5 & 0.04 \\ FS1 & \(15.64^{+0.06}_{-0.06}\) & \(12.3^{+0.4}_{-0.4}\) & 3 & 0.6 \\ FS2 & \(15.1^{+0.06}_{-0.07}\) & \(8.5^{+1.0}_{-0.3}\) & 5 & 0.13 \\ P30 & \(15.49^{+0.06}_{-0.01}\) & \(10.8^{+1.4}_{-0.1}\) & 5 & 0.4 \\ P75 & \(15.67^{+0.06}_{-0.03}\) & \(13.5^{+0.3}_{-0.3}\) & 5 & 0.27 \\ P90 & \(15.79^{+0.06}_{-0.07}\) & \(14.6^{+1.0}_{-0.1}\) & 5 & 0.37 \\ \hline \end{tabular} \end{table}

Table 3: Inferred H i column densities from Lyman series measurements.

Figure 7: Test of H i Lyman series upper limits (starting with Ly \(\alpha\) as an upper limit and progressively adding higher order Lyman lines) for convergence to determine best fit model parameters for the FS0 composite. The shaded bands represent the final best fit parameters for \(\log\mathrm{N_{H\textsc{i}}}\) (top, blue) and \(b\) (middle, red). The probability (of a higher \(\chi^{2}\)) for each best-fit model, as a function of the number of upper limits, is given in the bottom panel (green).

Examples of cases where a broad feature wing contaminates the desired feature centre (and are hence discarded) are O i(989Å), N iii(990Å) and Si ii(990Å), and C ii(1036Å) and O vi(1037Å). On the other hand, O i(1302Å) and Si ii(1304Å) are retained in our analysis despite being partially blended in our composite spectrum. The contribution of the Si ii(1304Å) feature wing to the central O i analysis bin is 3% of the observed flux decrement. The O i feature wing contributes 6% of the observed flux decrement to the Si ii(1304Å) measurement. This is illustrated in Figure 9. In each case the spectral error estimate is similar to the size of the contamination. As we shall see later in Section 7, the error estimates of the composite are too small for any true model fit and instead the limiting factor is the much larger uncertainty in the population model fits of Section 8. Another consequence of our inability to resolve the individual lines that give rise to our metal features (and our lack of a dynamic range of transition strengths) is that we lack the ability to constrain the Doppler broadening parameter. However, we do have a statistical measurement of the Doppler parameter of systems that dominate the blend selected. This is the value of the Doppler parameter obtained from the H i measurement. While the measurement of narrow lines in wide spectral bins is often insensitive to the choice of Doppler parameter, in our measurements it does matter. The theoretical oversampled line profile is a convolution of the narrow line and the line spread function. Our choice of 2 spectral bins is much larger than the former but does include the entire line spread function. This means that the choice of Doppler parameter in the model does have an impact. For example, changing the Doppler parameter by 5 km s\({}^{-1}\) generates a change of \(\Delta(\log N)\lesssim 0.1\) (the strongest features are closest to this limit, e.g. C iii). Normally this degree of sensitivity would be considered small, but in the context of the extremely high precision of the average column density statistic, the choice of using the H i Doppler parameter is a significant assumption. Again we shall see in Section 8 that the population analysis implies larger column density errors.
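As an illustration of fitting a single central-bin flux with the column density as the only free parameter, the sketch below inverts a Doppler-broadened curve of growth for one transition. It neglects the 167 km s\({}^{-1}\) instrumental broadening that VPFIT includes, so it will not reproduce the Table 4 columns; the oscillator strength used is the standard C iv 1548 value, and the numbers are illustrative only.

```python
import numpy as np
from scipy.optimize import brentq

def equivalent_width(logN, b_kms, wavelength_A, f_osc):
    """Rest-frame EW (Angstrom) of a single Doppler-broadened line (no LSF)."""
    # tau0 = 1.497e-15 * N[cm^-2] * f * lambda[Angstrom] / b[km/s]
    tau0 = 1.497e-15 * 10 ** logN * f_osc * wavelength_A / b_kms
    v = np.linspace(-10 * b_kms, 10 * b_kms, 4001)           # velocity grid, km/s
    tau = tau0 * np.exp(-(v / b_kms) ** 2)
    dlam = wavelength_A * (v[1] - v[0]) / 2.998e5             # km/s -> Angstrom
    return np.sum(1.0 - np.exp(-tau)) * dlam

def column_from_flux(F_C, b_kms, wavelength_A, f_osc, dv_kms=138.0):
    """Invert the measured central-bin transmission for log N (cm^-2)."""
    w_obs = (1.0 - F_C) * wavelength_A * dv_kms / 2.998e5     # decrement -> integrated EW
    g = lambda logN: equivalent_width(logN, b_kms, wavelength_A, f_osc) - w_obs
    return brentq(g, 10.0, 17.0)

# e.g. C iv 1548 with the FS0 mean transmission from Table 4 and the H i Doppler parameter;
# without the line spread function this underestimates the published column.
print(column_from_flux(0.8764, 18.1, 1548.20, 0.1899))
```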
### Modelling average metal column densities In order to interpret our measurements of SBLA sample FS0 (both for the ensemble SBLA mean and the population properties in Section 8) we follow the simple framework in P10 and P14. We will review this analytic framework here, and for further details see P14. A key supporting assumption for what follows is that the gas studied follows the optically thin (to ionizing photons) approximation. This assumption is supported by various arguments. First of all, as stated in Section 3 Damped Lyman-\(\alpha\) systems in the DR16Q sample are masked. Secondly, the mean H i column density found (see Section 7.2) is that of optically thin gas. Thirdly, the population analysis (see Section 8) indicates that Ly\(\epsilon\) is homogeneous indicat \begin{table} \begin{tabular}{l c c c c c c c c} \hline Line & Wavelength (Å) & Ionization Potential (eV) & \(F_{C}\) & \(\sigma_{F_{C}}\) & \(\epsilon\) & \(\log{\rm N(cm^{-2})}\) & \(\log{\rm N_{max}(cm^{-2})}\) & \(\log{\rm N_{min}(cm^{-2})}\) \\ \hline OI & 1302.17 & 13.6 & 0.9743 & 0.0011 & 1.084 & 13.470 & 13.449 & 13.489 \\ MgII & 2796.35 & 15.0 & 0.9376 & 0.0031 & 1.034 & 12.450 & 12.424 & 12.474 \\ MgII & 2803.53 & 15.0 & 0.9404 & 0.0031 & 1.043 & 12.729 & 12.703 & 12.754 \\ FeII & 1608.45 & 16.2 & 0.9596 & 0.0007 & 1.020 & 12.509 & 12.433 & 12.573 \\ FeII & 2344.21 & 16.2 & 0.9878 & 0.0009 & 1.042 & 12.499 & 12.467 & 12.530 \\ FeII & 2382.76 & 16.2 & 0.9807 & 0.0009 & 1.032 & 12.252 & 12.231 & 12.272 \\ FeII & 2586.65 & 16.2 & 0.9932 & 0.0013 & 1.041 & 12.415 & 12.321 & 12.493 \\ FeII & 2600.17 & 16.2 & 0.9798 & 0.0014 & 1.031 & 12.361 & 12.329 & 12.390 \\ SiII & 1190.42 & 16.3 & 0.9709 & 0.0010 & 1.147 & 12.780 & 12.765 & 12.795 \\ SiII & 1193.29 & 16.3 & 0.9643 & 0.0010 & 1.165 & 12.574 & 12.561 & 12.586 \\ SiII & 1260.42 & 16.3 & 0.9481 & 0.0012 & 1.082 & 12.422 & 12.411 & 12.433 \\ SiII & 1304.37 & 16.3 & 0.9823 & 0.0010 & 1.076 & 13.044 & 13.017 & 13.069 \\ SiIII & 1526.71 & 16.3 & 0.9780 & 0.0006 & 1.032 & 12.886 & 12.872 & 12.899 \\ AlII & 1670.79 & 18.8 & 0.9740 & 0.0007 & 1.020 & 11.806 & 11.795 & 11.817 \\ CII & 1334.53 & 24.4 & 0.9428 & 0.0010 & 1.019 & 13.410 & 13.401 & 13.418 \\ AlIII & 1854.72 & 28.4 & 0.9904 & 0.0005 & 1.031 & 11.805 & 11.780 & 11.828 \\ AlIII & 1862.79 & 28.4 & 0.9965 & 0.0005 & 1.035 & 11.661 & 11.590 & 11.722 \\ SiIII & 1206.50 & 33.5 & 0.8904 & 0.0010 & 1.057 & 12.690 & 12.685 & 12.696 \\ SiIV & 1393.76 & 45.1 & 0.9367 & 0.0007 & 1.016 & 12.838 & 12.832 & 12.844 \\ CIII & 977.02 & 47.9 & 0.8180 & 0.0025 & 1.259 & 13.444 & 13.434 & 13.455 \\ CIV & 1548.20 & 64.5 & 0.8764 & 0.0008 & 1.029 & 13.586 & 13.582 & 13.590 \\ OVI & 1031.93 & 138.1 & 0.8994 & 0.0014 & 1.084 & 13.799 & 13.792 & 13.807 \\ \hline \end{tabular} \end{table} Table 4: Mean metal columns for the main sample, FS0. Figure 9: The contribution of Si ii(1304Å) to the central bin measurement of O i(1302Å) and vice versa. The _blue_ curve is the fit to the portion of the O feature that is Si ii–free (the blue-side of the profile). The _red_ curve is the fit to the portion of the Si ii feature that is O i–free (the red-side of the profile). The green curve is the joint fit of the full profiles of both features. The full profile fit is only used to measure the contribution to the measurement bin of the neighbouring line. As discussed in Section 7.1, we do not use the full profile measurement of features in this work. ing that the H i population does not deviate significantly from this mean. 
Finally, DLAs and Lyman limit systems are not sufficiently numerous to significantly modify our mean results (as discussed in Section 3). However, as we shall see in Section 8, when one delves further into the metal population behind the mean one finds that such small populations can have an important contribution if the absorption is sufficiently strong. Metal lines consistent with such small populations are identified in Section 8 and omitted from further analysis in this work.

In order to model the column density of each metal line from the measured H i column density we need a simple sequence of conversion factors: the neutral hydrogen fraction is needed to obtain the hydrogen column density, the metallicity (and an abundance pattern baseline) is needed to obtain the metal element column density, and finally the metal ionization fraction is needed to obtain the required metal column density. The ionization fractions are provided under the optically thin approximation by runs of CLOUDY (Ferland et al., 1998) with varying gas density and temperature using a quasar+galaxy UV background model (Haardt & Madau, 2001). For relative elemental abundances, we assume a solar abundance and take the solar abundance pattern of Anders & Grevesse (1989). The UV background and abundance patterns used are significant simplifying assumptions (see Section 9.2 for further discussion). In this work, we focus on constraining density and temperature from these ionization models with metallicity as an additional free parameter (acting as an overall normalisation of projected metal column densities). We give the gas density in terms of hydrogen atom number density, but this can be converted to gas overdensity by scaling up by 5.04 dex for a standard cosmology at \(z=2.7\).

In the process of interpreting these metal features, we take into account that all these features should build up a coherent picture either as a multi-phase medium, or multiple populations of systems, or both. By 'multiphase' we mean that an individual SBLA in our sample may be associated with multiple phases of gas that are unresolved in our data. Interpreting our average metal properties in a purely multiphase way presumes that all SBLAs stacked are the same. We will initially explore this straw-man model before going on to explore the underlying population, and combined multi-population and multi-phase fits, in Section 8.

One cannot fit a model to each of the ionisation species in isolation because a fit to one metal column density implies a prediction for another. We illustrate this point in figures 10 and 11. In each panel we attempt to fit one of O vi, Si iii or O i, varying the metallicity to maintain the fit while exploring density and temperature. In Figure 10 we take a reasonable density for each of the 3 species and a reasonable temperature of \(T=10^{4}\)K, and we vary the density around this value. In Figure 11 we vary instead the temperature around these reasonable values. The temperature, \(T=10^{4}\)K, is a standard estimate for a photoionized and photo-heated gas. The central densities are estimates intended to span the range of conditions required without over-production of other species (where possible). Note that the propagated errors associated with the uncertainty in the H i column density are approximately the width of the model lines shown and so can be neglected. In this plot (and all subsequent plots of this section) the measured column densities are those shown in Table 4.
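A minimal sketch of the conversion sequence just described, going from the H i column to a projected metal-ion column for one gas phase. All numerical inputs below (neutral fraction, ion fraction, abundance) are placeholders standing in for the CLOUDY outputs and the Anders & Grevesse abundance pattern used in the paper; they are not taken from it.

```python
import numpy as np

def predicted_metal_column(logN_HI, x_HI, log_solar_abundance, metallicity_dex, ion_fraction):
    """log10 column density (cm^-2) of one metal ion for one gas phase."""
    logN_H = logN_HI - np.log10(x_HI)                     # total hydrogen column
    logN_element = logN_H + log_solar_abundance + metallicity_dex
    return logN_element + np.log10(ion_fraction)          # project onto the chosen ion

# Illustrative only: silicon with log(Si/H)_sun ~ -4.5, [X/H] = -1, a phase with
# neutral fraction 1e-3 and a Si III ion fraction of 0.3 (all placeholder values).
print(predicted_metal_column(16.04, 1e-3, -4.5, -1.0, 0.3))
```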
Assuming that the gas is multi-phase, contributions to the column density are additive. In other words, one must not significantly over-produce column densities from any given phase, but under-producing is acceptable so long as the short-fall can be made up by other phases (that do not themselves lead to significant over-production of other metal column densities). One can see by eye that two to three phases are sufficient to generate all the ionization species in broad terms.

Figure 10: Metal column density models for the FS0 sample. Model curves are displayed assuming the gas is optically thin, the H i columns shown in Table 1, and a solar abundance pattern. Models to fit the column densities of O i (_top panel_), Si iii (_middle panel_), and O vi (_bottom panel_) are shown with varying density. Metallicities are tuned in order to fit the chosen species for a given density and temperature. A preferred value of density for a fixed temperature (\(10^{4}\)K) is chosen, attempting to avoid overproducing any species and avoiding unjustified extremes of density. Density is varied around this preferred value (_black line_) in the middle and bottom panels. In the top panel, we are not able to do this since the maximum density is the favoured one (_blue dashed line_) and we are only able to vary the density downwards.

Figure 12 shows the resulting overall model fit from summing these three phases for the reasonable densities and temperatures of figures 10 and 11. While not a full parameter search, it is clear that this multi-phase model produces the general trend required by the data, but only with extremely high density and metallicity for the CGM. However, it completely fails to offer the acceptable statistical agreement required by the very small measured uncertainties. One might attempt to generate instead four, five, six or more phases (indeed a plausible physical model would not be discrete at all), but each of our current three phases makes strong predictions for multiple species and the model lacks the freedom to meet the statistical requirements of the data. For instance, producing more Al iii without overproducing Si iii, C ii and Al ii seems implausible. Similarly, producing more Si iv without overproducing Si iii or further overproducing C iii seems implausible. Indeed the data is also not self-consistent in this purely multi-phase picture. For example, the five Si ii features measured are statistically divergent from one another. A natural solution to this puzzle presents itself: not all SBLAs are alike, and treating the composite spectrum as a measurement of a uniform population of lines with multi-phase properties is unrealistic.

### The covariance between SBLA metal features

In order to explore the absorbing population beyond the mean we can study the properties of the stack of spectra used to calculate the mean composite spectrum.
Naturally there is variance in the metal population giving rise to any given feature. In order to exploit these metal populations, we must develop an understanding of whether line strengths vary together. For example, it is expected that Si ii(1260Å) will be covariant with C ii given the predictions of the models shown in figures 10 and 11. On the other hand, it is far from clear if Si ii(1260Å) will be similarly covariant with O i, Si iii or even O vi. Insignificant covariance would imply that population variance is negligible. Similar and significant covariance between all species irrespective of ionization potential would indicate that metallicity variation is the main driver for population variation. On the other hand, significant differences in covariance of low, medium and high ions with themselves and each other are a sign of more complex multi-population properties. In order to explore this we calculate the covariance of the transmitted flux between our metal features normalised by their line strengths. The procedure used is set out in Appendix D. Figure D2 shows the covariance between pairs of lines measured at line centre normalised to the product of the associated mean absorption signal for each line (corresponding to the flux decrement in the composite spectrum at line centre). This normalisation is performed in order to allow meaningful comparisons of the covariance between lines of different intrinsic strengths.

Figure 11: As in Figure 10, metal column density models are shown for the FS0 sample. Models to fit the column densities of O i (_top panel_), Si iii (_middle panel_), and O vi (_bottom panel_) are shown with varying temperature around the value \(10^{4}\)K corresponding to the preferred values of Figure 10. Metallicities are again varied to provide the best fit to the chosen species for a given density and temperature.

Figure 12: The column densities of metal ionization species in order of decreasing ionization potential for the FS0 sample as in Figure 10. The best three models to fit the column densities of O i, Si iii, and O vi are shown. A combined model is shown, reflecting the multiphase scenario where each system stacked has the same properties and three phases of associated gas. By summing the columns from the three models without correction we are assuming that the H i is distributed equally in each phase. Each sample receives a third of the H i column and therefore the metallicity is three times larger than the values shown in the legend for the model.

In general covariance is approximately as large as the absorption strength or up to 4\(\times\) larger. In the top panel of Figure 11 we focus once again on transitions of our 3 indicative species: O i, Si iii, and O vi, for low, medium and high ionization species respectively. We show the trend of covariance with the best-measured carbon lines, the best-measured silicon lines and the remaining low ionization species in subsequent panels. We find that high ions are covariant with other ions with little or no sign of ionization potential dependence. Medium ions (Si iv, Si iii, Al iii and to an extent C iii and C ii) also show an increased (albeit weaker) covariance with low ions and no sign of raised covariance with each other. We can conclude that SBLAs are not all alike with respect to their mix of detected metal lines. High ions appear to be relatively homogeneous, low ions appear to be inhomogeneous. Medium ions lie between, and their inhomogeneity seems to be linked to the inhomogeneity of low ions.
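A simplified stand-in for the normalised covariance statistic described above (the full estimator is set out in Appendix D of the paper and may differ in detail). Here `flux_a` and `flux_b` are hypothetical arrays of central-bin transmissions for two metal features across the stack, after pseudo-continuum correction.

```python
import numpy as np

def normalised_covariance(flux_a, flux_b):
    """Covariance of two line-centre decrements, normalised by the product
    of their mean absorption strengths (the composite flux decrements)."""
    d_a = 1.0 - flux_a
    d_b = 1.0 - flux_b
    cov = np.mean((d_a - d_a.mean()) * (d_b - d_b.mean()))
    return cov / (d_a.mean() * d_b.mean())

# Toy usage with mock decrements that share a common "strong" sub-population
rng = np.random.default_rng(0)
strong = rng.random(10000) < 0.3
a = 1.0 - (0.02 + 0.08 * strong + rng.normal(0, 0.05, 10000))
b = 1.0 - (0.05 + 0.10 * strong + rng.normal(0, 0.05, 10000))
print(normalised_covariance(a, b))
```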
Low ions generally show high levels of covariance with each other aside from the peculiar low covariance between Mg ii and O i despite their closely related ionization properties. However Section 8 shows that the Mg ii population is poorly constrained and is (marginally) consistent with a separate small self-shielded population. Overall it seems evident from the line covariance alone that more than one population exists in the ensemble of SBLA properties, and that metallicity variation alone is not sufficient to explain it. Overall covariance with low ionization species is high. It is at least as high as the covariance between high ions, between medium ions and between high ions with medium ions. Hence we conclude that the strong population(s) of low ions is also accompanied by strong populations of all species. ## 8 Sbla absorption population The standard stacking approach of calculating a mean or a median in order to understand the ensemble properties of the sample neglects variation in the ensemble. In this section we seek to explore the underlying properties of the population probed in our fiducial composite spectrum by using the full distribution in the stack at metal line centres. This is a non-trivial matter since the flux transmission distribution provided by this stack of spectra is a mix of different effects. In addition to the metal strength distribution we seek to probe, one can expect contributions from the observing noise (mostly read noise and photon shot noise), contaminating absorption fluctuations (absorption in the spectra not associated with the selected system but nevertheless coincident with them), any smooth residual continuum normalisation errors in the individual quasar spectra, and finally any errors in the subtraction of the overall mean level of uncorrelated absorption (i.e. the pseudo-continuum). It is not possible to pick apart these various effects from the data alone but we may forward-model potential metal populations and compare them with the observed distribution. One could seek to study each effect in detail and generate synthetic spectra, but a much simpler and more robust method presents itself; we use the data itself as the testbed for our population modelling by adding model signal to null data. ### The null sample The stack of spectra itself provides the ideal signal-free null sample: the close blueward and redward portions of the stack of spectra beyond the full profile of the feature of interest. These proximate portions of the spectral stack represent a close approximation of the effects present at line centre excluding the metal signal of interest. Potential linear variation as a function of wavelength in these effects are dealt with by attempting to mirror as much as possible the null pixels selected on both the blueward and redward sides. These nulls wavelength bins are drawn from the sample used in pseudo-continuum fitting as shaded in green in Figure 2. We take 8 wavelength bins on the red-side and 8 wavelength bins on the blue-side for all metal lines except Si iii (where the close proximity of the broad Ly\(\alpha\) absorption feature limits us to 4 bins on each side). We then average together the flux transmission in red-blue pairs from closest to furthest to the metal transition in order to generate the usual 138 \(\,\mathrm{km\,s}^{-1}\) integration scale of the central bin and to cancel out linear evolution with wavelength between red and blue. This leaves us with 8 null bins (or 4 nulls for Si iii) for every metal feature central bin. 
In all cases the sampling of the null distribution is sufficient to allow the errors in the true measurement to dominate. Finally, before assembling our null pixels we rescale them by any residual offset from the pseudo-continuum in the mean spectrum. As a result the nulls show only dispersion and no zero-point offset from the pseudo-continuum before mock signal is added. ### The population model We model the populations underlying the average metal absorption signal of each feature independently with two main fitted parameters and two further marginalised parameters. These main parameters generate bimodality in the metal populations constrained by a prior that the population mean arrived as is that given by the unweighted arithmetic mean composite spectrum. In effect this unweighted arithmetic mean provides a flux decrement (\(D_{m}=1-F_{C}\))'metal absorption budget' to be allocated in a way such that the ensemble mean is preserved. Specifically our main parameters are: * \(f_{pop}\), the fraction of systems with strong metal absorption, and * \(f_{move}\), the proportion of the flux decrement by which to reduce the weak metal absorption population and reallocate to the strong population. The two parameters combined define the degree of asymmetry between the two populations. We initially attempted to fit with only \(f_{pop}\) and \(f_{move}\) as free parameters but found that two forms of random scatter were required and must be marginalised over. The first is a Gaussian scatter in the strong absorption flux decrements with a standard deviation, \(\sigma_{p}\). The second is a Gaussian random noise added to the entire sample (both strong and weak components) with a standard deviation, \(\sigma_{n}\). This additional noise term is typically small (see Table 5) but appears to be necessary in some cases for an acceptable fit. The addition is a logical one since the pseudo-continuum fitting leads to an asymmetry in the noise properties between the metal measurements and nulls. The null pixels are part of the pseudo-continuum fitting and therefore the mean of the noise distribution is suppressed. This suppression is reinforced by our choice to rescale the zero-point of the nulls. In this way, we chose to generate a random noise in the nulls rather than carry-forward a potential different noise deviation already present in the nulls. Overall, these two normally distributed random variables are sufficiently flexible to account for any scatter in the weak population also, since the sum of two independent normal random variables is also normal. The resulting model represents the simplest that provides an acceptable fit to our data. More explicitly, a mock absorption sample is built by taking every null pixel in the ensemble of nulls and applying the model as follows. For strong absorbers the flux decrement applied is \[D^{\prime}_{strong}=D_{m}+\frac{D_{m}f_{move}(1-f_{pop})}{f_{pop}}\mathcal{G}(0, \sigma_{p})+\mathcal{G}(0,\sigma_{n}) \tag{6}\] whereas the weak absorbers flux decrement is modelled as \[D^{\prime}_{weak}=D_{m}(1-f_{move})+\mathcal{G}(0,\sigma_{n}) \tag{7}\] where \(\mathcal{G}(0,\sigma)\) denotes a Gaussian random number with zero mean and a standard deviation \(\sigma\). The Gaussian random number that represents scatter in the strong population is bounded such that \(\mathcal{G}(0,\sigma_{p})<D_{m}\) in order to ensure that the strong sample never shows unphysical negative absorption. 
In principle this could lead to an asymmetry in the generated Gaussian numbers, a non-conservation of the 'metal budget' and therefore an incorrect mean metal strength for the ensemble. In practise, however, favoured values of \(\sigma_{p}\) are sufficiently small that this regime is not reached. The mock absorption sample combines together every null pixel from every member of the stack of spectra. We randomly assign weak or strong absorber status to each pixel (using a uniform random number generator) in line with the trial \(f_{pop}\) and proceed following Equation 6 or 7 as necessary. For every model (specified by our 4 parameters) we compare the flux transmission distribution of the mock sample with the measured flux transmission distribution function for the feature of interest. Despite our large number of null pixels, our model distribution functions can be unstable. Hence we make at least 100 random realisations of these mocks and the model distribution function carried forward is the average of these random realisations. More realisations are produced when it is clear that the flux distribution is more complex or if the favoured models are those with small \(f_{pop}\), which therefore require additional statistics to offset the intrinsically smaller sample size. In each case we compare the averaged simulation histogram with the measured true one by performing a \(\chi^{2}\) test. An example of this is shown in Figure 13, which compares the distribution function of the flux transmission for the central bin of Si ii (1260) with the distribution of the preferred model. In the development of these mocks and their comparison to data, it became apparent that outliers in the noise distribution lead to high \(\chi^{2}\) values. In order to limit the impact of these outliers we sigma-clip by removing the top and bottom 3% of the distribution from the \(\chi^{2}\) test. This could in principle impair our ability to constrain very small absorbing populations, but this is not true in practise. Furthermore, the favoured models are largely unaffected. This suggests that the tails of the distributions are dominated by noise outliers as expected. The range of flux transmission shown in Figure 13 is, for example, the range used in the model comparison. We search parameter space from \(0.01\leq f_{pop}<1\) and \(0.01<f_{move}<0.99\), allowing \(\sigma_{p}\) and \(\sigma_{n}\) to float freely to preferred values in each case following the results of the \(\chi^{2}\) test. We also add grid points in this 4-dimensional parameter space in order to better sample the region with \(\Delta\chi^{2}<12\). We then find the minimum \(\chi^{2}\) in this parameter space and calculate the \(\Delta\) with respect to this minimum for the entire \(\chi^{2}\) surface. We estimate confidence intervals for our two parameters of interest by marginalising over the other two in order to produce a \(\chi^{2}\) scan as shown in Figure 14. Since we are performing a combined fit of the two parameters of interest, the standard (68.3%) confidence interval is provided by the region where \(\Delta\chi^{2}<2.30\). This 1\(\sigma\) interval is marked in Figure 14.
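A minimal sketch of the model-data comparison step: histogram the measured and mock transmission distributions over a common clipped range and form a \(\chi^{2}\). The binning, normalisation and error treatment are assumptions of this sketch rather than the paper's exact implementation.

```python
import numpy as np

def clipped_chi2(measured_flux, mock_flux, n_bins=60, clip_frac=0.03):
    """Chi^2 between measured and (averaged) mock flux histograms,
    dropping the top and bottom 3% of the measured distribution."""
    lo, hi = np.quantile(measured_flux, [clip_frac, 1.0 - clip_frac])
    edges = np.linspace(lo, hi, n_bins + 1)
    obs, _ = np.histogram(measured_flux, bins=edges)
    exp, _ = np.histogram(mock_flux, bins=edges)
    exp = exp * (obs.sum() / max(exp.sum(), 1))     # match overall normalisations
    keep = exp > 0
    return np.sum((obs[keep] - exp[keep]) ** 2 / exp[keep])

# The (f_pop, f_move) confidence region then follows from the Delta chi^2 < 2.30
# contour of the chi^2 surface after marginalising over sigma_p and sigma_n.
```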
### Population analysis results and measuring the strong metal population

Table 5 shows the resulting favoured model parameters including 1\(\sigma\) confidence intervals for our two parameters of interest and the fit probability. Since the constraint is statistically and computationally demanding, we limit ourselves to the most constraining transition for each ionization species. We present only species that have generated statistically meaningful parameter constraints for any feature. We study one further parameter, which is a quantity derived from our two parameters of interest. This is the 'boost factor' \[C_{boost}=\frac{f_{move}(1-f_{pop})}{f_{pop}}+1, \tag{8}\] which represents for each feature the level of boost in line strength that must be applied to the flux decrement measured in the composite spectrum in order to generate the metal strength of the strong population favoured by the population model search. Note that the best fit \(C_{boost}\) is derived from the best fit \(f_{pop}\) and \(f_{move}\), while the error estimate in \(C_{boost}\) is the range given by marginalising over the 1\(\sigma\) confidence of \(f_{pop}\) and \(f_{move}\).

### Inferred column densities for the strong metal population

We now have a population analysis fit, as shown in Table 5, and the covariance analysis result in Section 7.5, and so we are able to build up a picture of the dominant strong absorber population with realistic associated measurement errors statistically, even though we make no attempt to recover the sub-population on a case-by-case basis. The population analysis parameter \(C_{boost}\) allows us to infer the typical corrected transmitted flux, \(F_{Corr}\), associated with this strong population for each feature (see Table 6). Since the uncertainty in \(C_{boost}\) is much larger than the uncertainty in \(F\), the error margin in \(C_{boost}\) can be carried forward as the error margin in the flux transmission. This uncertainty is indicated in Table 6 as a minimum and maximum transmitted flux, respectively given by \(F_{Corr,min}\) and \(F_{Corr,max}\). The corrected transmitted fluxes shown in Table 6 for the strong population are averaged across a 138 km s\({}^{-1}\) velocity window, and while this information alone doesn't tell us how many individual components exist, we know there must be at least one component that is strong enough to produce a minimum flux at least this low if resolved. We can conclude therefore that all these lines should be statistically significant in high S/N and high resolution spectra. We don't rule out the possibility that the weak population for any given metal line is detectable, but it is not the focus of this work.

Figure 13: An estimate of the probability distribution function of the flux in the stack of spectra corrected for the pseudo-continuum (for consistency with the composite spectrum) at the spectral pixel closest to the rest frame wavelength of Si ii(1260) (_black line_). The _red line_ shows the distribution function of the best fitting model (see Table 5).

The size of the strong populations (indicated by \(f_{pop}\)) is not consistent among all features. Higher ionization lines typically show larger and weaker strong populations. Given the picture, drawn from covariance, that strong higher ions trace a wider range of conditions, this is to be expected. However, it is also true that each feature shows its highest covariance with low ions. The key conclusion of the covariance analysis is that strong low ions appear to be accompanied by medium and high ions. We can therefore treat this sub-population of \(\approx\)25% as being traced by all our fitted metal features and fit a multi-phase model to all these features.
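Equation (8) and the corrected transmission it implies can be checked directly from the published numbers. The sketch below uses the Si ii(1260) entries of Tables 4 and 5, with the corrected flux defined by applying \(C_{boost}\) to the composite flux decrement as described above; the small difference from the Table 6 value of 0.9091 reflects rounding of the published parameters.

```python
def boost_factor(f_pop, f_move):
    """Eq. (8): line-strength boost of the strong population over the composite."""
    return f_move * (1.0 - f_pop) / f_pop + 1.0

f_pop, f_move = 0.36, 0.42        # Si ii(1260), Table 5
F_composite = 0.9481              # Si ii(1260) mean transmission, Table 4

c_boost = boost_factor(f_pop, f_move)
F_corr = 1.0 - c_boost * (1.0 - F_composite)
print(round(c_boost, 2), round(F_corr, 4))   # ~1.75 and ~0.909, cf. Table 6
```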
The metal column densities (and their measurement uncertainties) associated with this common strong absorbing population are derived from the corrected transmitted flux (and its error margin), using the same method as set out in Section 7.3. We recompute each column density value as before using this strong absorber corrected flux transmission. The column densities of the strong population features are given as \(N_{strng}\) in Table 6, with associated upper low \begin{table} \begin{tabular}{l c c c c c c} \hline Line & Wavelength & \(F_{Corr}\) & \(F_{Corr,min}\) & \(F_{Corr,max}\) & \(N_{strng}\) & \(N_{strng,max}\) & \(N_{strng,min}\) \\ \hline OI & 1302.17 & 0.8708 & 0.7867 & 0.9214 & 14.287 & 14.653 & 14.008 \\ MgII & 2796.35 & 0.4951 & 0.0000 & 0.8805 & 15.334 & \(\infty\) & 12.798 \\ FeII & 2382.76 & 0.2484 & 0.0000 & 0.8602 & 18.023 & \(\infty\) & 13.248 \\ SiII & 1260.42 & 0.9091 & 0.8878 & 0.9218 & 12.707 & 12.825 & 12.628 \\ AlIII & 1670.79 & 0.7844 & 0.7454 & 0.8832 & 12.991 & 13.165 & 12.557 \\ CII & 1334.53 & 0.8359 & 0.6817 & 0.9097 & 14.005 & 14.748 & 13.645 \\ SiIII & 1206.50 & 0.8676 & 0.8644 & 0.8904 & 12.802 & 12.817 & 12.690 \\ SiIV & 1393.76 & 0.8052 & 0.7463 & 0.8499 & 13.513 & 13.770 & 13.322 \\ CIII & 977.02 & 0.6085 & 0.5550 & 0.6667 & 14.638 & 15.098 & 14.207 \\ CIV & 1548.20 & 0.7530 & 0.7260 & 0.7664 & 14.125 & 14.257 & 14.065 \\ OVI & 1031.93 & 0.8845 & 0.8498 & 0.8994 & 13.879 & 14.041 & 13.799 \\ \hline \end{tabular} \end{table} Table 6: Strong population column densities. \(F_{Corr}\) is the corrected flux transmission for the strong population of lines derived from the population analysis in Section 8.4. \(N_{strng}\) is the integrated metal column density associated with SBLAs with strong metals. Figure 14: Si ii(1260) population model \(\chi^{2}\) scans of both \(f_{pop}\) (_left_) and \(f_{move}\) (_right_) marginalised over all four parameters. \begin{table} \begin{tabular}{l c c c c c c c c c} \hline Line & \(\lambda\) (Å) & \(f_{pop}\) & \(f_{move}\) & \(C_{boost}\) & \(\sigma_{p}\) & \(\sigma_{n}\) & \(\chi^{2}\) & DOF & Prob \\ \hline Ly \(\epsilon\) & 937.803 & 0.91 \({}^{+0.09}_{-0.05}\) & 0.050 \({}^{+0.221}_{-0.030}\) & 0.105 \({}^{+0.087}_{-0.005}\) & 0.000 & 0.002 & 81.309 & 76 - 4 & 0.212 \\ Si ii & 1260.422 & 0.36 \({}^{+0.08}_{-0.18}\) & 0.42 \({}^{+0.11}_{-0.25}\) & 0.175 \({}^{+0.41}_{-0.25}\) & 0.21 & 0.009 & 210.7 & 178 - 4 & 0.030 \\ Si iii & 1206.500 & 0.590 \({}^{+0.00}_{-0.004}\) & 0.298 \({}^{+0.01}_{-0.10}\) & 1.21 \({}^{+0.01}_{-0.01}\) & 0.30 & 0.003 & 214.1 & 183 - 4 & 0.038 \\ Si iv & 1393.760 & 0.202 \({}^{+0.071}_{-0.008}\) & 0.526 \({}^{+0.097}_{-0.099}\) & 3.08 \({}^{+0.21}_{-0.21}\) & 0.12 & 0.037 & 117.2 & 94 - 4 & 0.028 \\ C ii & 1334.532 & 0.26 \({}^{+0.18}_{-0.18}\) & 0.670 \({}^{+0.09}_{-0.20}\) & 2.95 \({}^{+0.17}_{-0.30}\) & 0.15 & 0.037 & 127.5 & 98 - 4 & 0.012 \\ C iii & 977.020 & 0.430 \({}^{+0.080}_{-0.058}\) & 0.870 \({}^{+0.065}_{-0.065}\) & 2.15 \({}^{+0.32}_{-0.32}\) & 0.043 & 0.009 & 253.4 & 130 - 4 & 0.000 \\ C iv & 1548.205 & 0.373 \({}^{+0.038}_{-0.090}\) & 0.593 \({}^{+0.024}_{-0.024}\) & 2.00 \({}^{+0.22}_{-0.11}\) & 0.15 & 0.043 & 150.5 & 126 - 4 & 0.041 \\ Mg ii & 2796.354 & 0.05 \({}^{+0.05}_{-0.03}\) & 0.39 \({}^{+0.17}_{-0.13}\) & 8.1 \({}^{+0.22}_{-0.2}\) & 0.059 & 0.010 & 18. 
& 22 - 4 & 0.444 \\ Fe ii & 2382.764 & 0.010 \({}^{+0.01}_{-0.010}\) & 0.38 \({}^{+0.12}_{-0.12}\) & 3.95 \({}^{+0.20}_{-0.20}\) & 0.000 & 0.028 & 152.6 & 129 - 4 & 0.047 \\ O i & 1302.168 & 0.19 \({}^{+0.11}_{-0.14}\) & 0.96 \({}^{+0.40}_{-0.04}\) & 5.0 \({}^{+0.3}_{-0.73}\) & 0.043 & 0.004 & 84.1 & 81 - 4 & 0.271 \\ O vi & 1031.926 & 0.79 \({}^{+0.06}_{-0.25}\) & 0.55 \({}^{+0.11}_{-0.08}\) & 1.15 \({}^{+0.35}_{-0.15}\) & 0.043 & 0.000 & 446.0 & 258 - 4 & 0.000 \\ Al ii & 1670.789 & 0.045 \({}^{+0.043}_{-0.017}\) & 0.341 \({}^{+0.085}_{-0.088}\) & 8.3 \({}^{+1.5}_{-3.3}\) & 0.14 & 0.022 & 103.4 & 91 - 4 & 0.111 \\ \hline \end{tabular} \end{table} Table 5: Population model fits. We exclude all species where the statistics were insufficient to provide any useful constraint. limiting column densities given by \(N_{strng,max}\) and \(N_{strng,min}\) respectively. ### Modelling the column densities for the strong metal population Now that we have a series of column densities measurements for a single strong population with multiple phases, we are ready to reassess the model comparisons shown in Section 7 and thus test the unusually high densities that our comparisons demand. As explained in the previous section, our metal column density models are dependent on density, temperature, metallicity, the UV background models and abundance pattern. We make standard assumptions for the latter two and explore density and temperature, with metallicity setting the overall normalisation. A challenge of modelling our measurements lies in the production of the lowest ionisation potential species without over-producing others, driving us towards unusually high minimum densities. Thus far we have used this model comparison purely for illustration since no statistical fit to a single mean population was possible. Here we attempt statistical constraints for the dominant strong metal population. We begin with the most conservative choice; we relax the assumption of a solar abundance pattern and explore the minimum density required by multiple species of a single element. This is possible for both carbon and silicon where we have reliable population analysis results for three ionisation species each. Optically thin, photoionized gas is typically heated to \(\sim 10^{4}\)K in hydrodynamic simulations (Rahmati et al., 2016), but it is theoretically possible for it to reach \(T<10^{3.7}\)K in unresolved gas that is sufficiently metal rich. As a result we consider models with temperatures as low as \(10^{3.5}\)K. Figures 15 and 16 illustrate these limits for silicon and carbon respectively. Only allowed models are shown and in each case the metallicity is tuned such that the low ion is only marginally produced, by treating the lower \(1\sigma\) error bar as the target. The density is then allowed to vary such that it remains below the \(1\sigma\) upper error bar of the high and intermediate ions. The minimum density in each figure is given by the red dot-dash line and the density is free to increase up to _and beyond_ the density indicated by the blue dashed line. Given that this is a multiphase model, any short-fall in projected column density for the high and intermediate ions can be made up by other phases of gas with a lower density. 
As one can see from figures 15 and 16, silicon provides the more stringent density limited of \(\log(n_{H}/\mathrm{cm}^{-3})>-1.85\) assuming \(10^{4}\)K gas (equivalent to an overdensity of \(\rho/\bar{\rho}>3.19\)) or \(\log(n_{H}/\mathrm{cm}^{-3})>-2.45\) assuming \(10^{3.5}\)K gas if one allows the temperature to reach the lowest temperature considered here (equivalent to \(\rho/\bar{\rho}>2.59\)). The limit arises from marginally (\(1\sigma\)) producing enough Si in without marginally (again \(1\sigma\)) overproducing Si in. Similarly, carbon requires \(\log(n_{H}/\mathrm{cm}^{-3})>-2.95\) assuming \(T=10^{3.5}\)K gas and \(\log(n_{H}/\mathrm{cm}^{-3})>-2.65\) assuming \(T=10^{4}\)K gas. Since the models imply a hydrogen neutral fraction, the total hydrogen column density can be derived. The characteristic gas clumping scale can be obtained from \[l_{c}=N_{H}/n_{H}, \tag{9}\] where \(N_{H}\) total hydrogen column density and \(n_{H}\) is the hydrogen density. For silicon this maximum scale is just \(l_{c}=36\) parsecs assuming a gas temperature of \(T=10^{4}\)K and \(l_{c}=255\) parsecs for a gas temperature \(T=10^{3.5}\)K. Our carbon-only limits produce weaker constraints of 1.4 kpc and 2.5 kpc respectively. \(10^{3.5}\)K is rather a low temperature for photoionized gas but as we shall see below, we appear to be forced to allow such low temperatures. We can perform a statistical fit for three gas phases in these dominant strong metal systems by including all species and assuming a solar abundance pattern. We scan through density, temperature and metallicity for two different gas phases: high density and moderate density. As explained below, it was not possible to scan through the third, lower-density phase. Temperature is allowed to vary between \(10^{3.5}\)K and \(10^{4.5}\)K in both phases. In the moderate density phase, density was searched from \(\log(n_{H}/\mathrm{cm}^{-3})=-4.8\) to \(\log(n_{H}/\mathrm{cm}^{-3})=-2.8\). In the high density phase we scan through \(\log(n_{H}/\mathrm{cm}^{-3})=-0.8\) to \(\log(n_{H}/\mathrm{cm}^{-3})=0\). As usual, metallicity is a free parameter that scales up and down the projected metal columns. Extreme small populations may arise due to Lyman limit systems in our sample of SBLAs and require more complex ionization corrections. P14 argued that this contamination is likely to be at the level of \(\lesssim\)1% and no higher than 3.7% of our SBLAs. We conservatively require that any strong population that is statistically consistent with 3.7% contamination should be omitted from our investigation of gas physical conditions. This leads to the rejection Figure 15: Constraining the minimum density of metal strong SBLAs using silicon species alone. Silicon column densities are modelled as in Figure 10. The data has been corrected to take into account the column density of the strong metal systems based on the population modelling (including associated model uncertainty). Here we test the minimum density allowed by measurements of Si in, and Si in. Si is also shown for completeness but doesn’t constrain the analysis since no model produce it in significant amounts and it is evidently produced by gas in a different phase. We conservatively take the \(1\sigma\) lower error bar of our lowest column density measurement of Si in as the target and then tune the density to change the slope while renormalising with the metallicity. The _red dash-dot line_ shows he lowest density allowed at the \(1\sigma\)-level. 
The _top panel_ shows the result for the lowest density considered of \(10^{3.5}\)K and _the bottom panel_ shows a more standard photoionized temperature of \(10^{4}\)K. of species Mg ii, Al ii and Fe ii from further interpretation using the optically thin to ionizing photons approximation. This is partly a consequence of poor statistical precision and given more data and more refined population modelling these species could be included in future analyses. In the process of performing this fit with three phases, it became apparent that only O vi requires the lowest density phase. With one data point and three unknowns (density, temperature and metallicity), this third phases is unconstrained aside from being (by construction) of lower density. As a result, we proceeded with a two phase fit excluding O vi. Figure 17 provides the best-fit model based on this parameter scan of the strong metal population. The fit is of acceptable statistical quality with a \(\chi^{2}=4.2\) for 7 points with 6 degrees of freedom arising from the 6 fitted parameters, equivalent to a probability of statistical consistency between the model and data of 4%. The favoured conditions for these strong absorbers are \(\log(n_{H}/\mathrm{cm^{-3}})=0\), temperature \(10^{3.5}\)K and super-solar metallicities of [X/H]= 0.80. In the intermediate density phase we find \(\log(n_{H}/\mathrm{cm^{-3}})=-3.35\), again temperature of \(10^{3.5}\)K and metallicity [X/H]= \(-1.1\). As noted the lowest density phase is required but unconstrained. It will be noted that the favoured density for the dense phase is the limiting value of our parameter scan of \(\log n_{H}=0\). This is driven by the measurement of O i. A higher density in the high density phase may provide a better fit to the O i column density, but only to a limited extent since it will lead to a a worse fit to C ii and Si ii. Again we can infer a gas clumping scale by dividing the hydrogen column density by the hydrogen density from this final, joint fit of species that reliably probe diffuse, photoionized gas. Our dense gas phase corresponding to \(n_{H}=1\mathrm{cm^{-3}}\) requires a clumping scale of only \(l_{c}=0.009\) parsecs. If we marginalise the density of this dense component and take the \(1\sigma\) minimum value for a 6 parameter fit (\(\Delta\chi^{2}=7.04\)) we obtain a minimum density of \(\log(n_{H}/\mathrm{cm^{-3}})=-0.93\) equivalent to a maximum (\(1\sigma\)) clumping scales of \(l_{c}=0.38\) parsecs. The intermediate density gas is expected to have structure on 15kpc scales. Once again the low density phase traced by O vi is unconstrained. ## 9 Discussion P10 and P14 argued that the presence of high density gas (inferred from the relative strength of low ionization species) indicates the presence of cold dense clumps 10s of parsecs in size, embedded in a more diffuse medium. We have reviewed and revisited this claim by improving the methodology, challenging several assumptions and interpreting the results more deeply, while quadrupling the amount of data updating from SDSS-BOSS DR9 (Dawson et al., 2013; Ahn et al., 2012; Lee et al., 2013) to SDSS-eBOSS DR16. Specifically we have, 1. explored the statistical robustness of the mean composite spectrum error estimation, 2. made robust statistical measurements of H i column density and verified its homogeneity, 3. improved the robustness of metal column densities, 4. explored metal line dependence on density and temperature, 5. measured the covariance and populations of metal species, 6. 
inferred the properties of the dominant strong metal population, Figure 16: Constraining the minimum density of metal strong SBLs using carbon species alone by following the same procedure used in Figure 15 for silicon. In this case all of C ii, C iii and C iv provide useful limits. Again the _red dash-dot_ line shows he lowest density allowed at the \(1\sigma\)-level. The _top panel_ shows the result for the lowest density considered of \(10^{3.5}\)K and the _bottom panel_ shows a more standard photoionized temperature of \(10^{4}\)K. Figure 17: The results of a parameter search for a two phase fit for metal strong SBLAs limited to species confirmed to arise in optically thin gas (shown as black data points) assuming that the strong populations overlap. The fit probability is 4%. Species showing small populations of only \(f_{pop}\la 5\)% are excluded from the fit since they may arise from a self-shielded contaminating population (Fe ii, Mg ii and Al ii). The measurement of strong O vi is also excluded from the fit since it requires a third (more ionized) phase that is poorly constrained. This is because one can only set a lower limit on density based on the absence of associated C iv (comparing with Figure 10 one may see that this density is \(\log(n_{H})\la-4.3\)). These four species not included in the fit are shown as grey points for completeness. (vii) placed limits on the density derived from a single element (carbon and silicon) for the strong metal population, (viii) performed a fit to models of density and temperature for the strong metal population. From silicon alone we find that gas clumping on scales of at most 36 parsecs is required assuming temperatures of at least \(10^{3.5}\)K. However, when we include C iv, C ii, Si iv, Si iii, C ii, Si ii and O i we find that a clumping scale of 0.009 parsecs is favoured (with a 1\(\sigma\) upper limit of 0.38 parsecs) and super-solar metallicities are required. We discuss this chain of reasoning its weak points further below. ### Metal populations and the nature of SBLAs Our covariance measurements and population models carry wider implications for the nature of SBLAs than simply gas property measurements. Perhaps some SBLAs probe the CGM (with low ions and medium/high ions) and others probe the metal enriched IGM (showing only medium/high ions). Alternatively perhaps all SBLAs probe the CGM with medium/high ions, and when the line of sight happens to pass through a dense clump, low ions are also seen. The former implies a high impact cross-section to at least one dense clump with covariance being driven by CGM/IGM separation. The latter, implies a lower impact cross-section to dense clumps and covariance driven by the lines of sight passing through the CGM with or without intersecting a dense clump. Naturally these two scenarios are not mutually exclusive. This is self-evident since we cannot exclude the possibility that a metal rich IGM surrounding the CGM plays a significant role. Nor can we argue that there is a perfect association between our SBLA samples and CGM regions. This is likely to be a factor in why the high ion covariance is non-zero, but we cannot rule out the possibility that some CGM is relatively diffuse or metal poor (e.g. inflows). In practise the variation in ion strengths must arise due to some combination of SBLA purity, CGM selection purity of SBLAs and the impact cross-section to various phases. The first term is known since we have measured the FS0 sample purity to be 89%. 
Neglecting this minor correction, the fractional size of the low ion strong population, \(\approx 30\)%, provides the cross-section to high density phases modulated by the CGM purity. We make this assertion because these low ionization species are not expected to occur in significant quantities outside of the CGM. ### Inferring gas properties from SBLA metals Following on from P10 and P14, we focus on the surprising appearance of low ionisation metal species in forest absorbers that are optically thin to ionising photons. All the metal line measurements are of interest but the low ionization species drive our interpretation towards a quite narrow set of physical conditions. Specifically, the need for high densities and therefore small-scale clumping. Our goal in this work has been to update the measurements of P14 with the final BOSS/eBOSS dataset, to make error estimates more robust and to perform a thorough multi-phase and multi-population analysis of our measurements in order to generate statistically robust constraints. Despite our inclusive error analysis, the error estimates on the metal column densities remain so tight that no single population, multi-phase model is satisfactory. This in combination with an analysis of the metal line covariance has led us to go beyond the study of mean properties in the composite spectrum and explore the full properties of the stack. Hence we forward model the metal absorbing population for each of our metal species using the full stack. The quality of fit provided by the population is largely acceptable, with more complex models unjustified by current data. Exceptions are C iii(977A) and O vi(1032A), both of which offer 0% quality of fit. This is not surprising since these are two of our four strongest metal features. It seems likely that this is a sign that more sophisticated population models are required by larger samples of SBLAs and/or higher signal-to-noise spectra. It is also possible that the metal populations are an exceptionally poor fit in these two cases, however, neither species' strong line fits are critically important for the main results presented in this article. For each of the metal species we obtain (among other quantities) a constraint on the absorbing population size. All species with a population modelling constraint are included in the fit except Al ii, Fe ii and Mg ii since their strong populations are sufficiently small that they could plausibly arise in self-shielded gas (although it is notable that Fe ii and Mg ii column density constraints are statistically consistent with preferred models). Given the measured column density of \(\log(N_{HI}/\mathrm{cm}^{-2})=16.04_{-0.06}^{+.006}\) for the FS0 sample, the lack of any significant inhomogeneity in the H i population, the small potential interloper incidence rate, and our efforts to exclude metal species that show populations consistent with the interloper rate, we robustly conclude that our SBLA analysis is not sensitive to complex self-shielding effects expected for Lyman limit systems, or indeed partial Lyman limit systems. The inferred column density is at the limit where these effects are considered to be negligible and therefore the sample under study can be treated as strong, blended groupings of optically thin Ly\(\alpha\) forest absorbers. 
The measurements of covariance indicate that strong low ion absorption is also associated with strong medium and high ion absorption, so we proceeded with measurements of the properties of these strong metal SBLA systems in various forms. Measurements of carbon-only and silicon-only were made independent of assumptions about abundance patterns, providing lower limits on gas density and so upper limits on gas clumping on sub-kpc scales, but full fits become possible where all elements are included. These fits require three phases: two phases to provide both low and medium ions (defined broadly to include C iv) and one additional unconstrained phase providing only O vi absorption. The derived density of \(n_{H}=1\mathrm{cm}^{-3}\) for the dense phase is notably high even for the CGM (corresponding to an overdensity of \(10^{5}\)). This leads to a measurement of cold dense clumps of 0.009 parsecs. Even if one considers the 1\(\sigma\) lower limit on density allowed in this fit, the analysis requires sub-parsec scale clumping (0.38 parsecs). Parsec-scales are required by silicon alone but the sub-parsec scales are driven by O i absorption. We cannot dismiss the measurement of O i absorption since no other metal lines contribute significantly to the measured spectral bin. Si ii 1304A is closest but when one fits the full Si ii line profile one sees that the contribution to the O i line centre is negligible (as shown in Figure 9). Note that charge-exchange driving the O i ionization fraction to that of H i (Draine, 2011) does not apply in this case. This effect occurs at the boundaries of H i and H ii regions and, as we have discussed, SBLAs are optically thin to H i ionizing photons and no boundary region is present. We must, therefore, conclude that we are probing clumps on scales as low as 1% of a parsec due to our measurement of O i absorption. Small increases in the favoured density above \(n_{H}=1\)cm\({}^{-3}\) are possible since the favoured density is the one selected as a prior. Lower temperatures than our prior of \(10^{3.5}\)K are also possible but would stretch the limits of plausibility for a photoionized gas. The relationship between density and temperature warrants further investigation in simulations. It should be noted that this work assumes a solar pattern of elemental abundances (taken from Anders & Grevesse, 1989) for the final results in Figure 17. If the relative abundances of oxygen, carbon and silicon differ significantly from solar in SBLAs then our results would require modification. Our carbon and silicon only measurements are, of course, unaffected. Furthermore we assume photoionization reflecting a "quasar + galaxy" UV background following Haardt & Madau (2001). Morrison et al. (2019) and Morrison et al. (2021) demonstrated that large-scale inhomogeneities exist in the UV background at these redshifts on scales of 10s or even 100s of comoving Mpc. Morrison et al. (2021) in particular explored the spatial variation in metal species through large-scale 3D quasar proximity in eBOSS DR16. There we used a mixed CGM sample including the superset of FS0+FS1+FS2 and found 10-20% variations in O vi and C iv absorption on 100 comoving Mpc h\({}^{-1}\) scales with similar variations in Si iv and Si iii also possible but unconstrained. It seems clear that the high ionization species studied here are susceptible to large-scale variation while the low ionization species have not yet been explored.
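To make the clumping-scale argument above concrete, here is a minimal numerical sketch. The photoionization rate, recombination coefficient and resulting neutral fraction used below are illustrative assumptions, not the values fitted in this work; with these placeholder inputs the implied clump size lands near the hundredth-of-a-parsec scales quoted above.

```python
import numpy as np

# Illustrative inputs (assumptions, not the fitted values of this work)
N_HI = 10**16.04      # integrated H I column density [cm^-2]
n_H = 1.0             # total hydrogen density of the dense phase [cm^-3]
Gamma_HI = 1e-12      # assumed H I photoionization rate [s^-1]
alpha_B = 2.6e-13     # assumed case-B recombination coefficient near 10^4 K [cm^3 s^-1]

# Photoionization-equilibrium neutral fraction in the highly ionized limit (n_e ~ n_H)
x_HI = n_H * alpha_B / Gamma_HI

# Characteristic absorber size: L ~ N_H / n_H = N_HI / (x_HI * n_H)
L_cm = N_HI / (x_HI * n_H)
L_pc = L_cm / 3.086e18
print(f"x_HI ~ {x_HI:.2g}, implied clump scale ~ {L_pc:.3f} pc")
```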
Questions remain about the potential impact of the local galaxy (or galaxies) of these CGM systems.

### Comparison with simulations

Wind tunnel simulations indicate that cold clumps of gas should survive entrainment by a hot galactic wind despite concerns that they might be destroyed before they can be accelerated by Kelvin-Helmholtz instabilities (McCourt et al., 2015; Gronke & Oh, 2018; Tan et al., 2023). These simulations are broadly consistent with our findings that such high densities, low temperatures (for a photoionized medium) and small scales are plausible. Indeed many physical effects with characteristic scales of order a parsec are key for the ejection, propagation, entrainment and subsequent accretion of gas in the CGM with important consequences for further galaxy evolution (Hummels et al., 2019; Faucher-Giguere & Oh, 2023 and citations therein). For detailed observational predictions, high resolution cosmological simulations are required, but cosmological simulations do not resolve scales below 10 pc even with zoom-in regions or adaptive refinement (Lochhaas et al., 2023; Rey et al., 2023). CGM scales as small as 18 pc have been studied by Rey et al. (2023) for a single isolated dwarf galaxy although this is currently computationally demanding. They found that increasing resolution does indeed reveal higher densities (\(n_{H}\approx 0.5\)cm\({}^{-3}\)) and more extreme temperatures in the CGM (both \(10^{3.6}\)K and \(10^{6.5}\)K). It is notable that temperatures below our minimum prior of \(10^{3.5}\)K or our high density prior of \(n_{H}=1\)cm\({}^{-3}\) were not required in this simulation. However, we cannot rule out that more extreme temperatures will be required by the yet higher resolutions needed to probe the 0.01 pc scales inferred by our multiphase, strong population, multi-element fit. Although it seems that no simulations currently exist that reproduce the full range of conditions we infer for SBLAs, they can validate our findings that extreme small scales are a requirement. This can be achieved by simply passing lines of sight through CGM zoom-in simulations, selecting those which match our H i properties (an H i column of \(\approx 10^{16}\)cm\({}^{-2}\) distributed in components over 138 km s\({}^{-1}\) to generate flux transmission <25%), and comparing with the metal populations we infer. Cosmological simulations can also address the potentially less demanding task of helping us understand the relationship between our selection of strong, blended Ly\(\alpha\) absorption and the galaxies and dark matter halos identified by it. In particular, they can help us learn whether the selection can be optimised to better recover these systems or be modified to identify others. Such tests would greatly enhance our understanding of how the Ly\(\alpha\) forest traces IGM and CGM properties.

### Individual systems and SBLA analogues

As explained in Section 7.1, we advise caution in the interpretation of column densities measured in this work. The features measured here are integrated and averaged quantities. Our population analysis seeks to correct for the impact of averaging SBLAs showing weaker metal absorption with SBLAs showing stronger metal absorption, but the integrated nature of our measurements per SBLA is unavoidable. SBLAs themselves arise due to the blending of Ly\(\alpha\) lines over 138 km s\({}^{-1}\) and we cannot rule out that they correspond to multiple close CGM regions of multiple close galaxies ('close' here referring to both impact parameter and redshift).
Furthermore, within one CGM region we cannot resolve individual metal lines. We do not measure metals over the full observed feature profile as explained in Section 7.1, but even within the narrower 138 km s\({}^{-1}\) velocity window the measurements are integrated quantities. They cannot be trivially compared to individual metal line components that one might fit in an individual spectrum. If one interpreted the measured signal as arising from single lines the metals would be strong and quite evident in high-resolution and high signal-to-noise studies of individual quasar absorption spectra. Those systems drawn from the strong population we have inferred would be even more evident once one takes into account the associated line strength boost, leading to quite high column densities (\(F_{strng}\) in Table 6), but once again we stress that these are integrated column densities. We illustrate this argument with Appendix C in which we identify SBLAs at \(2.4<z_{abs}<3.1\) in 15 high resolution and high signal-to-noise KODIAQ spectra by taking 138 km s\({}^{-1}\) bins and the noiseless definition of SBLAs (\(-0.05\leq F_{\rm Ly\alpha}<0.25\); where in this work we limit ourselves to \(-0.05\leq F_{\rm Ly\alpha}<0.05\) to prioritise SBLA purity in light of the SDSS noise). Figure 18 shows the distribution of flux transmissions in native Keck HIRES wavelength bins at the position of Si ii (1260A) in the SBLA rest frame. Distributions are also shown for pixels on both the red and blue side of the Si ii feature (selected as usual to be at wavelengths away from lines and on the pseudo-continuum). Error bars show the 75% spread of these null distributions. At the level of what one can discern by eye the Si ii (1260A) pixel distribution could have been drawn from the null distributions. Based on our analysis, around a third of SBLAs should show 'strong' Si ii absorption with an integrated column density of \(N_{strng}=10^{12.7}\)cm\({}^{-2}\). Assuming that this signal is present in association with this KODIAQ SBLA sample, it must be weak enough to not be clearly detected here. In other words, the Si ii absorption signal must be weak and distributed among the native pixels in the 138 km s\({}^{-1}\) SBLA window and not a single narrow Si ii line with \(N=10^{12.7}\)cm\({}^{-2}\). One might reasonably ask, then, what SBLAs should look like in individual spectra of high quality. The inferred column densities may be integrated column densities but the strong metal population should nevertheless be individually significant. However, high confidence individual line identification is not simply a matter of observing a significant absorption line. Lines must also be unambiguously assigned an absorption transition and redshift. This may be a complex task when lines are weak and there are no lines from the same species with which to confirm. It is made more difficult at high redshift where the line density in quasar spectra is generically higher, particularly in the Ly\(\alpha\) forest. O i is particularly challenging since Si ii absorption is expected to be nearby and could be caused by the same galaxy or galaxy group. Our measurement of statistical excess here is robust and unambiguous because all sources of contaminating absorption are included in our error analysis both in the mean composite and the multi-population decomposition. We are aware of what appears to be one strong metal SBLA analogue at \(z>2\) in the literature, published in Nielsen et al. (2022).
Following up on systems in their catalogue of Mg ii absorbers, they discovered an associated compact group of galaxies and DLA absorption. Among many interesting structures seen, there is a group of seven H i absorbers with velocities offset blueward from the central velocity by between 350 and 450 \(\,\mathrm{km\,s^{-1}}\). The H i column density of these lines is between \(\approx 10^{13.5}\) and \(\approx 10^{15.8}\mathrm{cm^{-2}}\), with a group total of approximately \(10^{16}\mathrm{cm^{-2}}\). The velocity range of this structure and the resulting integrated column density are consistent with our SBLA sample. In Nielsen et al. (2022) this SBLA seems to have been found because of its association with this wider clustering of strong H i and strong Mg ii. It should be noted that this system would not have been selected by our methods because the SBLA Ly\(\alpha\) absorption is masked by the wide damping wing of the close DLA in the spectrum. Of course SBLAs in groups with DLAs will be missing from our sample in general, but the loss will be minimal because, as mentioned elsewhere (e.g. Section 3), SBLAs are much more numerous than DLAs. Nielsen et al. (2022) measure the H i column densities of these individual lines using higher order Lyman lines. The average metal absorption strength over a 138 \(\,\mathrm{km\,s^{-1}}\) window is similar to our strong metal population in all the lines which are measured by both studies: Si ii, Si iii, C iii, Si iv, and C iv. Their intermediate metal ion models are also broadly similar to what we find. For low ionization species Nielsen et al. (2022) infer that components are present with solar or super-solar metallicities, high densities (\(-2<\log(n_{H}/\mathrm{cm}^{-3})<-1\)), low temperatures (\(3<\log(T/\mathrm{K})<4.5\)) and sub-parsec gas clouds. They do not infer densities as high as those here nor gas clouds as small, but they do not present detailed O i measurements, which are the main driving factor behind our extreme inferences. They point out that the observed O i column density of the DLA portion of the group is high compared to their model, but they are not able to measure O i for the SBLA (private communication). The analysis of KODIAQ data presented in Lehner et al. (2022) presumably includes SBLAs among their sample, but when they define their sample of strong Lyman-\(\alpha\) absorption systems (or 'SLFS' as they call them) they do not include the blending requirement critical for SBLA selection and the CGM properties that we, P10, P14 and Yang et al. (2022) have seen. Instead their SLFS appear better characterised as IGM systems. However, they do show an example which superficially seems to qualify for SBLA selection, and it appears to be an example of a weak metal system in contrast to the strong metal system case discussed above. Studies of individual low ionization systems in photoionized gas (\(N_{HI}\approx 10^{16}\mathrm{cm^{-2}}\)) are more common at low redshift. Examples of such works are Lehner et al. (2013), Sameer et al. (2021) and Qu et al. (2022). These works also produce a similar picture of multiphase gas showing small clumps (or clouds or shells) on parsec scales with temperatures of around \(10^{4}\mathrm{K}\). Studies such as these (that focus on the detailed properties of individual absorbers) have particular virtues compared to our work, including probing detailed velocity structure and temperature from line widths.
However, they cannot (yet) study the statistical properties of well-defined and unbiased large samples of CGM systems with our wide range of metal species. Our work demonstrates that the Nielsen et al. (2022) SBLA with super-solar metallicity and high densities is not simply an isolated oddity but a member of a population of around 125,000 in current surveys (taking a \(\sim\)25% strong population of the 0.5 million SBLAs expected in eBOSS). Simulators aiming to reproduce the results of these studies can seek to generate gas clouds that reproduce these properties among the clouds in their simulations, whereas they can aim to compare the global properties of their CGM systems by simply reproducing our simple selection function. In this sense our statistical work complements the detailed gas properties derived from those observations.

### Comparison with other observations based on stacking

We have referred to P10 and P14 throughout this work. They showed evidence of dense, parsec-scale, photoionized gas, and the goal has been to build upon their stacking methods, improve on the exploitation of their composite spectra, and verify their conclusions. There is another study, Yang et al. (2022), that has been inspired to apply these methods to SDSS-IV/eBOSS DR16 data. Our work is different in many respects from that publication. Referring back to the list at the beginning of this section, only point (iv) regarding investigating the density and temperature of gas probed by the composite spectrum is in common between the two papers. In a sense Yang et al. (2022) follows on directly from P14 in that they take a range of composite spectra for different Ly\(\alpha\) absorption strengths and explore more sophisticated ionization models to interpret them. P14 measured both the full profile of the metal features and the core of the absorption profile associated with the 138 \(\,\mathrm{km\,s^{-1}}\) velocity window 'central pixel' matched to the Ly\(\alpha\) selection. The former is a more inclusive integration and therefore generates a higher column density for both metals and H i (see for example the comparison between their table A1 and table A3). Yang et al. (2022) take the full profile approach only, while we take the central pixel approach only. The motivation for our choice is set out in Section 7.1. Yang et al. (2022) will, therefore, naturally present higher metal column densities than us derived from the composite spectrum. This difference makes direct comparison difficult. There are further complications from differences in analysis choices. We select and stack Ly\(\alpha\) absorbers and their associated spectra in precisely the same way as P14 in bins of flux transmission (and so take advantage of P14's progress on understanding SBLAs with tests on hydrodynamic simulations and comparison with Lyman break galaxy samples). On the other hand Yang et al. (2022) select Ly\(\alpha\) samples in windows of flux transmission contrast (see Appendix A), have a different S/N requirement for selection, apply no strong redshift cut (sacrificing sample homogeneity for statistics in the process) and weight their stack of spectra to compute the composite. On this final point regarding weighting, we do not weight the spectra by S/N because we wish to preserve the equal contribution of every system stacked, which simplifies our population analysis.
We are also conscious of the fact that weighting the stacking by S/N would bias us towards stronger Ly\(\alpha\) absorption in complex and difficult to control ways\({}^{2}\).

Footnote 2: Higher S/N for Ly\(\alpha\) selection provides a purer selection of strong Ly\(\,\alpha\). This higher S/N is typically associated with a higher S/N spectrum as a whole (quasar brightness varies and the S/N is highly covariant across each spectrum), therefore the weighting applied at the metal line is a complex mix of weighting towards stronger Ly\(\,\alpha\) systems modulated by any quasar shape change between 1216Å and the metal line placement in the absorber rest frame.

With all these caveats in mind, the results of Yang et al. (2022) and our measurements of the mean composite spectrum present broadly the same picture of multiple gas phases in the CGM exhibiting low ionization species tracing at least one high density phase, high ionization species tracing at least one low density phase, and intermediate ionization species probing intermediate densities. They do not go into a detailed error analysis to understand what is allowed statistically, and so did not conclude (as we do) that column densities and their small error estimates force us to go beyond fits to the composite spectrum and study the underlying population behind the mean. When we do this, we appear to disagree with some of the findings of Yang et al. (2022). Our population analysis leads us to rule out a significant higher column density H i sub-population, and forces us to higher densities, sub-parsec clumping and lower temperatures for agreement with low ionization species. We are also forced to similarly low temperatures for intermediate/high ionization species (excluding O vi) along with elevated densities and metallicities. In this work we explored a more precise and demanding error analysis method compared to P14 and included not just the statistical errors in the stacking but also the uncertainty from the pseudo-continuum fitting used to generate the final composite spectrum. P14 conservatively assumed that the errors in the final step were equal to the rest of the errors combined and scaled their error estimates of the stacked spectra by \(\sqrt{2}\) for the composite spectra. Our end-to-end bootstrap error analysis shows that the pseudo-continuum fitting step contributes weakly to the errors. This is quantified by \(\epsilon\) as shown in Table 4. Assuming that the pseudo-continuum fitting is performed with similar care to this work, this contribution can typically be neglected and the step of pseudo-continuum fitting an entire suite of bootstrapped realisations of the stack can be foregone. This is assuming that the error estimate need only be known to around 10% precision. A notable exception is C iii, for which the error contribution is estimated at 26% due to the challenge of separating it from absorption by Lyman series lines. Overall, we advocate moving beyond studies of the mean (or median) composite spectra alone and in doing so make the need for precise error estimates redundant. Instead we advocate a focus on forward modelling the underlying population, and measuring covariance between the metal features in order to obtain a deeper understanding of the SBLA population studied.

### Future surveys

Despite the extremely high signal-to-noise in the composite spectrum presented here, our work demonstrates that more data is needed.
Our population analysis requires not only high S/N in the composite spectrum but excellent sampling over the entire SBLA ensemble to build a high S/N measurement of the distribution function of the flux for every metal line studied. Only the metal transitions presented here were sufficiently well-sampled to obtain a population estimate. On the other hand, the distribution functions of some metal transitions are sufficiently well-measured that our 5 parameter fit does not appear to capture the characteristics of the population and a more complex parametrisation is required. More quasar absorption spectra are required to both widen the range of transitions (and species) measurable and help define improved metal populations for a more extensive round of forward modelling. The DESI survey (DESI Collaboration et al., 2016) began in 2021 and is expected to grow to produce around 700,000 \(z>2.1\) quasar spectra. The WEAVE-QSO survey (Jin et al., 2023; Pieri et al., 2016; Pieri et al., 2016) is expected to begin imminently and will observe around 400,000 \(z>2.1\) quasar spectra. 4MOST (de Jong et al., 2019) is also in preparation and looks set to include \(z>2.1\) quasars among its spectroscopic sample. These surveys will also provide the moderate-to-high signal-to-noise (S/N\(\geq 3\)) spectra required to identify SBLAs. These next generation surveys will also provide spectral resolution that is twice (DESI and 4MOST), three-times (WEAVE-QSO LR) or even ten-times (WEAVE-QSO HR) the resolution of BOSS spectra. This will allow us the freedom to treat the velocity scale of the selection blend as a free parameter. In this work, we noted the striking similarity between the inferred halo mass derived from the large-scale 3D clustering of the Ly\(\alpha\) forest with SBLAs and the virial mass inferred by treating the velocity-scale of the blend as the halo circular velocity. This may be a coincidence but if there is some connection it raises the attractive possibility of identifying specific galaxy populations or halo populations from Ly\(\alpha\) absorption blends/groups alone. This warrants further study using next generation surveys and simulations with accurate small-scale IGM and CGM Ly\(\alpha\) clustering. The diversity of environmental properties for IGM/CGM gas studied in the Ly\(\alpha\) forest is also expected to grow substantially in the coming years. Maps of the cosmic web are expected using IGM tomography applied to data from WEAVE-QSO (Kraljic et al., 2022), DESI, PFS (Takada et al., 2014; Greene et al., 2022) and, further into the future, MOSAIC (Japelj et al., 2019) and a potential DESI-II survey, allowing us to study SBLA properties in filaments, sheets and voids of structure. Furthermore, large \(z>2\) galaxy surveys associated with these facilities are expected over the coming years, allowing us to study gas properties near confirmed galaxies of known impact parameter and galaxy properties. These surveys promise to shed new light on the formative epoch of galaxy formation in the build-up towards cosmic noon.

## 10 Conclusions

In this work we have sought to establish the potential of Strong, Blended Lyman-\(\alpha\), or SBLA, absorption systems for the study of the CGM. We define "strong" as a flux transmission less than 25% and "blended" as average absorption in bins of 138 km s\({}^{-1}\).
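For illustration, a minimal sketch of this selection applied to a continuum-normalized spectrum is given below. The function and variable names are hypothetical and the forest wavelength limits are generic assumptions; the actual pipeline choices are those described earlier in the paper. The defaults use the noiseless definition (\(-0.05\leq F<0.25\)); the stricter \(-0.05\leq F<0.05\) cut used for SDSS noise can be passed instead.

```python
import numpy as np

C_KMS = 299792.458  # speed of light in km/s

def find_sbla_bins(wavelength, norm_flux, z_qso, bin_kms=138.0, fmin=-0.05, fmax=0.25):
    """Flag 138 km/s bins in the Ly-alpha forest whose mean transmitted flux
    satisfies the SBLA criterion (strong, blended absorption)."""
    lya = 1215.67  # rest-frame Ly-alpha wavelength in Angstrom
    # Restrict to a generic rest-frame forest region of the quasar spectrum
    forest = (wavelength > 1040.0 * (1 + z_qso)) & (wavelength < 1200.0 * (1 + z_qso))
    wave, flux = wavelength[forest], norm_flux[forest]
    # Work in log-wavelength so that bins have a constant velocity width
    dloglam = bin_kms / C_KMS / np.log(10.0)
    edges = np.arange(np.log10(wave[0]), np.log10(wave[-1]), dloglam)
    idx = np.digitize(np.log10(wave), edges)
    sblas = []
    for i in np.unique(idx):
        mean_f = flux[idx == i].mean()
        if fmin <= mean_f < fmax:
            z_abs = 10 ** (edges[i - 1] + 0.5 * dloglam) / lya - 1
            sblas.append((z_abs, mean_f))
    return sblas
```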
We build on the work of P14 in various ways, such that we conclude a new widespread class of circumgalactic system must be defined, and we explore the properties of these CGM systems. Specifically we find:

1. SBLA samples can be defined in various ways to prioritise sample size or sample purity; we focus on the main sample of P14 for continuity, which we label FS0.

2. We make the first statistical constraint of the H i column density of the FS0 SBLA sample and find it to be \(\log(N_{HI}/\rm{cm}^{-2})=16.04^{+0.05}_{-0.06}\) with a Doppler parameter of \(b=18.1^{+0.7}_{-0.4}\) km s\({}^{-1}\). This is not an individual line measurement but a constraint of the dominant H i column density in the 138 km s\({}^{-1}\) spectral window, driven by a convergence to a solution ascending the Lyman series.

3. By studying the mean composite of the FS0 sample we find that at least 3 phases of gas are present in SBLAs but that no single multiphase solution can be found that would agree with the tight error bars, and so a multiphase _and_ multi-population model is needed.

4. We explore the SBLA population by forward-modelling trial populations using portions of the stack of spectra without correlated absorption as a null test-bed. In doing this we find good agreement with a bi-modal population, and we exclude from further study metal transitions which are consistent with populations small enough to plausibly arise from rare Lyman limit system interlopers.

5. We find that low ionization metals (traced by optically thin gas) are present in 1/4 of SBLAs while higher metal ionization species are typically more common in SBLAs (present in 40-80% of cases). We also find that H i shows a high degree of homogeneity as measured from the Ly\(\epsilon\) population.

6. We study the covariance between our metal features and find that metal species are significantly covariant with one another spanning all ionization potentials. In general low ions show a high excess covariance with one another, a moderate excess covariance with intermediate ions and a mild excess covariance with high ions. This is consistent with the picture presented by the population analysis where low ions appear 25% of the time and tend to appear together, while other ions are more common in SBLAs. It also indicates that when SBLAs are strong in low ions, they are strong in all metal ions, and so this defines a sub-class of metal strong SBLAs.

7. By conservatively focusing only on the silicon species Si iv, Si iii, and Si ii we find that densities in metal strong SBLAs of at least \(\log(n_{H}/\mathrm{cm}^{-3})>-2.45\) are required assuming \(>10^{3.5}\)K. This corresponds to gas clumping on \(<25\) parsec scales.

8. By conservatively focusing only on the carbon species C iv, C iii, and C ii we find that densities in metal strong SBLAs of at least \(\log(n_{H}/\mathrm{cm}^{-3})>-2.95\) are required assuming \(>10^{3.5}\)K. This corresponds to gas clumping on \(<2.5\) kpc scales.

9. We fit a mixture of three gas phases to all metal lines associated with the metal strong SBLA sub-population (excluding species that could arise due to self-shielding). The highest ionization phase is required by O vi but is unconstrained. The intermediate ionization and low ionization phases both require our minimum temperature of \(T=10^{3.5}\)K. The intermediate ionization model shows a density of \(\log(n_{H}/\mathrm{cm}^{-3})>-3.35\) (equivalent to 15 kpc clumping) with metallicity \([X/H]=-1.1\).
The favoured low ionization phase model has a density of \(n_{H}=1\mathrm{cm}^{-3}\) corresponding to scales of only 0.009 parsecs and metallicity \([X/H]=0.8\). The minimum allowed density for this phase is \(\log n_{H}>-0.93\) (at 1\(\sigma\)) corresponding to a clumping of 0.38 parsecs. These extreme and yet common CGM conditions require further study in simulations.

## Acknowledgements

We thank KG Lee for his continuum fitting code that was used in a modified form to produce the continua used in this work. We thank Ben Oppenheimer for supplying the ionization tables and providing helpful discussions. We also thank Nikki Nielsen for her useful comments about this work. This work was supported by the A*MIDEX project (ANR-11-IDEX-0001-02) funded by the "Investissements d'Avenir" French Government program, managed by the French National Research Agency (ANR), and by ANR under contract ANR-14-ACHN-0021. Some of the data presented in this work were obtained from the Keck Observatory Database of Ionized Absorbers toward QSOs (KODIAQ), which was funded through NASA ADAP grant NNX10AE84G. This research has made use of the Keck Observatory Archive (KOA), which is operated by the W. M. Keck Observatory and the NASA Exoplanet Science Institute (NExScI), under contract with the National Aeronautics and Space Administration. Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions. SDSS-IV acknowledges support and resources from the Center for High Performance Computing at the University of Utah. The SDSS website is www.sdss4.org. SDSS-IV is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration including the Brazilian Participation Group, the Carnegie Institution for Science, Carnegie Mellon University, Center for Astrophysics | Harvard & Smithsonian, the Chilean Participation Group, the French Participation Group, Instituto de Astrofisica de Canarias, The Johns Hopkins University, Kavli Institute for the Physics and Mathematics of the Universe (IPMU) / University of Tokyo, the Korean Participation Group, Lawrence Berkeley National Laboratory, Leibniz Institut fur Astrophysik Potsdam (AIP), Max-Planck-Institut fur Astronomie (MPIA Heidelberg), Max-Planck-Institut fur Astrophysik (MPA Garching), Max-Planck-Institut fur Extraterrestrische Physik (MPE), National Astronomical Observatories of China, New Mexico State University, New York University, University of Notre Dame, Observatorio Nacional / MCTI, The Ohio State University, Pennsylvania State University, Shanghai Astronomical Observatory, United Kingdom Participation Group, Universidad Nacional Autonoma de Mexico, University of Arizona, University of Colorado Boulder, University of Oxford, University of Portsmouth, University of Utah, University of Virginia, University of Washington, University of Wisconsin, Vanderbilt University, and Yale University.

## Data availability

Catalogues and derived data products from this article are available at [https://archive.lam.fr/GECO/SBLA-eBOSS](https://archive.lam.fr/GECO/SBLA-eBOSS) The data underlying this article were accessed from SDSS-IV DR16 ([https://www.sdss.org/dr16/](https://www.sdss.org/dr16/)) and Keck Observatory Database of Ionized Absorption toward Quasars (KODIAQ; [https://koa.ipac.caltech.edu/applications/KODIAQ](https://koa.ipac.caltech.edu/applications/KODIAQ)).
新しいクラスのCircumgalactic Medium absorbersについての研究を、Lyman-$\alpha$森林で発見された「強、混ざり合ったLyman-$\alpha$」 (SBLA) 吸収系に焦点を当てて行っています。私たちは、SDSS-IV/eBOSSスペクトルで$2.4<z<3.1$におけるSBLAsの研究をしています。これらのSBLAsは、138 $\,\,{\rm km}\,{\rm s}^{-1}$の強大な拡張されたLyman-$\alpha$吸収系を特徴としており、積算した$\log (N_{HI}/$cm$^{-2})=16.04\substack{+0.05 \\ -0.06}$とDopplerパラメータ$b=18.1 \substack{+0.7\\ -0.4}\,\,{\rm km}\,{\rm s}^{-1}$ を持ちます。Ly
2301.13382
Numeracy from Literacy: Data Science as an Emergent Skill from Large Language Models
Large language models (LLM) such as OpenAI's ChatGPT and GPT-3 offer unique testbeds for exploring the translation challenges of turning literacy into numeracy. Previous publicly-available transformer models from eighteen months prior and 1000 times smaller failed to provide basic arithmetic. The statistical analysis of four complex datasets described here combines arithmetic manipulations that cannot be memorized or encoded by simple rules. The work examines whether next-token prediction succeeds from sentence completion into the realm of actual numerical understanding. For example, the work highlights cases for descriptive statistics on in-memory datasets that the LLM initially loads from memory or generates randomly using python libraries. The resulting exploratory data analysis showcases the model's capabilities to group by or pivot categorical sums, infer feature importance, derive correlations, and predict unseen test cases using linear regression. To extend the model's testable range, the research deletes and appends random rows such that recall alone cannot explain emergent numeracy.
David Noever, Forrest McKee
2023-01-31T03:14:57
http://arxiv.org/abs/2301.13382v1
# Numeracy from Literacy: Data Science as an Emergent Skill from Large Language Models ###### Abstract Large language models (LLM) such as OpenAI's ChatGPT and GPT-3 offer unique testbeds for exploring the translation challenges of turning literacy into numeracy. Previous publicly-available transformer models from eighteen months prior and 1000 times smaller failed to provide basic arithmetic. The statistical analysis of four complex datasets described here combines arithmetic manipulations that cannot be memorized or encoded by simple rules. The work examines whether next-token prediction succeeds from sentence completion into the realm of actual numerical understanding. For example, the work highlights cases for descriptive statistics on in-memory datasets that the LLM initially loads from memory or generates randomly using python libraries. The resulting exploratory data analysis showcases the model's capabilities to group by or pivot categorical sums, infer feature importance, derive correlations, and predict unseen test cases using linear regression. To extend the model's testable range, the research deletes and appends random rows such that recall alone cannot explain emergent numeracy. Exploratory Data Analysis, Transformers, Text Generation, Generative Pre-trained Transformers, GPT ## 1 Introduction Three promising and challenging AI technologies are benchmarks for community research progress: autonomous driving, personal assistant, and chatbots [1]. OpenAI's ChatGPT combined the personal assistant with a chat interface in their late November 2022 public release [2-8]. Prompt or conversational customization of the ChatGPT API reveals the depth of its encyclopedic knowledge [7], somewhat akin to an effective Google advanced search or dynamically created Wikipedia entry [9]. Previous researchers have noted emergent features [10-19] beyond what a search engine, spidering indexer, or community-sourced compilation like Wikipedia might answer complex questions. This paper proposes several tasks that require ChatGPT to reason [10,20]. While traditional challenge problems presented to large language models like "2+2=" have previously not satisfied any reasoning tests [21], the latest generation seems to display what the AI community might categorize as emergent properties [15-17]. For instance, previous work highlighted ChatGPT's capability to mimic complex computer operating systems as if a hacker interacted with text commands [22-24]. As an API interface, ChatGPT could serve as a dynamic honeypot with realistic responses [23]. The present work extends this "out-of-the-box" simulation capability to role-play the data scientist or knowledge assistant as they perform exploratory data analysis [15]. ChatGPT's latest release (19JAN2023) incorporates a basic understanding of benchmark machine learning datasets like iris [25-26], Titanic survival [27-28], and Boston housing [29] without explicit programming. Some critical tests of the LLM's reasoning or knowledge [30-35] include random re-sampling of available datasets and _de novo_ generation from scratch. The present work examines whether ChatGPT possesses built-in knowledge of classic data science case studies like iris [25-26], Boston housing [29], and Titanic [27-28]. Without the built-in capability to load data, the large language models simulate user interactions [22-24], including coded Python that edits the datasets and removes memorized responses from the model's responses. 
Once the modified data receives prompts and queries, the LLM delivers emergent answers [15-17] based on its capability to perform arithmetic calculations or generate display code. Finally, each case presents word problem categories, such as "what demographic group most likely did not survive the Titanic crash? [27-28]". The paper presents a systematic version of exploratory data analysis (EDA) with language models. The models generate python code to execute in a Jupyter notebook or answer questions related to identifying correlations, trends, outliers, and missing values, as one might anticipate for typical data science pre-processing routines or follow-up summaries based on post-processing results [37] (Appendices A-E). The goal is to identify how well an LLM adapts to previously unseen datasets and generates plausible hints for extending the EDA [38]. Where possible, we generalize the results to either synthetic data that could not appear in the training data or to data slices that offer unique challenges from the well-known cases [36]. For example, by adding or subtracting random rows of well-known datasets, we force the LLM to demonstrate whether it can creatively tailor its responses to prompts in ways that differ from a simple search engine reply [37].

## 2 Methods

We organize the paper around the exploratory data analysis shown in Appendices A through E, as summarized in Table 1. Appendix A establishes basic numeracy skills, including date-time manipulation and word problems. Appendices B-D examine three tabular datasets where ChatGPT includes the data frame as part of its training data but receives customizations that make it impossible for the LLM to recall what previous steps or outcomes might apply. For instance, we apply a random train-test split to the Titanic dataset to force the LLM to answer through its emergent statistical skills rather than return the standard answers that internet tutorials present. One motivation for this approach stems from the failure of earlier LLMs to perform basic arithmetic based on the next token prediction methods. Appendix E adds a further test of ChatGPT's data science skills by creating a randomized insurance claim dataset that is valid for the session only and would not appear in any historical training from the internet archive.

Table 1: Summary of the exploratory data analysis tasks by appendix.

| Appendix | Topic | Data science skills | Dataset |
|---|---|---|---|
| A | Basic Statistics | Arithmetic, date-time manipulation, unit conversions, word problems | Prompt-driven single values |
| B | ChatGPT Iris Dataset Interactions | Descriptive statistics, missing and duplicate value identification, variable correlations, factor analysis and feature importance, plot code generation using libraries (seaborn, plotly), outlier identification, dataset augmentation | IRIS dataset: petal and sepal width and length for species identification (some knowledge embedded …) |
| C | ChatGPT Titanic Dataset | Descriptive statistics, data frame operations such as drop columns, missing values, composite column creation, python function generation and execution in place, random test-train split, feature importance, pivot tables, and factor summation | Titanic survival dataset based on passenger list demographics (some knowledge embedded but …) |

## 3 Results

For all five tests, the main result supports the hypothesis that the latest LLMs have reached sufficient scale to handle complex statistical questions. As proposed by the builders of GPT-3, these models offer public access to "zero-shot" or "few-shot" learning capabilities when scaled to sufficient parametric size. The model encodes enough general knowledge to answer mathematical questions with plausible answers, even when presented with only a few (or no) examples of formatted requirements or context. Because ChatGPT provides memory within a given session of at least 8,000 tokens (25 pages), the model's coherence and relevance present new inquiries in data science. One might call this quality "emergent" because no rules or computational layers are explicitly defined. The following sections outline the general characteristics of the four datasets presented (Iris, Titanic, Boston Housing, synthetic) along with a chain of statistical calculations selected to highlight date-time manipulations, approximations, and word problems. It is worth noting that ChatGPT provides self-contained datasets to test, which proves critical to complete any analysis. Because the LLM is frozen in time (2021) without any buffer or storage, the traditional steps needed to upload or present data fail. But having encountered the three well-known examples and one synthetic one, the model keeps track of each manipulation such that if a data row disappears, the resulting median or count changes accordingly.

### Descriptive Statistics

As illustrated in Appendix A, the model can add large numbers, reduce answers to N significant digits, identify divisors, and perform an order of magnitude calculation with unit conversions. When asked for the day of the week from history, the model correctly identifies the day from 60 years prior. While not remarkable from a lookup table or internet search, the model only generates the correct result using next-token prediction and language training. To highlight the model's capacity for manipulating extensive, multi-stage calculations, we prompt for the number of minutes in a decade, the number of inches between the Eiffel Tower and London Bridge, and the number of people who could fit on the island of Manhattan. ChatGPT answers incorrectly when asked to identify the time zone that corresponds to six hours ahead of US Eastern (EST) (False: Greenwich GMT\(+6\) or Bangladesh). When instructed that the model responded incorrectly, ChatGPT shows a Universal Time formula UTC-5 as EST, followed by UTC-5\(+6\)\(=\)UTC\(+1\), or Central European Time (CET). ChatGPT's capabilities to self-correct provide a novel user interface for redefining a precise question-and-answer sequence. For example, asking the model to do distance calculations between two cities in small units like inches seems to raise the need for further explanation: What's the point of knowing urban-scale dimensions in such small increments? When pressed in follow-up inquiries, the response showcases the conversion of units (miles to inches) but begins with an incorrect distance (3,500 miles rather than 212 miles).
While the math is correct, the more specific initial conditions are flawed. When asked a more eccentric estimation problem (the number of people standing shoulder to shoulder who could fit in Manhattan as a densely packed single layer), ChatGPT responds with the correct initial condition for the area calculation (22.96 square miles). If a person requires 2 square feet, the model fails to convert square miles to feet (ChatGPT: 8.9 million people vs. 318 million people in a 636 million sq foot area). The model qualifies its answer as a safety, logistical, and health problem based on crowd control, then further amends its calculation to exclude parks, buildings, or non-built-up areas. As noted previously, ChatGPT has access to structured and organized datasets. LLMs can perform the four basic software operations expected for databases: Create, Read, Update, and Delete (CRUD). In Appendix B, the iris dataset describes the classification challenge to identify one of three flower species by its distinct petal and sepal dimensions. For this multi-class clustering, the model answers that there are no duplicates or missing values for the 50 examples of each class. When prompted to mimic a python interpreter (as a Jupyter notebook), the model responds with the expected output given a prompt in code alone. For example, using "data.corr()" as the prompt produces the correct python output for the iris data. We prompt the model to produce graphical code given a desired figure output (such as histograms, heatmaps, boxplots, pair plots, scatter, and distribution plots). Rather than a language-only model producing the requested figures directly, ChatGPT responds with the python libraries (plotly, seaborn, matplotlib) and code, which run in a separate interpreter to give the graphs shown in Appendices B-D. When asked for interpretations based on the exploratory charts, the model responds with a description, such as a box-and-whiskers plot showing the quartiles and statistical outliers. ChatGPT does not limit its response to general code commentary for box plots but identifies the given dataset's variables and highlights conclusions for each class. While GitHub or internet tutorials might support ChatGPT training for this EDA, we alter the expected output by adding or deleting data frame rows to avoid the memorized response. This way, the emergent capabilities for performing statistical inference get isolated from the baseline training inputs.

### Coding and Plots

Appendices B-E focus on the four data science tasks to exercise ChatGPT's capabilities for python code generation. Because the LLM offers no graphical output, the problem set shifts from the previous tasks to coding solutions tested using Jupyter. Both Codex and Copilot have offered coding assistance since August 2021. In Appendix B, ChatGPT shows the output of exploratory data analysis as displayed by python code for outliers, histograms, and distribution plots for the iris dataset. In Appendix C, we ask the LLM to modify the Titanic dataset in preparation for categorical analysis and survivorship demographics. The raw data (891 rows x 12 columns) offers irrelevant predictive variables ("PassengerID"), which we drop, then let ChatGPT pick up with the finer manipulation of the modified data. The sequence of steps matters along the path to generating a final working dataset ready for machine learning.
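As a rough illustration of the kind of pandas preparation described in the next paragraph, a minimal sketch is given below. The column names follow the standard Kaggle Titanic schema, and the specific mappings and bin edges are illustrative assumptions rather than the authors' exact prompts or functions.

```python
import pandas as pd

def prepare_titanic(df: pd.DataFrame) -> pd.DataFrame:
    """Illustrative preprocessing of the Titanic passenger list for categorical analysis."""
    out = df.drop(columns=["PassengerId", "Ticket", "Cabin"], errors="ignore")
    # Recode symbols into readable category names
    out["Embarked"] = out["Embarked"].map({"C": "Cherbourg", "Q": "Queenstown", "S": "Southampton"})
    out["Pclass"] = out["Pclass"].map({1: "First", 2: "Second", 3: "Third"})
    # Bin ages into maturity categories and ten-year brackets (edges are assumptions)
    out["Maturity"] = pd.cut(out["Age"], bins=[0, 2, 12, 18, 65, 120],
                             labels=["Infant", "Child", "Teen", "Adult", "Elderly"])
    out["AgeBracket"] = pd.cut(out["Age"], bins=range(0, 90, 10))
    return out

# Example survivorship pivot once the frame is prepared:
# survival = prepare_titanic(raw).pivot_table(index="Pclass", columns="Sex",
#                                             values="Survived", aggfunc="mean")
```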
In the prompt, for instance, one can define python functions that recode the embarkation points, ticket prices, and passenger class with mappings from symbols to full names. One can further bin age into five maturity categories between infant and elderly and distribute the passenger ages into ten-year brackets. A further partition transforms the gender and marital status into categoricals. Once ChatGPT gets the python functions, the running of dataset modifications provides an in-memory style of output for further analysis. It is worth noting that these steps illustrate how a language model serves as a computational interface to perform complex statistical actions, pivot groupings, and train-test splits that could not appear in the model's original corpus. Once the unique Titanic data is created and plotted, ChatGPT can answer demographic questions about survivorship: third-class male passengers proved least likely to live through the crash and rescue. In Appendix D, we perform essential machine learning (ML) steps that drop highly correlated variables and split the Boston housing data into train and test sets. We applied linear regression models from the scikit-learn python library and asked for root mean square error (RMSE) results. The initial prompts without context led to coding suggestions, but the model refused to perform calculations on an arbitrary train-test split. However, when prompted to act as a Jupyter notebook, the code output renders actual numerical RMSE and correlation coefficient (R-squared) values. We created an example row to test the model and asked for a linear model prediction. A series of word problems rounds out the Appendix E example, such that based on the data, the model highlights low-crime areas or numbers of rooms. In a plausible real estate setting, the LLM answers with a data-driven response that a combination of many rooms and the lowest price might satisfy a buyer. To our knowledge, this output seems unique to this scale of LLM in public access, both as a data science platform and as a model capable of performing as an ML algorithm and predicting on unseen inputs. It is worth noting that previous models from the last few years, like GPT-2, failed on simple addition questions.

### Emergent Understanding

Appendix E establishes a randomized dataset created using the python library Faker to synthesize an anonymous insurance claim dataset that could not be repeated in previous LLM inputs. This library generates categorical and numerical variables, including names, addresses, companies, claim reasons, and claim confidentiality levels. For the final mock dataset created, 200 rows and nine columns make up the in-memory capability of ChatGPT. When asked to reason over the nine columns, the LLM recognizes that 6 or 8 variables are categorical and that, for the remaining two numerical categories, a median value emerges as the (randomized) claim amount of $1498.5. This number appears differently every time the conversation commences, and the net amount can be summed after the medical, travel, phone, and unknown claim reasons are segmented. The minimum possible value in this example would equal one, and the maximum (medical) claim would equal 2300. While this sample of 200 values over many trials should converge to approximately 1650, the resulting language model performs a reasonable approximation in building the anonymized dataset for insurance claim values.
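A minimal sketch of how such a synthetic claims table could be generated with Faker is shown below. Only a subset of the nine columns is sketched, and the specific column names, claim categories and the uniform draw between 1 and 2300 are assumptions consistent with the description above, not the exact generation code used in the paper.

```python
import random
import pandas as pd
from faker import Faker

fake = Faker()
random.seed(0)

rows = []
for i in range(200):
    rows.append({
        "claim_id": i + 1,
        "name": fake.name(),
        "address": fake.address().replace("\n", ", "),
        "company": fake.company(),
        "claim_reason": random.choice(["medical", "travel", "phone", "unknown"]),
        "confidentiality": random.choice(["public", "internal", "restricted"]),
        "claim_amount": random.randint(1, 2300),  # assumed uniform range per the text
    })

claims = pd.DataFrame(rows)
print(claims["claim_amount"].median())                      # descriptive statistic
print(claims.groupby("claim_reason")["claim_amount"].sum()) # categorical sums
```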
A current search engine (Google) value for: "Give me a random value between 1 and 2300" yields a link tree that samples 11.8 million examples on the internet but does not answer the arithmetic question specifically. The referral engine links to calculators. ## 4 Discussion The present work selects these demonstrations to illustrate the data science capabilities of large language models. The result extends previous efforts that highlight the computational capacity of language as an expressive and generative transformer. There exist few obvious precursors to evolving numeracy from literacy. As recently as 18 months prior, the most advanced linguistic models could not perform elementary addition. One consequence of the emergent skill that exceeds the expectations of a python coder would include the comprehensive explanation of word problem challenges. So not only does ChatGPT produce code from surveying Github, but it also reaches a natural (and relatively safe) conclusion based on the output of running sample code. While previous work has demonstrated this "fake storefront" or "Hollywood stage" effect in ChatGPT when assuming different operating systems, honeypots, or characters in a play, the role of data scientist provides a novel representation to evolve exploratory analysis. In the classic triad of iris, Titanic, and Boston housing, the work demonstrates that standard operations like pivoting, statistical observation, and anomaly detection suggest legitimate linguistic operations to supplement arithmetic understanding. Like young children, the LLM has some capacity for reasoning across symbolic abstraction (numeracy) and linguistic interpretation (literacy). An obvious extension of this work would combine the symbolic and literate to translate word problems in multiple languages with complex alphabets like Chinese, Cyrillic, or Arabic. In this way, one might imagine the union of symbolic AI with its more brute-force cousin as a trained transformer capable of compressing and representing the sum of human knowledge into "next token" predictions. ## 5 Conclusions In conclusion, the present work demonstrates large language models like ChatGPT carry a built-in capacity for performing numerical work using a basic linguistic representation and (attention-based) weights across a vast (40TB) dataset of human knowledge. Presumably, no single branch of its 175 billion parameters encodes a given dataset like Titanic or Boston housing, but even without the capability to upload the data, the model knows and illustrates complex manipulations. If presented with 8000 tokens (around 25 pages) of a novel dataset, one can presume that ad hoc and de novo data science becomes possible within an otherwise numerically challenged token-centric model by appending it as a data frame. The work surveys basic and advanced operations, including CRUD, which makes a dataset otherwise impossible to memorize but amenable to a linguistics model that can summarize and coherently alter what it stores in memory. While ChatGPT set out to demonstrate the first chat interface that could survive both "safely and naturally" in the wild, what the scale of its operation may eventually reveal is emergent qualities that either are too complex for human traceability and validation or that survive in some over-fit quality from a few key transformer branches in the maze of internet databases for training. 
This work scratches the surface of what automated data science and exploratory analysis might evolve into given a language model that can calculate, infer and predict. ## Acknowledgments The authors thank the PeopleTec Technical Fellows program for encouragement and project assistance. The authors thank the researchers at OpenAI for developing large language models and allowing public access to ChatGPT.
大規模言語モデル(LLM)は、OpenAIのChatGPTやGPT-3のようなものと、読み書きの変換の翻訳課題を探索するためのユニークなテストベッドを提供しています。18ヶ月前の公開されたTransformerモデルは、基礎的な算術を実行できませんでした。ここでは、4つの複雑なデータセットの統計分析は、単純なルールで記憶したりエンコードしたりできない演算を組み合わせます。この作業では、文の完了から実際の数字理解への次予測成功を検証しています。たとえば、LLMは、メモリからロードするか、Pythonライブラリを使用してランダムに生成したインメモリーデータセットについて記述統計を行うケースを調べます。その結果、探索的なデータ分析は、モデルがカテゴリカル総をグループ化したり、特徴の重要性を推測したり、相関を導き出すことができ、そして線形回帰を使用して見逃したテストケースを予測できることを示しています。このモデルの
2309.04962
Scalar fields around a loop quantum gravity black hole in de Sitter spacetime: Quasinormal modes, late-time tails and strong cosmic censorship
Loop quantum gravity, as one branch of quantum gravity, holds the potential to explore the fundamental nature of black holes. Recently, according to the quantum Oppenheimer-Snyder model in loop quantum cosmology, a novel loop quantum corrected black hole in de Sitter spacetime has been discovered. Here, we first investigate the corresponding quasinormal modes and late-time behavior of massless neutral scalar field perturbations based on such a quantum-modified black hole in de Sitter spacetime. The frequency and time domain analysis of the lowest-lying quasinormal modes is derived by Prony method, Matrix method as well as WKB approximation. The influences of loop quantum correction, the black hole mass ratio, and the cosmological constant on the quasinormal frequencies are studied in detail. The late-time behaviors of quantum-modified black holes possess an exponential decay, which is mainly determined not only by the multipole number but also by the cosmological constant. The impact of loop quantum correction on the late-time tail is negligible, but it has a significant impact on damping oscillation. To explore spacetime singularities, we examine the validity of strong cosmic censorship for a near-extremal quantum-modified black hole in de Sitter spacetime. As a result, it is found that the strong cosmic censorship is destroyed as the black hole approaches the near-extremal limit, but the violation becomes weaker as the cosmological constant and the loop quantum correction increase.
Cai-Ying Shao, Cong Zhang, Wei Zhang, Cheng-Gang Shao
2023-09-10T08:32:49
http://arxiv.org/abs/2309.04962v2
# Strong cosmic censorship for a black hole in loop quantum gravity

###### Abstract

A fine gravitational theory is essentially expected to deal with the problem of spacetime singularities. Loop quantum gravity, as one branch of quantum gravity, has the potential to explore the nature of black holes. Recently, according to the quantum Oppenheimer-Snyder model in loop quantum cosmology, a novel loop quantum corrected black hole in de Sitter spacetime has been discovered. Here, we focus on examining the strong cosmic censorship (SCC) based on such a quantum modified black hole by considering a massless neutral scalar field perturbation. As a result, we find that the SCC is destroyed as the black hole approaches the near-extremal limit. Notably, the critical value of the black hole mass ratio for such a violation increases with the increase of the cosmological constant. This implies that the cosmological constant plays an important role in moderating the violation of the SCC.

## I Introduction

Spacetime singularities, characterized by infinite curvature or density, have been a subject of great interest and curiosity in the fields of gravitation theory and relativistic astrophysics. According to the singularity theorems proved by Hawking and Penrose, the existence of singularities is unavoidable in generic gravitational collapses. The presence of singularities poses profound challenges to our understanding of the universe within the context of classical general relativity. One specific concern is the existence of naked singularities, which are singularities that are not hidden within a black hole event horizon and thus could be observed by outside observers, breaking down the predictive power of classical general relativity. In order to alleviate such a loss of predictability, Penrose proposed the cosmic censorship conjectures [1]. One is called the weak cosmic censorship conjecture (WCC), which asserts that any spacetime singularity formed in a generic gravitational collapse should be covered by a black hole horizon. It is obvious that the WCC guarantees the predictive power of classical general relativity only in the spacetime region outside of the black hole. The predictability of classical general relativity inside of the black hole is further restored by the other conjecture, named the strong cosmic censorship conjecture (SCC), which claims colloquially that timelike singularities are not allowed, or can be formulated equivalently as a more rigorous mathematical statement that the Cauchy horizon inside of the black hole is unstable for generic perturbations and thus inextendible. This is the case for Kerr and Reissner-Nordstrom black holes, where the would-be timelike singularity does not lead to the violation of the SCC because the Cauchy horizon becomes singular and inextendible due to the exponential blueshift effect of the perturbations along it. Actually, in asymptotically flat spacetimes, the SCC is always valid except for the accelerating black holes [2; 3]. However, the validity of the SCC becomes more complicated in asymptotically de Sitter spacetimes. A positive cosmological constant leads to an exponential decay of the external perturbations, which can compete with the aforementioned blueshift effect along the Cauchy horizon [4; 5]. Thus the validity of the SCC depends on which one will win in the competition.
To be more specific, the SCC has recently been found to be violated in the nearly extremal charged Reissner-Nordstrom de Sitter (RNdS) black hole by the scalar field [6; 7; 8; 9; 10], the fermionic field [11; 12; 13], and the gravito-electromagnetic field [14]. In addition, for the rotating Kerr de Sitter black hole, the SCC is respected by bosonic field perturbations [15; 16], but violated by the fermionic field perturbation [17]. For the Kerr-Newman de Sitter black hole, the SCC is violated by both the scalar and fermionic fields [18]. Last but not least, it is noteworthy that other factors, such as the smoothness of the initial data, nonlinear effects, dark matter and dark energy, spacetime dimensions, and quantum effects of the perturbation fields, could also impact the validity of the SCC [19; 20; 21; 22; 23; 24]. On the other hand, the presence of singularities in classical general relativity highlights the necessity for a theory of quantum gravity (QG) that combines the principles of quantum mechanics and general relativity. Among the various approaches to QG, loop quantum gravity (LQG) has shown great promise, with significant advancements made (see, e.g., [25; 26; 27; 28; 29; 30; 31] and the references therein). By applying the procedure of loop quantization to spherically symmetric black holes, one has gained many insights into the quantum nature of black holes [32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42], where the singularity of the Schwarzschild black hole is believed to be resolved through the effects of LQG, as should be the case, although the specific details of how this resolution occurs are scheme dependent. In particular, with the quantum Oppenheimer-Snyder model in loop quantum cosmology, a new quantum black hole model has been derived most recently [39], where the Schwarzschild singularity is resolved by a transition region that contains an inner horizon. As a result, the global structure of such a quantum black hole model resembles that of the charged Reissner-Nordstrom black hole. In this sense, the SCC is still potentially threatened by the emergence of the inner Cauchy horizon if one immerses this quantum-modified black hole in de Sitter space. The purpose of this paper is to examine whether the SCC holds for such a quantum-modified black hole in de Sitter space. To this end, we first follow the same procedure developed in [39] to derive the modified metric of the loop quantum black hole in de Sitter space in the next section. Then we present the dynamics of a neutral massless scalar perturbation and derive Christodoulou's formulation of the SCC in terms of quasinormal modes in Sec. III. With the above preparation, we use different numerical methods to calculate the quasinormal modes and explore the validity of the SCC in Sec. IV. Finally, the concluding remarks are presented in the last section. ## II The loop quantum gravity corrected geometry of the black hole in de Sitter space Let us follow the procedure introduced in [39] to obtain the quantum-modified spacetime by considering the quantum Oppenheimer-Snyder model. In this model, the entire spacetime is divided into two regions. One region comprises a pressureless dust ball with a constant density, and the other region is a vacuum outside the dust ball. In the region with dust, we introduce coordinates \((\tau,\tilde{r},\theta,\phi)\) with \(0<\tilde{r}<\tilde{r}_{0}\), which are adapted to the symmetry of the dust ball. 
Then, the metric of the ball takes the form \[ds_{\rm in}^{2}=-d\tau^{2}+a(\tau)^{2}(d\tilde{r}^{2}+\tilde{r}^{2}d\Omega^{2}), \tag{1}\] where \(d\Omega^{2}=d\theta^{2}+\sin^{2}\theta d\phi^{2}\). The dynamics of the scale factor \(a(\tau)\) is governed by the LQC modified Friedmann equation \[H^{2}=\left(\frac{\dot{a}}{a}\right)^{2}=\frac{8\pi G}{3}\rho(1-\frac{\rho}{ \rho_{c}})+\frac{\Lambda}{3},\quad\rho=\frac{M}{\frac{4}{3}\pi\tilde{r}_{0}^{ 3}a^{3}}, \tag{2}\] where the deformation parameter \(\rho_{c}\) denotes the critical density defined as \(\rho_{c}=\sqrt{3}/(32\pi^{2}\gamma^{3}G^{2}\hbar)\) with the Barbero-Immirzi parameter \(\gamma\). \(M\) is the mass of the ball with radius \(a(\tau)\tilde{r}_{0}\). It should be noted that the current work adds a cosmological constant term to the modified Friedmann equation, in contrast to the initial model considered in [39]. Eq. (2) reverts to the usual Friedmann equation in the classical regime where \(\rho\ll\rho_{c}\). However, in the quantum regime where \(\rho\) is comparable with \(\rho_{c}\), so that the spacetime curvature becomes Planckian, the deformation term prevents the matter density \(\rho(\tau)\) from reaching infinity, which thus prevents the formation of the singularity. Indeed, according to Eq. (2), at the moment \(\tau_{b}\) with \(\rho(\tau_{b})=\rho_{c}\left[1+\sqrt{1+\Lambda/(2\pi G\rho_{c})}\right]/2\), one has \(H=0\), which signifies a change of the dynamics of the ball from the collapsing phase to the expanding phase at \(\tau_{b}\). In the region outside the dust ball, we assume the spacetime to be spherically symmetric and static, as done in [39]. We can use the coordinates \((t,r,\theta,\phi)\) to describe this region, which are adapted to the symmetry of the spacetime. In these coordinates, the metric of the outside region reads \[ds_{\rm out}^{2}=-f(r)dt^{2}+g(r)^{-1}dr^{2}+r^{2}d\Omega^{2}, \tag{3}\] where \(f(r)\) and \(g(r)\) are two unknown functions to be determined. In order to determine the unknown functions \(f(r)\) and \(g(r)\), we need to find the innermost boundary of the outside region, which is glued to the surface of the dust ball. The junction condition for the gluing requires that the induced 3-metrics and the extrinsic curvatures along the gluing surfaces, obtained from the 4-metrics \(ds_{\rm in}^{2}\) and \(ds_{\rm out}^{2}\) respectively, are continuous. It should be noted that the worldline \(\tau\mapsto(\tau,\tilde{r}_{0},\theta,\phi)\) of each particle on the surface of the dust ball is a timelike geodesic without rotation. This implies that the innermost surface of the outside region is also composed of a congruence of freely falling timelike geodesics associated with the metric \(ds_{\rm out}^{2}\). Moreover, let \(\tau\rightarrow(t(\tau),r(\tau),\theta,\phi)\) be a geodesic in the innermost surface of the outside region, with \(\tau\) being the length of the geodesic. Then, the surfaces are glued by the identification \((\tau,\tilde{r}_{0},\theta,\phi)\sim(t(\tau),r(\tau),\theta,\phi)\). The calculation can be simplified by this junction condition. So far, we have built our model and sketched the calculation needed to obtain the metric of the outside region from the junction condition. Then, just following the procedure shown in [39], we get \[f(r)=g(r)=1-\left(\frac{2GM}{r}+\frac{\Lambda r^{2}}{3}-\frac{\alpha G^{2}M^{2 }}{r^{4}}\left(1+\frac{\Lambda r^{3}}{6GM}\right)^{2}\right) \tag{4}\] where \(\alpha=16\sqrt{3}\pi\gamma^{3}G\hbar\), proportional to the Planck area, is the quantum deformation parameter. 
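To make the structure of this metric function concrete, the following is a minimal numerical sketch (not part of the original paper) that evaluates \(f(r)\) from Eq. (4), locates its positive roots, i.e., the Cauchy, event, and cosmological horizons discussed in the next paragraph, and computes the surface gravity \(\kappa=\frac{1}{2}|f'(r)|\) at a horizon. The parameter values, the grid-based root bracketing, and the use of scipy's brentq are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np
from scipy.optimize import brentq

G = hbar = 1.0
gamma = 0.2375
alpha = 16 * np.sqrt(3) * np.pi * gamma**3 * G * hbar   # quantum deformation parameter

def f(r, M, Lam):
    """Metric function f(r) of Eq. (4)."""
    return 1.0 - (2*G*M/r + Lam*r**2/3
                  - alpha*G**2*M**2/r**4 * (1 + Lam*r**3/(6*G*M))**2)

def horizons(M, Lam, r_min=1e-2, n=20000):
    """Positive roots of f: Cauchy (r_i), event (r_h) and cosmological (r_c) horizons."""
    r = np.linspace(r_min, 2.0*np.sqrt(3.0/Lam), n)      # grid reaching beyond the dS scale
    fr = f(r, M, Lam)
    return [brentq(f, r[i], r[i+1], args=(M, Lam))
            for i in range(n - 1) if fr[i]*fr[i+1] < 0]

def surface_gravity(r0, M, Lam, eps=1e-6):
    """kappa = |f'(r0)|/2 via a central finite difference."""
    return abs(f(r0+eps, M, Lam) - f(r0-eps, M, Lam)) / (4*eps)

if __name__ == "__main__":
    M, Lam = 1.0, 0.1                                    # illustrative values, cf. Fig. 1
    r_roots = horizons(M, Lam)
    print("horizons r_i < r_h < r_c:", r_roots)
    if r_roots:
        print("Cauchy-horizon surface gravity kappa_i:", surface_gravity(r_roots[0], M, Lam))
```

When three roots are found, the black hole has the Reissner-Nordstrom-de-Sitter-like causal structure assumed in the rest of the analysis; otherwise the chosen \((M,\Lambda)\) lies outside the regime of interest.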
It should be noted that the metric (4) is valid only for \(r>r_{b}\), with \(r_{b}\) denoting the minimal radius of the dust ball, at which the bounce occurs [39]. For convenience, we set \(G=\hbar=1\) and \(\gamma=0.2375\) in the remainder of this paper. In Fig. 1, we plot \(f(r)\) as a function of \(r\) for \(\Lambda=0.1\) and several values of \(M\). As shown in the figure, for \(M\) larger than some extremal value \(M_{\rm Ext}\), the metric function \(f(r)\) has three roots, corresponding to the three horizons of the black hole. They are, respectively, the Cauchy horizon \(r_{i}\), the event horizon \(r_{h}\), and the cosmological horizon \(r_{c}\), with \(r_{i}<r_{h}<r_{c}\). If one decreases the mass of the black hole for a given cosmological constant, the Cauchy and event horizons gradually approach each other. When the Cauchy horizon coincides with the event horizon, the mass reaches the extremal value, which is denoted as \(M_{\rm Ext}\). For \(M<M_{\rm Ext}\), the event horizon disappears, resulting in a naked singularity. This case is thus prohibited by the WCC. In this work, we focus only on black holes with three horizons. ## III Quasinormal modes and strong cosmic censorship Now, we consider a massless neutral scalar field perturbation in the above background. The equation of motion in such a curved spacetime is governed by the following Klein-Gordon equation: \[\Box\Phi=0. \tag{5}\] Owing to the spherical symmetry of the spacetime, the scalar field can be expanded as \[\Phi=\frac{\phi(r)}{r}Y_{lm}(\theta,\varphi)e^{-i\omega t}, \tag{6}\] where \(Y_{lm}(\theta,\varphi)\) is the spherical harmonic function. By plugging it into the Klein-Gordon equation, the master equation for the radial part reads \[\left(\frac{d^{2}}{dr_{*}^{2}}+\omega^{2}-V_{eff}(r)\right)\phi(r)=0, \tag{7}\] where the effective potential is given by \[V_{eff}(r)=f(r)\left[\frac{l(l+1)}{r^{2}}+\frac{f^{\prime}(r)}{r}\right]. \tag{8}\] Here \(r_{*}\) is the tortoise coordinate, defined by \(dr_{*}=\frac{dr}{f(r)}\). Physically, there exist only purely ingoing waves near the event horizon and purely outgoing waves near the cosmological horizon [43]. Thus the boundary conditions are imposed as \[\phi(r)\approx e^{-i\omega r_{*}}\left(r\to r_{h}\right),\quad\phi(r) \approx e^{i\omega r_{*}}\left(r\to r_{c}\right). \tag{9}\] Then, the discrete quasinormal frequencies can be derived by solving the equation of motion with the above boundary conditions. On the other hand, if one imposes a purely ingoing wave near the event horizon, the solution of the equation of motion contains both outgoing and ingoing waves near the Cauchy horizon, which can be expressed as \[\phi_{ingoing}\approx e^{-i\omega u}(r-r_{i})^{\frac{i\omega}{ \kappa_{i}}},\quad\phi_{outgoing}\approx e^{-i\omega u}, \tag{10}\] where \(u\) is the outgoing coordinate defined as \(u=t-r_{*}\) and \(\kappa_{i}\) is the surface gravity of the Cauchy horizon, defined as \(\kappa_{i}=\left|\frac{1}{2}f^{\prime}\left(r_{i}\right)\right|\). Obviously, the ingoing wave has a non-smooth radial dependence, which may lead to non-smooth behavior of the energy-momentum tensor of the scalar field. Commonly, the violation of the SCC means that the scalar field admits a weak solution that can be extended beyond the Cauchy horizon. In other words, the energy-momentum tensor, which involves the square of the first derivative of the scalar field, must be integrable at the Cauchy horizon, which requires [6] \[\beta=-\frac{\text{Im}\omega}{\kappa_{i}}>\frac{1}{2}. 
\tag{11}\] for all the quasinormal modes. On the contrary, as long as the lowest-lying quasinormal mode satisfies \(\beta\leq\frac{1}{2}\), the SCC is preserved. Hence, in order to check the validity of the SCC, we exclusively focus on the lowest-lying quasinormal modes in the remainder of this paper. ## IV Numerical methods and relevant results In this section, we will use two numerical methods to accurately calculate the lowest-lying quasinormal modes and present some relevant results. By now, many high-precision numerical methods for computing quasinormal modes have been developed [44; 45; 46]. Here, we introduce the finite difference method [47] to obtain the numerical evolution of the scalar field and then extract the quasinormal spectrum from the data samples with the Prony method [48]. In order to check the correctness of our results, we also employ the matrix method [49]. In addition, the WKB approximation [50; 51; 52] is also employed for the quasinormal modes with large \(l\). Note that the relevant quasinormal modes can be classified into three distinct families, namely, the near-extremal modes with \(l=0\), the de Sitter modes with \(l=1\), and the photon sphere modes with large \(l\). In what follows, we are going to explore the neutral massless scalar field with these three families of modes. First, it is necessary to perform a coordinate transformation to the double null coordinates, defined as \(u=t-r_{*}\) and \(v=t+r_{*}\). Accordingly, the Klein-Gordon equation can be expressed as \[-4\frac{\partial^{2}\phi}{\partial u\partial v}=V_{\text{eff}}(r(u,v))\phi. \tag{12}\] According to the finite difference scheme, the data at \(N\) can be obtained from those at \(W\), \(E\), and \(S\), such that the above equation of motion gives rise to \[\phi_{N}=\phi_{W}+\phi_{E}-\phi_{S}-\Delta u\Delta vV_{\text{eff }}(r(u,v))\frac{\phi_{W}+\phi_{E}}{8}, \tag{13}\] where the indices \(N,W,E,S\) denote grid points, respectively corresponding to the points \(N\equiv(u+\Delta,v+\Delta)\), \(W\equiv(u,v+\Delta)\), \(E\equiv(u+\Delta,v)\), and \(S\equiv(u,v)\), with \(\Delta\) the step width in \((u,v)\). The time-domain profile is then obtained once specific initial conditions are provided, \[\phi(u,0)=0,\quad\phi(0,v)=e^{-\frac{(v-v_{c})^{2}}{2\sigma^{2}}}, \tag{14}\] where \(v_{c}\) and \(\sigma\) correspond to the center and width of the Gaussian wave packet. The resulting temporal evolution \(\phi(t,r_{*})\) is then read off from the evolution data at equally spaced time steps. Next, to extract the quasinormal modes from the temporal evolution data, the Prony method is a very useful tool; as an extension of the Fourier decomposition, it is of great significance for signal processing and data analysis. The late-time signal at a certain \(r_{*}\) is composed of a set of quasinormal modes, which can be expanded as \[\phi(t)=\sum_{j=1}^{p}C_{j}e^{-i\omega_{j}t}. \tag{15}\] The time-domain profile covers the interval between \(t_{0}\) and \(t=t_{0}+qh\), where \(h\) is the time spacing between samples. The number of samples \(q\) is an integer satisfying \(q=2p\). For convenience, every sample is labeled by an integer \(n\). According to the above formula, the time-domain data at any time can be expressed as \[x_{n}=\sum_{j=1}^{p}\tilde{C}_{j}z_{j}^{n}, \tag{16}\] where \(x_{n}=\phi\left(t_{0}+nh\right),z_{j}=e^{-i\omega_{j}h},\tilde{C}_{j}=C_{j}e^ {-i\omega_{j}t_{0}}\). 
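Before turning to the characteristic-polynomial derivation of the \(z_{j}\) below, here is a compact numerical sketch (not from the paper) of this Prony step: it solves the linear-prediction equations for the \(\alpha_{i}\) by least squares, finds the roots \(z_{j}\), and recovers \(\omega_{j}=\frac{i}{h}\ln z_{j}\). The synthetic two-mode test signal and the use of numpy are assumptions made purely for illustration.

```python
import numpy as np

def prony(x, h, p):
    """Fit x_n = sum_j C~_j z_j^n (Eq. (16)) and return omega_j = (i/h) ln z_j."""
    q = len(x)
    # Linear-prediction equations: sum_{i=1}^p alpha_i x_{j-i} = -x_j, solved by least squares.
    rows = np.array([x[j - p:j][::-1] for j in range(p, q)])
    alpha = np.linalg.lstsq(rows, -x[p:q], rcond=None)[0]
    z = np.roots(np.concatenate(([1.0], alpha)))   # roots of z^p + alpha_1 z^{p-1} + ... + alpha_p
    return 1j * np.log(z) / h

if __name__ == "__main__":
    # Synthetic late-time signal made of two damped oscillations (stand-ins for quasinormal modes).
    h = 0.1
    t = np.arange(0.0, 60.0, h)
    w_true = [0.5 - 0.09j, 0.8 - 0.25j]
    x = np.exp(-1j*w_true[0]*t) + 0.3*np.exp(-1j*w_true[1]*t)
    print("recovered omega:", sorted(prony(x, h, p=2), key=lambda w: w.real))
```

The least-squares step above is exactly the system of equations for the \(\alpha_{i}\) that is derived analytically in the following paragraphs.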
In order to find \(z_{j}\), it is necessary to introduce a polynomial function \[A(z)=\prod_{j=1}^{p}\left(z-z_{j}\right)=\sum_{i=0}^{p}\alpha_{i}z^{p-i}, \tag{17}\] with \(\alpha_{0}=1\). Obviously, for any integer \(j\) from 1 to \(p\), \(A(z_{j})=0\). Thus, it is easy to obtain the sum \[\sum_{i=0}^{p}\alpha_{i}x_{j-i}=\sum_{i=0}^{p}\alpha_{i}\sum_{k=1}^{p}C_{k}z_{ k}^{j-i}=\sum_{k=1}^{p}C_{k}z_{k}^{j-p}A\left(z_{k}\right)=0. \tag{18}\] Considering \(\alpha_{0}=1\), the above equation can be rewritten as \[\sum_{i=1}^{p}\alpha_{i}x_{j-i}=-x_{j}. \tag{19}\] Thus, we can obtain \(p\) equations by taking \(j\) from \(p+1\) to \(q\), so that the \(\alpha_{i}\) can be solved for. After substituting the \(\alpha_{i}\) into Eq. (17), the \(z_{j}\) can be derived easily. Then the quasinormal modes are obtained from the relation \(\omega_{j}=\frac{i}{h}\ln\left(z_{j}\right)\). The coefficients \(C_{j}\) can also be found according to Eq. (16). As a comparison, we further resort to the matrix method to ensure the accuracy of the numerical results. By introducing a suitable wave function with a new variable \(Y(y)\) and recasting the equation of motion into a regular form on the interval \([0,1]\), Eq. (7) can be transformed into a matrix equation of the form \(\Gamma(\omega)\mathcal{Y}=0\) with the boundary condition \(Y(0)=Y(1)=0\), where \(\Gamma(\omega)\) is a matrix and \(\mathcal{Y}\) a vector given by \(\mathcal{Y}_{i}=Y(y_{i})\). The quasinormal modes can be determined by solving the nonlinear algebraic equation \(\det(\Gamma(\omega))=0\). In Tabs. I and II, we present the low-lying quasinormal modes for the massless neutral scalar field obtained from both the Prony method and the matrix method. As shown in Tabs. I and II, the numerical results derived by the two methods are consistent with each other, with discrepancies within 5 percent, which demonstrates the reliability of our numerical calculations. Moreover, we also employ the WKB approximation to calculate the low-lying quasinormal modes with large \(l\) and find that the corresponding results agree with those of the other methods. It is noted that as the mass of the black hole approaches the extremal limit, \(\frac{-\mathrm{Im}(\omega)}{\kappa_{i}}\) becomes larger and larger. This indicates that the SCC might be violated in the near-extremal regime. As a demonstration, Fig. 2 presents the variation of \(\beta\) with the black hole mass ratio \(M_{\mathrm{Ext}}/M\) for different cosmological constants at a given \(l\). As expected, when the cosmological constant is fixed, the SCC is violated only when the mass ratio exceeds a certain critical value. In addition, the critical value of the mass ratio for the violation of the SCC increases with the cosmological constant. To test this further, we also plot the variation of \(\beta\) with the cosmological constant \(\Lambda\) for different black hole mass ratios \(M_{\mathrm{Ext}}/M\) in Fig. 3. It is noted that the larger the cosmological constant is, the harder it becomes to violate the SCC. It seems that the cosmological constant plays an important role in recovering the SCC. Furthermore, the critical value of \(\Lambda\) needed to rescue the SCC becomes larger with the increase of the mass ratio. Finally, to display the behavior of \(\beta\) more intuitively, we present the density plots of \(\beta\) in the \(\frac{M_{\rm Ext}}{M}-\Lambda\) plane in Fig. 4. The critical threshold \(\beta=1/2\) is marked as a solid red line. Only in the region above this line can the SCC be violated. 
As one can see, the black dashed line \(\frac{M_{\rm Ext}}{M}=0.99\) illustrates that the SCC is violated for a smaller cosmological constant but respected when the cosmological constant is large enough. In any case, the SCC is always violated once the black hole gets sufficiently close to the extremal limit. The critical value of the black hole mass ratio for the violation increases with the cosmological constant. To a certain degree, the cosmological constant can moderate the violation of the SCC.
Figure 2: The lowest-lying quasinormal modes with the frequency \(\beta=\frac{-\mathrm{Im}(\omega)}{\kappa_{i}}\) as a function of the black hole mass ratio \(M_{\mathrm{Ext}}/M\), where the dotted magenta horizontal line represents the threshold value \(\beta=\frac{1}{2}\) and the dotted cyan vertical line denotes the critical value of the mass ratio for the violation of the SCC.
Figure 3: The lowest-lying quasinormal modes with the frequency \(\beta=\frac{-\mathrm{Im}(\omega)}{\kappa_{i}}\) as a function of the cosmological constant \(\Lambda\), where the dotted magenta horizontal line represents the threshold value \(\beta=\frac{1}{2}\) and the dotted cyan vertical line denotes the critical value of \(\Lambda\) for the restoration of the SCC.
## V Concluding remarks In this paper, we consider the perturbation of a massless neutral scalar field on top of a loop quantum gravity corrected black hole in de Sitter spacetime. We obtain the low-lying quasinormal modes by employing the Prony method and the matrix method. Based on these, we further explore the validity of the SCC under such a perturbation. As a result, the SCC is always violated when the black hole approaches the extremal limit. It is found that the larger the cosmological constant is, the harder it is for the SCC to be violated, which implies that the cosmological constant plays an important role in alleviating such a violation. We conclude our paper by pointing out a potential tension between the SCC and LQG. If the SCC is valid, it means that the Cauchy horizon becomes singular and inextendible. But the effect of LQG is supposed to resolve any potential singularity. If this is the case, the Cauchy horizon is supposed to remain smooth and extendible in LQG even in the presence of our scalar field, invalidating the SCC even in the validity regime of the SCC we have found in this paper. Such a tension may be resolved by shifting our perspective. For instance, although the emergent Cauchy horizon in full LQG is smooth and extendible, so that the predictive power is lost beyond the Cauchy horizon, such a loss might simply be the classical manifestation of quantum uncertainty. In this sense, the SCC should be discarded. To gain a deeper understanding of this issue, one should explore what the emergent classical geometry really looks like in loop quantum gravity coupled to the quantum scalar field. But this is beyond the scope of this paper and is expected to be reported elsewhere. ## Acknowledgements This work is supported by the National Key R&D Program of China under Grant No.2021YFC2203001 and Grant No.2022YFC2204602, the Natural Science Foundation of China Grant No.11925503, Grant No.12075026, and Grant No. 12275022.
Loop quantum gravity, as one branch of quantum gravity, holds the potential to explore the fundamental nature of black holes. Recently, according to the quantum Oppenheimer-Snyder model in loop quantum cosmology, a novel loop quantum corrected black hole in de Sitter spacetime has been discovered. Here, we first investigate the quasinormal modes and late-time behavior of such a quantum-modified black hole in de Sitter spacetime. The frequency- and time-domain analysis of the lowest-lying quasinormal modes is carried out with the Prony method, the matrix method, and the WKB approximation. The influences of the loop quantum correction, the black hole mass ratio, and the cosmological constant on the quasinormal frequencies are studied in detail. The late-time behavior of the quantum-modified black hole exhibits an exponential decay, which is determined not only by the multipole number but also by the cosmological constant.
2309.10917
End-to-End Speech Recognition Contextualization with Large Language Models
In recent years, Large Language Models (LLMs) have garnered significant attention from the research community due to their exceptional performance and generalization capabilities. In this paper, we introduce a novel method for contextualizing speech recognition models by incorporating LLMs. Our approach casts speech recognition as a mixed-modal language modeling task based on a pretrained LLM. We provide audio features, along with optional text tokens for context, to train the system to complete transcriptions in a decoder-only fashion. As a result, the system is implicitly incentivized to learn how to leverage unstructured contextual information during training. Our empirical results demonstrate a significant improvement in performance, with a 6% WER reduction when additional textual context is provided. Moreover, we find that our method performs competitively, improving by 7.5% WER overall and 17% WER on rare words against a baseline contextualized RNN-T system that has been trained on a more than twenty-five times larger speech dataset. Overall, we demonstrate that by adding only a handful of trainable parameters via adapters, we can unlock contextualized speech recognition capability for the pretrained LLM while keeping the same text-only input functionality.
Egor Lakomkin, Chunyang Wu, Yassir Fathullah, Ozlem Kalinli, Michael L. Seltzer, Christian Fuegen
2023-09-19T20:28:57
http://arxiv.org/abs/2309.10917v1
# End-to-End Speech Recognition Contextualization with Large Language Models ###### Abstract In recent years, Large Language Models (LLMs) have garnered significant attention from the research community due to their exceptional performance and generalization capabilities. In this paper, we introduce a novel method for contextualizing speech recognition models by incorporating LLMs. Our approach casts speech recognition as a mixed-modal language modeling task based on a pretrained LLM. We provide audio features, along with optional text tokens for context, to train the system to complete transcriptions in a decoder-only fashion. As a result, the system is implicitly incentivized to learn how to leverage unstructured contextual information during training. Our empirical results demonstrate a significant improvement in performance, with a 6% WER reduction when additional textual context is provided. Moreover, we find that our method performs competitively, improving by 7.5% WER overall and 17% WER on rare words against a baseline contextualized RNN-T system that has been trained on a more than twenty-five times larger speech dataset. Overall, we demonstrate that by adding only a handful of trainable parameters via adapters, we can unlock contextualized speech recognition capability for the pretrained LLM while keeping the same text-only input functionality. Egor Lakomkin, Chunyang Wu, Yassir Fathullah\({}^{\dagger}\), Ozlem Kalinli, Michael L. Seltzer, Christian Fuegen Meta + Footnote †: Work done during internship at Meta AI. contextual biasing, large language models, speech recognition ## 1 Introduction In recent years, there has been growing interest in Large Language Models (LLMs) due to their remarkable efficacy in enhancing performance in tasks like question answering and summarization, surpassing specialized models [1, 2]. LLMs are trained on vast quantities of text data, thereby encapsulating a wealth of world knowledge within the network. This accumulated knowledge and contextual understanding prove to be particularly beneficial in the field of Automatic Speech Recognition (ASR), especially when additional context surrounding an utterance is available beyond the audio alone. For example, video titles and descriptions can provide insights into the topic of the video or offer clues about named entities that might be mentioned [3, 4]. Such contextual information can assist in disambiguating challenging pronunciations, as certain words, domain-specific terms, or named entities can often be inferred from context alone. Traditional approaches to ASR contextualization [4, 3, 5, 6] operate at the token or phrase level, employing techniques like biasing with weighted finite state transducers (WFSTs) or using specialized attention networks. These are typically either incorporated during the decoding stage or trained as separate components. Consequently, contextualization significantly improves the ASR system's ability to recognize named entities or specialized in-domain terms. However, there are some limitations to these approaches:
- The biasing is limited to individual phrases or words, as opposed to contextualizing based on the external information as a whole (for example, topic-based biasing).
- The biasing strength is usually controlled via a hyper-parameter or requires specialized architectural changes and training procedures to ensure the system is not overbiased.
- Some of the contextualization methods influence only the decoder state without interacting with the encoder directly. 
In this work, we propose Speech LLaMA - a decoder-only architecture, inspired by recent developments in LLMs, tailored towards speech recognition. It is trained to use the contextual information end-to-end without any additional hyperparameters.
Figure 1: A speech recognition model with mixed-modal context consisting of audio and optional text tokens based on a pretrained LLM backbone. Speech encoder and LLM decoder are both initially pretrained. The LLM weights are frozen (orange blocks), while the audio encoder and LoRa adapters are fine-tuned during training (blue blocks).
Specifically, 1) we prepend the whole available textual context as a prompt to the ASR system along with the audio tokens. The Speech LLaMA hence has the full flexibility to look back and cross-correlate the contextual text tokens and the acoustic tokens when decoding the next spoken word. And 2) we employ the publicly available 7B LLaMA LLM [1] as a pretrained decoder for the Speech LLaMA. This simplifies the overall design of a contextual ASR system, as speech recognition can be considered as mixed-modal language modeling with next-token prediction. Our intuition is that pre-trained LLMs already distill linguistic information that should be particularly useful when reasoning about which part of the context is relevant given the utterance. Our results on a competitive benchmark suggest the feasibility of this modeling approach. ## 2 Related Work There have been several works on speech recognition model contextualization, including deep and shallow biasing [8, 4]. Le et al. [4] introduced a weighted finite state transducer (WFST) composed from biasing strings, which is attached dynamically during decoding, and the scores of the RNN-T system and the biasing WFST are interpolated. The advantage of such approaches is that they can be attached to any system after training is completed. Another line of research is deep biasing methods that incorporate contextualization end-to-end during model training [9, 3, 6, 10, 11, 5]. A common limitation of these approaches is that they bias on the phrase level, rather than on the full available context. In addition, these approaches require specialized biasing modules added to the main ASR architecture. In parallel to this research, several approaches were presented that incorporate LLMs for speech-related tasks. Wu et al. [12] incorporated the LLaMA LLM for speech translation by concatenating a textual prompt (_"Translate audio to language X"_) with audio representations. The AudioPalm [13] model was proposed, mixing audio and text tokens for speech-to-text and speech-to-speech tasks. Fathullah et al. [14] presented results on enabling speech recognition capabilities for the LLaMA model on multi-lingual data. Recently, the Whisper model [15] incorporated a biasing approach, where the previous segment's transcription is added to the prompt for long-form speech recognition. In contrast to their work, we bias the system on unstructured and sometimes unrelated textual context, as the video title and description do not always match the content of the speech. ## 3 Experimental Setup **Model**: Figure 1 illustrates the overview of our proposed model. This speech LLM architecture consists of two main blocks: an audio encoder and a text decoder. The audio encoder first applies four downsampling blocks, resulting in a 16x time reduction of the audio representations. After that, a stack of Conformer [16] blocks with rotary positional embeddings [17] is applied, with a hidden dimensionality of 512 and a kernel size of 9. 
At the end, we add an additional downsampling block. As a result, the decoder observes audio tokens sampled every 320ms with a dimensionality of 4,096. We pretrained the audio encoder with the Connectionist Temporal Classification [18] criterion for 300k training steps on the same training data. We used the pretrained 7B LLaMA (v1) [1] as the decoder. To adapt the text-only LLaMA to the speech recognition task, we added Low-Rank Adapters [19] to the query, key, value, and output projection matrices in the self-attention layer of every decoder layer, while keeping the rest of the LLM parameters frozen throughout training. We used the following LoRa parameters: rank 32, dropout rate of 5%, and scaling parameter of 0.05. Overall, the LoRa adapters add 30 million trainable parameters to the LLM decoder, while the remaining 6.7 billion parameters are kept frozen. We used 80-dimensional log Mel features computed every 10ms with a window of 25ms. SpecAugment [20] is applied with two frequency masks of width 27 and ten time masks with a maximum width of 4% of the length of an utterance. We trained our models for 200,000 updates with mixed precision, linearly increasing the learning rate to \(5\)e-\(4\) in the first 20,000 updates and exponentially decaying it to \(1\)e-\(5\) over the remaining updates. We use Adam with parameters \(\beta\)1 = 0.9, \(\beta\)2 = 0.98, weight decay = \(1\)e-\(5\) and clip the gradient norm to 1. Our model is trained on 128 A100 GPUs for 3 days using the Fairseq library [21]. **Data**: The models are trained on an in-house dataset, derived from public Facebook and Instagram videos, that was de-identified with no personally identifiable information (PII). The data was further augmented with two distortion methods: speed perturbation [22] and randomly sampled additive background noise. For evaluation, we sampled 3,200 videos comprising around 34 hours of speech, each having a context of at least 100 characters and at least one non-frequent word from the context occurring in the transcription. **Metrics**: To evaluate our models, we report both the overall Word Error Rate (WER) and _Rare WER_, which considers only rare words. A word is considered rare if it does not occur in the 90th percentile of the most frequent words estimated on the training data. **Textual context**: Similar to Xiao et al. [7], we incorporate the video title and video description as external context. We perform basic text post-processing like unicode character normalization and removal of all non-ascii symbols. Overall, approximately 25% of the videos from the supervised video dataset have a non-empty text context. When the video title or description is present, we first concatenate them and then tokenize them with the LLaMA tokenizer. After that, we prepend the textual tokens before the \(<\)_bos\(>\)_ token. When both the video title and description are missing, the input corresponds to a traditional ASR setup without contextual information. The cross-entropy loss is masked for the contextual tokens and only computed for the spoken tokens. In these experiments we limit the textual context to a maximum of 50 tokens for computational reasons. If the context is longer than this threshold, we perform a random crop of size 50 during training and crop the leading tokens during inference. **Baseline**: As a baseline, we used a transformer-based RNN-T system with one billion parameters [7], which is trained on four million hours of supervised and semi-supervised speech data. The RNN-T system architecture consists of 60 transformer layers in the encoder and 3 LSTM layers in the decoder. 
For contextualization, it uses a WFST biasing method with neural language model shallow fusion [4], where the biasing FST is composed from the video title and description. We use exactly the same contextual information during decoding for the RNN-T baseline and our Speech LLaMA. ## 4 Results Table 1 presents a summary of our decoding results on the evaluation set. We compare the Speech LLaMA against the offline RNN-T 1B model, considering two scenarios: with and without contextual information presented during decoding. The WER scores obtained with the RNN-T are 12.34% without context and 12.13% with context. Contextual biasing results in a relative WER reduction of approximately 1.7%. Even without the use of contextual information during training and evaluation, Speech LLaMA achieves a WER of 11.70%, a relative reduction of 5.2% over the RNN-T system trained on much more data. By incorporating context during training and evaluation, we achieve a significant improvement, reaching an overall WER of 11.22% and a 17% relative improvement in Rare WER, surpassing the performance of the RNN-T model with contextualization. It is worth noting that when we evaluate the Speech LLaMA trained with context but do not provide the context during inference, we obtain a WER of 11.98%. This corresponds to a slight WER gap compared to the model trained without context. We leave addressing this minor performance difference to future work, where adding some jitter to the context may improve the generalization of the model with respect to the presence of context. ### Ablation studies #### 4.1.1 Context sensitivity To better understand how the model learns to use the context, we studied how receptive the model is to context perturbations. For this, we tried a few ways to modify the prompt and measured the effect on decoding. Specifically, we experimented with:
1. Replacing the actual context with words randomly sampled from the training data.
2. Replacing the context with the ground truth words. We filter out frequent words in this experiment, as we assume that the model should not have significant issues in transcribing them. We expect a significant reduction in WER if the model is capable of copy-pasting the words from the context.
3. Replacing the contextual words with phonetic respellings of the words that appear in the transcripts. Our intuition is that such replacements are particularly challenging for the model, and we should expect a bigger WER change compared to random substitutions. To generate re-spellings, we employed a G2G [23] model. For every rare word in the ground truth, we sample an alternative spelling from the G2G model and add it to the context. For example, if the word _ball_ is present in the context and the ground truth, we replace it by _bawl_ and use that as context instead of the original token.
4. In addition to the previous perturbation, we probe appending a similar-sounding word to the context (e.g., both tokens _ball_ and _bawl_ will be present in the context). This tests the ability of the ASR system to disambiguate the actual spoken word given a competing word in the context. 
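The following is a minimal sketch (not the authors' code) of how the four context perturbations above could be generated for a single utterance. The helper `respell` stands in for the G2G respelling model and is hypothetical, as are the vocabulary lists passed in; only the perturbation logic mirrors the description.

```python
import random

def perturb_context(context_words, ref_words, rare_vocab, train_vocab, mode, respell=None):
    """Generate one perturbed context word list for the ablation above (sketch only).

    mode: 'random'          - context replaced by words sampled from the training data (1)
          'ground_truth'    - context replaced by the rare reference words (2)
          'respell'         - matching rare words substituted by similar-sounding respellings (3)
          'respell_append'  - the respelling is added next to the original, competing word (4)
    respell: hypothetical word -> respelling helper (e.g., a wrapper around a G2G model).
    """
    rare_refs = {w for w in ref_words if w in rare_vocab}
    if mode == "random":
        return random.sample(train_vocab, k=min(len(context_words), len(train_vocab)))
    if mode == "ground_truth":
        return sorted(rare_refs)
    out = []
    for w in context_words:
        if w in rare_refs and respell is not None:
            out.append(respell(w))            # e.g. "ball" -> "bawl"
            if mode == "respell_append":
                out.append(w)                 # keep the original token as a competitor
        else:
            out.append(w)
    return out
```

For example, with `mode="respell_append"` both _ball_ and its respelling _bawl_ end up in the context, matching perturbation 4, while `mode="respell"` keeps only the respelling, matching perturbation 3.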
\begin{table} \begin{tabular}{l l c c c c c c c c} \hline \hline Model & Speech & Trainable & \multicolumn{2}{c}{Context presence} & WER (\%) & SUB & INS & DEL & Rare WER (\%) \\ & data (h) & params (M) & Training & Evaluation & & & & & \\ \hline 1B RNN-T [7] & 4M & 1000 & - & - & 12.34 & 6.53 & 3.21 & 2.60 & 30.80 \\ 1B RNN-T [7] & 4M & 1000 & - & ✓ & 12.13 & 6.23 & 3.05 & 2.85 & 28.96 \\ Speech LLaMa & 150k & 130 & - & - & 11.70 & 6.09 & 3.20 & 2.38 & 27.33 \\ Speech LLaMa & 150k & 130 & ✓ & - & 11.98 & 6.28 & 3.07 & 2.63 & 28.64 \\ Speech LLaMa & 150k & 130 & ✓ & ✓ & **11.22** & 5.76 & 3.14 & 2.32 & **23.88** \\ \hline \hline \end{tabular} \end{table} Table 1: Evaluation results of Speech LLaMA compared to the large-scale RNN-T baseline on English speech data. We report overall WER and Rare WER. Rare WER specifically focuses on the accuracy of recognizing rare words in the dataset. We present our results in Table 2. We note that replacing the whole context with random words sampled from the training data results in only a marginal difference in WER compared to removing all external context (11.98% vs. 12.07%). This indicates that the model is robust against some contextual noise and can distinguish relevant from irrelevant context. Substituting rare words that match both the context and the ground truth with G2G respellings results in a significant WER degradation (11.22% \(\rightarrow\) 11.89%), almost matching the setting without any context. This hints that the majority of the gains observed are due to the model being able to copy certain words from the context. In contrast, when we append a competing similar-sounding word instead of replacing the matching contextual word, we observe a smaller WER degradation (11.22% \(\rightarrow\) 11.46%). This indicates that the model does not necessarily get confused by similarly pronounced words with different meanings. Furthermore, when we include the rare words from the ground truth in the context, the WER improves to 10.50% (a 6% relative change) and the Rare WER improves by 18% relative. This further demonstrates the ability of the model to utilize contextual information, when present, to better recognize rare entities. #### 4.1.2 Causal vs Full Masking Traditionally, causal masking is used in all self-attention layers of decoder-only language models to prevent future information leakage. However, for offline speech recognition, the full audio and text context is observed at decoding time, and only the transcription tokens need to be masked causally. In this section, we examine the impact of applying causal masking to all input tokens and contrast it with applying a full mask to the text and audio context followed by causal masking on the transcription tokens. While the audio representations are fully contextualized already, we hypothesize that the textual tokens may benefit from full masking. We present our results in Table 3. The full mask shows only marginally better WER than causal masking (improving from 11.22% to 11.15%). This comes at a cost, as efficient self-attention implementations are currently tailored towards causal masking (Flash-Attention v2), and using a custom mask slows down training by 10%. #### 4.1.3 Decoder-only vs Cross-attention Furthermore, we compared the decoder-only approach to a traditional encoder-decoder model by converting the Speech LLM architecture to a Listen-Attend-Spell architecture [24]. To achieve that, instead of concatenating audio and text tokens, we treated them separately. 
We added trainable cross-attention matrices to every LLM decoder layer. Table 4 presents the results of this study. We observed that the two approaches perform similarly, with only a minor improvement for the encoder-decoder architecture (11.22% \(\rightarrow\) 11.18%). This indicates that the decoder-only approach is a viable and straightforward method for performing ASR with or without contextualization. However, one limitation of the decoder-only approach is the quadratic attention complexity, which can impose restrictions on the overall sequence length. This limitation becomes significant as the context grows. To address this issue, we can employ techniques such as lower precision training (8 or 4 bits) and linear attention approximation methods [25, 26]. ## 5 Conclusions and Future Work In this work, we have presented, to our knowledge, the first results on utilizing pretrained LLMs to leverage contextual information in order to improve speech recognition. We have demonstrated that with a simple decoder-only architecture we can condition the ASR output on unstructured text. Our approach shows superior performance against a strong baseline, proving the feasibility of the proposed method at scale. End-to-end contextualization via text prompting with LLMs shows better context utilization compared to our strong RNN-T based baselines. In addition, our ablation studies show that the system is robust to noise perturbations and is able to perform phonetic disambiguation. As part of future work, we plan to extend the method towards longer context and other modalities. \begin{table} \begin{tabular}{l c} \hline \hline Decoder & WER (\%) \\ \hline Decoder-only & 11.22 \\ Encoder-decoder & 11.18 \\ \hline \hline \end{tabular} \end{table} Table 4: Performance comparison of decoder-only Speech LLM and cross-attention Speech LLM. \begin{table} \begin{tabular}{l c c} \hline \hline Context noise & WER (\%) & Rare WER (\%) \\ \hline (Original context) & 11.22 & 23.88 \\ (Remove all context) & 11.98 & 28.64 \\ Random & 12.07 & 28.85 \\ Respellings & 11.89 & 28.31 \\ Respellings (append) & 11.46 & 25.59 \\ Ground Truth & 10.50 & 19.54 \\ \hline \hline \end{tabular} \end{table} Table 2: WER under different context perturbations during the decoding stage. \begin{table} \begin{tabular}{l c} \hline \hline Masking & WER (\%) \\ \hline Causal & 11.22 \\ Full-Mask & 11.15 \\ \hline \hline \end{tabular} \end{table} Table 3: Impact of the context masking structure on the WER.
In recent years, Large Language Models (LLMs) have attracted significant attention from the research community due to their exceptional performance and generalization capabilities. In this paper, we present a method for contextualizing speech recognition models by incorporating LLMs. Our approach casts speech recognition as a mixed-modal language modeling task based on a pretrained LLM. Using audio features together with optional text tokens for context, we train a decoder-only model. As a result, the system is incentivized to learn how to leverage unstructured contextual information during training. Our empirical results show improved performance, with a 6% WER reduction when additional textual context is provided. Furthermore, compared to a baseline contextualized RNN-T system trained on a speech dataset more than twenty-five times larger, this method achieves a 7.5% improvement in overall WER and a 17% improvement on rare words.
2307.16612
Light, Reliable Spanners
A \emph{$\nu$-reliable spanner} of a metric space $(X,d)$, is a (dominating) graph $H$, such that for any possible failure set $B\subseteq X$, there is a set $B^+$ just slightly larger $|B^+|\le(1+\nu)\cdot|B|$, and all distances between pairs in $X\setminus B^+$ are (approximately) preserved in $H\setminus B$. Recently, there have been several works on sparse reliable spanners in various settings, but so far, the weight of such spanners has not been analyzed at all. In this work, we initiate the study of \emph{light} reliable spanners, whose weight is proportional to that of the Minimum Spanning Tree (MST) of $X$. We first observe that unlike sparsity, the lightness of any deterministic reliable spanner is huge, even for the metric of the simple path graph. Therefore, randomness must be used: an \emph{oblivious} reliable spanner is a distribution over spanners, and the bound on $|B^+|$ holds in expectation. We devise an oblivious $\nu$-reliable $(2+\frac{2}{k-1})$-spanner for any $k$-HST, whose lightness is $\approx \nu^{-2}$. We demonstrate a matching $\Omega(\nu^{-2})$ lower bound on the lightness (for any finite stretch). We also note that any stretch below 2 must incur linear lightness. For general metrics, doubling metrics, and metrics arising from minor-free graphs, we construct {\em light} tree covers, in which every tree is a $k$-HST of low weight. Combining these covers with our results for $k$-HSTs, we obtain oblivious reliable light spanners for these metric spaces, with nearly optimal parameters. In particular, for doubling metrics we get an oblivious $\nu$-reliable $(1+\varepsilon)$-spanner with lightness $\varepsilon^{-O({\rm ddim})}\cdot\tilde{O}(\nu^{-2}\cdot\log n)$, which is best possible (up to lower order terms).
Arnold Filtser, Yuval Gitlitz, Ofer Neiman
2023-07-31T12:39:18
http://arxiv.org/abs/2307.16612v1
# Light, Reliable Spanners ###### Abstract A _\(\nu\)-reliable spanner_ of a metric space \((X,d)\), is a (dominating) graph \(H\), such that for any possible failure set \(B\subseteq X\), there is a set \(B^{+}\) just slightly larger \(|B^{+}|\leq(1+\nu)\cdot|B|\), and all distances between pairs in \(X\setminus B^{+}\) are (approximately) preserved in \(H\setminus B\). Recently, there have been several works on sparse reliable spanners in various settings, but so far, the weight of such spanners has not been analyzed at all. In this work, we initiate the study of _light_ reliable spanners, whose weight is proportional to that of the Minimum Spanning Tree (MST) of \(X\). We first observe that unlike sparsity, the lightness of any deterministic reliable spanner is huge, even for the metric of the simple path graph. Therefore, randomness must be used: an _oblivious_ reliable spanner is a distribution over spanners, and the bound on \(|B^{+}|\) holds in expectation. We devise an oblivious \(\nu\)-reliable \((2+\frac{2}{k-1})\)-spanner for any \(k\)-HST, whose lightness is \(\approx\nu^{-2}\). We demonstrate a matching \(\Omega(\nu^{-2})\) lower bound on the lightness (for any finite stretch). We also note that any stretch below 2 must incur linear lightness. For general metrics, doubling metrics, and metrics arising from minor-free graphs, we construct _light_ tree covers, in which every tree is a \(k\)-HST of low weight. Combining these covers with our results for \(k\)-HSTs, we obtain oblivious reliable light spanners for these metric spaces, with nearly optimal parameters. In particular, for doubling metrics we get an oblivious \(\nu\)-reliable \((1+\varepsilon)\)-spanner with lightness \(\varepsilon^{-O(\mathrm{ddim})}\cdot\tilde{O}(\nu^{-2}\cdot\log n)\), which is best possible (up to lower order terms). ###### Contents * 1 Introduction * 1.1 Our Results * 1.2 Technical Overview * 1.3 Related Work * 1.4 Organization * 2 Preliminaries * 3 Light Reliable Spanner for \(k\)-HSTs * 3.1 Decomposition of \(T\) to Heavy Paths * 3.2 Construction * 3.3 Analysis * 3.4 Improved Stretch for Small Max Degree HST * 4 Pairwise Partition Cover for Minor Free Graphs * 5 From Pairwise Partition Cover to Light \(k\)-HST Cover * 5.1 \(k\)-HST Cover for Doubling Metrics. * 6 Reliable Spanners for Metric Spaces * 6.1 Doubling Metrics * 6.2 General Metric Spaces * 6.3 Minor Free Graphs * 6.4 Doubling Metric of High Dimension * 6.5 General Ultrametric * 7 Light Reliable Spanner for the Path Graph * 7.1 Construction * 7.2 Analysis * 8 Improved Light Reliable Spanners for Minor-free Graphs * 9 Lower Bounds * 9.1 Lower bound for deterministic light reliable spanners * 9.2 Lower Bound for HST * 9.3 Lower Bound for the Unweighted Path * A A Helpful Lemma * B Light Reliable \(O(\log n)\)-Spanner Introduction Given a metric space \((X,d_{X})\), a \(t\)-_spanner_ is a graph \(H\) over \(X\) such that for every \(x,y\in X\), \(d_{X}(x,y)\leq d_{H}(x,y)\leq t\cdot d_{X}(x,y)\), where \(d_{H}\) is the shortest path metric in \(H\). 1 The parameter \(t\) is often referred to as the _stretch_. In essence, the purpose of spanners is to represent the distance metric using a sparse graph. Spanners where introduced by Peleg and Schaffer [14], and found numerous applications throughout computer science. For a more systematical study, we refer to the book [13] and survey [1]. In many cases, the goal is to minimize the total weight of the spanner and not just the number of edges. 
E.g., when constructing a road network, the cost is better measured by the total length of paved roads, as opposed to their number. This parameter of interest is formalized as the _lightness_ of a spanner, which is the ratio between the weight of the spanner (sum of all edge weights), and the weight of the Minimum Spanning Tree (MST) of \(X\): \(\frac{w(H)}{w(\text{MST})}\). Note that the weight of the MST is the minimum weight of any connected graph over \(X\), and thus of any spanner with finite stretch. So the lightness is simply a "normalized" notion of weight. Footnote 1: Often in the literature, the input metric is the shortest path metric of a graph, and a spanner is required to be a subgraph of the input graph. Here we study metric spanners where there is no such requirement. Light spanners have been thoroughly studied. It is known that general \(n\)-point metric spaces admit a \((2k-1)(1+\varepsilon)\)-spanner (for \(k\in\mathbb{N}\), \(\varepsilon\in(0,1)\)) with \(O(n^{1+1/k})\) edges and lightness \(O(\varepsilon^{-1}\cdot n^{1/k})\)[12, 13] (see also [1, 1, 14, 15]). Every \(n\)-point metric space with doubling dimension2 ddim admits a \((1+\varepsilon)\)-spanner with \(n\cdot\varepsilon^{-O(\text{ddim})}\) edges and lightness \(\varepsilon^{-O(\text{ddim})}\)[1] (see also [1, 14]). Finally, the shortest path metric of a graph excluding a fixed minor admits a (sub-graph, which already implies sparsity) \((1+\varepsilon)\)-spanner with lightness \(\tilde{O}(\varepsilon^{-3})\)[1]. Footnote 2: A metric space \((X,d)\) has doubling dimension ddim if every ball of radius \(2r\) can be covered by \(2^{\text{ddim}}\) balls of radius \(r\). The \(d\)-dimensional Euclidean space has doubling dimension \(\Theta(d)\). A highly desirable property of a spanner is the ability to withstand massive node failures. To this end, Bose _et. al._[1] introduced the notion of a _reliable spanner_. 3 Here, given a set of failed nodes \(B\subseteq X\), the residual spanner \(H\setminus B\) is a \(t\)-spanner for \(X\setminus B^{+}\), where \(B^{+}\supseteq B\) is a set slightly larger than \(B\). For the case of points in \(d\)-dimensional Euclidean space, for constant \(d\), Bose _et. al._[1] constructed an \(O(1)\)-spanner such that \(|B^{+}|\leq O(|B|^{2})\). Later, Buchin, Har-Peled, and Olah [1] constructed a \((1+\varepsilon)\) reliable spanner with \(n\cdot\varepsilon^{-O(d)}\cdot\nu^{-6}\cdot\tilde{O}(\log n)\) edges, guaranteeing that for every set of failed nodes \(B\), \(|B^{+}|\leq(1+\nu)\cdot|B|\). This result was generalized to metric spaces with doubling dimension ddim by Filtser and Le [11]. Footnote 3: For a comprehensive discussion of the related notion of fault-tolerant spanners, see Section 1.3. While reliable spanners for Euclidean and doubling metrics admit sparsity which is comparable to their non-reliable counterparts, the situation is very different for other metric families. Indeed, Har-Peled _et. al._[12] showed that every reliable \(k\)-spanner of the simple uniform metric (which is also a tree metric) must have \(\Omega(n^{1+1/k})\) edges. Nevertheless, it is possible to construct _oblivious_ reliable spanners for other metric spaces with good parameters, where the bound on the size of \(B^{+}\) is only in expectation. 
**Definition 1** (Reliable spanner).: _A weighted graph \(H\) over point set \(X\) is a deterministic \(\nu\)-reliable \(t\)-spanner of a metric space \((X,d_{X})\) if \(d_{H}\) dominates4\(d_{X}\), and for every set \(B\subseteq X\) of points, called an attack set, there is a set \(B^{+}\supseteq B\), called a faulty extension of \(B\), such that: (1) \(|B^{+}|\leq(1+\nu)|B|\). (2) For every \(x,y\notin B^{+}\), \(d_{H[X\setminus B]}(x,y)\leq t\cdot d_{X}(x,y)\)._ Footnote 4: Metric space \((X,d_{H})\) dominates metric space \((X,d_{X})\) if \(\forall u,v\in X\), \(d_{X}(u,v)\leq d_{H}(u,v)\). _An oblivious \(\nu\)-reliable \(t\)-spanner is a distribution \(\mathcal{D}\) over dominating graphs \(H\), such that for every attack set \(B\subseteq X\) and \(H\in\operatorname{supp}(\mathcal{D})\), there exists a superset \(B^{+}_{H}\supseteq B\) such that, for every \(x,y\notin B^{+}_{H}\), \(d_{H[X\setminus B]}(x,y)\leq t\cdot d_{X}(x,y)\), and \(\mathbb{E}_{H\sim\mathcal{D}}\left[|B^{+}_{H}|\right]\leq(1+\nu)|B|\). We say that the oblivious spanner \(\mathcal{D}\) has \(m\) edges and lightness \(\phi\) if every \(H\in\operatorname{supp}(\mathcal{D})\) has at most \(m\) edges and lightness at most \(\phi\)._ For general \(n\)-point metrics, Filtser and Le [11] (improving over [10]) constructed an oblivious \(\nu\)-reliable \((8k+\varepsilon)\)-spanner with \(\tilde{O}(n^{1+\frac{1}{k}}\cdot\varepsilon^{-2})\cdot\nu^{-1}\) edges. For the shortest path metric of a graph excluding a fixed minor, there is an oblivious \(\nu\)-reliable \((2+\varepsilon)\)-spanner with \(\varepsilon^{-2}\cdot\nu^{-1}\cdot\tilde{O}(n)\) edges, while every oblivious reliable spanner with stretch \(t<2\) requires \(\Omega(n^{2})\) edges [11]. For Euclidean and doubling metrics, oblivious \(\nu\)-reliable \((1+\varepsilon)\)-spanners can be constructed with only \(n\cdot\varepsilon^{-O(d)}\cdot\tilde{O}(\nu^{-1}\cdot\log^{2}\log n)\) edges [1, 11]. But what about lightness? No previous work has attempted to construct reliable spanners of low total weight, even though it is clearly desirable to construct reliable networks of low total cost. The single most studied metric in the context of reliable spanners is the unweighted path \(P_{n}\). Indeed, most of the previous work [1, 1, 1, 11] focused on constructing various reliable \(1\)-spanners for the path graph, and then generalized them to other metric spaces using _locality sensitive orderings_5. A reliable spanner should have many edges between every two large enough sets, so that they cannot be easily disconnected. Consider an attack \(B\) consisting of the middle \(\frac{n}{2}\) vertices on \(P_{n}\). If there are fewer than \(\frac{n}{8}\) crossing edges from left to right, then an attack \(B^{\prime}\supseteq B\) that also contains one endpoint per crossing edge will disconnect two sets of size \(\frac{n}{8}\). Therefore a linear number of vertices must be added to \(B^{\prime+}\). We conclude that every deterministic reliable spanner (for any finite stretch) must have lightness \(\Omega(n)\) (see Theorem 19 for a formal proof). Thus, all hope lies in oblivious reliable spanners. However, even here any two large sets must be well connected. Previous oblivious reliable spanners for \(P_{n}\) all had unacceptable polynomial lightness. Footnote 5: Locality sensitive ordering is a generic tool that “reduces” metric spaces into the line, by devising a collection of orderings such that every two points are “nearby” in one of the orderings, see [1, 11]. 
As reliable spanners for \(P_{n}\) are the main building blocks for reliable spanners for other metric spaces, all previous constructions have inherent polynomial lightness.6 Footnote 6: The only previous work that did not reduce to \(P_{n}\) is by Har-Peled _et. al._[10] who reduced to uniform metrics. Nevertheless, their approach on \(P_{n}\) will have stretch \(3\), and lightness \(\Omega(n)\).

### Our Results

The results of this paper are summarized in Table 1. Our results on light reliable spanners for various metric families are based on constructing such spanners for \(k\)-HSTs; this lies in contrast to previous results on sparse reliable spanners, which were mostly based on reliable spanners for the path graph. Roughly speaking, previous works on reliable spanners show us that the "cost" of making a spanner \(\nu\)-reliable is often a \(\nu^{-1}\) factor in its size. Our results in this paper offer a similar view for light spanners: here the "cost" of reliability is a factor of \(\nu^{-2}\) in the lightness. That is, an \(\Omega(\nu^{-2})\) factor must be paid in the most basic cases (path graph, HST), while in more interesting and complicated metric families, we essentially match the best non-reliable light spanner constructions, up to this \(\nu^{-2}\) factor (and in some cases, such as minor-free graphs, an unavoidable constant increase in the stretch). For brevity, in the discussion that follows we omit the bounds on the size of our spanners (which can be found in Table 1).

\begin{table} \begin{tabular}{|l|l|l|l|l|} \hline Family & Stretch & Lightness & Size & Ref \\ \hline \multirow{2}{*}{Doubling ddim} & \(1+\varepsilon\) & \(\varepsilon^{-O(\operatorname{ddim})}\cdot\tilde{O}(\nu^{-2}\cdot\log n)\) & \(n\cdot\varepsilon^{-O(\operatorname{ddim})}\cdot\tilde{O}(\nu^{-2})\cdot*\) & Cor. 8 \\ \cline{2-5} & ddim & \(\tilde{O}(\log n\cdot\nu^{-2})\cdot\operatorname{ddim}^{O(1)}\) & \(n\cdot\tilde{O}\left(\nu^{-2}\right)\cdot\operatorname{ddim}^{O(1)}\cdot*\) & Cor. 14 \\ \hline \multirow{2}{*}{General Metric} & \(12t+\varepsilon\) & \(n^{1/t}\cdot\tilde{O}(\nu^{-2}\cdot\varepsilon^{-4})\cdot\log^{O(1)}n\) & \(\tilde{O}\left(n^{1+1/t}\cdot\nu^{-2}\cdot\varepsilon^{-3}\right)\) & Cor. 10 \\ \cline{2-5} & \(O(\log n)\) & \(\tilde{O}(\nu^{-2}\cdot\log^{4}n)\) & \(n\cdot\tilde{O}\left(\nu^{-2}\cdot\log^{3}n\right)\) & Cor. 22 \\ \hline Minor-Free & \(2+\varepsilon\) & \(\tilde{O}(\nu^{-2}\cdot\varepsilon^{-7}\cdot\log^{8}n)\) & \(\tilde{O}(n\cdot\nu^{-2}\cdot\varepsilon^{-6})\) & Thm. 18 \\ \hline Tree & \(<2\) & \(\Omega(n)\) & \(\Omega(n^{2})\) & [12] \\ \hline Weighted Path & \(1\) & \(\nu^{-2}\cdot\tilde{O}(\log n)\) & \(n\cdot\tilde{O}(\nu^{-1})\cdot*\) & Cor. 17 \\ \hline Unweighted & \(<\infty\) & \(\Omega(\nu^{-2}\cdot\log(\nu\cdot n))\) & - & Thm. 21 \\ Path & \(<\infty\) & \(\Omega(n)\) (deterministic) & - & Thm. 19 \\ \hline HST & \(2+\varepsilon\) & \(\tilde{O}(\varepsilon^{-4}\cdot\nu^{-2})\cdot*\) & \(n\cdot\tilde{O}\left(\varepsilon^{-3}\cdot\nu^{-2}\right)\cdot*\) & Thm. 15 \\ \cline{2-5} (ultrametric) & \(<\infty\) & \(\Omega(\nu^{-2})\) & - & Thm. 20 \\ \hline \end{tabular} \end{table} Table 1: Our results for constructing light \(\nu\)-reliable spanners for various metric spaces. All the results in the table (other than the one specified as deterministic) are for oblivious reliable spanners. Stretch \(<\infty\) stands for the requirement that all the points in \(X\setminus B^{+}\) belong to the same connected component in \(H\setminus B\). \(*\) stands for \(\operatorname{poly}(\log\log n)\) factors.

\(k\)-HSTs. We devise an oblivious \(\nu\)-reliable \(2+\frac{O(1)}{k}\)-spanner for any \(k\)-HST (see Definition 2), whose lightness is \(\tilde{O}(\nu^{-1}\cdot\log\log n)^{2}\) (see Theorem 1). It is implicitly shown in [12, Observation 1] that with stretch smaller than \(2\), the lightness must be \(\Omega(n)\). So when \(k\) is large, our stretch bound is nearly optimal.7 We also show that the lightness must be at least \(\Omega(\nu^{-2})\), regardless of the stretch, thus nearly matching our upper bound. Footnote 7: We also have a similar result for every \(k\geq 1\), with stretch \(2+\varepsilon\) and lightness \(\tilde{O}(\varepsilon^{-2}\cdot\nu^{-1}\cdot\log\log n)^{2}\).

Light \(k\)-HST Covers. To obtain additional results for other metric families, following [12], we use the notion of _tree covers_, in which every tree is a \(k\)-HST (see Definition 3). We design these covers for metrics admitting a pairwise partition cover scheme (see Definition 4), such that each \(k\)-HST in the cover has lightness \(O(k\cdot\log n)\).

General Metrics. For any metric space, by building a light \(k\)-HST cover, and applying our oblivious reliable spanner for every \(k\)-HST in the cover, we obtain an oblivious \(\nu\)-reliable \(O(k)\)-spanner with lightness \(\tilde{O}(\nu^{-2}\cdot n^{1/k})\). Note that up to a constant in the stretch (and lower order terms), this result is optimal, even omitting the reliability requirement.

Doubling Metrics. For any metric with doubling dimension \(\operatorname{ddim}\) and \(\varepsilon\in(0,1)\), we devise an oblivious \(\nu\)-reliable \((1+\varepsilon)\)-spanner with lightness \(\varepsilon^{-O(\operatorname{ddim})}\cdot\tilde{O}\left(\nu^{-2}\cdot\log n\right)\). This result is tight up to second order terms. Indeed, it is folklore that any \((1+\varepsilon)\)-spanner for doubling metrics must have lightness \(\varepsilon^{-\Omega(\operatorname{ddim})}\) (see e.g., [1]). In Theorem 21, we show that every oblivious \(\nu\)-reliable spanner (for any finite stretch) for the shortest path metric of the unweighted path graph (which has \(\operatorname{ddim}\) 1) must have lightness \(\Omega(\nu^{-2}\cdot\log(\nu n))\). This dependence on \(n\) in the lower bound is somewhat surprising, and does not appear in the closely related fault-tolerant spanners for doubling metrics (see Section 1.3 for further details).

### Technical Overview

From a high level, our construction of light reliable spanners for various graph families has the following structure.

* We first devise light reliable spanners for \(k\)-HSTs.
* We construct _light_ tree covers for the relevant family, where all the trees in the cover are \(k\)-HSTs.
* The final step is to sample a reliable spanner for each tree in the cover, and take as a final spanner the union of these spanners.

In what follows we elaborate more on the main ideas and techniques for each of those steps.

#### 1.2.1 Reliable Light Spanner for \(k\)-HSTs

Let \(T\) be the tree representing the \(k\)-HST (see Definition 2). Our construction consists of a collection of randomly chosen bi-cliques: For every node \(x\in T\) we choose at random a set \(Z_{x}\) of \(\ell\approx\nu^{-1}\) vertices from the leaves of the subtree rooted at \(x\) (denoted \(L(x)\)).
Then, for every \(x\in T\) with children \(x_{1},\ldots,x_{t}\), add to the spanner \(H\) all edges in \(Z_{x}\times Z_{x_{j}}\) for every \(j=1,\ldots,t\). Fix a pair of leaves \(u,v\in T\), let \(x=\operatorname{lca}(u,v)\), and let \(x_{i}\) (resp., \(x_{j}\)) be the child of \(x\) whose subtree contains \(u\) (resp., \(v\)). The idea behind finding a spanner path between \(u,v\) is as follows. We will connect both \(u,v\) to a certain chosen leaf \(x^{\prime}\in Z_{x}\). To this end, we first connect recursively \(u\) to a \(u^{\prime}\in Z_{x_{i}}\) and \(v\) to \(v^{\prime}\in Z_{x_{j}}\). Now, if \(x\), \(x_{i}\), and \(x_{j}\) have all chosen such leaves \(x^{\prime},u^{\prime},v^{\prime}\) to the sets \(Z_{x},Z_{x_{i}},Z_{x_{j}}\) respectively, that survive the attack \(B\), and also we managed the \(u-u^{\prime}\) and \(v-v^{\prime}\) connections recursively, then we can complete the \(u-v\) path. That path will consists of the two "long" bi-clique edges \(\{u^{\prime},x^{\prime}\},\{x^{\prime},v^{\prime}\}\), and the recursive \(u-u^{\prime}\) and \(v-v^{\prime}\) paths. Note that since \(u,u^{\prime}\in L(x_{i})\), \(d_{T}(u,u^{\prime})\leq d_{T}(u,v)/k\) (and similarly \(d_{T}(v,v^{\prime})\leq d_{T}(u,v)/k\)), so we can show inductively that the total distance taken by these recursive paths is only \(O(d_{T}(u,v)/k)\). See Figure 1 for an illustration of a path in \(H\) between two vertices \(u,v\). Having established what is needed for finding a spanner path, we say that a leaf is _safe_ if all its ancestors \(x\) in \(T\) have that \(Z_{x}\) is not fully included in \(B\). The failure set \(B^{+}\) consists of \(B\) and all leaves that are not safe. A subtle issue is that a vertex may have a linear number of ancestors, and we will need \(\ell\) to be at least logarithmic to ensure good probability for success in all of them. To avoid this, we use the following approach. For any node \(x\) that has a "heavy"child \(y\) (that is, \(L(y)\) is almost as large as \(L(x)\)), we use the sample \(Z_{y}\) for \(x\), instead of sampling \(Z_{x}\). This way, any leaf will have only logarithmically many ancestors that are not heavy parents, which reduce dramatically the sample size needed for success in all ancestors. For the reliability analysis, we distinguish between leaves that have an ancestor \(x\) with a very large \(1-\nu\) fraction of vertices in \(L(x)\) that fall in the attack set \(B\). These leaves are immediately taken as failed, but there can be only \(\approx\nu|B|\) such leaves. For the other leaves, a delicate technical analysis follows to show that only a small fraction \(\approx\nu\cdot|B|\) new vertices are expected to join \(B^{+}\). Note that if some node has a heavy child, we take the child's sample, so some care is needed in the analysis to account for this - roughly speaking, the definition of "heavy" must depend on the reliability parameter \(\nu\), in order to ensure sufficiently small failure probability. Improved stretch for bounded degree HSTs.In the case the \(k\)-HST has bounded degree \(\delta\), we can alter the construction slightly, and for every \(x\) with children \(x_{1},\ldots,x_{s}\), also add all edges in \(Z_{x_{i}}\times Z_{x_{j}}\) for every \(1\leq i<j\leq s\). While this alternative increases the lightness and size by a factor of \(\delta\), the stretch improves to \(1+\frac{O(1)}{k}\), since we only use one long edge. This variation will be useful for the class of doubling metrics. 
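The bi-clique construction just described can be sketched procedurally as follows; this is an illustrative simplification (not the paper's final algorithm) that ignores the heavy-path optimization used in Section 3 and the bounded-degree variant. Here `tree[x]` is assumed to list the children of node `x`, `label[x]` plays the role of \(\Gamma_{x}\), `n` is the number of leaves, and the sample-size formula is only indicative of \(\ell\approx\nu^{-1}\log\log n\).

```python
import math
import random
from itertools import product

def leaves(tree, x):
    # L(x): the leaves of the subtree rooted at x (tree[x] = list of children).
    kids = tree.get(x, [])
    if not kids:
        return [x]
    return [u for c in kids for u in leaves(tree, c)]

def hst_biclique_spanner(tree, root, label, nu, n):
    """Sketch: every node x samples a set Z_x of ~ nu^{-1} * log log n leaves
    of L(x), and Z_x is fully connected to Z_{x_j} for every child x_j."""
    ell = max(1, math.ceil((1.0 / nu) * math.log(max(math.log(n, 2), 2), 2)))
    Z, H = {}, set()
    def sample(x):
        L = leaves(tree, x)
        Z[x] = L if len(L) <= ell else random.sample(L, ell)
        for c in tree.get(x, []):
            sample(c)
            # Bi-clique between the sample of x and the sample of its child c;
            # each such edge has length at most Gamma_x in the HST metric.
            H.update((u, v, label[x]) for u, v in product(Z[x], Z[c]) if u != v)
    sample(root)
    return H
```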
Figure 1: _Illustration of the construction of spanner for a \(k\)-HST. For each internal node \(x\) we sample a subset \(Z_{x}\) of leaves from \(L(x)\), and connect all of \(Z_{x}\) to \(Z_{x^{\prime}}\) for every child \(x^{\prime}\) of \(x\). The path from \(u\) to \(v\) will first go from \(u\) to a surviving vertex in \(Z_{x_{i}}\) (using recursion), from there to surviving vertices in \(Z_{x}\) and \(Z_{x_{j}}\), and finally to \(v\) (again by recursion)._

#### 1.2.2 Reliable Spanners via Light \(k\)-HST Covers

A \((\tau,\rho)\)-tree cover of a metric space \((X,d)\) is a collection of \(\tau\) dominating trees, such that for every pair \(u,v\in X\), there exists a tree \(T\) in the cover with \(d_{T}(u,v)\leq\rho\cdot d(u,v)\). Suppose that \((X,d)\) admits a \((\tau,\rho)\)-tree cover in which all trees are \(k\)-HSTs of weight at most \(O(l\cdot w(MST(X)))\); then we can devise an oblivious reliable spanner for \(X\) as follows. Sample an oblivious light \(\nu/\tau\)-reliable spanner \(H_{T}\) for each tree \(T\), and define \(H=\bigcup_{T}H_{T}\) as their union. We define \(B^{+}\) as the union of all the failure sets \(B^{+}_{T}\) over all tree spanners. Since in every \(\nu/\tau\)-reliable spanner of a tree only \(\nu/\tau\cdot|B|\) additional vertices fail in expectation, the total expected number of additional failures is at most \(\nu\cdot|B|\), as required. Now, if a pair \(u,v\) did not fail, there is a \(k\)-HST \(T\) in which \(d_{T}(u,v)\leq\rho\cdot d(u,v)\), and thus \(H\) has stretch at most \(\rho\cdot(2+\frac{O(1)}{k})\) for such a pair.

Light \(k\)-HST Covers using Pairwise Partition Cover Scheme. A \((\tau,\rho,\varepsilon,\Delta)\)-Pairwise Partition Cover for a metric space \((X,d)\) is a collection of \(\tau\) partitions, where each cluster in each partition has diameter at most \(\Delta\), and every pair \(u,v\in X\) with \(\frac{\Delta}{2\rho}\leq d(u,v)\leq\frac{\Delta}{\rho}\) is _padded_ in at least one cluster \(C\) of a partition. This means that the cluster \(C\) contains \(u,v\), and also the balls of radius \(\varepsilon\Delta\) around them, see Definition 4. If \((X,d)\) admits such a cover for every \(\Delta\), we say it has a Pairwise Partition Cover Scheme (PPCS). In [12], PPCS were shown for general metrics and doubling metrics. In this paper, for any parameter \(0<\varepsilon<1/6\), we devise a \(\left(\frac{\log n}{\varepsilon},\frac{2}{1-6\varepsilon},\varepsilon\right)\)-PPCS for minor-free graphs. In [12] it was shown that one can obtain a \(k\)-HST cover from a PPCS, in such a way that every cluster of diameter \(\Delta\) in the PPCS corresponds to an internal node \(x\) of one of the \(k\)-HSTs, with label \(\Gamma_{x}=\Delta\). For our purposes, we want every \(k\)-HST in the cover to be light. To this end, we augment the reduction of [12] by a feature that allows us to bound the lightness of the resulting \(k\)-HST. The idea is to use _nets_, see Definition 5. A basic observation for a \(\Delta\)-net \(\mathcal{N}\) of a metric space \((X,d)\) is that \(w(MST(X))\geq\Omega(|\mathcal{N}|\cdot\Delta)\). On the other hand, the weight of a \(k\)-HST \(T\) is roughly \(\sum_{x\in T}k\cdot\Gamma_{x}\) (every node pays for the edge to its parent in \(T\)). So as long as the number of internal nodes with label \(\Delta\) is bounded by \(|\mathcal{N}|\), the \(k\)-HST will be rather light.
Now, given some partition with diameter bound \(\Delta\), we take a \(\approx\varepsilon\Delta\)-net \(\mathcal{N}\), and break all clusters that do not contain a net point. Then the points in the broken clusters are joined to a nearby remaining cluster. Since the net is dense enough, each cluster that was used for padding remains intact, while the number of clusters is bounded by \(|\mathcal{N}|\). This enables us to bound the weight of the \(k\)-HST accordingly. #### 1.2.3 Reliable Light Spanner for Minor-free Graphs with \(2+\varepsilon\) stretch In the special case of minor-free graphs, the framework described above will lose a factor of \(2\) in the stretch in two places. The first is due to the padding of the PPCS, and the second in the reliable spanners for the \(k\)-HSTs. While each of these losses is unavoidable,10 we can still exploit a certain property of our PPCS for minor-free graphs, to improve the stretch to near optimal \(2+\varepsilon\). Footnote 10: Stretch 2 for HST is necessary: Consider the uniform metric, every spanner with less than \(\binom{n}{2}\) edges has stretch 2. Every PPCS for minor free graphs must have either \(\rho\geq 2\) or \(\tau=\Omega(n)\): Fix \(\rho<2\), and consider the unweighted star graph. There are \(n-1\) leaf-center pairs, while a single partition can satisfy at most a single pair. In our previous approach, suppose vertices \(u,v\) are padded in some cluster \(C\) of the PPCS, with diameter at most \(\Delta\). Then in the \(k\)-HST cover, we will have some tree with an internal node \(x\) corresponding to \(C\), whose label is \(\Gamma_{x}=\Delta\). The way we construct the spanner path between \(u,v\) is via some chosen leaf \(z\) in \(L(x)\), and as both \(d(u,z)\), \(d(v,z)\) can be as large as \(\Delta\), we loose a factor of \(2\) here. The main observation behind overcoming this loss, is that in our PPCS for minor-free graphs, each cluster \(C\) is a ball around some center \(x\), and whenever a pair \(u,v\) is padded, then \(x\) is very close to the shortest \(u-v\) path, meaning that \(d(u,x)+d(v,x)\leq(1+\varepsilon)\cdot d(u,v)\). While we cannot guarantee that \(x\), or a vertex close to \(x\), will survive the attack \(B\), we can still use this to improve the stretch guarantee. Suppose that \(Z_{x}\) contains a surviving leaf \(z\) which is closer to \(x\) than both \(u,v\), then \[d(u,z)+d(z,v)\leq(d(u,x)+d(x,z))+(d(z,x)+d(x,v))\leq 2(d(u,x)+d(x,v))\leq 2(1+ \varepsilon)\cdot d(u,v)\;.\] So, instead of sampling a set \(Z_{x}\) of leaves at random from \(L(x)\), we create a bias towards vertices closer to the center \(x\). Concretely, order the leaves of \(L(x)\) by their distance to \(x\), and we would like that the probability of the \(j\)-th leaf in \(L(x)\) to join \(Z_{x}\) will be \(\approx\frac{1}{j}\). This way, the expected size of \(Z_{x}\) is still small, and if not too many vertices in the appropriate prefix of \(L(x)\) are in \(B\), then there is a good probability that such a \(z\in Z_{x}\) exists. However, as it turns out, this requirement it too strict, since every internal node \(x\) will force us to move vertices to \(B^{+}\) that fail due many vertices in \(B\) in its induced ordering. To avoid this hurdle, we use a _global_ ordering for all internal nodes - a carefully chosen preorder of \(T\) - and prove that the induced order on \(L(x)\) is a good enough approximation of distances to \(x\) (specifically, up to an additive factor of \(\approx\Gamma_{x}/k\)). 
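The biased sampling towards the cluster center can be sketched in a few lines. Here `ordered_leaves` is assumed to be \(L(x)\) sorted by (approximate) distance to the center \(x\), and the constant `c` is an illustrative stand-in for the factor controlling the expected sample size; it is not the exact probability used later in the paper.

```python
import random

def biased_sample(ordered_leaves, c=4.0):
    # The j-th closest leaf (1-indexed) joins Z_x with probability ~ c / j,
    # so E[|Z_x|] = O(c * log |L(x)|) while leaves near the center are favored.
    return [u for j, u in enumerate(ordered_leaves, start=1)
            if random.random() < min(1.0, c / j)]
```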
#### 1.2.4 Reliable Light Spanner for the Path Graph There were several construction of a reliable spanner for \(P_{n}\) in previous works [1, 1, 2], none of them could provide a meaningful bound on the lightness. For instance, the first step in the construction of [1] was to connect the first \(n/2\) vertices to the last \(n/2\) vertices via a bipartite expander graph. In particular, the total weight of just this step is \(\Omega(n^{2})\). The method of [11] is to sample \(\approx\nu^{-1}\) vertices as star centers, and connect all other vertices to each center. This construction also clearly isn't light, as the total weight of even one such star is \(\Omega(n^{2})\). Our construction of an oblivious light \(\nu\)-reliable spanner for (weighted) \(P_{n}\) is similar to the approach taken by [1]. It starts by sampling a laminar collection of subsets \([n]=V_{0}\supseteq V_{1}\supseteq V_{2}\supseteq\cdots\supseteq V_{\log n}\), where \(|V_{i}|\) contains \(\frac{n}{2^{i}}\) points in expectation. However, the construction of [1] used long range edges: from vertices in \(V_{i}\) to the nearest \(\approx 2^{i/2}\) other vertices in \(V_{i}\), and thus its lightness is polynomial in \(n\).11 Footnote 11: To see why the lightness is polynomial, consider just the level \(i=\frac{2}{3}\log n\), then \(|V_{i}|\approx n^{1/3}\), but also the number of connected neighbors is \(2^{i/2}=n^{1/3}\), so all \(\approx n^{2/3}\) edges between vertices in \(V_{i}\) are added. The average length of these edges is linear in \(n\), so the lightness is \(\Omega(n^{2/3})\). To ensure bounded lightness, we take a more local approach, and each point \(a\in V_{i}\) adds edges to only the nearest \(\ell\approx\nu^{-1}\) points in \(V_{i}\) and \(V_{i+1}\) on both its left and right sides. We remark that the connections to the next level are crucial in order to avoid additional logarithmic factors (since unlike [1], we cannot use the exponentially far away vertices, that would have provided high probability for connection of every vertex to the next level). The lightness follows as each edge \(e\) of \(P\) is expected to be "covered" \(\ell^{2}\) times, in each of the \(\log n\) levels. The reliability analysis of our spanner uses the notion of _shadow_, introduced by [1]. For the path \(P_{n}\), roughly speaking, a vertex \(u\) is outside the \(\alpha\)-shadow of an attack \(B\), if in all intervals containing \(u\), there is at most an \(\alpha\) fraction of failed vertices (in \(B\)). The reliability argument goes as follows: a vertex \(a\in[n]\setminus B\) fails and joins \(B^{+}\) only if there exists a level \(i\) in which all its connections to \(V_{i+1}\) fail. That is, its \(\ell\) closest vertices in \(V_{i+1}\) are in \(B\). But as points are chosen to \(V_{i+1}\) independently of \(B\), this is an unlikely event, whose probability can be bounded as a function of the largest \(\alpha\)-shadow that does not contain \(a\). To obtain our tight bound, we need a delicate case-analysis for the different regimes of \(\alpha\)-shadows. The stretch analysis is a refinement of [1] stairway approach. A nice feature is that each pair in \([n]\setminus B^{+}\) will have a shortest path of at most \(\log n\) hops in the spanner \(H\). ### Related Work Light fault-tolerant spanners.Levcopoulos _et. al._[10] introduced the notion of \(f\)-fault-tolerant spanner, where it is guaranteed that for every set \(F\) of at most \(f\) faulty nodes, \(H\setminus F\) is a \(t\)-spanner of \(X\setminus F\). 
However, the parameter \(f\) has to be specified in advance, and both sparsity and lightness of the spanner must polynomially depend on \(f\). Thus, unlike reliable spanners, it is impossible to construct sparse and light fault-tolerant spanners that can withstand scenarios where, say, half of the nodes fail. Czumaj and Zhao [11] constructed \(f\)-fault-tolerant spanners for points in constant-dimensional Euclidean space with optimal \(O(f^{2})\) lightness (improving over the \(2^{O(f)}\) lightness of [10]). This result was very recently generalized to doubling spaces by Le, Solomon, and Than [12], who obtain \(O(f^{2})\) lightness (improving over the \(O(f^{2}\log n)\) lightness of [11] and the \(O(f^{2}+f\log n)\) lightness of [13]). Abam _et. al._[1] introduced the notion of _region_ fault-tolerant spanners for the Euclidean plane. They showed that one can construct a \(t\)-spanner with \(O(n\log n)\) edges in such a way that if points belonging to a convex region are deleted, the residual graph is still a spanner for the remaining points.

More on Light spanners. Light spanners were constructed for high dimensional Euclidean and doubling spaces (in a similar context to our Corollary 14) [14, 10]. Subset light spanners were studied for planar and minor-free graphs [11, 12, 13, 14], where the goal is to maintain distances only between a subset of terminals (and the lightness is defined w.r.t. the minimum Steiner tree). Bartal _et. al._ constructed light prioritized and scaling spanners [1], where only a small fraction of the vertex pairs suffer from large distortion. Recently, Le and Solomon conducted a systematic study of efficient constructions of light spanners [10] (see also [10, 12]). Finally, light spanners were efficiently constructed in the LOCAL [13] and CONGEST [15] distributed models.

### Organization

After a few preliminaries in section 2, we show our reliable spanner for \(k\)-HSTs in section 3. In section 4 we show how to devise PPCS for minor-free graphs, and in section 5 we show how to construct light \(k\)-HST covers based on PPCS. In section 6 we combine the results of all previous sections, and derive our results on light reliable spanners for various metric spaces. We show our reliable spanner for the path graph in section 7. In section 8 we devise a reliable spanner for minor-free graphs with improved stretch, and finally, in section 9 we exhibit our lower bounds for the path graph and for ultrametrics.

## 2 Preliminaries

All logarithms (unless explicitly stated otherwise) are in base 2. We use \(\tilde{O}\) notation to hide poly-logarithmic factors. That is, \(\tilde{O}(s)=O(s)\cdot\log^{O(1)}(s)\). For a weighted graph \(G=(V,E)\), denote the distance between \(u,v\in V\) by \(d_{G}(u,v)\). When \(G\) is clear from context, we might write \(d(u,v)\). For a metric space \((X,d)\), we denote the ball of radius \(\Delta\geq 0\) around \(v\in X\) by \(B(v,\Delta)=\{u\in X\ :\ d(u,v)\leq\Delta\}\). The diameter of a cluster \(C\subseteq X\) is its maximum pairwise distance: \(\operatorname{diam}(C)=\max_{u,v\in C}d(u,v)\). Let \([n]\) denote the set \(\{1,\ldots,n\}\), and for integers \(a\leq b\) let \([a:b]\) denote \(\{a,\ldots,b\}\), and \([a:b)\) denote \(\{a,...,b-1\}\). We next define ultrametrics and HSTs.
**Definition 2**.: _A metric \((X,d)\) is a called an ultrametric if it satisfies a strong form of the triangle inequality_ \[\forall x,y,z\in X,\ d(x,z)\leq\max\{d(x,y),d(y,z)\}\.\] _Equivalently [1], if there exists a bijection \(\varphi\) from \(X\) to the leaves of a rooted tree \(T\) in which:_ 1. _Each node_ \(v\in T\) _is associated with a label_ \(\Gamma_{v}\) _such that_ \(\Gamma_{v}=0\) _if and only if_ \(v\) _is a leaf, and if_ \(u\) _is a child of_ \(v\) _in_ \(T\) _then_ \(\Gamma_{v}\geq\Gamma_{u}\)_._ 2. \(d(x,y)=\Gamma_{\operatorname{lca}(\varphi(x),\varphi(y))}\) _where_ \(\operatorname{lca}(u,v)\) _is the least common ancestor of_ \(u,v\) _in_ \(T\)_._ _For \(k\geq 1\), a \(k\)-hierarchical well-separated tree (\(k\)-HST) is an ultrametric \(T\) that also satisfies that whenever \(u\) is a child of \(v\) in \(T\), then \(\Gamma_{v}\geq k\cdot\Gamma_{u}\)._ **Definition 3** (ultrametric cover).: _A \((\tau,\rho)\)-ultrametric cover for a metric space \((X,d)\), is a collection of at most \(\tau\) dominating\({}^{4}\) ultrametrics \(\mathcal{U}=\{(U_{i},d_{U_{i}})\}_{i=1}^{\tau}\) over \(X\), such that for every \(x,y\in X\) there is an ultrametric \(U_{i}\) for which \(d_{U_{i}}(x,y)\leq\rho\cdot d_{X}(x,y)\)._ _The cover is called \(l\)-light, if the weight of every ultrametric \(U_{i}\) is at most \(l\cdot w(MST(X))\)._ **Definition 4** (Pairwise Partition Cover Scheme).: _A collection of partitions \(\mathbb{P}=\{\mathcal{P}_{1},\ldots,\mathcal{P}_{s}\}\) is \((\tau,\rho,\varepsilon,\Delta)\)-pairwise partition cover if (a) \(s\leq\tau\), (b) every partition \(\mathcal{P}_{i}\) is \(\Delta\)-bounded (that is, \(\forall C\in\mathcal{P}_{i}\), \(\operatorname{diam}(C)\leq\Delta\)), and (c) for every pair \(x,y\) such that \(\frac{\Delta}{2\rho}\leq d(x,y)\leq\frac{\Delta}{\rho}\), there is a cluster \(C\) in one of the partitions \(\mathcal{P}_{i}\) such that \(C\) contains both closed balls \(B(x,\varepsilon\Delta),B(y,\varepsilon\Delta)\). A space \((X,d)\) admits a \((\tau,\rho,\varepsilon)\)-pairwise partition cover scheme (PPCS) if for every \(\Delta>0\), it admits a \((\tau,\rho,\varepsilon,\Delta)\)-pairwise partition cover._ **Definition 5** (\(\Delta\)-net).: _For \(\Delta>0\) and a metric space \((X,d)\), a \(\Delta\)-net is a set \(\mathcal{N}\subseteq X\) such that:_ 1. _Packing: For every_ \(u,v\in\mathcal{N},d(u,v)>\Delta\)__ 2. _Covering: For every_ \(x\in X\)_, there exists_ \(u\in\mathcal{N}\) _satisfying_ \(d(x,u)\leq\Delta\)_._ It is well known that a simple greedy algorithm can find a \(\Delta\)-net. **Definition 6**.: _A metric space \((X,d)\) has doubling dimension \(\mathrm{ddim}\), if for every \(r>0\), every ball of radius \(2r\) can be covered by \(2^{\mathrm{ddim}}\) balls of radius \(r\). A family of metrics is called doubling, if all the metrics in the family have uniformly bounded doubling dimension._ By applying the definition iteratively, we get the following simple corollary. **Lemma 7** (Packing Lemma).: _If \((X,d)\) has doubling dimension \(\mathrm{ddim}\), and \(\mathcal{N}\) is a \(\Delta\)-net, then for any \(R>1\), a ball of radius \(R\cdot\Delta\) contains at most \((2R)^{\mathrm{ddim}}\) net points._ The proof uses the fact that a ball of radius \(\Delta/2\) cannot contain two net points of \(\mathcal{N}\). The following lemma is an extension of [10, Lemma 2], that shows it suffices to bound the expected size and lightness of an oblivious \(\nu\)-reliable spanner, in order to obtain worst-case guarantees. 
**Lemma 8**.: _Suppose that \((X,d)\) admits an oblivious \(\nu\)-reliable spanner \(\mathcal{D}\) with expected size \(m\) and expected lightness \(\phi\), then \((X,d)\) admits an oblivious \(3\nu\)-reliable spanner \(\mathcal{D}^{\prime}\) with size \(3\cdot m\) and lightness \(3\cdot\phi\)._ Proof.: We define \(\mathcal{D}^{\prime}\) by conditioning on the event \(A=\{(|H|\leq 3m)\wedge(w(H)\leq 3\phi\cdot w(MST(X)))\}\). Observe that \(\Pr[|H|>3m]\leq 1/3\) and also \(\Pr[w(H)>3\phi\cdot w(MST(X))]\leq 1/3\), both by Markov's inequality. So that \(\Pr[A]\geq 1/3\). For any attack \(B\subseteq X\), \[\mathbb{E}_{H\sim\mathcal{D}^{\prime}}[|B_{H}^{+}\setminus B|] = \mathbb{E}_{H\sim\mathcal{D}}[|B_{H}^{+}\setminus B|\ \mid A]\] \[= \sum_{H\in\mathrm{supp}(\mathcal{D})}|B_{H}^{+}\setminus B|\cdot \frac{\Pr[H\cap A]}{\Pr[A]}\] \[\leq \frac{1}{\Pr[A]}\cdot\sum_{H\in\mathrm{supp}(\mathcal{D})}|B_{H} ^{+}\setminus B|\cdot\Pr[H]\] \[\leq \frac{\nu\cdot|B|}{\Pr[A]}\leq 3\nu\cdot|B|\.\] ## 3 Light Reliable Spanner for \(k\)-Hsts In this section we devise a light reliable spanner for the family of \(k\)-HSTs (see Definition 2). Let \(T\) be the tree corresponding to the given \(k\)-HST, we refer to its leaves as vertices, and to the interval nodes as nodes. Each node has an arbitrary order on its children. For a node \(x\) we denote by \(L(x)\) the set of leaves in the subtree rooted at \(x\), and by \(L=[n]\) the set of all leaves. For an internal node \(x\) in \(T\), let \(\deg(x)\) denote the number of children of \(x\). We will assume that \(\deg(x)\geq 2\) (as degree \(1\) nodes are never the least common ancestor, and thus can be contracted). Our goal is to prove the following theorem. **Theorem 1**.: _For any parameters \(\nu\in(0,1/6)\) and \(k>1\), every \(k\)-HST \(T\) admits an oblivious \(\nu\)-reliable \((2+\frac{2}{k-1})\)-spanner of size \(n\cdot\tilde{O}(\nu^{-1}\cdot\log\log n)^{2}\) and lightness \(\tilde{O}(\nu^{-1}\cdot\log\log n)^{2}\)._ ### Decomposition of \(T\) to Heavy Paths We apply the following decomposition of \(T\) into paths, reminiscent of the heavy-path decomposition [12]. Each node \(x\in T\) is given a tag, initially \(\sigma_{x}=|L(x)|\), and set \(D=\emptyset\). Go over the nodes of \(T\) in preorder, and when visiting node \(x\) with children \(x_{1},\ldots,x_{t}\): If there is \(1\leq j\leq t\) such that \(\sigma_{x_{j}}>(1-\nu/2)\sigma_{x}\), set \(\sigma_{x_{j}}=\sigma_{x}\) and add the edge \(\{x,x_{j}\}\) to \(D\). For example, if \(T\) contains a path \((y_{1},y_{2},\ldots,y_{q})\) where \(y_{1}\) is the closest vertex to the root, and \(L(y_{q})>(1-\nu/2)L(y_{2})\) while \(L(y_{2})<(1-\nu/2)L(y_{1})\) then it will hold that \(\sigma_{y_{1}}\neq\sigma_{y_{2}}=\sigma_{y_{3}}=\cdots=\sigma_{y_{q}}=|L(y_{2})|\). We claim that \(\sigma_{x}\geq|L(x)|\) for every node \(x\in T\), because we either have equality or \(x\) inherit the original tag of one of its ancestors. As \(1-\nu/2>1/2\), there cannot be two different children of \(x\) with more than \(|L(x)|/2\) leaves in their subtree, hence there can be at most one child \(x_{j}\) for which an edge is added to \(D\). So indeed \(D\) is a decomposition of \(T\) into heavy paths (some paths can be singletons). Denote by \(\mathcal{Q}\) this collection of paths, and for each \(Q\in\mathcal{Q}\), let \(f(Q)\) be the lowest vertex (farthest from the root) on \(Q\). We overload this notation, and define \(f(x)=f(Q)\), where \(Q\) is the heavy path containing \(x\). 
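A small procedural sketch of the tagging procedure above (illustrative only): `tree[x]` is assumed to list the children of `x`, and `nleaves[x]` \(=|L(x)|\) is assumed to be precomputed; the returned set `D` contains the heavy edges, and the tags \(\sigma_{x}\) are exactly as defined.

```python
def heavy_path_tags(tree, root, nleaves, nu):
    """Heavy-path decomposition by tags: sigma_x starts as |L(x)|; a child
    whose subtree holds more than a (1 - nu/2) fraction of its parent's tag
    inherits the tag, and the corresponding edge joins D."""
    sigma = dict(nleaves)          # sigma_x is initialized to |L(x)|
    D = set()
    stack = [root]                 # preorder traversal of T
    while stack:
        x = stack.pop()
        for c in tree.get(x, []):
            if sigma[c] > (1 - nu / 2) * sigma[x]:
                sigma[c] = sigma[x]        # c is the (unique) heavy child of x
                D.add((x, c))
            stack.append(c)
    return sigma, D
```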
Let \(F=\{f(Q)\}_{Q\in\mathcal{Q}}\) be the set of lowest vertices over all paths. **Claim 9**.: _Each root-to-leaf path \(W\) intersects at most \(O(\nu^{-1}\log n)\) paths in \(\mathcal{Q}\)._ Proof.: Fix a path \(Q\in\mathcal{Q}\). Note that all nodes in \(Q\) have the same tag \(\sigma_{Q}\). Whenever the path \(W\) leaves \(Q\), it will go to some node \(y\) with \(\sigma_{y}\leq(1-\nu/2)\sigma_{Q}\). The root has tag \(n\), so after leaving \(2\nu^{-1}\ln n\) heavy paths, the tag will be at most \[n\cdot(1-\nu/2)^{2\nu^{-1}\ln n}<n\cdot e^{-\ln n}=1\,\] since the tag of any internal node \(x\) is at least \(|L(x)|\), we must have reached a leaf. ### Construction For each node \(y\in F\), we independently sample uniformly at random a set \(Z_{y}\) of \(\ell=c\cdot\nu^{-1}\cdot\ln\left(\frac{\ln n}{\nu}\right)\) vertices from \(L(y)\), where \(c\) is a constant to be determined later. If there are less than \(\ell\) vertices in \(L(y)\), take \(Z_{y}=L(y)\). For each internal node \(x\) in \(T\) with children \(x_{1},\ldots,x_{t}\), and for every \(1\leq j\leq t\), we add the edges \(\{\{y,z\}\ :\ y\in Z_{f(x)},z\in Z_{f(x_{j})}\}\) to the spanner \(H\). Defining the set \(B^{+}\).Consider an attack \(B\). We say that an internal node \(x\in T\) is _good_ if \(Z_{f(x)}\setminus B\neq\emptyset\). A leaf \(u\) is _safe_ if for every ancestor \(x\) of \(u\), \(x\) is good. In other words, a leaf is safe if every ancestor \(x\) sampled a leaf to \(Z_{f(x)}\) which is not in \(B\). Define \(B^{+}\) as the set of all leaves which are not safe. ### Analysis Size Analysis.For each internal node \(x\) in \(F\) and each child \(x_{j}\) of \(x\), we added the bi-clique \(Z_{x}\times Z_{x_{j}}\), which contains at most \(\ell^{2}\) edges. Since the sum of degrees of internal nodes in \(T\) is \(O(n)\) (recall that all degrees are at least 2), the total number of edges added to \(H\) is \(O(n\cdot\ell^{2})=n\cdot\tilde{O}(\nu^{-1}\cdot\log\log n)^{2}\). Weight Analysis.First, we claim that the weight of the MST for the leaves of \(T\) is equal to \[\sum_{x\in T}(\deg(x)-1)\cdot\Gamma_{x}. \tag{1}\] This can be verified by running Boruvka's algorithm, say.12 Every internal node \(x\) in \(F\), adds at most \(\ell^{2}\cdot\deg(x)\) edges of weight at most \(\Gamma_{x}\) to the spanner. The total weight is thus Footnote 12: In Boruvka’s algorithm, we start with all vertices as singleton components. In each iteration, every component adds to the MST the edge of smallest weight leaving it (breaking ties consistently). For a \(k\)-HST, we use a small variation – only components which are the deepest leaves in the HST participate in the current iteration. We claim that the connected components after the \(j\)-th iteration correspond to nodes of height \(j\) above the leaves. Thus, in the \(j\)-th iteration, any node \(x\) of height \(j\) will add \(\deg(x)-1\) edges with weight \(\Gamma_{x}\) each, that connect the components corresponding to its children. \[\sum_{x\in F}\deg(x)\cdot\ell^{2}\cdot\Gamma_{x}=O(w(MST)\cdot\ell^{2})=w( MST)\cdot\tilde{O}(\nu^{-1}\cdot\log\log n)^{2})\.\] Stretch Analysis.The stretch analysis is based on the following lemma. **Lemma 10**.: _Let \(u\notin B^{+}\) be any safe leaf. Then for any ancestor \(x\) of \(u\) and any \(v\in Z_{f(x)}\setminus B\), the spanner \(H\) contains a path from \(u\) to \(v\) of length at most \(\left(1+\frac{1}{k-1}\right)\cdot\Gamma_{x}\) that is disjoint from \(B\)._ Proof.: The proof is by induction on \(|L(x)|\). 
The base case is when \(x=u\), then \(L(u)=\{u\}\) and the statement holds trivially. Let \(x\) be an ancestor of \(u\), and take any vertex \(v\in Z_{f(x)}\setminus B\). We need to find a path in \(H\) of length at most \(\left(1+\frac{1}{k-1}\right)\cdot\Gamma_{x}\) from \(u\) to \(v\) that is disjoint from \(B\). Let \(x_{u}\) be the child of \(x\) whose subtree contains \(u\). Since \(u\) is safe, we know that \(Z_{f(x_{u})}\setminus B\neq\emptyset\), so take any vertex \(u^{\prime}\in Z_{f(x_{u})}\setminus B\). By the induction hypothesis on \(x_{j}\), there is a path \(P^{\prime}\) in \(H\) from \(u\) to \(u^{\prime}\) of length at most \(\left(1+\frac{1}{k-1}\right)\cdot\Gamma_{x_{j}}\) disjoint from \(B\) (note that indeed \(|L(x_{j})|<|L(x)|\), as all vertices have degree at least 2). Recall that in the construction step for \(x\), we added all edges from \(Z_{f(x)}\) to \(Z_{f(x_{u})}\), in particular the edge \(\{u^{\prime},v\}\in H\). Note that \(v\notin B\), that \(u^{\prime},v\in L(x)\) and therefore \(d_{T}(u^{\prime},v)\leq\Gamma_{x}\), and as \(T\) is a \(k\)-HST we have that \(\Gamma_{x_{j}}\leq\frac{\Gamma_{x}}{k}\). It follows that the path \(P=P^{\prime}\circ\{u^{\prime},v\}\) from \(u\) to \(v\) in \(H\) is disjoint from \(B\), and has length at most \[\left(1+\frac{1}{k-1}\right)\cdot\Gamma_{x_{j}}+\Gamma_{x}\leq\left(\frac{1+ \frac{1}{k-1}}{k}\right)\cdot\Gamma_{x}+\Gamma_{x}=\left(1+\frac{1}{k-1} \right)\cdot\Gamma_{x}\] Fix a pair of leaves \(u,v\notin B^{+}\), and let \(x=\operatorname{lca}(u,v)\). Since both are safe, \(Z_{f(x)}\setminus B\neq\emptyset\), and pick any \(z\in Z_{f(x)}\setminus B\). By lemma 10 there are paths in \(H\) from \(u\) to \(z\) and from \(v\) to \(z\), both disjoint from \(B\), of combined length at most \[2\cdot\left(1+\frac{1}{k-1}\right)\cdot\Gamma_{x}=\left(2+\frac{2}{k-1}\right) \cdot d_{T}(u,v)\.\] Reliability Analysis.For every \(x\in T\), denote by \(B^{(x)}\) the set of all vertices in \(u\in L(x)\setminus B\), such that there is an ancestor \(z\) of \(u\) in the subtree rooted at \(x\) for which \(Z_{f(z)}\subseteq B\). In other words, those are the leaves (outside \(B\)) who are not safe due to a bad ancestor in the subtree rooted at \(x\). We say that a node \(x\in T\) is _brutally attacked_ if \(|B\cap L(x)|\geq(1-\nu)\cdot|L(x)|\), that is at least a \(1-\nu\) fraction of the decedent leaves of \(x\) are in the attack \(B\). Denote by \(B^{(x)}_{1}\subseteq B^{(x)}\) the set of vertices \(u\in L(x)\setminus B\) that have a brutally attacked ancestor \(y\) in the subtree rooted at \(x\). Denote by \(B^{(x)}_{2}=B^{(x)}\setminus B^{(x)}_{1}\) the rest of the vertices in \(B^{(x)}\). We next argue that the number of vertices added to \(B^{+}\) (in the worst case) due to brutally attacked nodes is bounded by \(O(\nu)\cdot|B|\). Let \(A_{\text{ba}}\) be the set of \(T\) nodes which are brutally attacked, and they are maximal w.r.t. the order induced by \(T\). That is, \(x\in A_{\text{ba}}\) if and only if \(x\) is brutally attacked, while for every ancestor \(y\) of \(x\), \(y\) is not brutally attacked. Clearly, for every \(x\in A_{\text{ba}}\) it holds that \(|B^{(x)}_{1}|\leq|L(x)\setminus B|\leq\nu\cdot|L(x)|\leq\frac{\nu}{1-\nu}\cdot |L(x)\cap B|\). 
In total, for the root \(r\) of \(T\) it holds that \[|B^{(r)}_{1}|=\sum_{x\in A_{\text{ba}}}|B^{(x)}_{1}|\leq\sum_{x\in A_{\text{ba }}}\frac{\nu}{1-\nu}\cdot|L(x)\cap B|\leq\frac{\nu}{1-\nu}\cdot|B|\leq 2\nu \cdot|B|\.\] Next we bound the damage done (in expectation) due to non brutally attacked nodes. Denote \(\beta=\frac{1}{\ln\ln n}\). We will prove for any node \(x\in T\) which is not a heavy child, by induction on \(|L(x)|\) that \[\mathbb{E}[|B^{(x)}_{2}|]\leq\max\left\{0,\nu\cdot\beta\cdot\ln\ln(|L(x)|) \cdot|B\cap L(x)|\right\}. \tag{2}\] The base case where \(|L(x)|\leq\nu^{-1}\) holds trivially as \(B^{(x)}_{2}=\emptyset\). Indeed, consider a descendent leaf \(v\notin B\) of \(x\). For every ancestor internal node \(y\) of \(v\), which is a descendent of \(x\), it holds that \(f(y)=y\) (\(y\) does not have heavy children as \(|L(y)|-1=(1-\frac{1}{|L(y)|})\cdot|L(y)|<(1-\frac{\nu}{2})\cdot|L(y)|\)). In particular \(v\in Z_{f(y)}\setminus B\). It follows that \(v\notin B^{(x)}_{2}\), and thus \(B^{(x)}_{2}=\emptyset\). In general, let \(x\in T\) be an inner node, which is not a heavy child. Denote \(m=|L(x)|>\nu^{-1}\). \(x\) is the first vertex in a heavy path \(Q=(x=y_{1},y_{2},...,y_{s})\in\mathcal{Q}\). Let \(x_{1},\ldots,x_{t}\) be the children of all the nodes in \(Q\). Observe that none of \(x_{1},\ldots,x_{t}\) is a heavy child, and that \(L(x_{1}),\ldots,L(x_{t})\) is a partition of \(L(x)\). The main observation is that all the vertices in \(Q\) use the same sample \(Z_{f(x)}\), so a leaf \(u\) is in \(B^{(x)}_{2}\) if at least one the following holds: 1. \(u\in B^{(x_{j})}_{2}\) for some \(1\leq j\leq t\), or 2. \(Z_{f(x)}\subseteq B\). We conclude that \[\mathbb{E}[|B_{2}^{(x)}|]\leq\sum_{j=1}^{t}\mathbb{E}[|B_{2}^{(x_{j})}|]+|L(x)| \cdot\Pr[Z_{f(x)}\subseteq B]. \tag{3}\] In what follows we bound each of the two summands. For the first, we use the induction hypothesis on \(x_{j}\) (clearly \(|L(x_{j})|<m=|L(x)|\)), to get that \[\mathbb{E}\left[\left|B_{2}^{(x_{j})}\right|\right]\leq\max\left\{0,\nu\cdot \beta\cdot\ln\ln(|L(x_{j})|)\cdot|B\cap L(x_{j})|\right\}\.\] By definition of a heavy path, for every \(1\leq j\leq t\), \(|L(x_{j})|\leq(1-\nu/2)\cdot\sigma_{Q}=(1-\nu/2)\cdot m\). It holds that \((1-\frac{\nu}{2})\cdot m\geq(1-\frac{\nu}{2})\cdot\nu^{-1}\geq\nu^{-1}-\frac{ 1}{2}\geq 5.5\), and in particular, \(\ln\ln\left((1-\frac{\nu}{2})\cdot m\right)>0\). It follows that \[\sum_{j=1}^{t}\mathbb{E}[|B_{2}^{(x_{j})}|] \leq\sum_{j=1}^{t}\nu\cdot\beta\cdot\ln\ln\left(\left(1-\frac{ \nu}{2}\right)\cdot m\right)\cdot|B\cap L(x_{j})| \tag{4}\] \[=\nu\cdot\beta\cdot\ln\ln\left(\left(1-\frac{\nu}{2}\right)\cdot m \right)\cdot|B\cap L(x)|\.\] For the second summand, we now analyze the probability of the event \(Z_{f(x)}\subseteq B\). If \(|B\cap L(x)|\geq(1-\nu)\cdot|L(x)|\), then \(x\) is brutally attacked and thus \(B_{2}^{(x)}=\emptyset\) and (2) holds. We thus can assume \(|B\cap L(x)|<(1-\nu)\cdot|L(x)|\). By the heavy path decomposition, it holds that \(|L(f(x))|>(1-\frac{\nu}{2})\cdot m\). In the case that \(|L(f(x))|\leq\ell\) we take \(Z_{f(x)}=L(f(x))\), and as \(|L(f(x))|>(1-\frac{\nu}{2})\cdot m>(1-\nu)m>|B\cap L(x)|\), there must be a vertex in \(Z_{f(x)}\setminus B\). In particular, \(\Pr\left[Z_{f(x)}\subseteq B\right]=0\). Otherwise, we have that \(|L(f(x))|>\ell\). 
As \(Z_{f(x)}\) is chosen from \(L(f(x))\) independently of \(B\), by Lemma 33, the probability that all of the \(\ell\) vertices in \(Z_{f(x)}\) are chosen from \(B\cap L(f(x))\) is at most \[\Pr\left[Z_{f(x)}\subseteq B\right] =\frac{\binom{|B\cap L(f(x))|}{\ell}}{\binom{|L(f(x))|}{\ell}} \leq O(\sqrt{\ell})\cdot\left(\frac{|B\cap L(f(x))|}{|L(f(x))|}\right)^{\ell}\] \[\leq O(\sqrt{\ell})\cdot\left(\frac{1-\nu}{1-\frac{\nu}{2}}\right) ^{\ell-1}\cdot\frac{|B\cap L(f(x))|}{m}\] \[\stackrel{{(*)}}{{\leq}}\frac{\nu^{2}\cdot\beta}{4 \cdot\ln n}\cdot\frac{|B\cap L(f(x))|}{m}\leq\frac{\nu^{2}\cdot\beta}{4\cdot \ln m}\cdot\frac{|B\cap L(x)|}{m}\, \tag{5}\] where the inequality \({}^{(*)}\) uses that \(\frac{1-\nu}{1-\frac{\nu}{2}}\leq 1-\frac{\nu}{2}\leq e^{-\nu/2}\), and taking a large enough constant \(c\) in the definition of \(\ell\). By plugging (4) and (5) into (3) we conclude that, \[\mathbb{E}\left[\left|B_{2}^{(x)}\right|\right] \leq\sum_{j=1}^{t}\mathbb{E}[|B_{2}^{(x_{j})}|]+m\cdot\Pr[Z_{f(x) }\subseteq B]\] \[\leq\nu\cdot\beta\cdot\ln\ln\left(\left(1-\frac{\nu}{2}\right) \cdot m\right)\cdot|B\cap L(x)|+\frac{\nu^{2}\cdot\beta}{4\cdot\ln m}\cdot|B \cap L(x)|\] \[\stackrel{{(**)}}{{\leq}}\nu\cdot\beta\cdot\ln\ln m \cdot|B\cap L(x)|\ \,\] which concludes the proof of (2), and thus the induction step. It remains to validate \({}^{(**)}\): \[\ln\ln m-\ln\ln\left((1-\frac{\nu}{2})\cdot m\right) =\ln\frac{\ln m}{\ln\left((1-\frac{\nu}{2})\cdot m\right)}\geq\ln \frac{\ln m}{\ln m-\ln(1+\frac{\nu}{2})}\] \[\geq\ln\left(1+\frac{\ln(1+\frac{\nu}{2})}{\ln m}\right)\geq\frac {\ln(1+\frac{\nu}{2})}{2\ln m}\geq\frac{\nu}{4\ln m}\,\] using \(\ln(1+x)\geq\frac{x}{2}\) for \(0<x<1\). Finally, by applying (2) on the root \(r\) of \(T\), we get that \[\mathbb{E}[|B^{+}\setminus B|]=\mathbb{E}[|B^{(r)}_{1}|+|B^{(r)}_{2}|]\leq(2 \nu+\nu\cdot\beta\cdot\ln\ln n)\cdot|B|=3\nu\cdot|B|\.\] Theorem 1 follows by rescaling \(\nu\) by a factor of \(3\). ### Improved Stretch for Small Max Degree HST In this subsection we slightly modify Theorem 1 to obtain a spanner with stretch \((1+\frac{2}{k-1})\), while increasing the lightness and sparsity to be linear in the maximum degree of the HST. Later, we will use Theorem 2 to construct an oblivious light \((1+\varepsilon)\)-reliable spanner for doubling metrics. **Theorem 2**.: _Consider a \(k\)-HST \(T\) of maximum degree \(\delta\). For any parameters \(\nu\in(0,1/6)\) and \(k>1\), \(T\) admits an oblivious \(\nu\)-reliable \((1+\frac{2}{k-1})\)-spanner of size \(n\cdot\delta\cdot\tilde{O}\left(\nu^{-1}\cdot\log\log n\right)^{2}\) and lightness \(\delta\cdot\tilde{O}(\nu^{-1}\cdot\log\log n)^{2}\)._ Proof.: The construction will follow the exact same lines of Theorem 1 with a small tweak. We will use the heavy path decomposition \(\mathcal{Q}\), and for every node \(y\in F\), we will sample a set \(Z_{y}\) of size \(\ell\) from \(L(y)\). The set \(B^{+}\) (and the definition of safe), remain exactly the same. The only difference is in the definition of bi-cliques. Specifically, for each internal node \(x=x_{0}\) in \(T\) with children \(x_{1},\ldots,x_{t}\), for every \(0\leq j<j^{\prime}\leq t\), we add the edges \(\{\{y,z\}\ :\ y\in Z_{f(x_{j})},z\in Z_{f(x_{j^{\prime}})}\}\) to the spanner \(H\). That is, in addition to adding edges from \(Z_{f(x)}\) (the sample set of \(x\)) to all the other sampled sets (of the children of \(x\)), we also add all the edges between the two sets \(Z_{f(x_{j})},Z_{f(x_{j^{\prime}})}\) of every pair of children of \(x\). 
As \(B^{+}\) is defined in the exact same way, for every attack \(B\) we have \(\mathbb{E}[|B^{+}|]\leq(1+\nu)\cdot|B|\). For the size analysis, consider an internal node \(x\) of degree \(\deg(x)\leq\delta\), we add at most \(\ell^{2}\cdot\binom{\deg(x)+1}{2}\leq\ell^{2}\cdot\delta\cdot\deg(x)\) edges. In total, the size of the spanner is bounded by \(n\cdot\ell^{2}\cdot\delta=n\cdot\delta\cdot\tilde{O}(\nu^{-1}\cdot\log\log n )^{2}\). For the lightness analysis, the total weight added due to an internal node \(x\) of degree \(\deg(x)\leq\delta\) is at most \(\ell^{2}\cdot\delta\cdot\deg(x)\cdot\Gamma_{x}\). Thus, the total weight added due to the bi-cliques is \(\sum_{x\in T}\deg(x)\cdot\ell^{2}\cdot\delta\cdot\Gamma_{x}=\delta\cdot\tilde {O}(\nu^{-1}\cdot\log\log n)^{2}\cdot w(MST)\). It remains to analyze the stretch. The argument is similar to Theorem 1, where the main difference is that a \(u-v\) path will be using only a single edge in the highest level (instead of two). Note that since we only add additional edges to \(H\) in this variant, Lemma 10 still holds. Fix a pair of leaves \(u,v\notin B^{+}\), and let \(x=\operatorname{lca}(u,v)\). Let \(x_{u}\) (resp., \(x_{v}\)) be the child of whose subtree contains \(u\) (resp., \(v\)). Since both \(u,v\) are safe, \(Z_{f(x_{u})}\setminus B\neq\emptyset\) and \(Z_{f(x_{v})}\setminus B\neq\emptyset\), so pick any \(u^{\prime}\in Z_{f(x_{u})}\setminus B\) and \(v^{\prime}\in Z_{f(x_{v})}\setminus B\). By the construction step for \(x\), we added all edges in \(Z_{f(x_{u})}\times Z_{f(x_{v})}\), in particular, \(\{u^{\prime},v^{\prime}\}\in H\). Note that \(d_{T}(u^{\prime},v^{\prime})\leq\Gamma_{x}\), since both are in \(L(x)\). By Lemma 10 there is a path \(P_{u}\) (resp., \(P_{v}\)) in \(H\) from \(u\) to \(u^{\prime}\) (resp., \(v\) to \(v^{\prime}\)), which is disjoint from \(B\), and of length at most \(\left(1+\frac{1}{k-1}\right)\cdot\Gamma_{x_{u}}\) (resp., \(\left(1+\frac{1}{k-1}\right)\cdot\Gamma_{x_{v}}\)). Since \(T\) is a \(k\)-HST we have that \(\Gamma_{x_{u}},\Gamma_{x_{v}}\leq\frac{\Gamma_{x}}{k}\), therefore the path \(P=P_{u}\circ\{u^{\prime},v^{\prime}\}\circ P_{v}\) is a \(u-v\) path in \(H\), disjoint from \(B\), and has total length at most \[2\cdot\left(1+\frac{1}{k-1}\right)\cdot\frac{\Gamma_{x}}{k}+\Gamma_{x}=\left( 1+\frac{2}{k-1}\right)\cdot d_{T}(u,v)\.\] ## 4 Pairwise Partition Cover for Minor Free Graphs In this section we construct a _Pairwise Partition Cover Scheme_ (PPCS, recall Definition 4) for metrics arising from shortest paths of graphs excluding a fixed minor. The main building block in the construction of our PPCS is the so called Shortest Path Decomposition (SPD) introduced by [1]. Roughly speaking, this is a recursive decomposition of the graph into shortest paths, and the measure of interest is the depth of the recursion, as captured by the following definition. **Definition 11** (Spddepth).: _A graph has an SPDdepth\(1\) if and only if it is a (weighted) path. A graph \(G\) has an SPDdepth\(k\geq 2\) if there exists a shortest path\(P\), such that deleting \(P\) from the graph \(G\) results in a graph whose connected components all have SPDdepth at most \(k-1\)._ It is shown in [1] that \(n\)-vertex graphs excluding a fixed minor have SPDdepth\(O(\log n)\) (this follows by using the balanced separator consisting of \(O(1)\) shortest paths, by [1]). 
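Definition 11 can be read as the following recursion. This is a rough sketch under strong assumptions: `pick_shortest_path` is an assumed oracle that returns the vertex set of a shortest path to delete, and the routine reports the depth achieved by the oracle's choices (an upper bound on the true SPDdepth); how to choose such paths with \(O(\log n)\) recursion depth for minor-free graphs is exactly the separator result cited above and is not implemented here.

```python
from collections import deque

def components(adj):
    # Connected components of an undirected graph given as {v: list of neighbours}.
    seen, comps = set(), []
    for s in adj:
        if s in seen:
            continue
        comp, queue = {s}, deque([s])
        seen.add(s)
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    comp.add(v)
                    queue.append(v)
        comps.append(comp)
    return comps

def is_path_graph(adj):
    # A connected graph with max degree 2 and |E| = |V| - 1 is a simple path.
    m = sum(len(nbrs) for nbrs in adj.values()) // 2
    return (len(components(adj)) == 1 and m == len(adj) - 1
            and all(len(nbrs) <= 2 for nbrs in adj.values()))

def spd_depth(adj, pick_shortest_path):
    """Recursion of Definition 11: depth 1 for a path, otherwise delete the
    shortest path chosen by the oracle and recurse on the components."""
    if is_path_graph(adj):
        return 1
    removed = set(pick_shortest_path(adj))
    rest = {v: [u for u in adj[v] if u not in removed]
            for v in adj if v not in removed}
    if not rest:
        return 2
    return 1 + max(spd_depth({v: [u for u in rest[v] if u in comp] for v in comp},
                             pick_shortest_path)
                   for comp in components(rest))
```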
We will prove the following lemma: **Lemma 12**.: _For any parameter \(0<\varepsilon<1/6\), any graph \(G=(V,E)\) with SPDdepth\(k\) admits a \(\left(\frac{k}{\varepsilon},\frac{2}{1-6\varepsilon},\varepsilon\right)\)-PPCS._ In particular, as graphs excluding a fixed minor have SPDdepth\(=O(\log n)\), we obtain the following corollary. **Corollary 3**.: _For any parameter \(\varepsilon<1/6\), every graph \(G=(V,E)\) that excludes a fixed minor, admits a \(\left(\frac{O(\log n)}{\varepsilon},\frac{2}{1-6\varepsilon},\varepsilon\right)\)-PPCS_ Proof of Lemma 12.: We will assume for simplicity (and w.l.o.g.) that \(\varepsilon^{-1}\) is an integer. Fix \(\Delta>0\). We will prove by induction on the SPDdepth, that graphs with SPDdepth\(k\) admit a \(\left(\frac{k}{\varepsilon},\frac{2}{1-6\varepsilon},\varepsilon,\Delta\right)\)-PPC, assuming all graphs with SPDdepth less than \(k\) admits a \(\left(\frac{k-1}{\varepsilon},\frac{2}{1-6\varepsilon},\varepsilon,\Delta\right)\)-PPC. For the base case, we think of a graph with SPDdepth\(0\) as the empty graph, where there is nothing to prove. Let \(G=(V,E)\) be a connected graph with SPDdepth\(k\), denote by \(d(u,v)\) the shortest path distance between \(u,v\in V\), and let \(P\) be a shortest path in \(G\) such that every connected component in \(G\backslash P\) has SPDdepth at most \(k-1\). Construction.The basic idea is quite simple, we use the \(\frac{k-1}{\varepsilon}\) partitions for the connected components of \(G\setminus P\), and create \(\frac{1}{\varepsilon}\) new partitions, whose goal is proving padding for pairs \(u,v\) such that \(P\) intersect the shortest \(u-v\) path, or the balls \(B_{G}(u,\varepsilon\Delta),B_{G}(v,\varepsilon\Delta)\). We start by defining the new partitions \(\mathcal{P}_{new}=\{\mathcal{P}_{1},\ldots,\mathcal{P}_{\varepsilon^{-1}}\}\). Let \(\mathcal{N}=\{z_{1},\ldots,z_{l}\}\subseteq P\) be an \(\varepsilon\Delta\)-net for \(P\) (recall Definition 5). Fix one endpoint of \(P\), and assume that \((z_{1},\ldots,z_{l})\) are sorted by their distance to this endpoint of \(P\). For every \(i\in\{0,1,\ldots,\varepsilon^{-1}-1\}\), let \(\mathcal{N}_{i}=\{z_{j}\ :\ j\equiv i\mod\varepsilon^{-1}\}\). For every \(z_{p},z_{q}\in\mathcal{N}_{i}\) with \(1\leq p<q\leq l\), we have that \[d(z_{p},z_{q})=\sum_{j=p}^{q-1}d(z_{j},z_{j+1})>(p-q)\varepsilon\Delta\geq \Delta\.\] The equality holds as \(P\) is a shortest path, the first inequality holds since the distance between net points is larger than \(\varepsilon\Delta\), and the last inequality by definition of \(\mathcal{N}_{i}\). Thus, the balls \(B(z_{p},\Delta/2),B(z_{q},\Delta/2)\) are disjoint. For every \(0\leq i\leq\varepsilon^{-1}-1\), we set \(\mathcal{P}_{i}\) to contain the clusters \(\{B(z,\Delta/2)\}_{z\in\mathcal{N}_{i}}\), and add the rest of the vertices (those that are not contained in any of these balls) as singleton clusters. Let \(G_{1},\ldots,G_{t}\) be the connected components of \(G\backslash P\), where \(t\) is the number of connected components. For every \(1\leq j\leq t\), we apply the induction hypothesis on \(G_{j}\), which yields a \(\big{(}\frac{k-1}{\varepsilon},\frac{2}{1-6\varepsilon},\varepsilon,\Delta \big{)}\)-PPC for \(G_{j}\). This is a collection \(\mathcal{F}^{(j)}=\{\mathcal{P}_{1}^{(j)},\ldots\mathcal{P}_{\varepsilon^{-1}( k-1)}^{(j)}\}\) of \(\varepsilon^{-1}(k-1)\) partitions. 
For every \(1\leq i\leq\varepsilon^{-1}(k-1)\), we construct a partition \(\mathcal{H}_{i}\) for \(G\), by taking \(\cup_{j=1}^{t}\mathcal{P}_{i}^{(j)}\), and adding the remaining vertices (note these are the vertices of \(P\)) as singleton clusters. We return \(\mathcal{F}=\{\mathcal{P}_{i}\}_{i=0}^{\varepsilon^{-1}-1}\cup\{\mathcal{H}_{i}\}_{1\leq i\leq\varepsilon^{-1}(k-1)}\) as the PPC for \(G\). It remains to show that \(\mathcal{F}\) is indeed a \(\big{(}\frac{k}{\varepsilon},\frac{2}{1-6\varepsilon},\varepsilon,\Delta\big{)}\)-PPC.

Correctness. First observe that \(\mathcal{F}\) is a set of partitions: for \(0\leq i\leq\varepsilon^{-1}-1\), \(\mathcal{P}_{i}\) is a partition by definition, while for \(1\leq i\leq\varepsilon^{-1}\cdot(k-1)\), \(\mathcal{H}_{i}\) is a partition since the connected components are pairwise disjoint. The number of partitions is \(\varepsilon^{-1}+\varepsilon^{-1}(k-1)=\varepsilon^{-1}\cdot k\) as required.

Diameter bound. Note that every partition in \(\mathcal{F}\) is \(\Delta\)-bounded, because every cluster is either a ball of radius \(\Delta/2\), a singleton, or a cluster of one of the \(\Delta\)-bounded partitions \(\mathcal{P}_{i}^{(j)}\) obtained from the induction hypothesis.

Padding property. Let \(u,v\in V\), and denote by \(P_{uv}\) the shortest \(u-v\) path in \(G\), and by \(B_{u}=B(u,\varepsilon\Delta)\), \(B_{v}=B(v,\varepsilon\Delta)\). If \(\Delta>0\) is such that \(\frac{(1-6\varepsilon)\Delta}{4}\leq d(u,v)\leq\frac{(1-6\varepsilon)\Delta}{2}\), then we need to show that at least one of the partitions in \(\mathcal{F}\) contains a cluster \(C\) such that both \(B_{u},B_{v}\) are contained in \(C\). Suppose first that \(P\) is disjoint from \(P_{uv}\cup B_{u}\cup B_{v}\). In this case, there exists a connected component \(G_{j}\) in \(G\backslash P\), such that \(B_{u}\cup B_{v}\cup P_{uv}\subseteq G_{j}\), and therefore \(d_{G_{j}}(u,v)=d(u,v)\). Thus, by the induction hypothesis, there is a cluster \(C\) in \(\mathcal{F}^{(j)}\) which contains both \(B_{u},B_{v}\), and this cluster is also in one of the \(\mathcal{H}_{i}\), and thus in \(\mathcal{F}\). (While in general, distances in \(G_{j}\) can be larger than those in \(G\), the balls \(B_{u},B_{v}\) and \(P_{uv}\) remain exactly the same, as they are disjoint from \(P\).) Consider now the case (see Figure 2 (a)) where \(P\) intersects \(P_{uv}\). Let \(x\in P\cap P_{uv}\) be an (arbitrary) vertex in the intersection. By the covering property of nets, there exists \(z\in\mathcal{N}\) such that \(d(x,z)\leq\varepsilon\Delta\). We bound the distance from any \(y\in B_{u}\) to \(z\) by the triangle inequality, \[d(z,y) \leq d(z,x)+d(x,u)+d(u,y)\] \[\leq d(z,x)+d(v,u)+d(u,y)\] \[\leq\varepsilon\Delta+\frac{(1-6\varepsilon)\Delta}{2}+\varepsilon \Delta\leq\Delta/2.\] Thus, the cluster \(C=B(z,\Delta/2)\) satisfies \(B_{u}\subseteq C\) and by a symmetric argument \(B_{v}\subseteq C\), as required. The remaining case is that \(P\) intersects \(B_{u}\) or \(B_{v}\). Assume w.l.o.g. \(P\) intersects \(B_{v}\), and let \(x\in P\cap B_{v}\) (see Figure 2 (b)). As before, there exists \(z\in\mathcal{N}\) such that \(d(x,z)\leq\varepsilon\Delta\). Let \(y\in B_{u}\). By the triangle inequality \[d_{G}(z,y) \leq d_{G}(z,x)+d_{G}(x,v)+d_{G}(v,u)+d_{G}(u,y)\] \[\leq\varepsilon\Delta+\varepsilon\Delta+\frac{(1-6\varepsilon) \Delta}{2}+\varepsilon\Delta\leq\Delta/2,\] hence \(B_{u}\subseteq C:=B(z,\Delta/2)\). The argument for \(B_{v}\) is simpler. So both balls are in the same cluster \(C\), as required.
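A minimal sketch of the \(\varepsilon^{-1}\) new partitions built along the deleted shortest path (illustrative only): `net_on_path` is assumed to list an \(\varepsilon\Delta\)-net of \(P\) in order along the path, and `dist` is an assumed shortest-path distance oracle for \(G\); the disjointness of the balls within each partition is exactly the property argued above.

```python
def path_partitions(vertices, net_on_path, dist, eps, Delta):
    """For i = 0, ..., 1/eps - 1: take balls of radius Delta/2 around every
    (1/eps)-th net point of P (pairwise disjoint, since P is a shortest path),
    and make every uncovered vertex a singleton cluster."""
    step = round(1 / eps)
    partitions = []
    for i in range(step):
        centers = net_on_path[i::step]   # the net points in the class N_i
        clusters = [[v for v in vertices if dist(z, v) <= Delta / 2]
                    for z in centers]
        covered = {v for cl in clusters for v in cl}
        clusters += [[v] for v in vertices if v not in covered]
        partitions.append([cl for cl in clusters if cl])
    return partitions
```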
Figure 2: _Illustration of the proof of Lemma 12, where we show that if \(P\) (colored red) intersects either \(P_{uv}\) (figure (a)) or any of \(B_{u},B_{v}\) (figure (b)), then both \(B_{u},B_{v}\) are in \(B(z,\Delta/2)\)._

## 5 From Pairwise Partition Cover to Light \(k\)-HST Cover

In this section we devise a light \(k\)-HST cover (see Definition 3) from a Pairwise Partition Cover Scheme (PPCS, see Definition 4). The framework essentially follows that of [12], except that we also need to guarantee a bound on the lightness of each tree in the cover. To this end, we ensure each cluster contains a net point (recall Definition 5). The following simple claim, which lower bounds the MST weight with respect to a net, is proven in [13, Claim 1].

**Claim 13** ([13]).: _Let \(\mathcal{N}\) be a \(\Delta\)-net of a metric space \((X,d)\). Then \(|\mathcal{N}|\leq\left\lceil\frac{2}{\Delta}\cdot w(\mathrm{MST}(X))\right\rceil\)._

The main result of this section is captured by the following theorem.

**Theorem 4**.: _Fix any integer \(\tau\geq 1\), and parameters \(\rho\geq 1\) and \(0<\varepsilon<1/12\). Suppose that a given metric space \((X,d)\) admits a \((\tau,\rho,\varepsilon)\)-PPCS. Then for any \(k\geq\frac{8\rho}{\varepsilon}\), \((X,d)\) admits an \(O(k\log n)\)-light \(\left(O(\frac{\tau}{\varepsilon}\log k),\rho(1+3\varepsilon)\right)\)-\(k\)-HST cover._

Assume w.l.o.g. that the minimal distance in \((X,d)\) is \(1\), and let \(\Phi\) be the maximal distance. Fix a real number \(1\leq l\leq k\), and for \(-1\leq i\leq\log_{k}\Phi\), let \(\Delta_{i}(l)=l\cdot k^{i}\) (for brevity we will omit \(l\) when it is clear from context), and let \(\mathcal{N}_{i}\) be an \(\frac{\varepsilon\Delta_{i}}{4}\)-net. The following lemma shows how to change a collection of pairwise partition covers so that it becomes hierarchical and each cluster contains a net point.

**Lemma 14**.: _Fix a real number \(1\leq l\leq k\). For each integer \(-1\leq i\leq\log_{k}\Phi\), let \(\{\mathcal{P}^{i}_{1},\ldots\mathcal{P}^{i}_{\tau}\}\) be a \((\tau,\rho,\varepsilon,\Delta_{i})\)-pairwise partition cover. Then there exists a collection of \((\tau,(1+\varepsilon)\rho,0,(1+\varepsilon)\Delta_{i})\)-pairwise partition covers \(\{\tilde{\mathcal{P}}^{i}_{1},\ldots\tilde{\mathcal{P}}^{i}_{\tau}\}_{i=-1}^{\log_{k}\Phi}\) that satisfies the following two properties:_

1. _For every_ \(-1\leq i\leq\log_{k}\Phi\) _and_ \(1\leq j\leq\tau\)_,_ \(|\tilde{\mathcal{P}}^{i}_{j}|\leq|\mathcal{N}_{i}|\)_._
2. _For every_ \(1\leq j\leq\tau\)_, the partitions_ \(\{\tilde{\mathcal{P}}^{i}_{j}\}_{i\geq-1}\) _are hierarchical (that is, for each_ \(0\leq i\leq\log_{k}\Phi\)_, every cluster of_ \(\tilde{\mathcal{P}}^{i-1}_{j}\) _is contained in a cluster of_ \(\tilde{\mathcal{P}}^{i}_{j}\)_)._

Proof.: Fix \(j\in[\tau]\). We show how to construct \(\{\tilde{\mathcal{P}}^{i}_{j}\}_{i\geq-1}\) by induction on \(i\). For \(i=-1\), since \(\Delta_{-1}=l/k\leq 1\), there is no padding requirement, and we may take the trivial partition into singletons. Assume that for some \(0\leq i\leq\log_{k}\Phi\), we constructed \(\tilde{\mathcal{P}}^{i-1}_{j}\) that satisfies both properties, and we will show how to construct \(\tilde{\mathcal{P}}^{i}_{j}\). Start with the partition \(\mathcal{P}^{i}_{j}\). The first change will force every cluster to contain a net point. For each cluster \(C\in\mathcal{P}^{i}_{j}\), if \(C\cap\mathcal{N}_{i}=\emptyset\), we remove \(C\) from \(\mathcal{P}^{i}_{j}\).
Then for every \(v\in C\) we add \(v\) to the cluster in \(\mathcal{P}^{i}_{j}\) containing the nearest net point in \(\mathcal{N}_{i}\) to \(v\). This creates a partition \(\hat{P}^{i}_{j}\). Now every cluster contains at least one net point, therefore \(|\hat{P}^{i}_{j}|\leq|\mathcal{N}_{i}|\). Also observe that the new cluster of \(v\) will not be removed. The second change will guarantee the hierarchical property. For each cluster \(C^{\prime}\in\tilde{\mathcal{P}}^{i-1}_{j}\), move all the vertices of \(C^{\prime}\) to some cluster \(C\in\hat{P}^{i}_{j}\) which intersects \(C^{\prime}\). Call the resulting partition \(\tilde{\mathcal{P}}^{i}_{j}\), which satisfies the second property by construction. Observe that it is no longer true that every cluster of \(\tilde{\mathcal{P}}^{i}_{j}\) contains a net point (it could have moved in the second change). Nevertheless, the number of clusters in \(\tilde{\mathcal{P}}^{i}_{j}\) did not change. It remains to show that \(\{\tilde{\mathcal{P}}^{i}_{1},\ldots\tilde{\mathcal{P}}^{i}_{\tau}\}\) is indeed a \((\tau,(1+\varepsilon)\rho,0,(1+\varepsilon)\Delta_{i})\)-pairwise partition cover. Diameter bound.We start by showing that each cluster \(\tilde{C}\in\tilde{\mathcal{P}}^{i}_{j}\) has diameter at most \((1+\varepsilon)\Delta_{i}\), by induction on \(i\). The base case \(i=-1\) is trivial since every cluster has diameter \(0\). Assume the claim holds for \(i-1\) and we will prove it for \(i\). Let \(C\in\mathcal{P}^{i}_{j}\) be the cluster before the updates leading to \(\tilde{C}\). In the first change we may have moved vertices from other clusters (those without a net point) to \(C\), creating the cluster \(\hat{C}\). By the covering property of nets, these vertices are at distance most \(\frac{\varepsilon\Delta_{i}}{4}\) from some net point in \(C\). For any \(u\in\hat{C}\), let \(r_{u}\in C\) be the closest point to \(u\) in \(C\) (not necessarily a net point). Then for any \(u,v\in\hat{C}\), \[d(u,v)\leq d(u,r_{u})+d(r_{u},r_{v})+d(r_{v},v)\leq\frac{\varepsilon\Delta_{i }}{4}+\operatorname{diam}(C)+\frac{\varepsilon\Delta_{i}}{4}=\operatorname{ diam}(C)+\frac{\varepsilon\Delta_{i}}{2}. \tag{6}\] In particular, \(\operatorname{diam}(\hat{C})\leq\operatorname{diam}(C)+\frac{\varepsilon \Delta_{i}}{2}\). In the second change, we may have added to \(\hat{C}\) entire clusters \(C^{\prime}\in\tilde{\mathcal{P}}^{i-1}_{j}\) which intersect it, creating \(\tilde{C}\) (note that we may have also removed points from \(C\), but this surely will not increase the diameter). The diameter of each \(C^{\prime}\) is at most \((1+\varepsilon)\Delta_{i-1}\) by the induction hypothesis. Hence, by a similar argument to above, \[\operatorname{diam}(\tilde{C})\leq\operatorname{diam}(\hat{C})+2 \operatorname{diam}(C^{\prime})\leq\operatorname{diam}(\hat{C})+2(1+ \varepsilon)\Delta_{i-1}\.\] Recall that \(k\geq 8\rho/\varepsilon\geq(1+\varepsilon)4/\varepsilon\), and so \(2(1+\varepsilon)\Delta_{i-1}=2(1+\varepsilon)\Delta_{i}/k\leq\varepsilon \Delta_{i}/2\). 
We conclude that \[\operatorname{diam}(\tilde{C})\leq\operatorname{diam}(\hat{C})+\frac{\varepsilon\Delta_{i}}{2}\leq\operatorname{diam}(C)+2\cdot\frac{\varepsilon\Delta_{i}}{2}\leq(1+\varepsilon)\cdot\Delta_{i}\.\]

**Padding property.** It remains to show that for \(u,v\in X\), if there exists \(-1\leq i\leq\log_{k}\Phi\) such that \(\frac{\Delta_{i}}{2\rho}=\frac{(1+\varepsilon)\Delta_{i}}{2(1+\varepsilon)\rho}\leq d(u,v)\leq\frac{(1+\varepsilon)\Delta_{i}}{(1+\varepsilon)\rho}=\frac{\Delta_{i}}{\rho}\), then both \(u,v\) are contained in a single cluster in at least one of the partitions \(\{\tilde{\mathcal{P}}^{i}_{1},...,\tilde{\mathcal{P}}^{i}_{\tau}\}\). By the padding property of \(\{\mathcal{P}^{i}_{1},...,\mathcal{P}^{i}_{\tau}\}\), there exists \(1\leq j\leq\tau\) and a cluster \(C\in\mathcal{P}^{i}_{j}\), such that \(B(u,\varepsilon\Delta_{i}),B(v,\varepsilon\Delta_{i})\subseteq C\). We argue that \(u,v\in\tilde{C}\) for the cluster \(\tilde{C}\in\tilde{\mathcal{P}}^{i}_{j}\) created from \(C\) by our construction. By the covering property of nets, there is a net point of \(\mathcal{N}_{i}\) in \(B(u,\varepsilon\Delta_{i})\subseteq C\), thus \(C\) was not removed in the first change, and there is a corresponding cluster \(\hat{C}\in\hat{\mathcal{P}}^{i}_{j}\) (note that \(C\subseteq\hat{C}\)). Let \(\tilde{C}_{u},\tilde{C}_{v}\in\tilde{\mathcal{P}}^{i-1}_{j}\) be the clusters containing \(u,v\) respectively. The diameter of \(\tilde{C}_{u},\tilde{C}_{v}\) is bounded by \((1+\varepsilon)\Delta_{i-1}=(1+\varepsilon)\cdot\frac{\Delta_{i}}{k}\leq\frac{(1+\varepsilon)\varepsilon}{8\rho}\Delta_{i}<\varepsilon\Delta_{i}\). Thus, these clusters are contained in \(B(u,\varepsilon\Delta_{i}),B(v,\varepsilon\Delta_{i})\) respectively, and therefore also in \(\hat{C}\). So after the second change, \(u,v\) do not move to any other cluster, and are both in \(\tilde{C}\). This concludes the proof that \(\{\tilde{\mathcal{P}}^{i}_{1},\ldots\tilde{\mathcal{P}}^{i}_{\tau}\}\) is a \((\tau,(1+\varepsilon)\rho,0,(1+\varepsilon)\Delta_{i})\)-pairwise partition cover. We are now ready to prove the main theorem of this section. Proof of Theorem 4.: Fix \(l\in\{(1+\varepsilon)^{c}\ :\ c\in[0,\log_{1+\varepsilon}k]\}\). Since \((X,d)\) admits a PPCS, for every integer \(i\geq-1\) there exists \(\{\mathcal{P}^{i}_{1},\ldots\mathcal{P}^{i}_{\tau}\}\) that is a \((\tau,\rho,\varepsilon,\Delta_{i})\)-pairwise partition cover. Apply Lemma 14 to obtain a \((\tau,(1+\varepsilon)\rho,0,(1+\varepsilon)\Delta_{i})\)-pairwise partition cover \(\{\tilde{\mathcal{P}}^{i}_{1},\ldots\tilde{\mathcal{P}}^{i}_{\tau}\}\) that satisfies both properties described in the lemma. For every \(j\in[\tau]\) we construct a single \(k\)-HST \(T\) from the collection of partitions \(\{\tilde{\mathcal{P}}^{i}_{j}\}_{-1\leq i\leq\log_{k}\Phi}\). There is a bijection from the nodes of \(T\) to the clusters of the partitions. The leaves of \(T\) correspond to the singleton clusters of \(\tilde{\mathcal{P}}^{-1}_{j}\). For each \(0\leq i\leq\log_{k}\Phi\), and each cluster \(C\in\tilde{\mathcal{P}}^{i}_{j}\), create a node \(x=x(C)\) with label \(\Gamma_{x}=(1+\varepsilon)\cdot\Delta_{i}\), and connect \(x\) to all the nodes corresponding to the clusters \(\{C^{\prime}\subseteq C\ :\ C^{\prime}\in\tilde{\mathcal{P}}^{i-1}_{j}\}\) (here we use the fact that this pairwise partition cover is hierarchical).
Since the label of every such \(C^{\prime}\) is \((1+\varepsilon)\cdot\Delta_{i-1}=(1+\varepsilon)\cdot\Delta_{i}/k\), and the distance between every two points in \(C\) is at most \((1+\varepsilon)\cdot\Delta_{i}\), this \(T\) is indeed a dominating \(k\)-HST. We construct \(\tau\) of these \(k\)-HSTs for every \(l\), and the collection of all these is our \(k\)-HST cover for \((X,d)\). The number of \(k\)-HSTs is indeed \(\tau\cdot(1+\log_{1+\varepsilon}k)=O(\frac{\tau}{\varepsilon}\cdot\log k)\), as required. It remains to bound the lightness of each \(T\), and argue about the stretch of this cover.

**Lightness bound.** Now we show that for any \(k\)-HST \(T\) created as above, its lightness is \(O(k\log n)\). Recall that the weight of \(T\) is \(\sum_{x\in T}(\deg(x)-1)\cdot\Gamma_{x}\) (see equation (1)). For any \(0\leq i\leq\log_{k}\Phi\), by construction the sum of degrees of nodes corresponding to clusters of \(\tilde{\mathcal{P}}^{i}_{j}\) is exactly equal to \(|\tilde{\mathcal{P}}^{i-1}_{j}|\). By the first property of the lemma we have that \(|\tilde{\mathcal{P}}^{i-1}_{j}|\leq|\mathcal{N}_{i-1}|\), so \[w(T) = \sum_{x\in T}(\deg(x)-1)\cdot\Gamma_{x}\] \[\leq \sum_{i=0}^{\log_{k}\Phi}|\tilde{\mathcal{P}}^{i-1}_{j}|\cdot(1+\varepsilon)\Delta_{i}\] \[\leq 2k\cdot\sum_{i=0}^{\log_{k}\Phi}|\mathcal{N}_{i-1}|\cdot\Delta_{i-1}\] Denote \(W=w(MST(X))\). If \(\Phi\geq n^{3}\), we bound separately the lower terms in the sum, \[k\cdot\sum_{i=0}^{\log_{k}(\Phi/n^{3})}|\mathcal{N}_{i-1}|\cdot\Delta_{i-1} \leq k\cdot\sum_{i=0}^{\log_{k}(\Phi/n^{3})}n\cdot l\cdot k^{i}\] \[\leq 2n\cdot k^{2}\cdot(\Phi/n^{3})\] \[\leq 2W\,\] using that \(l\leq k\) and \(\Phi\leq W\). For the remaining terms, we have by Claim 13 that \(|\mathcal{N}_{i}|\cdot\Delta_{i}=O(W)\), therefore \[k\cdot\sum_{i=\max\{0,\log_{k}(\Phi/n^{3})\}}^{\log_{k}\Phi}|\mathcal{N}_{i-1}|\cdot\Delta_{i-1} \leq k\cdot\sum_{i=\max\{0,\log_{k}(\Phi/n^{3})\}}^{\log_{k}\Phi}O(W)\] \[= O(k\cdot\log n\cdot W)\,\] so the lightness of each tree is indeed \(O(k\log n)\).

**Stretch bound.** Fix any \(u,v\in X\), and let \(D=\rho\cdot(1+\varepsilon)\cdot d(u,v)\). Let \(i=\lfloor\log_{k}D\rfloor\), and note that \(k^{i}\leq D<k^{i+1}\), so there exists an integer \(0\leq c\leq\log_{1+\varepsilon}k\) such that \(l\cdot k^{i}\leq D<(1+\varepsilon)\cdot l\cdot k^{i}\) (recall that \(l=(1+\varepsilon)^{c}\)). With these choices of \(l\) and \(i\) we get that \[\frac{\Delta_{i}}{2\rho}\leq\frac{\Delta_{i}}{\rho\cdot(1+\varepsilon)}\leq d(u,v)\leq\frac{\Delta_{i}}{\rho}.\] By the padding property of \(\{\tilde{\mathcal{P}}^{i}_{j}\}_{1\leq j\leq\tau}\), there exists \(j\in[\tau]\) and a cluster \(C\in\tilde{\mathcal{P}}^{i}_{j}\) such that \(u,v\in C\). So in the \(k\)-HST \(T\) created from \(\tilde{\mathcal{P}}^{i}_{j}\), there is a node \(x\) corresponding to \(C\) with \(\Gamma_{x}=(1+\varepsilon)\Delta_{i}\), and so \[d_{T}(u,v)\leq(1+\varepsilon)\Delta_{i}\leq\rho\cdot(1+\varepsilon)^{2}\cdot d(u,v)\leq\rho\cdot(1+3\varepsilon)\cdot d(u,v)\.\]

### \(k\)-HST Cover for Doubling Metrics

The following lemma asserts that in our construction of the \(k\)-HST cover described above, every tree has bounded degree. **Lemma 15**.: _If a metric space \((X,d)\) has doubling dimension \(\mathrm{ddim}\), then every \(T\) in the \(k\)-HST cover of Theorem 4 has maximum degree \(O(k/\varepsilon)^{\mathrm{ddim}}\)._ Proof.: Let \(x\in T\) be any node with children \(x_{1},...,x_{t}\).
The node \(x\) corresponds to a cluster \(\tilde{C}\in\tilde{\mathcal{P}}^{i}_{j}\), and its children to clusters \(\tilde{C}_{1},\ldots,\tilde{C}_{t}\in\tilde{\mathcal{P}}^{i-1}_{j}\) contained in \(\tilde{C}\). Recall that in the partition \(\hat{\mathcal{P}}^{i-1}_{j}\), every cluster contains a net point from an \(\varepsilon\Delta_{i-1}/4\)-net \(\mathcal{N}_{i-1}\). Since every cluster of \(\tilde{\mathcal{P}}^{i-1}_{j}\) was a cluster of \(\hat{\mathcal{P}}^{i-1}_{j}\), the clusters \(\tilde{C}_{1},\ldots,\tilde{C}_{t}\) correspond to different net points. The maximal distance between any two such net points is at most \[\mathrm{diam}(\tilde{C})+2\varepsilon\Delta_{i-1}/4<2\Delta_{i}\,\] so all these net points are contained in a ball of radius \(2\Delta_{i}\). Since \(\Delta_{i-1}=\Delta_{i}/k\), by the packing lemma (Lemma 7) we conclude that \(t\leq O(k/\varepsilon)^{\mathrm{ddim}}\). Filtser and Le [11] constructed a PPCS for doubling metrics: **Lemma 16** ([11]).: _Every metric space \((X,d)\) with doubling dimension \(\mathrm{ddim}\) admits an \((\varepsilon^{-O(\mathrm{ddim})},1+\varepsilon,\varepsilon)\)-pairwise partition cover scheme for any \(\varepsilon\in(0,1/16)\)._ By applying Theorem 4 (and using Lemma 15), we conclude **Corollary 5**.: _For any \(\varepsilon\in(0,1/16)\), every \(n\)-point metric space \((X,d)\) with doubling dimension \(\mathrm{ddim}\) admits an \(O(\varepsilon^{-1}\cdot\log n)\)-light \((\varepsilon^{-O(\mathrm{ddim})},1+\varepsilon)\)-\(\frac{16}{\varepsilon}\)-HST cover; furthermore, the maximum degree of any tree in the cover is \(\varepsilon^{-O(\mathrm{ddim})}\)._ Proof.: Using Lemma 16, consider an \((\varepsilon^{-O(\mathrm{ddim})},1+\varepsilon,\varepsilon)\)-PPCS for \(X\). Fix \(k=\frac{16}{\varepsilon}\). By Theorem 4, \(X\) admits a \(O(\varepsilon^{-1}\cdot\log n)\)-light \((\varepsilon^{-O(\mathrm{ddim})},1+O(\varepsilon))\)-\(k\)-HST cover. Furthermore, by Lemma 15, every HST in the cover has maximum degree \(O(\frac{k}{\varepsilon})^{\mathrm{ddim}}=\varepsilon^{-O(\mathrm{ddim})}\). The corollary follows by rescaling \(\varepsilon\) accordingly.

## 6 Reliable Spanners for Metric Spaces

We begin this section by proving a meta theorem, which given a light \(k\)-HST cover, constructs an oblivious light reliable spanner. In the following subsections, we will apply this meta-theorem to obtain the main results of the paper. **Theorem 6** (Light Reliable Spanner from Light HST Cover).: _Consider an \(n\)-point metric space \((X,d)\) that admits a \(\psi\)-light \((\tau,\rho)\)-\(k\)-HST cover \(\mathcal{T}\), for some \(k>1\). Then for every parameter \(\nu\in(0,1/6)\), \(X\) admits an oblivious \(\nu\)-reliable \((2+\frac{2}{k-1})\cdot\rho\)-spanner of size \(n\cdot\tilde{O}\left(\tau^{3}\cdot(\nu^{-1}\cdot\log\log n)^{2}\right)\) and lightness \(\psi\cdot\tilde{O}(\tau^{3}\cdot(\nu^{-1}\cdot\log\log n)^{2})\)._ Proof.: For every \(k\)-HST \(T\in\mathcal{T}\), using Theorem 1 we construct a \(\nu^{\prime}\)-reliable spanner \(H_{T}\) for \(T\), where \(\nu^{\prime}=\frac{\nu}{\tau}\). The final spanner we return is \(H=\cup_{T\in\mathcal{T}}H_{T}\).
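The following is a minimal Python sketch of this meta-construction, included only for illustration; the routine `reliable_hst_spanner` is a hypothetical stand-in for the construction of Theorem 1, and the edge-set representation is an assumption, not part of the formal proof.

```python
def reliable_spanner_from_hst_cover(hst_cover, nu, reliable_hst_spanner):
    """Meta-construction of Theorem 6 (sketch only).

    hst_cover            -- list of k-HSTs covering the metric (tau = len(hst_cover))
    nu                   -- reliability parameter for the whole metric space
    reliable_hst_spanner -- hypothetical stand-in for Theorem 1: given one k-HST
                            and a parameter nu', returns the edge set of an
                            oblivious nu'-reliable spanner for that tree
    """
    tau = len(hst_cover)
    nu_per_tree = nu / tau            # nu' = nu / tau, exactly as in the proof
    spanner_edges = set()
    for tree in hst_cover:
        # union the per-tree spanners; reliability follows from a union bound
        spanner_edges |= set(reliable_hst_spanner(tree, nu_per_tree))
    return spanner_edges
```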
By Theorem 1, the size of the spanner is \(|H|=\tau\cdot n\cdot\tilde{O}(\nu^{\prime-1}\cdot\log\log n)^{2}=n\cdot\tilde{O}\left(\tau^{3}\cdot(\nu^{-1}\cdot\log\log n)^{2}\right)\), while the lightness is \[w(H)\leq\sum_{T\in\mathcal{T}}w(H_{T}) \leq\sum_{T\in\mathcal{T}}\tilde{O}(\nu^{\prime-1}\cdot\log\log n)^{2}\cdot w(MST(T))\] \[\leq\psi\cdot\tilde{O}(\tau^{3}\cdot(\nu^{-1}\cdot\log\log n)^{2})\cdot w(MST(X))\] Consider an attack \(B\subseteq X\). For every spanner \(H_{T}\), let \(B_{T}^{+}\) be the respective super set, and denote \(B^{+}=\cup_{T\in\mathcal{T}}B_{T}^{+}\). It holds that \[\mathbb{E}\left[\left|B^{+}\setminus B\right|\right]\leq\sum_{T\in\mathcal{T}}\mathbb{E}\left[\left|B_{T}^{+}\setminus B\right|\right]\leq\tau\cdot\nu^{\prime}\cdot|B|=\nu\cdot|B|\.\] Finally, consider a pair of points \(u,v\notin B^{+}\). There is some \(k\)-HST \(T\in\mathcal{T}\) such that \(d_{T}(u,v)\leq\rho\cdot d_{X}(u,v)\). As \(u,v\notin B_{T}^{+}\), it holds that \[d_{H\setminus B}(u,v)\leq d_{H_{T}\setminus B}(u,v)\leq(2+\frac{2}{k-1})\cdot d_{T}(u,v)\leq(2+\frac{2}{k-1})\cdot\rho\cdot d_{X}(u,v)\.\] By using Theorem 2 instead of Theorem 1 in the proof of Theorem 6 (and keeping all the rest intact) we obtain: **Corollary 7**.: _Consider an \(n\)-point metric space \((X,d)\) that admits a \(\psi\)-light \((\tau,\rho)\)-\(k\)-HST cover \(\mathcal{T}\), where all the trees in \(\mathcal{T}\) have maximum degree \(\delta\). Then for every parameter \(\nu\in(0,1/6)\), \(X\) admits an oblivious \(\nu\)-reliable \((1+\frac{2}{k-1})\cdot\rho\)-spanner of size \(n\cdot\delta\cdot\tilde{O}\left(\tau^{3}\cdot(\nu^{-1}\cdot\log\log n)^{2}\right)\) and lightness \(\psi\cdot\delta\cdot\tilde{O}(\tau^{3}\cdot(\nu^{-1}\cdot\log\log n)^{2})\)._

### Doubling Metrics

By applying Corollary 7 on the HST cover of Corollary 5 (and rescaling \(\varepsilon\)) we obtain: **Corollary 8**.: _For any \(\varepsilon,\nu\in(0,1/16)\), every \(n\)-point metric space \((X,d_{X})\) with doubling dimension \(\mathrm{ddim}\) admits a \(\nu\)-reliable \((1+\varepsilon)\)-spanner with size \(n\cdot\varepsilon^{-O(\mathrm{ddim})}\cdot\tilde{O}(\nu^{-1}\cdot\log\log n)^{2}\), and lightness \(\varepsilon^{-O(\mathrm{ddim})}\cdot\tilde{O}(\nu^{-2}\cdot\log n)\)._ Note that the shortest path metric of the path graph has doubling dimension \(1\). Hence the lower bound of Theorem 21 applies. In particular, for constant \(\mathrm{ddim}\) and \(\varepsilon\), Corollary 8 is tight up to lower order terms.

### General Metric Spaces

In this subsection we construct oblivious light reliable spanners for general metric spaces. We begin with the pairwise partition cover of Filtser and Le [11]. **Lemma 17** ([11]).: _Every \(n\)-point metric space \((X,d_{X})\) admits an \((O(n^{1/t}\log n),2t+\varepsilon,\frac{\varepsilon}{2t(2t+\varepsilon)})\)-PPCS for any \(\varepsilon\in[0,1]\) and integer \(t\geq 1\)._ By applying Theorem 4, we conclude **Corollary 9**.: _Every \(n\)-point metric space \((X,d_{X})\) admits a \(O(\varepsilon^{-1}\cdot t^{3}\cdot\log n)\)-light \(\left(n^{1/t}\cdot\log n\cdot\tilde{O}(\frac{t^{2}}{\varepsilon}),2t+\varepsilon\right)\)-\(\frac{200\cdot t^{3}}{\varepsilon}\)-HST cover for any \(\varepsilon\in(0,1/3)\) and integer \(t\geq 1\)._ Proof.: Using Lemma 17, consider an \((O(n^{1/t}\log n),2t+\varepsilon,\frac{\varepsilon}{2t(2t+\varepsilon)})\)-PPCS for \(X\). Fix \(k=\frac{8\cdot(2t+\varepsilon)}{\frac{\varepsilon}{2t(2t+\varepsilon)}}=\frac{16t\cdot(2t+\varepsilon)^{2}}{\varepsilon}\geq\frac{64t^{3}}{\varepsilon}\).
Note that \(k=O(\varepsilon^{-1}\cdot t^{3})\). By Theorem 4 (note that indeed \(\frac{\varepsilon}{2t(2t+\varepsilon)}<1/12\)), \(X\) admits a \(\phi\)-light \((\tau,\rho)\)-\(k\)-HST cover for \[\phi= O(k\log n)=O(\varepsilon^{-1}\cdot t^{3}\cdot\log n)\] \[\tau= O(\frac{n^{1/t}\log n}{\frac{\varepsilon}{2t(2t+\varepsilon)}}\cdot\log k)=O(n^{1/t}\cdot\varepsilon^{-1}\cdot t^{2}\cdot\log n\cdot\log(t/\varepsilon))=n^{1/t}\cdot\log n\cdot\tilde{O}(t^{2}/\varepsilon)\] \[\rho= (2t+\varepsilon)(1+\frac{3\varepsilon}{2t(2t+\varepsilon)})=2t+\varepsilon+\frac{3\varepsilon}{2t}<2t+3\varepsilon\.\] The corollary follows by rescaling \(\varepsilon\) by \(3\), and noting that every \(k\)-HST is also a \(\frac{64t^{3}}{\varepsilon}\)-HST. By applying Theorem 6 on the HST cover from Corollary 9 we obtain: **Corollary 10**.: _For any parameters \(\nu\in(0,1/6)\), \(t\in\mathbb{N}\), \(\varepsilon\in(0,1/2)\), any metric space admits an oblivious \(\nu\)-reliable \((12t+\varepsilon)\)-spanner with size \(\tilde{O}\left(n^{1+1/t}\cdot\nu^{-2}\cdot\varepsilon^{-3}\right)\) and lightness \(n^{1/t}\cdot\tilde{O}(\nu^{-2}\cdot\varepsilon^{-4})\cdot\operatorname{polylog}(n)\)._ Proof.: We can assume that \(t\leq\log n\), as taking larger \(t\) will not reduce size or lightness. Using Theorem 6 on the \(k\)-HST cover from Corollary 9, we obtain an oblivious \(\nu\)-reliable spanner with stretch \((2+\frac{2}{200\cdot t^{3}/\varepsilon})\cdot(2t+\varepsilon)\leq 4t+3\varepsilon\), size \[n\cdot\tilde{O}\left(\left(n^{1/t}\cdot\log n\cdot\tilde{O}(t^{2}/\varepsilon)\right)^{3}\cdot(\nu^{-1}\cdot\log\log n)^{2}\right)=\tilde{O}\left(n^{1+3/t}\cdot\nu^{-2}\cdot\varepsilon^{-3}\right)\.\] and lightness \[\varepsilon^{-1}\!\cdot\!t^{3}\!\cdot\!\log n\!\cdot\!\tilde{O}\left(\left(n^{1/t}\cdot\log n\cdot\tilde{O}(t^{2}/\varepsilon)\right)^{3}\cdot(\nu^{-1}\cdot\log\log n)^{2}\right)=n^{3/t}\!\cdot\!\tilde{O}(\nu^{-2}\!\cdot\!\varepsilon^{-4})\!\cdot\!\operatorname{polylog}(n)\,\] where in the equality we assumed \(\nu,\varepsilon\geq\frac{1}{n}\) (as trivially every spanner has size and lightness \(O(n^{2})\)). The corollary follows by replacing \(t\) with \(3t\) (and scaling \(\varepsilon\) accordingly). For stretch \(t=\log n\), the lightness of Corollary 10 is \(\approx\nu^{-2}\cdot\operatorname{polylog}(n)\), while by Theorem 21, \(\Omega(\nu^{-2}\cdot\log n)\) lightness is necessary (even for preserving only the connectivity of the path metric). In Appendix B (see Corollary 22) we construct a light reliable \(O(\log n)\)-spanner with lightness \(\tilde{O}(\nu^{-2}\cdot\log^{4}n)\).

### Minor-Free Graphs

In this subsection we use Corollary 9 to obtain a reliable \((4+\varepsilon)\)-spanner for minor-free graphs. Later, in Theorem 18 we will improve the stretch to a near optimal \(2+\varepsilon\). Nevertheless, if the goal is to minimize lightness, the result in this subsection is better. By applying Theorem 4 on the PPCS of Corollary 3 we conclude **Corollary 11**.: _Let \(G\) be an \(n\)-vertex graph excluding a fixed minor. For any \(\varepsilon\in(0,1/12)\), \(G\) admits a \(O(\frac{\log n}{\varepsilon})\)-light \(\left(\log n\cdot\tilde{O}(\varepsilon^{-2}),2+\varepsilon\right)\)-\(\frac{32}{\varepsilon}\)-HST cover._ Proof.: Fix \(k=\frac{32}{\varepsilon}\), and apply Theorem 4 on the PPCS of Corollary 3.
As a result we obtain a \(O(\frac{\log n}{\varepsilon})\)-light \((\tau,\rho)\)-\(k\)-HST cover for \[\tau =O(\frac{\varepsilon^{-1}\log n}{\varepsilon}\log k)=\log n\cdot\tilde{O}(\varepsilon^{-2})\] \[\rho =\frac{2}{1-6\varepsilon}\cdot(1+3\varepsilon)=2+O(\varepsilon)\.\] The corollary follows by rescaling \(\varepsilon\) accordingly (and noting that it will still be a \(\frac{32}{\varepsilon}\)-HST cover). By applying Theorem 6 on the HST cover from Corollary 11 we obtain: **Corollary 12**.: _Let \(G\) be an \(n\)-vertex graph excluding a fixed minor. For any \(\varepsilon,\nu\in(0,1/20)\), \(G\) admits an oblivious \(\nu\)-reliable \((4+\varepsilon)\)-spanner with size \(\tilde{O}\left(n\cdot\varepsilon^{-6}\cdot\nu^{-2}\right)\) and lightness \(\tilde{O}(\varepsilon^{-7}\cdot\log^{4}n\cdot\nu^{-2})\)._ Proof.: Using Theorem 6 upon the \(k\)-HST cover from Corollary 11, we obtain a \(\nu\)-reliable spanner with stretch \((2+\frac{10}{32/\varepsilon})\cdot(2+\varepsilon)<4+3\varepsilon\), size \(n\cdot\tilde{O}\left(\left(\frac{\log n}{\varepsilon^{2}}\right)^{3}\cdot\nu^{-2}\right)=\tilde{O}\left(n\cdot\varepsilon^{-6}\cdot\nu^{-2}\right)\), and lightness \(O(\frac{\log n}{\varepsilon})\cdot\tilde{O}((\varepsilon^{-2}\cdot\log n)^{3}\cdot\nu^{-2})=\tilde{O}(\varepsilon^{-7}\cdot\log^{4}n\cdot\nu^{-2})\). The corollary follows by rescaling \(\varepsilon\) accordingly.

### Doubling Metric of High Dimension

Consider a metric space with a moderately large doubling dimension \(\operatorname{ddim}\), e.g. \(\sqrt{\log n}\). The reliable spanner from Corollary 8 has exponential dependence on the dimension in both size and lightness, which might be too large. Nevertheless, such a metric space is much more structured than a general metric space (that has doubling dimension \(O(\log n)\)), and thus we expect to be able to construct better spanners for such graphs (compared to Corollary 10). Such a phenomenon was previously shown for light spanners [11], and for reliable sparse spanners [10]. We begin by observing that a PPCS for such metric spaces follows from the sparse covers of Filtser [10]. **Lemma 18** ([10] implicit).: _Every \(n\)-point metric space \((X,d_{X})\) with doubling dimension \(\operatorname{ddim}\) admits a \((2^{O(\frac{\operatorname{ddim}}{t})}\cdot\operatorname{ddim}\cdot t,t,\frac{1}{t})\)-PPCS, for any \(\Omega(1)\leq t\leq\operatorname{ddim}\)._ Proof.: Fix the scale parameter \(\Delta>0\). Filtser [10] constructed a collection \(\mathbb{P}=\{\mathcal{P}_{1},\ldots,\mathcal{P}_{s}\}\) of \(s=2^{O(\frac{\operatorname{ddim}}{t})}\cdot\operatorname{ddim}\cdot t\), \(\Delta\)-bounded partitions, such that every ball of radius \(R=\frac{2}{t}\cdot\Delta\) is fully contained in some cluster, in one of the partitions. We argue that \(\mathbb{P}\) is a \((2^{O(\frac{\operatorname{ddim}}{t})}\cdot\operatorname{ddim}\cdot t,t,\frac{1}{t})\)-PPCS. Consider two points \(x,y\) such that \(d_{X}(x,y)\leq\frac{1}{2}R=\frac{\Delta}{t}\). There is some partition \(\mathcal{P}_{i}\in\mathbb{P}\), and a cluster \(C\in\mathcal{P}_{i}\) such that \(B_{X}(x,R)\subseteq C\). For every point \(z\in B_{X}(y,\frac{1}{2}R)\), it holds that \(d_{X}(x,z)\leq d_{X}(x,y)+d_{X}(y,z)\leq\frac{1}{2}R+\frac{1}{2}R=R\), implying \(z\in B_{X}(x,R)\), and in particular \(B_{X}(y,\frac{1}{2}\cdot R)\subseteq C\). Similarly \(B_{X}(x,\frac{1}{2}\cdot R)\subseteq C\). It follows that \(\mathbb{P}\) is a \((2^{O(\frac{\operatorname{ddim}}{t})}\cdot\operatorname{ddim}\cdot t,t,\frac{1}{t})\)-PPCS as required.
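To make the padding condition used in this and the preceding proofs concrete, here is a small brute-force Python checker, included purely as an illustration; the set-based representation of partitions and the helper `ball` are assumptions for the sketch and not part of the formal argument.

```python
from itertools import combinations

def ball(points, dist, center, radius):
    # Closed metric ball; illustrative helper.
    return {p for p in points if dist(center, p) <= radius}

def has_pairwise_padding(points, dist, partitions, rho, eps, Delta):
    """Brute-force check of the padding condition of a
    (tau, rho, eps, Delta)-pairwise partition cover: every pair u, v with
    Delta/(2*rho) <= d(u, v) <= Delta/rho must have both balls
    B(u, eps*Delta) and B(v, eps*Delta) inside a single cluster of some partition.
    `partitions` is a list of partitions, each a list of clusters (sets of points)."""
    for u, v in combinations(points, 2):
        if Delta / (2 * rho) <= dist(u, v) <= Delta / rho:
            needed = ball(points, dist, u, eps * Delta) | ball(points, dist, v, eps * Delta)
            if not any(needed <= set(cluster)
                       for partition in partitions
                       for cluster in partition):
                return False
    return True
```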
By applying Theorem 4, we conclude **Corollary 13**.: _Every \(n\)-point metric space \((X,d_{X})\) with doubling dimension \(\operatorname{ddim}\) admits an \(O(t^{2}\cdot\log n)\)-light \(\left(2^{O(\frac{\operatorname{ddim}}{t})}\cdot\operatorname{ddim}\cdot t\cdot\log t,t\right)\)-\(\frac{t^{2}}{2}\)-HST cover, for any \(\Omega(1)\leq t\leq\operatorname{ddim}\)._ Proof.: Fix \(k=8t^{2}\), \(\varepsilon=\frac{1}{12}\), and apply Theorem 4 on the PPCS of Lemma 18. As a result we obtain a \(O(t^{2}\cdot\log n)\)-light \((\tau,\rho)\)-\(k\)-HST cover for \[\tau =O(\frac{2^{O(\frac{\mathrm{ddim}}{t})}\cdot\mathrm{ddim}\cdot t}{\varepsilon}\cdot\log k)=2^{O(\frac{\mathrm{ddim}}{t})}\cdot\mathrm{ddim}\cdot t\cdot\log t\] \[\rho =\frac{2}{1-6\varepsilon}\cdot t=4t\.\] The corollary follows by rescaling \(t\) accordingly. By applying Theorem 6 on the HST cover from Corollary 13 we obtain: **Corollary 14**.: _Every \(n\)-point metric space \((X,d_{X})\) with doubling dimension \(\mathrm{ddim}\) admits an oblivious \(\nu\)-reliable \(t\)-spanner with size \(n\cdot\tilde{O}\left(\nu^{-2}\right)\cdot 2^{O(\frac{\mathrm{ddim}}{t})}\cdot\mathrm{poly}(\mathrm{ddim},\log\log n)\) and lightness \(2^{O(\frac{\mathrm{ddim}}{t})}\cdot\tilde{O}(\log n\cdot\nu^{-2})\cdot\mathrm{poly}(\mathrm{ddim})\), for any \(\Omega(1)\leq t\leq\mathrm{ddim}\)._ Proof.: Using Theorem 6 upon the \(k\)-HST cover from Corollary 13, we obtain a \(\nu\)-reliable spanner with stretch \((2+\frac{20}{t^{2}})\cdot t\), size \(n\cdot\tilde{O}\left(\nu^{-2}\right)\cdot 2^{O(\frac{\mathrm{ddim}}{t})}\cdot\mathrm{poly}(\mathrm{ddim},\log\log n)\), and lightness \(2^{O(\frac{\mathrm{ddim}}{t})}\cdot\tilde{O}(\log n\cdot\nu^{-2})\cdot\mathrm{poly}(\mathrm{ddim})\). The corollary follows by rescaling \(t\) accordingly. A particularly interesting choice of parameters is \(t=\mathrm{ddim}\), where we will get an oblivious \(\nu\)-reliable \(\mathrm{ddim}\)-spanner of size \(n\cdot\tilde{O}\left(\nu^{-2}\right)\cdot\mathrm{poly}(\mathrm{ddim},\log\log n)\), and lightness \(\tilde{O}(\log^{2}n\cdot\nu^{-2})\cdot\mathrm{poly}(\mathrm{ddim})\).

### General Ultrametric

A major part of this paper is devoted to constructing light reliable spanners for \(k\)-HSTs. However, Theorem 1 requires \(k>1\), and the stretch grows as \(k\) gets closer to \(1\). What about the general case of a \(1\)-HST (a.k.a. an ultrametric)? A stretch of \(8\) can be obtained trivially by first embedding the ultrametric into a \(2\)-HST with distortion \(2\) (see [1]). However, we would like to preserve the near optimal stretch of \(2+\varepsilon\). In this subsection we provide an answer for this question. We begin by constructing a \(k\)-HST cover for ultrametrics. **Lemma 19**.: _For every \(\varepsilon\in(0,1)\), every ultrametric admits an \(\varepsilon^{-1}\)-light \(\left(O(\varepsilon^{-1}\log\frac{1}{\varepsilon}),1+\varepsilon\right)\)-\(\frac{1}{\varepsilon}\)-HST cover._ Proof.: Consider a \(1\)-HST \(T\). Fix \(N=\left\lceil\log_{1+\varepsilon}\frac{1}{\varepsilon}\right\rceil=O(\varepsilon^{-1}\log\frac{1}{\varepsilon})\). For every \(i\in\{0,1,\ldots,N\}\), let \(T_{i}\) be the HST \(T\), where we change the label of every internal node \(x\) from \(\Gamma_{x}\) to \((1+\varepsilon)^{i}\cdot\frac{1}{\varepsilon^{j}}\), for \(j\in\mathbb{Z}\) such that \[(1+\varepsilon)^{i}\cdot\frac{1}{\varepsilon^{j-1}}<\Gamma_{x}\leq(1+\varepsilon)^{i}\cdot\frac{1}{\varepsilon^{j}}\.\] Finally, contract all the internal nodes that have the same label as their father.
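As an illustration only (not part of the proof), the following Python sketch implements this relabel-and-contract step, under the simplifying assumption that the ultrametric is given as a rooted labeled tree of dict-based nodes; the representation and helper names are assumptions of the sketch.

```python
import math

def rounded_label(gamma, i, eps):
    # Smallest value of the form (1+eps)^i * (1/eps)^j, with j an integer, that is >= gamma.
    base = 1.0 / eps
    j = math.ceil(math.log(gamma / (1.0 + eps) ** i, base))
    return (1.0 + eps) ** i * base ** j

def build_Ti(node, i, eps, parent_label=None):
    """Return the list of subtrees replacing `node` in T_i.
    A node is {'label': Gamma_x, 'children': [...]}; leaves have no 'children'.
    A node whose rounded label equals its parent's is contracted, i.e. its
    processed children are spliced directly into the parent."""
    if not node.get('children'):                  # leaf: kept unchanged
        return [node]
    new_label = rounded_label(node['label'], i, eps)
    kids = [t for child in node['children'] for t in build_Ti(child, i, eps, new_label)]
    if parent_label is not None and new_label == parent_label:
        return kids                               # contract this internal node
    return [{'label': new_label, 'children': kids}]

def ultrametric_hst_cover(root, eps):
    # One tree T_i per shift i = 0, ..., N, as in the proof of Lemma 19.
    N = math.ceil(math.log(1.0 / eps, 1.0 + eps))
    return [build_Ti(root, i, eps)[0] for i in range(N + 1)]
```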
As a result, we obtain a dominating \(\frac{1}{\varepsilon}\)-HST \(T_{i}\), where the distance between every two vertices is increased by at most a factor of \(\frac{1}{\varepsilon}\). In particular, \(T_{i}\) has weight at most \(\frac{1}{\varepsilon}\) times larger than \(T\). It remains to show that the distance between every pair of leaves is preserved up to a factor of \(1+\varepsilon\) in one of the \(\frac{1}{\varepsilon}\)-HSTs in the cover. Consider a pair \(u,v\) with lca \(x\), and let \(i\in\{0,\ldots,N\}\), \(j\in\mathbb{Z}\) such that \((1+\varepsilon)^{i-1}\cdot\frac{1}{\varepsilon^{j}}<\Gamma_{x}\leq(1+\varepsilon)^{i}\cdot\frac{1}{\varepsilon^{j}}\). In the HST \(T_{i}\), the label of the lca of \(u,v\) will be changed to \((1+\varepsilon)^{i}\cdot\frac{1}{\varepsilon^{j}}\), and hence \(d_{T_{i}}(u,v)=(1+\varepsilon)^{i}\cdot\frac{1}{\varepsilon^{j}}<(1+\varepsilon)\cdot\Gamma_{x}=(1+\varepsilon)\cdot d_{T}(u,v)\). By applying Theorem 6 on the HST cover from Lemma 19 (and scaling \(\varepsilon\) accordingly) we obtain: **Theorem 15**.: _For any parameters \(\nu,\varepsilon\in(0,1/12)\), every ultrametric (\(1\)-HST) \(T\) admits an oblivious \(\nu\)-reliable \((2+\varepsilon)\)-spanner of size \(n\cdot\tilde{O}\left(\varepsilon^{-3}\cdot(\nu^{-1}\cdot\log\log n)^{2}\right)\) and lightness \(\tilde{O}(\varepsilon^{-4}\cdot(\nu^{-1}\cdot\log\log n)^{2})\)._

## 7 Light Reliable Spanner for the Path Graph

In this section we present our hop-bounded oblivious reliable \(1\)-spanner for the weighted path graph. Let \(P_{n}=([n],E)\) be a weighted path on \(n\) vertices and let \(\nu\in(0,1)\), \(h\in[\log n]\) be two parameters of the construction. The parameter \(\nu\) is the input reliability parameter, while the parameter \(h\) governs the tradeoff between the hop-bound of the spanner and its size and lightness. As previous works [12, 11] were concerned with the hop parameter (as in some scenarios it governs stretch), we prove Theorem 16 for a general hop parameter \(h\). **Theorem 16**.: _For any parameters \(\nu\in(0,1)\), and \(h\in[\log n]\), any weighted path graph \(P_{n}\) admits an oblivious \(\nu\)-reliable \((2h+1)\)-hop \(1\)-spanner with lightness \(O\left(hn^{2/h}\cdot\left(\frac{\log(h/\nu)}{\nu}\right)^{2}\right)\) and size \(O\left(n^{1+1/h}\cdot\frac{\log(h/\nu)}{\nu}\right)\)._ By setting \(h=\lfloor(\log n-1)/2\rfloor\), we get the following corollary: **Corollary 17**.: _For any weighted path graph \(P_{n}\), and parameter \(\nu\in(0,1)\), there is an oblivious \(\nu\)-reliable, \(\log n\)-hop \(1\)-spanner with lightness \(\tilde{O}(\nu^{-2}\cdot\log n)\) and size \(O\left(\nu^{-1}\cdot n\cdot\log\left(\frac{\log n}{\nu}\right)\right)\)._

### Construction

Let \([n]=V_{0}\supseteq V_{1}\supseteq\cdots\supseteq V_{h}\) be a hierarchy of randomly selected sets, such that for all \(1\leq i\leq h\), every vertex of \(V_{i-1}\) is taken into \(V_{i}\) independently with probability \(p=n^{-1/h}\). Let \(\ell=c\cdot\nu^{-1}\cdot\ln\left(\frac{h}{\nu}\right)\) for some constant \(c\) to be fixed later. Assume w.l.o.g. that \(\ell\) is an integer. For every index \(0\leq i<h\) and \(x\in V_{i}\), let \(x\leq u_{1}<...<u_{\ell}=u\) be the first \(\ell\) vertices of \(V_{i+1}\) that lie to the right of \(x\), and similarly \(x\geq v_{1}>...>v_{\ell}=v\) the first \(\ell\) vertices of \(V_{i+1}\) that lie to the left of \(x\).
If there are less than \(\ell\) such vertices to the right (resp., left), we simply define \(u\) to be the last vertex of the path (resp., \(v\) to be the first vertex). Now, for every \(y\in[v,u]\cap V_{i}\), add the edge \(\{x,y\}\) to the spanner \(H\). In other words, we connect \(x\in V_{i}\) to every vertex of \(V_{i}\) that is not farther than the first \(\ell\) neighbors of \(x\) in \(V_{i+1}\) (in either direction). Finally, vertices in \(V_{h}\) connect to all other vertices in \(V_{h}\). Denote by \(E_{i}\) the edges we added at step \(i\) to the spanner.

### Analysis

**Size analysis.** Take \(0\leq i<h\), and condition on any fixed choice of \(V_{i}\). Consider any vertex \(x\in V_{i}\), and arrange the vertices of \(V_{i}\) that lie to the right of \(x\) in increasing order. For each such vertex we throw an independent coin with probability \(p\) for success (meaning it goes to \(V_{i+1}\) with this probability). Note that the number of edges \(x\) adds to the right in step \(i\) is essentially the number of coins we throw until the \(\ell\)-th success. (In fact, the number of edges can only be smaller if there are less than \(\ell\) successes when we run out of vertices in \(V_{i}\).) The expected number of trials until we see \(\ell\) successes is \(\ell/p\). The same argument holds for the left side edges. This bound holds for any choice of \(V_{i}\). Note that for \(0\leq i\leq h\), \(\mathbb{E}[|V_{i}|]=np^{i}\), so the expected number of edges added in step \(i\) for \(0\leq i<h\) is at most \[np^{i}\cdot 2\ell/p=2np^{i-1}\cdot\ell\,\] and over the first \(h\) steps it is at most \[2n\ell\cdot\sum_{i=0}^{h-1}p^{i-1}=O(n\ell/p)=O(n^{1+1/h}\cdot\ell)\,\] using that \(p=n^{-1/h}\leq 1/2\). For \(i=h\) we add at most \(|V_{h}|^{2}\) edges. In expectation: \[\mathbb{E}[|V_{h}|^{2}]=\sum_{i}\Pr[v_{i}\in V_{h}]+\sum_{i\neq j}\Pr[v_{i},v_{j}\in V_{h}]=n\cdot p^{h}+n\cdot(n-1)\cdot p^{2h}<2. \tag{7}\] We conclude that the expected size of the spanner is \(O\left(n^{1+1/h}\cdot\ell\right)\).

**Lightness Analysis.** Fix any edge \(\{u,v\}\in E(P_{n})\); we say that a spanner edge \(\{x,y\}\) _crosses_ the edge \(\{u,v\}\) if \(x\leq u\) and \(v\leq y\). Let \(c(u,v)\) denote the number of times \(\{u,v\}\) is crossed. Observe that the weight of each spanner edge is equal to the sum of weights of edges in \(P_{n}\) that it crosses; therefore, the total weight of the spanner is \[\sum_{e\in P(n)}c(e)\cdot w(e)\.\] Thus, it suffices to show that for every edge \(e\in E(P_{n})\): \[\mathbb{E}[c(e)]\leq O(hn^{2/h}\cdot\ell^{2})\.\] To this end, fix an edge \(\{u,v\}\in E(P_{n})\), and an index \(0\leq i<h\). We will bound the expected number of edges in \(E_{i}\) that cross \(\{u,v\}\). Condition on any fixed choice of \(V_{i}\). Note that an edge \(\{x,y\}\) with \(x,y\in V_{i}\), \(x\leq u\) and \(y\geq v\) is added to \(E_{i}\) by \(x\) iff there are less than \(\ell\) vertices of \(V_{i+1}\) in the interval \([x:y)\). Consider the vertices of \(V_{i}\) from \(u\) to the left in decreasing order, and similarly to the size analysis above, let \(X\) be a random variable counting the number of coins (with probability \(p\) for success) we throw until getting \(\ell\) successes. Denote by \(Y\) the symmetric random variable, when considering vertices of \(V_{i}\) from \(v\) to the right, in increasing order. Then observe that at most \(X\cdot Y\) edges of \(E_{i}\) cross \(\{u,v\}\).
Since \(X,Y\) are independent, we have that \[\mathbb{E}[X\cdot Y]=\mathbb{E}[X]\cdot\mathbb{E}[Y]\leq(\ell/p)^{2}\.\] By (7), the expected number of edges in \(E_{h}\) is bounded by \(2\), so each edge of \(P_{n}\) is expected to be crossed at most twice by edges in \(E_{h}\). Overall, when considering all the \(h+1\) levels, for each \(e\in E(P_{n})\) \[\mathbb{E}[c(e)]\leq O(h\cdot\ell^{2}/p^{2})=O\left(h\cdot n^{2/h}\cdot\ell^{2}\right)\,\] We conclude that the expected lightness of the spanner is \(O\left(h\cdot n^{2/h}\cdot\ell^{2}\right)\).

**Stretch and hop-bound analysis.** We say a path \(p=(v_{0},\ldots,v_{k})\) is monotone if it is either monotone increasing: \(v_{0}\leq\cdots\leq v_{k}\), or monotone decreasing: \(v_{0}\geq\cdots\geq v_{k}\). The following definition is crucial for our analysis of which vertices survive an attack \(B\), and which will be added to \(B^{+}\). **Definition 20**.: _We say a monotone increasing (resp. decreasing) path \(p=(v_{0},\ldots,v_{k})\) of the spanner \(H\) is usable for \(v_{0}\) if the following holds._ 1. _For every_ \(0\leq i\leq k\)_,_ \(v_{i}\in V_{i}\)_._ 2. _For every_ \(0\leq i<k\)_, if_ \(v_{i}\neq v_{i+1}\)_, then_ \(\{v_{i},v_{i+1}\}\in E_{i}\)_._ 3. \(v_{k}\) _is connected in_ \(H\) _to all vertices in_ \(V_{k}\cap[v_{k}:n]\) _(resp._ \(V_{k}\cap[1:v_{k}]\)_)._ _We say a vertex \(v\) is safe w.r.t. an attack \(B\subseteq V\), if it has a monotone increasing usable path and a monotone decreasing usable path which are both disjoint from the attack \(B\)._ The following lemma asserts that the spanner contains a shortest path that is not damaged by the attack (also with a bounded number of hops) between safe vertices. **Lemma 21**.: _If \(u,v\in[n]\) are safe w.r.t. an attack \(B\), then the spanner contains a \((2h+1)\)-hop monotone path between \(u,v\) that is disjoint from \(B\)._ Proof.: Assume w.l.o.g. that \(u<v\) and let \((u=u_{0},\ldots,u_{k})\) be a _usable_ monotone increasing path of \(u\) and \((v=v_{0},\ldots,v_{j})\) a monotone decreasing _usable_ path of \(v\). Additionally, assume w.l.o.g. that \(k\leq j\). If \(u_{k}\leq v_{k}\), then by item 3, \(u_{k}\) is connected to every vertex in \([u_{k}:n]\cap V_{k}\), in particular the spanner contains the edge \(\{u_{k},v_{k}\}\). Thus, we may take the monotone path \(u_{0},\ldots,u_{k},v_{k},\ldots,v_{0}\). Otherwise, there exists \(i<k\) s.t. \(u_{i}<v_{i}\) and \(u_{i+1}\geq v_{i+1}\). Recall that by our spanner construction, \(u_{i}\) is also connected to all the vertices \([u_{i}:u_{i+1}]\cap V_{i}\), and \(v_{i}\) is connected to all the vertices \([v_{i+1}:v_{i}]\cap V_{i}\). If \(v_{i}\leq u_{i+1}\) then \(v_{i}\in[u_{i}:u_{i+1}]\), and we may use the monotone path \(u_{0},\ldots,u_{i},v_{i},\ldots,v_{0}\). Else, \(u_{i+1}<v_{i}\), therefore \(u_{i+1}\in[v_{i+1}:v_{i}]\), and as \(u_{i+1}\in V_{i}\) as well, we have the monotone path \(u_{0},\ldots,u_{i+1},v_{i},\ldots,v_{0}\). It remains to bound the number of hops. Note that by item 1, a _usable_ path contains at most \(h\) edges, and every \(u-v\) path we considered here is a concatenation of (a prefix of) two such paths, so the number of edges used is at most \(2h+1\).

**Reliability analysis.** Let \(B\) be an oblivious attack. For any spanner \(H\) in the support of the distribution, the faulty extension \(B^{+}:=B^{+}_{H}\) will consist of \(B\) and all the vertices \(v\) that are not _safe_. Recall that the attack is oblivious to our choice of the random sets \(V_{i}\).
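Before analysing the failure probability, here is a minimal Python sketch of the construction from the Construction subsection above (the hierarchical sampling and the edge rule); the constant `c=12` and the unweighted edge representation are placeholders for illustration, not values fixed by the analysis.

```python
import math
import random

def path_reliable_spanner(n, h, nu, c=12):
    """Sketch of the path-spanner construction of Section 7.

    Vertices of the path are 1..n.  Returns a set of (unweighted) spanner edges.
    The constant c stands in for the constant 'to be fixed later' in the text.
    """
    p = n ** (-1.0 / h)
    ell = max(1, math.ceil(c * math.log(h / nu) / nu))
    # V_0 = [n]; each vertex of V_{i-1} survives to V_i independently with prob p.
    levels = [list(range(1, n + 1))]
    for _ in range(h):
        levels.append([v for v in levels[-1] if random.random() < p])
    edges = set()
    for i in range(h):
        Vi, Vip1 = levels[i], levels[i + 1]
        for x in Vi:
            right = [w for w in Vip1 if w >= x]   # net neighbours to the right of x
            left = [w for w in Vip1 if w <= x]    # net neighbours to the left of x
            u = right[ell - 1] if len(right) >= ell else n   # last vertex if too few
            v = left[-ell] if len(left) >= ell else 1        # first vertex if too few
            for y in Vi:                          # edges E_i contributed by x
                if v <= y <= u and y != x:
                    edges.add((min(x, y), max(x, y)))
    for idx, a in enumerate(levels[h]):           # V_h forms a clique
        for b in levels[h][idx + 1:]:
            edges.add((min(a, b), max(a, b)))
    return edges
```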
In the remainder of this section, for each vertex we analyse the probability that it is safe, which will depend on the number of faulty vertices in its neighborhoods, as captured by the notion of _shadow_. **Definition 22** ([2]).: _Let \(P_{n}\) be a path graph and let \(B\) be a subset of its vertices \((B\subseteq[n])\). The left \(\alpha\)-shadow of \(B\) is the set of all vertices \(b\) such that for some \(a\in[n],a\leq b\), \(|[a:b]\cap B|\geq\alpha\cdot|[a:b]|\), denoted by \(\mathcal{S}_{L}(\alpha,B)\). The right \(\alpha\)-shadow \(\mathcal{S}_{R}(\alpha,B)\) is defined symmetrically. The set \(\mathcal{S}_{\alpha}(B)=\mathcal{S}_{L}(\alpha,B)\cup\mathcal{S}_{R}(\alpha,B)\) is called the \(\alpha\)-shadow of \(B\). If \(B\) is clear from context, we may simply write \(\mathcal{S}_{\alpha}\) for the \(\alpha\)-shadow of \(B\)._ **Lemma 23** ([2]).: _For any \(B\subseteq[n]\):_ * _For every_ \(\alpha\in[\frac{2}{3},1)\) _,_ \(|\mathcal{S}_{\alpha}|\leq\frac{|B|}{2\alpha-1}\)_._ * _For every_ \(\alpha\in(0,1)\)_,_ \(|\mathcal{S}_{\alpha}|\leq O\left(\frac{|B|}{\alpha}\right)\)_._ The following lemma provides a quantitative bound, exponential in the parameter \(\ell\), on the failure probability of vertices outside a certain shadow. **Lemma 24**.: _For any \(0<\alpha<1\), if \(x\in[n]\setminus S_{\alpha}\), then_ \[\Pr[x\text{ is not safe}]\leq O(\sqrt{\ell}\cdot h\cdot\alpha^{\ell-1})\.\] Proof.: Note that \(x\notin B\), as otherwise by definition it would be contained in \(S_{\alpha}\) for any \(0\leq\alpha\leq 1\). We will try to construct a usable monotone increasing path for \(x\), \((v_{0},v_{1},...,v_{k})\) for some \(0\leq k\leq h\), that is disjoint from \(B\). Initially set \(v_{0}=x\in V_{0}\setminus B\). Assume we built the path until \(v_{i}\in V_{i}\setminus B\), and now we attempt to find the next vertex \(v_{i+1}\in V_{i+1}\). Consider the first \(\ell\) vertices in \(V_{i+1}\) that lie to the right of \(x\). If there are less than \(\ell\) such neighbors, then observe that there are less than \(\ell\) vertices in \(V_{i+1}\) to the right of \(v_{i}\) as well (as \(v_{i}\geq x\)). In this case, by the spanner construction, \(v_{i}\) connects to all vertices in \(V_{i}\) to its right, and we can set \(k=i\) and stop the process (observe that \(v_{k}\) will satisfy item 3 in the definition of usable path, so indeed we may stop here). Otherwise, if there is a vertex in \(V_{i+1}\setminus B\) among the first \(\ell\) neighbors of \(x\), we may take the first such vertex as \(v_{i+1}\). Note that the path remains monotone: \(v_{i}\leq v_{i+1}\). This is because \(v_{i+1}\in V_{i}\), i.e. it was a valid choice for \(v_{i}\), and we always take the first possible vertex. We conclude that the only case the path-building fails is the event that all these \(\ell\) vertices in \(V_{i+1}\) fall in \(B\). By virtue of \(x\notin S_{\alpha}\), we have that in any interval \([x:y]\) (for \(y>x\)), at most an \(\alpha\) fraction of the vertices are in \(B\). Fix any \(y>x\), and condition on the event that \(y\) is the smallest such that the first \(\ell\) neighbors in \(V_{i+1}\) to the right of \(x\) are in the interval \(I=[x:y]\). Recall that every vertex is sampled to \(V_{i+1}\) obliviously to the attack \(B\).
Note that the conditioning does create dependencies and changes the probability to be in \(V_{i+1}\), but the main observation is that, except for the vertex \(y\in V_{i+1}\), every set of \(\ell-1\) vertices in \([x:y)\) has equal probability to be the remaining \(\ell-1\) vertices of \(V_{i+1}\). Thus, the failure probability at step \(i+1\), which is the probability that these \(\ell\) vertices in \(V_{i+1}\) are all taken from the set \(B\), is at most \[\frac{\binom{|I\cap B|}{\ell-1}}{\binom{|I|}{\ell-1}}\leq\frac{\binom{\alpha|I|}{\ell-1}}{\binom{|I|}{\ell-1}}\leq O(\sqrt{\ell}\cdot\alpha^{\ell-1}). \tag{8}\] The last inequality uses standard approximation of binomial coefficients, see Appendix A for a proof. The lemma follows by noticing that the bound obtained is independent of \(y\), and by taking a union bound over both sides (left and right) of the at most \(h\) steps \(i=0,1,...,h-1\). We will consider two regimes of shadows separately, the first when \(\alpha\) is close to \(1\), and the second for small \(\alpha\). For the first regime, define for each index \(0\leq j\leq\lfloor\log\frac{1}{3\nu}\rfloor\), \(\alpha_{j}=1-2^{j}\cdot\nu\). Note that for any such \(j\), \(\alpha_{j}\geq 2/3\), so by the first item in Lemma 23 we have \[|S_{\alpha_{j}}|\leq\frac{|B|}{2\alpha_{j}-1}=\frac{|B|}{1-2^{j+1}\nu}\leq(1+2^{j+2}\nu)|B|\.\] Since all vertices of \(B\) are included in any shadow, it follows that \[|S_{\alpha_{j}}\setminus B|\leq 2^{j+2}\nu|B|. \tag{9}\] For the smaller shadows, by the second item in Lemma 23 we have \[|S_{2^{-j}}|\leq O(2^{j}|B|). \tag{10}\] **Lemma 25**.: \(\mathbb{E}[|B^{+}|]\leq(1+O(\nu))|B|\). Proof.: First, consider the case that \(B=\emptyset\). Note that in this case, every vertex is safe, as it has a monotone increasing and a monotone decreasing usable path. To see the former: for \(0\leq i<h\), every vertex \(v\in V_{i}\) is either connected to the closest vertex of \(V_{i+1}\) that lies to the right of \(v\), or, if there is no such vertex, then \(v\) is connected to every vertex in \(V_{i}\cap[v:n]\). Thus one can easily build a monotone increasing path. Therefore, in this case \(B^{+}=B=\emptyset\). Notice that \[\mathbb{E}[|B^{+}|]\leq|B|+\sum_{x\in[n]\setminus B}\Pr[x\text{ is not safe}]. \tag{11}\] We analyze Equation (11) by considering vertices in different shadow regimes separately, i.e., \[[n]=S_{\alpha_{0}}+\sum_{j=1}^{\lfloor\log\frac{1}{3\nu}\rfloor}\left(S_{\alpha_{j}}\setminus S_{\alpha_{j-1}}\right)+S_{1/2}\setminus S_{\alpha_{\lfloor\log\frac{1}{3\nu}\rfloor}}+\sum_{j=2}^{\log n}S_{2^{-j}}\setminus S_{2^{-(j-1)}}\.\] Note that \(S_{1/n}=[n]\), as \(B\neq\emptyset\), so every vertex was accounted for. It holds that \[\mathbb{E}\left[|B^{+}|\right]\leq\underbrace{|\mathcal{S}_{\alpha_{0}}|}_{(1)}+\underbrace{\sum_{j=1}^{\lfloor\log\frac{1}{3\nu}\rfloor}\sum_{x\in\mathcal{S}_{\alpha_{j}}\setminus\mathcal{S}_{\alpha_{j-1}}}\Pr\left[x\in B^{+}\right]}_{(2)}+\underbrace{\sum_{x\in\mathcal{S}_{\frac{1}{2}}\setminus\mathcal{S}_{\alpha_{\lfloor\log\frac{1}{3\nu}\rfloor}}}\Pr\left[x\in B^{+}\right]}_{(3)}+\underbrace{\sum_{j=2}^{\log n}\sum_{x\in\mathcal{S}_{2^{-j}}\setminus\mathcal{S}_{2^{-(j-1)}}}\Pr\left[x\in B^{+}\right]}_{(4)}\.\] We next bound each one of the summands (for convenience we ignore the \(-1\) in the exponent of \(\alpha\) in Lemma 24; it can easily be handled by slightly increasing \(\ell\)):
1. By Equation (9), \((1)=|S_{\alpha_{0}}|\leq(1+4\nu)\cdot|B|\).
2. Fix \(1\leq j\leq\lfloor\log\frac{1}{3\nu}\rfloor\) and \(x\notin S_{\alpha_{j-1}}\); then by Lemma 24 the probability that \(x\) is not safe is at most \[O(\sqrt{\ell}\cdot h)\cdot(1-2^{j-1}\nu)^{c\cdot\nu^{-1}\cdot\ln(h/\nu)}\leq O(h/\nu)^{2}\cdot e^{-2^{j-1}\cdot c\cdot\ln(h/\nu)}\leq 2^{-2j}\,\] where the last inequality holds for a large enough constant \(c\). By Equation (9), \(|S_{\alpha_{j}}\setminus B|\leq 4\nu\cdot 2^{j}|B|\). Summing over all indices \(j\) we conclude \((2)\leq\sum_{j=1}^{\log\frac{1}{3\nu}}4\nu\cdot 2^{j}|B|\cdot 2^{-2j}\leq 4\nu\cdot|B|\).
3. For the transition between large and small shadows, whenever \(x\in S_{1/2}\setminus S_{\alpha_{\lfloor\log\frac{1}{3\nu}\rfloor}}\), since \(\alpha_{\lfloor\log\frac{1}{3\nu}\rfloor}\leq 5/6\) we have that the probability that \(x\) is not safe is at most \[O(h/\nu)^{2}\cdot(5/6)^{c\cdot\nu^{-1}\cdot\ln(h/\nu)}\leq\nu\,\] for large enough \(c\). By Equation (10), \(|\mathcal{S}_{\frac{1}{2}}\setminus B|\leq O(|B|)\), thus \((3)\leq O(\nu|B|)\).
4. For \(2\leq j\leq\log n\) and \(x\notin S_{2^{-(j-1)}}\), by Lemma 24 the probability that \(x\) is not safe is at most \[O(\sqrt{\ell}h)\cdot(2^{-(j-1)})^{c\cdot\nu^{-1}\cdot\ln(h/\nu)}\leq O(h/\nu)^{2}\cdot(\nu/h)^{j\cdot c}\leq 2^{-2j}\cdot\nu\,\] for a large enough constant \(c\). By Equation (10), \(|S_{2^{-j}}|\leq O(2^{j}|B|)\). It follows that \((4)\leq\sum_{j=2}^{\log n}O(2^{j}|B|)\cdot 2^{-2j}\cdot\nu=O(\nu)\cdot|B|\).

Combining the 4 cases together, we conclude that \(\mathbb{E}\left[|B^{+}|\right]\leq(1+O(\nu))\cdot|B|\), as required. Proof of Theorem 16.: The bounds on the expected size and lightness of the spanner were shown above, and by Lemma 8, they can be translated to worst-case bounds, incurring only a constant loss. Recall that we set \(B^{+}\) to be all the vertices which are not _safe_. By Lemma 21 we get a shortest path with \(2h+1\) hops for any pair of _safe_ vertices. By Lemma 25, the expected size of \(B^{+}\) is \((1+O(\nu))|B|\); the theorem follows by rescaling \(\nu\) by a constant.

## 8 Improved Light Reliable Spanners for Minor-free Graphs

In this section we refine our techniques in order to obtain near optimal stretch for light reliable spanners of minor-free graphs. More generally, we show that a certain property of the Pairwise Partition Cover Scheme (PPCS) allows us to improve the stretch to be almost \(2\), which is near optimal, while increasing the lightness by polylog factors. We begin by formally defining this property, which could be useful for other graph families as well. Throughout this section \(G=(X,E,w)\) is a weighted graph with \(n\) vertices excluding a constant size minor. \(d_{G}\) denotes the shortest path metric in \(G\). That is, \(d_{G}(u,v)\) denotes the minimum weight of a path from \(u\) to \(v\) in \(G\).

**Centrally-padded PPCS for Minor-free Graphs.** The property of PPCS we will exploit is captured by the following definition.
**Definition 26**.: _A \((\tau,\rho,\varepsilon,\Delta)\)-pairwise partition cover \(\mathbb{P}=\{\mathcal{P}_{1},\ldots,\mathcal{P}_{s}\}\) of a metric space \((X,d)\) is called centrally-padded, if every cluster \(C\) in every partition has a designated center \(x\in X\), and for every pair \(u,v\) such that \(\frac{\Delta}{2\rho}\leq d_{G}(u,v)\leq\frac{\Delta}{\rho}\), there is a cluster \(C\) in one of the partitions \(\mathcal{P}_{i}\) such that \(C\) contains both closed balls \(B(u,\varepsilon\Delta),B(v,\varepsilon\Delta)\), and also_ \[d_{G}(u,x)+d_{G}(v,x)\leq(1+32\varepsilon)\cdot d_{G}(u,v). \tag{12}\] The following lemma asserts that our construction of PPCS for minor-free graphs in Section 4 is in fact centrally-padded. **Lemma 27**.: _For any minor-free graph \(G\) with \(n\) vertices and \(0<\varepsilon<1/12\), there exists a \(\big{(}O(\varepsilon^{-1}\log n),\frac{2}{1-6\varepsilon},\varepsilon\big{)}\)-PPCS which is centrally-padded._ Proof.: Consider the construction of Lemma 12. Recall that every cluster is a ball centered at a net point, so we naturally define its center as that net point. For any \(u,v\in X\) with \(\frac{(1-6\varepsilon)\Delta}{4}\leq d(u,v)\leq\frac{(1-6\varepsilon)\Delta}{2}\), we found the first shortest path \(P\) in the SPD that intersects \(P_{uv}\) (the shortest \(u-v\) path) or at least one of the balls \(B_{u}=B(u,\varepsilon\Delta)\), \(B_{v}=B(v,\varepsilon\Delta)\) (see Figure 2). We denoted \(x\in P\) as a vertex on that intersection. Then we found a net-point \(z\in\mathcal{N}\) on \(P\) at distance at most \(\varepsilon\Delta\) from \(x\), and considered the cluster \(C=B(z,\Delta/2)\). If \(x\in P_{uv}\) then \[d(z,u)+d(z,v)\leq 2d(z,x)+d(x,u)+d(x,v)\leq 2\varepsilon\Delta+d(u,v)\leq(1+16\varepsilon)\cdot d(u,v)\.\] Otherwise, w.l.o.g. \(x\in B_{u}\) and we get that \[d(z,u)+d(z,v)\leq d(z,u)+d(z,u)+d(u,v)\leq 4\varepsilon\Delta+d(u,v)\leq(1+32\varepsilon)\cdot d(u,v)\,\] as required.

**\(k\)-HST Cover.** The next step is to compute a \(k\)-HST cover, which is done exactly in the same manner as in Theorem 4, so we get a \(O(k\log n)\)-light \(\left(O(\varepsilon^{-2}\log n\cdot\log k),\frac{2(1+3\varepsilon)}{1-6\varepsilon}\right)\)-\(k\)-HST cover. The main point is that we will use these \(k\)-HSTs to construct reliable spanners, but the edge weights and the stretch guarantees will be with respect to the original distances in the graph. That is, in some sense we ignore the distances induced by the \(k\)-HSTs, and just use their laminar structure. The property that we will use from the proof of Theorem 4 is the following.

* For every pair \(u,v\), there exists a cluster \(C\) of diameter at most \(\Delta\) in the PPCS in which \(u,v\) are centrally-padded, and so \(C\) contains a net point. Thus, there will be a \(k\)-HST in the cover with an internal node \(x\) and label \(\Gamma_{x}=\Delta\) corresponding to \(C\), that contains \(u,v\).

We remark that \(L(x)\) is not necessarily equal to \(C\), since we changed \(C\) a bit before making it an internal node of the \(k\)-HST (to guarantee the laminar structure, and a bound on the lightness). The main result of this section is the following theorem. **Theorem 18**.: _Let \(G=(V,E)\) be a graph with \(n\) vertices that excludes a fixed minor.
Then for any \(0<\varepsilon<1/12\) and \(0<\nu^{\prime}<1\), \(G\) admits an oblivious \(\nu^{\prime}\)-reliable \(2(1+\varepsilon)\)-spanner of size \(\tilde{O}\left(\frac{n}{\varepsilon^{6}\nu^{\prime 2}}\right)\) and lightness \(\tilde{O}\left(\frac{\log^{8}n}{\varepsilon^{7}\nu^{\prime 2}}\right)\)._ Let \(k=c^{\prime}/\varepsilon\), for a constant \(c^{\prime}\) to be determined later. We create a \(O(k\log n)\)-light \(\left(\tau,\frac{2(1+3\varepsilon)}{1-6\varepsilon}\right)\)-\(k\)-HST cover for the graph \(G\), with \(\tau=O(\varepsilon^{-2}\log n\cdot\log k)\), as discussed above. Since we desire a \(\nu^{\prime}\)-reliable spanner for \(G\), we will use the parameter \(\nu=\frac{\nu^{\prime}}{5\tau}\) when devising a \(\nu\)-reliable spanner for each \(k\)-HST in the cover. Let \(T\) be one of the \(k\)-HSTs in the cover. Note that every internal node of \(T\) corresponds to a cluster \(C\) in the centrally-padded PPCS, which has a center \(x\). To avoid confusion, we will refer to \(x\) both as the cluster center and as the internal node of \(T\). Recall that in Section 3 every internal node chose an arbitrary ordering on its leaves, which was used to define the preorder of \(T\). Here, the order will not be arbitrary. Instead, it will be defined with respect to distances in \(G\). That is, each internal node \(x\) orders its children \(x_{1},\ldots,x_{t}\) (each is a net point in the graph) by their distance to \(x\) (in \(G\)). Then, let \(P\) be the resulting preorder path on the leaves of \(T\). The intuition behind the sampling of the random bi-cliques is that we want vertices "near" \(x\) to be chosen, since the centrally-padded property gives us a better stretch guarantee going through \(x\), than just \(2\Gamma_{x}\). To this end, let \(L(x)=(v_{1},v_{2},...,v_{s})\) be the ordering given by the restriction of \(P\) to \(L(x)\). We sample each \(v_{j}\) independently to be included in \(Z_{x}\) with probability \(p_{j}=\min\{1,\frac{c\cdot\ln n}{j\cdot\nu}\}\), for a constant \(c\) to be determined later. The edges of the spanner are: for every internal node \(x\in T\) with children \(x_{1},\ldots,x_{t}\), for every \(j=1,\ldots,t\), we add all the edges \(\{\{y,z\}\ :\ y\in Z_{x},z\in Z_{x_{j}}\}\) to the spanner \(H\), weighted according to the distances in the graph \(G\). The final spanner will consist of the union of the spanners constructed for all the \(k\)-HSTs in the cover.

**Safe Leaves.** Fix a \(k\)-HST \(T\). Under an attack \(B\), we say that a vertex \(u\) is _safe_ w.r.t. \(B\), if for every ancestor \(x\in T\) of \(u\), \(Z_{x}\setminus B\) contains a vertex \(y\) such that \[d_{G}(x,y)\leq d_{G}(x,u)+2\Gamma_{x}/k. \tag{13}\] In other words, we want every ancestor \(x\) of \(u\) to have a surviving vertex \(y\) in its sample set whose distance to \(x\) is not much larger than that of \(u\). Denote by \(B_{T}^{+}\) all the vertices which are not safe in \(T\) w.r.t. \(B\). The final bad set is defined as \(B^{+}=B\cup\bigcup_{T}B_{T}^{+}\). The following claim will be useful for bounding the size and lightness of our spanner. **Claim 28**.: _Fix any \(T\) in the cover, then for any \(x\in T\),_ \[\mathbb{E}[|Z_{x}|]\leq O((\ln^{2}n)/\nu)\.\] Proof.: Let \(L(x)=(v_{1},\ldots,v_{s})\), with the order induced by the restriction of \(P\) to \(L(x)\), then \[\mathbb{E}[|Z_{x}|]=\sum_{j=1}^{s}p_{j}\leq\frac{c\ln n}{\nu}\cdot\sum_{j=1}^{s}\frac{1}{j}=O\left(\frac{\ln^{2}n}{\nu}\right)\.\]

**Size Analysis.** Fix any tree \(T\) in the cover.
Denote \(z_{x}=|Z_{x}|\), and note that these random variables \(\{z_{x}\}_{x\in T}\) are independent, so \(\mathbb{E}[z_{x}\cdot z_{y}]=\mathbb{E}[z_{x}]\cdot\mathbb{E}[z_{y}]\) whenever \(x\neq y\). Using Claim 28, the expected number of edges added to \(H\) by the random bi-cliques is \[\mathbb{E}\left[\sum_{x\in T}\sum_{i=1}^{\deg(x)}z_{x}\cdot z_{x_{i}}\right] =\sum_{x\in T}\sum_{i=1}^{\deg(x)}\mathbb{E}[z_{x}]\cdot\mathbb{E}[z_{x_{i}}]\] \[=O(\nu^{-2}\cdot\log^{4}n)\cdot\sum_{x\in T}\deg(x)=O(\nu^{-2}\cdot n\log^{4}n)\.\] The final spanner is the union of \(\tau=O(\varepsilon^{-2}\log n\cdot\log k)\) spanners, one for each \(T\) in the cover, and \(\nu=\frac{\nu^{\prime}}{5\tau}\), so the final size is \(\tau\cdot O((\frac{\tau}{\nu^{\prime}})^{2}\cdot n\log^{4}n)=O(n\cdot\nu^{\prime-2}\cdot\varepsilon^{-6}\cdot\log^{7}n\cdot\log^{3}k)=\nu^{\prime-2}\cdot\varepsilon^{-6}\cdot\tilde{O}(n)\).

**Lightness Analysis.** Let \(T\) be any \(k\)-HST in the cover, and recall that the MST weight of \(T\) is equal to \[\sum_{x\in T}(\deg(x)-1)\cdot\Gamma_{x}\.\] Each edge that \(x\) adds to the spanner has weight at most \(\Gamma_{x}\) (even though we use the graph distance, as \(T\) is dominating). Using Claim 28, the total weight of edges in the random bi-cliques is expected to be at most \[\mathbb{E}\left[\sum_{x\in T}\sum_{i=1}^{\deg(x)}\Gamma_{x}\cdot z_{x}\cdot z_{x_{i}}\right] = \sum_{x\in T}\Gamma_{x}\sum_{i=1}^{\deg(x)}\mathbb{E}[z_{x}]\cdot\mathbb{E}[z_{x_{i}}]\] \[\leq O((\log^{4}n)/\nu^{2})\cdot\sum_{x\in T}\Gamma_{x}\cdot\deg(x)\] \[= O((\log^{4}n)/\nu^{2})\cdot w(MST(T))\.\] Since every \(k\)-HST has lightness \(O(k\log n)\), there are \(\tau=O(\varepsilon^{-2}\log n\cdot\log k)\) trees in the cover, and \(\nu=\frac{\nu^{\prime}}{5\tau}\), the lightness of the resulting spanner compared to \(G\) is \[\sum_{T\text{ in the cover}}O\left(\frac{\log^{4}n}{\nu^{2}}\right)\cdot\frac{w(MST(T))}{w(MST(G))} =O\left(\frac{\tau^{3}}{\nu^{\prime 2}}\cdot k\cdot\log^{5}n\right)\] \[=O\left(\frac{k\cdot\log^{8}n\cdot\log^{3}k}{\nu^{\prime 2}\cdot\varepsilon^{6}}\right)\] \[=\nu^{\prime-2}\cdot\tilde{O}(\varepsilon^{-7}\cdot\log^{9}n)\]

**Reliability Analysis.** Fix an attack \(B\) and a tree \(T\), and define the shadow with respect to the path \(P\) and the set \(B\) (recall Definition 22). We start by showing that for any internal node \(x\), the preorder of \(L(x)\) almost respects the distances to the center \(x\) in \(G\). **Claim 29**.: _Fix any node \(x\in T\), and let \(L(x)=(v_{1},\ldots,v_{s})\) be the ordering given by the restriction of \(P\) to \(L(x)\). Then for any \(1\leq i<j\leq s\) we have that_ \[d_{G}(x,v_{i})\leq d_{G}(x,v_{j})+2\Gamma_{x}/k\.\] Proof.: Let \(x_{i^{\prime}}\) (resp., \(x_{j^{\prime}}\)) be the child of \(x\) whose subtree contains \(v_{i}\) (resp., \(v_{j}\)). Since \(v_{i}\) appears in \(P\) before \(v_{j}\), it follows that the order on the children of \(x\) is such that \(x_{i^{\prime}}\) appears before \(x_{j^{\prime}}\) (we allow \(i^{\prime}=j^{\prime}\)). By our definition it means that \(d_{G}(x,x_{i^{\prime}})\leq d_{G}(x,x_{j^{\prime}})\). As \(T\) is a \(k\)-HST, all distances in \(T\) between vertices in \(L(x_{i^{\prime}})\) (resp., \(L(x_{j^{\prime}})\)) are at most \(\frac{\Gamma_{x}}{k}\). Since \(T\) is dominating, this also holds for the graph distances, which gives that both \(d_{G}(v_{i},x_{i^{\prime}}),d_{G}(v_{j},x_{j^{\prime}})\leq\frac{\Gamma_{x}}{k}\).
We conclude that \[d_{G}(x,v_{i}) \leq d_{G}(x,x_{i^{\prime}})+d_{G}(x_{i^{\prime}},v_{i})\] \[\leq d_{G}(x,x_{j^{\prime}})+d_{G}(x_{i^{\prime}},v_{i})\] \[\leq d_{G}(x,v_{j})+d_{G}(x_{j^{\prime}},v_{j})+d_{G}(x_{i^{\prime}},v_{i})\] \[\leq d_{G}(x,v_{j})+\frac{2\cdot\Gamma_{x}}{k}\.\] **Lemma 30**.: _For every tree \(T\), \(0<\alpha\leq 1-\nu\), and any vertex \(u\notin S_{\alpha}(B)\),_ \[\Pr[\text{$u$ is not safe}]\leq n^{-2}\.\] Proof.: Let \(x\) be any ancestor of \(u\), and let \(L(x)=(v_{1},\ldots,u=v_{j},\ldots,v_{s})\) be the ordering given by the restriction of \(P\) to \(L(x)\). To ensure that \(u\) does not fail to be safe due to \(x\), it suffices that \(Z_{x}\setminus B\) contains a vertex in the prefix \((v_{1},\ldots,v_{j})\). This is because Claim 29 implies that any such vertex will satisfy (13). Since \(u\notin S_{\alpha}(B)\), it follows that at most an \(\alpha\) fraction of the vertices \((v_{1},\ldots,v_{j})\) are in \(B\). As the probability of being sampled to \(Z_{x}\) decreases with the index, it can be easily checked that the worst possible case is that \(\{v_{1},\ldots,v_{\lfloor\alpha\cdot j\rfloor}\}\subseteq B\) (in any other case the probability of success will only be higher). We assume that \(p_{\lfloor\alpha j\rfloor+1}<1\), as otherwise \(v_{\lfloor\alpha j\rfloor+1}\) is surely sampled into \(Z_{x}\). This means \(p_{i}=\frac{c\ln n}{i\cdot\nu}\) for all \(i>\alpha\cdot j\). Note that \(Z_{x}\) is sampled independently of \(B\), thus \[\Pr[Z_{x}\cap\{v_{\lfloor\alpha j\rfloor+1},\ldots,v_{j}\}=\emptyset] =\prod_{i=\lfloor\alpha\cdot j\rfloor+1}^{j}(1-p_{i}) \tag{14}\] \[\leq e^{-\sum_{i=\lfloor\alpha\cdot j\rfloor+1}^{j}p_{i}}\] \[=e^{-\sum_{i=\lfloor\alpha\cdot j\rfloor+1}^{j}\frac{c\ln n}{i\cdot\nu}}\] \[\leq n^{-\frac{c}{2\nu}\cdot(\ln j-\ln(\alpha\cdot j))}\] \[=n^{-\frac{c}{2\nu}\cdot\ln(\frac{1}{\alpha})}\] \[\leq n^{-3}\.\] The last inequality holds as \(\alpha\leq 1-\nu\), so \[\ln(\frac{1}{\alpha})\geq\ln(\frac{1}{1-\nu})>\ln(1+\nu)\geq\frac{\nu}{2}\] and by picking a large enough constant \(c\geq 12\). The lemma follows by a union bound over all possible ancestors \(x\). We are now ready to bound the final set \(B^{+}\). **Lemma 31**.: \(\mathbb{E}[|B^{+}|]\leq(1+\nu^{\prime})\cdot|B|\). Proof.: First consider the case \(B=\emptyset\). In this case \(B^{+}=\emptyset\) as well. This is because all the vertices are safe. Indeed, for every tree \(T\) and node \(x\), the first leaf in \(L(x)\) is sampled to \(Z_{x}\) with probability \(1\), and thus \(Z_{x}\not\subseteq B\). Thus we will assume \(B\neq\emptyset\). We can also assume that \(\nu^{\prime}\geq\frac{1}{n}\), as otherwise Theorem 18 holds trivially with \(H=G\). Fix a tree \(T\) in the \(k\)-HST cover. As \(\nu<1/3\), by the first item of Lemma 23, the shadow \(S_{1-\nu}(B)\) of the path \(P\) satisfies \[|S_{1-\nu}(B)|\leq\frac{|B|}{2(1-\nu)-1}\leq(1+4\nu)\cdot|B|\.\] By Lemma 30, every vertex outside \(S_{1-\nu}(B)\) joins \(B_{T}^{+}\) with probability at most \(n^{-2}\).
It follows that \[\mathbb{E}\left[\big{|}B_{T}^{+}\big{|}\right]\leq|S_{1-\nu}(B)\setminus B|+\sum_{v\notin S_{1-\nu}(B)}\Pr\left[v\in B_{T}^{+}\right]\leq 4\nu\cdot|B|+n\cdot\frac{1}{n^{2}}\leq 4\nu\cdot|B|+\nu\cdot|B|=5\nu\cdot|B|\.\] Summing up over all the \(\tau\) trees in the cover, and recalling that \(\nu=\frac{\nu^{\prime}}{5\tau}\), we conclude that \[\mathbb{E}\left[\big{|}B^{+}\setminus B\big{|}\right]\leq\sum_{T\text{ in the cover}}\mathbb{E}\left[\big{|}B_{T}^{+}\big{|}\right]\leq\sum_{T\text{ in the cover}}5\nu\cdot|B|\leq 5\tau\cdot\nu\cdot|B|=\nu^{\prime}\cdot|B|\.\] **Stretch Analysis.** Let \(u,v\notin B^{+}\) be two safe leaves in the \(k\)-HST \(T\) that has an internal node \(x\) corresponding to a cluster \(C\) in which \(u,v\) are centrally-padded. Let \(\Delta\) be the diameter bound on \(C\), and so \(\Gamma_{x}=\Delta\). By definition of padding we have that \(u,v\in L(x)\) and \[\frac{(1-6\varepsilon)\Delta}{4}\leq d_{G}(u,v)\leq\frac{(1-6\varepsilon)\Delta}{2}\, \tag{15}\] and as \(\varepsilon<1/12\), it follows that \(\Delta\leq 8d_{G}(u,v)\). Let \(x_{i},x_{j}\) be the children of \(x\) in \(T\) which are the ancestors of \(u\) and \(v\) respectively.14 As \(u,v\notin B^{+}\), it holds that there are vertices \(z\in Z_{x}\setminus B\), \(u^{\prime}\in Z_{x_{i}}\setminus B\), and \(v^{\prime}\in Z_{x_{j}}\setminus B\), and by (13) we also have that \[d_{G}(x,z)\leq\min\{d_{G}(x,u),d_{G}(x,v)\}+2\Delta/k. \tag{16}\] Footnote 14: Since \(k=c^{\prime}/\varepsilon>8\), and all vertices in \(L(x_{i})\) are at distance at most \(\Gamma_{x_{i}}\leq\Delta/k\leq 8d_{G}(u,v)/k\) from each other, it cannot be that \(i=j\). It follows that the edges \(\{u^{\prime},z\},\{v^{\prime},z\}\in H\) survive the attack, and furthermore \[d_{G}(u^{\prime},z)\leq d_{G}(u^{\prime},u)+d_{G}(u,z)\leq\Gamma_{x_{i}}+d_{G}(u,x)+d_{G}(x,z)\.\] 
By (16) we have \(d_{G}(x,z)\leq d_{G}(x,u)+2\Delta/k\), and since \(\Gamma_{x_{i}}\leq\Delta/k\), the bound above gives \(d_{G}(u^{\prime},z)\leq 2d_{G}(u,x)+3\Delta/k\). A symmetric argument yields \(d_{G}(v^{\prime},z)\leq 2d_{G}(v,x)+3\Delta/k\). Since we used the same bi-clique construction as we did in Theorem 1, and a more restrictive definition of _safe_, we have that Lemma 10 still holds (with \(f(x)=x\), since we did not apply the heavy-path decomposition here). In particular, \(H\) contains a \(u-u^{\prime}\) path (resp., \(v-v^{\prime}\) path) which is disjoint from \(B\), of length at most \(\left(1+\frac{1}{k-1}\right)\cdot\Gamma_{x_{i}}\) (resp., \(\left(1+\frac{1}{k-1}\right)\cdot\Gamma_{x_{j}}\)). As \(T\) is a \(k\)-HST, \(\Gamma_{x_{i}},\Gamma_{x_{j}}\leq\frac{\Delta}{k}\), so we have that \[d_{H\setminus B}(u,v) \leq d_{H\setminus B}(u,u^{\prime})+d_{G}(u^{\prime},z)+d_{G}(z,v^{ \prime})+d_{H\setminus B}(v^{\prime},v)\] \[\leq 2(d_{G}(u,x)+d_{G}(v,x)+3\Delta/k)+\left(2+\frac{2}{k-1}\right) \cdot\frac{\Delta}{k}\] \[\leq 2d_{G}(u,v)\cdot\left(1+O(\varepsilon)\right)\;.\] The last inequality uses that \(\Delta\leq 8d_{G}(u,v)\), the definition of centrally-padded (12), and the choice of \(k=\Theta(1/\varepsilon)\). ## 9 Lower Bounds This section is devoted to proving lower bounds. All of our lower bounds hold for any finite stretch, that is, even if one requires from the reliable spanners just to preserve connectivity. For an attack \(B\) on a spanner \(H\), a valid superset \(B_{H}^{+}\) should satisfy that all the points in \(X\setminus B^{+}\) belong to the same connected component in \(H\setminus B\). In Section 9.1 we show that every deterministic reliable spanner for the path must have at least \(\Omega(n)\) lightness. This lower bound holds for any constant \(\nu>0\). The main point of this lower bound is that deterministic reliable spanners have huge lightness. In particular, we did not attempt to optimize the dependence on \(\nu\) (or other terms). Note that a deterministic \(\nu\)-reliable \(1\)-spanner with lightness \(O(\nu^{-6}\cdot n\log n)\) follows from [1]. In Section 9.3 we prove that every _oblivious_ \(\nu\)-reliable spanner for the path has lightness \(\Omega(\nu^{-2}\cdot\log(n\nu))\). In Section 9.2 we construct an ultrametric such that every oblivious \(\nu\)-reliable spanner has lightness \(\Omega(\nu^{-2})\). These two lower bounds show that the lightness parameters in our Theorems 1 and 16 are tight up to second order terms (even if we ignore the stretch factor). The proof of Section 9.2 appears before Section 9.3 as the two arguments are somewhat similar, while the proof in Section 9.2 is simpler. ### Lower bound for deterministic light reliable spanners **Theorem 19**.: _[Deterministic Lower bound for Path] For any constant \(\nu>0\), every deterministic \(\nu\)-reliable spanner for the unweighted path graph \(P_{n}\) has lightness \(\Omega(n)\)._ Proof.: Consider a deterministic \(\nu\)-reliable spanner \(H\) for \(P_{n}\). Set \(\varepsilon=\min\{\frac{1}{16},\frac{1}{16\nu}\}=\Omega(1)\). Denote by \(L=[1,(1/2-\varepsilon)n]\) and \(R=[(1/2+\varepsilon)n+1,n]\) the first and last \((1/2-\varepsilon)n\) vertices along \(P_{n}\) respectively. 
(For simplicity we assume that these are all integers.) Let \(E_{1}\) be the subset of \(H\) edges going from a vertex in \(L\) to a vertex in \(R\). Seeking contradiction, assume that \(|E_{1}|<\varepsilon n\). Let \(B\subseteq[n]\) be a subset consisting of all the vertices in \([(1/2-\varepsilon)n+1,(1/2+\varepsilon)n]\), and all the vertices in \(L\) that are contained in an edge of \(E_{1}\). Since \(|B|<3\varepsilon n\), it holds that \(|L\setminus B|\geq(1/2-4\varepsilon)n\), and \(|R\setminus B|=(1/2-\varepsilon)n\). However, the graph \(H\setminus B\) does not contain any edge from a vertex in \(L\setminus B\) to a vertex in \(R\). In particular, \(B^{+}\) must contain either all the vertices in \(L\), or all the vertices in \(R\). It follows that \[|B^{+}\setminus B|\geq(1/2-4\varepsilon)n\geq n/4\geq\nu\cdot 4\varepsilon n >\nu\cdot|B|\,\] a contradiction to the fact that \(H\) is a \(\nu\)-reliable spanner. It follows that \(|E_{1}|\geq\varepsilon n\). Note that each edge in \(E_{1}\) has weight at least \(2\varepsilon n\). We conclude that \[w(H)\geq|E_{1}|\cdot 2\varepsilon n\geq 2(\varepsilon n)^{2}=\Omega(n)\cdot w (\text{MST})\,\] where in the last equality we used that \(\nu\) is a constant. ### Lower Bound for HST Similarly to Theorem 21, the lower bound here holds even if one is only interested in preserving connectivity. **Theorem 20** (Oblivious Lower Bound for HST).: _For every \(\nu\in(0,1)\), there is an ultrametric such that every oblivious \(\nu\)-reliable spanner has lightness \(\Omega(\nu^{-2})\)._ _Proof._ Set \(\ell=\frac{1}{4\nu}\). Consider an ultrametric consisting of a root \(r\) with label \(1\), and \(\ell\) children \(\{v_{1},\ldots,v_{\ell}\}\), each with label \(\varepsilon=\frac{1}{\ell}\), and \(\ell\) children each, where \(\{v_{1}^{i},v_{2}^{i},\ldots,v_{\ell}^{i}\}\) are the children of \(v_{i}\). In total we have \(\ell^{2}\) leaves. See illustration on the right. The MST for this ultrametric will consist of \(\ell-1\) edges of weight \(1\), and \(\ell\cdot(\ell-1)\) edges of weight \(\varepsilon=\frac{1}{\ell}\). So the total weight is \(2(\ell-1)\). Consider an oblivious \(\nu\)-reliable spanner \(\mathcal{D}\), and let \(H\sim\operatorname{supp}(\mathcal{D})\). Let \(\mathcal{J}\in\{1,2\ldots,\ell\}^{\ell}\) be a string of \(\ell\) indices between \(1\) and \(\ell\). Let \(H_{\mathcal{J}}\) be the subgraph of \(H\) induced by \(\{v_{\mathcal{J}_{1}}^{1},v_{\mathcal{J}_{2}}^{2},\ldots,v_{\mathcal{J}_{\ell} }^{\ell}\}\). That is, for each \(i\), we keep only the vertex corresponding to the \(i\)'th index in \(\mathcal{J}\). Let \(\Psi_{\mathcal{J}}\) be the event that the graph \(H_{\mathcal{J}}\) contains at least \(\frac{\ell}{2}\) edges. Consider the attack \(B_{\mathcal{J}}\) which consist of all the vertices except \(\{v_{\mathcal{J}_{1}}^{1},v_{\mathcal{J}_{2}}^{2},\ldots,v_{\mathcal{J}_{\ell} }^{\ell}\}\). If the event \(\Psi_{\mathcal{J}}\) did not occur for a spanner \(H\), then \(H\setminus B_{\mathcal{J}}\), is disconnected, and the largest connected component has size at most \(\frac{\ell}{2}\). Observe that in order to preserve connectivity, \(B_{\mathcal{J}}^{+}\) must contain all vertices in all connected components of \(H\setminus B_{\mathcal{J}}\), except for one component. In particular, \(B_{\mathcal{J}}^{+}\setminus B_{\mathcal{J}}\) will contain at least \(\frac{\ell}{2}=2\nu\cdot\ell^{2}\geq 2\nu\cdot|B|\) vertices. 
As \(\mathcal{D}\) is \(\nu\)-reliable, it holds that \[\nu\cdot|B|\geq\mathbb{E}[|B^{+}\setminus B|]\geq\Pr\left[\overline{\Psi_{ \mathcal{J}}}\right]\cdot 2\nu\cdot|B|\,\] It follows that \(\Pr\left[\overline{\Psi_{\mathcal{J}}}\right]\leq\frac{1}{2}\), and in particular \(\Pr\left[\Psi_{\mathcal{J}}\right]\geq\frac{1}{2}\). We conclude that for every \(\mathcal{J}\) it holds that \(\mathbb{E}_{H\sim\mathcal{D}}\left[|E(H_{\mathcal{J}})|\right]\geq\Pr\left[ \Psi_{\mathcal{J}}\right]\cdot\frac{\ell}{2}\geq\frac{\ell}{4}\). On the other hand, denote by \(\widehat{H}\) the subset of \(H\) edges of weight \(1\) (i.e. between children of \(v_{i},v_{j}\) for \(i\neq j\)). Note that for every \(\mathcal{J}\), \(H_{\mathcal{J}}\subseteq\widehat{H}\) (as \(H_{\mathcal{J}}\) does not contain \(\varepsilon\)-weight edges). Every edge \(e\in\widehat{H}\) belongs to \(H_{\mathcal{J}}\) if and only if both its endpoints are chosen by \(\mathcal{J}\). If we choose \(\mathcal{J}\) u.a.r., \(e\) will survive with probability \(\frac{1}{\ell^{2}}\). We conclude \[\mathbb{E}_{\mathcal{J},H}\left[\left|E(H_{\mathcal{J}})\right| \right] =\mathbb{E}_{\mathcal{J}}\left[\mathbb{E}_{H}\left[\left|E(H_{ \mathcal{J}})\right|\right]\right]\geq\mathbb{E}_{\mathcal{J}}\left[\frac{ \ell}{4}\right]=\frac{\ell}{4}\] \[\mathbb{E}_{\mathcal{J},H}\left[\left|E(H_{\mathcal{J}})\right| \right] =\mathbb{E}_{H}\left[\mathbb{E}_{\mathcal{J}}\left[\left|E(H_{ \mathcal{J}})\right|\right]\right]=\mathbb{E}_{H}\left[\frac{1}{\ell^{2}}\cdot \left|\widehat{H}\right|\right]\.\] As all \(\widehat{H}\) edges have weight \(1\), \[\mathbb{E}_{H\sim\mathcal{D}}\left[w(H)\right]\geq\mathbb{E}_{H}\left[\left| \widehat{H}\right|\right]\geq\frac{\ell^{3}}{4}=\Omega(\nu^{-2})\cdot w(\text{ MST})\.\] ### Lower Bound for the Unweighted Path In this section we prove an \(\Omega(\nu^{-2}\cdot\log(n\nu))\) lower bound on the lightness any oblivious reliable spanner for the shortest path metric induced by the unweighted path (for any finite stretch parameter). As this metric has doubling dimension \(1\), it follows that our light reliable spanner for doubling metrics is tight (Corollary 8) up to second order terms (for constant \(\operatorname{ddim}\) and \(\varepsilon\)). **Theorem 21** (Oblivious Lower Bound for the Path).: _For every \(\nu\in(0,1)\), every oblivious \(\nu\)-reliable spanner for the unweighted path graph \(P_{n}\) has lightness \(\Omega(\nu^{-2}\cdot\log(n\nu))\)._ Proof.: This proof follow similar lines to the proof of Theorem 20, however, there are some required adaptations due to the path metric, and an additional \(\log n\) factor which is introduced due to the \(\log n\) different scales. For simplicity we will assume that \(n=2^{m}\) and \(\nu=2^{-s}\) are powers of \(2\). We will also assume that \(m\geq s+5\). For every index \(i\) and \(H\in\operatorname{supp}(\mathcal{D})\), denote by \(\mathcal{E}_{H}^{i}\) the subset of \(H\) edges of weight at least \(2^{i}\). **Claim 32**.: _For every index \(i\in\{s+3,\ldots,m-2\}\), \(\mathbb{E}_{H\sim\mathcal{D}}\left[\left|\mathcal{E}_{H}^{i}\right|\right]\geq \frac{1}{\nu^{2}}\cdot\frac{n}{2^{i+3}}\)._ Proof.: Set \(\ell=\frac{1}{8\nu}\). Divide the path \(P_{n}\) to \(\frac{n}{2^{i}}\) intervals of length \(2^{i}\). Remove every other interval. Every remaining interval, partition further into \(\ell\) intervals of length \(\frac{2^{i}}{\ell}\). 
Denote these intervals by \(\left\{A_{j}^{k}\right\}_{k\in[\frac{n}{2^{i+1}}],j\in[\ell]}\) where \[A_{j}^{k}=\left\{v_{2(k-1)\cdot 2^{i}+(j-1)\cdot\frac{2^{i}}{\ell}+1},\cdots,v_{ 2(k-1)\cdot 2^{i}+j\cdot\frac{2^{i}}{\ell}}\right\}\.\] For every subgraph \(H\in\operatorname{supp}(\mathcal{D})\), we create an unweighted supergraph \(G_{H}\) where its vertex set is \(\left\{A_{j}^{k}\right\}_{k\in[\frac{n}{2^{i+1}}],j\in[\ell]}\), and add an edge from \(A_{j}^{k}\) to \(A_{j^{\prime}}^{k^{\prime}}\) if and only if \(k\neq k^{\prime}\) and \(H\) contains an edge between points in \(A_{j}^{k}\) and \(A_{j^{\prime}}^{k^{\prime}}\). Note that \(G_{H}\) is a \(\frac{n}{2^{i+1}}\)-partite simple graph. Denote by \(V(G_{H})\) and \(E(G_{H})\) the vertex and edge sets of \(G_{H}\) respectively. Clearly, every edge in \(G_{H}\) corresponds to (at least one) edge of weight at least \(2^{i}\) in \(H\). Thus, \(|E(G_{H})|\leq|\mathcal{E}_{H}^{i}|\), and hence in order to prove the claim it suffices to lower bound \(\mathbb{E}_{H\sim\mathcal{D}}\left[|E(G_{H})|\right]\). We will proceed by a double-counting argument. Consider a \(\frac{n}{2^{i+1}}\)-tuple \(\mathcal{J}=\left(j_{1},\ldots,j_{\frac{n}{2^{i+1}}}\right)\in[\ell]^{\frac{n}{2^{i+1}}}\). The graph \(G_{H}^{\mathcal{J}}=G_{H}\left[\left\{A_{j_{k}}^{k}\right\}_{k\in[\frac{n}{2^{i+1}}]}\right]\) is the graph induced by the \(\frac{n}{2^{i+1}}\) vertices \(\left\{A_{j_{k}}^{k}\right\}_{k\in[\frac{n}{2^{i+1}}]}\) of \(G_{H}\). Let \(B_{\mathcal{J}}=[n]\setminus\cup_{k}A_{j_{k}}^{k}\) be all the vertices not in the sub-intervals specified by \(\mathcal{J}\) (in particular \(B_{\mathcal{J}}\) contains the \(\frac{n}{2}\) vertices of the removed intervals). Let \(\Psi_{\mathcal{J}}\) be an indicator for the event that \(G_{H}^{\mathcal{J}}\) contains at least \(\frac{n}{2^{i+2}}\) edges. Note that if the event \(\Psi_{\mathcal{J}}\) did not occur, then in \(H\setminus B_{\mathcal{J}}\) the maximum size of a connected component is \(\frac{n}{2^{i+2}}\cdot\frac{2^{i}}{\ell}=\frac{1}{4}\cdot\frac{n}{\ell}\) (since at most \(\frac{n}{2^{i+2}}\) of the \(\frac{n}{2^{i+1}}\) intervals can be connected, and each has \(\frac{2^{i}}{\ell}\) points). In particular, \(B_{\mathcal{J},H}^{+}\setminus B_{\mathcal{J}}\) must contain at least \(\frac{n}{4\ell}\) points. As \(H\) is an oblivious \(\nu\)-reliable spanner, it follows that \[(1+\nu)\cdot|B_{\mathcal{J}}|\geq\mathbb{E}_{H\sim\mathcal{D}}\left[|B_{\mathcal{J},H}^{+}|\right]\geq|B_{\mathcal{J}}|+\frac{n}{4\ell}\cdot\Pr\left[\overline{\Psi_{\mathcal{J}}}\right]\.\] Hence \(\Pr\left[\overline{\Psi_{\mathcal{J}}}\right]\leq\nu\cdot|B_{\mathcal{J}}|\cdot\frac{4\ell}{n}<\frac{1}{2}\), and thus \(\mathbb{E}_{H\sim\mathcal{D}}\left[\left|E(G_{H}^{\mathcal{J}})\right|\right]\geq\frac{n}{2^{i+2}}\cdot\Pr\left[\Psi_{\mathcal{J}}\right]\geq\frac{n}{2^{i+3}}\). We will abuse notation and state \((k,j)\in\mathcal{J}\) if the \(k\)'th index in \(\mathcal{J}\) is \(j\) (i.e. \(j_{k}=j\)). Next, we sample \(\mathcal{J}\) uniformly at random from all the possible \(\frac{n}{2^{i+1}}\)-tuples, and thus \(\Pr\left[(k,j)\in\mathcal{J}\right]=\frac{1}{\ell}\). 
It holds that for every \(H\in\operatorname{supp}(\mathcal{D})\), \[\mathbb{E}_{\mathcal{J}}\left[\left|E(G_{H}^{\mathcal{J}})\right|\right]=\sum_ {\left(A_{j}^{k},A_{j^{\prime}}^{k^{\prime}}\right)\in E(G_{H})}\Pr\left[(k,j ),(k^{\prime},j^{\prime})\in\mathcal{J}\right]=\frac{1}{\ell^{2}}\cdot|E(G_{H} )|\.\] We now sample both a subgraph \(H\sim\mathcal{D}\), and independently a tuple \(\mathcal{J}\). It holds that: \[\frac{1}{\ell^{2}}\cdot\mathbb{E}_{H}\left[|E(G_{H})|\right]=\mathbb{E}_{H} \left[\mathbb{E}_{\mathcal{J}}\left[\left|E(G_{H}^{\mathcal{J}})\right|\right] \right]=\mathbb{E}_{\mathcal{J}}\left[\mathbb{E}_{H}\left[\left|E(G_{H}^{ \mathcal{J}})\right|\right]\right]\geq\mathbb{E}_{\mathcal{J}}\left[\frac{n}{2 ^{i+3}}\right]=\frac{n}{2^{i+3}}\,\] and thus \(\mathbb{E}_{H}\left[|E(G_{H})|\right]\geq n\cdot\frac{\ell^{2}}{2^{i+3}}= \Omega(\frac{n}{2^{i}\cdot\nu^{2}})\) as required. Consider a pair \(p<q\in[n]\) such that \(2^{w}\leq q-p<2^{w+1}\). The event \((p,q)\in H\) occurs if and only if all the events \(\left\{(p,q)\in\mathcal{E}_{H}^{i}\right\}_{i=0}^{w}\) occurred (note that all these \(w+1\) events are actually equivalent). As \(q-p\geq 2^{w}>\sum_{i=0}^{w}2^{i-1}\), it holds that \[\Pr\left[(p,q)\in H\right]\cdot(q-p) \geq\sum_{i=0}^{w}\Pr_{H\sim\mathcal{D}}\left[(p,q)\in\mathcal{E} _{H}^{i}\right]\cdot 2^{i-1}\] \[=\sum_{i=0}^{m-1}\Pr_{H\sim\mathcal{D}}\left[(p,q)\in\mathcal{E}_{ H}^{i}\right]\cdot 2^{i-1}\geq\sum_{i=s+3}^{m-2}\Pr_{H\sim\mathcal{D}}\left[(p,q)\in \mathcal{E}_{H}^{i}\right]\cdot 2^{i-1}\,\] where the equality holds as for every \(i\geq w+1\), \(\Pr_{H\sim\mathcal{D}}\left[(p,q)\in\mathcal{E}_{H}^{i}\right]=0\). By Claim 32 \[\mathbb{E}_{H\sim\mathcal{D}}\left[w(H)\right] =\sum_{p<q}\Pr_{H\sim\mathcal{D}}\left[(p,q)\in H\right]\cdot(q-p)\] \[\geq\sum_{p<q}\sum_{i=s+3}^{m-2}\Pr_{H\sim\mathcal{D}}\left[(p,q) \in\mathcal{E}_{H}^{i}\right]\cdot 2^{i-1}\] \[=\sum_{i=s+3}^{m-2}\mathbb{E}_{H\sim\mathcal{D}}\left[\left| \mathcal{E}_{H}^{i}\right|\right]\cdot 2^{i-1}\] \[\geq\sum_{i=s+3}^{m-2}\Omega(\frac{n}{2^{i}\cdot\nu^{2}})\cdot 2 ^{i-1}\] \[=\frac{n}{\nu^{2}}\cdot\Omega(m-s-4)=\frac{n}{\nu^{2}}\cdot\Omega (\log(n\cdot\nu))\,\] where the last equality holds as \(m=\log n\), \(s=\log\frac{1}{\nu}\), and thus \(m-s-4=\log\frac{n\cdot\nu}{16}\). The theorem now follows.
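To make the double-counting step above concrete, here is a minimal Python sketch (not part of the paper) that empirically checks the key fact that a fixed supergraph edge between block \(k\), sub-interval \(j\) and block \(k^{\prime}\), sub-interval \(j^{\prime}\) survives a uniformly random choice of \(\mathcal{J}\) with probability \(1/\ell^{2}\). The values of \(\ell\), the block indices, and the number of trials are illustrative only.

```python
import random

def survival_frequency(ell=8, num_blocks=16, trials=100_000):
    """Estimate the probability that a fixed edge (block k, sub-interval j) --
    (block k', sub-interval j') survives a uniformly random tuple J."""
    k, j = 0, 3      # hypothetical first endpoint
    kp, jp = 5, 1    # hypothetical second endpoint
    hits = 0
    for _ in range(trials):
        J = [random.randrange(ell) for _ in range(num_blocks)]
        if J[k] == j and J[kp] == jp:
            hits += 1
    return hits / trials

if __name__ == "__main__":
    print(survival_frequency(), 1 / 8**2)  # empirical value vs analytic 1/ell^2
```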
A \emph{$\nu$-reliable spanner} of a metric space $(X,d)$ is a dominating graph $H$ such that, for every failure set $B \subseteq X$, there is a set $B^+$ only slightly larger than $B$, $|B^+| \le (1 + \nu) \cdot |B|$, and the distances between pairs of points in $X \setminus B^+$ are approximately preserved in $H \setminus B$. Recently there has been much work on sparse reliable spanners, but their weight has not been analyzed; in this work we initiate the study of \emph{light} reliable spanners. The weight of a light reliable spanner is measured relative to the weight of a minimum spanning tree (MST) of $X$. First, we show that, unlike sparsity, every deterministic reliable spanner must have large lightness.
2307.00070
Map-based cosmology inference with weak lensing -- information content and its dependence on the parameter space
Field-level inference is emerging as a promising technique for optimally extracting information from cosmological datasets. Indeed, previous analyses have shown field-based inference produces tighter parameter constraints than power spectrum analyses. However, estimates of the detailed quantitative gain in constraining power differ. Here, we demonstrate the gain in constraining power depends on the parameter space being constrained. As a specific example, we find that field-based analysis of an LSST Y1-like mock data set only marginally improves constraints relative to a 2-point function analysis in $\Lambda$CDM, yet it more than doubles the constraining power of the data in the context of $w$CDM models. This effect reconciles some, but not all, of the discrepant results found in the literature. Our results demonstrate the importance of using a full systematics model when quantifying the information gain for realistic field-level analyses of future data sets.
Supranta S. Boruah, Eduardo Rozo
2023-06-30T18:20:05
http://arxiv.org/abs/2307.00070v1
Map-based cosmology inference with weak lensing - information content and its dependence on the parameter space ###### Abstract Field-level inference is emerging as a promising technique for optimally extracting information from cosmological datasets. Indeed, previous analyses have shown field-based inference produces tighter parameter constraints than power spectrum analyses. However, estimates of the detailed quantitative gain in constraining power differ. Here, we demonstrate the gain in constraining power depends on the parameter space being constrained. As a specific example, we find that field-based analysis of an LSST Y1-like mock data set only marginally improves constraints relative to a 2-point function analysis in \(\Lambda\)CDM, yet it more than doubles the constraining power of the data in the context of \(w\)CDM models. This effect reconciles some, but not all, of the discrepant results found in the literature. Our results demonstrate the importance of using a full systematics model when quantifying the information gain for realistic field-level analyses of future data sets. keywords: large-scale structure of Universe - gravitational lensing: weak - methods: data analysis ## 1 Introduction Current lensing analyses typically rely on 2-point functions (Hikage et al., 2019; Heymans et al., 2021; Abbott et al., 2022). However, 2-point analyses are sub-optimal due to the highly non-Gaussian nature of the late-time density field. Indeed, one can extract additional cosmological information by supplementing 2-point function measurements with non-Gaussian summary statistics (Takada and Jain, 2003; Kilbinger and Schneider, 2005), e.g. peak counts (Liu et al., 2015; Harnois-Deraps et al., 2021; Zurcher et al., 2022), one-point PDFs (Thiele et al., 2020; Boyle et al., 2021), wavelet transforms (Cheng et al., 2020; Cheng and Menard, 2021; Ajani et al., 2021), and Minkowski functionals (Kratochvil et al., 2012; Petri et al., 2013). Field-level inference (Jasche and Wandelt, 2013; Wang et al., 2014; Modi et al., 2018) is a new approach in which one forward-models the cosmology-dependent density field of the Universe as constrained by the data. A field-based inference approach is fully optimal at any given scale: it automatically and self-consistently incorporates _all_ summary statistics up to the recovered scale. For this reason, it has been proposed to model a broad range of observables, including weak lensing (Porqueres et al., 2021, 2022; Fiedorowicz et al., 2022, 2022), CMB lensing (Millea et al., 2019, 2020, 2021), peculiar velocities (Boruah et al., 2022; Prideaux-Ghee et al., 2022; Bayer et al., 2022), and galaxy clustering (Ramanah et al., 2019; Dai and Seljak, 2022). Although numerically challenging, steady progress in numerical techniques (Modi et al., 2021; Li et al., 2022; Modi et al., 2022; Dai and Seljak, 2022) is helping realize the potential of this new technique. While there is consensus in the literature that field-based inference leads to tighter parameter constraints than 2-point analyses, there are also significant differences in the detailed quantitative measure of this improvement. Leclercq and Heavens (2021) found that field-based inference leads to massive improvement in parameter constraints over 2-pt function analysis, even for only mildly non-Gaussian fields. Similarly, Porqueres et al. (2022, 2023) found large gains for a field-level cosmic shear analysis. By contrast, Boruah et al. 
(2022) found field-based inference results in only modest improvements for cosmic shear analyses. In light of these differences, we have set out to examine the information gain from field-level inference of weak lensing data in more detail. ## 2 Formalism We model the convergence field as a lognormal random field. Lognormal fields are commonly used to approximate non-Gaussian density and convergence fields in cosmological applications (Coles and Jones, 1991; Jasche and Kitaura, 2010; Clerkin et al., 2017; Xavier et al., 2016). Throughout this paper, we perform our analysis at a pixel scale of 10 arcminutes. This is sufficiently large for the lognormal distribution to provide a reasonable description of the underlying convergence field (Xavier et al., 2016; Clerkin et al., 2017; Friedrich et al., 2020). We do not consider smaller scales to avoid having to model baryonic feedback, which is expected to significantly impact the matter density distribution at higher resolution (e.g., Eiffler et al., 2015; Huang et al., 2019; Osato et al., 2021). When modelled as a lognormal variable, \(\kappa\) is related to a Gaussian variable \(y\) via \[\kappa=e^{y}-\lambda, \tag{1}\] where \(\lambda\) is called the shift parameter. The shift parameter denotes the minimum value that \(\kappa\) can take, and directly impacts the non-Gaussian features of the resulting convergence field. The mean of the \(y\)-field is chosen so as to enforce the condition that the \(\kappa\) field has a zero mean. We use the perturbation theory code cosmomentum (Friedrich et al., 2018, 2020) to calculate the cosmology-dependent shift parameters. For further details on lognormal fields, we refer the reader to Boruah et al. (2022). We use the field-level analysis pipeline of Boruah et al. (2022) to analyze synthetic weak lensing data generated from a lognormal convergence map. To create the synthetic data, we assume the redshift distribution forecasted for LSST-Y1 in The LSST Dark Energy Science Collaboration et al. (DESC-SRD, 2018). We then analyze the synthetic data using two different models: _(i)_ a two-parameter toy model presented in Section 3, and _(ii)_ a cosmological model in which the power spectrum and the shift parameters are determined by the underlying cosmological parameters. Following Leclercq & Heavens (2021), the toy-model analysis of section 3 is non-tomographic. The cosmological analysis of section 4 assumes the data is binned into 4 tomographic bins. ## 3 Toy model with scaling parameters Leclercq & Heavens (2021) used a two-parameter log-normal toy model to demonstrate that field-based analyses can dramatically outperform standard 2-point approaches. This result is apparently in tension with that of Boruah et al. (2023), who find only marginal improvements in a \(\Lambda\)CDM cosmology. To resolve this apparent discrepancy, we analyzed a synthetic data set using two different models: a toy model similar to the one used by Leclercq & Heavens (2021), and the standard \(\Lambda\)CDM model. Our toy model is constructed so that our fiducial toy model exactly matches our fiducial \(\Lambda\)CDM model. Our fiducial model is a flat \(\Lambda\)CDM universe with \(\Omega_{\rm m}=0.279\), \(\sigma_{8}=0.82\), \(\Omega_{\rm b}=0.046\), \(h=0.7\), \(n_{\rm s}=0.97\). This choice defines the power-spectrum \(C_{y}(\ell)\) and the shift parameter \(\lambda\) of the lognormal random field \(\kappa\), where \(y=\ln(\kappa+\lambda)\), and \(y\) is a Gaussian random field. 
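As an illustration of Eq. (1) and the zero-mean condition, the following minimal numpy sketch draws a lognormal convergence map. For simplicity it uses a white-noise Gaussian \(y\) with an illustrative shift parameter and variance rather than a field drawn from the true \(C_{y}(\ell)\), so it is a toy stand-in rather than the actual analysis pipeline.

```python
import numpy as np

def lognormal_kappa_map(shape=(128, 128), lam=0.012, sigma_y=0.25, seed=0):
    """Toy realization of Eq. (1): kappa = exp(y) - lambda with y Gaussian.
    Setting the mean of y to ln(lambda) - sigma_y**2 / 2 makes <exp(y)> = lambda,
    which enforces the zero-mean condition on kappa. Here y is white noise
    instead of a field with the power spectrum C_y(l)."""
    rng = np.random.default_rng(seed)
    mu_y = np.log(lam) - 0.5 * sigma_y**2
    y = rng.normal(mu_y, sigma_y, size=shape)
    return np.exp(y) - lam

kappa = lognormal_kappa_map()
print(kappa.mean())   # close to zero by construction
print(kappa.min())    # bounded below by -lambda, the shift parameter
```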
Our toy model depends on two parameters \(\alpha\) and \(\beta\) that rescale: 1) the power-spectrum \(C_{y}\); or 2) the shift parameter \(\lambda\). These rescalings are defined via \[\log C_{y}(\ell) \rightarrow \alpha\times\log C_{y}(\ell) \tag{2}\] \[\lambda \rightarrow \beta\times\lambda. \tag{3}\] For simplicity, we refer to this toy model as the \(\alpha\)-\(\beta\) model, with \(\alpha=\beta=1\) corresponding to our fiducial model. As in Leclercq & Heavens (2021), we restrict our analysis to a single tomographic redshift bin, for which we adopt the expected redshift distribution of source galaxies for the LSST-Y1 data set. We produce a lognormal realization of the fiducial model, which we then analyze using the field-based inference framework of Boruah et al. (2022). We perform our analysis both in the toy \(\alpha\)-\(\beta\) model and the \(\sigma_{8}\)-\(\Omega_{\rm m}\) parameter space. Both analyses rely on the same noisy shear map as the data vector. Figure 1 compares the posteriors for the \(\alpha\)-\(\beta\) model (left) and the \(\Lambda\)CDM model (right). Red and blue contours correspond to posteriors from a field-based (red) and a power spectrum based (blue) analysis. Evidently, field-based inference dramatically improves parameter constraints in our toy model, but has only a modest impact on the posteriors in the \(\sigma_{8}\)-\(\Omega_{\rm m}\) space. This demonstrates that: 1) despite being superficially different, the results of Leclercq & Heavens (2021) and Boruah et al. (2022) are fully consistent with each other; and 2) the amount of information gained from field-based inference depends on the parameter space of interest. Figure 1: Comparison of the constraints obtained using field-based inference and power spectrum analysis for the toy model described in section 3 (_left_), and a \(\Lambda\)CDM cosmological model (_right_). We use the same observed data vector — i.e. the noisy realization of the observed shear field for the two panels. We plot \(\beta/\alpha^{2}\) on the \(y\) axis for the toy model to account for the strong degeneracy between these two parameters. We see that field-based inference dramatically improves parameter constraints in the \(\alpha\)-\(\beta\) toy model, but has only a modest impact on cosmological posteriors (Boruah et al., 2022). That is, the gains due to field-based inference methods relative to 2-point analyses depend on the parameter space under consideration. We can readily understand the difference in gains between the two parameter spaces as follows. In the \(\alpha\)-\(\beta\) toy model, the 1-point and 2-point functions of the field vary in nonphysical and largely independent ways. However, in the real Universe, the power spectrum and the 1-point PDF are determined by the same physics and therefore contain correlated information. To demonstrate this, we select models from the power spectrum posteriors in each of the two parameter spaces we considered. The models selected are exactly \(2\sigma\) away from the fiducial model and along the degeneracy direction of the 2-point posterior in each space. Figure 2 compares the 1-point function for each of these models. We see that the difference between the 1-point function for each of these models and that of the fiducial model is many times larger in the \(\alpha\)-\(\beta\) parameter space than in the \(\sigma_{8}\)-\(\Omega_{\rm m}\) space. 
Moreover, the differences in the 1-point functions in the \(\sigma_{8}\)-\(\Omega_{\rm m}\) parameter space are comparable to the cosmic variance errors, explaining why field-based inference results in only marginal gains relative to the 2-point posterior. In short, the reason the toy-model of Leclercq & Heavens (2021) results in large gains is because it allows for an unphysical de-correlation of the information content of the 1- and 2-point functions of the convergence field. ## 4 Implications for cosmological inference We have seen that that the choice of parameter space impacts the relative gain of field-based inference methods relative to traditional 2-point analyses. This raises the question: are there other cosmological parameters for which the gain in cosmology constraints is large? Here, we compare cosmological constraints in \(w\)CDM models from cosmic shear as derived from field-based and power spectrum analyses. In contrast to the previous section, we perform a tomographic analysis with 4 redshift bins, each containing the same number of galaxies. The redshift distribution of the bins is based on the expected LSST-Y1 redshift distributions. The source density is set to 10 galaxies/arcmin\({}^{2}\). Figure 3 summarizes our results. The figure demonstrates that a field-based approach significantly improves parameter constraints relative to the standard 2-point analysis in a \(w\)CDM cosmology. We quantify the improvement using the covariance matrix of the posterior. Specifically, we define the figure-of-merit \[{\rm FoM}_{ij}=\frac{1}{\sqrt{\det({\rm Cov}[\theta_{i},\,\theta_{j}])}}, \tag{4}\] where, \({\rm Cov}[\theta_{i},\,\theta_{j}]\) denotes the covariance matrix of the parameters \(\theta_{i}\) and \(\theta_{j}\) as computed from the MCMC posterior samples. We find that field-based inference leads to an improvement in the figure of merit by a factor of 2.2, 2.2, and 2.5 times in the \(\Omega_{\rm m}\)-\(A_{\rm s}\), \(\Omega_{\rm m}\)-\(w\) and \(A_{\rm s}\)-\(w\) subspaces respectively. These improvements are particularly noteworthy in that the cosmological information content of the shear power spectrum begins to saturate at \(\approx 10\) arcmin scales (Kayo et al., 2013; Boruah et al., 2023). That is, field-based analyses are a powerful complement to efforts centered on improving small scale modeling. As in section 3, the additional information in the field-based inference analysis comes from the 1-point function. This is illustrated in Figure 4. There, we compare: 1) the spread in the predicted 1-point functions obtained by sampling the power-spectrum analysis posterior; and 2) the observational uncertainties in the 1-point distribution. This comparison is done both for \(\Lambda\)CDM and \(w\)CDM posteriors, and each of the four tomographic bins. We see that the spread in the one-point function within the \(\Lambda\)CDM chain is less than or comparable to the statistical noise in the one-point function measurement. On the other hand, the spread in the predicted 1-point distributions from the \(w\)CDM power spectrum posterior is broader than observational uncertainties. Consequently, the 1-point distribution function adds significant information to the 2-point analysis for \(w\)CDM models. Conversely, a measurement of the 1-point distribution adds little information within the context of a \(\Lambda\)CDM analysis. Our results are in tension with those of Porqueres et al. (2022) and Porqueres et al. 
(2023), who report large gains from field-based inference in a \(\Lambda\)CDM cosmology. Barring numerical issues/bugs in one or both of these codes, this discrepancy can only be explained by the differences in the forward models. The convergence field in Porqueres et al. (2023) is calculated using 2LPT simulations plus ray tracing, whereas we rely on an approximate log-normal model. However, the lognormal model provides a good description of the convergence field at the current resolution (e.g., Xavier et al., 2016; Clerkin et al., 2017; Fiedorowicz et al., 2022) and therefore the massive difference between the two results is surprising. Understanding the sources of this discrepancy is outside the scope of this work, but the difference highlights the need for more extensive testing and detailed comparisons between different field-level inference codes. In this context we note that in Boruah et al. (2022) we have verified that our posteriors match the analytic expectation when using a Gaussian random field model. ## 5 Summary and discussion We used the lognormal model to study the relative information content from field-based and 2-point analyses of the convergence field. We confirm the finding that field-based parameter posteriors are significantly tighter than those of the corresponding 2-point analysis in the case of the Leclercq and Heavens (2021) toy model. Figure 2: Comparison of the 1-point distributions of models that are \(2\sigma\) away from the fiducial value in the 2-point posterior analyses for both the \(\alpha\)–\(\beta\) (red dashed and dotted lines) and \(\sigma_{8}\)–\(\Omega_{\rm m}\) (blue dashed and dotted lines) parameter spaces. The bottom panel shows the differences between the 1-point distributions from that of the fiducial model. The error bars in the bottom panel are the noise in the measured 1-point distribution. Note that the differences in the 1-point distribution are highly significant in the case of the \(\alpha\)–\(\beta\) parameter space, but only marginally significant in the \(\sigma_{8}\)–\(\Omega_{\rm m}\) space. However, we have also demonstrated that the relative gains of field-based inference depend on the specific parameter space being investigated. In particular, we have found field-based inference leads to modest gains in \(\Lambda\)CDM, but large gains in \(w\)CDM. These improvements are driven by the information content in the 1-point distribution of the convergence field. It is important to note that in this analysis we have not considered systematic effects. As we saw in section 3, the constraining power depends on the parameter space considered. Therefore, the addition of systematic parameters to the model will impact our conclusions regarding the impact of field-based inference on cosmological posteriors. That said, several studies in the literature have shown that non-Gaussian information can improve constraints on systematics parameters such as photo-\(z\) biases (Jasche and Wandelt, 2012; Tsaprazi et al., 2023) and intrinsic alignment (Pyne and Joachimi, 2021), which would in turn likely produce gains in cosmological constraining power. Detailed quantification of these gains will require further analyses, which we leave for future work. ## Acknowledgement We thank Alan Heavens for suggestions that led to some of the early tests in the paper and Elisabeth Krause for useful discussions. 
The computation presented here was performed on the High Performance Computing (HPC) resources supported by the University of Arizona TRIF, UITS, and Research, Innovation, and Impact (RII) and maintained by the UArizona Research Technologies department. SSB is supported by the Department of Energy Cosmic Frontier program, grant DE-SC0020215. ER's work is supported by NSF grant 2009401. ER also receives funding from DOE grant DE-SC0009913 and NSF grant 2206688. ## Data Availability Statement The data underlying this article will be shared on request to the corresponding authors. Figure 4: Spread in the 1-point function calculated for the cosmological parameters drawn from the power spectrum posterior for a \(\Lambda\)CDM (red) and a wCDM analysis (blue). The black bars show the expected statistical error in the recovered distributions including shape noise. The differences in the posterior predictions for the 1-point distributions are larger than the observational errors in wCDM, but smaller in \(\Lambda\)CDM. Consequently, field-based inference leads to large improvements in parameter constraints in the context of wCDM, but only modest improvements in \(\Lambda\)CDM. Figure 3: Comparison of the cosmological constraints with power spectrum analysis (blue) and map-based inference (red) for the wCDM parameter space. We find that map-based inference leads to much stronger constraints than a power spectrum based analysis. This is in contrast to our findings within the context of \(\Lambda\)CDM, where field-based inference resulted in only modest improvements.
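For readers who want to reproduce the figure-of-merit comparison of Eq. (4) from their own chains, the short sketch below computes \({\rm FoM}_{ij}\) directly from MCMC samples. The Gaussian mock chains are placeholders for illustration only and are not the chains used in this work.

```python
import numpy as np

def figure_of_merit(samples_i, samples_j):
    """Eq. (4): FoM_ij = 1 / sqrt(det Cov[theta_i, theta_j]), estimated from samples."""
    cov = np.cov(np.vstack([samples_i, samples_j]))
    return 1.0 / np.sqrt(np.linalg.det(cov))

# Illustrative comparison with made-up posterior chains (placeholder numbers):
rng = np.random.default_rng(1)
pk_chain = rng.multivariate_normal([0.28, -1.0], [[4e-4, 3e-4], [3e-4, 4e-3]], 50_000).T
fb_chain = rng.multivariate_normal([0.28, -1.0], [[2e-4, 1e-4], [1e-4, 1.5e-3]], 50_000).T
print(figure_of_merit(*fb_chain) / figure_of_merit(*pk_chain))  # improvement factor
```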
Field-level inference is emerging as a promising technique for optimally extracting information from cosmological datasets. Indeed, previous analyses have shown that field-based inference yields tighter parameter constraints than power spectrum analyses. However, estimates of the detailed quantitative gain differ. Here we show that the gain in constraining power depends on the parameter space being constrained. As a specific example, a field-based analysis of an LSST Y1-like mock data set only marginally improves constraints relative to a 2-point function analysis in $\Lambda$CDM, yet it greatly increases the constraining power of the data for $w$CDM models. This effect reconciles some, but not all, of the discrepant results reported in the literature. Our results demonstrate the importance of using a full systematics model when quantifying the information gain for realistic field-level analyses of future data sets.
2302.14803
Learned Risk Metric Maps for Kinodynamic Systems
We present Learned Risk Metric Maps (LRMM) for real-time estimation of coherent risk metrics of high dimensional dynamical systems operating in unstructured, partially observed environments. LRMM models are simple to design and train -- requiring only procedural generation of obstacle sets, state and control sampling, and supervised training of a function approximator -- which makes them broadly applicable to arbitrary system dynamics and obstacle sets. In a parallel autonomy setting, we demonstrate the model's ability to rapidly infer collision probabilities of a fast-moving car-like robot driving recklessly in an obstructed environment; allowing the LRMM agent to intervene, take control of the vehicle, and avoid collisions. In this time-critical scenario, we show that LRMMs can evaluate risk metrics 20-100x times faster than alternative safety algorithms based on control barrier functions (CBFs) and Hamilton-Jacobi reachability (HJ-reach), leading to 5-15\% fewer obstacle collisions by the LRMM agent than CBFs and HJ-reach. This performance improvement comes in spite of the fact that the LRMM model only has access to local/partial observation of obstacles, whereas the CBF and HJ-reach agents are granted privileged/global information. We also show that our model can be equally well trained on a 12-dimensional quadrotor system operating in an obstructed indoor environment. The LRMM codebase is provided at https://github.com/mit-drl/pyrmm.
Ross Allen, Wei Xiao, Daniela Rus
2023-02-28T17:51:43
http://arxiv.org/abs/2302.14803v1
# Learned Risk Metric Maps for Kinodynamic Systems ###### Abstract We present Learned Risk Metric Maps (LRMM) for real-time estimation of coherent risk metrics of high-dimensional dynamical systems operating in unstructured, partially observed environments. LRMM models are simple to design and train--requiring only procedural generation of obstacle sets, state and control sampling, and supervised training of a function approximator--which makes them broadly applicable to arbitrary system dynamics and obstacle sets. In a parallel autonomy setting, we demonstrate the model's ability to rapidly infer collision probabilities of a fast-moving car-like robot driving recklessly in an obstructed environment; allowing the LRMM agent to intervene, take control of the vehicle, and avoid collisions. In this time-critical scenario, we show that LRMMs can evaluate risk metrics 20-100x times faster than alternative safety algorithms based on control barrier functions (CBFs) and Hamilton-Jacobi reachability (HJ-reach), leading to 5-15% fewer obstacle collisions by the LRMM agent than CBFs and HJ-reach. This performance improvement comes in spite of the fact that the LRMM model only has access to local/partial observation of obstacles, whereas the CBF and HJ-reach agents are granted privileged/global information. We also show that our model can be equally well trained on a 12-dimensional quadrotor system operating in an obstructed indoor environment. The LRMM codebase is provided at [https://github.com/mit-drl/pyxmm](https://github.com/mit-drl/pyxmm). ## I Introduction The estimation of risk is a topic that spans many research domains: from medical treatments [1], to economics and portfolio optimization [2, 3], to robotics and autonomous systems [4]. Within the field of autonomous systems, we often consider risk in terms of probability that a system causes harm to itself or its environment, and the expected cost of such events. In this work we seek to estimate the probability of a dynamical system arriving at a state of failure given an initial configuration and control policy. Such risk estimation problems have been a major focus within autonomous automobile research where collision avoidance is of utmost concern [6]. Many of these works emphasise prediction of future states of other vehicles [7, 8, 9]. These works often adopt domain-specific assumptions that make it difficult to extend techniques beyond the automotive domain--such as strict assumptions on vehicle dynamics[8]; the assumption that all obstacles or other agents can be represented as points [8], balls [9], or ellipsoids [7]; and assumptions that other agents act in predictable [7, 9] and/or self-preserving [8] fashions. A closely related problem is that of trajectory collision probability assessment [10, 11] and risk-aware motion planning [10, 12, 13]. These works evaluate the probability that a particular trajectory intersects the obstacle space given bounded process and/or measurement noise. This is a related--yet _distinct_--problem from the one considered in this paper: i.e. evaluating the probability that a system will arrive at a future failure state given some initial state and control policy. Furthermore risk-aware motion planning algorithms almost all rely on an _a priori_ map of the obstacle space with, at most, bounded uncertainty on obstacle locations [12]. 
Therefore, even though our work shares concepts and keywords with the field of sampling-based motion planning, it occupies a different problem space than algorithms like rapidly-exploring random tree (RRT) [14]. Hamilton-Jacobi reachability (HJ-reach) analysis [15] and control barrier functions (CBFs) [16]--both of which are Fig. 1: Visualization of inferred risk metrics for 1000 sampled states of a Dubins vehicle within a procedurally generated maze [5]. The minimum turning radius of the vehicle is \(\sim\)10 pixels. The estimated probability of collision with walls for a random control policy over a finite time horizon is colorized. We see that states with an imminent wall collision are correctly estimated as high risk, whereas unobstructed states are low risk of collision. For clarification, the base of each arrow, not its tip, represent the exact \(xy\)-coordinate of the sampled state. discussed more thoroughly in Sec. II and IV--offer rigorous methods for estimating unsafe sets and providing safety-guaranteed controls for autonomous systems. However, the former suffers from limitations on scaling to high-dimensional systems (i.e the "curse of dimensionality") and the latter lacks generalizability to arbitrary system dynamics and obstacle sets. The scalability limitations of HJ-reachability and generalizability limitations of CBFs motivate the need for alternative methods for assessing safety of dynamical systems and discovering safe control policies. Drawing from the field of motion planning, sampling-based methods are a well-known technique for overcoming the curse of dimensionality[14]. Sampling-based techniques can be applied to systems with arbitrary dynamics and obstacle spaces since these properties are inherent to the sampling process. The primary drawback of sampling-based techniques to safety-critical applications is that they may lack the strict safety guarantees that alternative methods provide. Lew et al. [17, 18] provide strong justification for the use of sampling-based methods for safety-critical applications by proving asymptotic convergence to a conservative over-approximation of reachable sets using random set theory. In this paper we provide a simple, sampling-based method for approximating risk metrics for arbitrary dynamical systems in partially-observed environments and train a supervised-learning model--referred to as learned risk metric maps (LRMM)--that can rapidly infer failure probabilities in _a priori_ unknown environments. We then show that, in spite of HJ-reach and CBF safety guarantees, the LRMM model can provide a greater level of safety in a time-critical parallel autonomy task due to it's rapid estimation of risk. ## II Related Work In this section we position our work relative to the closely related techniques of Hamilton-Jacobi reachability (HJ-reach) and control barrier functions (CBF). The theoretical foundations for this paper can be traced back to the concept of inevitable collision states; i.e. the set of states for which a dynamical system cannot avoid future collisions with obstacles regardless of control input [19, 20]. HJ-reachability represents a modern treatment of inevitable collision states that provides a rigorous theoretical foundation for computing the set of unsafe states that may lead to failure (e.g. collision); often referred to as the "backward reachable set" (BRS) [15]. 
The BRS of the obstacle space, \(\mathcal{C}_{\texttt{obs}}\), is computed by solving for the value function, \(V(t,s_{\omega})\), of the Hamilton-Jacobi-Isaacs partial differential equation (PDE) for all states over time horizon \(t\)[21, 22]. The value function then allows us to described the BRS of the obstacle space as \(\mathcal{V}_{\texttt{obs}}(t)=\{s:V(t,x)\leq 0\}\). This is the set--often called the _unsafe set_--for which there exists a control action, \(a\) that would lead the system to collision with the obstacle space over time horizon \(t\). The HJ value function also provides a means to calculate the optimal control for avoiding obstacles as a gradient of the value function; see Bajcsy et al. for details [23]. HJ-reachability has been used to develop provably-safe autonomous navigation algorithms for arbitrary dynamical systems within partially-observed environments by treating the unknown portion of the environment as an obstacle [23]. While the general applicability of HJ-reachability makes it a powerful tool, it suffers from the "curse of dimensionality" that make it very difficult to apply the technique in real-time on anything save the simplest, low-dimensional, slow-moving systems [15, 24, 23]. Control Barrier Functions (CBF) offer an alternative, complementary approach to HJ-reachability that provide provably-safe control of autonomous systems in the presence of obstacles [16, 25]. In contrast to HJ-reachability, CBFs can provide real-time safe control for high-dimensional control-affine systems. CBF-based controllers--or more precisely, controllers based on CBFs _and_ control Lyapunov functions (CLFs)--work by mapping state (safety) constraints onto a set of control constraints by taking Lie derivatives of the constraints along the dynamics, and then encoding them as inequality constraints within a quadratic program (QP) that can be solved to determine stabilizing (i.e. target-tracking, using CLFs) and safe (i.e. obstacle-avoiding, using CBFs) control inputs; see Ames et al. [16] for a detailed discussion. General methods do not exist for discovering CBFs for a given control system, requiring that they be hand-designed. Furthermore CBFs require an explicit mathematical model of obstacles or keep-out regions to be avoided; something that is often impractical in real-world applications. Choi et al. [24] propose a method for blending HJ-reachability and CBFs; however this technique still suffers from the poor scalability to high-dimensional systems. The learned risk metric maps (LRMM) described in the following sections attempt to overcome the drawbacks of HJ-reach and CBF methods by providing a constructive (i.e. not hand-designed) model that rapidly estimates risk metrics for high-dimensional dynamical systems within arbitrary obstacle spaces. ## III Risk Metric Maps Our work considers partially observable Markov decision processes [26] defined as the tuple \((\mathcal{C},\mathcal{A},\mathcal{O},T,R)\). \(\mathcal{C}\) is the configuration space that defines the possible states of the system, \(s\in\mathcal{C}\), as well as the obstacles and/or failure-states of the system, \(\mathcal{C}_{\texttt{obs}}\subset\mathcal{C}\). Let the obstacle-free set of the configuration space be defined as \(\mathcal{C}_{\texttt{free}}=\mathcal{C}\setminus\mathcal{C}_{\texttt{obs}}\). \(\mathcal{A}\) is the action space and an action is given as \(a\in\mathcal{A}\). \(\mathcal{O}(s)\) is the observation function; an observation is given as \(o\sim\mathcal{O}(s)\). 
The state transition function, \(T\left(s^{\prime}|s,a\right)\), represents probability of arriving in state \(s^{\prime}\) when taking action \(a\) in state \(s\). The reward is drawn from the reward function \(r\sim R(s,a)\). A stochastic policy \(\pi(a|o)\in\Pi\) represents the probability, or probability density, of taking action \(a\) given observation \(o\). An observation-action trajectory from timestep \(\alpha\) to \(\omega\) is defined as \(\tau_{\alpha,\omega}=\left(o^{(\alpha)},a^{(\alpha)},o^{(\alpha+1)},a^{(\alpha +1)},...,o^{(\omega-1)},a^{(\omega-1)},o^{(\omega)}\right)\). Note that we use parenthetical superscripts to represent specific time steps. Let \(\mathcal{T}\) be the set of all possible trajectories \(\tau\) under dynamics \(T\) and policy \(\pi\). Let \(J(\tau_{\alpha,\omega}):\mathcal{T}\rightarrow\mathbb{R}\) be the _trajectory cost function_ that maps a trajectory to a real-valued number (e.g. fuel usage, time, etc.). With slight abuse of notation on the variables \(\alpha\) and \(\omega\), let us define the _cost-limited forward reachable set_[27] from state \(s_{\alpha}\) to all states \(s_{\omega}\) \[\Omega(s_{\alpha},J_{\mathtt{th}})=\{s_{\omega}\in\mathcal{C}|\exists\tau_{ \alpha,\omega}\in\mathcal{T},J(\tau_{\alpha,\omega})\leq J_{\mathtt{th}}\} \tag{1}\] Define the _failure cost function_ \[Z:\mathcal{C}\rightarrow\mathbb{R} \tag{2}\] that maps each state \(s\in\mathcal{C}\) to a real value that corresponds to the failure-state set \(\mathcal{C}_{\mathtt{obs}}\). In this work we define a failure cost function such that \[Z(s_{\omega}\in\mathcal{C}_{\mathtt{obs}})=1 \tag{3}\] \[Z(s_{\omega}\in\mathcal{C}_{\mathtt{tree}})=0\] Let \(\mathcal{Z}\) be the set of all failure cost functions \(Z\). Define the _risk metric_ as \[\rho:\mathcal{Z}\rightarrow\mathbb{R} \tag{4}\] In this work we only consider _coherent risk metrics_ such as conditional value at risk (CVaR), worst case, or expected cost [4]. Finally--with slight abuse of notation on \(\rho\)--we can define the _risk metric map_ \[\rho\left(s,\pi;Z,\mathcal{C}_{\mathtt{obs}},T\right):\mathcal{C}\times\Pi \rightarrow[0,1] \tag{5}\] which maps the configuration and policy space to the real-value range \([0,1]\) by assigning to each state-policy pair \((s,\pi)\) a risk value parameterized by the failure cost function \(Z\), obstacle space \(\mathcal{C}_{\mathtt{obs}}\), and vehicle dynamics \(T\). By using the binary failure cost function in Eqn. 3 and expected cost as our risk metric, our risk metric map is identical to the _probability_ the system arrives at a failure state over an infinite time horizon when starting from state \(s\) and following policy \(\pi\). For brevity of notation we drop the parameter variables and refer to the risk metric map as \(\rho(s,\pi)\) or even \(\rho(s)\) when the policy is assumed to be uniform random sampling of bounded controls. Note that we can relate the risk metric map to _inevitable collision obstacles_ (ICO) [19] as \[ICO\left(\mathcal{C}_{\mathtt{obs}}\right)=\{s\in\mathcal{C}|\forall\pi,\rho( s,\pi)=1\} \tag{6}\] ### _Approximate Risk Metrics_ Many kinodynamic systems of interest--such as cars, aircraft, maritime vessels, etc.--have continuous state and action spaces making enumeration over all states and control trajectories in Eqn. 1 impossible. Furthermore, obstacle spaces in Eqn. 3 are often implicitly defined and only sensed with partial observability. 
This means that the explicit derivation of risk metrics at each state of a system is impractical, if not impossible. We can, however, formulate a finite-horizon approximation of Eqn. 5 at a given state. Algorithm 1 gives a recursive, sampling-based estimate of the risk metric at state \(s\), which is arrived at by trajectory \(\tau\), and subsequently following policy \(\pi\) over a time horizon of \(t\cdot m\).
```
procedure ApprxRiskMetric(\(s,\tau,\pi,t,n,m\))
  \(z\leftarrow\textsc{CheckFailure}(s,\tau)\)
  if \(z=1\) or \(m=0\) then return \(z\)
  \(P\leftarrow\emptyset\)
  \(V,E\leftarrow\textsc{SampleForwardReachableSet}(s,\pi,t,n)\)
  for \(s_{\omega},\tau_{\omega}\) in \(\textsc{zip}(V,E)\) do
    \(P\gets P\cup\{\textsc{ApprxRiskMetric}(s_{\omega},\tau_{\omega},\pi,t,n,m-1)\}\)
  return CoherentRiskMetric(\(P\))
```
**Algorithm 1** Approximate Risk Metric The algorithm works by checking if a state is in collision with obstacles (i.e. the failure cost function from Eqn. 3) and then recursively sampling the stochastic policy \(\pi\), which samples the forward reachable set \(\Omega(s,t)\) of state \(s\) using elapsed time as the trajectory cost with threshold \(t\) (Eqn. 1). This recursive sampling generates a tree rooted at \(s\) with a branching factor of \(n\) and depth of \(m\). The recursive base of the algorithm is to simply return the value of the failure cost function at the leaf node, which is either 0 or 1. Otherwise the algorithm returns a coherent risk metric--such as expected cost--over all immediately adjacent sampled states in the tree. ### _Learned Risk Metric Maps_ While Alg. 1 provides an approximation to risk metrics in Eqn. 5, this only provides an approximation at a single state and not an approximation over the entire state space. Furthermore, we need privileged access to obstacle information in order to perform collision checking during the sampling process. This means that Alg. 1 is of limited use in real-time systems where obstacles may only be partially sensed (e.g. via cameras or Lidar). We seek a method for approximating risk metrics over finite time horizons for kinodynamic systems operating in unstructured, partially observed obstacle spaces. To achieve this we use Alg. 1 as a data generator to train a function approximator (e.g. neural network) that takes as input the local observation, \(o\sim\mathcal{O}(s)\), and outputs the _inferred_ risk metric, \(\hat{\rho}(s)\). Training data is generated under policy \(\pi_{g}\) (e.g. uniform random from bounded control space [17, 18]). Therefore we define the _learned risk metric map_ (LRMM) function approximator as \[\hat{\rho}(o;\pi_{g}):\mathcal{O}\rightarrow[0,1] \tag{7}\] Algorithm 2 gives a rough overview of the relatively simple procedure for training the LRMM model in Eqn. 7. This consists of generating obstacle sets; sampling states from these obstructed configuration spaces; pulling observations from these states and computing risk metrics at each state; and finally, training a supervised model on these observations and risk metrics. Similar to sampling-based motion planners (e.g. RRT [14]) and online planning methods (e.g. 
Monte Carlo tree search [26]), approximate risk metrics use sampling techniques to explore the configuration space using tree structures. However, LRMM differs in that it does not directly attempt to determine policies or motion plans for navigating the configuration space. Therefore, it is not necessary to form a connected graph with all sampled states; instead, many disjoint trees can be sampled throughout the configuration space, making the approach highly parallelizable. ## IV Experiments In this section we provide simulation experiments demonstrating the utility of learned risk metric maps. We provide a case study on parallel autonomy, comparing the effectiveness of LRMMs with control barrier functions (CBFs) and Hamilton-Jacobi reachability (HJ-reach) on a 4-dimensional car-like robot. We then provide training results for high-dimensional systems in unstructured obstacle spaces, such as a quadrotor in an obstructed room. For all experiments the LRMM models are trained with Alg. 2 using a branching factor \(n=32\), a tree depth \(m=2\), a time horizon of \(t=2\) seconds per tree depth, and the expected value of our failure cost function (Eqn. 3) as our coherent risk metric. Similarly, a held-out test set of data is generated. Models consist of a 64-neuron, single-layer, feed-forward network with a sigmoid activation function. Another sigmoid function is applied at the output layer to bound outputs to the range \([0,1]\) so that outputs represent probabilities. The training process uses a mean squared error (MSE) loss function for the regression problem. For the software implementation, state and control spaces are defined and sampled using the Open Motion Planning Library (OMPL) [28]. Network models and training are implemented with PyTorch [29]. In Sec. IV-A, CBF quadratic programs are solved using the CVXOPT library [30]. The HJ-reachability PDE is defined and solved with the OptimizedDP library [31]. In Sec. IV-B, PyBullet [32] is used for quadrotor system dynamics, 3D collision checking, and rendering. All data generation, model training, and evaluation experiment configurations are managed using hydra-zen [33]. LRMM training and experiment software is provided at [https://github.com/mit-drl/pyrmm](https://github.com/mit-drl/pyrmm). ### _Parallel Autonomy Case Study_ **Problem setup.** We consider a Dubins-like kinodynamic system--referred to as Dubins4D and illustrated in Fig. 2--which is commonly studied in related works [23, 25] with dynamics given as \[\dot{x}=v\cos\theta+d_{x},\ \dot{y}=v\sin\theta+d_{y},\ \dot{\theta}=u_{1},\ \dot{v}=u_{2} \tag{8}\] where the state, \(s:=(x,y,\theta,v)\), represents the location in the Cartesian frame, heading, and linear speed of the vehicle, respectively; and the controls, \(\mathbf{u}:=(u_{1},u_{2})\), are the turn-rate and linear acceleration, respectively. The system is subject to state constraints \(v_{\text{min}}\leq v\leq v_{\text{max}}\); control constraints \(u_{1,\text{min}}\leq u_{1}\leq u_{1,\text{max}}\), \(u_{2,\text{min}}\leq u_{2}\leq u_{2,\text{max}}\); and bounded disturbances \(\|d_{x}\|\leq d_{r},\|d_{y}\|\leq d_{r}\). We consider the case where the Dubins4D system in Eqn. 8 is operated in the presence of obstacles and steered by a "reckless driver", i.e. a controller that guides the system to a goal region but makes no effort to avoid obstacles1. The objective is to provide a parallel autonomous system--referred to as the "guardian agent"--that judiciously takes control from the driver to prevent collisions with obstacles [34]. 
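For illustration, the sketch below shows a forward-Euler step of the Dubins4D dynamics in Eqn. 8 together with a minimal threshold-based guardian that overrides the driver only when an inferred risk exceeds a user-defined level (the 0.85 threshold and all names here are illustrative assumptions; the actual environment additionally applies the state/control saturation and disturbance bounds described above).

```
import numpy as np

def dubins4d_step(state, control, disturbance, dt):
    """One forward-Euler step of the Dubins4D dynamics (Eqn. 8).

    state = (x, y, theta, v), control = (u1, u2) = (turn rate, acceleration),
    disturbance = (dx, dy). Constraint clipping is omitted for brevity.
    """
    x, y, theta, v = state
    u1, u2 = control
    dx, dy = disturbance
    return np.array([
        x + dt * (v * np.cos(theta) + dx),
        y + dt * (v * np.sin(theta) + dy),
        theta + dt * u1,
        v + dt * u2,
    ])

def guardian_control(observation, driver_control, risk_model, threshold=0.85,
                     max_turn=1.0, max_brake=-1.0):
    """Minimally-interfering guardian: intervene only when the inferred risk
    rho_hat(o) exceeds the threshold (heuristic avoidance policy)."""
    rho_hat = risk_model(observation)            # assumed to return a value in [0, 1]
    if rho_hat > threshold:
        return np.array([max_turn, max_brake])   # maximal turning and braking
    return driver_control                        # otherwise defer to the driver
```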
It is desired that the guardian agent be minimally-interfering; it should only take control when collisions are imminent and is not responsible for optimizing navigation to the goal. Footnote 1: For our experiments the “reckless driver” is a control Lyapunov function (CLF) taken from [25]; however, any other controller could be used so long as it generates a non-zero frequency of obstacle collisions. The environment executes in a _non-blocking_ fashion; that is to say that simulation time continues to progress even while agents are computing their next action. This characteristic of the environment is designed to demonstrate the importance of computation time in safety-critical robot applications. The observation space is a 17-tuple with simulation time, relative position to goal, absolute heading and speed, and 12 ray-casts equally distributed around the vehicle that sense obstacle locations. In this experiment we compare three guardian agents: _LRMM_, _CBF_, and _HJ-reach_. Fig. 2: Dubins4D parallel autonomy simulation environment. Obstacles are randomly placed red circles, the goal is the partially obscured green circle, the agent and its velocity vector are indicated in blue, and the lidar ray casts are light gray. A "reckless" control system navigates the vehicle to the goal without regard to obstacles. We evaluate LRMM, CBF, and HJ-reach methods as "guardian agents" who can temporarily take control of the vehicle to avoid obstacles. **LRMM:** The _LRMM agent_ is trained with 35,072 samples drawn from 274 randomly generated obstacle configurations that are completely distinct from those configurations used during the evaluations in Table I; i.e. LRMM was _not_ allowed to train on the evaluation set. During runtime evaluation, the LRMM agent works by inferring risk from an observation of the system's current state. If the inferred risk is greater than a user-defined threshold (\(\hat{\rho}(s)>0.85\) in our experiments), then the LRMM agent takes active control of the vehicle and applies maximal braking and maximal turning control until the risk estimate drops below the threshold, at which time control is returned to the driver agent2. Footnote 2: Note that this collision avoidance policy is a heuristic and not derived from the LRMM output. Thus, in this case study, LRMM helps to answer the question of _when_ to take control, but not _what_ control to apply; i.e. the LRMM is _not prescriptive_ of control. See Sec. V for discussion of future work on this topic. **CBF:** The _CBF agent_ is based on the high-order CBFs described in [25]. CBF-based controllers are most often considered in the context of fully-autonomous systems, but we can easily adapt them to parallel autonomy systems. At each new state that the system encounters, the CBF agent formulates and solves the quadratic program (QP) described in Sec. II. If the CBF constraints are active in the QP solution--i.e. at least one CBF inequality constraint reaches an equality--then the CBF agent overrides the driver and applies the controls resulting from the QP solution. Note that CBFs require an explicit, analytical description of the obstacle space in order to formulate the CBF-based QPs. For our experiments we work around this by providing the CBF agent with privileged, global knowledge of the obstacle space that is not given to the LRMM agent. **HJ reachability:** The _HJ-reach agent_ is based upon Hamilton-Jacobi reachability [23, 15]. 
As the Dubins4D driver moves the vehicle through the state space, the HJ-reach agent assesses the value function at each state and determines if it is within the unsafe set \(\mathcal{V}_{\text{obs}}(t)\). If the system arrives in the unsafe set, then the HJ-reach agent takes control and applies the optimal control to steer the system away from the obstacle set. Several modifications to the Dubins4D environment were necessary for the HJ-reach agent to be applied. First, like the CBF agent, the HJ-reach agent was granted privileged information about the obstacle space in order to formulate the HJ PDE3. Furthermore, we granted the HJ-reach agent a precomputation phase where the agent is allowed to _"peek"_ at the complete obstacle configuration prior to environment initiation and then given time to solve for the HJ value function over a discretized mesh of the state space. The computation of the HJ value function can take up to 2 minutes on the same computer hardware that solves CBF QPs in 50 milliseconds. This is so slow that--without this precompute phase--the entire simulation time window would have elapsed before the HJ-reach agent even had the chance to interact with and control the system. Finally, we could only impose a single obstacle for the HJ-reach agent due to limitations of the software library used to implement the HJ solver. Footnote 3: Unlike CBFs, HJ-reachability does not explicitly require analytical descriptions of the obstacle space and can--in principle--work with an implicit description, e.g. a discretized mesh of the obstacle space. However, our implementation of HJ-reachability was constrained by the Python libraries available for solving the HJ PDE [31], which required explicit obstacle descriptions (e.g. circular regions of known radius and position) **Other Baselines:** We also compare against a set of baseline agents--_random_, _inactive_, _brakes-only_--that have simplistic policies that, respectively, intervene with random controls at random times; never intervene on the driver's actions; and always intervene and apply maximum braking at all times. These baselines help bound the possible performance metrics within the Dubins4D environment. **Comparative Results:** Table I provides the experimental results from our Dubins4D parallel autonomy case study. A set of 2048 environment configurations (i.e. placement of obstacles, goal, and initial vehicle state) are randomly generated and each agent is evaluated in each of these environment configurations--except the HJ-reach agent, which had its own set of 2048 environments due to its limitation to single obstacles. The agents are evaluated based on their policy computation time, success rate (i.e. proportion of trials that end at the goal state), collision rate (trials that end in collision with obstacles), timeout rate (trials that end in neither goal nor collision), and intervention rate (i.e. the percent of time steps for which the guardian agent takes control from the driver). Ideally, a guardian agent would have a perfect success rate; a zero collision and timeout rate; and a minimal intervention rate (only takes control at the critical moments to avoid collision and then returns control to the driver); however, such a "perfect" guardian agent is not actually possible in this contrived scenario--see discussion below. The key insight from Table I is that the LRMM agent performs at least as well--if not better than--CBFs and HJ-reach methods in this real-time, safety-critical, parallel autonomy task. 
This is in spite of the fact that the LRMM agent only has access to a local, partial observation of the environment whereas the CBF and HJ-reach agents are granted privileged global knowledge of the obstacle space. The success of the LRMM agent is due in large part to its rapid computation time. Needing only to make a feed-forward pass through a shallow neural network in order to estimate risk metrics, the LRMM agent can run approximately 20x faster than the CBF agent--which needs to solve quadratic programs at each step--and \(>100\)x faster than the HJ-reach agent--which needs to interpolate a value function and its derivative on a 4-dimensional mesh of the state space. Even though CBFs and HJ reachability can provide theoretical guarantees about control robustness and optimality, these experiments highlight the practical challenges of their implementations in real-time safety-critical applications. The LRMM and CBF agents both produce roughly the same success rate of \(45\%\). LRMM slightly outperforms CBFs with a \(30\%\) collision rate. The Dubins4D environment is randomly initialized such that collisions are unavoidable in some episodes; i.e. the initial state of the vehicle is already within the region of inevitable collision (Eqn. 6) [19]. From the inactive agent we see that--without intervention from a guardian--the "natural" proportion of episodes ending in collision is roughly \(55\%\). From the brakes-only agent--which always intervenes and immediately brings the vehicle to a stop--we see that an approximate lower bound on the collision rate is \(9\%\). LRMM produces a higher rate of episode timeouts--with neither goal nor obstacle being reached--which tend to be caused by the guardian agent bringing the vehicle to a complete stop until the end of the episode. Both LRMM and CBFs have roughly a \(30\%\) intervention rate. LRMM and CBF agents significantly out-perform the baseline random agent across all metrics. Direct comparison with HJ-reach in terms of success, collision, and timeout rate is made difficult by the fact that it was not exposed to the same number of obstacles as other agents. HJ-reach results are included to highlight the long computation time relative to other methods. This is also what leads to its low intervention rate. By the time that the HJ-reach agent has identified that the system has entered an unsafe set and computed the optimal evasion control, the vehicle has already collided with the obstacle. ### _High-Dimensional Systems & Unstructured Environments_ As demonstrated in Sec. IV-A, learned risk metric maps can infer a dynamical system's current risk of failure much more rapidly than existing techniques like CBFs and HJ-reachability. These alternative techniques become even less usable in higher-dimensional systems--where solutions of PDEs over the state space become prohibitively expensive--and unstructured environments--where obstacles cannot be explicitly/analytically described. In this section we show that learned risk metric maps can effectively estimate risk values in such challenging environments. In addition to the Dubins4D environment, we train risk models for a Dubins vehicle in highly-obstructed procedurally-generated mazes (see Fig. 1) and a quadrotor within procedurally-generated rooms with random polyhedron obstacles (see Fig. 3). Table II gives the risk metric map training and testing results for these three systems. 
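The shallow LRMM architecture and MSE regression described in Sec. IV are simple enough to reproduce in a few lines; the following PyTorch sketch uses the layer sizes stated above (64 sigmoid units, sigmoid output), while everything else (function names, optimizer, epoch count) is an assumption for illustration and not the released training code.

```
import torch
import torch.nn as nn

class LRMM(nn.Module):
    """Single hidden layer (64 sigmoid units) with a sigmoid output in [0, 1]."""
    def __init__(self, obs_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.Sigmoid(),
            nn.Linear(64, 1), nn.Sigmoid(),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs).squeeze(-1)

def train_lrmm(model, observations, risks, epochs=100, lr=1e-3):
    """Supervised regression of observation -> risk with an MSE loss."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(observations), risks)
        loss.backward()
        opt.step()
    return model

# Example usage with the 17-tuple Dubins4D observation described above:
# model = train_lrmm(LRMM(obs_dim=17), obs_tensor, risk_tensor)
```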
In addition to the MSE training and testing loss we also report the mean absolute error (MAE) at the end of training and on the test set. This gives more intuition on how accurately the models estimate risk. We see that all models average a risk estimation error of \(<10\%\) of the true risk on the test set. Fig. 3: Visualization of the quadrotor in a highly unsafe state (i.e. upside down and near an obstacle). The color of the quadrotor is scaled from green-to-red based on the inferred risk metric which, here, accurately estimates the high risk of the current state. Obstacles and walls are made transparent for easier visualization. Rendered in PyBullet [32]. ## V Conclusion In this paper, we present learned risk metric maps (LRMMs) for real-time estimation of coherent risk metrics of high-dimensional dynamical systems operating in unstructured, partially observed environments. We compared the proposed LRMM with other state-of-the-art methods, CBFs and HJ-reachability, with results showing the advantages of the proposed LRMM's computational efficiency and general applicability. As noted in Sec. IV-A, a shortcoming of LRMMs is that they estimate failure probabilities, but they do not prescribe the control necessary to avoid failure. We anticipate that future work will remedy this--perhaps through some fusion with control barrier functions--and seek to provide formal guarantees on LRMM accuracy [17].
Learned Risk Metric Maps (LRMM) are used for real-time estimation of risk metrics for high-dimensional dynamical systems operating in unstructured, partially observed environments. LRMM models are easy to design and train, requiring only the generation of obstacle sets, state and control sampling, and supervised training of a function approximator; they are therefore broadly applicable to arbitrary system dynamics and obstacle sets. In a parallel autonomy setting, we demonstrate the model's ability to rapidly estimate the collision probability of a fast car-like robot driving through an obstacle-cluttered environment, allowing the LRMM agent to intervene, take control of the vehicle, and avoid collisions. In this time-critical setting, LRMM evaluates risk metrics faster than control barrier function (CBF) and Hamilton-Jacobi reachability approaches.
2306.17625
An Integrated FPGA Accelerator for Deep Learning-based 2D/3D Path Planning
Path planning is a crucial component for realizing the autonomy of mobile robots. However, due to limited computational resources on mobile robots, it remains challenging to deploy state-of-the-art methods and achieve real-time performance. To address this, we propose P3Net (PointNet-based Path Planning Networks), a lightweight deep-learning-based method for 2D/3D path planning, and design an IP core (P3NetCore) targeting FPGA SoCs (Xilinx ZCU104). P3Net improves the algorithm and model architecture of the recently-proposed MPNet. P3Net employs an encoder with a PointNet backbone and a lightweight planning network in order to extract robust point cloud features and sample path points from a promising region. P3NetCore is comprised of the fully-pipelined point cloud encoder, batched bidirectional path planner, and parallel collision checker, to cover most parts of the algorithm. On the 2D (3D) datasets, P3Net with the IP core runs 24.54-149.57x and 6.19-115.25x (10.03-59.47x and 3.38-28.76x) faster than ARM Cortex CPU and Nvidia Jetson while only consuming 0.255W (0.809W), and is up to 1049.42x (133.84x) more power-efficient than the workstation. P3Net improves the success rate by up to 28.2% and plans a near-optimal path, leading to a significantly better tradeoff between computation and solution quality than MPNet and the state-of-the-art sampling-based methods.
Keisuke Sugiura, Hiroki Matsutani
2023-06-30T12:56:25
http://arxiv.org/abs/2306.17625v1
# An Integrated FPGA Accelerator for Deep Learning-based 2D/3D Path Planning ###### Abstract Path planning is a crucial component for realizing the autonomy of mobile robots. However, due to limited computational resources on mobile robots, it remains challenging to deploy state-of-the-art methods and achieve real-time performance. To address this, we propose P3Net (PointNet-based Path Planning Networks), a lightweight deep-learning-based method for 2D/3D path planning, and design an IP core (P3NetCore) targeting FPGA SoCs (Xilinx ZCU104). P3Net improves the algorithm and model architecture of the recently-proposed MPNet. P3Net employs an encoder with a PointNet backbone and a lightweight planning network in order to extract robust point cloud features and sample path points from a promising region. P3NetCore is comprised of the fully-pipelined point cloud encoder, batched bidirectional path planner, and parallel collision checker, to cover most parts of the algorithm. On the 2D (3D) datasets, P3Net with the IP core runs 24.54-149.57x and 6.19-115.25x (10.03-59.47x and 3.38-28.76x) faster than ARM Cortex CPU and Nvidia Jetson while only consuming 0.255W (0.809W), and is up to 1049.42x (133.84x) more power-efficient than the workstation. P3Net improves the success rate by up to 28.2% and plans a near-optimal path, leading to a significantly better tradeoff between computation and solution quality than MPNet and the state-of-the-art sampling-based methods. Path planning Neural path planning Point cloud processing PointNet Deep learning FPGA ## 1 Introduction Path planning aims to find a feasible path from a start to a goal position while avoiding obstacles. It is a fundamental component for mobile robots to autonomously navigate and accomplish a variety of tasks, e.g., farm monitoring [1], aerial package delivery [2], mine exploration [3], and rescue in a collapsed building [4]. Such robotic applications are often deployed on resource-limited edge devices due to severe constraints on the cost and payload. In addition, real-time performance is of crucial importance, since robots may have to plan and update a path on-the-fly in dynamic environments, and delays in path planning would affect the stability of the upstream applications. To cope with the strict performance requirements, FPGA SoCs are increasingly used in robotic applications such as visual odometry [5] and SLAM [6]. An FPGA SoC integrates an embedded CPU with a reconfigurable fabric, which makes it possible to develop a custom accelerator tailored to a specific algorithm. Taking these into account, an efficient FPGA-based path planning implementation becomes an attractive solution, which would greatly broaden the application range, since mobile robots can then perform expensive planning tasks on their own without connectivity to remote servers. In path planning, the sampling-based methods including Rapidly-exploring Random Tree (RRT) [7] and RRT* [8] are the most prominent, owing to their simplicity and theoretical guarantees. Instead of working on a discretized grid environment like graph-based methods (e.g., A* [9]), these methods explore the environment by incrementally building a tree that represents a set of valid robot motions. While this alleviates the curse of dimensionality to some extent, the computational and memory cost for finding a near-optimal solution is still high, since a sufficient number of tree nodes should be placed to fill the entire free space. 
A number of RRT variants, e.g., Informed-RRT* [10] and BIT* [11], have been proposed to improve the sampling efficiency and convergence speed. While they offer better tradeoffs between computational effort and solution quality, they rely on carefully designed heuristics, which imply prior knowledge of the environment and may not be effective in certain scenarios. Due to their increased algorithmic complexity, they can even take up to tens of seconds to complete a task on an embedded CPU. On top of that, their inherently sequential nature would require intricate strategies to map onto a parallel computing platform. Motivated by the tremendous success of deep learning, much research effort has been devoted to developing learning-based methods; the basic idea is to automatically acquire a policy for planning near-optimal paths from a large collection of paths generated by the existing methods. MPNet [12] is one such recently-proposed method. It employs two separate MLP networks for encoding and planning; the former embeds a point cloud representing obstacles into a latent feature space, while the latter predicts a next robot position to incrementally expand a path. Unlike sampling-based methods, MPNet does not involve operations on complex data structures (e.g., KNN search on a K-d tree), and DNN inference is more amenable to parallel processing. This greatly eases the design of a custom processor and makes MPNet a promising candidate for a low-cost FPGA implementation. Its performance is, however, limited for the following reasons: the encoder does not take into account the unstructured and unordered nature of point clouds, which degrades the quality of extracted features and eventually results in a lower success rate. Furthermore, the planning network has a low parameter efficiency, and MPNet has limited parallelism as it processes only one candidate path at a time until a feasible solution is found. This paper addresses the above limitations of MPNet and proposes a new learning-based method for 2D/3D path planning, named **P3Net** (PointNet-based Path Planning Networks), along with its custom IP core for FPGAs (**P3NetCore**). While the existing methods often assume the availability of abundant computing resources (e.g., GPUs), which is not the case in practice, P3Net is designed to work on resource-limited edge devices and still deliver satisfactory performance. Besides, to our knowledge, P3NetCore is one of the first FPGA-based accelerators for fully learning-based path planning. The main contributions of this paper are summarized as follows: 1. To extract robust features and improve parameter efficiency, we utilize a PointNet [13]-based encoder architecture, which is specifically designed for point cloud processing, together with a lightweight planning network. PointNet is widely adopted as a backbone for point cloud tasks including classification [14] and segmentation [15]. Figure 1: Results on the P3Net2D dataset. While MPNet (blue) fails to plan feasible paths in the first two cases, the proposed P3Net (red) plans successfully in all these cases, while reducing the parameters by 32.32x. Figure 2: Results on the P3Net3D dataset. P3Net (red) produces feasible paths in these cases with 5.43x fewer parameters than MPNet. 2. We make two algorithmic modifications to MPNet; we introduce a batch planning strategy to process multiple candidate paths in parallel, which offers a new parallelism and improves the success rates without increasing the computation time. 
We then add a refinement phase at the end to iteratively optimize the path. 3. We design a custom IP core for P3Net, which integrates a fully-pipelined point cloud encoder, a bidirectional neural planner, and a parallelized collision checker. P3NetCore is developed using High-Level Synthesis (HLS) and implemented on a Xilinx ZCU104 Evaluation Kit. 4. Experimental results validate the effectiveness of the proposed algorithmic optimizations and new models. FPGA-accelerated P3Net achieves a significantly better tradeoff between computation time and success rate than MPNet and the state-of-the-art sampling-based methods. It runs up to two orders of magnitude faster than the embedded CPU (ARM Cortex) or integrated CPU-GPU systems (Nvidia Jetson). P3Net quickly converges to near-optimal solutions in most cases, and is up to three orders of magnitude more power-efficient than the workstation. The rest of the paper is structured as follows: Section 2 provides a brief overview of the related works, while Section 3 covers the preliminaries. The proposed P3Net is explained in Section 4, and Section 5 elaborates on the design and implementation of P3NetCore. Experimental results are presented in Section 6, and Section 7 concludes the paper. ## 2 Related Works ### Sampling and Learning-based Path Planning The sampling-based methods, e.g., RRT [7] and its provably asymptotically-optimal version, RRT* [8], are prevalent in robotic path planning; they explore the environment by incrementally growing an exploration tree. Considering that the free space should be densely filled with tree nodes to find a high-quality solution, the computational complexity is at worst exponential with respect to the space dimension and may be intractable in some difficult cases such as environments with narrow passages. The later methods introduce various heuristics to improve search efficiency; Informed-RRT* [10] uses an ellipsoidal heuristic, while BIT* [11] and its variants [16, 17] apply graph-search techniques. Despite the steady improvement, they still rely on sophisticated heuristics; deep learning-based methods have been extensively studied to automatically learn effective policies for planning high-quality paths. Several studies have investigated the hybrid approach, where deep learning techniques are incorporated into the classical planners. Ichter _et al._[18, 19] and Wang _et al._[20] extend RRT by generating informed samples from a learned latent space. Neural A* [21] is a differentiable version of A*, while WPN [22] uses LSTM to generate path waypoints and then A* to connect them. Aside from the hybrid approach, an end-to-end approach aims to directly learn the behavior of classical planners via supervised learning; Inoue _et al._[23] and Bency _et al._[24] train LSTM networks on the RRT*- and A*-generated paths, respectively. CNN [25], PointNet [26, 27], and Transformer [28, 29, 30] are also employed to construct end-to-end models. MPNet and our P3Net follow the end-to-end supervised approach, and perform path planning on a continuous domain by directly regressing coordinates of the path points. P3Net is unique in that it puts more emphasis on the computational and resource efficiency and builds upon a lightweight MLP network, making it suitable for deployment on low-end edge devices. 
Besides, the environment is represented by a space-efficient sparse point cloud as in [31, 26, 27], unlike the grid map, which introduces quantization errors and leads to a large memory consumption, as it contains redundant grid cells for the free space and its size increases exponentially with respect to the space dimensionality. ### Hardware Acceleration of Path Planning Several works have explored the FPGA and ASIC acceleration of the conventional graph and sampling-based methods, e.g., A* [32, 33, 34, 35] and RRT [36, 37, 38, 39]. Kosuge _et al._[33] develops an accelerator for A* graph construction and search on the Xilinx ZCU102. Since A* operates on grid environments and is subject to the curse of dimensionality, it is challenging to handle higher-dimensional cases or larger maps. For RRT-family algorithms, Malik _et al._[36] proposes a parallelized architecture for RRT, which first partitions the workspace into grids and distributes them across multiple RRT processes. This approach involves redundant computations as some RRT processes may explore irrelevant regions for a given problem. Xiao _et al._[37, 38] proposes an FPGA accelerator for RRT* that runs tree expansion and rewiring steps in a pipelined manner. Chung _et al._[39] devises a dual-tree RRT with parallel and greedy tree expansion for ASIC implementation. Some studies leverage GPU [40] or distributed computing techniques [41, 42] to speed up RRT, although these degrade power efficiency and are not suitable for battery-powered edge devices. In [40], collision checks between obstacles and a robot arm are parallelized on GPU; Devaurs _et al._[41] employs a large-scale distributed memory architecture and a message passing interface. RRT-family algorithms repeat tree expansion and rewiring steps alternately; they are inherently sequential and difficult to accelerate without a sophisticated technique (e.g., space subdivision, parallel tree expansion). In contrast, our proposed P3Net offers more parallelism and is hardware-friendly, as it mainly consists of DNN inferences, and does not operate on complex data structures (e.g., sorting, KNN on K-d trees, graph search). Only a few works [43, 44, 45] have considered the hardware acceleration of neural planners. Huang _et al._[44] presents an accelerator for a sampling-based method with a CNN model, which produces a probability map given an image of the environment for sampling the next robot position. In [45], an RTL design of a Graph Neural Network-based path explorer rapidly evaluates priority scores for edges in a random geometric graph, and edges with high priority are selected to form a path. This paper extends our previous work [43]; instead of only accelerating the DNN inference in MPNet, we implement the whole bidirectional planning algorithm on the FPGA. In addition, we derive a new path planning method, P3Net, to achieve both a higher success rate and speedup. This paper is one of the first to realize a real-time fully learning-based path planner on a resource-limited FPGA device. ## 3 Preliminaries: MPNet In this section, we briefly describe the MPNet [12] algorithm (Alg. 1), which serves as a basis of our proposal. ### Notations and Overview of MPNet Let us consider a robot moving around in a 2D/3D environment \(\mathcal{X}\subset\mathbb{R}^{D}\) (\(D=2,3\)). For simplicity, the robot is modeled as a point-mass, and its state (configuration) is a position \(\mathbf{c}\in\mathbb{R}^{D}\). Note that MPNet is a general framework and can be applied to a wide range of problem settings. 
Given a pair of start and goal points \(\mathbf{c}_{\mathrm{start}},\mathbf{c}_{\mathrm{goal}}\in\mathbb{R}^{D}\), MPNet tries to find a collision-free path \(\tau=\{\mathbf{c}_{0},\mathbf{c}_{1},\ldots,\mathbf{c}_{T}\}\) (\(\mathbf{c}_{0},\mathbf{c}_{T}=\mathbf{c}_{\mathrm{start}},\mathbf{c}_{\mathrm{goal}}\)) if one exists. As illustrated in Fig. 3 (left), MPNet assumes that obstacle information is represented as a point cloud \(\mathcal{P}=\{\mathbf{p}_{0},\ldots,\mathbf{p}_{N-1}\}\in\mathbb{R}^{N\times D}\) containing \(N\) points uniformly sampled from the obstacle region. The notation \(\tau\xleftarrow{+}\mathbf{c}\) is a shorthand for \(\tau\leftarrow\tau\cup\{\mathbf{c}\}\). Importantly, MPNet uses two DNN models for encoding and planning, namely **ENet** and **PNet** (Fig. 3 (right)); ENet compresses the obstacle information \(\mathcal{P}\) into a feature embedding \(\mathbf{\phi}(\mathcal{P})\in\mathbb{R}^{M}\). PNet is responsible for sampling the next position \(\mathbf{c}_{t+1}\), which is one step closer to the goal, from the current and goal positions \(\mathbf{c}_{t},\mathbf{c}_{\mathrm{goal}}\) as well as the obstacle feature \(\mathbf{\phi}(\mathcal{P})\). Fig. 4 outlines the algorithm. MPNet consists of two main steps, referred to as (1) **initial coarse planning** and (2) **replanning**, plus (3) a final **smoothing** step, which are described in the following sections. ### MPNet Algorithm MPNet first extracts a feature \(\mathbf{\phi}(\mathcal{P})\) (Alg. 1, line 1) and proceeds to the **initial coarse planning** step (line 2) to roughly plan a path \(\tau\) between start and goal points \(\mathbf{c}_{\mathrm{start}},\mathbf{c}_{\mathrm{goal}}\). The bidirectional planning with PNet, referred to as \(\mathrm{NeuralPlanner}\) (lines 11-22), plays a central role in this step, and is described as follows. Given a pair of start-goal points \(\mathbf{c}_{s},\mathbf{c}_{g}\), \(\mathrm{NeuralPlanner}\) plans two paths \(\tau^{\mathrm{a}},\tau^{\mathrm{b}}\) in forward and reverse directions alternately (lines 15, 18). The forward path \(\tau^{\mathrm{a}}=\{\mathbf{c}_{s},\ldots,\mathbf{c}_{\mathrm{end}}^{\mathrm{a}}\}\) is incrementally expanded from start to goal by repeating the PNet inference (lines 16-17). From the current path endpoint \(\mathbf{c}_{\mathrm{end}}^{\mathrm{a}}\) and goal \(\mathbf{c}_{g}\), PNet computes a new waypoint \(\mathbf{c}_{\mathrm{new}}^{\mathrm{a}}\), which becomes a new endpoint of \(\tau^{\mathrm{a}}\) and is used as input in the next inference. Similarly, the backward path \(\tau^{\mathrm{b}}=\{\mathbf{c}_{g},\ldots,\mathbf{c}_{\mathrm{end}}^{\mathrm{b}}\}\) is expanded from goal to start. In this case, PNet computes a next position \(\mathbf{c}_{\mathrm{new}}^{\mathrm{b}}\) from an input \(\left[\mathbf{\phi}(\mathcal{P}),\mathbf{c}_{\mathrm{end}}^{\mathrm{b}},\mathbf{c }_{s}\right]\) to get closer to the start position \(\mathbf{c}_{s}\) (lines 19-20). Note that the newly added edge \((\mathbf{c}_{\mathrm{end}}^{\mathrm{a}},\mathbf{c}_{\mathrm{new}}^{\mathrm{a}})\) (or \((\mathbf{c}_{\mathrm{end}}^{\mathrm{b}},\mathbf{c}_{\mathrm{new}}^{\mathrm{b}})\)) may be in collision; such an edge is removed in the later replanning step. 
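A schematic Python sketch of this alternating forward/backward expansion is given below; the connection check that terminates the loop is described next. Here `pnet`, `phi`, and `connectable` are placeholders for the planning network, the encoded obstacle feature, and a straight-line collision check, so this is a sketch of the algorithmic idea rather than the actual implementation.

```
def neural_planner(c_start, c_goal, phi, pnet, connectable, max_iters):
    """Bidirectional expansion with PNet (schematic sketch)."""
    tau_a, tau_b = [c_start], [c_goal]          # forward / backward paths
    for _ in range(max_iters):
        # Expand the forward path one step toward the goal.
        tau_a.append(pnet(phi, tau_a[-1], c_goal))
        # Expand the backward path one step toward the start.
        tau_b.append(pnet(phi, tau_b[-1], c_start))
        # Try to connect the two endpoints with a straight line.
        if connectable(tau_a[-1], tau_b[-1]):
            return tau_a + list(reversed(tau_b))
    return None  # failure: the paths could not be connected within max_iters
```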
After updating \(\tau^{\mathrm{a}}\) or \(\tau^{\mathrm{b}}\), \(\mathrm{NeuralPlanner}\) attempts to connect them and create a path \(\tau=\{\mathbf{c}_{0},\mathbf{c}_{1},\ldots,\mathbf{c}_{T}\}\) between \(\mathbf{c}_{0},\mathbf{c}_{T}=\mathbf{c}_{s},\mathbf{c}_{g}\), if there is no obstacle between path endpoints \(\mathbf{c}_{\mathrm{end}}^{\mathrm{a}},\mathbf{c}_{\mathrm{end}}^{\mathrm{b}}\) (line 21). The above process, i.e., path expansion and collision checking, is repeated until a feasible path is obtained or the maximum number of iterations \(I\) is reached. The algorithm fails if \(\tau^{\mathrm{a}},\tau^{\mathrm{b}}\) cannot be connected after \(I\) iterations (line 22). The tentative path \(\tau\) connecting \(\mathbf{c}_{\mathrm{start}},\mathbf{c}_{\mathrm{goal}}\) is obtained from \(\mathrm{NeuralPlanner}\) (line 2); if \(\tau\) passes the collision check, then MPNet performs **smoothing** and returns it as a final solution (lines 4-5). In the smoothing process (Fig. 4 (right)), the planner greedily prunes redundant waypoints from \(\tau\) to obtain a shorter and smoother path; given three waypoints \(\mathbf{c}_{i},\mathbf{c}_{j},\mathbf{c}_{k}\) (\(i<j<k\)), the intermediate one \(\mathbf{c}_{j}\) is dropped if \(\mathbf{c}_{i}\) and \(\mathbf{c}_{k}\) can be directly connected by a straight line. This involves a collision check on the new edge \((\mathbf{c}_{i},\mathbf{c}_{k})\). As already mentioned, the initial solution \(\tau\) may contain edges that collide with obstacles (Fig. 4 (left, red lines)); if this is the case, the planner moves on to the **replanning** process (lines 7, 23-32). For every edge \(\mathbf{c}_{i},\mathbf{c}_{i+1}\in\tau\) that is in collision, MPNet tries to plan an alternative sub-path \(\tau_{i,i+1}=\{\mathbf{c}_{i},\mathbf{c}_{i}^{(1)},\mathbf{c}_{i}^{(2)},\ldots, \mathbf{c}_{i+1}\}\) between \(\mathbf{c}_{i},\mathbf{c}_{i+1}\) to avoid obstacles (Fig. 4 (center), line 29). \(\mathrm{NeuralPlanner}\) is again called with \(\mathbf{c}_{i},\mathbf{c}_{i+1}\) as input start-goal points. The replanning fails if \(\mathrm{NeuralPlanner}\) cannot plan a detour. The new intermediate waypoints \(\mathbf{c}_{i}^{(1)},\mathbf{c}_{i}^{(2)},\ldots\) are then inserted into the path (line 31). If the resultant path is collision-free, MPNet returns it as a solution after smoothing (lines 8-9); otherwise, it runs the replanning again. In this way, MPNet gradually removes the non-collision-free edges from the solution. Replanning is repeated at most \(I_{\mathrm{Replan}}\) times, and MPNet fails if a feasible solution is not obtained (line 10). One notable feature of MPNet is that PNet exhibits stochastic behavior as it utilizes dropout in the inference phase, unlike the typical case where the dropout is only enabled during training. PNet inference \([\mathbf{\phi}(\mathcal{P}),\mathbf{c}_{t},\mathbf{c}_{\mathrm{goal}}]\mapsto \mathbf{c}_{t+1}\) is hence viewed as sampling the next position \(\mathbf{c}_{t+1}\) from a learned distribution \(p(\mathbf{\phi}(\mathcal{P}),\mathbf{c}_{t})\) parameterized by a planning environment \(\mathbf{\phi}(\mathcal{P})\) and a current position \(\mathbf{c}_{t}\), which represents a promising region around an optimal path from \(\mathbf{c}_{t}\) to \(\mathbf{c}_{\mathrm{goal}}\). Such dropout sampling (Monte Carlo dropout) was first proposed in [46] as an efficient way to estimate the uncertainty in Bayesian Neural Networks (BNNs). 
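Since this dropout-at-inference behavior is what makes PNet stochastic, a minimal PyTorch illustration is given below: keeping the dropout modules in training mode at inference time yields a different next position on every call for the same input. The small three-layer network is only a stand-in for PNet, not its actual definition.

```
import torch
import torch.nn as nn

pnet = nn.Sequential(                  # stand-in for PNet: FC -> ReLU -> Dropout blocks
    nn.Linear(32, 1280), nn.ReLU(), nn.Dropout(0.5),
    nn.Linear(1280, 1024), nn.ReLU(), nn.Dropout(0.5),
    nn.Linear(1024, 2),
)

def sample_next_position(pnet, feature, c_t, c_goal):
    """Monte Carlo dropout: keep dropout active so each call draws a new sample."""
    pnet.train()                       # dropout stays enabled; no gradients are needed
    x = torch.cat([feature, c_t, c_goal], dim=-1)   # e.g. 28D feature + 2D + 2D = 32D
    with torch.no_grad():
        return pnet(x)

# Two calls with identical inputs generally return different waypoints, which is
# exactly the stochastic sampling behavior exploited by MPNet and P3Net.
```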
As both \(\mathrm{NeuralPlanner}\) and Replan rely on PNet, they are also non-deterministic and produce different results on the same input. As a result, MPNet generates different candidate paths in the replanning phase, leading to an increased chance of avoiding obstacles and a higher success rate. ## 4 Method: P3Net In this section, we propose a new path planning algorithm, **P3Net** (Algs. 2-3), by making improvements to the algorithm and model architecture of MPNet. Figure 4: Processing flow of the MPNet path planning. MPNet mainly consists of three steps. **Initial coarse planning** step (left): MPNet roughly plans a path connecting start-goal points, which may not be collision-free (red edges). **Replanning** step (center): MPNet removes the edges colliding with obstacles from the initial path and replaces them with alternative edges (detours, blue) to obtain a collision-free path. **Smoothing** step (right): The redundant waypoints (semi-transparent points) are pruned to obtain a smooth and straight path (green edges). Our proposed method, **P3Net**, performs an additional **Refinement** step (right) to gradually improve the solution (orange edges). Figure 3: Overview of the MPNet algorithm. MPNet takes as input start-goal positions \(\mathbf{c}_{\mathrm{start}},\mathbf{c}_{\mathrm{goal}}\) as well as a point cloud \(\mathcal{P}\) representing obstacles in the environment (blue points). MPNet employs two DNNs, **ENet** and **PNet**, for feature extraction and planning. ENet extracts a feature \(\mathbf{\phi}(\mathcal{P})\) from the point cloud, while PNet computes waypoints one at a time and incrementally builds a path from \(\mathbf{c}_{\mathrm{start}}\) to \(\mathbf{c}_{\mathrm{goal}}\). Given \(\mathbf{\phi}(\mathcal{P})\), \(\mathbf{c}_{\mathrm{goal}}\), and a current position \(\mathbf{c}_{t}\) (path endpoint), PNet computes the next position \(\mathbf{c}_{t+1}\) to take a step toward the goal. ### Algorithmic Improvements P3Net introduces two ideas, (1) **batch planning** and (2) **refinement step**, into the MPNet algorithm. As depicted in Figs. 4-5, P3Net (1) estimates multiple paths at the same time to increase the parallelism, and (2) iteratively refines the obtained path to improve its quality. #### 4.1.1 Batch Planning According to the \(\mathrm{NeuralPlanner}\) algorithm in MPNet (Alg. 1, lines 11-22), forward-backward paths \(\tau_{a},\tau_{b}\) are incrementally expanded from start-goal points \(\mathbf{c}_{s},\mathbf{c}_{g}\) until their endpoints \(\mathbf{c}_{\mathrm{end}}^{\mathrm{a}},\mathbf{c}_{\mathrm{end}}^{\mathrm{b}}\) are connectable. In this process, PNet computes a single next position \(\mathbf{c}_{\mathrm{new}}^{\mathrm{a}}\) (\(\mathbf{c}_{\mathrm{new}}^{\mathrm{b}}\)) from a current endpoint \(\mathbf{c}_{\mathrm{end}}^{\mathrm{a}}\) (\(\mathbf{c}_{\mathrm{end}}^{\mathrm{b}}\)) and the destination \(\mathbf{c}_{g}\) (\(\mathbf{c}_{s}\)) (lines 16, 19). Due to the input batch size of one, PNet cannot fully utilize the parallel computing capability of CPU/GPUs, and also suffers from the kernel launch overheads and frequent data transfers. To amortize this overhead, two PNet inferences (lines 16, 19) can be merged into one with a batch size of two, i.e., two next positions \(\left[\mathbf{c}_{\mathrm{new}}^{\mathrm{a}},\mathbf{c}_{\mathrm{new}}^{\mathrm{b}}\right]\) are computed at once from the concatenated inputs \(\left[\mathbf{c}_{\mathrm{end}}^{\mathrm{a}},\mathbf{c}_{\mathrm{end}}^{ \mathrm{b}}\right],\left[\mathbf{c}_{g},\mathbf{c}_{s}\right]\). As shown in Fig. 
5, our \(\mathrm{NeuralPlannerEx}\) algorithm (Alg. 3, lines 1-22) takes this idea further by creating a total of \(B\) pairs of forward-backward paths (i.e., \(\tau_{\mathcal{B}}^{\mathrm{a}}=\left[\tau_{0}^{\mathrm{a}},\ldots,\tau_{B-1}^{\mathrm{a}}\right]\), \(\tau_{\mathcal{B}}^{\mathrm{b}}=\left[\tau_{0}^{\mathrm{b}},\ldots,\tau_{B-1}^{\mathrm{b}}\right]\)). It serves as a drop-in replacement for \(\mathrm{NeuralPlanner}\). It keeps track of the forward-backward path pairs \((\tau_{\mathcal{B}}^{\mathrm{a}},\tau_{\mathcal{B}}^{\mathrm{b}})\), which are initialized with start-goal points (lines 4-5), as well as path lengths \(\ell_{\mathcal{B}}\) (initialized with all ones, line 6), path endpoints \(\mathbf{C}_{\mathcal{B}}\), and corresponding destination points \(\mathbf{C}_{\mathcal{B}}^{\mathrm{goal}}\) (lines 2-3). In each iteration \(i\), PNet takes \((\boldsymbol{\phi}(\mathcal{P}),\mathbf{C}_{\mathcal{B}},\mathbf{C}_{\mathcal{ B}}^{\mathrm{goal}})\) as input and computes the next waypoints \(\mathbf{C}_{\mathcal{B}}^{\mathrm{next}}=[\mathbf{c}_{0}^{\mathrm{a},i+1}, \mathbf{c}_{0}^{\mathrm{b},i+1},\ldots,\mathbf{c}_{B-1}^{\mathrm{a},i+1}, \mathbf{c}_{B-1}^{\mathrm{b},i+1}]\) for a batch of paths within one inference step (line 8), resulting in a total batch size of \(2B\). Note that \(\mathbf{c}_{j}^{\mathrm{a},i+1},\mathbf{c}_{j}^{\mathrm{b},i+1}\) denote the \(i+1\)-th waypoints in the \(j\)-th forward-backward paths (\(j\in[0,B)\)). Then, for each sample \(j\), the algorithm tries to connect a path pair \(\tau_{j}^{\mathrm{a}},\tau_{j}^{\mathrm{b}}\), by checking whether any of the three lines connecting \((\mathbf{c}_{j}^{\mathrm{a},i+1},\mathbf{c}_{j}^{\mathrm{b},i})\), \((\mathbf{c}_{j}^{\mathrm{a},i},\mathbf{c}_{j}^{\mathrm{b},i+1})\), and \((\mathbf{c}_{j}^{\mathrm{a},i+1},\mathbf{c}_{j}^{\mathrm{b},i+1})\) is obstacle-free and hence passable (lines 9-17). If this check passes, \(\tau_{j}^{\mathrm{a}}\) and \(\tau_{j}^{\mathrm{b}}\) are concatenated and the result \(\tau=\left[\mathbf{c}_{s},\mathbf{c}_{j}^{\mathrm{a},1},\ldots,\mathbf{c}_{j}^{\mathrm{b},1},\mathbf{c}_{g}\right]\) is returned to the caller (line 20, blue path in Fig. 5); otherwise, the algorithm updates the current endpoints \(\mathbf{C}_{\mathcal{B}}\) with the new ones \(\mathbf{C}_{\mathcal{B}}^{\mathrm{next}}\) and proceeds to the next iteration. \(\mathbf{C}_{\mathcal{B}}^{\mathrm{next}}\) is also used to update the paths \(\tau_{\mathcal{B}}^{\mathrm{a}},\tau_{\mathcal{B}}^{\mathrm{b}}\) accordingly (lines 18-19). It fails if no solution is found after the maximum number of iterations \(I\). \(\mathrm{NeuralPlannerEx}\) is more likely to find a solution compared to \(\mathrm{NeuralPlanner}\), as it creates \(B\) candidate paths for a given task. This allows the replanning process (Alg. 2, lines 7-12), which repetitively calls \(\mathrm{NeuralPlannerEx}\), to complete in a smaller number of trials, leading to higher success rates as confirmed in the evaluation (Sec. 6.3). To further improve success rates, P3Net runs the initial coarse planning for \(I_{\mathrm{Init}}\geq 1\) times (Alg. 2, lines 2-5), as opposed to MPNet, which immediately fails when a feasible path is not obtained in the first attempt (Alg. 1, lines 2, 3). #### 4.1.2 Refinement Step The paths generated by MPNet may not be optimal, since it returns the first found path in an initial coarse planning or a replanning phase. As highlighted in Alg. 
2, lines 13-18, the refinement phase is added at the end of P3Net to gradually improve the quality of output paths (Fig. 5). For a fixed number of iterations \(I_{\mathrm{Refine}}\), it computes a new collision-free path \(\tau_{\mathrm{new}}\) based on the current solution \(\tau_{\mathrm{best}}\) (with a cost of \(c_{\mathrm{best}}\)) using the \(\mathrm{Refine}\) algorithm (Alg. 3, lines 23-32), and accepts \(\tau_{\mathrm{new}}\) as a new solution if it lowers the cost (\(c_{\mathrm{new}}<c_{\mathrm{best}}\)). As in the replanning phase, \(\mathrm{Refine}\) also relies on \(\mathrm{NeuralPlannerEx}\) at its core. It takes the collision-free path \(\tau\) as an input and builds a new path \(\tau_{\mathrm{new}}\) as follows: for every edge \((\mathbf{c}_{i},\mathbf{c}_{i+1})\in\tau\), it plans a path \(\tau_{i,i+1}\) using \(\mathrm{NeuralPlannerEx}\) (line 26) and connects \(\mathbf{c}_{i},\mathbf{c}_{i+1}\) with \(\tau_{i,i+1}\) if it is collision-free (line 29). The evaluation results (Sec. 6.5) confirm that it converges to the optimal solution in most cases. Note that MPNet can be viewed as a special case of P3Net with \(B=1\), \(I_{\mathrm{Init}}=1\), and \(I_{\mathrm{Refine}}=0\).
```
0: Start \(\mathbf{c}_{\mathrm{start}}\), goal \(\mathbf{c}_{\mathrm{goal}}\), obstacle point cloud \(\mathcal{P}\)
0: Path \(\tau=\{\mathbf{c}_{0},\ldots,\mathbf{c}_{T}\}\) (\(\mathbf{c}_{0},\mathbf{c}_{T}=\mathbf{c}_{\mathrm{start}},\mathbf{c}_{\mathrm{goal}}\))
1: Compute point cloud feature: \(\boldsymbol{\phi}(\mathcal{P})\leftarrow\mathrm{ENet}(\mathcal{P})\)  \(\triangleright\) Initial coarse planning
2: for \(i=0,\ldots,I_{\mathrm{Init}}-1\) do
3:   \(\tau\leftarrow\mathrm{NeuralPlannerEx}(\mathbf{c}_{\mathrm{start}},\mathbf{c}_{\mathrm{goal}},\boldsymbol{\phi}(\mathcal{P}))\)
4:   if \(\tau\neq\varnothing\) : break  \(\triangleright\) Success
5: if \(\tau=\varnothing\) : return \(\varnothing\)  \(\triangleright\) Failure
6: \(\tau\leftarrow\mathrm{Smoothing}(\tau)\)  \(\triangleright\) Replanning
7: if \(\tau\) is not collision-free :
8:   for \(i=0,\ldots,I_{\mathrm{Replan}}-1\) do
9:     \(\tau\leftarrow\mathrm{Replan}(\tau,\boldsymbol{\phi}(\mathcal{P}))\)
10:    \(\tau\leftarrow\mathrm{Smoothing}(\tau)\)
11:    if \(\tau\neq\varnothing\) and \(\tau\) is collision-free : break
12:  if \(\tau=\varnothing\) : return \(\varnothing\)  \(\triangleright\) Failure
13: \(\tau_{\mathrm{best}}\leftarrow\tau\), \(c_{\mathrm{best}}\leftarrow\mathrm{Cost}(\tau_{\mathrm{best}})\)  \(\triangleright\) Refinement (\(\tau\) is now collision-free)
14: for \(i=0,\ldots,I_{\mathrm{Refine}}-1\) do
15:   \(\tau_{\mathrm{new}}\leftarrow\mathrm{Refine}(\tau_{\mathrm{best}},\boldsymbol{\phi}(\mathcal{P}))\)
16:   \(\tau_{\mathrm{new}}\leftarrow\mathrm{Smoothing}(\tau_{\mathrm{new}})\)
17:   \(c_{\mathrm{new}}\leftarrow\mathrm{Cost}(\tau_{\mathrm{new}})\)
18:   if \(c_{\mathrm{new}}<c_{\mathrm{best}}\) : \(c_{\mathrm{best}}=c_{\mathrm{new}},\ \tau_{\mathrm{best}}=\tau_{\mathrm{new}}\)
19: return \(\tau_{\mathrm{best}}\)
```
**Algorithm 2** P3Net (changed parts are highlighted in red)
```
1: function NeuralPlannerEx(\(\mathbf{c}_{s},\mathbf{c}_{g},\boldsymbol{\phi}(\mathcal{P})\))
2:   \(\triangleright\) Initialize batch of current and goal positions
3:   \(\mathbf{C}_{\mathcal{B}}\leftarrow\left[\mathbf{c}_{0}^{a,0},\mathbf{c}_{0}^{b,0},\ldots,\mathbf{c}_{B-1}^{a,0},\mathbf{c}_{B-1}^{b,0}\right]\in\mathbb{R}^{2B\times D}\) (\(\forall j\;\mathbf{c}_{j}^{a,0}=\mathbf{c}_{s},\mathbf{c}_{j}^{b,0}=\mathbf{c}_{g}\))
4:   \(\mathbf{C}_{\mathcal{B}}^{\text{goal}}\leftarrow[\mathbf{c}_{g},\mathbf{c}_{s},\mathbf{c}_{g},\mathbf{c}_{s},\ldots,\mathbf{c}_{g},\mathbf{c}_{s}]\in\mathbb{R}^{2B\times D}\)
5:   \(\triangleright\) Initialize batch of paths and lengths
6:   \(\tau_{\mathcal{B}}^{a}=[\tau_{0}^{a},\tau_{1}^{a},\ldots,\tau_{B-1}^{a}],\;\forall j\;\tau_{j}^{a}=\{\mathbf{c}_{s}\}\)
7:   \(\tau_{\mathcal{B}}^{b}=[\tau_{0}^{b},\tau_{1}^{b},\ldots,\tau_{B-1}^{b}],\;\forall j\;\tau_{j}^{b}=\{\mathbf{c}_{g}\}\)
8:   \(\ell_{\mathcal{B}}=\left[\ell_{0}^{a},\ell_{0}^{b},\ldots,\ell_{B-1}^{a},\ell_{B-1}^{b}\right],\;\forall j\;\ell_{j}^{a}=\ell_{j}^{b}=|\tau_{j}^{a}|=|\tau_{j}^{b}|=1\)
9:   for \(i=0,\ldots,I-1\) do
10:    \(\mathbf{C}_{\mathcal{B}}^{\text{next}}\leftarrow\text{PNet}(\boldsymbol{\phi}(\mathcal{P}),\mathbf{C}_{\mathcal{B}},\mathbf{C}_{\mathcal{B}}^{\text{goal}})\)  \(\triangleright\) Next positions (\(\mathbf{C}_{\mathcal{B}}^{\text{next}}=\left[\mathbf{c}_{0}^{a,i+1},\mathbf{c}_{0}^{b,i+1},\ldots,\mathbf{c}_{B-1}^{a,i+1},\mathbf{c}_{B-1}^{b,i+1}\right]\))
11:    for \(j=0,\ldots,B-1\) do  \(\triangleright\) Collision checks
12:      if \((\mathbf{c}_{j}^{a,i+1},\mathbf{c}_{j}^{b,i})\) are connectable :
13:        expandA \(\leftarrow\) 1, expandB \(\leftarrow\) 0, \(s\gets 1\)
14:      else if \((\mathbf{c}_{j}^{a,i},\mathbf{c}_{j}^{b,i+1})\) are connectable :
15:        expandA \(\leftarrow\) 0, expandB \(\leftarrow\) 1, \(s\gets 1\)
16:      else
17:        expandA \(\leftarrow\) 1, expandB \(\leftarrow\) 1, \(s\gets 0\)
18:      if expandA : \(\tau_{j}^{a}\xleftarrow{+}\{\mathbf{c}_{j}^{a,i+1}\}\), \(\ell_{j}^{a}\leftarrow\ell_{j}^{a}+1\)
19:      if expandB : \(\tau_{j}^{b}\xleftarrow{+}\{\mathbf{c}_{j}^{b,i+1}\}\), \(\ell_{j}^{b}\leftarrow\ell_{j}^{b}+1\)
20:      if \(s=1\) : return \(\tau=\{\tau_{j}^{a},\tau_{j}^{b}\}\)  \(\triangleright\) Success
21:    \(\mathbf{C}_{\mathcal{B}}\leftarrow\mathbf{C}_{\mathcal{B}}^{\text{next}}\)
22:  return \(\varnothing\)  \(\triangleright\) Failure
23: function Refine(\(\tau=\left\{\mathbf{c}_{0},\ldots,\mathbf{c}_{T}\right\},\boldsymbol{\phi}(\mathcal{P})\))
24:  \(\tau_{\text{new}}\leftarrow\varnothing\)
25:  for \(i=0,\ldots,T-1\) do
26:    \(\tau_{i,i+1}\leftarrow\mathrm{NeuralPlannerEx}(\mathbf{c}_{i},\mathbf{c}_{i+1},\boldsymbol{\phi}(\mathcal{P})\))
27:    \(\triangleright\) Compute a new path connecting \(\mathbf{c}_{i}\) and \(\mathbf{c}_{i+1}\)
28:    if \(\tau_{i,i+1}\neq\varnothing\) and \(\tau_{i,i+1}\) is collision-free :
29:      \(\tau_{\text{new}}\xleftarrow{+}\tau_{i,i+1}\)  \(\triangleright\) Use new path
30:    else  \(\triangleright\) Use current edge
31:      \(\tau_{\text{new}}\xleftarrow{+}\{\mathbf{c}_{i},\mathbf{c}_{i+1}\}\)
32:  return \(\tau_{\text{new}}\)
```
**Algorithm 3** Batch Planning and Refinement Step in P3Net ### Lightweight Encoding and Planning Networks As depicted in Fig. 3 (right), encoding and planning networks ({E, P}Net) are employed in the MPNet framework to extract features from obstacle point clouds and progressively compute waypoints on the output paths. Instead of {E, P}Net, P3Net uses a lightweight encoder with a PointNet [13] backbone (**ENetLite**) to extract more robust features, in conjunction with a downsized planning network (**PNetLite**) for better parameter efficiency and faster inference, which are described in the following subsections1. Footnote 1: To distinguish the models for 2D/3D planning tasks, we suffix them with **-2D**/**-3D** when necessary. 
A fully-connected (FC) layer with \(m\) input and \(n\) output channels is denoted as \(\mathrm{FC}(m,n)\), a 1D batch normalization with \(n\) channels as \(\mathrm{BN}(n)\), a 1D max-pooling with window size \(n\) as \(\mathrm{MaxPool}(n)\), and a dropout with a rate \(p\in[0,1)\) as \(\mathrm{Dropout}(p)\), respectively. Figure 6: Encoding and planning networks for MPNet and P3Net (top right: sequential feature extraction in Sec. 5.1.1). \(\mathrm{BE}\) is a shorthand for a building block consisting of a fully-connected layer, batch normalization, and ReLU activation. \(\mathrm{BP}\) is a shorthand for a building block consisting of a fully-connected layer, ReLU activation, and a dropout with a probability of \(0.5\). #### 4.2.1 ENetLite: PointNet-based Encoding Network As shown in Fig. 6 (top left), MPNet uses a simple encoder architecture (ENet) stacking four FC layers. It directly processes raw point coordinates, and hence costly preprocessing such as normal estimation or clustering is not required. ENet2D takes a point cloud containing 1,400 representative points residing on the obstacles \(\mathcal{P}\in\mathbb{R}^{1,400\times 2}\), flattens it into a 2,800D vector, and produces a 28D feature vector \(\mathbf{\phi}(\mathcal{P})\in\mathbb{R}^{28}\) in a single forward pass. Similarly, ENet3D extracts a 60D feature vector \(\mathbf{\phi}(\mathcal{P})\) from a 3D point cloud of size 2,000 with a series of FC layers2. Footnote 2: ENet3D is denoted as \(\mathrm{FC}(6000,784)\to\mathrm{ReLU}\to\mathrm{FC}(512)\to\mathrm{ReLU}\to \mathrm{FC}(256)\to\mathrm{ReLU}\to\mathrm{FC}(60)\). In spite of its simplicity, ENet has the following major drawbacks: (1) the number of input points is fixed to 1,400 or 2,000 regardless of the complexity of planning environments, (2) the number of parameters grows linearly with the number of points, and more importantly, (3) the output feature is affected by the input ordering. This means that ENet produces a different feature if any two points are swapped; since the input still represents exactly the same point set, the result should remain unchanged. P3Net avoids these drawbacks by using PointNet [13] as an encoder backbone, referred to as **ENetLite**. PointNet is specifically designed for point cloud processing, and Fig. 6 (top center) presents its architecture. It is still a fully-connected network and directly operates on raw point clouds. ENetLite2D first extracts 252D individual features \(\{\mathbf{\psi}(\mathbf{p}_{0}),\dots,\mathbf{\psi}(\mathbf{p}_{N-1})\}\) for each point using a set of blocks represented as \(\mathrm{BE}(2,64,64,64,128,252)\)3. It then computes a 252D global feature \(\mathbf{\phi}(\mathcal{P})=\max\left(\mathbf{\psi}(\mathbf{p}_{0}),\dots,\mathbf{\psi}( \mathbf{p}_{N-1})\right)\) by aggregating these pointwise features via max-pooling. ENetLite3D has the same structure as in the 2D case, except the first and the last building blocks are replaced with \(\mathrm{BE}(3,64)\) and \(\mathrm{BE}(250)\), respectively, to extract 250D features from 3D point clouds. Footnote 3: \(\mathrm{BE}(m,n)=\mathrm{FC}(m,n)\to\mathrm{BN}(n)\to\mathrm{ReLU}\) is a basic building block that maps \(m\mathrm{D}\) point features into an \(n\mathrm{D}\) space. Compared to ENet, ENetLite can handle point clouds of any size, and the number of parameters is independent of the input size. ENetLite2D(3D) provides informative features with 9x (28/252) and 4.17x (60/250) more dimensions, 
while requiring 31.73x (1.60M/0.05M) and 104.47x (5.25M/0.05M) less parameters than ENet2D(3D). PointNet processes each point in a point cloud sequentially and thus avoids random accesses. In addition, as ENetLite involves only pointwise operations and a symmetric pooling function, its output \(\mathbf{\phi}(\mathcal{P})\) is invariant to the permutation of input points, leading to better training efficiency and robustness. #### 4.2.2 PNetLite: Lightweight Planning Network The original PNet is formed by a set of building blocks4, as shown in Fig. 6 (bottom left). It takes a concatenated input \([\mathbf{\phi}(\mathcal{P}),\mathbf{c}_{t},\mathbf{c}_{\mathrm{goal}}]\) consisting of an obstacle feature \(\mathbf{\phi}(\mathcal{P})\) passed from ENet, a current position \(\mathbf{c}_{t}\), and a destination \(\mathbf{c}_{\mathrm{goal}}\), and computes the next position \(\mathbf{c}_{t+1}\) which is one step closer to \(\mathbf{c}_{\mathrm{goal}}\). Notably, PNet2D/3D have the same set of hidden layers; the only difference is in the leading and trailing layers. PNet2D uses \(\mathrm{BP}(32,1280)\) and \(\mathrm{FC}(2)\) to produce 2D coordinates from \(28+2\cdot 2=32\)D inputs, whereas PNet3D uses \(\mathrm{BP}(66,1280)\) and \(\mathrm{FC}(3)\) to handle \(60+3\cdot 2=66\)D inputs and \(3\)D outputs. Footnote 4: \(\mathrm{BP}(m,n)=\mathrm{FC}(m,n)\rightarrow\mathrm{ReLU}\rightarrow\mathrm{ Dropout}(0.5)\). Such design has a problem of low parameter efficiency especially in the 2D case; PNet2D will contain redundant layers which do not contribute to the successful planning and only increase the inference time. The network architecture can be adjusted to the number of state dimensions and the complexity of planning tasks. In addition, as discussed in Sec. 4.2.1, PointNet encoder provides robust (permutation-invariant) features which better represent the planning environment. Assuming that MPNet uses a larger PNet in order to compensate for the lack of robustness and geometric information in ENet-extracted features, it is reasonable to expect that PointNet allows the use of more shallower networks for path planning. From the above considerations, P3Net employs more compact planning networks with fewer building blocks, **PNetLite**. PNetLite2D (Fig. 6 (bottom center)) is composed of six building blocks5 to compute a 2D position from a \(252+2\cdot 2=256\)D input. As described in Sec. 4.1.1, \(\mathrm{NeuralPlannerEx}\) plans \(B\) pairs of forward-backward paths and thus the input batch size increases to \(2B\); PNetLite computes a batch of next positions \(\mathbf{C}_{\mathcal{B}}^{\mathrm{next}}\in\mathbb{R}^{2B\times 2}\) from a matrix \([\mathbf{\phi}(\mathcal{P}),\mathbf{C}_{\mathcal{B}}^{\mathrm{next}},\mathbf{C}_{ \mathcal{B}}^{\mathrm{goal}}]\in\mathbb{R}^{2B\times 256}\). PNetLite3D is obtained by removing a few blocks from PNet3D and replacing the first block with \(\mathrm{BP}(256,1024)\) to process \(250+3\cdot 2=256\)D inputs, as depicted in Fig. 6 (bottom right). PNetLite2D(3D) has 32.58x (3.76M/0.12M) and 2.35x (3.80M/1.62M) less parameters than PNet2D(3D); combining the results from Sec. 4.2.1, {E, P}NetLite together achieves 32.32x (5.43x) parameter reduction in the 2D (3D) case. Footnote 5: PNetLite2D is denoted as \(\mathrm{BP}(256,256,128,64,64,64,2)\). ## 5 Implementation This section details the design and implementation of **P3NetCore**, a custom IP core for P3Net. It has three submodules, namely, (1) **Encoder**, (2) **NeuralPlanner**, and (3) **CollisionChecker** (Fig. 
10), which cover most of the P3Net (Alg. 2) except the evaluation of path costs (lines 13, 17). ### Encoder Module While the number of parameter is greatly reduced, ENetLite requires a longer inference time than ENet (Table 3 (top)), as it extracts local features for each point, which amounts to \(N\) times of forward pass. **Encoder** module is to accelerate the ENetLite inference (Alg. 2, line 1). To reduce the memory cost from \(\mathcal{O}(N)\) to \(\mathcal{O}(1)\), it sequentially updates the output feature \(\mathbf{\phi}(\mathcal{P})\). In addition, it applies both coarse and fine-grained optimizations. #### 5.1.1 Memory-Efficient Sequential Feature Extraction The typical approach for extracting an ENetLite feature is to first compute individual features \(\mathbf{\Psi}=\{\mathbf{\psi}(\mathbf{p}_{0}),\ldots,\mathbf{\psi}(\mathbf{p}_{N-1})\}\) for all points in one shot, creating a matrix of size \((N,252)\) (or \((N,250)\) in the 3D case), and apply max-pooling over \(N\) features to produce a 252D (250D) global feature \(\mathbf{\phi}(\mathcal{P})=\max(\mathbf{\Psi})\). In this case, each building block involves a matrix-matrix operation6. While it offers a high degree of parallelism and are hardware-amenable, the buffer size for layer outputs is \(\mathcal{O}(N)\), which incurs a high utilization of the scarce on-chip memory. Footnote 6: Each building block is denoted as \(\mathrm{BE}(m,n)=\mathrm{ReLU}(\mathrm{BN}(\mathbf{XW}+\mathbf{1}\mathbf{b}^{ \top}))\), where \(\mathbf{X}\in\mathbb{R}^{N\times m}\) is a stack of \(m\)D features, \(\mathbf{W}\in\mathbb{R}^{m\times n},\mathbf{b}\in\mathbb{R}^{n}\) are weight and bias parameters of a FC layer, and \(\mathbf{1}\in\mathbb{R}^{N}\) is a vector of ones, respectively. Since the operations for each point is independent except the last max-pooling, instead of following the above approach, the module computes a point feature \(\mathbf{\psi}(\mathbf{p}_{i})\) one-by-one and updates the global feature by taking a maximum \(\mathbf{\phi}(\mathcal{P})\leftarrow\max(\mathbf{\phi}(\mathcal{P}),\mathbf{\psi}(\mathbf{ p}_{i}))\) (Fig. 6 (top right)). After repeating this process for all \(N\) points, the result \(\mathbf{\phi}(\mathcal{P})\) is exactly the same as the previous one, \(\max(\mathbf{\Psi})\). In this way, the computation inside each building block turns into a matrix-vector operation7. As it only requires an input and output buffer for a single point, the buffer size is reduced from \(\mathcal{O}(N)\) to \(\mathcal{O}(1)\); Encoder module can now handle point clouds of any size regardless of the limited on-chip memory. Footnote 7: \(\mathrm{BE}(m,n)=\mathrm{ReLU}(\mathrm{BN}(\mathbf{W}^{\top}\mathbf{x}+ \mathbf{b}))\). #### 5.1.2 ENetLite Inference Encoder module consists of three kinds of submodules: \(\mathrm{FC}(m,n)\), \(\mathrm{BN-ReLU}(n)\), and \(\mathrm{Max}\). \(\mathrm{FC}(m,n)\) involves a matrix-vector product \(\mathbf{W}^{\top}\mathbf{x}+\mathbf{b}\), and \(\mathrm{Max}\) updates the feature by \(\max(\mathbf{\phi}(\mathcal{P}),\mathbf{\psi})\). \(\mathrm{BN-ReLU}(n)\) couples the batch normalization and ReLU. It is written as \(\max(\mathbf{0},(\mathbf{x}-\mathbf{\mu}_{\mathcal{B}})\cdot\mathbf{s}+\mathbf{ \beta}),\mathbf{s}=\mathbf{\gamma}/\sqrt{\mathbf{\sigma}_{\mathcal{B}}^{2}+\varepsilon }\in\mathbb{R}^{n}\) with a little abuse of notations8. \(\mathrm{FC}(m,n)\) and \(\mathrm{BN-ReLU}(n)\) are parallelized by partially unrolling the loop and partitioning the relevant buffers. 
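The memory-efficient sequential feature extraction of Sec. 5.1.1 can be summarized by the NumPy sketch below. Here `point_feature_fn` is a stand-in for the per-point MLP of ENetLite (one matrix-vector pass per point); keeping a running element-wise maximum reproduces the max-pooled global feature with an O(1) buffer.

```python
import numpy as np

def encode_sequential(points, point_feature_fn, feature_dim=252):
    """O(1)-memory ENetLite-style feature extraction (Sec. 5.1.1): keep a running
    element-wise maximum instead of materializing the (N, feature_dim) matrix."""
    phi = np.zeros(feature_dim)           # valid initialization: ReLU outputs are >= 0
    for p in points:                      # one matrix-vector pass per point
        psi = point_feature_fn(p)         # (feature_dim,) per-point feature
        phi = np.maximum(phi, psi)        # equals max-pooling over all N points
    return phi

# Toy usage with a stand-in per-point MLP (a single random FC layer plus ReLU).
rng = np.random.default_rng(0)
W, b = rng.normal(size=(2, 252)), rng.normal(size=252)
cloud = rng.uniform(0.0, 40.0, size=(1400, 2))
phi = encode_sequential(cloud, lambda p: np.maximum(p @ W + b, 0.0))
```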
Besides, a dataflow optimization is applied for the fully-pipelined execution of these submodules (Fig. 7 (top)). This effectively reduces the latency while requiring a few additional memory resources for inserting pipeline buffers. Footnote 8: \(\max(\mathbf{0},\cdot)\) corresponds to \(\mathrm{ReLU}\), \(\mathbf{\mu}_{\mathcal{B}},\mathbf{\sigma}_{\mathcal{B}}^{2}\in\mathbb{R}^{n}\) denote the mean and standard deviation estimated from training data, \(\mathbf{\gamma}\), \(\mathbf{\beta}\in\mathbb{R}^{n}\) are the learned weight and bias, and \(\varepsilon>0\) is a small positive value to prevent zero division, respectively. The scale, \(\mathbf{s}\), is precomputed on the CPU instead of keeping individual parameters (\(\mathbf{\gamma}\), \(\mathbf{\sigma}_{\mathcal{B}}^{2}\)), which removes some operations (e.g., square root and division). #### 5.1.3 Processing Flow of Encoder Module Fig. 7 (bottom) shows the block diagram of Encoder module. Upon a request from the CPU, it first (1) initializes the output feature \(\mathbf{\phi}(\mathcal{P})\) with zeros. Then, the input \(\mathcal{P}\) is processed in fixed-size chunks as follows: the module (2) transfers a chunk from DRAM to a point buffer of size \((N_{C},D)\) (\(N_{C}=64,D=2,3\)) via burst transfer, and (3) the submodule pipeline consumes points in this buffer one-by-one to update the output. After repeating this process for all \(\lceil N/N_{C}\rceil\) chunks, (4) the output \(\mathbf{\phi}(\mathcal{P})\) is written to the on-chip buffer of size \((2B,256)\) for later use in NeuralPlanner module. On-chip buffers for an input chunk, an output \(\mathbf{\phi}\), intermediate outputs, and model parameters are all implemented with BRAM. The lightweight ENetLite model and a sequential update of outputs, which substantially reduce the model parameters and make buffer sizes independent from the number of input points, are essential to place the buffers on BRAM and avoid DRAM accesses during computation. ### NeuralPlanner Module As apparent in Algs. 1-2, the bidirectional neural planning (\(\mathrm{NeuralPlannerEx}\)) is at the core of P3Net and thus has a significant impact on the overall performance. In contrast to the previous work [43], which only offloads the PNet inference (line 8) to the custom IP core, **NeuralPlanner** module runs the entire \(\mathrm{NeuralPlannerEx}\) algorithm (Alg. 3). Considering that it alternates between PNetLite inference and collision checks (lines 9-17), implementing both in one place eliminates the unnecessary CPU-FPGA communication and greatly improves the speed. Figure 7: Pipeline of submodules for ENetLite (top) and block diagram of Encoder module (bottom). #### 5.2.1 PNetLite Inference For PNetLite inference, the module contains two types of submodules: \(\mathrm{FC}(m,n)\) and \(\mathrm{Dropout}\)-\(\mathrm{ReLU}(p)=\mathrm{ReLU}\to\mathrm{Dropout}(p)\) which fuses ReLU and dropout into a single pipelined loop. \(\mathrm{FC}(m,n)\) exploits the fine-grained (data-level) parallelism as described in Sec. 5.1.2. Dropout is the key for the stochastic behavior, and PNet inference is interpreted as a deeply-informed sampling (Sec. 3.2). 
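As a worked example of the fused BN-ReLU submodule (Sec. 5.1.2, footnote 8), the sketch below precomputes the scale \(\mathbf{s}=\mathbf{\gamma}/\sqrt{\mathbf{\sigma}_{\mathcal{B}}^{2}+\varepsilon}\) once on the host, so that the per-element computation reduces to a multiply-add followed by \(\max(0,\cdot)\). The \(\varepsilon\) value and the toy data are illustrative.

```python
import numpy as np

def precompute_scale(gamma, var, eps=1e-5):
    """Done once on the CPU (footnote 8): s = gamma / sqrt(var + eps), so that the
    on-chip BN-ReLU needs no square root or division."""
    return gamma / np.sqrt(var + eps)

def bn_relu(x, s, beta, mu):
    """Fused BN-ReLU submodule: max(0, (x - mu) * s + beta)."""
    return np.maximum(0.0, (x - mu) * s + beta)

# Toy check against an unfused reference implementation.
rng = np.random.default_rng(1)
n = 64
x, gamma, beta = rng.normal(size=n), rng.uniform(0.5, 1.5, n), rng.normal(size=n)
mu, var = rng.normal(size=n), rng.uniform(0.5, 2.0, n)
s = precompute_scale(gamma, var)
reference = np.maximum(0.0, gamma * (x - mu) / np.sqrt(var + 1e-5) + beta)
assert np.allclose(bn_relu(x, s, beta, mu), reference)
```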
\(\mathrm{Dropout}\)-\(\mathrm{ReLU}(p)\) with \(p=0.5\) is implemented using a famous Mersenne-Twister (MT) [47]; it first generates 32-bit pseudorandom numbers \([r_{bi}]\in\mathbb{N}^{2B\times n}\) for a batch input \(\mathbf{X}=[x_{bi}]\in\mathbb{R}^{2B\times n}\), and replaces \(x_{bi}\) with zero if \(x_{bi}<0\) (ReLU) or \(r_{bi}<2^{31}\) (the maximum 32-bit integer multiplied by \(p\)). Note that in the 3D case, parameters for the first three FC layers are kept on the DRAM due to the limited on-chip memory resources, which necessitates a single sweep (burst transfer) of weight and bias parameters in a forward pass. #### 5.2.2 Collision Checking In addition to PNetLite inference, NeuralPlanner module deals with collision checking. It adopts a simple approach based on the discretization [40, 12] to check whether the lines are in collision. The line between \(\mathbf{c}^{\mathrm{start}},\mathbf{c}^{\mathrm{end}}\) is divided into segments with a predefined interval \(\delta\) (Fig. 8 (right top)), producing equally spaced midpoints \(\mathbf{c}_{0},\ldots,\mathbf{c}_{M}\)9, and then each midpoint \(\mathbf{c}_{i}\) is tested. If any midpoint collides with any obstacle, the line is in collision. To simplify the implementation, the module assumes that each obstacle is rectangular and represented as a bounding box with minimum and maximum corner points \(\mathbf{c}_{i,\min}^{\mathrm{obs}},\mathbf{c}_{i,\max}^{\mathrm{obs}}\). The module contains eight \(\mathrm{Check}\) submodules to test a midpoint \(\mathbf{c}_{i}\) with eight obstacles in parallel (Fig. 8 (right bottom)). Footnote 9: \(\mathbf{c}_{i}=\mathbf{c}^{\mathrm{start}}+(i/M)\mathbf{\Delta}\), \(\mathbf{\Delta}=\mathbf{c}^{\mathrm{end}}-\mathbf{c}^{\mathrm{start}}\), \(M=\left\|\mathbf{\Delta}\right\|/\delta\). #### 5.2.3 Processing Flow of NeuralPlanner Module As shown in Figs. 8-9, the module interacts with submodules and manages several buffers to perform the bidirectional planning (Alg. 3). To start planning, the module first (1) reads the task configurations such as start-goal points \(\mathbf{c}_{s},\mathbf{c}_{g}\) from DRAM, as well as algorithmic parameters (e.g., the number of obstacles \(N^{\mathrm{obs}}\), maximum iterations \(I\), collision check step \(\delta\), etc.) from control registers. It then (2) initializes BRAM buffers (Fig. 9 (top)) for the current endpoints, destinations, next waypoints \(\mathbf{C}_{\mathcal{B}}\), \(\mathbf{C}_{\mathcal{B}}^{\mathrm{goal}},\mathbf{C}_{\mathcal{B}}^{\mathrm{ next}}\in\mathbb{R}^{2B\times D}\) (\(D=2,3\)) and path lengths \(\ell_{\mathcal{B}}\in\mathbb{N}^{2B}\), as well as the result buffers on DRAM (Alg. 3, lines 2-6). These result buffers store the entire forward-backward paths \(\tau_{\mathcal{B}}^{\mathrm{n}},\tau_{\mathcal{B}}^{\mathrm{n}}\) along with their lengths and success flags in a format depicted in Fig. 9 (bottom). Its size is bound by the maximum iterations \(I\). The on-chip length buffer \(\ell_{\mathcal{B}}\) serves as a pointer to the end of path data (Fig. 9, red arrows). The module proceeds to alternate between PNetLite inference (line 8) and collision checking (lines 9-17); it (3) writes PNetLite inputs \(\mathbf{C}_{\mathcal{B}}\), \(\mathbf{C}_{\mathcal{B}}^{\mathrm{goal}}\) and computes the next waypoints \(\mathbf{C}_{\mathcal{B}}^{\mathrm{next}}\) (Fig. 8 (left bottom)). The buffers for inference are implemented with BRAM or URAM. 
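The discretization-based collision check of Sec. 5.2.2 amounts to sampling midpoints along a segment every \(\delta\) units (footnote 9) and testing each against the obstacle bounding boxes. A minimal NumPy sketch, assuming axis-aligned boxes given by their minimum and maximum corners, is shown below; the early exit on the first colliding midpoint mirrors the behavior noted in Sec. 6.4.3.

```python
import numpy as np

def segment_in_collision(c_start, c_end, boxes_min, boxes_max, delta=0.01):
    """Discretized check (Sec. 5.2.2): sample midpoints along the segment every
    `delta` units (footnote 9) and test each against axis-aligned obstacle boxes."""
    c_start, c_end = np.asarray(c_start, float), np.asarray(c_end, float)
    diff = c_end - c_start
    m = max(int(np.ceil(np.linalg.norm(diff) / delta)), 1)
    for i in range(m + 1):
        c = c_start + (i / m) * diff                          # interpolated midpoint
        if np.any(np.all((boxes_min <= c) & (c <= boxes_max), axis=1)):
            return True                                       # early exit on first hit
    return False

# Toy 2D workspace with a single 5x5 obstacle.
bmin, bmax = np.array([[10.0, 10.0]]), np.array([[15.0, 15.0]])
print(segment_in_collision([0, 0], [20, 20], bmin, bmax))     # True  (passes through the box)
print(segment_in_collision([0, 0], [20, 0], bmin, bmax))      # False (clears the box)
```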
Using the current and next endpoints (\(\mathbf{c}_{j}^{\mathrm{a},i},\mathbf{c}_{j}^{\mathrm{b},i}\in\mathbf{C}_{ \mathcal{B}}\), \(\mathbf{c}_{j}^{\mathrm{a},i+1},\mathbf{c}_{j}^{\mathrm{b},i+1}\in\mathbf{C}_{ \mathcal{B}}^{\mathrm{next}}\)), the module (4) attempts to connect each path pair \(\tau_{j}^{\mathrm{a}},\tau_{j}^{\mathrm{b}}\) (\(j\in[1,B)\)) by the collision checking (Fig. 5). If the path connection succeeds, it transfers the success flag and path length (\(\ell_{j}^{\mathrm{a}},\ell_{j}^{\mathrm{b}}\in\ell_{\mathcal{B}}\)) to the DRAM result buffer and completes the task. The module appends new waypoints \(\mathbf{C}_{\mathcal{B}}^{\mathrm{next}}\) to the DRAM result buffers, increments path lengths \(\ell_{\mathcal{B}}\) accordingly, and updates the endpoint buffer \(\mathbf{C}_{\mathcal{B}}\) with \(\mathbf{C}_{\mathcal{B}}^{\mathrm{next}}\) for the next iteration. As discussed in Sec. 5.2.2, the collision check of a line segment (\(\mathbf{c}^{\mathrm{start}},\mathbf{c}^{\mathrm{end}}\)) is cast as checks on the interpolated discrete points \(\{\mathbf{c}_{i}\}\). The module reads a chunk of bounding boxes \(\{(\mathbf{c}_{i,\min}^{\mathrm{obs}},\mathbf{c}_{i,\max}^{\mathrm{obs}})\}\) into an on-chip obstacle buffer of size \((N_{C}^{\mathrm{obs}},D)\) (\(N_{C}^{\mathrm{obs}}=64\)), and tests each midpoint \(\mathbf{c}_{i}\) with multiple obstacles in parallel using an array of \(\mathrm{Check}\) submodules (Fig. 8 (right bottom)). ### CollisionChecker Module Since collision checking is performed throughout P3Net, it is implemented in a dedicated **CollisionChecker** module for further speedup. It is in charge of testing the path \(\tau=\{\mathbf{c}_{0},\ldots,\mathbf{c}_{T}\}\) by repeating the process described in Sec. 5.2.2 for each edge (\(\mathbf{c}_{i},\mathbf{c}_{i+1}\)). The path \(\tau\) and obstacle bounding boxes \(\{(\mathbf{c}_{i,\min}^{\mathrm{obs}},\mathbf{c}_{i,\max}^{\mathrm{obs}})\}\) are stored on the separate DRAM buffers, which are successively transferred to the on-chip buffers of size \((T_{C},D)\) and \((N_{C}^{\mathrm{obs}},D)\) (\(T_{C}=64,N_{C}^{\mathrm{obs}}=64\)). The module checks the collision between obstacles and interpolated points on the edge \((\mathbf{c}_{i},\mathbf{c}_{i+1})\), and the result (1 if \(\tau\) collides with any obstacle) is written to the control register. ### Board-level Implementation of P3NetCore Fig. 10 shows the board-level implementation for Xilinx Zynq UltraScale+ MPSoC devices. P3NetCore is implemented on the PL (Programmable Logic) part and communicates with the PS (Processing System) part via two AXI interfaces. One is an AXI4-Lite subordinate interface with a 32-bit data bus connected to the High-Performance Master (HPM0) port, which allows PS part to access control registers. The other is an AXI master interface with a 128-bit data bus connected to the High-Performance (HP0) port, through which the IP core transfers algorithm inputs/outputs in bursts (four 32-bit words per clock)10. We used Xilinx Vitis HLS 2022.1 to develop the IP core, and Vivado 2022.1 for synthesis and place-and-route. Two variants of P3NetCore were created for 2D and 3D planning tasks. The target FPGA SoC is a Xilinx ZCU104 Evaluation Kit (Fig. 11, Table 1). The clock frequency of the board is set to 200MHz. 
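The forward/backward expansion that the NeuralPlanner module alternates with PNetLite inference (Alg. 3, lines 9-17; steps (3)-(4) above) can be sketched in Python as follows. `connectable(p, q)` is an assumed stand-in for the collision check between two positions; this is an illustrative reconstruction of the control flow, not the hardware implementation.

```python
def bidirectional_step(tau_a, tau_b, c_a_next, c_b_next, connectable):
    """One forward/backward expansion step (Alg. 3, lines 9-17). Returns True once the
    two partial paths can be joined (the solution is tau_a followed by reversed tau_b)."""
    if connectable(c_a_next, tau_b[-1]):      # new forward point reaches the backward path
        tau_a.append(c_a_next)
        return True
    if connectable(tau_a[-1], c_b_next):      # new backward point reaches the forward path
        tau_b.append(c_b_next)
        return True
    tau_a.append(c_a_next)                    # no connection yet: grow both paths
    tau_b.append(c_b_next)
    return False

# Toy usage in an obstacle-free workspace where any two points are connectable.
tau_a, tau_b = [(0.0, 0.0)], [(10.0, 10.0)]
done = bidirectional_step(tau_a, tau_b, (5.0, 5.0), (6.0, 6.0), lambda p, q: True)
```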
Footnote 10: To ensure the data is 128-bit aligned, the DRAM buffer sizes are rounded up to the nearest multiple of four when necessary, e.g., the point cloud buffer is of size \((N,4)\) instead of \((N,2)\) or \((N,3)\) To preserve the precision of PNetLite outputs (i.e., waypoint coordinates), Encoder and NeuralPlanner modules employ a 32-bit fixed-point with 16.16 format (i.e., a 16-bit integer part and 16-bit fraction part) for layer outputs, and 24-bit fixed-point with 8.16 format for model parameters. Besides, collision checking is performed using the 32-bit floating-point, taking into account that the interval \(\delta\) (Sec. 5.2.2) is set sufficiently small to prevent false negatives. Figure 8: Block diagram of NeuralPlanner module. As shown in Fig. 10, P3NetCore has six operation modes, namely, **Init(ENet, MT, PNet)** and **Run(Encoder, Planner, CollisionChecker)**. When the IP core is called from the CPU, it first reads an operation mode from the control register and performs the corresponding task. **Init(E, P)Net** is responsible for transferring the parameters of {E, P}NetLite from DRAM to the dedicated on-chip buffers via an AXI master port. When **InitMT** is specified, the main module reads a 32-bit random seed from the register and initializes the internal states (624 words) of Mersenne-Twister for dropout layers (Sec. 5.2.1). **Run(Encoder, Planner, CollisionChecker)** triggers the respective module (Sec. 5.1-5.3). The transferred data is summarized in Fig. 10 (bottom). ## 6 Evaluation This section evaluates the proposed P3Net to demonstrate the improvements on success rate, speed, quality of solutions, and power efficiency. For convenience, P3Net accelerated by the proposed IP core is referred to as **P3NetFPGA**. ### Experimental Setup MPNet and the famous sampling-based methods, RRT* [8], Informed-RRT* (IRRT*) [10], BIT* [11], and ABIT* [16] were used as a baseline. All methods including P3Net were implemented in Python using NumPy and PyTorch. The implementation of RRT*, Informed-RRT*, and BIT* is based on the open-source code [48]11. We implemented an ABIT* planner following the algorithm in the paper [16]. For MPNet, we used the code from the authors [12] as a reference; MPNet path planner and the other necessary codes for training and testing were rewritten from scratch. Footnote 11: For a fair performance comparison, we replaced a brute-force linear search with an efficient K-D tree-based search using Nanoflanm library (v1.4.3) [49]. In BIT* planner, we used a priority queue to pick the next edge to process, which avoids searching the entire edge queue, as mentioned in the original paper [11]. The experiments were conducted on a workstation, Nvidia Jetson [Nano, Xavier NX], and Xilinx ZCU104; their specifications and environments are summarized in Table 1. For ZCU104, we built PyTorch from source with -03 Figure 11: Xilinx ZCU104 Evaluation Kit with Texas Instruments INA219 power monitor (skyblue) and Raspberry Pi 3 Model B (bottom right). Figure 10: Board-level implementation of P3NetCore. optimization level and ARM Neon intrinsics enabled (-D_NEON__) to fully take advantage of the multicore architecture. Following the MPNet paper [12], we trained encoding and planning networks in an end-to-end supervised manner, which is outlined as follows. 
Given a training sample \((\mathcal{P},\mathbf{c}_{t}^{*},\mathbf{c}^{\mathrm{goal}},\mathbf{c}_{t+1}^{*})\), where \(\mathbf{c}_{t}^{*},\mathbf{c}_{t+1}^{*}\) denote a pair of waypoints on a ground-truth path12, the encoder first extracts a global feature \(\mathbf{\phi}(\mathcal{P})\), and then the planning network estimates the next waypoint \(\mathbf{c}_{t+1}\). The squared Euclidean distance \(\|\mathbf{c}_{t+1}-\mathbf{c}_{t+1}^{*}\|^{2}\) is used as a loss function, so that two models jointly learn to mimic the behavior of the planner used for dataset generation. We used an Adam optimizer with a learning rate of \(10^{-3}\) and coefficients of \(\beta_{1}=0.9,\beta_{2}=0.999\). We set the number of epochs to 200 and 50 when training MPNet and P3Net, and the batch size to 8192 for MPNet (2D), 1024 for MPNet (3D), and 128 for P3Net models, respectively. The number of iterations \(I\) in \(\mathrm{NeuralPlanner(Ex)}\) (Algs. 1, 3) is fixed to 50. Footnote 12: The ground-truth path is obtained by running an RRT* planner with a large number of iterations. RRT* and Informed-RRT* were configured with a maximum step size of 1.0 and a goal bias of \(0.05\). BIT* and ABIT* were executed with a batch size of 100 and Euclidean distance heuristic. As done in [16], ABIT* searched the same batch twice with two different inflation factors \(\varepsilon_{\mathrm{infl}}=10^{6},1+\frac{10}{q}\), and the truncation factor \(\varepsilon_{\mathrm{trunc}}\) was set to \(1+\frac{5}{q}\), where \(q\) is a total number of nodes. ### Path Planning Datasets For evaluation, we used a publicly-available dataset for 2D/3D path planning provided by the MPNet authors [12], referred to as **MPNet2D/3D13**. It is split into one training and two testing sets (**Seen**, **Unseen**); the former contains 100 workspaces, each of which has a point cloud \(\mathcal{P}\) representing obstacles, and 4000 planning tasks with randomly generated start-goal points and their respective ground-truth paths. **Seen** set contains the same 100 workspaces as the training set, but with each having 200 new planning tasks; **Unseen** set comes with ten new workspaces not observed during training, each of which has 2000 planning tasks. Footnote 13: Originally called Simple2D and Complex3D in the MPNet paper. Each workspace is a square (or cube) of size 40 containing randomly-placed seven square obstacles of size 5 (or ten cuboid obstacles with a side length of 5 and 10). Note that, trivial tasks in the testing sets are excluded, where start-goal pairs can be connected by straight lines and obstacle avoidance is not required. We only used the first 20/200 tasks for each workspace in **Seen/Unseen** dataset. As a result, the total number of planning tasks is 945/892 and 740/756 in MPNet2D (Seen/Unseen) and MPNet3D (Seen/Unseen) datasets, respectively. Both MPNet and P3Net models were trained with MPNet2D/3D training sets. In addition, we generated **P3Net2D/3D** dataset for testing (Figs. 1-2), which contains 100 workspaces, with 20 planning tasks for each. Compared to MPNet2D/3D, the number of obstacles is doubled to simulate more challenging tasks. 
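A minimal PyTorch sketch of this end-to-end supervised step is given below. The encoder and planner are simplified stand-ins (the actual models are the {E, P}NetLite architectures of Sec. 4.2), batching is omitted for clarity, and only the loss and optimizer settings follow the text.

```python
import torch
import torch.nn as nn

# Simplified stand-ins; the actual models are the {E, P}NetLite architectures of Sec. 4.2.
encoder = nn.Sequential(nn.Linear(2, 252), nn.ReLU())
planner = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 2))
optimizer = torch.optim.Adam(list(encoder.parameters()) + list(planner.parameters()),
                             lr=1e-3, betas=(0.9, 0.999))

def train_step(cloud, c_t, c_goal, c_next_gt):
    """One supervised step: jointly train encoder and planner to imitate the next
    ground-truth waypoint, using the squared Euclidean distance as the loss."""
    phi = encoder(cloud).max(dim=0).values        # (252,) global obstacle feature
    c_next = planner(torch.cat([phi, c_t, c_goal]))
    loss = ((c_next - c_next_gt) ** 2).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy call with random tensors standing in for one training sample.
loss = train_step(torch.rand(1400, 2), torch.rand(2), torch.rand(2), torch.rand(2))
```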
\begin{table} \begin{tabular}{l|l l} \hline & Workstation & Nvidia Jetson Nano \\ \hline \multirow{2}{*}{CPU} & Intel Xeon W-2235 & ARM Cortex-A57 \\ & @3.8GHz, 12C & @1.43GHz, 4C \\ \hline GPU & Nvidia GeForce RTX 3090 & 128-core Nvidia Maxwell \\ \hline DRAM & 64GB (DDR4) & 4GB (DDR4) \\ \hline \multirow{2}{*}{OS} & Ubuntu 20.04.6 & Nvidia JetPack 4.6.3 \\ & (based on Ubuntu 18.04) \\ \hline \multirow{2}{*}{Python PyTorch} & 3.8.2 & 3.6.15 \\ & 1.11.0 (w/ CUDA 11.3) & 1.10.0 (w/ CUDA 10.2) \\ \hline \multirow{2}{*}{CPU} & Nvidia Jetson Xavier XX & Xilinx ZCU104 Evaluation Kit \\ \hline \multirow{2}{*}{CPU} & Nvidia Carmel ARM v8.2 & ARM Cortex-A53 \\ & @1.4GHz, 6C & @1.2GHz, 4C \\ \hline GPU & 384-core Nvidia Volta & – \\ \hline FPGA & – & XCZU7EV-2FFVC1156 \\ \hline DRAM & 8GB (DDR4) & 2GB (DDR3) \\ \hline \multirow{2}{*}{OS} & Nvidia JetPack 5.1 & Pyn Linux v2.7 \\ & (based on Ubuntu 20.04) & (based on Ubuntu 20.04) \\ \hline Python & 3.8.2 & 3.8.2 \\ PyTorch & 1.14.0a0 (w/ CUDA 11.4) & 1.10.2 \\ \hline \end{tabular} \end{table} Table 1: Evaluation Environments ### Planning Success Rate First, the tradeoff between planning success rate and the average computation time per task is evaluated on the workstation with GPU acceleration. #### 6.3.1 Comparison of Encoding and Planning Networks MPNet is executed under three combinations of models, i.e., {E, P}Net (original), ENetLite-PNet (**ELite**), and {E, P}NetLite (**EPLite**), with a varying number of replan iterations \(I_{\mathrm{Replan}}=\{10,20,50,100\}\). For P3Net, the refinement step is not performed (\(I_{\mathrm{Refine}}=0\)). The results on MPNet and P3Net test datasets are shown in Fig. 12 (left). In the 2D case (1st/3rd row), replacing {E, P}Net with {E, P}NetLite yields substantially higher success rate and even faster computation time, while reducing the parameters by 32.32x (Sec. 4.2.2). For \(I_{\mathrm{Replan}}=10\), MPNet with ELite setting is 15.58% (69.17/84.75%) and 23.75% (45.15/68.90%) more successful than the original setting on MPNet (Unseen) and P3Net datasets, respectively; PNetLite further improves the success rate by 3.48% (84.75/88.23%) and 4.45% (68.90/73.35%). MPNet (EPLite) is 1.40x (0.067/0.048s) and 1.12x (0.131/0.117s) faster than MPNet (original), indicating that the proposed models help MPNet algorithm to quickly find a solution in a less number of replan attempts. This empirically validates the discussion in Sec. 4.2.1 that the shallower PNet is sufficient since the PointNet encoder produces more robust and informative features. Considering the success rate improvements (19.06/28.20%) in these two datasets, the proposed models offer greater performance advantages in more difficult problem settings. In the 3D case (Fig. 12 (2nd/4th row, left)), MPNet (EPLite) maintains the same level of performance as MPNet (original), while achieving 5.43x parameter reduction (Sec. 4.2.2). ENetLite improves the success rate by 1.46% (89.82/91.27%) and 3.70% (79.35/83.05%), whereas PNetLite slightly lowers it by 0.40% (91.27/90.87%) and 3.95% (83.05/79.10%) on MPNet (Unseen) and P3Net datasets. This performance loss is compensated by the P3Net planner. Comparing the results from MPNet \(\mathsf{Seen}\)/\(\mathsf{Unseen}\) datasets (dashed/solid lines), the difference in the success rate is at most 3.46%, which confirms that our proposed models generalize to workspaces that are not observed during training. #### 6.3.2 Comparison of P3Net with MPNet Fig. 
12 (left) also highlights the advantage of P3Net (\(B=\{1,2,4,8\},I_{\mathrm{Replan}}=\{10,20,50,100\}\)). Though MPNet exhibits gradual improvement in success rate with increasing \(I_{\mathrm{Replan}}\), P3Net consistently outperforms MPNet and achieves nearly 100% success rate14. In the 2D case (1st/3rd row), P3Net (\(B,I_{\mathrm{Replan}}=8,100\)) is 2.80% (96.30/99.10%) and 9.85% (87.60/97.45%) more likely to find a solution in 2.20x (0.101/0.046s) and 1.55x (0.326/0.210s) shorter time than MPNet (EPLite, \(I_{\mathrm{Replan}}=100\)) on MPNet (Unseen) and P3Net datasets. While MPNet shows a noticeable drop in success rate (96.30/87.69%) when tested on P3Net dataset, P3Net only shows a 1.65% drop (99.10/97.45%) and maintains the high success rate in a more challenging dataset. In the 3D case (2nd/4th row), it is 2.91% (96.69/99.60%) and 8.00% (91.75/99.75%) better while spending 2.70x (0.081/0.030s) and 1.02x (0.277/0.272s) less time. Footnote 14: Note that the success rate of P3Net (\(B=1\)) surpasses that of MPNet (EPLite), since the former performs more collision checks to connect a pair of forward-backward paths in each iteration (Alg. 1, line 21 and Alg. 3, lines 9-17). Notably, increasing the batch size \(B\) improves both success rate and speed, which clearly indicates the effectiveness of the batch planning strategy (Sec. 4.1.1). On P3Net2D dataset (3rd row), P3Net with \(B,I_{\mathrm{Replan}}=8,10\) is 5.25% more successful and 1.88x faster than with \(B=1\) (79.40/84.65%, 0.128/0.068s). The number of initial planning attempts \(I_{\mathrm{Init}}\) also affects the performance; increasing it from 1 to 5 yields a 2.85% better success rate on P3Net2D dataset (\(I_{\mathrm{Replan}}=10\)). Table 2 compares the success rate of P3NetFPGA and P3Net; despite the use of fixed-point arithmetic and a simple pseudorandom generator, P3NetFPGA attains a similar performance. #### 6.3.3 Comparison with Sampling-based Methods Fig. 12 (right) plots the results from sampling-based methods. The number of iterations is set to \(\{200,300,400,500\}\) for RRT*/IRRT*, and \(\{50,100,200\}\) for BIT*/ABIT*. As expected, sampling-based methods exhibit a higher success rate with increasing iterations; they are more likely to find a solution as they place more random nodes inside a workspace and build a denser tree. P3Net achieves a success rate comparable to ABIT*, and outperforms the other methods. On P3Net2D dataset, P3Net (\(B,I_{\mathrm{Replan}}=8,100\)) plans a path in an average of 0.214s with 97.60% success rate, which is 1.84/5.69x faster and 0.95/2.20% better than ABIT*/BIT* (200 iterations). ### Computation Time Fig. 13 visualizes the distribution of computation time measured on the workstation and SoC devices (Table 1). The sampling-based methods were run on the CPU. On Nvidia Jetson, MPNet and P3Net were executed with GPU. WS {CPU, GPU} refers to the workstation with and without GPU acceleration. On the basis of results from Sec. 6.3, hyperparameters of the planners were selected to achieve similar success rates. For a fair comparison, the PS-PL data transfer overhead is included in P3NetFPGA. As expected, P3NetFPGA is faster than the other planners on ZCU104 and Jetson (green, brown, skyblue) in most cases, and its median computation time is below 0.1s. In the 2D case (Fig. 13 (left)), it even outperforms sampling-based methods on the WS CPU (pink), and is comparable to MPNet/P3Net on the WS GPU (orange). 
On P3Net2D dataset, P3NetFPGA takes 0.062s in the median to solve a task, which is 2.15x, 1.13x, 6.11x, and 17.52x faster than P3Net (GPU), MPNet (GPU), ABIT*, and BIT* on the workstation. P3Net offers more performance advantages in a more challenging dataset. On the ZCU104 and P3Net/MPNet2D dataset, P3NetFPGA is 15.57x/9.17x faster than MPNet (CPU). In the 3D case (Fig. 13 (right)), while MPNet on the workstation looks faster than P3NetFPGA, it only solves easy planning tasks which require less replan trials, and the result shows around 10% lower success rate. We observe a significant reduction of the variance in computation time. On the ZCU104 and P3Net dataset (Fig. 13, bottom right), P3Net solves a task in 7.623\(\pm\)19.987s on average, which is improved to 4.651\(\pm\)8.909s and 0.145\(\pm\)0.170s in P3Net (CPU, FPGA). As seen from the results of WS {CPU, GPU}, GPU acceleration of the DNN inference does not Figure 12: Success rate and computation time tradeoffs (MPNet{2D, 3D}, P3Net{2D, 3D} dataset from top to bottom, measured on the workstation with GPU acceleration). Upper left is better. contribute to the overall performance improvement; since MPNet/P3Net alternates between collision checks (CPU) and PNet inference (GPU), the frequent CPU-GPU data transfer undermines the performance gain obtained by GPU. #### 6.4.1 Path Planning Speedup Fig. 14 shows the performance gain of P3NetFPGA over the other planners on the workstation and SoC devices. In the 2D case, P3NetFPGA is the fastest among the methods considered. On P3Net dataset (bottom left), it achieves 24.54-149.57x, 10.74-115.25x, 6.19-47.36x, and 2.34-10.76x speedups over the ZCU104, Jetson Nano, Jetson Xavier NX, and a workstation, respectively. Offloading the entire planning algorithm (collision checks and neural planning in Alg. 3) to the dedicated IP core eliminates unnecessary data transfers and brings more performance benefits than \begin{table} \begin{tabular}{c c c c|c c c c} \hline \hline \multicolumn{3}{c|}{MPNet (2D) dataset} & \multicolumn{5}{c}{MPNet (3D) dataset} \\ \hline \(B\) & \(I_{\mathrm{Replan}}\) & \% (w/) & \% (w/o) & \(B\) & \(I_{\mathrm{Replan}}\) & \% (w/) & \% (w/o) \\ \hline 4 & 10 & 91.70 & 91.26 & 4 & 10 & 97.88 & 97.88 \\ 4 & 20 & 95.18 & 94.84 & 4 & 20 & 99.07 & 99.47 \\ 4 & 50 & 98.54 & 98.54 & 4 & 50 & 99.60 & 99.60 \\ 4 & 100 & 99.78 & 99.66 & 4 & 100 & 99.60 & 99.60 \\ \hline \hline \multicolumn{3}{c|}{P3Net (2D) dataset} & \multicolumn{5}{c}{P3Net (3D) dataset} \\ \hline \(B\) & \(I_{\mathrm{Replan}}\) & \% (w/) & \% (w/o) & \(B\) & \(I_{\mathrm{Replan}}\) & \% (w/) & \% (w/o) \\ \hline 4 & 10 & 84.75 & 84.25 & 4 & 10 & 94.95 & 95.60 \\ 4 & 20 & 91.40 & 91.00 & 4 & 20 & 98.90 & 98.90 \\ 4 & 50 & 95.45 & 95.45 & 4 & 50 & 99.60 & 99.75 \\ 4 & 100 & 98.25 & 97.80 & 4 & 100 & 99.80 & 100.0 \\ \hline \hline \end{tabular} \end{table} Table 2: Success rate of P3NetFPGA and P3Net (w/ and w/o P3NetCore) Figure 13: Computation time distribution (top: MPNet, bottom: P3Net dataset, left: 2D, right: 3D dataset). Hyperparameters are as follows. P3Net2D: \((B,I_{\mathrm{Init}},I_{\mathrm{Replan}},I_{\mathrm{Refine}})=(4,5,50,5)\), P3Net3D: \((B,I_{\mathrm{Init}},I_{\mathrm{Replan}},I_{\mathrm{Refine}})=(4,5,20,5)\), MPNet2D/3D: [E, P]NetLite, \(I_{\mathrm{Replan}}=100\), ABIT*/BIT* (2D): 100/200 iterations for MPNet/P3Net dataset, ABIT*/BIT* (3D): 50 iterations, IRRT*/RRT* (2D/3D): 500 iterations. The red dashed line is a median time of P3NetFPGA. -highend CPUs. 
In this evaluation, we compared the sum of execution times for successful planning tasks to compute average speedup factors. Unlike MPNet, P3Net also performs an extra refinement phase (\(I_{\text{Refine}}=5\)). This means P3NetPGA completes more planning tasks in a shorter period of time than MPNet (e.g., 7.85% higher success rate and 2.92x speedup than MPNet (WS GPU)). Additionally, while P3NetFPGA shows a 1.13x speedup over P3Net (WS GPU) in terms of median time (Fig. 13 (bottom left)), the total execution time is reduced by 2.92x (Fig. 14 (bottom left)). This implies P3Net solves challenging tasks much faster than MPNet, which is attributed to the improved algorithm and model architecture. Also in the 3D case, P3NetFPGA outperforms the other planners except a few cases. On P3Net3D dataset (Fig. 14 (bottom right)), it provides 10.03-59.47x, 6.69-28.76x, 3.38-14.62x, and 0.79-3.65x speedups compared to the ZCU104, Jetson Nano, Jetson Xavier NX, and a workstation, respectively. #### 6.4.2 Computation Time Breakdown The computation time breakdown of MPNet/P3Net is summarized in Fig. 15. P3NetCore effectively reduces the execution time of all three phases. On P3Net2D dataset (bottom left), the replanning phase (pink) took 3.041s and accounted for 89.24% of the entire execution time in MPNet, which is almost halved to 1.557s (51.37%) in P3Net (CPU), and is brought down to only 0.062s (51.46%) in P3NetFPGA. The saved time can be used to perform the additional refinement step (orange) and improve the quality of solutions. #### 6.4.3 Speedup of Inference and Collision Checking Table 3 (top) lists the inference time of {E, P}Net and {E, P}NetLite, measured on the ZCU104 with and without the P3NetCore. The data is the average of 50 runs. As mentioned in Sec. 5.1, ENetLite basically involves \(N\) times of forward passes to compute pointwise features, which increases the inference time by 3.78/1.57x than ENet2D/3D (\(N=1400,2000\)). P3NetCore accelerates the inference by 45.47/45.34x and as a result achieves 12.01/28.88x faster feature extraction than ENet. P3NetCore consistently attains around a 45x speedup on a wide range of \(N\), owing to the combination of inter-layer pipelining (Fig. 7 (top)) and parallelization within each layer. The inference time increases proportionally to \(N\), which corresponds to the \(\mathcal{O}(N)\) complexity of ENetLite, and P3NetCore yields a better performance gain with a larger \(N\) (e.g., 45.47/47.13x in \(N=1400,4096\), 2D). Figure 14: Average speedup factors (top: MPNet, bottom: P3Net, left: 2D, right: 3D dataset). Hyperparameter settings are the same as in Fig. 13. PNetLite has a 28.87/2.32x shorter inference time than PNet2D/3D, mainly due to the 32.58/2.35x parameter reduction (Sec. 4.2.2). It does not grow linearly with the batch size \(B\), indicating that the batch planning strategy in P3Net effectively improves success rates without incurring a significant additional overhead. With P3NetCore, PNetLite is sped up by 10.15-11.49/16.55-17.44x, resulting in an overall speedup of 280.45-326.25/38.27-40.48x over PNet. Table 3 (bottom) presents the computation time of collision checking on the ARM Cortex CPU and P3NetCore. We conducted experiments with four random workspaces of size 40 containing different numbers of obstacles \(N^{\mathrm{obs}}\), and 50 random start-goal pairs with a fixed distance of \(d=5.0,20.0\) for each workspace. The interval \(\delta\) is set to 0.01. 
P3NetCore gives a larger speedup with the increased problem complexity (larger \(d\) or \(N^{\mathrm{obs}}\)). It runs 2D/3D collision checking 10.49-55.19/31.92-221.45x (11.34-60.56/30.82-181.12x) faster in case of \(d=5.0\) and \(20.0\), respectively. The parallelization with an array of pipelined submodules (Sec. 5.1.3) contributes to the two orders of magnitude performance improvement. The result from P3NetCore does not show a linear increase with \(N^{\mathrm{obs}}\), as P3NetCore terminates the checks as soon as any of midpoints between start-goal points is found to be in collision, and such early-exit is more likely to occur in a cluttered workspace with more obstacles. A slight latency jump at \(N^{\mathrm{obs}}=128\) is due to the limited buffer size for obstacle positions and the increased data transfer from DRAM. \begin{table} \begin{tabular}{l|r r r|r r r} \hline & \multicolumn{3}{c|}{2D} & \multicolumn{3}{c}{3D} \\ \hline Model & \(N\) & CPU (ms) & **IP** (ms) & \(N\) & CPU (ms) & **IP** (ms) \\ \hline ENet & 1400 & 43.25 & – & 2000 & 146.40 & – \\ \hline \multirow{3}{*}{**ENetLite**} & 1400 & 163.70 & 3.60 & 2000 & 229.85 & 5.07 \\ & 2048 & 242.05 & 5.18 & 4096 & 478.06 & 10.15 \\ & 4096 & 477.47 & 10.13 & 8192 & 950.91 & 20.11 \\ \hline Model & \(B\) & CPU (ms) & **IP** (ms) & \(B\) & CPU (ms) & **IP** (ms) \\ \hline \multirow{3}{*}{PNet} & 1 & 102.77 & – & 1 & 103.22 & – \\ & 2 & 107.79 & – & 2 & 108.31 & – \\ & 4 & 130.41 & – & 4 & 130.68 & – \\ \hline \multirow{3}{*}{**PNetLite**} & 1 & 3.56 & 0.315 & 1 & 44.48 & 2.55 \\ & 2 & 4.02 & 0.350 & 2 & 46.85 & 2.83 \\ & 4 & 4.72 & 0.465 & 4 & 56.63 & 3.38 \\ \hline \hline \multirow{3}{*}{} & \multicolumn{3}{c|}{2D} & \multicolumn{3}{c}{3D} \\ \hline Dist. & \(N^{\mathrm{obs}}\) & CPU (ms) & **IP** (ms) & \(N^{\mathrm{obs}}\) & CPU (ms) & **IP** (ms) \\ \hline \multirow{3}{*}{5.0} & 16 & 3.02 & 0.288 & 16 & 3.29 & 0.290 \\ & 32 & 5.23 & 0.290 & 32 & 5.75 & 0.291 \\ & 64 & 9.65 & 0.284 & 64 & 10.68 & 0.291 \\ & 128 & 18.49 & 0.335 & 128 & 20.59 & 0.340 \\ \hline \multirow{3}{*}{20.0} & 16 & 10.12 & 0.317 & 16 & 11.22 & 0.364 \\ & 32 & 18.97 & 0.300 & 32 & 21.18 & 0.380 \\ \cline{1-1} & 64 & 36.91 & 0.301 & 64 & 41.09 & 0.386 \\ \cline{1-1} & 128 & 72.86 & 0.329 & 128 & 80.78 & 0.446 \\ \hline \end{tabular} \end{table} Table 3: Latency for inference (top) and collision checking (bottom) on Xilinx ZCU104 Figure 15: Computation time breakdown of P3NetFPGA in comparison with MPNet and P3Net (top: MPNet, bottom: P3Net, left: 2D, right: 3D dataset). Hyperparameter settings are the same as in Fig. 13. ### Path Cost This subsection evaluates the quality of solutions returned from P3Net in comparison with the other planners. The relative path cost is used as a quality measure; it is computed by dividing a length of the output path by that of the ground-truth available in the dataset15. Fig. 16 shows the results on the MPNet/P3Net datasets (\(I_{\text{Refine}}\) is set to 5). Footnote 15: Since ground-truth paths were obtained by running sampling-based planners with a large number of iterations, the relative cost may become less than one when a planner finds a better path with a smaller cost than the ground-truth. P3Net (skyblue) produces close-to-optimal solutions and an additional refinement step further improves the quality of outputs (red). 
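The relative path cost defined above can be computed as in the short sketch below; `path_length` and `relative_cost` are illustrative helper names, and paths are taken as \((T+1,D)\) arrays of waypoints.

```python
import numpy as np

def path_length(path):
    """Total Euclidean length of a piecewise-linear path given as (T+1, D) waypoints."""
    path = np.asarray(path, float)
    return float(np.linalg.norm(np.diff(path, axis=0), axis=1).sum())

def relative_cost(path, ground_truth_path):
    """Relative path cost: output path length divided by the ground-truth path length."""
    return path_length(path) / path_length(ground_truth_path)

# Toy example: a slightly longer detour relative to a reference path.
print(relative_cost([[0, 0], [5, 4], [10, 0]], [[0, 0], [5, 3], [10, 0]]))  # ~1.10
```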
On P3Net2D dataset, P3Net with/without refinement achieves the median cost of 1.001/1.026, which is comparable to the sampling-based methods (1.038, 1.012, 1.174, and 1.210 in ABIT*, BIT*, IRRT*, and RRT*). If we apply smoothing and remove unnecessary waypoints from the solutions in these sampling-based methods, the median cost reduces to 1.036, 1.011, 0.996, and 1.000, respectively; while IRRT* and RRT* seem to provide better quality solutions, their success rates (85.35/86.00%) are lower than P3Net (95.45%). These results confirm that PNetLite samples waypoints that are in close proximity to the optimal path, and helps the algorithm to quickly find a solution by creating only a few waypoints. Fig. 17 plots the path costs over time on the three planning tasks taken from P3Net2D dataset. We run P3Net and IRRT* for 100 seconds and recorded the evolution of path costs to verify the effectiveness of the refinement step. P3NetFPGA (red) finds an initial solution within 0.1s and reaches a close-to-optimal solution within 1.0s, unlike IRRT* (skyblue) that still does not converge after 100s. Figs. 1-2 show examples of the output paths obtained from MPNet and P3NetFPGA on the P3Net dataset. ### FPGA Resource Consumption Table 4 summarizes the FPGA resource consumption of P3NetCore (\(B=4\)). The BRAM consumption is mainly due to the buffers for model parameters and layer outputs. In the 2D case, {E, P}NetLite fits within the BRAM thanks to Figure 16: Distribution of the relative path cost (top: MPNet, bottom: P3Net, left: 2D, right: 3D dataset). Figure 17: Evolution of the cost on P3Net2D dataset. the parameter reduction and memory-efficient sequential feature extraction (Sec. 5.1.1), which minimizes the memory access latency and brings two orders of magnitude speedup as seen in Table 3 (top left). Since the original {E, P}Net has 32.32x more parameters, they should be stored on the external memory, which would limit the performance gain. While P3NetCore is a package of the three modules with a collision checker and two DNNs, it consumes less than 30% of the onboard DSP, FF, and LUT resources available in both 2D/3D cases. These results demonstrate the resource efficiency of P3NetCore. The batch size \(B\) could be set larger (e.g., 8) for further performance (Fig. 12). ### Power Efficiency Finally, we compare the power consumption of P3NetFPGA with that of P3Net and ABIT*. tegrastats is used to measure the power of Nvidia Jetson. For the workstation, we used 8-tui [50] and nvidia-smi to record the power consumption of Intel Xeon W-2235 and Nvidia GeForce RTX 3090. As shown in Fig. 11, we measured the power of the entire ZCU104 board using a Texas Instruments INA219 power monitor (skyblue). We run each planner with P3Net2D/3D datasets for five minutes and averaged the power consumption. To exclude the power consumption of peripherals (e.g., switches and LEDs on the ZCU104), we subtracted the average power consumption in the idle state. Table 5 presents the results. In the 2D (3D) case, P3NetFPGA consumes 124.51x, 147.80x, and 448.47x (39.64x, 45.60x, and 145.48x) less power than ABIT*, P3Net (CPU), and P3Net (GPU) on the workstation. It also achieves 4.40-5.38/4.58-6.10x (1.36-1.77/1.55-2.07x) power savings over ABIT*/P3Net on Nvidia Jetson. While P3NetFPGA consumes slightly more power than P3Net in the 3D case, this indicates that the power consumption of the P3NetCore itself is at most 0.318W (0.809 - 0.491). Combined with the results from Fig. 
14 (bottom), P3NetFPGA offers 65.22-171.71/28.35-65.54x (5.80-17.50/4.53-19.77x) power efficiency than ABIT*/P3Net on ARM Cortex and Nvidia Jetson in the 2D (3D) case. The power efficiency reaches 422.09x, 354.73x, and 1049.42x (44.40x, 43.78x, and 133.84x) when compared with ABIT*, P3Net (CPU), and P3Net (GPU) on the workstation. ## 7 Conclusion In this paper, we have presented a new learning-based path planning method, P3Net. P3Net aims to address the limitations of the recently-proposed MPNet by introducing two algorithmic improvements: it (1) plans multiple paths in parallel for computational efficiency and higher success rate, and (2) iteratively refines the solution. In addition, P3Net (3) employs hardware-amenable lightweight DNNs with 32.32/5.43x less parameters to extract robust features and sample waypoints from a promising region. We designed P3NetCore, a custom IP core incorporating neural path planning and collision checking, to realize a planner on a resource-limited edge device that finds a path in \(\sim\)0.1s while consuming \(\sim\)1W. P3NetCore was implemented on the Xilinx ZCU104 board and integrated to P3Net. Evaluation results successfully demonstrated that P3Net achieves a significantly better tradeoff between computational cost and success rate than MPNet and the state-of-the-art sampling-based methods. In the 2D (3D) case, P3NetFPGA obtained 24.54-149.57x and 6.19-115.25x (10.03-59.47x and 3.38-28.76x) average speedups over ARM Cortex and \begin{table} \begin{tabular}{c|c|c c c c c} \hline \hline & & BRAM & URAM & DSP & FF & LUT \\ \hline & Total & 312 & 96 & 1728 & 460800 & 230400 \\ \hline \multirow{2}{*}{2D} & Use & 275 & – & 462 & 38656 & 57404 \\ & \% & 88.14 & – & 26.74 & 8.39 & 24.91 \\ \hline \multirow{2}{*}{3D} & Use & 264 & 42 & 484 & 50653 & 69410 \\ & \% & 84.62 & 43.75 & 28.01 & 10.99 & 30.13 \\ \hline \hline \end{tabular} \end{table} Table 4: FPGA resource consumption of P3NetCore (\(B=4\)) \begin{table} \begin{tabular}{l|c c c|c c c c} \hline \hline & \multicolumn{3}{c|}{2D} & \multicolumn{3}{c}{3D} \\ \hline \multirow{2}{*}{Machine} & ABIT* & \multicolumn{2}{c|}{P3Net} & ABIT* & \multicolumn{2}{c}{P3Net} \\ & CPU & CPU & +GPU & **+IP** & CPU & CPU & +GPU & **+IP** \\ \hline ZCU104 & 0.480 & 0.461 & – & 0.255 & 0.480 & 0.491 & – & 0.809 \\ Nano & 1.373 & – & 1.556 & – & 1.434 & – & 1.678 & – \\ Xavier & 1.123 & – & 1.168 & – & 1.097 & – & 1.250 & – \\ WS & 31.75 & 37.69 & 114.36\({}^{*}\) & – & 32.07 & 36.89 & 117.69\({}^{*}\) & – \\ \hline \hline \end{tabular} \({}^{*}\)114.36W: 29.72 + 84.64 (CPU/GPU); 117.69W: 32.51 + 85.17 (CPU/GPU) \end{table} Table 5: Comparison of the power consumption (W) Nvidia Jetson, respectively, and its performance was even comparable to the workstation. P3NetFPGA was 28.35-1049.42x and 65.22-422.09x (4.53-133.84x and 5.80-44.40x) more power efficient than P3Net and ABIT* on the Nvidia Jetson and workstation, showcasing that FPGA SoC is a promising solution for the efficient path planning. We also confirmed that P3Net converges fast to the close-to-optimal solution in most cases. P3Net is currently evaluated on the simulated static 2D/3D environments, and the robot is modeled as a point-mass. As a future work, we plan to extend P3Net to more complex settings, e.g., multi-robot problems, dynamic environments, and higher state dimensions. 
P3NetCore employs a standard fixed-point format for DNN inference; while it already provides satisfactory performance improvements, low-precision formats and model compression techniques (e.g., pruning, low-rank factorization) could be used to further improve the resource efficiency and speed.
Path planning is a key component in realizing the autonomy of mobile robots. However, since the computational resources of mobile robots are limited, it is not easy to adopt state-of-the-art methods while achieving real-time performance. To address this challenge, we proposed P3Net (PointNet-based Path Planning Network), a lightweight deep-learning-based method for 2D/3D path planning, and designed an IP core (P3NetCore) targeting an FPGA SoC (Xilinx ZCU104). P3Net improves the algorithm and model architecture of MPNet. P3Net uses a PointNet backbone and a lightweight planning network to extract robust point cloud features and sample waypoints from a promising region. P3NetCore comprises a fully-pipelined point cloud encoder, a batched bidirectional path planner, and a parallel
2309.08180
AVM-SLAM: Semantic Visual SLAM with Multi-Sensor Fusion in a Bird's Eye View for Automated Valet Parking
Accurate localization in challenging garage environments -- marked by poor lighting, sparse textures, repetitive structures, dynamic scenes, and the absence of GPS -- is crucial for automated valet parking (AVP) tasks. Addressing these challenges, our research introduces AVM-SLAM, a cutting-edge semantic visual SLAM architecture with multi-sensor fusion in a bird's eye view (BEV). This novel framework synergizes the capabilities of four fisheye cameras, wheel encoders, and an inertial measurement unit (IMU) to construct a robust SLAM system. Unique to our approach is the implementation of a flare removal technique within the BEV imagery, significantly enhancing road marking detection and semantic feature extraction by convolutional neural networks for superior mapping and localization. Our work also pioneers a semantic pre-qualification (SPQ) module, designed to adeptly handle the challenges posed by environments with repetitive textures, thereby enhancing loop detection and system robustness. To demonstrate the effectiveness and resilience of AVM-SLAM, we have released a specialized multi-sensor and high-resolution dataset of an underground garage, accessible at https://yale-cv.github.io/avm-slam_dataset, encouraging further exploration and validation of our approach within similar settings.
Ye Li, Wenchao Yang, Dekun Lin, Qianlei Wang, Zhe Cui, Xiaolin Qin
2023-09-15T06:11:14
http://arxiv.org/abs/2309.08180v2
AVM-SLAM: Semantic Visual SLAM with Multi-Sensor Fusion in a Bird's Eye View for Automated Valet Parking ###### Abstract Automated Valet Parking (AVP) requires precise localization in challenging garage conditions, including poor lighting, sparse textures, repetitive structures, dynamic scenes, and the absence of Global Positioning System (GPS) signals, which often pose problems for conventional localization methods. To address these adversities, we present AVM-SLAM, a semantic visual SLAM framework with multi-sensor fusion in a Bird's Eye View (BEV). Our framework integrates four fisheye cameras, four wheel encoders, and an Inertial Measurement Unit (IMU). The fisheye cameras form an Around View Monitor (AVM) subsystem, generating BEV images. Convolutional Neural Networks (CNNs) extract semantic features from these images, aiding in mapping and localization tasks. These semantic features provide long-term stability and perspective invariance, effectively mitigating environmental challenges. Additionally, data fusion from wheel encoders and IMU enhances system robustness by improving motion estimation and reducing drift. **To validate** AVM-SLAM's efficacy and robustness, we provide a large-scale, high-resolution underground garage dataset, available at [https://github.com/yale-cv/avm-slam](https://github.com/yale-cv/avm-slam). This dataset enables researchers to further explore and assess AVM-SLAM in similar environments. ## I Introduction Automated Valet Parking (AVP) has recently gained momentum as a solution to alleviate parking congestion and improve driver convenience. It is typically employed in semi-enclosed spaces with low speeds and no passengers, making it well-suited for achieving Level 4 autonomous driving. In this context, the primary technical challenge is achieving precise mapping and localization. However, parking garages often present challenges, such as poor lighting, limited texture diversity, repetitive architectural layouts, changing environmental conditions, and the absence of Global Positioning System (GPS) signals. These factors pose substantial obstacles for traditional localization methods. To tackle this challenge, we employ semantic attributes derived from garage road markings to build the map and localize the vehicle, as shown in Fig. 1. These semantic characteristics offer enduring stability and perspective invariance, including lane lines, parking spots, zebra crossings, and indicating arrows. Extracting these semantic features is accomplished using Convolutional Neural Networks (CNNs) working on Bird's Eye View (BEV) image. The BEV image is captured by surrounding cameras and generated by the Around View Monitor (AVM) subsystem. In certain extreme scenarios, the extraction of semantic features might encounter difficulties. As technology has progressed, wheel encoders and IMUs have become more affordable and nearly standard equipment in vehicle sensor arrays. Consequently, a multi-sensor hybrid fusion approach has been devised, distinct from both loosely and tightly methods, to amalgamate data from various sensors, resulting in the development of the AVM-SLAM system, which stands for "a semantic visual SLAM with multi-sensor fusion in a bird's-eye view perspective." Furthermore, a large-scale and high-resolution dataset has been created and published, comprising synchronized multi-sensor data collected within an underground garage. This dataset serves to validate the effectiveness and robustness of the aforementioned methods. 
In summary, the primary contributions of this article can be outlined as follows: * A pragmatic and innovative semantic visual SLAM framework, known as AVM-SLAM, has been developed. It integrates multiple sensors and employs a bird's-eye view perspective, enabling robust and efficient operation within underground garages. * A multi-sensor hybrid fusion strategy has been developed to enhance robustness and efficiency, distinct from both loosely and tightly approaches. This strategy aims to maximize the benefits of multi-sensor fusion. * A large-scale and high-resolution dataset of typical underground garage data, contains four surround cameras' images, one synthesized bird's eye view, four wheel encoders' measurements and an IMU's measurements. This dataset will be beneficial for further research in SLAM, especially for autonomous vehicle localization in underground garages. Fig. 1: Semantic visual map of the garage build by our AVM-SLAM system. It fuses data from surround view cameras, wheel encoder and IMU in a bird’s eye view. ## II Related Work ### _Traditional Visual SLAM and Lidar SLAM_ Over the past decade, traditional SLAM fields have witnessed the emergence of numerous exceptional solutions. These solutions typically employ conventional texture and 3D structural features of the environment for mapping and localization purposes. Examples include ORB-SLAM [1, 2, 3], which relies on the ORB visual feature, SVO [4], and DSO [5], both leveraging optical flow. Additionally, Cartographer [6], LOAM [7], and LeGO-LOAM [8] are based on 2D/3D lidar structural features. These approaches have demonstrated impressive performance in scenarios characterized by ample lighting, rich textures, and well-defined structures. ### _Methods of Multi-Sensor Fusion_ Both lidar and visual sensors have inherent limitations. Lidar is sensitive to structural information and can fail in structure-scarce environments. Visual sensors are sensitive to texture and can fail in scenarios with poor textures, low lighting, or overexposure. In recent years, SLAM technology has shifted towards multi-sensor fusion, progressing from visual-inertial fusion to lidar-inertial fusion and ultimately fusing lidar, visual, inertial sensors, wheel encoders, and GPS data. Notable advancements include VINS-Mono [9], VINS-Fusion [10], and OpenVINS [11] for visual-inertial fusion, LIC-Fusion [12] and VIL-SLAM [13] for lidar-visual-inertial fusion, [14] and [15] for enhanced fusion with wheel encoders, VIWO [16] for sliding-window filtering to fuse multi-modal data, and [17] for introducing wheel encoder pre-integration theory and noise propagation formula, enabling tight integration with sensor data. By amalgamating data from multiple sensors, these approaches significantly enhance simultaneous localization and mapping robustness and accuracy. ### _Semantic Visual SLAM for AVP_ Semantics-enhanced visual SLAM is crucial for autonomous driving, particularly AVP applications. Challenges in garage environments persist for both visual and lidar SLAM, driving the need for enduring semantic features. Research by H. Grimmett [18] and U. Schwesinger [19] explores mapping and localization using surround cameras and semantic features. Z. Xiang [20] extracts free space contours for pose tracking from a bird's-eye view, creating virtual lidar points. J. Hu [21] and T. Qin [22] develop a comprehensive visual SLAM system using road markings for mapping and parking facility localization. X. 
Shao [23] establishes tightly-coupled semantic SLAM with visual, inertial, and surround-view sensors. Z. Xiang [24] utilizes hybrid edge information from bird's-eye view images to enhance semantic SLAM, while C. Zhang [25] leverages HD vector map directories for parking lot localization. These studies offer valuable insights for future research. ## III Framework This paper introduces the AVM-SLAM system, consisting of two core modules: VIWFusion and Mapping, as shown in Fig. 2. Our design utilizes a unique multi-sensor hybrid fusion strategy, departing from traditional approaches and ensuring seamless collaboration between these modules for maximum multi-sensor fusion benefits. VIWFusion is a loosely-coupled, multi-sensor weighted fusion front-end, encompassing the AVM subsystem, Semantic Extractor and Matcher, IMU Tracker, Wheel Odometer, Pose Predictor, and Keyframe Filter. It applies weighted fusion to data from surround cameras, wheel encoders, and IMU sensors, rooted in Extended Kalman Filter (EKF) theory, providing initial values for visual semantic matching and kinematic constraints for subsequent back-end optimization through pre-integrated (IMU and wheel) values among adjacent semantic keyframes. The Mapping module is a tightly-coupled, semantic mapping back-end, comprising the Loop Detector, Global Optimizer, Sub Mapper, and Global Mapper. We utilize semantic ICP registration for loop detection, incorporating a semantic pre-qualification (SPQ) mechanism to streamline loop detection and minimize mismatches. Extra multi-sensor kinematic constraints, like pre-integrated values of IMU and wheel between adjacent keyframes, expedite global optimization convergence and enhance mapping accuracy. 
Fig. 2: The framework of the proposed AVM-SLAM system consists of two core modules: VIWFusion and Mapping. VIWFusion is a loosely coupled multi-sensor weighted fusion front-end, while the Mapping module serves as a tightly integrated semantic mapping back-end. \(w_{1}\) and \(w_{2}\) are the fusion weights for IMU and wheel odometry, respectively. 
## IV Method ### _Around View Monitor_ The AVM is crucial for generating BEV images and enhancing the SLAM system's perceptual range and robustness. It undistorts and applies Inverse Perspective Mapping (IPM) to fisheye images from the four surrounding cameras, merging them into a comprehensive BEV image (see Fig. 3). The four surrounding cameras are strategically positioned around the vehicle, with offline-calibrated intrinsic and extrinsic parameters. The virtual BEV camera is located above the vehicle's center, with its optical axis aligned vertically downward. Relevant intrinsic and extrinsic parameters for this virtual camera are derived through the IPM process. In this section, we use the widely accepted generic fisheye camera model proposed by Kannala and Brandt [26] to rectify distortion, applying the following equation: \[r(\theta)=k_{1}\theta+k_{2}\theta^{3}+k_{3}\theta^{5}+k_{4}\theta^{7}+k_{5} \theta^{9} \tag{1}\] where \(\theta\) is the angle between the principal axis and the incoming ray, \(r\) is the distance between the image point and the principal point, and \(k_{i}\) (\(i=1,\ldots,5\)) are the distortion correction coefficients, calculated by offline calibration. 
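As a concrete illustration of Eq. (1), the short Python sketch below evaluates the radial projection polynomial for a set of incidence angles; the coefficient values are placeholders for illustration only, not the calibrated parameters of the cameras used in this work.

```python
import numpy as np

def kannala_brandt_radius(theta, k):
    """Radial distance r(theta) of the generic fisheye model in Eq. (1).

    theta : angle(s) in radians between the incoming ray and the principal axis
    k     : five polynomial coefficients k1..k5 obtained from offline calibration
    """
    k1, k2, k3, k4, k5 = k
    return (k1 * theta + k2 * theta**3 + k3 * theta**5
            + k4 * theta**7 + k5 * theta**9)

# Placeholder coefficients (close to an equidistant lens); real values come from calibration.
k_demo = (1.0, -0.05, 0.01, -0.002, 0.0001)
thetas = np.deg2rad(np.array([0.0, 30.0, 60.0, 90.0]))
print(kannala_brandt_radius(thetas, k_demo))
```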
The IPM projection can be formulated as follows: \[\left[\begin{array}{c}u_{bev}\\ v_{bev}\\ 1\end{array}\right]=H*\left[\begin{array}{c}u_{undist}\\ v_{undist}\\ 1\end{array}\right] \tag{2}\] where \(\left[\begin{array}{cc}u_{undist}&v_{undist}\end{array}\right]\) is the pixel location in the undistorted fisheye image, \(\left[\begin{array}{cc}u_{bev}&v_{bev}\end{array}\right]\) is the pixel location in the bird's eye view, and \(H\) is the homography matrix from the undistorted fisheye image to the bird's eye view. ### _Semantic Extractor and Matcher_ * Flare Removal The ground's light reflection creates diverging flares in the bird's eye view, interfering with road marking line extraction (Fig. 4(a)). To address this issue, we propose a flare removal model based on the U-Net architecture [27], incorporating perceptual [28] and L1 losses for improved performance. We efficiently generate flare-removed data for model training by using a specular highlight detection algorithm [29] to create a highlight mask (Fig. 4(b)). This mask is then refined by merging it with manually annotated foreground information to eliminate false detections (Fig. 4(c)). Finally, we apply an image inpainting algorithm [30] to effectively remove highlights (Fig. 4(d)). This approach simplifies the labor-intensive process of labeling flare removal data. * Semantic Extractor The garage's road markings, including lane lines, parking spots, zebra crossings, and indicating arrows, exhibit enduring stability and remain perspective-invariant. These qualities make them ideal for semantic visual mapping and vehicle localization. We conducted a comparative analysis of semantic segmentation networks [31, 32, 33, 34] and ultimately selected DDRNet [34], which balances efficiency and accuracy, for extracting road markings from the BEV images. Fig. 4(e) illustrates the semantic labels, and Fig. 4(f) showcases the segmentation results. 
Fig. 3: Generating the bird's eye view from surround fisheye images with the AVM 
Fig. 4: Flare removal and semantic segmentation 
After segmentation, the results are downsampled for efficiency and then reconstructed into 3D space in the virtual BEV camera coordinate system, as shown below: \[\left[\begin{array}{c}x_{c}\\ y_{c}\\ z_{c}\end{array}\right]=K^{-1}*\left[\begin{array}{c}u_{bev}\\ v_{bev}\\ 1\end{array}\right] \tag{3}\] where \(K\) is the intrinsic parameter matrix of the virtual BEV camera, \(K^{-1}\) is its inverse, and \(\left[\begin{array}{cc}x_{c}&y_{c}&z_{c}\end{array}\right]\) is the 3D coordinate in the virtual BEV camera coordinate system; in this case, \(z_{c}\) of the semantic features on the ground is always equal to 1 meter. Finally, the 3D semantic features are transferred from the virtual BEV camera coordinate system into the vehicle coordinate system as follows: \[\left[\begin{array}{c}x_{v}\\ y_{v}\\ z_{v}\\ 1\end{array}\right]=T_{vc}*\left[\begin{array}{c}x_{c}\\ y_{c}\\ z_{c}\\ 1\end{array}\right] \tag{4}\] where \(\left[\begin{array}{cc}x_{v}&y_{v}&z_{v}\end{array}\right]\) is the 3D coordinate in the vehicle coordinate system and \(T_{vc}\) is the transformation matrix from the virtual BEV camera coordinate system to the vehicle coordinate system, which is calibrated offline. * Semantic Matcher In this task, we employ the Iterative Closest Point (ICP) algorithm for matching 3D semantic features. The cumulative error issue inherent in frame-to-frame matching is mitigated by implementing frame-to-map matching. 
This approach offers efficiency and robustness, particularly when equipped with a reliable initial pose estimation. The initial pose estimation is facilitated by a pose predictor that integrates data from an IMU and a wheel encoder. ### _Pose Predictor_ * System Initialization The proposed AVM-SLAM system is centered around BEV semantic features. Therefore, the pose predictor always considers the time \(t_{0}\) of the first frame in the BEV semantic frame data queue as the initial candidate time for initialization. To determine if system initialization is possible, we assess the fusion mode settings and check whether there is data in the selected sensors' data queues before time \(t_{0}\). System initialization occurs only when data is available at time \(t_{0}\) and earlier in all selected sensors' data queues. If these conditions are not met, we remove the first frame from the BEV semantic frame queue and continue evaluating the time \(t_{1}\) of the next semantic frame. The system initializes successfully when the specified conditions are met. At this point, the vehicle coordinate system serves as both the initial coordinate system for the global map and the initial coordinate system for the first submap. To enhance initialization accuracy, we perform linear interpolation on the data in the selected sensors' data queues to obtain data corresponding to the time of the relevant semantic frame. * Pose Prediction Wheel odometry provides more accurate linear velocity when the vehicle undergoes linear motion, whereas the IMU provides more accurate angular velocity during rotational motion; the two are therefore complementary. In order to improve the accuracy and robustness of the pose prediction, this paper adopts an EKF approach to perform a weighted fusion of data from multiple sensors. The fusion process is divided into a prediction step and an update step. The prediction step can be described by the prediction equations 5 and 6, as shown below: \[\hat{x}_{k} =f(\hat{x}_{k-1},u_{k-1}) \tag{5}\] \[P_{k} =F_{k}P_{k-1}F_{k}^{T}+Q_{k} \tag{6}\] where \(\hat{x}_{k}\) is the estimated state vector at time \(k\), \(f\) is the non-linear state transition function, \(u_{k-1}\) is the control input at time \(k-1\), \(P_{k}\) is the covariance matrix of the state estimation error at time \(k\), \(F_{k}\) is the Jacobian matrix of the state transition function with respect to the state variables, and \(Q_{k}\) is the process noise covariance matrix at time \(k\). The update step can be described by the update equations 7-9, as follows: \[K^{\prime} =P_{k}H_{k}^{T}(H_{k}P_{k}H_{k}^{T}+R_{k})^{-1} \tag{7}\] \[\hat{x}_{k}^{\prime} =\hat{x}_{k}+K^{\prime}(\overrightarrow{z_{k}}-h(\hat{x}_{k}))\] (8) \[P_{k}^{\prime} =(I-K^{\prime}H_{k})P_{k} \tag{9}\] where \(K^{\prime}\) is the Kalman gain matrix at time \(k\), \(H_{k}\) is the measurement Jacobian matrix at time \(k\), \(R_{k}\) is the measurement noise covariance matrix at time \(k\), \(\overrightarrow{z_{k}}\) is the measurement vector at time \(k\), \(h\) is the non-linear measurement function, and \(I\) is the identity matrix. Along with the multi-sensor weighted fusion pose prediction, we also pre-integrate the IMU and wheel odometry data between two consecutive keyframes for further optimization of the global pose-graph. ### _SubMapper and GlobalMapper_ To enhance the efficiency of semantic feature ICP matching between frames and the map, we employ a keyframe-submap-globalmap structure for constructing the semantic map (Fig. 5). 
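To make the keyframe-submap-global-map organization concrete, the sketch below shows one possible arrangement in Python; the class fields, the 10-keyframe capacity, and the half-overlap rule are assumptions based on the description in this section, not the authors' implementation.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class Keyframe:
    pose: np.ndarray      # 4x4 vehicle pose in the map frame
    points: np.ndarray    # Nx3 semantic points in the vehicle frame
    labels: np.ndarray    # N semantic class ids (lane, spot, arrow, ...)

@dataclass
class Submap:
    keyframes: list = field(default_factory=list)
    capacity: int = 10    # fixed number of keyframes per submap (assumed)

    def full(self) -> bool:
        return len(self.keyframes) >= self.capacity

class GlobalMap:
    """Global map assembled from finished submaps (keyframe-submap-globalmap)."""
    def __init__(self):
        self.submaps = []
        self.current = Submap()
        self.upcoming = Submap()   # overlapping 'next' submap for co-visibility

    def insert_keyframe(self, kf: Keyframe):
        self.current.keyframes.append(kf)
        # Start filling the next submap once the current one is half full, so
        # adjacent submaps share roughly 50% of their keyframes.
        if len(self.current.keyframes) > self.current.capacity // 2:
            self.upcoming.keyframes.append(kf)
        if self.current.full():
            self.submaps.append(self.current)   # finished submap joins the global map
            self.current, self.upcoming = self.upcoming, Submap()
```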
Semantic frames are filtered by the Keyframe Filter and inserted into the submap if they exhibit more than a 50% difference from the previous keyframe. Each submap contains a fixed number of keyframes, typically 10 frames, but this can be adjusted as needed. The number of semantic points in a submap is significantly lower than in the global map, improving efficiency during frame-to-submap semantic feature ICP matching and reducing the error accumulation of frame-to-frame matching. Within the Mapping module, we maintain two submaps: the current submap and the upcoming submap, ensuring sufficient co-visibility area between adjacent submaps (set at 50% in this system). Keyframes are simultaneously inserted into both submaps. Once the maximum number of keyframes is reached in the current submap, we execute point cloud correction and local optimization. Then, the current submap is integrated into the global map, and the next submap takes its place, starting the creation of a new subsequent submap. ### _Loop Detector_ Loop detection is crucial for global optimization, affecting mapping scale and SLAM speed. In complex environments, like underground garages with repetitive structures, not all keyframes and submaps are suitable for loop detection. To address this, we have developed a Semantic Pre-Qualification (SPQ) mechanism to filter potential loop frames and submaps, reducing unnecessary loop detections and preventing mismatches. SPQ evaluates candidates based on the number of categories and the weights of semantic features within keyframes and submaps. Candidates exceeding a preset threshold qualify as loop frames and submaps and are then added to the loop sequences for subsequent semantic ICP matching. ### _Global Optimizer_ We employ a pose-graph approach for global optimization. As depicted in Fig. 6, the pose-graph's Nodes encompass both keyframes and submaps, while Edges represent keyframe-to-keyframe and keyframe-to-submap constraints. Keyframe-to-keyframe constraints encompass semantic visual constraints between adjacent keyframes, alongside additional kinematic constraints derived from pre-integrated values (IMU and wheel). Keyframe-to-submap constraints involve semantic visual constraints between keyframes and submaps, along with semantic loop constraints obtained from semantic loop detection. The global optimizer periodically performs optimization on the collected Nodes and Edges, subsequently updating the results for each keyframe and submap. ## V Experiments ### _Benchmark Dataset_ To validate the proposed AVM-SLAM system, tests were conducted in a 220 m x 110 m underground garage with over 430 parking spots using a test vehicle equipped with four surround-view fisheye cameras, four wheel encoders, and an IMU, all synchronized and calibrated offline. The four fisheye cameras formed an AVM subsystem for real-time bird's eye view stitching at 30 Hz. The benchmark dataset to be released publicly includes four fisheye image sequences, one BEV image sequence, data from four wheel encoders, and data from one IMU. Fisheye images have a resolution of 1280 x 960, and BEV images have a resolution of 1354 x 1632, representing a physical area of 14.25 m x 17.18 m (1.05 cm per pixel on the ground). Both image sequences were stored at 10 Hz. Our system used the BEV image as input for validation, while other visual algorithms used front fisheye camera images, which did not affect their operation. 
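As a quick consistency check, the stated ground resolution of the BEV image follows directly from the quoted image size and physical footprint:

```python
# BEV image: 1354 x 1632 px covering 14.25 m x 17.18 m (values quoted above).
width_px, height_px = 1354, 1632
width_m, height_m = 14.25, 17.18

res_x_cm = 100 * width_m / width_px    # ~1.05 cm per pixel
res_y_cm = 100 * height_m / height_px  # ~1.05 cm per pixel
print(f"{res_x_cm:.2f} cm/px x {res_y_cm:.2f} cm/px")
```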
### _Robustness and Accuracy of Mapping_ It is well known that underground garages do not have the GPS signals needed to construct ground truth using GPS-based Real-Time Kinematic (RTK) positioning, and their repetitive structures and variable environments do not allow the use of structure-based lidar to construct ground truth. Therefore, we designed the following method to verify the robustness and accuracy of the proposed approach. * Robustness We used images from a front-mounted fisheye camera as input and tried to run the feature-based ORB-SLAM3 [3], the optical-flow-based SVO [4] and DSO [5], and the visual-inertial-fusion-based VINS-Mono [9]. Unexpectedly, all of these state-of-the-art (SOTA) traditional visual SLAM systems suffer from initialization failures, frequent loss of tracking, and runtime failures because of the poor lighting, sparse texture, and scene variability of the garage. On the other hand, the method in this paper is very stable because it adopts visual-inertial-wheel fusion for pose tracking and uses the semantic features of the road markings extracted from the bird's-eye view to construct the map. The tracking and mapping are stable on datasets collected under different conditions, which confirms the robustness and reliability of our algorithm. * Accuracy Firstly, a comparative experiment was conducted based on the benchmark dataset. The map constructed from the poses output by the VIWFusion module is shown in Fig. 7. It is clear that there is an unavoidable accumulated error in the front-end. The back-end of the Mapping module is used with loop detection and global optimization techniques, which effectively eliminates the long-term drift and improves the accuracy of mapping. 
Fig. 5: Cyan submap and gray global map. The global map consists of submaps, which consist of keyframes. 
Fig. 6: Schematic of the pose-graph with additional kinematic constraints 
Fig. 7(a) shows the results of pose-graph optimization via regular loop detection without extra kinematic constraints. Fig. 7(b) shows the results of pose-graph optimization via SPQ loop detection with extra kinematic constraints. It can be seen that pose-graph optimization with the results of SPQ loop detection and extra kinematic constraints constructs more realistic maps. Secondly, we performed a qualitative analysis by comparing a schematic map of the planar structure of the garage (as shown in Fig. 8(a)) with the semantic map we constructed. The two are scaled to the same size and overlapped for comparison, as shown in Fig. 8(b). The semantic map built by our system aligns closely with the schematic map of the garage, indicating very high mapping accuracy. Finally, we conducted a quantitative analysis by comparing the world distances between the six points, denoted A-F in Fig. 8(a), with the corresponding map distances. The world distances were measured using a high-precision laser rangefinder, while the map distances were computed from the respective 3D point coordinates. Table I presents the mean values of multiple measurements for both world distances and map distances. Table II shows the mean absolute error, maximum error, and root mean square error (RMSE) between these map distances and world distances, and includes experimental data from AVP-SLAM [22] and BEV Edge SLAM [24] for comparison. It should be noted that the authors of these two methods have not open-sourced them, so the data in the table are derived from the literature. 
Although the datasets are different, the application scenarios of the three methods are all underground garages, so they can still serve as references for each other. From Table II, it is clear that our AVM-SLAM system, which fuses multiple sensors, achieves higher mapping accuracy.
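The accuracy statistics in Table II (mean absolute error, maximum error, and RMSE between map and world distances) can be computed from paired measurements with a short script; the six distance values below are placeholders for illustration, not the values measured in our experiment.

```python
import numpy as np

def distance_errors(world_d, map_d):
    """Mean absolute error, maximum error, and RMSE between measured
    world distances and the corresponding distances read off the map."""
    err = np.abs(np.asarray(map_d) - np.asarray(world_d))
    return err.mean(), err.max(), np.sqrt((err ** 2).mean())

# Placeholder measurements for the six point pairs A-F (meters); not the paper's data.
world = [30.12, 45.80, 61.07, 25.44, 52.31, 70.02]
mapped = [30.05, 45.95, 61.20, 25.38, 52.15, 70.30]
print(distance_errors(world, mapped))
```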
Accurate localization in challenging garage environments with poor lighting, sparse textures, repetitive structures, dynamic scenes, and the absence of GPS is crucial for automated valet parking (AVP) tasks. Addressing these challenges, our research introduces AVM-SLAM, a semantic visual SLAM architecture with multi-sensor fusion in a bird's-eye view (BEV). This framework combines the capabilities of four fisheye cameras, wheel encoders, and an inertial measurement unit (IMU) to construct a robust SLAM system. Unique to our approach is the implementation of a flare removal technique within the BEV imagery, significantly enhancing road marking detection and semantic feature extraction by convolutional neural networks for superior mapping and localization. Our work also introduces a semantic pre-qualification (SPQ) module, designed to handle the challenges posed by environments with repetitive textures, thereby enhancing loop detection and system robustness. To demonstrate the effectiveness and resilience of AVM-SLAM, we evaluate it on a large-scale, high-resolution underground garage dataset.
2309.14581
Assessing Utility of Differential Privacy for RCTs
Randomized control trials, RCTs, have become a powerful tool for assessing the impact of interventions and policies in many contexts. They are considered the gold-standard for inference in the biomedical fields and in many social sciences. Researchers have published an increasing number of studies that rely on RCTs for at least part of the inference, and these studies typically include the response data collected, de-identified and sometimes protected through traditional disclosure limitation methods. In this paper, we empirically assess the impact of strong privacy-preservation methodology (with differential privacy (DP) guarantees) on published analyses from RCTs, leveraging the availability of replication packages (research compendia) in economics and policy analysis. We provide simulation studies and demonstrate how we can replicate the analysis in a published economics article on privacy-protected data under various parametrizations. We find that relatively straightforward DP-based methods allow for inference-valid protection of the published data, though computational issues may limit more complex analyses from using these methods. The results have applicability to researchers wishing to share RCT data, especially in the context of low- and middle-income countries, with strong privacy protection.
Soumya Mukherjee, Aratrika Mustafi, Aleksandra Slavković, Lars Vilhuber
2023-09-26T00:10:32
http://arxiv.org/abs/2309.14581v1
# Assessing Utility of Differential Privacy for RCTs ###### Abstract Randomized control trials, RCTs, have become a powerful tool for assessing the impact of interventions and policies in many contexts. They are considered the gold-standard for inference in the biomedical fields and in many social sciences. Researchers have published an increasing number of studies that rely on RCTs for at least part of the inference, and these studies typically include the response data collected, de-identified and sometimes protected through traditional disclosure limitation methods. In this paper, we empirically assess the impact of strong privacy-preservation methodology (with differential privacy (DP) guarantees) on published analyses from RCTs, leveraging the availability of replication packages (research compendia) in economics and policy analysis. We provide simulation studies and demonstrate how we can replicate the analysis in a published economics article on privacy-protected data under various parametrizations. We find that relatively straightforward DP-based methods allow for inference-valid protection of the published data, though computational issues may limit more complex analyses from using these methods. The results have applicability to researchers wishing to share randomized control trial (RCT) data, especially in the context of low- and middle-income countries, with strong privacy protection. ## 1 Introduction Randomized control trials, RCTs, have become a powerful tool for assessing the impact of interventions and policies in many contexts (e.g., in economics, see Esther Duflo's Nobel Prize lecture, Duflo, 2020). Today, they are considered the gold-standard for inference in the biomedical fields and in many social sciences. In economics, much of the growth has been since the 1990s. Studies can involve small-scale interventions, randomized at the personal, family, or village level, but are sometimes also measured with province- or national-level outcomes. Researchers have published an increasing number of studies that rely on RCTs for at least part of the inference. In a parallel development, the quest for improved transparency in the social sciences has led to more of the supplementary materials for articles being made public as "replication packages". For instance, the American Economic Association (AEA) journals for applied economics (AEJ:Applied) and economic policy (AEJ:Economic Policy) have required that analysis data and code be made available since the journals' creation in 2009. The increased availability of complete replication packages has allowed other researchers to leverage the materials, and conduct re-analyses and meta-analyses, furthering our understanding of the methods as well as of the conclusions drawn from these studies. Meager (2019) re-analyzed numerous RCTs to assess the robustness of their findings using Bayesian hierarchical analysis (BHA). Roth (2022) selected event studies for which complete replication packages were available, to re-analyze them in light of pre-treatment time trends. These kinds of studies are possible because of the increased availability of complete replication materials.1 Footnote 1: It should be noted that Roth (2022) still had to exclude nearly four times as many papers as they included because data were not readily available. The data included in such replication packages usually allow one to reproduce the results in the papers exactly, suggesting that all the analysis is conducted on these data. 
However, the typical guidance followed by researchers who conduct RCTs (Department of Health and Human Services, 2012; Kopper, Sautmann and Turitto, 2020; DIME, 2020) suggests primarily de-identification, the most basic anonymization, as the protection mechanism; where further anonymization is suggested, it takes the form of more traditional disclosure avoidance methods (e.g., \(l\)-diversity, Machanavajjhala et al. (2006); Hundepool et al. (2012)) and other aggregation-based methods. Differential privacy (DP) (Dwork et al., 2016) is sometimes referenced (Wood et al., 2021), but we are not aware of the application of DP in the context of RCTs. This suggests that much of the current literature on RCTs publishes replication packages that contain inadequately protected data. This is particularly concerning in the economic data setting we are exploring because many of these studies have data from respondents in low- and middle-income countries (LMIC). One of the reasons for the absence of strong privacy protection methods in this literature is that no tools are available to non-specialists that would allow for easy but efficient differentially private protection. Efficiency here is defined as "perturbing inference as little as possible compared to the unprotected inference." We note that inference even in the "unprotected" case is already subject to uncertainty that is often not adequately taken into account, as evidenced by Meager (2019). This is even more important for the uncertainty and data modifications that are generated through statistical disclosure limitation (SDL). Abowd and Schmutte (2015) and Slavkovic and Seeman (2022) demonstrate the need to account for the privacy-preserving noise in analyses. Slavkovic and Seeman (2022), and references therein, discuss a way to make an adjustment for privacy-preservation noise in addition to other sources of uncertainty. The present article is part of a project that aims to provide an assessment of the feasibility of using privacy-enhancing technologies (PETs), in particular differentially private methods, for data publication and adjusted inference in the context of RCTs. More broadly, we contribute to a literature on privacy-aware analysis, and privacy-aware planning for such analyses. The project is, as far as we know, the first systematic exploratory analysis of RCTs to understand the impact of privacy preservation, with a focus on LMIC data; here we report on some of our early explorations. Broadly, we aim to contribute along two separate dimensions. First, we will assess the feasibility of privacy protections that are stronger than the simple de-identification usually used, in the context of data collected for RCTs, taking into account the ability to make robust inferences. Second, we do so while maintaining feasibility of application, here defined as computationally feasible on commodity hardware used by researchers in these fields (and in particular, in LMIC). Our focus on RCTs is intentionally narrow. We believe that exploring the impact of privacy-preserving technologies in the context of RCTs is useful for several reasons. First, methods are, in general, quite straightforward: standard linear regression, difference-in-differences methods, possibly even simple differences in means across treated and untreated populations. 
These are amongst the first analysis methods for which adaptations to DP protection have been studied (e.g., Awan and Slavkovic, 2020; Alabi et al., 2020; Slavkovic and Molinari, 2021; Barrientos et al., 2018; Bowen et al., 2020). If formal privacy-preserving methods cannot be made to work "out-of-the-box" and at scale in this context, then it will be much more difficult to argue for broader application. Second, most RCTs are small-scale, using samples of the overall population, potentially allowing us to leverage privacy-amplifying methods (Balle, Barthe and Gaboardi, 2018), while also avoiding insurmountable computational constraints. Third, RCTs are often accompanied by pre-analysis plans, with specific hypotheses in mind and with the intent to avoid false discovery. These areas have also been explored within the DP framework (e.g., Vu and Slavkovic, 2009; Pistner, 2020; Dwork, Su and Zhang, 2021). Furthermore, it is already understood in the privacy community that the inherent noisiness of the sampling may affect inference (e.g., Slavkovic and Seeman, 2022). The analogy between adding noise for the purpose of BHA (Meager, 2019) and adding noise for privacy protection may be a convenient way to improve acceptance of such methods. A similar Bayesian framework can be used to adjust noisy inference due to privacy (e.g., Seeman, Slavkovic and Reimherr, 2020). Specifically, we explore the impact of privacy-preserving methods through a set of simulations and in the context of a published study, Blattman, Jamison and Sheridan (2017_a_) [henceforth the "Liberia study"]. The Liberia study is one of many published articles based on RCTs, for which typically the complete data underlying the study is available.2 ## 2 Problem setup We focus on a particular key table in the Liberia study, Table 2. It is the result of several independent regressions, each with a separate dependent (response) variable of interest, measured in the short term (Panel a) and long term (Panel b), controlling for a set of assignment indicators for the various treatments, other covariates, and stratifiers.3 These are "intent-to-treat" (ITT) regressions. They are typical of RCTs, where the experimenter is interested in determining whether a particular treatment has any effect on a response variable when the treatment is applied to an entity, individual, or treatment unit. The experimental treatment is randomly assigned to the treatment units according to some chosen experimental design, and response variables are recorded at various points after the treatment. As is typical for this type of analysis, the Liberia study has both discrete and continuous covariates. The response variables are also a mix of continuous and discrete outcomes. Footnote 3: Blattman, Jamison and Sheridan (2017_a_) stratify the treatment across a few subgroups, but for expositional ease, we mostly ignore that aspect. In the empirical analysis, we implement the authors' stratified random assignment as per their methodological description. The various characteristics and attributes of the treatment unit, including membership in various strata, as well as the actual outcomes, may all be sensitive. In the Liberia study, participation in therapy, receipt of cash (two outcomes), and self-reported criminal activity (covariates) are all quite sensitive attributes, and may pose privacy concerns. The inclusion of many covariates in the regression analysis improves the statistical utility of the estimate of the treatment effect. 
However, these covariates also pose a privacy concern for the treatment units participating in the study. An attacker, who may access the database provided as part of the replication package, may be able to reidentify individuals, and learn new, sensitive information about them. This would constitute a privacy violation for the treatment units. Thus, we consider a setting where the typical researcher attempts to pursue three goals: publish a sufficiently precise inference of the effect of the treatment on the treated from the model, given the data (Aim 1); release (publish) the database so that others can scrutinize the analysis; and protect the privacy of the respondents whose data is contained in the database (Aim 2). In this paper, we focus on privacy protection that in part relies on differentially private synthetic data generation, and assess the ability to suitably meet Aims 1 and 2. ## 3 Synthetic data generation approach based on perturbed histogram Let the analysis data be available in the form of a dataframe with \(n\) rows and \(p+t+b+1\) columns, where \(n\) is the total number of treatment units, \(p\) is the number of covariates, \(t\) is the number of mutually exclusive treatment assignments (the control group being one of them), and \(b\) is the number of blocking variables. One column contains the values of the response variable \(y\).4 Footnote 4: In the Liberia study, there are many response variables, but each is treated independently, and can be assumed to be a different dataframe with the same \(p+t+b\) columns as other dataframes. Assuming that a linear regression model is suitable, the regression model of interest in the absence of blocking variables is given by \[y_{i}=\alpha+\sum_{k=1}^{t}\tau_{k}T_{k,i}+\sum_{l=1}^{p}\gamma_{l}X_{l,i}+ \epsilon_{i},\quad i=1,\ldots,n, \tag{1}\] where \(T_{k}\) represent the dummy variables for the treatment level combinations, \(X_{l}\) represent the covariate variables associated with the \(n\) treatment units, and \(\epsilon_{i}\overset{i.i.d}{\sim}N(0,\sigma^{2})\). When stratification is used, with a total of \(m\) block combinations and \(n_{j}\) treatment units assigned to the \(j\)-th block combination, the corresponding regression model is given by \[\begin{split}& y_{ij}=\alpha+\sum_{k=1}^{t}\tau_{k}T_{k,i}+\sum_{l=1}^{p}\gamma_{l}X_{l,ij}+\epsilon_{ij}\\ & i=1,\ldots,n_{j},\quad j=1,\ldots,m,\quad\sum_{j=1}^{m}n_{j}=n\end{split} \tag{2}\] In both of the above models, the parameters of interest to the experimenter are the fixed effects \(\tau_{k}\), \(k=1,\ldots,t\). From the point of view of the experimenter, statistical utility is preserved if the inference concerning the fixed effects \(\tau_{k}\) is affected as little as possible by the data release mechanism used to sanitize the data in order to protect privacy. Using Equation (1) and the private dataframe \(D=[Y,T,X]\), we obtain point estimates of the regression coefficients \(\hat{\tau}_{k}\) and \(\hat{\gamma}_{l}\), along with a point estimate of the residual variance \(\hat{\sigma}^{2}\). We now adopt a synthetic data generation approach that aims to preserve the inference concerning the (estimated) fixed effects \(\hat{\tau}_{k}\) while ensuring that the data release mechanism produces a (protected) dataframe \(\widetilde{D}\) with \(n\) observations5 and which satisfies \(\epsilon\)-differential privacy (DP), with the following caveats. 
Footnote 5: We note that it is not strictly necessary to output the exact same \(n\) observations as in the private data frame, but this seems to be the convention. * We prioritize inference validity by using the private parameters \(\hat{\tau}\) estimated on the private data \(D\) as part of the algorithm. The released \(\widetilde{\tau}\) is protected because it is based on \(\epsilon\)-DP-protected \(\widetilde{X}\) (as outlined below), but is not itself DP-protected. * We assume that the \(t\) treatment assignments do not need to be protected. As assignment is randomized, the \(t\) columns are independent of (orthogonal to) the \(p\) covariate columns, and contain no information as to the sensitive attributes. In the mechanism, we simply redraw new assignments conforming to the specified design of the RCT. * We ignore the effect of the sanitization mechanism on parameters \(\gamma\), because \(\gamma\) are not released (published). To create \(\epsilon\)-DP covariate synthetic data \(\widetilde{X}\), we follow one of the originally proposed mechanisms by sampling from the perturbed histogram (e.g., see Dwork et al., 2006\(a\); Wasserman and Zhou, 2010). We use the covariate information \(X\) to construct a generative model using a multidimensional histogram. Where necessary, we choose precision \(\zeta\) and discretize continuous variables. The histogram counts are sanitized using the Laplace mechanism, adding independent noise with mean 0 and scale parameter \(2/\epsilon\) to each count. We sample from the protected histogram to generate \(\epsilon\)-DP protected \(\widetilde{X}\). We then reimplement the treatment design, by doing random assignment (where appropriate, within strata) for \(n\) synthetic treatment units, thus creating treatment indicators \(\widetilde{T}\). The private parameter estimates \(\hat{\tau}_{k}\), the treatment indicators \(\widetilde{T}\), the protected covariates \(\widetilde{X}\), and a suitable model are then used as a generative model for the protected response variable \(\widetilde{Y}\). Note that one suitable model is (1), but we also experiment with other data generating processes. Finally, we can publish \(\widehat{\tau}_{k}\) and associated standard errors, estimated using (1) and \(\widetilde{D}=\left[\widetilde{Y},\widetilde{T},\widetilde{X}\right]\), and release \(\widetilde{D}(\epsilon,\zeta)\) as part of the replication package. In principle, it is possible to extend this approach to other regression models (such as logistic regression) which might be more suitable than linear regression in some scenarios. ### Algorithm Here we describe the basic algorithm for the case where there are no blocking variables. The only change for the case where there are blocking variables is in the experimental design used to assign treatment levels and block combinations to the \(n\) synthetic treatment units, which is straightforward. 1. Construct a multivariate histogram for the \(p\)-dimensional covariate data \(X\). Number of bins along each of the dimensions corresponding to the continuous variables is taken to be of the order \(n^{\zeta}\) (with \(\zeta\) defined below), and number of bins along the dimensions corresponding to the discrete variables is equal to the known number of distinct values of the variable.6 Let \(q\) be the number of bins required to construct the histogram. Let \(C_{i}\) be the count/frequency of the observations in the covariate dataframe corresponding to the \(i\)-th bin, \(i=1,\ldots,q\). 
Let \(C\) be the vector of counts given by \(C=(C_{1},\ldots,C_{q})\). Footnote 6: Strictly, we would also collapse discrete variables if the number of distinct values is greater than \(n^{\zeta}\), but in practice, this never occurred. 2. Draw \(q\) i.i.d. observations \(Z_{1},\ldots,Z_{q}\) from a Laplace distribution with location parameter/mean 0 and variance \(8/\epsilon^{2}\) (equivalently, scale parameter \(2/\epsilon\)). Compute the sanitized vector of counts \(F=(F_{1},\ldots,F_{q})\) where \(F_{i}=C_{i}+Z_{i}\), \(i=1,\ldots,q\). Since some of the sanitized counts could be negative valued, we transform the negative counts to 0 and renormalize the counts to obtain a vector of sanitized relative frequencies \(\widetilde{F}=(\widetilde{F}_{1},\ldots,\widetilde{F}_{q})\) where \(\widetilde{F}_{i}=\frac{F_{i}\mathbf{I}_{F_{i}>0}}{\sum_{j=1}^{q}F_{j}\mathbf{I}_{F_{j}>0}}\), \(i=1,\ldots,q\). 3. Draw \(n\) i.i.d. \(p\)-dimensional vectors \(\widetilde{X}_{1},\ldots,\widetilde{X}_{n}\) using simple random sampling with replacement from the \(q\) bins of the histogram constructed in Step 1, using the sanitized relative frequency vector \(\widetilde{F}\) as the corresponding probabilities of each of the \(q\) bins. The sanitized covariate dataframe is denoted by \(\widetilde{X}^{n\times p}=\left[\widetilde{X}_{1}^{T}\ \ldots\ \widetilde{X}_{n}^{T}\right]^{T}\). 4. Construct the \(t\) dummy variables corresponding to the treatment assignments using the experimental design and denote them by \(\widetilde{T}^{n\times t}=\left[\widetilde{T}_{1}\ \ldots\ \widetilde{T}_{t}\right]\). The synthetic dataframe corresponding to the treatment level assignment dummy variables and the covariates is denoted as \(\widetilde{M}=[\widetilde{T},\widetilde{X}]\). 5. Compute \(\hat{\tau}_{k}\), \(\hat{\gamma}_{l}\) and \(\hat{\sigma}^{2}\) based on linear regression analysis using the original dataframe (without any privatization). (We can generalize this to any regression model.) 6. Construct \(\widetilde{Y}=(\widetilde{Y}_{1},\ldots,\widetilde{Y}_{n})\) using the privately computed \(\hat{\tau}_{k}\), \(\hat{\gamma}_{l}\) and \(\hat{\sigma}^{2}\) via \[\widetilde{Y}_{i}=\widetilde{M}_{i}\hat{\beta}+E_{i}\] where \(\widetilde{M}_{i}\) denotes the \(i\)-th row of \(\widetilde{M}\) and \(E_{i}\stackrel{{ i.i.d}}{{\sim}}N(0,\hat{\sigma}^{2})\), \(i=1,\ldots,n\). (We can generalize this to any prediction model based on the estimated regression model.) 7. Release \(\widetilde{D}(\epsilon,\zeta)=[\widetilde{Y},\,\widetilde{M}]=[\widetilde{Y},\widetilde{T},\widetilde{X}]\), \(\widetilde{\tau}\) and its associated standard errors. The proof of the differential privacy guarantee is based on Proposition 1 in Dwork et al. (2006\(b\)) along with the post-processing property of pure differential privacy, while the statistical optimality is based on Theorem 4.4 of Wasserman and Zhou (2008). We explore several variations of this algorithm. To assess the contribution to variability due to the resampling from the histogram (Step 3), we compute a version of the data where no noise is injected at Step 2 (denoted by \(\epsilon=\infty\)). We also experiment with variations in \(\zeta\), the precision of the discretization of continuous variables. We initially choose \(\zeta=2/3\) and denote these as "high-precision" models. By reducing \(\zeta\) to \(1/3\), thus coarsening the histogram created, we introduce a loss of precision in the synthesis, but reduce the computational cost as well. These results are denoted as "low-precision" models. 
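For concreteness, the following Python sketch implements Steps 1-3 (the \(\epsilon\)-DP perturbed-histogram synthesis of the covariate block). The function and variable names are ours, and the way a sampled bin is converted back into a covariate vector (uniform within the bin for continuous variables) is an implementation choice; the treatment re-assignment and response synthesis of Steps 4-7 are not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_histogram_synthesis(X, continuous, epsilon, zeta=2/3, rng=rng):
    """Steps 1-3: perturbed-histogram synthesis of the covariate block X.

    X          : (n, p) array of covariates
    continuous : boolean mask of length p marking continuous columns
    epsilon    : privacy-loss budget; Laplace noise has scale 2/epsilon
    zeta       : discretization precision; ~n**zeta bins per continuous axis
    """
    n, p = X.shape
    edges = []
    for j in range(p):
        col = X[:, j]
        if continuous[j]:
            n_bins = max(2, int(round(n ** zeta)))
            edges.append(np.linspace(col.min(), col.max(), n_bins + 1))
        else:
            vals = np.unique(col)
            edges.append(np.append(vals, vals[-1] + 1))  # one bin per distinct value
    counts, edges = np.histogramdd(X, bins=edges)

    # Step 2: Laplace perturbation, zero out negative counts, renormalize.
    noisy = counts + rng.laplace(loc=0.0, scale=2.0 / epsilon, size=counts.shape)
    noisy = np.clip(noisy, 0.0, None)
    probs = (noisy / noisy.sum()).ravel()

    # Step 3: resample n bin indices and draw a representative value per bin.
    idx = rng.choice(probs.size, size=n, p=probs)
    multi_idx = np.unravel_index(idx, counts.shape)
    X_syn = np.empty((n, p))
    for j in range(p):
        lo = edges[j][multi_idx[j]]
        hi = edges[j][multi_idx[j] + 1]
        X_syn[:, j] = rng.uniform(lo, hi) if continuous[j] else lo
    return X_syn
```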
Such a coarsening may be relevant when the protection mechanism needs to be run on researchers' laptops, rather than on a high-performance compute cluster. In fact, in the Liberia study, we initially scaled \(p\), the number of covariates, to be feasibly computed on a laptop, given the size \(q\) of the histogram and 32-bit limitations in R. This turned out to be fewer covariates than originally used by the authors. Finally, for a given low-precision \(\zeta\), we exploit the ability to add additional variables, allowing us to better approximate the authors' original model. These results are denoted as "expanded specification." ## 4 Evaluating the Mechanism Given a private dataset \(D=[Y,M]\) and a sanitized, that is, protected version of the same dataset (a synthetic dataset) \(\widetilde{D}=[\widetilde{Y},\widetilde{M}]\) obtained using our proposed algorithm with a given privacy budget \(\epsilon\) and precision \(\zeta\), we compute the following four metrics of comparison to verify whether Aim 1 is achieved: 1. **Metric 1 - C.I. overlap indicator:** This binary (0 or 1) metric computes whether there is any overlap between the 95% confidence intervals (C.I.) for the regression coefficients (individual C.I.'s for each regression coefficient) computed based on the private dataset and the protected dataset. 2. **Metric 2 - Estimate coverage by sanitized C.I. indicator:** This binary (0 or 1) metric computes whether the point estimates for the regression coefficients computed based on the private dataset fall within the confidence intervals for the regression coefficients computed based on the sanitized dataset. A value of 1 indicates that the deviation of the inference regarding the regression coefficients based on the private dataset from the same inference based on the sanitized dataset is likely to be small. 3. **Metric 3 - C.I. overlap measure:** This metric computes a measure of the overlap between the 95% confidence intervals (C.I.) for the regression coefficients (individual C.I.'s for each regression coefficient) computed based on the private dataset and the protected dataset (Karr et al., 2006). Specifically, for a particular regression coefficient \(\beta\), let \((L,U)\) be the C.I. for \(\beta\) computed based on the unsanitized dataset and \((\widetilde{L},\widetilde{U})\) be the C.I. for \(\widetilde{\beta}\) computed based on the sanitized dataset. Let \(L^{\mathit{over}}=\max(L,\widetilde{L})\) and \(U^{\mathit{over}}=\min(U,\widetilde{U})\). Then the average overlap in confidence intervals \(\widetilde{O}\) is \[\widetilde{O}=\frac{1}{2}\left[\frac{U^{\mathit{over}}-L^{\mathit{over}}}{U-L }+\frac{U^{\mathit{over}}-L^{\mathit{over}}}{\widetilde{U}-\widetilde{L}} \right]\,.\] This metric is a continuous measurement version of Metric 1. The average overlap \(\widetilde{O}\) can vary between 0 and 1, with higher values near 1 indicating that there is a large degree of overlap. Thus, higher values (near 1) indicate that the deviation of the inference regarding the regression coefficients based on the private dataset from the same inference based on the protected dataset is small. 4. **Metric 4 - Empirical Squared Error in Estimate:** This metric computes \((\beta-\widetilde{\beta})^{2}\), the square of the difference between the private and sanitized point estimates of the regression coefficients. 
Smaller values (near 0) indicate that the deviation of the inference regarding the regression coefficients based on the private dataset from the same inference based on the sanitized dataset is small. In order to verify whether Aim 2 is satisfied, we choose a statistic (**Metric 5**) that depends only on the private covariate data \(X\), computing it for pairs of datasets and treating the original dataset as the benchmark. In this study, we use the empirical squared error, the squared difference between the two computed values of the statistic. All statistics are averaged across multiple runs of the algorithm. Thus, Metrics 1 and 2 will be reported as proportions, and Metrics 3, 4 and 5 will be reported as means. We emphasize that in practice, researchers would likely only use a single run of the algorithm, and not publish multiple versions of the synthetic dataset. In the next two sections, we use these metrics to evaluate the performance of our proposed algorithm, first on simulation studies, and then on the Liberia study. ## 5 Numerical Experiments There are two separate sources of noise addition to the original private dataset. The first source is the statistical noise introduced due to the uncertainty involved in estimating the distribution of the covariate data and the sampling of the synthetic dataset from the histogram. The second source is due to differential privacy (the addition of Laplace noise). To assess the individual effect of noise from the second source, separately from the first, we perform the same synthetic data generation process but without the addition of DP noise to the histogram counts (Step 2), creating what we refer to as the non-DP synthetic dataset \(D^{*}=\widetilde{D}(\infty,\zeta)\). We then calculate the above four metrics, using \(D^{*}\) instead of \(\widetilde{D}\), and compare the two sets. In this way, we empirically observe the effect of the differential privacy constraint on our data generation process. Additionally, if the comparison metric values for the DP and non-DP procedures do not differ very much, we could argue for the DP implementation in practice, since non-DP outputs are vulnerable to reconstruction attacks and other forms of loss in privacy. We perform two simulation studies to capture the above comparisons. Simulation Study 1 uses a single covariate, which we further explore by drawing from different distributions. We discuss here Simulation Study 1 with a single sensitive covariate simulated from the Uniform distribution. In Appendix A, we report on simulations where the covariate is based on the Beta distribution. Simulation Study 2 generalizes the covariate structure to include a mix of discrete and continuous variables. It leads to qualitatively similar results, and is reported in Appendix A. Throughout this section and in the next, we consider 3 different choices of the privacy-loss budget, \(\epsilon=\{0.1,0.5,1\}\), and we use \(\epsilon=\infty\) to denote non-DP synthetic data. For a given privacy budget, we simulate 100 different private datasets (response variable, treatment variable and covariates combined). For each of these 100 datasets, we independently generate 20 protected synthetic datasets using our proposed algorithm. We evaluate the OLS point estimates and confidence intervals for the regression coefficients when computing Metrics 1, 2, 3 and 4 to measure the degree of preservation of utility of the inference, not only for the treatment effects but also for the other regression coefficients. 
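A direct implementation of Metrics 1-4 for a single coefficient is straightforward; the sketch below takes the point estimates and 95% confidence intervals from the private and synthetic fits as inputs (the numbers in the example call are made up).

```python
def utility_metrics(beta, ci, beta_s, ci_s):
    """Metrics 1-4 for one regression coefficient.

    beta, ci     : point estimate and (lower, upper) 95% CI from the private data
    beta_s, ci_s : the same quantities from the sanitized/synthetic data
    """
    (l, u), (ls, us) = ci, ci_s
    l_over, u_over = max(l, ls), min(u, us)
    overlap_ind = int(u_over > l_over)                       # Metric 1
    coverage = int(ls <= beta <= us)                         # Metric 2
    overlap = 0.0
    if overlap_ind:
        overlap = 0.5 * ((u_over - l_over) / (u - l)
                         + (u_over - l_over) / (us - ls))    # Metric 3
    sq_err = (beta - beta_s) ** 2                            # Metric 4
    return overlap_ind, coverage, overlap, sq_err

# Example with made-up estimates and intervals.
print(utility_metrics(1.00, (0.80, 1.20), 0.95, (0.70, 1.15)))
```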
To compute Metric 5, we choose the variance of the covariate \(x_{1}\). ### Results of Simulation Study 1 For our first simulation study we consider a dataframe with \(n=100\) observations, one treatment variable, \(t_{1}\), with two treatment levels, "0" and "1", denoting whether or not the treatment was applied to the corresponding treatment unit, and \(p=1\) continuous covariate, \(x_{1}\), for which we considered two different distributions: Uniform(-5,5) and Beta(1,2). The treatment variable is generated from a binomial distribution with equal probabilities for the two treatment levels. All variables are generated independently of each other. We choose the true regression coefficients as \(\alpha=0.05\) (intercept term), \(\tau_{1}=1\), \(\gamma_{1}=0.2\), and the true residual variance to be \(0.5\). In Table 1, we compute the metric values for three different choices of the privacy budget, \(\epsilon=0.1,0.5,1\), and for non-DP synthetic data, that is, \(\epsilon=\infty\). We observe that Metric 1 always has value 1, indicating that in all these cases there is always an overlap between the confidence intervals based on the original and synthetic data. The values under Metric 2 indicate that in all the cases, for all the regression coefficients, around 94-95% of the time the point estimate from the original/unsanitized dataset lies within the confidence interval for the corresponding regression coefficient computed based on the synthetic dataset. From the values under Metric 3 we can conclude that the measure of overlap between the confidence intervals of the original and synthetic datasets is around 78-79%. From the values under Metric 4, we observe that the squared differences between the point estimates of the regression coefficients based on the unsanitized dataset and the sanitized dataset are quite small. We observe that, irrespective of the privacy budget \(\epsilon\), the effect of the privatization on the utility of the estimates of the regression parameters is quite small. Thus, we can conclude on the basis of these results that the utility of the inference regarding the treatment effects, as well as the remaining regression coefficients, is preserved to a large extent even under privatization using our proposed algorithm. On the other hand, as the privacy budget \(\epsilon\) decreases, we expect larger degrees of distortion of the covariate data in the synthetic data generation process. Thus, we should expect larger differences (as \(\epsilon\) decreases) between the values of sensitive statistics (which depend on the sensitive covariate data and for which we aim to provide privacy protection) when computed using the unsanitized dataset and the sanitized dataset. From Table 2, we observe that the squared differences between the sensitive statistic (which we chose to be the variance of \(x_{1}\)) based on the private dataset and the sanitized dataset increase as the privacy budget \(\epsilon\) decreases. Other choices of the sensitive statistic also yield similar results. Thus, we conclude that both Aim 1 and Aim 2 are satisfied to a large extent, based on this simulation study using uniform covariates. We also observe that the values for non-DP synthetic data are similar to the values computed based on the protected data generation procedure, empirically supporting our conclusion that adding differential privacy guarantees does not come at much extra cost. 
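For reference, the data-generating process for this uniform-covariate case (with the parameter values stated above) can be reproduced with a short script such as the following; the random seed is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_study1(n=100, alpha=0.05, tau=1.0, gamma=0.2, sigma2=0.5, rng=rng):
    """One private dataset for Simulation Study 1 (uniform-covariate case)."""
    t1 = rng.binomial(1, 0.5, size=n)    # treatment indicator, equal probabilities
    x1 = rng.uniform(-5, 5, size=n)      # single continuous covariate
    y = alpha + tau * t1 + gamma * x1 + rng.normal(0, np.sqrt(sigma2), size=n)
    return y, t1, x1

y, t1, x1 = simulate_study1()
```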
Further, in Table 2 we compute Metric 5 (MSE) for the sensitive statistic (the variance of \(x_{1}\)) based on both DP and non-DP synthetic data generation procedures. The larger value of Metric 5 under DP synthetic data generation, in comparison to the smaller value under non-DP synthetic data generation, is indicative of the additional distortion introduced by the privatization. \begin{table} \begin{tabular}{l l c c c c} \hline \hline **Privacy budget** & **Variable names** & **Metric 1** & **Metric 2** & **Metric 3** & **Metric 4** \\ \hline \(\epsilon=0.1\) & (Intercept) & 1.00000 & 0.95000 & 0.79427 & 0.01127 \\ & \(t_{1}\) & 1.00000 & 0.94650 & 0.79718 & 0.02099 \\ & \(x_{1}\) & 1.00000 & 0.95350 & 0.78564 & 0.00076 \\ \hline \(\epsilon=0.5\) & (Intercept) & 1.00000 & 0.95450 & 0.79703 & 0.01054 \\ & \(t_{1}\) & 1.00000 & 0.94600 & 0.79684 & 0.02094 \\ & \(x_{1}\) & 1.00000 & 0.94750 & 0.79166 & 0.00069 \\ \hline \(\epsilon=1\) & (Intercept) & 1.00000 & 0.95000 & 0.79809 & 0.01046 \\ & \(t_{1}\) & 1.00000 & 0.94900 & 0.79737 & 0.02094 \\ & \(x_{1}\) & 1.00000 & 0.95700 & 0.79582 & 0.00065 \\ \hline \(\epsilon=\infty\) & (Intercept) & 1.00000 & 0.95400 & 0.80176 & 0.00987 \\ & \(t_{1}\) & 1.00000 & 0.94900 & 0.79460 & 0.02119 \\ & \(x_{1}\) & 1.00000 & 0.95500 & 0.79557 & 0.00064 \\ \hline \hline \end{tabular} \end{table} Table 1: Effect on inference regarding regression coefficients measured using Metrics 1-4 for Simulation Study 1 with uniform covariate, averaged over 100 simulations of the sensitive dataframe, using 20 independently generated synthetic dataframes (with varying privacy-loss budgets \(\epsilon=0.1,0.5,1\) and \(\epsilon=\infty\), i.e., non-DP synthetic data) for each sensitive dataframe. ## 6 Application to "Reducing Crime and Violence: Experimental Evidence from Cognitive Behavioral Therapy in Liberia" (Blattman, Jamison and Sheridan, 2017_a_) In this section, we apply and evaluate the potential of the proposed methodology on a real-world randomized control trial by focusing on the analyses reported in Blattman, Jamison and Sheridan (2017_a_). The associated replication files, including the de-identified data, are available in Blattman, Jamison and Sheridan (2017_b_). ### Setup For our evaluation, we focus on the results reported in Table 2 Panel B of Blattman, Jamison and Sheridan (2017_a_). Specifically, the authors consider the long-term (12-13 months after the program)7 effect of therapy and cash grants on a variety of outcome variables, both individually and through a summary indicator called _Antisocial behaviours z-score_ (referred to as fam_asb_lt). The sample is composed of 999 high-risk youths in Monrovia, Liberia. A \(2\times 2\) factorial design is used with two stratification variables based on the groups the youths were in when they were randomly assigned the treatments, once at the time of being assigned to therapy (there were 55 such groups), and once at the time of being assigned to receive a cash grant of 200 USD (there were 20 such groups). Blattman, Jamison and Sheridan (2017_a_) find that neither cash nor therapy alone has a lasting (12-13 month) effect, but that the combination of both treatments does reduce "antisocial behavior". Footnote 7: Formally, we use Round 5 data, as the original code (Blattman, Jamison and Sheridan, 2017_b_) specifies. 
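Because the mechanism re-draws treatment assignments rather than perturbing them, the stratified \(2\times 2\) factorial assignment described above has to be re-implemented for the synthetic units. The sketch below shows one way to do this; assigning roughly half of each stratum to each treatment is an illustrative simplification, not the study's exact randomization protocol.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)

def redraw_factorial_assignment(tp_strata, cg_strata, rng=rng):
    """Re-randomize therapy and cash independently within their own strata,
    then form the three mutually exclusive treatment dummies used in the analysis."""
    df = pd.DataFrame({"tp_strata": tp_strata, "cg_strata": cg_strata})
    therapy = np.zeros(len(df), dtype=int)
    cash = np.zeros(len(df), dtype=int)
    for _, rows in df.groupby("tp_strata").groups.items():
        rows = np.asarray(rows)
        therapy[rng.permutation(rows)[: len(rows) // 2]] = 1   # ~half of each group
    for _, rows in df.groupby("cg_strata").groups.items():
        rows = np.asarray(rows)
        cash[rng.permutation(rows)[: len(rows) // 2]] = 1
    return pd.DataFrame({
        "tpassonly":   ((therapy == 1) & (cash == 0)).astype(int),
        "cashassonly": ((therapy == 0) & (cash == 1)).astype(int),
        "tpcashass":   ((therapy == 1) & (cash == 1)).astype(int),
    })

# Example: 999 synthetic units spread over 55 therapy groups and 20 cash-grant groups.
n = 999
assignments = redraw_factorial_assignment(rng.integers(0, 55, n), rng.integers(0, 20, n))
```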
\begin{table} \begin{tabular}{l c c c c} \hline **Privacy Budget** & \(\epsilon\) **=0.1** & \(\epsilon\) **=0.5** & \(\epsilon\) **=1** & **Non-DP Synthesis** \\ \hline MSE of Variance of \(x_{1}\) & 6.888821 & 2.388792 & 1.273375 & 0.594822 \\ \hline \end{tabular} \end{table} Table 2: Effect on value of sensitive statistic (based on covariate data) measured using Metric 5 (MSE) for Simulation Study 1 using uniform covariate. Results are reported for DP synthesis with varying privacy budget \(\epsilon\) and non-DP synthesis, each type of synthesis being averaged over 100 simulations of the sensitive dataframe, using 20 independently generated synthetic dataframes for each sensitive dataframe. 
The analysis data are obtained from the file named STYL_Final.dta as provided in the replication package (Blattman, Jamison and Sheridan, 2017_b_). The treatment assignments are encoded using 3 binary treatment variables tpassonly (indicating that only therapy was received), cashassonly (indicating that a cash-only grant was received), and tpcashass (indicating that both therapy and cash grant were received). The therapy-assignment-based blocking variable is tp_strata_alt, while the cash-grant-assignment-based blocking variable is cg_strata. In addition to the treatment variables and the blocking variables, we include 7 covariates in the core regressions: age_b, asbhostil_b, drugssellever_b, drinkboozeself_b, druggrassself_b, harddrugsever_b, steals_b. The first two covariates are age and an antisocial behaviour index (Barret Antisocial Behavior (ASB) and Hostility z-score) for the individuals participating in the study. These are continuous variables. The remaining covariates record the antisocial behaviour of the youths in terms of ever having sold drugs, whether they drink alcohol, whether they smoke grass/opium, whether they have ever consumed hard drugs, and whether they have exhibited stealing behaviour in the 2 weeks prior to their interview, respectively. The values of these covariates are recorded as 1 if the answer is affirmative, otherwise 0. The variables of interest are shown in Table 3. ### Results Table 4 displays the core results from the application of the mechanism to the Liberia study, in the form of the statistical inference regarding the treatment effects of the Cash Grant Only treatment, Therapy Only treatment and Both Cash and Therapy treatment on the various response variables. Panel (i) shows the replication of the authors' analysis on the original (unmodified) data \(D\), using the covariates listed in Table 3. This replication is necessary, because these covariates constitute a strict subset of the covariates that Blattman, Jamison and Sheridan (2017_a_) control for. This is done for computational reasons, to which we return later. Panel (ii) shows the same specification, estimated using the data protected using a single run of the mechanism described in Section 3, with \(\epsilon=1\) and \(\zeta=\frac{2}{3}\) (i.e., \(\widetilde{D}(1,\frac{2}{3})\)). Panel (iii) shows the same specification, estimated on the non-DP synthetic data \(\widetilde{D}(\infty,\frac{2}{3})\). Figure 1 summarizes the results for the treatment effects for the first rows of each panel, for a single response variable, in this case the focal "ASB z-score". 
Specifically, Figure 1 displays the \begin{table} \begin{tabular}{l l} \hline \hline **Name of variable in database** & **Description** \\ \hline Outcomes & \\ \hline fam\_asb\_lt & Antisocial Behaviors, Z-score \\ drugsellever\_e & Usually sells drugs \\ crimes2wx\_e & No. of thefts/robberies in past 2 weeks \\ disputes\_all\_z\_e & Disputes and fights in past 2 weeks, z-score \\ carryweapon\_e & Carries a weapon on body at follow-up \\ arrested\_e & Arrested in past 2 weeks at follow-up \\ hostilitystd\_e & Aggressive behaviors, z-score at follow-up \\ domabuse\_z & Verbal/physical abuse of partner, z-score \\ \hline Treatments & \\ \hline cashassonly & Cash Only \\ tpassonly & Therapy Only \\ tpcashass & Both \\ \hline Covariates & \\ \hline age\_b & Age \\ ashbostil\_b & Barret ASB index \\ drugsellever\_b & Drugs Sell indicator at baseline \\ drinkboozeself\_b & Alcohol self indicator at baseline \\ druggrassself\_b & Grass/Opium self indicator at baseline \\ harddrugsever\_b & Hard Drugs indicator at base line \\ steals\_b & Steal self indicator at base line \\ \hline \hline \end{tabular} \end{table} Table 3: Variables of interest in the Liberia study \begin{table} \begin{tabular}{l c c c c c c c c c c c c c} \hline & \multicolumn{4}{c}{Therapy Only} & \multicolumn{4}{c}{Cash Only} & \multicolumn{4}{c}{Both} \\ \cline{2-13} & Estimate & Std. Err & p-value & Estimate & Std. Err & p-value & Estimate & Std. Err & p-value \\ \hline (i) Original & & & & & & & & & & & & \\ \hline Antisocial behaviors & \(-0.026\) & ( 0.085) & [ & 0.755] & 0.098 & ( 0.086) & [ & 0.257] & \(-0.224\) & ( 0.087) & [ & 0.010] \\ z-score & & & & & & & & & & & \\ Usually sells drugs & \(-0.029\) & ( 0.029) & [ & 0.316] & 0.025 & ( 0.030) & [ & 0.398] & \(-0.086\) & ( 0.030) & [ & 0.004] \\ No. of thefts/roboberies in & 0.075 & ( 0.514) & [ & 0.884] & 0.071 & ( 0.523) & [ & 0.892] & \(-1.087\) & ( 0.526) & [ & 0.039] \\ past 2 weeks & & & & & & & & & & & \\ Carries a weapon on body & \(-0.045\) & ( 0.031) & [ & 0.141] & 0.019 & ( 0.031) & [ & 0.548] & \(-0.057\) & ( 0.031) & [ & 0.067] \\ Arrested in past 2 weeks & & & & & & & & & & & \\ Argressive behaviors & \(-0.000\) & ( 0.031) & [ & 0.994] & 0.004 & ( 0.032) & [ & 0.901] & \(-0.026\) & ( 0.032) & [ & 0.420] \\ Aggressive behaviors & \(-0.041\) & ( 0.089) & [ & 0.649] & \(-0.011\) & ( 0.091) & [ & 0.904] & \(-0.201\) & ( 0.092) & [ & 0.029] \\ Yerbal/physical abuse of partner z-score & & & & & & & & & & & \\ \hline (ii) Synthetic & & & & & & & & & & & \\ \hline Antisocial behaviors & \(-0.110\) & ( 0.082) & [ & 0.181] & \(-0.026\) & ( 0.086) & [ & 0.762] & \(-0.312\) & ( 0.084) & [ & 0.000] \\ z-score & & & & & & & & & & & \\ Usually sells drugs & \(-0.032\) & ( 0.033) & [ & 0.336] & 0.065 & ( 0.034) & [ & 0.060] & \(-0.119\) & ( 0.034) & [ & 0.000] \\ No. 
of thefts/roboberies in & 0.016 & ( 0.190) & [ & 0.932] & 0.226 & ( 0.198) & [ & 0.253] & \(-1.336\) & ( 0.195) & [ & 0.000] \\ past 2 weeks & & & & & & & & & & & \\ Carries a weapon on body & \(-0.060\) & ( 0.032) & [ & 0.060] & 0.038 & ( 0.033) & [ & 0.253] & \(-0.055\) & ( 0.033) & [ & 0.094] \\ Arrested in past 2 weeks & & & & & & & & & & & & \\ Argressive behaviors & \(-0.129\) & ( 0.087) & [ & 0.137] & \(-0.142\) & ( 0.090) & [ & 0.117] & \(-0.293\) & ( 0.089) & [ & 0.001] \\ Yerbal/physical abuse of partner z-score & & & & & & & & & & & & \\ \hline (iii) Non-DP Synthetic & & & & & & & & & & & \\ \hline Antisocial behaviors & \(-0.037\) & ( 0.081) & [ & 0.647] & 0.159 & ( 0.083) & [ & 0.056] & \(-0.207\) & ( 0.083) & [ & 0.013] \\ z-score & & & & & & & & & & & & \\ Usually sells drugs & \(-0.026\) & ( 0.026) & [ & 0.327] & \(-0.007\) & ( 0.027) & [ & 0.793] & \(-0.081\) & ( 0.027) & [ & 0.003] \\ No. of thefts/roboberies in & 0.101 & ( 0.150) & [ & 0.502] & 0.151 & ( 0.153) & [ & 0.325] & \(-0.893\) & ( 0.154) & [ & 0.000] \\ past 2 weeks & & & & & & & & & & & & \\ Carries a weapon on body & \(-0.015\) & ( 0.028) & [ & 0.598] & 0.007 & ( 0.028) & [ & 0.795] & \(-0.044\) & ( 0.028) & [ & 0.126] \\ Arrested in past 2 weeks & & & & & & & & & & & & \\ Argressive behaviors & \(-0.052\) & ( 0.085) & [ & 0.542] & 0.053 & ( 0.087) & [ & 0.546] & \(-0.182\) & ( 0.088) & [ & 0.038] \\ z-score & & & & & & & & & & & & \\ Yerbal/physical abuse of partner z-score & & & & & & & & & & & & \\ \hline \end{tabular} Panel (i) provides estimated coefficients, standard errors, and the associated p-value, using the original data and a reduced specification closely resembling Blattman, Jamison and Sheridan (2017). P-values are not adjusted. Panel (ii) displays the same estimated parameters, for the same model, when using the protection mechanism described in the text with \(\epsilon=1\). Panel (iii) shows results when setting \(\epsilon=\infty\), but otherwise following the same approach. See text for further details. \end{table} Table 4: Table 2(b) with original, protected, and modified data treatment effect estimates (represented by dots), the standard error of the treatment effect estimates (represented by intervals) and the unadjusted p-value for the individual tests of significance of the treatment coefficients. The key inferential results - that the combination of cash and therapy is the only effective treatment - is maintained across all three panels. The replication reported in Panel (i) shows no significant coefficients for therapy (first set of columns), and only one outcome (abuse) shows a significant effect of cash payments, whereas multiple coefficients are significant for the combined treatment, closely replicating the results from Blattman, Jamison and Sheridan (2017) (for unadjusted coefficients). The same pattern is also observed in Panels (ii) and (iii), with some small differences. While most of our estimates of standard errors are reasonably close to the those computed from the original data, we observe a general trend that has been discussed in the privacy literature: the sanitized data often gives smaller standard errors of parameter estimates when the sanitized protected data (synthetic, DP or not) are being naively used in place of the original data. These standard errors are misleading and will not give honest confidence intervals. 
We observe the same in our analysis as shown in Table 4, where most (but not all) of the standard errors in the Synthetic and non-DP Synthetic panels are (marginally) smaller. For instance, for the key treatment of "Both" for the response variable "ASB z-score", standard errors are 0.087 for the unmodified data, but 0.084 and 0.083, respectively, for the two synthetic datasets. Solutions have been proposed that account for the noise due to privacy-preservation that will lead to wider confidence intervals but honest inference \begin{table} \begin{tabular}{l c c c c c c c c c c c c c} \hline & \multicolumn{4}{c}{T therapy Only} & \multicolumn{4}{c}{Cash Only} & \multicolumn{4}{c}{Both} \\ \cline{2-13} & Estimate & Std. Err & p-value & Estimate & Std. Err & p-value & Estimate & Std. Err & p-value \\ \hline Outcome: ASB Zscore & & & & & & & & & & & & & \\ \hline Antisocial behaviors & \(-0.026\) & ( 0.085) & [ & 0.755] & 0.098 & ( 0.086) & [ & 0.257] & \(-0.224\) & ( 0.087) & [ & 0.010] \\ z-score & & & & & & & & & & & & & & \\ Antisocial behaviors & \(-0.110\) & ( 0.082) & [ & 0.181] & \(-0.026\) & ( 0.086) & [ & 0.762] & \(-0.312\) & ( 0.084) & [ & 0.000] \\ z-score & & & & & & & & & & & & & & \\ Antisocial behaviors & \(-0.037\) & ( 0.081) & [ & 0.647] & 0.159 & ( 0.083) & [ & 0.056] & \(-0.207\) & ( 0.083) & [ & 0.013] \\ z-score & & & & & & & & & & & & & & \\ \hline \end{tabular} Estimated coefficients, standard errors, and the associated p-value, using the original data and a reduced specification closely resembling Blattman, Jamison and Sheridan (2017), the synthetic data for \(\epsilon=1\), and the non-DP synthetic data for \(\epsilon=\infty\). See text for further details. \end{table} Table 5: Table 2(b), ASB Z-score only, with original, protected, and modified data (e.g., Seeman, Slavkovic and Reimherr (2020), and others). In future work, we will explore that result in this setting. Tables 6 and 7 show results from experimentation with the privacy budget \(\epsilon\), the precision \(\zeta\) of the transformation of continuous to discrete during the process, and the method of imputing and protecting the response variable, across various levels of the privacy budget, for a single response variable (in this case, fam_asb_z). The top panel in each table reproduces the key estimates from Table 4, for ease of referenced. Panel (i) of Table 6 shows results for the same \(\zeta=\frac{2}{3}\) as used in Table 4, but with increasing levels of protection (decreasing \(\epsilon\)). None of the treatment effects reported change in any substantial fashion. Panel (ii) shows results when decreasing \(\zeta\) by half to \(\frac{1}{3}\). Most point estimates are affected (similarly across all levels of \(\epsilon\)), and the "Therapy only" treatment would now appear to be marginally significant, though all numbers are not statistically significant from those using the higher \(\zeta\). However, the previously favored "Both" treatment is numerically quite close to the higher-\(\zeta\) numbers. Thus, in \begin{table} \begin{tabular}{l r r r r r r r r r r r r} \hline \hline & \multicolumn{3}{c}{Therapy Only} & \multicolumn{3}{c}{Cash Only} & \multicolumn{3}{c}{Both} \\ \cline{2-13} & Estimate & Std. Err & p-value & Estimate & Std. Err & p-value & Estimate & Std. 
Err & p-value \\ \hline Reference value & & & & & & & & & & & & \\ \hline (Original) & \(-0.026\) & ( & 0.085) & [ & 0.755] & 0.098 & ( & 0.086) & [ & 0.257] & \(-0.224\) & ( & 0.087) & [ & 0.010] \\ ( Synthetic) & \(-0.110\) & ( & 0.082) & [ & 0.181] & \(-0.026\) & ( & 0.086) & [ & 0.762] & \(-0.312\) & ( & 0.084) & [ & 0.000] \\ ( Non DP Synthetic) & \(-0.037\) & ( & 0.081) & [ & 0.647] & 0.159 & ( & 0.083) & [ & 0.056] & \(-0.207\) & ( & 0.083) & [ & 0.013] \\ \hline \hline \multicolumn{13}{l}{(i) High precision discretization} & & & & & & & & & & & & \\ \hline \(\epsilon=0.1\) & \(-0.103\) & ( & 0.082) & [ & 0.212] & \(-0.027\) & ( & 0.086) & [ & 0.754] & \(-0.306\) & ( & 0.084) & [ & 0.000] \\ \(\epsilon=0.5\) & \(-0.102\) & ( & 0.082) & [ & 0.216] & \(-0.028\) & ( & 0.086) & [ & 0.748] & \(-0.307\) & ( & 0.084) & [ & 0.000] \\ \(\epsilon=1\) & \(-0.110\) & ( & 0.082) & [ & 0.181] & \(-0.026\) & ( & 0.086) & [ & 0.762] & \(-0.312\) & ( & 0.084) & [ & 0.000] \\ \(\epsilon=\infty\) & \(-0.037\) & ( & 0.081) & [ & 0.647] & 0.159 & ( & 0.083) & [ & 0.056] & \(-0.207\) & ( & 0.083) & [ & 0.013] \\ \hline \hline \multicolumn{13}{l}{(ii) Low precision discretization} & & & & & & & & & & & & & \\ \hline \(\epsilon=0.1\) & \(-0.168\) & ( & 0.081) & [ & 0.038] & \(-0.101\) & ( & 0.083) & [ & 0.223] & \(-0.338\) & ( & 0.083) & [ & 0.000] \\ \(\epsilon=0.5\) & \(-0.170\) & ( & 0.081) & [ & 0.036] & \(-0.103\) & ( & 0.083) & [ & 0.214] & \(-0.339\) & ( & 0.083) & [ & 0.000] \\ \(\epsilon=1\) & \(-0.168\) & ( & 0.080) & [ & 0.037] & \(-0.089\) & ( & 0.083) & [ & 0.283] & \(-0.337\) & ( & 0.083) & [ & 0.000] \\ \(\epsilon=\infty\) & \(-0.029\) & ( & 0.086) & [ & 0.739] & 0.026 & ( & 0.085) & [ & 0.763] & \(-0.291\) & ( & 0.089) & [ & 0.001] \\ \hline \hline \end{tabular} All results for a single response variable, here: “Antisocial behavior z-score” (fam_asb_lt) (Panel (i) provides estimated coefficients, standard errors, and the associated p-value, using protected data where the continous covariates have been discretized and then protected using \(\zeta=n^{2/3}\) (high precision), for various values of \(\epsilon\). P-values are not adjusted. Panel (ii) displays the same estimated parameters, when \(\zeta=n^{1/3}\) (low precision). See text for further details. **WARNING Row 2 should match High Precision, \(\epsilon=1\), and Row 3 should match High Precision \(\epsilon=\infty\)** \end{table} Table 6: Varying precision and privacy budget this case, reducing precision from the preferred value of \(\zeta=\frac{2}{3}\) would potentially lead to "misleading" inferences. Much of this appears to be driven by changes (biases) in the point estimates, as the (naive) standard errors are not changed much. In our experiments, both the low and high precision discretization maintain the direction of the effect but the magnitude and the significant change. This data-dependent result will in part depend on the treatment variable and their interaction with the continous covariates being discretized. In other applications, this may have different effects. For Table 7, we switch to discussing the variable drugsellever_e ("Usually sells drugs") (a dummy variable), as its optimal imputation method was set to be a logistic regression, whereas the analysis in the Liberia study regresses it on the covariates using a linear regression. In Table 4, the results depicted for this variable in row 2 of panels (ii) and (iii) thus reflect imputation of \(y\) using the logistic regression. 
As before, the results are quite stable across multiple levels of \(\epsilon\) (Panel (ii) of Table 7). Switching to the method that, in principle, is more congenial to the analysis in the \begin{table} \begin{tabular}{l c c c c c c c c c c c c c c} \hline \hline & \multicolumn{3}{c}{therapy Only} & \multicolumn{3}{c}{Cash Only} & \multicolumn{3}{c}{Both} \\ \cline{2-13} & Estimate & Std. Err & p-value & Estimate & Std. Err & p-value & Estimate & Std. Err & p-value \\ \hline Reference value & & & & & & & & & & & & & \\ \hline (Original) & \(-0.029\) & ( 0.029) & [ 0.316] & 0.025 & ( 0.030) & [ 0.398] & \(-0.086\) & ( 0.030) & [ 0.004] \\ ( Synthetic) & \(-0.032\) & ( 0.033) & [ 0.336] & 0.065 & ( 0.034) & [ 0.060] & \(-0.119\) & ( 0.034) & [ 0.000] \\ ( Non DP Synthetic) & \(-0.026\) & ( 0.026) & [ 0.327] & \(-0.007\) & ( 0.027) & [ 0.793] & \(-0.081\) & ( 0.027) & [ 0.003] \\ \hline \multicolumn{13}{l}{(i) Logistic Regression} \\ \hline \(\epsilon=0.1\) & \(-0.019\) & ( 0.032) & [ 0.562] & 0.079 & ( 0.033) & [ 0.019] & \(-0.092\) & ( 0.033) & [ 0.005] \\ \(e=0.5\) & \(-0.024\) & ( 0.032) & [ 0.467] & 0.074 & ( 0.034) & [ 0.028] & \(-0.096\) & ( 0.033) & [ 0.004] \\ \(e=1\) & \(-0.032\) & ( 0.033) & [ 0.336] & 0.065 & ( 0.034) & [ 0.060] & \(-0.119\) & ( 0.034) & [ 0.000] \\ \(e=\infty\) & \(-0.026\) & ( 0.026) & [ 0.327] & \(-0.007\) & ( 0.027) & [ 0.793] & \(-0.081\) & ( 0.027) & [ 0.003] \\ \hline \multicolumn{13}{l}{(ii) Linear Regression} \\ \hline \(\epsilon=0.1\) & \(-0.056\) & ( 0.028) & [ 0.050] & \(-0.018\) & ( 0.030) & [ 0.546] & \(-0.114\) & ( 0.029) & [ 0.000] \\ \(e=0.5\) & \(-0.055\) & ( 0.028) & [ 0.051] & \(-0.018\) & ( 0.030) & [ 0.540] & \(-0.115\) & ( 0.029) & [ 0.000] \\ \(e=1\) & \(-0.058\) & ( 0.028) & [ 0.040] & \(-0.018\) & ( 0.029) & [ 0.552] & \(-0.116\) & ( 0.029) & [ 0.000] \\ \(e=\infty\) & \(-0.033\) & ( 0.028) & [ 0.236] & 0.046 & ( 0.028) & [ 0.107] & \(-0.080\) & ( 0.029) & [ 0.005] \\ \hline \hline \end{tabular} All results for a single response variable, here: “Usually sells drugs” (drugsellever_e) (Panel (i) provides estimated coefficients, standard errors, and the associated p-value, using protected data where the response variable is generated using (Gaussian) linear regression, for various values of \(\epsilon\). P-values are not adjusted. Panel (ii) displays the same estimated parameters, when the response variable is generated using logistic regression. See text for further details. **WARNING Row 2 should match Logistic Regression, \(\epsilon=1\), and Row 3 should match Logistic Regression \(\epsilon=\infty\)** \end{table} Table 7: Varying response variable generation method and privacy budget paper does lead, as before, to potentially incorrect inferences, as "Therapy only" again displays coefficients that are marginally significant for finite values of \(\epsilon\). The primary inference for treament with "Both" again remains unaffected. To see the average performance of the mechanism across multiple data generations, we compute Metrics 1-5 across 100 independently generated, across values of \(\epsilon\). Tables 8 and 9 show Metrics 1-4 for \(\epsilon=1\) and \(\epsilon=\infty\), respectively, with qualitatively similar results for other values of \(\epsilon\) in Appendix B. Table 10 contains values of Metric 5 across all values of \(\epsilon\), using the MSE for age as Figure 1: Comparison of inference regarding treatment effect in the Liberia study the base. 
As in the simulation studies, and as already observed in Tables 4 through 7, the differences between the estimates based on the original data and the synthetic data are quite small, implying that the protection mechanism has not significantly affected the inference about the regression parameters. This is true even when \(\epsilon=\infty\), suggesting that adding DP guarantees to the covariates is not coming at much extra cost. Table 10 nevertheless shows that reasonable values of \(\epsilon\) add significantly more distortion, and by extension, protection, to the underlying data, as expected. \begin{table} \begin{tabular}{l l l l l} \hline **Privacy Budget** & **Epsilon 0.1** & **Epsilon 0.5** & **Epsilon 1** & **Non-DP Synthesis** \\ \hline MSE of Variance of Age & 4481.74 & 4508.79 & 4503.16 & 0.9 \\ \hline \end{tabular} \end{table} Table 10: Effect on value of sensitive statistic (based on covariate data) measured using Metric 5 (MSE) for Liberia study, with varying privacy budget \(\epsilon\) and non-DP synthesis, averaged over 100 simulations of the sensitive dataframe, using 20 independently generated synthetic dataframes for each sensitive dataframe. \begin{table} \begin{tabular}{l c c c c} \hline **Variable names** & **Metric 1** & **Metric 2** & **Metric 3** & **Metric 4** \\ \hline (Intercept) & 1.00000 & 0.95000 & 0.80074 & 0.03692 \\ Cash Only & 1.00000 & 0.98000 & 0.81815 & 0.00567 \\ Therapy Only & 1.00000 & 0.96000 & 0.83278 & 0.00478 \\ Both & 1.00000 & 0.94000 & 0.80123 & 0.00676 \\ Therapy Block & 1.00000 & 0.99000 & 0.82447 & 0.00000 \\ Cash Block & 1.00000 & 0.96000 & 0.77586 & 0.00003 \\ Age & 1.00000 & 0.94000 & 0.79168 & 0.00004 \\ Barret ASB index & 1.00000 & 0.93000 & 0.79746 & 0.00103 \\ Drugs Sell indicator & 1.00000 & 0.97000 & 0.80715 & 0.00618 \\ Alcohol self indicator & 1.00000 & 0.97000 & 0.79696 & 0.00447 \\ Grass/Opium self indicator & 1.00000 & 0.92000 & 0.79709 & 0.00470 \\ Hard Drugs indicator & 1.00000 & 0.97000 & 0.78692 & 0.00639 \\ Steal self indicator & 1.00000 & 0.95000 & 0.78054 & 0.00531 \\ \hline \end{tabular} \end{table} Table 9: Effect on inference regarding regression coefficients measured using Metrics 1-4 for Liberia study, averaged over 100 independently generated non-DP synthetic dataframes. Discussion Much of the literature (in economics) publishes replication packages with either weakly protected (de-identified) data (contrary to our stated Aim 2), or withholds the data out of privacy concerns (impeding the ability for others to investigate inference, related to what we called Aim 1). The trigger for the analysis conducted here was the need for privacy protection in the presence of a requirement to publish data underlying an RCT, while maintaining reasonably broad usability of the data. We start with a specific focus on economists and social scientists interested in replication of RCTs. We explore using one of the simplest DP mechanisms (Laplace with perturbed histograms) for generation of the protected sensitive covariate data. We show that we can produce a protected dataset (Aim 2) and that analyses from such data would sufficiently maintain precise inference (Aim 1). Even for low values of the privacy-loss budget (i.e., stronger privacy), we can obtain comparable estimates in our regression models of interest. 
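For intuition, the following is a minimal sketch of a Laplace perturbed-histogram release for a single discretized covariate. It is not the exact mechanism of Section 3 (which protects the fully expanded joint histogram and imputes the outcomes separately); the binning and the unit-sensitivity convention are assumptions of the sketch.

```python
# Minimal sketch of a Laplace perturbed-histogram synthesis for one covariate.
# Assumptions: unit L1 sensitivity for the histogram (add/remove-one neighbors)
# and fixed, data-independent bin edges; neither is taken from the paper.
import numpy as np

def synthesize_covariate(x, epsilon, n_bins, lo, hi, rng):
    edges = np.linspace(lo, hi, n_bins + 1)          # data-independent bin edges
    counts, _ = np.histogram(x, bins=edges)
    noisy = counts + rng.laplace(scale=1.0 / epsilon, size=n_bins)
    noisy = np.clip(noisy, 0, None)                  # negative noisy counts -> 0
    probs = noisy / noisy.sum()                      # normalize to a distribution
    bins = rng.choice(n_bins, size=len(x), p=probs)  # resample bin labels
    # return synthetic values drawn uniformly within each sampled bin
    return rng.uniform(edges[bins], edges[bins + 1])

rng = np.random.default_rng(0)
age = rng.integers(18, 36, size=999).astype(float)   # placeholder "age" covariate
synthetic_age = synthesize_covariate(age, epsilon=1.0, n_bins=18, lo=18, hi=36, rng=rng)
```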
The mechanism described in Section 3 and evaluated in Sections 5 and 6 is a partially \(\epsilon\)-differentially-private mechanism offering strong privacy protection at conventional levels of \(\epsilon\) for covariate data, while providing relaxed (non-DP) protection for a small number parameters of central interest. Parameters that are not of interest are not released. Outcomes are imputed based on the protected covariates and parameters, and thus inherit the combined protection methods. In the real-world experiment, there are between 14 and 20 parameters (point estimates and associated standard errors), while the square root of the dimension of the protected data matrix, is approximately \(\sqrt{d*N}=83\). The mechanism allows for the release of the protected data and the perturbed parameters as part of a privacy-preserving replication package. Publication of replication package is a requirement in an increasing number of social science journals. The mechanism works quite well to allow for reasonable inferences in both simulations and real-world examples, in general close to the original (unprotected) inferences on population (intent-to-treat) parameters. This is achieved by leveraging the targeted structure, focusing on a small number of parameters of interest. Some caveats apply. Firstly, in our experiments, we did not protect stratifiers used for the random assignment, and thus did not fully protect the data matrix necessary for the full replication analysis. Such stratifiers tend to be indicators of large subsets of the population. Further protecting these may substantially affect the quality of random assignment, and therefore inference quality. On the other hand, they usually split the population into mutually exclusive sub-populations, and may thus compose. We have not further explored this aspect in the present study. Second, we have intentionally constrained the application of the mechanism to be compatible with recent computing hardware, within a reasonable amount of time, and have therefore reduced the number of covariates with respect to the original study. Yet we have not optimized the code computationally. Our primary constraint is memory required to construct and protect the fully expanded histogram. We explored the boundaries of this constraint by reducing the precision of the discretization (Table 6). The results suggests that the reduction in resolution may negatively impact the otherwise reasonable inference validity (in line with the literature). Future work should therefore focus in part on improvements in computational efficiency, in order to increase the size of the releasable protected covariate datasets. Third, we have in our analysis assumed naive standard errors. The original study itself applies corrections that account for multiple inferences. Inferences in the particular context here also need to account for the noise added through the mechanism (Abowd and Schmutte, 2015; Slavkovic and Seeman, 2022). We do not expect that such adjustments would substantially alter our conclusions in the two cases explored in this article, but any more general application will need to account for such adjustments. Furthermore, in our initial analysis, we focused on pure differential privacy (\(\epsilon\)-DP) for the protection of sensitive covariates. 
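As a back-of-the-envelope illustration of the memory constraint noted above, the following sketch counts the cells of a fully expanded joint histogram, assuming each continuous covariate is discretized into roughly \(n^{\zeta}\) bins; the covariate counts used here are placeholders.

```python
# Rough cell count of the fully expanded joint histogram (illustrative numbers).
n = 999                      # observations
n_binary = 5                 # binary covariates -> 2 cells each
n_continuous = 2             # continuous covariates -> about n**zeta cells each

for zeta in (1 / 3, 2 / 3):
    bins = round(n ** zeta)
    cells = (2 ** n_binary) * (bins ** n_continuous)
    print(f"zeta={zeta:.2f}: ~{bins} bins per continuous covariate, "
          f"{cells:,} histogram cells to build and perturb")
```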
The privacy literature is rapidly growing by developing new privacy definitions, methods and algorithms, aiming to improve the privacy-utility-computational trade-offs in many data contexts, and some of the next steps should consider relaxations of privacy definitions (e.g., Desfontaines and Pejo (2022)), and new methods for release of formally private synthetic data and protected estimates alone, including focusing on pre-analysis. Finally, how useful is the privacy-protected replication package for broader questions posed by researchers? Does the privacy-protected replication package allow for robustness tests, for instance through other specifications? The mechanism is tightly bound to the original authors' proposed specification, and may not perform well when other researchers apply non-congenial specifications. In such cases, access to the original data and additional privacy budget may be necessary in order to create additional protected datsets that allow for such investigation. Transparency, however, is increased, as the privacy-preserving mechanism would be released as part of the replication package, allowing for replication in other contexts, for instance by collecting new data. We note that we have relied on some key features of the typical RCT: Randomization is orthogonal to the observed covariates of participants, and thus is non-informative about those covariates. Relatively few parameters are of key interest, and estimated coefficients on other control variables are typically not published. Overall, this reduces the amount of information that needs to be released, greatly facilitating the application of strong privacy protection. Relaxing any of these features may lead to less favorable results. We demonstrate that a simple method with strong privacy guarantees could become an "out-of-the-box" method. While we did reduce the number of covariates in this first study, ongoing work will explore improvements in several dimensions, as noted above. The ideas and results reported here are the first step towards better understanding of feasible privacy-preservation of RCTs-based data, ensuring that privacy of data contributors to RCTs, often from LMIC countries, will be more strongly protected, while maintaining the ability to draw meaningful inferences. While policy-oriented stakeholders are primarily interested in the latter, citizens that contribute their data to RCTs and companies, such as fin-tech providers, that provide key data to researchers are also heavily invested in protecting privacy. Consumer and citizen protection agencies, ethic review boards, and other regulators, should be interested in knowing of the existence of privacy-enhancing methods, possibly facilitating approval of studies in the presence of strong privacy guarantees.
Randomized controlled trials (RCTs) are becoming a powerful tool for evaluating the impact of interventions and policies in a wide range of settings. In biomedicine and the social sciences, they are regarded as the gold standard for such evaluation. An increasing number of studies rely on RCTs for at least part of their inference, and the underlying data are typically protected through privacy-aware data collection, de-identification, and, in some cases, traditional disclosure-limitation methods. In this paper, we provide an empirical evaluation, based on published analyses, of how the benefits of strong privacy-protection methods (DP guarantees) can be brought to published analyses in economics and policy analysis. Leveraging the availability of replication packages, we examine how analyses in economics articles can be reproduced with privacy-protected data. The results show that a relatively simple DP-based method can maintain valid inference on the released data.
2309.04792
Individual subject evaluated difficulty of adjustable mazes generated using quantum annealing
In this paper, the maze generation using quantum annealing is proposed. We reformulate a standard algorithm to generate a maze into a specific form of a quadratic unconstrained binary optimization problem suitable for the input of the quantum annealer. To generate more difficult mazes, we introduce an additional cost function $Q_{update}$ to increase the difficulty. The difficulty of the mazes was evaluated by the time to solve the maze of 12 human subjects. To check the efficiency of our scheme to create the maze, we investigated the time-to-solution of a quantum processing unit, classical computer, and hybrid solver.
Yuto Ishikawa, Takuma Yoshihara, Keita Okamura, Masayuki Ohzeki
2023-09-09T13:36:27
http://arxiv.org/abs/2309.04792v2
# Individual subject evaluated difficulty of adjustable mazes generated using quantum annealing ###### Abstract In this paper, the maze generation using quantum annealing is proposed. We reformulate a standard algorithm to generate a maze into a specific form of a quadratic unconstrained binary optimization problem suitable for the input of the quantum annealer. To generate more difficult mazes, we introduce an additional cost function \(Q_{update}\) to increase the difficulty. The difficulty of the mazes was evaluated by the time to solve the maze of 12 human subjects. To check the efficiency of our scheme to create the maze, we investigated the time-to-solution of a quantum processing unit, classical computer, and hybrid solver. quantum annealing, combinatorial optimization, maze generation, bar-tipping algorithm, time-to-solution 2019 ## 1 Introduction A combinatorial optimization problem is minimizing or maximizing their cost or objective function among many variables that take discrete values. In general, it takes time to solve the combinatorial optimization problem. To deal with many combinatorial optimization problems, we utilize generic solvers to solve them efficiently. Quantum annealing (QA) is one of the generic solvers for solving combinatorial optimization problems Kadowaki and Nishimori (1998) using the quantum tunneling effect. Quantum annealing is a computational technique to search for good solutions to combinatorial optimization problems by expressing the objective function and constraint time requirements of the combinatorial optimization problem by quantum annealing in terms of the energy function of the Ising model or its equivalent QUBO (Quadratic Unconstrained Binary Optimization), and manipulating the Ising model and QUBO to search for low energy states Shu Tanaka and Seki (2022). Various applications of QA are proposed in traffic flow optimization Neukart et al. (2017); Hussain et al. (2020); Inoue et al. (2021), finance Rosenberg et al. (2016); Orus et al. (2019); Venturelli and Kondratyev (2019), logistics Feld et al. (2019); Ding et al. (2021), manufacturing Venturelli et al. (2016); Yonaga et al. (2022); Haba et al. (2022), preprocessing in material experiments Tanaka et al. (2023), marketing Nishimura et al. (2019), steel manufacturing Yonaga et al. (2022), and decoding problems Ide et al. (2020); Arai et al. (2021a). The model-based Bayesian optimization is also proposed in the literature Koshikawa et al. (2021) A comparative study of quantum annealer was performed for benchmark tests to solve optimization problems Oshiyama and Ohzeki (2022). The quantum effect on the case with multiple optimal solutions has also been discussed Yamamoto et al. (2020); Maruyama et al. (2021). As the environmental effect cannot be avoided, the quantum annealer is sometimes regarded as a simulator for quantum many-body dynamics Bando et al. (2020); Bando and Nishimori (2021); King et al. (2022). Furthermore, applications of quantum annealing as an optimization algorithm in machine learning have also been reported Neven et al. (2012); Khoshaman et al. (2018); O'Malley et al. (2018); Amin et al. (2018); Kumar et al. (2018); Arai et al. (2021b); Sato et al. (2021); Urushibata et al. (2022); Hasegawa et al. (2023); Goto and Ohzeki (2023). In this sense, developing the power of quantum annealing by considering hybrid use with various techniques is important, as in several previous studies Hirama and Ohzeki (2023); Takabayashi and Ohzeki (2023). 
In this study, we propose the generation of the maze by quantum annealing. In the application of quantum annealing to mazes, algorithms for finding the shortest path through a maze have been studied Pakin (2017). Automatic map generation is an indispensable technique for game production, including roguelike games. Maze generation has been used to construct random dungeons in roguelike games, by assembling mazes mok Bae et al. (2015). Therefore, considering maze generation as one of the rudiments of this technology, we studied maze generation using a quantum annealing machine. Several algorithms for the generation of the maze have been proposed. In this study, we focused on maze-generating algorithms. One can take the bar-tipping algorithm Alg (2023a), the wall-extending algorithm Alg (2023b), and the hunt-and-kill algorithm Alg (2023c). The bar-tipping algorithm is an algorithm that generates a maze by extending evenly spaced bars one by one. For the sake of explanation, we will explain the terminology here. A path represents an empty traversable part of the maze and a bar a filled non traversable part. Figure 1 shows where the outer wall, bars, and coordinate \((i,j)\) are in a \(3\times 3\) maze. The maze is surrounded by an outer wall as in Figure 1. It requires the following three constraints. First, each bar can be extended by one cell only in one direction. Second, the first column can be extended in four directions: up, down, left, and right, while the second and subsequent columns can be extended only in three directions: up, down, and right. Third, adjacent bars cannot overlap each other. We explain the detailed process of the bar-tipping algorithm using the \(3\times 3\) size maze. In this study, a maze generated by extending the \(N\times N\) bars is called \(N\times N\) size maze. First, standing bars are placed in every two cells in a field surrounded by an outer wall, as in Figure 1. Second, Figure 2 shows each step of bar-tipping algorithm. Figure 2 (a) shows the first column of bars extended. The bars in the first column are randomly extended in only one direction with no overlaps, as in Figure 2 (a). The bars can be extended in four directions (up, down, right, left) at this time. Figure 2 (b) shows the second column of bars being extended. Third, the bars in the second column are randomly extended in one direction without overlap as in Figure 2 (b). The bars can be extended in three directions (up, down, right) at this time. Figure 2 (c) shows the state in which the bars after the second column are extended. Fourth, the bars in subsequent columns are randomly extended in one direction, likewise the bars in the second column, as in Figure 2 (c). Figure 2 (d) shows the complete maze in its finished state. Following the process, we can generate a maze as in Figure 2 (d). If multiple maze solutions are possible, the maze solution is not unique, simplifying the time and difficulty of reaching the maze goal. These constraints must be followed for the reasons described below. The first constraint prevents a maze from generating a maze with multiples maze solutions and closed circuits. Figure 3 (a) shows a maze state that violates the first constraint. The step violating the first constraint because one bar in the upper right corner is extended in two directions as Figure 3 (a). The second constraint prevents generating a maze from a maze with closed circuits and multiple maze solutions. Figure 3 (b) shows a state that violates the second constraint. 
The second constraint is violated, it has a closed circuit and multiple maze solutions, as Figure 3 (b). The third constraint prevents maze generation from a maze with multiple maze solutions. Figure 3 (c) shows a state that violates the third constraint. The bars overlap in the upper right corner, making it the third constraint as Figure 3 (c). Next, we describe the wall-extending algorithm. It is an algorithm that generates a maze by extending walls. Figure 4 shows the extension starting coordinates of the wall-extending algorithm. Figure 5 (a) shows the initial state of the wall expansion algorithm. First, as an initial condition, the outer perimeter of the maze is assumed to be the outer wall, and the rest of the maze is assumed to be the path as Figure 5 (a). Coordinate system is different from the bar-tipping algorithm, all cells are labeled coordinates. As Figure 4 shows, the coordinates where both \(x\) and \(y\) are even and not walls are listed as starting coordinates for wall extending. The following process is repeated until all starting coordinates change to walls, as shown in Figure 5(c). Randomly choose the coordinates from the non-wall extension start coordinates. The next extending direction is randomly determined from which the adjacent cell is a path. Figure 5 (b) shows how the path is extended. The extension will be repeated while two cells ahead of the extending direction to be extended is a path as Figure 5 (b). Figure 5 (c) shows all starting coordinates changed to walls. These processes are repeated until all the starting coordinates change to walls as in Figure 5 (c). Figure 5 (d) shows a maze created by wall-extending. Following the process, we can generate a maze as in Figure 5 (d). Figure 1: Positions where outer wall, bars, and coordinate \((i,j)\) are in \(3\times 3\) maze. As a third, the hunt-and-kill algorithm is explained below. It is an algorithm that generates a maze by extending paths. Figure 6 shows the extension starting coordinates of the hunt-and-kill algorithm. Figure 7 (a) shows the initial state of the hunt-and-kill algorithm. The entire surface is initially walled off as Figure 7 (a). Coordinates, where both \(x\) and \(y\) are odd, are listed as starting coordinates for path extension as in Figure 6. As with the wall-extending algorithm, all cells are set to coordinates. Figure 7 (b) shows the state in which the path is extended. A coordinate is chosen randomly from the starting coordinates, and the path is extended from there as in Figure 7 (b). Figure 7 (c) shows the coordinate selection and re-extension after the path can no longer be extended. If the path can no longer be extended, a coordinate is randomly selected from the starting coordinates, which are already paths, and extension starts again from it as in Figure 7 (c). This process is repeated until all the starting coordinates turn into paths to generate the maze. Figure 7 (d) shows the complete maze with the hunt-and-kill algorithm. Following the process, we can generate a maze as in Figure 7 (d). Figure 2: Step of bar-tipping algorithm. **(a)** step1: bars in first column are extended. **(b)** step2: bars in second column are extended. **(c)** step3: bars in subsequent column are extended. **(d)** step4: A complete maze through these steps. Of the three maze generation algorithms mentioned above, the bar-tipping algorithm is relevant to the combinatorial optimization problem. 
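For reference, a minimal, illustrative Python sketch of the bar-tipping procedure described above is given below; it is not the authors' implementation (the classical baseline later referred to as "Classic"), but it follows the three constraints.

```python
# Illustrative bar-tipping maze generator (not the authors' "Classic" code).
# 0 = path, 1 = wall. An "N x N size" maze has N x N bars inside an outer wall.
import random

def bar_tipping_maze(N, seed=None):
    rng = random.Random(seed)
    W = 2 * N + 3                                   # full grid width including outer wall
    grid = [[0] * W for _ in range(W)]
    for k in range(W):                              # outer wall
        grid[0][k] = grid[W - 1][k] = grid[k][0] = grid[k][W - 1] = 1
    for i in range(N):                              # standing bars every two cells
        for j in range(N):
            grid[2 * i + 2][2 * j + 2] = 1
    for j in range(N):                              # columns processed left to right
        for i in range(N):
            r, c = 2 * i + 2, 2 * j + 2
            moves = [(-1, 0), (1, 0), (0, 1)]       # up, down, right
            if j == 0:
                moves.append((0, -1))               # left allowed only in the first column
            # third constraint: never tip onto a cell that is already a wall
            free = [(dr, dc) for dr, dc in moves if grid[r + dr][c + dc] == 0]
            dr, dc = rng.choice(free)
            grid[r + dr][c + dc] = 1                # first constraint: one direction per bar
    return grid

maze = bar_tipping_maze(3, seed=0)
print("\n".join("".join("#" if v else "." for v in row) for row in maze))
```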
In addition, unlike other maze generation algorithms, the bar-tipping algorithm is easy to apply because it only requires the consideration of adjacent elements. Thus, we have chosen to deal with this algorithm. Other maze generation algorithms could be generalized by reformulating them as combinatorial optimization problems. The wall-extending and hunt-and-kill algorithms will be implemented in future work, considering the following factors. The former algorithm introduces the rule that adjacent walls are extended and so are their walls. The number of connected components will be computed for the latter, and the result will be included in the optimization. Using the bar-tipping algorithm, we reformulated it to solve a combinatorial optimization problem that generates a maze with a longer solving time and optimized it using quantum annealing. Quantum annealing (DW_2000Q_6 from D-Wave), classical computing (simulated annealing, simulated quantum annealing, and algorithmic solution of the bar-tipping algorithm), and hybrid computing were compared with each other according to the generation time of mazes, and their performance was evaluated. The solver used in this experiment is as follows: DW_2000Q_6 from D-Wave, simulated annealer called SASampler and simulated quantum annealer called SQASampler from OpenJij ope (2023), D-Wave's quantum-classical hybrid solver called hybrid_binary_quadratic_model_version2 (BQM) and classical Figure 4: Red cells represent options of starting coordinates for the wall-extending algorithm. Figure 3: Mazes violated the constraints. **(a)** A maze violate the first constraint. **(b)** A maze violate the second constraint. **(c)** A maze violated the third constraint. computer (MacBook Pro(14-inch, 2021), OS: macOS Monterey Version 12.5, Chip: Apple M1 Pro, Memory: 16GB) This comparison showed that quantum annealing was faster. This may be because the direction of the bars is determined at once using quantum annealing, which is several times faster than the classical algorithm. We do not use an exact solver to solve the combinatorial optimization problem. We expect some diversity in the optimal solution and not only focus on the optimal solution in maze generation. Thus, we compare three solvers, which generate various optimal solutions. In addition, we generate mazes that reflect individual characteristics, whereas existing maze generation algorithms rely on randomness and fail to incorporate other factors. In this case, we incorporated the maze solution time as one of the other factors to solve the maze. The maze solving time was defined as the time (in seconds) from the start of solving the maze to the end of solving the maze. The paper is organized as follows. In the next Section, we explain the methods of our experiments. In Sec. 3, we describe the results of our experiments. In Sec. 4, we summarize this paper. Figure 5: **(a)** Initial state for wall-extending algorithm. **(b)** Step 1 for wall-extending algorithm. **(c)** Step 2 for wall-extending algorithm. **(d)** Maze generated using wall-extending algorithm. ## 2 Methods ### Cost function To generate the maze by quantum annealer, we need to set the cost function in the quantum annealer. One of the important features of the generation of the maze is diversity. In this sense, the optimal solution is not always unique. Since it is sufficient to obtain a structure consistent with a maze, the cost function is mainly derived from the necessary constraints of a maze, as explained below. 
Three constraints describe the basis of the algorithm of the bar-tipping algorithm. The cost function will be converted to a QUBO matrix to use the quantum annealer.To convert the cost function to a QUBO, the cost function must be written in a quadratic form. Using the penalty method, we can convert various constraints written in a linear form into a quadratic function. The penalty method is a method to rewrite the equality constant as a quadratic function. For example, the penalty method can rewrite an equation constant \(x=1\) to \((x-1)^{2}\). Thus, we construct the cost function for generating the maze using the bar-tipping algorithm below. The constraints of the bar-tipping algorithm correlate with each term in the cost function described below. The first constraint of the bar-tipping algorithm is that the bars can be extended in only one direction. It prevents making closed circuits. The second constraint of the bar-tipping algorithm is that the bars of the first column be extended randomly in four directions (up, right, down, and left), and the second and subsequent columns can be extended randomly in three directions (up, right, and down). It also prevents the creation of closed circuits. The third constraint of the bar-tipping algorithm is that adjacent bars must not overlap. Following the constraint in the bar-tipping algorithm, we can generate a maze with only one path from the start to the goal. The cost function consists of three terms to reproduce the bar-tipping algorithm according to the three constraints and to determine the start and goal. \[\begin{split} E(\{x_{i,j,d},X_{m,n}\})=\sum_{i,i^{\prime}}\sum_{ j,j^{\prime}}\sum_{d,d^{\prime}}Q_{(i,j,d),(i^{\prime},j^{\prime},d^{\prime})}x_{ i,j,d}x_{i^{\prime},j^{\prime},d^{\prime}}+\lambda_{1}\sum_{i}\sum_{j}\Biggl{(} \sum_{d}x_{i,j,d}-1\Biggr{)}^{2}\\ +\lambda_{2}\Biggl{(}\sum_{m}\sum_{n}X_{m,n}-2\Biggr{)}^{2}, \end{split} \tag{1}\] Figure 6: Red cells represent options of starting coordinates for the hunt-and-kill algorithm. where \(x_{i,j,d}\) denotes whether the bar in \(i\)-th row, \(j\)-th column extended in direction \(d\left(\mathrm{up}\colon 0,\mathrm{right}\colon 1,\mathrm{down}\colon 2, \mathrm{left}\colon 0\right)\) When the bar in coordinate \((i,j)\) is extended in direction, \(x_{i,j,d}\) takes \(1\), otherwise takes \(0\). Due to the second constraint of the bar-tipping algorithm, the bars after the second column cannot be extended on the left side; only the first column has \((d=3)\). Furthermore, \(Q_{(i,j,d)(i^{\prime},j^{\prime},d^{\prime})}\) in Equation 1 depends on \(i,j,d,i^{\prime},j^{\prime}\), and \(d^{\prime}\) and is expressed as follows \[Q_{(i,j,d),(i^{\prime},j^{\prime},d^{\prime})}=\left\{\begin{array}{ll}1&(i=i ^{\prime}-1,j=j^{\prime},d=2,d^{\prime}=0)\\ 1&(i=i^{\prime}+1,j=j^{\prime},d=0,d^{\prime}=2)\\ 0&\mathrm{otherwise}.\end{array}\right. \tag{2}\] The coefficients of \(\lambda_{1}\) and \(\lambda_{2}\) are constants to adjust the effects of each penalty term. The first term prevents the bars from overlapping and extending each other face-to-face. It represents the third constraint of the bar-tipping algorithm. Here, due to the second constraint, bars in the second and subsequent columns Figure 7: **(a)** Initial state for hunt-and-kill algorithm. **(b)** Step 1 for hunt-and-kill algorithm. **(c)** Step 2 for hunt-and-kill algorithm. **(d)** Maze generated using hunt-and-kill algorithm. cannot be extended to the left. 
Therefore, the adjacent bars in the same row cannot extend and overlap. This corresponds to the fact that \(d\) cannot take \(3\) when \(j\geq 1\). Thus, there is no need to reflect, considering the left and right. In particular, the first term restricts the extending and overlapping between the up and down adjacent bars. For example, the situation in which one bar in \((i,j)\) extended down \((d=2)\) and the lower bar in \((i+1,j)\) extended up \((d=0)\) is represented by \(x_{i,j,0}x_{i+1,j,2}=1\), and \(Q(i,j,0),(i+1,j,2)\) takes \(1\). In the same way, thinking of the relation between the bar in \((i,j)\) and the upper bar in \((i-1,j)\), \(Q_{(i-1,j,2),(i,j,0)}=1\). Thus, \(Q_{(i-1,j,2),(i,j,0)}x_{i,j,0}x_{i+1,j,2}\) takes 1, and the value of the cost function taken will increase. By doing this, the third constraint is represented as a first term. The second term is a penalty term that limits the direction of extending to one per bar. It represents the first constraint of the bar-tipping algorithm. This means that for a given coordinate \((i,j)\), the sum of \(x_{i,j,d}\left(d=0,1,2(,3)\right)\) must take the value \(1\). Here, the bars in the second and subsequent columns cannot extend to the left by the second constraint. Thus, \(d\) takes (0, 1, 2, 3) when \(j=0\), and \(d\) takes (0, 1, 2) when \(j\geq 1\). The third term is the penalty term for selecting two coordinates of the start and the goal from the coordinates \((m,n)\). This means that a given coordinate \((m,n)\), the sum of \(X_{m,n}\) takes \(2\). The start and the goal are commutative in the maze. They are randomly selected from the two coordinates determined by the third term. \(X_{m,n}\) denotes whether or not to set the start and goal at the \(m\)-th row and \(n\)-th column of options of start and goal coordinates. When the \((m,n)\) coordinate is chosen as the start and goal, \(X_{m,n}\) takes \(1\). Otherwise, it takes \(0\). There are no relations between \(X_{m,n}\) and \(x_{i,j,d}\) in Equation 1. This means that the maze structure and the start and goal determination coordinates have no relations. Figure 8 shows the coordinates \((m,n)\) that are the options of the start and the goal. As Figure 8 shows, \((m,n)\) is different from the coordinate setting bars; it is located at the four corners of the bars, where the bars do not extend. \(X_{m,n}\) and \(x_{i,j,d}\) are different. \(X_{m,n}\) are options of start and goal, and \(x_{i,j,d}\) are options of coordinates and directions to extend the bars. We have shown the simplest implementation of the maze generation following the bar-tipping algorithm by quantum annealer. Following the above, a maze, depending on randomness, is generated. To Generate a unique maze independent of randomness, we add the effect to make the maze more difficult in the cost function, and the difficulty is defined in terms of time (in seconds). Figure 8: Black cells represent outer walls and inner bars \((i,j)\). Red cells represent options of start and goal coordinates \((m,n)\). ### Update rule We propose an additional \(Q_{update}\) term to increase the time to solve the maze. We introduce a random term that takes random elements to change the maze structure. It is added to the Equation 1. 
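As a concrete reference for Equation 1 before the update term is introduced, the following minimal sketch assembles the maze-structure part of the cost function (the overlap term of Equation 2 and the one-direction-per-bar penalty; the start/goal term is omitted for brevity) as a QUBO dictionary that can be passed to a sampler such as OpenJij's SASampler. The variable encoding shown here is illustrative.

```python
# Sketch: QUBO for the maze-structure part of Equation 1 (start/goal term omitted).
# Variables x[(i, j, d)] with d in {0: up, 1: right, 2: down, 3: left (first column only)}.
from collections import defaultdict

def maze_qubo(N, lam1=2.0):
    Q = defaultdict(float)
    def dirs(j):
        return (0, 1, 2, 3) if j == 0 else (0, 1, 2)
    for i in range(N):
        for j in range(N):
            # penalty lam1 * (sum_d x_{ijd} - 1)^2, constant term dropped
            for d in dirs(j):
                Q[((i, j, d), (i, j, d))] += -lam1        # since x^2 = x
            ds = dirs(j)
            for a in range(len(ds)):
                for b in range(a + 1, len(ds)):
                    Q[((i, j, ds[a]), (i, j, ds[b]))] += 2.0 * lam1
            # overlap term: bar (i, j) tips down while bar (i+1, j) tips up;
            # both symmetric entries of Equation 2 folded into one upper-triangular entry
            if i + 1 < N:
                Q[((i, j, 2), (i + 1, j, 0))] += 2.0
    return dict(Q)

Q = maze_qubo(3)
# e.g. with OpenJij (usage is a sketch; adjust to the installed version):
# import openjij as oj
# result = oj.SASampler().sample_qubo(Q, num_reads=10)
# best = result.first.sample   # x[(i, j, d)] = 1 means bar (i, j) tips in direction d
```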
First, \(Q_{update}\) term, the additional term which includes the new QUBO matrix \(Q_{update}\), is given by \[\lambda_{update1}\sum_{i,i^{\prime}}\sum_{j,j^{\prime}}\sum_{d,d ^{\prime}}Q_{update(k,k^{\prime})}x_{i,j,d}x_{i^{\prime},j^{\prime},d^{\prime}}\] \[+\lambda_{update1}\sum_{i}\sum_{j}\sum_{d}\sum_{m}\sum_{n}Q_{ update(k,l)}x_{i,j,d}X_{m,n}\] \[+\lambda_{update1}\sum_{i}\sum_{j}\sum_{d}\sum_{m}\sum_{n}Q_{ update(l,k)}X_{m,n}x_{i,j,d} \tag{3}\] \[+\lambda_{update2}\sum_{m,m^{\prime}}\sum_{n,n^{\prime}}Q_{ update(l,l^{\prime})}X_{m,n}X_{m^{\prime},n^{\prime}},\] where \[\left\{\begin{array}{ll}k=d+(3N+1)i&(j=0)\\ k=d+3j+1+(3N+1)i&(j\neq 0)\\ l=(3N+1)N+(N+1)m+n.&\end{array}\right. \tag{4}\] Figure 9 shows the structure of \(Q_{update}\) and roles. Here, \(k^{\prime},l^{\prime}\) are the replacement of \(i,j,m,n\) in \(k,l\) with \(i^{\prime},j^{\prime},m^{\prime},n^{\prime}\). \(N\) in Equation 4 is the size of the maze. The coefficients \(\lambda_{update1}\) and \(\lambda_{update2}\) are constants to adjust the effect of terms. The elements of \(Q_{update}\) related to maze generation, part A in Figure 9 is multiplied by the \(\lambda_{update1}\). The elements of \(Q_{update}\) related to the relation between the start and goal determination and the maze generation, part B, C in Figure 9 is multiplied by the \(\lambda_{update1}\). The elements of \(Q_{update}\) related to the start and goal determination, part D in Figure 9 is multiplied by the \(\lambda_{update2}\). These are to control the maze difficulty without breaking the bar-tipping algorithm's constraints. Equation Figure 9: Structure of \(Q_{update}\). Part A is related to maze generation. Part B and part C are related to the relation between maze generation and the start and goal determination. Part D is related to the start and goal determination. 3 is represented by the serial number \(k\) of each coordinate \((i,j)\) at which bars can extend, and the sum \(l\) of the total number of coordinates at which the bars can extend and the serial number of coordinates \((m,n)\), which are options for the start and the goal. Furthermore, The second term and the third term in Equation 3 allows the maze to consider the relation between the structure of the maze and the coordinates of the start and the goal. Second, \(Q_{update}\), the new QUBO matrix, is given by \[Q_{update}:=p(t)Q_{update}+\big{\{}1-p(t)\big{\}}Q_{random}, \tag{5}\] where \(Q_{random}\) is a matrix of random elements from \(-1\) to \(1\) and \(p(t)\) depends on time \(t\) (in seconds) taken to solve the previous maze and is expressed as follows \[p(t)=\frac{1}{1+e^{-at}}. \tag{6}\] The \(Q_{update}\) is a matrix that was made with the aim of increasing the maze solving time through the maze solving iteration. The initial \(Q_{update}\) used in the first maze generation is a random matrix, and the next \(Q_{update}\) that is used in the second or subsequent maze generation is updated using Equation5, the maze solving time \(t\), and the previous \(Q_{update}\). The longer the solving time \(t\) of the maze is, the higher the percentage of the previous \(Q_{update}\) in the current \(Q_{update}\) and the lower the percentage of \(Q_{random}\); inversely, when \(t\) is small, the ratio of the previous \(Q_{update}\) is small, and the percentage of \(Q_{random}\) is significant. In other words, the longer the solving time \(t\) of the previous maze, the more characteristics of the previous term \(Q_{update}\) remain. 
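In code, the update of Equations 5 and 6 is a convex combination of the previous \(Q_{update}\) and a fresh random matrix, weighted by a sigmoid of the previous solving time; the following sketch uses the matrix dimension implied by Equation 4 and the value of \(a\) reported later in the text.

```python
# Sketch of the Q_update rule (Equations 5 and 6), with Q stored as a dense matrix.
import numpy as np

def update_Q(Q_prev, t, a=0.05, rng=None):
    """Blend the previous Q_update with a fresh random matrix.

    t : solving time (seconds) of the previously generated maze.
    Longer t -> p(t) closer to 1 -> more of the previous Q_update is kept.
    """
    rng = np.random.default_rng() if rng is None else rng
    p = 1.0 / (1.0 + np.exp(-a * t))                      # Equation 6
    Q_random = rng.uniform(-1.0, 1.0, size=Q_prev.shape)  # random elements in [-1, 1]
    return p * Q_prev + (1.0 - p) * Q_random              # Equation 5

# Dimension of the update matrix: (3N+1)N bar variables plus (N+1)^2 start/goal
# candidates, following Equation 4 (here N = 3 as an example).
N = 3
dim = (3 * N + 1) * N + (N + 1) ** 2
Q_update = np.random.default_rng(0).uniform(-1.0, 1.0, size=(dim, dim))  # initial random matrix
Q_update = update_Q(Q_update, t=30.0)   # p(30) is about 0.82 for a = 0.05
```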
Here, \(a\) is a constant to adjust the percentage. The function \(p(t)\) increases monotonically with \(t\) and takes values between \(0\) and \(1\). Thus, the longer the solving time \(t\), the larger the weight of the previous \(Q_{update}\) and the smaller the contribution of the random elements \(Q_{random}\). After the maze is solved, the QUBO for the next maze is updated by Equation 5 using the time taken to solve the maze. The update is carried out only once before each maze generation. Repeating the update gradually makes the maze more difficult for the individual. The sum of Equation 1 and Equation 3 is always used to generate a new maze, annealing from a maximally mixed state. ### 2.3 Experiments #### 2.3.1 Generation of maze We generate mazes by optimizing the cost function using DW_2000Q_6. Since the generated maze will not be solved, the update term is excluded for this experiment. \(\lambda_{1}=2\) and \(\lambda_{2}=2\) were chosen. #### 2.3.2 Computational cost We compare the generation times of \(N\times N\) mazes for DW_2000Q_6 from D-Wave, the simulated annealer SASampler and the simulated quantum annealer SQASampler from OpenJij, D-Wave's quantum-classical hybrid solver hybrid_binary_quadratic_model_version2 (hereinafter referred to as "Hybrid Solver"), and a classical computer (MacBook Pro (14-inch, 2021), OS: macOS Monterey Version 12.5, Chip: Apple M1 Pro, Memory: 16GB) running the bar-tipping algorithm coded with Python 3.11.5 (hereinafter referred to as "Classic"). The update term was excluded from this experiment. We set \(\lambda_{1}=2\) and \(\lambda_{2}=2\). DW_2000Q_6 was annealed 1000 times for 20\(\upmu\)s, and its QPU anneal time for maze generation was evaluated using time-to-solution (TTS). SASampler and SQASampler were annealed with 1000 sweeps. These parameters were kept constant throughout this experiment. Regression curves fitted using the least squares method were drawn from the results to examine the dependence of computation time on maze size. #### 2.3.3 Effect of update term The solving times of \(9\times 9\) mazes generated without \(Q_{update}\) and using \(Q_{update}\) were measured. This experiment asked 12 human subjects to solve mazes for one set (30 times). To prevent the players from memorizing the maze structure, they could only see a limited \(5\times 5\) window of cells; in other words, only the two cells surrounding their position in each direction were visible. The increase rate from the first step of the simple moving average of ten solving times was plotted on the graph. For this experiment, \(\lambda_{1}=2\), \(\lambda_{2}=2\), \(\lambda_{update1}=0.15\), \(\lambda_{update2}=0.30\) and \(a=0.05\) were chosen. For the two \(\lambda_{update}\) coefficients, we chose larger values that still do not violate the constraints of the bar-tipping algorithm. For the constant \(a\), we chose a value such that Equation 6 takes a value of about 0.8 (80%) when \(t=30\) seconds. ### 2.4 Applications The cost function in this paper has many potential applications when generalized. For example, it can be applied to graph coloring and traffic light optimization. Graph coloring can be handled by requiring adjacent nodes to have different colors. Traffic light optimization can be addressed by viewing maze generation as traffic flow. Roughly speaking, our cost function can be applied to problems of determining the next state by looking at adjacent states. \(Q_{update}\) can be applied to problems of determining the difficulty of the next state from the previous result. The selection of personalized educational materials is one such example.
Based on the solving time of the previously solved problems, the educational materials can be selected at a difficulty suitable for the individual. This is the most fascinating direction in future studies. As described above, we should emphasize that \(Q_{update}\) proposed in this paper also has potential use in various fields related to training and education. ## 3 Results ### 3.1 Generation of maze Figure 10 shows execution examples of \(9\times 9\) and \(15\times 15\) mazes generated by optimizing the cost function using DW_2000Q_6. ### Computational cost Figure 11: Time to reach the ground state with 99% success probability as a function of the maze size in DW_2000Q_6. The error bars represent a 95% confidence interval. The regression curve is given by \(\big{(}(3.231\pm 0.076)N+(11.40\pm 0.69)\big{)}\) for linear regression and \(\big{(}(7.4\pm 1.8)\cdot 10^{-2}N^{2}+(2.05\pm 0.30)N+(14.8\pm 1.0)\big{)}\) for quadratic regression. Figure 10: Left: \(9\times 9\) maze generated by DW_2000Q_6. Right: \(15\times 15\) maze generated by DW_2000Q_6. Red cells represent a start and a goal for the maze. Figure 12: **(a)** The time to reach the ground state as a function of the maze size in Classic. The error bars represent a 95% confidence interval. The regression curve is \(\big{(}(0.855\pm 0.090)N^{2}+(0.6\pm 1.5)N+(2.2\pm 5.1)\big{)}\). **(b)** Time to reach the ground state as a function of the maze size in SASampler. The error bars represent a 95% confidence interval. The regression curve is \(\big{(}(28.8\pm 1.2)N^{2}+(36\pm 20)N+(129\pm 71)\big{)}\). **(c)** Time to reach the ground state as a function of the maze size in SQASampler. The error bars represent a 95% confidence interval. The regression curve is \(\big{(}(172.8\pm 4.4)N^{2}+(287\pm 73)N-(1.5\pm 2.5)\cdot 10^{2}\big{)}\) Figure 13: Comparison of maze generation time between DW_2000Q_6 and Classic. Fits of the form \(aN^{2}+bN+c\) are applied to each of the datasets using least squares method. The results are as follows. Figure 11 shows the relation between TTS for maze generation and maze size on DW_2000Q_6. DW_2000Q_6 is \(\mathcal{O}(N)\) or \(\mathcal{O}(N^{2})\). Even if it is quadratically dependent on the maze size, its deviation is smaller than the other solvers. Figure 12 shows the relation between maze generation time and maze size on Classic, SASampler, and SQASampler. Classic \(\big{(}(0.855\pm 0.090)N^{2}+(0.6\pm 1.5)N+(2.2\pm 5.1)\big{)}\), SASampler \(\big{(}(28.8\pm 1.2)N^{2}+(36\pm 20)N+(129\pm 71)\big{)}\), and SQASampler \(\big{(}(172.8\pm 4.4)N^{2}+(287\pm 73)N-(1.5\pm 2.5)\cdot 10^{2}\big{)}\) exhibit quadratic dependence on the maze size \(\mathcal{O}(N^{2})\). Most of the solvers introduced here are \(\mathcal{O}(N^{2})\) since they are extending \(N\times N\) bars to generate a maze. Figure 13 shows the comparison of maze generation time between DW_2000Q_6 and Classic. DW_2000Q_6 has a smaller coefficient \(N^{2}\) than the classical algorithm, and after \(N=5\), DW_2000Q_6 shows an advantage over Classic in the maze generation problem. The improvement using quantum annealing occurred because it determines the direction of \(N\times N\) bars at once. Figure 14 shows the relation between maze generation time and maze size on Hybrid Solver. 
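The linear and quadratic scaling fits quoted in this section are ordinary least-squares polynomial fits and can be reproduced along the following lines; the timing data below are placeholders, not the measured values.

```python
# Sketch: least-squares scaling fits of the form a*N^2 + b*N + c (placeholder data).
import numpy as np

sizes = np.arange(1, 16)                       # maze sizes N
times = 3.2 * sizes + 11.4 + np.random.default_rng(0).normal(0, 1, sizes.size)  # fake timings

lin_coef, lin_cov = np.polyfit(sizes, times, deg=1, cov=True)
quad_coef, quad_cov = np.polyfit(sizes, times, deg=2, cov=True)

# one-standard-error uncertainties on the fitted coefficients
print("linear   :", lin_coef, np.sqrt(np.diag(lin_cov)))
print("quadratic:", quad_coef, np.sqrt(np.diag(quad_cov)))
```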
Linear and quadratic fits applied to the dataset indicate that the Hybrid Solver is \(\mathcal{O}(1)\) or \(\mathcal{O}(N)\) \(\big{(}(3.29\pm 0.83)\cdot 10^{2}N+(2.99325\pm 0.00090)\cdot 10^{6}\big{)}\) between \(N=1\) and \(N=18\), and then shifts to \(\mathcal{O}(N^{2})\) \(\big{(}(6.899\pm 0.065)\cdot 10^{3}N^{2}-(0.4\pm 3.2)\cdot 10^{3}N+(6.90\pm 0.39)\cdot 10^{5}\big{)}\). The shift in the computational cost of the Hybrid Solver may have resulted from a change in its algorithm.

Figure 14: Time to reach the ground state as a function of maze size in the Hybrid Solver. The error bars represent a 95% confidence interval.

### Effect of update term

Here, 12 human subjects are asked to solve one set of mazes (30 times), and the maze is shown to increase in difficulty as it adapts to each human subject. Figure 15 (a) shows the increase rate, relative to the first step, of the simple moving average of 10 solving times for mazes generated without \(Q_{update}\), together with the individual increase rates. The solving time of the maze without \(Q_{update}\) became slightly shorter overall. Figure 15 (b) shows the corresponding increase rate for mazes generated using \(Q_{update}\), together with the individual increase rates. The solving time of the maze using \(Q_{update}\) became longer overall. Most of the players increased their solving time, although some players decreased or did not change theirs. In addition, for nine players, the average solving time of the maze generated using \(Q_{update}\) was longer than that of the maze generated without \(Q_{update}\). These results show that \(Q_{update}\) has the potential to increase the difficulty of the mazes.

Figure 15: **(a)** Left: Increase rate from the first step of the simple moving average of 10 solving times of the \(9\times 9\) maze generated without \(Q_{update}\). The error bars represent standard errors. Right: All players' increase rates from the first step of the simple moving average of 10 solving times of the \(9\times 9\) maze generated without \(Q_{update}\). **(b)** Left: Increase rate from the first step of the simple moving average of 10 solving times of the \(9\times 9\) maze generated using \(Q_{update}\). The error bars represent standard errors. Right: All players' increase rates from the first step of the simple moving average of 10 solving times of the \(9\times 9\) maze generated using \(Q_{update}\).

## 4 Discussion

In this paper, we show that generating difficult mazes (mazes with longer solving times) with the bar-tipping algorithm is also possible with quantum annealing. By reformulating the bar-tipping algorithm as a combinatorial optimization problem, we generalize it so that mazes can be generated more flexibly. In particular, our approach is simple but can adjust the difficulty of solving mazes via quantum annealing. In Sec. 3.2, comparing the computational costs of our maze generation approach using TTS, DW_2000Q_6 has a smaller coefficient of \(N^{2}\) than its classical counterpart. Therefore, as \(N\) increases, the computational cost of DW_2000Q_6 can be expected to be lower than that of classical simulated annealing for a certain time. Unfortunately, since the number of qubits in the D-Wave quantum annealer is finite, the potential power of generating mazes by quantum annealing is limited. However, our insight demonstrates some advantages of quantum annealing against its classical counterpart. In addition, we observed that the Hybrid Solver's computational cost was constant up to \(N=18\).
This indicates that hybrid solvers could become effective if they are developed to handle many variables in the future. In Sec. 3.3, we proposed \(Q_{update}\) to increase the solving time using quantum annealing. We demonstrated that introducing \(Q_{update}\) increased the time to solve the maze and changed the difficulty compared to the case where \(Q_{update}\) was not introduced. In these experiments, the parameters (\(\lambda_{update1}\), \(\lambda_{update2}\), and \(a\)) were fixed. Generating mazes that are difficult for every player may be possible by adjusting the parameters individually. One direction for future study is the application of our cost function in various realms. We should emphasize that \(Q_{update}\) proposed in this paper also has potential use in various fields related to training and education. The powerful computation of quantum annealing and its variants opens the way to such realms with high-speed computation and various solutions.

## Conflict of Interest Statement

Sigma-i employs author Masayuki Ohzeki. The remaining authors declare that the research was conducted without any commercial or financial relationships that could be construed as a potential conflict of interest.

## Author Contributions

Y. I., T. Y., and K. O. conceived the idea of the study. M. O. developed the statistical analysis plan, guided how to use quantum annealing to find the optimal solution, and contributed to interpreting the results. Y. I., T. Y., and K. O. drafted the original manuscript. M. O. supervised the conduct of this study. All authors reviewed the manuscript draft and revised it critically for intellectual content. All authors approved the final version of the manuscript to be published.

## Funding

The authors acknowledge financial support from the MEXT-Quantum Leap Flagship Program Grant No. JPMXS0120352009, as well as the Public/Private R&D Investment Strategic Expansion PrograM (PRISM) and the programs for Bridging the gap between R&D and the IDeal society (society 5.0) and Generating Economic and social value (BRIDGE) from the Cabinet Office.

## Acknowledgments

The authors thank Reo Shikanai and Yoshihiko Nishikawa for fruitful discussions on further applications of our approach. This paper is the result of research developed from an exercise class held at Tohoku University in Japan, "Quantum Annealing for You, 2nd party!". We thank one of the supporters, Rumiko Honda, for supporting the operations. The participants were a diverse group, ranging from high school students to university students, graduate students, technical college students, and working adults. As can be seen from the authors' affiliations, this is a good example of a leap from the diversity of the participants to the creation of academic and advanced content.
In this paper, we propose a method for generating mazes using quantum annealing. We reformulate a standard maze-generation algorithm as a quadratic unconstrained binary optimization problem suited to quantum annealers. To create more difficult mazes, we introduce an additional cost function $Q_{update}$ that raises the difficulty level of the maze. The difficulty of the generated mazes was evaluated using the solving times of 12 human participants. To verify the efficiency of this maze-generation method, we investigated the time-to-solution on a quantum processing unit, a classical computer, and a hybrid solver.
2309.15847
Disinformation Detection: An Evolving Challenge in the Age of LLMs
The advent of generative Large Language Models (LLMs) such as ChatGPT has catalyzed transformative advancements across multiple domains. However, alongside these advancements, they have also introduced potential threats. One critical concern is the misuse of LLMs by disinformation spreaders, leveraging these models to generate highly persuasive yet misleading content that challenges the disinformation detection system. This work aims to address this issue by answering three research questions: (1) To what extent can the current disinformation detection technique reliably detect LLM-generated disinformation? (2) If traditional techniques prove less effective, can LLMs themselves be exploited to serve as a robust defense against advanced disinformation? and (3) Should both these strategies falter, what novel approaches can be proposed to counter this burgeoning threat effectively? A holistic exploration of the formation and detection of disinformation is conducted to foster this line of research.
Bohan Jiang, Zhen Tan, Ayushi Nirmal, Huan Liu
2023-09-25T22:12:50
http://arxiv.org/abs/2309.15847v1
# Disinformation Detection: An Evolving Challenge in the Age of LLMs

###### Abstract

The advent of generative Large Language Models (LLMs) such as ChatGPT has catalyzed transformative advancements across multiple domains. However, alongside these advancements, they have also introduced potential threats. One critical concern is the misuse of LLMs by disinformation spreaders, leveraging these models to generate highly persuasive yet misleading content that challenges the disinformation detection system. This work aims to address this issue by answering three research questions: (1) To what extent can the current disinformation detection technique reliably detect LLM-generated disinformation? (2) If traditional techniques prove less effective, can LLMs themselves be exploited to serve as a robust defense against advanced disinformation? and (3) Should both these strategies falter, what novel approaches can be proposed to counter this burgeoning threat effectively? A holistic exploration of the formation and detection of disinformation is conducted to foster this line of research.

## 1 Introduction

The rise of Large Language Models (LLMs), exemplified by models such as ChatGPT [15] and Llama [26], has been a significant milestone in the domain of Computational Social Science (CSS). While LLMs have paved the way for expansive studies of human language and behavior [34], a pressing concern is their potential for misuse such as disinformation generation and propagation. As these models evolve in their capacity to generate increasingly persuasive human-level content, there exists a concomitant risk of their deployment in intentionally generating misleading information at scale. A concerning remark from a recent study underscores this [24] -- _"the fact that AI-generated disinformation is not only cheaper and faster, but also more effective, gives me nightmares."_ In the era preceding LLMs, research in AI-generated disinformation detection predominantly revolved around relatively Smaller Language Models (SLMs) such as BERT [4], GPT-2 [2], and T5 [19]. The advent of LLMs, with their billion-scale parameters, has dramatically escalated the complexity of disinformation detection. The textual content generated by these LLMs is natural and human-sounding [23]. This evolution raises critical questions about the robustness and adaptability of current disinformation detection techniques, which were primarily designed around SLMs. Despite the significance, the consequences of this shift have not been extensively studied. This paper is motivated to bridge this knowledge gap by answering the following research questions:

* **RQ1:** Are existing disinformation detection techniques apt for LLM-generated disinformation?
* **RQ2:** If not, can LLMs themselves be adapted to detect such disinformation?
* **RQ3:** If both avenues fall short, what alternative solutions can be considered?

To ensure our findings are grounded in practical implications, we contextualize our research within a real-world scenario: a malicious actor intends to leverage LLMs to generate "advanced" disinformation with the goal of fooling automated detection systems and swaying public perception.

Figure 1: Overview of disinformation generation and detection using ChatGPT. We input human-crafted disinformation (left) along with distinct prompts to produce three separate LLM-generated disinformation datasets (center). We subsequently evaluate the efficacy of the disinformation detection system (right) against LLM-generated disinformation.
Detailed descriptions of the different types of prompts are elaborated later.

To this end, in this study, we start with a widely-used benchmark dataset [1] comprising human-written news articles that are categorized as either _fake_ or _true_. Based on this dataset, we construct novel disinformation datasets (\(D_{\text{gpt\_std}}\), \(D_{\text{gpt\_mix}}\), and \(D_{\text{gpt\_cot}}\)) of varying complexity levels with ChatGPT (GPT-3.5 and 4) using three prompt techniques. A high-level overview of our disinformation generation and detection is presented in Figure 1. Addressing **RQ1**, we initially employ a state-of-the-art disinformation detection method [8], which involves fine-tuning a RoBERTa-based model [13] on human-written disinformation datasets (\(D_{\text{human}}\)). Subsequently, we evaluate the effectiveness of the fine-tuned RoBERTa-based model in detecting LLM-generated disinformation. For **RQ2**, we turn the lens towards LLMs themselves, probing their ability to discern self-generated disinformation. Lastly, for **RQ3**, we propose an innovative prompting method that aspires to emulate the human fact-checking process, leveraging LLMs to effectively detect advanced disinformation. Through comprehensive experiments, we obtained several crucial observations. \(\spadesuit\) Initial analyses indicate that while the fine-tuned RoBERTa model can accurately detect "simple" LLM-generated disinformation, it fails when confronted with disinformation of a higher difficulty level generated from "advanced" prompts. Notably, for disinformation generated using chain-of-thought (CoT) prompts as detailed by [28], the fine-tuned detection model has an alarmingly high misclassification rate of 77.9%. In particular, a more detailed examination reveals a discernible political bias in the detection model. Our results demonstrate that while the model exhibits a pronounced inclination towards categorizing center-leaning news as _true_, it tends to classify liberal- and conservative-leaning narratives as _fake_. \(\spadesuit\) Furthermore, we observe that vanilla ChatGPT cannot effectively detect disinformation generated even by itself. \(\spadesuit\) However, our research also unveils an avenue for improvement: by leveraging a carefully designed CoT-inspired prompt, we can significantly increase the detection accuracy. In summary, this work has the following contributions:

* _Dataset Curation_: We construct three LLM-generated datasets to facilitate the area of disinformation detection;
* _Problem Validation_: In contrast to previous work, we demonstrate that existing detection techniques (SLMs) cannot effectively detect LLM-generated disinformation;
* _Framework Proposed_: We propose novel methods for LLM-generated disinformation detection.

## 2 Related Work

Our focus in this paper is twofold: i) disinformation detection; and ii) text generation using LLMs. We now discuss the related literature on each.

### Disinformation Detection

While _misinformation_ refers to false or inaccurate information that is spread without necessarily having the intent to deceive, _disinformation_ is deliberately fabricated or manipulated information intended to deceive or mislead people. Both belong to the family of fake news [22]. The inundation of misleading content in today's digital age surpasses the capacities of conventional manual fact-checking approaches, compelling the pursuit of automated countermeasures.
In response, scholars have gravitated towards advanced computational methodologies for the automated detection of disinformation [14]. In recent years, an important milestone in disinformation detection has been the development of deep learning. Researchers train deep neural networks on large corpora to learn various textual features such as semantic meaning, writing styles, and tonal subtleties [31]. For instance, Ruchansky et al. [20] introduced the CSI model, a hybrid deep learning model fusing content, social, and temporal information to enhance fake news detection. FakeBERT [9] is a BERT-based model that combines several blocks of a single-layer convolutional neural network (CNN), endowed with varied kernel dimensions and filters, with BERT. However, there remains a gap in the literature addressing the detection of disinformation generated by LLMs. A recent work provides an initial exploration into the effects of AI-generated COVID-19 misinformation on both detection systems and human perception [33]. [24] posited that disinformation generated by GPT-3 is often indistinguishable from that crafted by humans. Our study aims to address this challenge, concentrating explicitly on LLM-generated disinformation and assessing the robustness of current detection methods when faced with this novel challenge.

### Large Language Models for Text Generation

In recent years, the natural language processing (NLP) community has witnessed a shift from SLMs with millions of parameters to LLMs boasting billions of parameters. This transition has yielded significant advancements in various text generation tasks [32]. Notably, models such as LaMDA [25] with 137 billion parameters, OPT with 175 billion parameters [30], Bloom with 176 billion parameters [21], and PaLM with 540 billion parameters [3], alongside the popular GPT family (including GPT-3, 3.5, and 4) [10], have shown an increasing ability to generate human-level responses from few-shot or zero-shot input [17]. However, it is essential to acknowledge that the format of the input prompt can impact the performance [29]. Leveraging advanced prompt engineering techniques, such as those explored in recent research [28], can effectively guide LLMs to produce responses that are not only more accurate but also of higher quality. Trained on a large online corpus, ChatGPT is a repository of diverse knowledge. What sets ChatGPT apart is its unique training methodology -- Reinforcement Learning with Human Feedback (RLHF) [16]. In this way, human feedback is systematically incorporated into generating and selecting optimal results. Moreover, ChatGPT is accessible to the public through OpenAI APIs and a concise online chatbot interface. Given these features, our work harnesses ChatGPT (GPT-3.5 and 4) to generate "high-quality" disinformation.

## 3 Datasets

In this section, we introduce the data collection process. We begin with a human-written fake news dataset (i.e., \(D_{human}\)). We then build three LLM-generated fake news datasets (i.e., \(D_{\text{gpt\_std}}\), \(D_{\text{gpt\_mix}}\), and \(D_{\text{gpt\_cot}}\)) upon \(D_{\text{human}}\) using distinct zero-shot prompt techniques. We contribute these datasets as novel resources for facilitating future research in the detection of LLM-generated disinformation. Statistics of our datasets are provided in Table 1.

### Human-written Dataset.
In the field of disinformation detection, the _Fake and Real News Dataset_ (\(D_{human}\)) stands as one of the benchmark datasets [1]. Real news within this dataset was sourced from _Reuters_. In total, 21,417 real news articles were collected and categorized into World News and Politics News. On the other hand, 23,525 fake news articles were collected from various unreliable sources that had been flagged by fact-checking websites such as _Politifact_. This assortment of fake news is categorized into six distinct topics: General News, US News, Government News, Left-Wing News, Middle-East News, and Politics.

### LLM-Generated Dataset.

ChatGPT possesses the capacity to generate human-level text by responding to a given prompt, which serves as a task directive. In this work, we leverage ChatGPT (GPT-3.5 and 4) to curate three novel LLM-generated disinformation datasets:

* \(D_{\text{gpt\_std}}\): This dataset collects 23,278 LLM-generated disinformation articles produced by minimally modifying the human-written disinformation.
* \(D_{\text{gpt\_mix}}\): We merge human-written true news with fake news, constructing a more challenging dataset for evaluation.
* \(D_{\text{gpt\_cot}}\): Leveraging chain-of-thought prompts, we guide ChatGPT to emulate human cognitive processes in the creation of misleading content, further diversifying our disinformation datasets.

\begin{table} \begin{tabular}{l|c|c|c|c} \hline & \(D_{\text{human}}\) & \(D_{\text{gpt\_std}}\) & \(D_{\text{gpt\_mix}}\) & \(D_{\text{gpt\_cot}}\) \\ \hline \# of samples & 23,525 & 23,278 & 1,000 & 1,737 \\ \hline headline & ✓ & ✓ & & \\ \hline content & ✓ & ✓ & ✓ & ✓ \\ \hline \end{tabular} \end{table} Table 1: Datasets statistics

Figure 2: Templates of three zero-shot prompts.

The prompt templates of _standard_, _mixture_, and _chain-of-thought_ are shown at the top of each box in Figure 2. In the bottom part, we outline the special input variables used in each prompt.

\(D_{\text{gpt\_std}}\)**:** The _standard_ prompt has been commonly used to generate textual content zero-shot [33]. We leverage this prompt to effect minimal modifications on human-written disinformation, polishing it with a formal tone and refined vocabulary. Compared to the human-written version, the LLM-generated disinformation remains faithful to its original content without introducing extraneous information.

\(D_{\text{gpt\_mix}}\)**:** Our subsequent objective is to generate more "advanced" disinformation that melds true stories with false content [6]. To this end, we design the _mixture_ prompt to generate disinformation by combining true and fake news.

\(D_{\text{gpt\_cot}}\)**:** However, ChatGPT sometimes simply stacks two news pieces in its responses to the _mixture_ prompt, for example, _"[true news]. Meanwhile, [fake news]."_ To address this problem, we propose to guide ChatGPT through a step-by-step process to generate disinformation that mirrors human cognitive processes. Our methodology draws inspiration from Rudyard Kipling's timeless framework of the six fundamental questions for news writing [7]: _who, what, how, where, when,_ and _why_. We incorporate these essential inquiries into the design of the _chain-of-thought_ prompt. An example is presented in Figure 3. In the first step, we guide ChatGPT to extract the main _characters (who), places (where), time stamps (when),_ and _key events (what, how, why)_ in a given news article. We then ask ChatGPT to hallucinate a _fake event_.
This type of disinformation is often called "False Context" _-- where genuine content is shared with false contextual information_ [6]. In this work, we use ChatGPT to generate such disinformation by reconstructing genuine content in the context of the 2028 U.S. presidential election. Furthermore, our prompt engineering takes into account the importance of diversity in media perspectives. We incorporate a wide-ranging selection of news media outlets, each representing distinct ideological stances. Specifically, we include _CNN, FOX News,_ and _Reuters_, which respectively epitomize liberal, conservative, and neutral media outlets1. The inclusion of these news media in our prompts serves a dual purpose: (1) it enriches the diversity of generated content; and (2) it facilitates an examination of the impact of media bias on the detection system.

Figure 3: An example of the chain-of-thought prompt.

Footnote 1: [https://www.allsides.com/media-bias/media-bias-chart](https://www.allsides.com/media-bias/media-bias-chart)

**Data Validation.** To ensure that the generated disinformation meets our predefined criteria, we compare samples from \(D_{\text{human}}\) against those from \(D_{\text{gpt\_std}}\) as a case study. In particular, we analyze linguistic and semantic similarities to verify whether the _standard_ prompt can guide ChatGPT to minimally modify human-written disinformation while focusing on polishing the language. We employ the Linguistic Inquiry and Word Count (LIWC) [11] and t-SNE [27], respectively. For the linguistic analysis, we focused on eleven distinct categories of linguistic and psychological features: 1. Analytic: Logic and formal thinking; 2. Linguistic: General word usage and expressions; 3. Drives: Primary motivations behind behavior, such as achievement or power; 4. Cogproc: Cognitive processes related to information reception and processing; 5. Emotion: Usage of emotional words; 6. Swear: Use of swear words; 7. Prosocial: Behaviors indicating care or help towards others, particularly at the interpersonal level; 8. Moral: Words reflecting judgmental language; 9. Culture: Words related to cultural domains like politics, ethnicity, and technology; 10. Perception: Sensory and experiential aspects; and 11. Conversation: Use of informal words and slang.

In Figure 4, we illustrate the linguistic differences across samples in \(D_{\text{human}}\) and \(D_{\text{gpt\_std}}\). LLM-generated disinformation involves an increase in prosocial terminology (44.4%), emphasizing compassion and supportiveness, and a boost in political and ethical themes (Culture, 26.3%). It also amplifies primary motivational cues (Drives, 14.9%), embeds moral undertones (13.6%), and enhances logical coherence (Analytic, 6.8%). Conversely, ChatGPT reduces the usage of emotional terms (11.1%). Most notably, it diminishes the use of profanities and colloquialisms, as evident in the categories Swear (74.1% decrease) and Conversation (74.5% decrease). The semantic differences are shown in Figure 5. The substantial overlap between the blue (human-written) and orange (ChatGPT-generated) points indicates similar semantic meaning (around 86.8%).

## 4 Experiments and Results

In this section, we conduct disinformation detection on our collected datasets. We evaluate the detection models' performance in classifying human-written and LLM-generated disinformation. We further present the efficacy of the proposed approach.

### Existing technique (RQ1).
We demonstrate that current state-of-the-art models for disinformation detection are insufficiently robust when faced with advanced disinformation. In this study, we leverage a RoBERTa-based model to detect our collected disinformation [8]. The RoBERTa model was fine-tuned on human-written news articles derived from diverse online news outlets. A noteworthy constraint of this model is its 500-word input limit. To address this, we employ tiktoken2, an OpenAI Python library, for the truncation and tokenization of long text inputs (a minimal sketch of this truncation-plus-classification pipeline is given at the end of this subsection). Our experiment begins by testing the model on samples from \(D_{human}\); the model is then challenged with LLM-generated disinformation.

Footnote 2: [https://github.com/openai/tikitoken](https://github.com/openai/tikitoken)

**Performance on \(D_{\text{human}}\):** As shown in Table 2, the RoBERTa model performs exceptionally well in detecting the human-written disinformation from \(D_{\text{human}}\). We observe a very low misclassification rate of only 0.07%. Subsequently, we evaluate the model on \(D_{\text{gpt\_std}}\), \(D_{\text{gpt\_mix}}\), and \(D_{\text{gpt\_cot}}\).

**Performance on \(D_{\text{gpt\_std}}\):** In our evaluation on the \(D_{\text{gpt\_std}}\) dataset, the detection model exhibits a notably low misclassification rate of 1.20%, as detailed in Table 2. This performance aligns with the findings presented by [33]. Such results underscore the model's robust capacity to detect disinformation from \(D_{\text{gpt\_std}}\).

**Performance on \(D_{\text{gpt\_mix}}\) and \(D_{\text{gpt\_cot}}\):** The datasets \(D_{\text{gpt\_mix}}\) and \(D_{\text{gpt\_cot}}\) were curated using advanced prompt engineering methodologies and were designed to be particularly challenging for the detection model. As shown in Table 2, they display high misclassification rates of 15.40% and 77.93%, respectively.

Figure 4: Linguistic differences between disinformation in \(D_{\text{human}}\) and \(D_{\text{gpt\_std}}\).

Figure 5: Comparison of text embeddings of \(D_{\text{human}}\) (blue) and \(D_{\text{gpt\_std}}\) (orange) using t-SNE.

In contrast to the baseline dataset \(D_{\text{gpt\_std}}\), which is relatively straightforward in its generation, both \(D_{\text{gpt\_mix}}\) and \(D_{\text{gpt\_cot}}\) introduce greater diversity by generating disinformation with both facts and falsehoods. This intricate mixing creates a rich set of data points that differs from the distribution of the training data. Drawing insights from a recent work [5], we posit that the detection model might be facing challenges in effectively transferring knowledge to discern such out-of-distribution disinformation samples. To evaluate the **political bias** in the detection model, we assess its performance across LLM-generated disinformation from diverse ideological spectrums: liberal (CNN), conservative (FOX News), and centrist (Reuters). The results in Table 3 clearly indicate the presence of political bias in the detection model. The model tends to classify center-leaning disinformation as _true_ news, with a misclassification rate of approximately 66.4%. In comparison, the model achieves relatively moderate misclassification rates for liberal and conservative outlets, approximately 52.5% and 50.8%, respectively. Such patterns suggest a tendency of the model to predict politically biased news as fake news. Echoing findings from previous work [12], it is evident that media outlets with extreme political biases tend to weaponize disinformation to sway public perceptions. We speculate that the model's political bias stems from its training data. In summary, our results suggest that the current disinformation detection technique **fails to effectively detect LLM-generated disinformation**. Although it can accurately detect disinformation in \(D_{\text{human}}\) and \(D_{\text{gpt\_std}}\), it faces increased challenges in scenarios involving well-disguised disinformation, as seen in the \(D_{\text{gpt\_mix}}\) and \(D_{\text{gpt\_cot}}\) datasets. Moreover, the model is less equitable in its treatment of politically biased narratives. Addressing this bias is critical for ensuring that the disinformation detection system is fair and robust.
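For reference, a minimal sketch of the truncation-plus-classification pipeline described above might look as follows; the checkpoint name is a placeholder (the exact fine-tuned RoBERTa model from [8] is not reproduced here), and the label names returned depend on that checkpoint's configuration.

```python
import tiktoken
from transformers import pipeline

# Placeholder checkpoint: substitute the actual fine-tuned RoBERTa model from [8].
classifier = pipeline("text-classification", model="path/to/finetuned-roberta-fake-news")

encoder = tiktoken.get_encoding("cl100k_base")


def truncate(text: str, max_tokens: int = 500) -> str:
    """Tokenize with tiktoken and keep only the first max_tokens tokens."""
    tokens = encoder.encode(text)
    return encoder.decode(tokens[:max_tokens])


def predict(article: str) -> str:
    """Return the predicted label (e.g., fake/true) for one news article."""
    return classifier(truncate(article))[0]["label"]


# Example: predict(open("article.txt").read())
```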
### LLMs (RQ2):

In this subsection, we illustrate that LLMs struggle to effectively detect self-generated disinformation. We conduct experiments with ChatGPT to evaluate the proficiency of LLMs in identifying disinformation generated by LLMs. ChatGPT, with its advanced generative abilities, can produce various types of responses even when instructed with the same prompt, introducing an inherent variability. This unpredictability has a potential impact on downstream text generation and classification tasks. In this work, we tested the model with a common prompt, _"Does this news article [...] contain any misleading information?"_ The spectrum of ChatGPT's replies ranged from succinct affirmatives or negatives to elaborate multi-step explanations. Sometimes, extended explanations seemed to contradict prior shorter responses. To examine the impact of this variability in response length on disinformation detection, we harness ChatGPT, prompting it to produce concise answers or more detailed explanations under different prompts:

* _Standard (w/o explanation)_: binary response without explanation (simply _yes_ or _no_).
* _Standard (w/ explanation)_: binary response accompanied by an analytic process.

The detailed templates of our prompts are illustrated in Figure 6. We straightforwardly test whether GPT-3.5 and GPT-4 can each detect LLM-generated disinformation using these prompts.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \hline & **Total** & **Gen. News** & **Politics** & **Left News** & **Gov. News** & **U.S. News** & **M.E. News** \\ \hline **Misclassified (\%)** & 18 (0.07\%) & 0 & 10 (0.16\%) & 4 (0.09\%) & 4 (0.27\%) & 0 & 0 \\ \hline \multicolumn{10}{|c|}{\(D_{\text{gpt\_std}}\)} \\ \hline & **Total** & **Gen. News** & **Politics** & **Left News** & **Gov. News** & **U.S. News** & **M.E. News** \\ \hline **Misclassified (\%)** & 273 (1.20\%) & 54 (0.60\%) & 95 (1.49\%) & 39 (0.91\%) & 44 (2.99\%) & 20 (2.68\%) & 21 (2.8\%) \\ \hline \multicolumn{10}{|c|}{\(D_{\text{gpt\_mix}}\)} \\ \hline **Misclassified (\%)** & \multicolumn{3}{c|}{154 (15.40\%)} & **Misclassified (\%)** & \multicolumn{3}{c|}{445 (77.93\%)} \\ \hline \hline \end{tabular} \end{table} Table 2: Performance of the RoBERTa-based detection model. _Misclassified_ (\(\downarrow\)) represents statistics of fake news predicted as true news. Note that samples in \(D_{\text{gpt\_mix}}\) and \(D_{\text{gpt\_cot}}\) involve mixed or hallucinated content and are not categorized by topic.
\begin{table} \begin{tabular}{l|c|c|c} \hline \hline & **CNN** & **Fox News** & **Reuters** \\ \hline **Misclassified (\%)** & 300 (52.5\%) & 290 (50.8\%) & 379 (66.4\%) \\ \hline \hline \end{tabular} \end{table} Table 3: Performance of the RoBERTa-based detection model on \(D_{\text{gpt\_cot}}\). _Misclassified_ (\(\downarrow\)) represents statistics of fake news predicted as true news.

Our results, presented in Figure 7, yield two important observations. Firstly, GPT-4 performs slightly better than GPT-3.5 in identifying LLM-generated disinformation using both prompt types, hinting at the potential of newer LLM iterations in detecting disinformation. According to OpenAI [15], the post-training alignment procedure employed for GPT-4 improves its performance on factuality measures. Secondly, instructing ChatGPT to output its **analytic process** prior to making a final prediction (_yes_ or _no_) significantly improves the performance. This is presumably due to the inherent nature of generative LLMs, which predict subsequent tokens based on existing sequences [18], thereby enhancing the prediction of the final decision. In addition, although ChatGPT shows moderate proficiency in detecting disinformation using _standard prompts (w/ explanation)_ in a zero-shot manner, it generally performs worse than the fine-tuned RoBERTa model. In conclusion, even with the advancements in GPT-4, it is evident that current **LLMs still face challenges in effectively detecting LLM-generated disinformation**.

### Proposed Solution (RQ3)

In this subsection, we introduce a novel approach to detect LLM-generated disinformation, specifically targeting samples within \(D_{\text{gpt\_mix}}\) and \(D_{\text{gpt\_cot}}\). Our prior experiments have highlighted a unique challenge in detecting "advanced" disinformation -- characterized by a blend of genuine information and misleading content (see Section 4.1). To address this, it is vital to guide the LLM to systematically identify and fact-check key content elements. We also observed a notable improvement in ChatGPT's detection accuracy when allowing the model to detail its analytic process (see Section 4.2). Leveraging these insights, we craft a specialized chain-of-thought prompt for disinformation detection, as illustrated in Figure 8. This figure demonstrates the depth of analysis that the CoT prompt is designed to invoke, guiding ChatGPT step by step while seeking coherence and transparency in its reasoning process. Such a structured prompt template can be instrumental in dissecting complex problems, particularly in _False Context_ disinformation detection, which demands a multi-faceted understanding and evaluation. Similar to Section 4.2, we systematically evaluate the performance of GPT-3.5 and GPT-4 on LLM-generated disinformation detection. We conduct an ablation study to assess the impact of each contextual element on the detection performance; an illustrative sketch of querying ChatGPT with such a CoT detection prompt is given after the figure captions below.

* _CoT (w/o person)_: ablating characters in steps 1, 3.
* _CoT (w/o place)_: ablating place names in steps 1, 3.
* _CoT (w/o time)_: ablating time stamps in steps 1, 3.
* _CoT (w/o event)_: ablating key events in steps 1, 2, 3.
* _CoT (all_binary)_: output "yes" or "no" in step 4.
* _CoT (all_scale)_: output on a scale of 1 to 100 in step 4.

In a modified prompt template, step 4 is updated to _"..., detail your analytic process and provide a confidence score ranging from 1 to 100."_ for _CoT (all_scale)_.

Figure 6: Prompt template for detecting disinformation.

Figure 7: Comparison of misclassification rate (\(\downarrow\)) of GPT-3.5 and 4. _w/o_ and _w/_ indicate that the LLM responds _without_ and _with_ the analytic process, respectively.
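The snippet below is a minimal sketch of how such a chain-of-thought detection prompt could be sent to ChatGPT through the OpenAI Python client (openai>=1.0). The prompt wording merely paraphrases the four steps described above and is not the authors' exact template from Figure 8; the model name is one of several that could be used.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Paraphrase of the four CoT steps described in the text, not the exact Figure 8 prompt.
COT_DETECTION_PROMPT = """You are a fact-checking assistant.
Step 1: List the main characters (who), places (where), time stamps (when),
        and key events (what, how, why) in the article below.
Step 2: Check whether each extracted element is consistent with known facts.
Step 3: Point out any element that appears fabricated or placed in a false context.
Step 4: Detail your analytic process, then answer "yes" or "no":
        does the article contain misleading information?

Article:
{article}
"""


def detect(article: str, model: str = "gpt-4") -> str:
    """Query ChatGPT with a chain-of-thought style detection prompt and
    return the raw response text; the final yes/no is parsed downstream."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": COT_DETECTION_PROMPT.format(article=article)}],
        temperature=0,
    )
    return response.choices[0].message.content
```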
Table 4 demonstrates ChatGPT's performance across various CoT prompts and datasets. Notably, GPT-4 consistently outperforms GPT-3.5 across all configurations. The misclassification rates for "GPT-4 (all_scale)" are recorded at 4.7%, 11.9%, and 22.2% for \(D_{\text{gpt\_std}}\), \(D_{\text{gpt\_mix}}\), and \(D_{\text{gpt\_cot}}\), respectively. The _event_ and _time_ elements are critical to model performance. Interestingly, "GPT-4 (w/o person)" and "GPT-4 (w/o place)" produce relatively good results on \(D_{\text{gpt\_cot}}\). We speculate this could be attributed to the retention of the original _person_ and _place_ information in our LLM-generated disinformation. This ablation study provides a deeper understanding of the importance of contextual elements for disinformation detection, suggesting that advanced prompts paired with LLMs hold the potential to effectively counter LLM-generated disinformation.

## 5 Conclusion

In this work, we provide a comprehensive examination of detecting LLM-generated disinformation. Utilizing ChatGPT, we curate three distinct LLM-generated disinformation datasets. Our findings reveal that existing detection techniques, including LLMs, struggle to consistently identify the collected disinformation. To address this challenge, we introduce advanced prompts designed to guide LLMs in detecting such disinformation. Through empirical evaluations, our methods present a significant improvement in detecting LLM-generated disinformation, a claim further substantiated by our ablation studies highlighting the significance of contextual elements. Looking forward, investigating other types of LLM-generated disinformation, such as False Connection and Manipulated Content, offers a promising direction. Furthermore, emergent advanced prompting methods, such as Chain-of-Thought Self-Consistency, present potential methodologies to further facilitate the detection of LLM-generated disinformation.
The advent of generative Large Language Models (LLMs) such as ChatGPT has driven transformative advances across a wide range of fields. Alongside these advances, however, potential threats have also been introduced. One critical concern is the misuse of LLMs by disinformation spreaders, who can use these models to generate highly persuasive yet misleading content that challenges disinformation detection systems. This work addresses this problem by focusing on three research questions: (1) Can current disinformation detection techniques reliably detect LLM-generated disinformation? (2) If traditional techniques prove less effective, can LLMs themselves be leveraged as a robust defense? (3) If both of these strategies fall short, what novel approaches can be proposed to counter this growing threat effectively? A holistic exploration of the formation and detection of disinformation is conducted to foster this line of research.
2309.06185
Dynamics and spreading speeds of a nonlocal diffusion model with advection and free boundaries
In this paper, we investigate a Fisher-KPP nonlocal diffusion model incorporating the effect of advection and free boundaries, aiming to explore the propagation dynamics of the nonlocal diffusion-advection model. Considering the effects of the advection, the existence, uniqueness, and regularity of the global solution are obtained. We introduce the principal eigenvalue of the nonlocal operator with the advection term and discuss the asymptotic properties influencing the long-time behaviors of the solution for this model. Moreover, we give several sufficient conditions determining the occurrence of spreading or vanishing and obtain the spreading-vanishing dichotomy. Most importantly, applying the semi-wave solution and constructing the upper and lower solutions, we give an explicit description of the finite asymptotic spreading speeds of the double free boundaries under the effects of nonlocal diffusion and advection, compared with the corresponding problem without an advection term.
Chengcheng Cheng
2023-09-12T12:53:18
http://arxiv.org/abs/2309.06185v1
# Dynamics and spreading speeds of a nonlocal diffusion model with advection and free boundaries

###### Abstract

In this paper, we investigate a Fisher-KPP nonlocal diffusion model incorporating the effect of advection and free boundaries, aiming to explore the propagation dynamics of the nonlocal diffusion-advection model. Considering the effects of the advection, the existence, uniqueness, and regularity of the global solution are obtained. We introduce the principal eigenvalue of the nonlocal operator with the advection term and discuss the asymptotic properties influencing the long-time behaviors of the solution for this model. Moreover, we give several sufficient conditions determining the occurrence of spreading or vanishing and obtain the spreading-vanishing dichotomy. Most importantly, applying the semi-wave solution and constructing the upper and lower solutions, we give an explicit description of the finite asymptotic spreading speeds of the double free boundaries under the effects of nonlocal diffusion and advection, compared with the corresponding problem without an advection term.

keywords: Nonlocal diffusion, Advection, Free boundary, Principal eigenvalue, Asymptotic behavior, Spreading speed Msc: [2010] 35R35, 35B40, 35R09, 35K57

## 1 Introduction

In the past years, the nonlocal diffusion equation has been recognized to better describe the long-distance dispersal of species and the propagation of epidemics; see, e.g., [1; 2; 3; 4]. Research on nonlocal problems has attracted widespread attention. Berestycki et al. [5] considered the Fisher-KPP equation with a nonlocal saturation effect and mainly studied the existence of the steady state and the traveling waves. Later, the persistence criteria for populations with nonlocal diffusion were analyzed in [6]. In the natural environment, the migration of species and the spreading of epidemics usually change with time. The free boundary used to describe the migration and spreading fronts is therefore more realistic for dynamical studies. Investigations of nonlocal diffusion models with free boundaries have seen great developments (see [7; 8; 9; 10; 11; 12; 13; 14] and the references therein). Advection movements (e.g., the wind direction and the water flow) play a significant role in epidemic dispersal [15; 16; 17; 18; 19] and species survival in river environments [20; 21; 22; 23]. In particular, Maidana and Yang found that the West Nile virus spread from New York to California as a consequence of the diffusion and advection movements of birds [24]. However, few studies have formally explored the important effects of advection on nonlocal diffusion problems over the past decades. It is therefore increasingly reasonable to introduce advection when studying the propagation dynamics of the nonlocal diffusion model. As is well known, the spreading speed is one of the main concerns in research on epidemic propagation.
The classical work on studying the spreading speed of the reaction-diffusion equation is that Fisher [25] and Kolmogoroff, Petrovsky, and Piscounoff [26] in 1937 considered the following reaction-diffusion equation \[\begin{cases}u_{t}=u_{xx}+f(u),&t>0,\ x\in\mathbb{R},\\ u(0,x)=u_{0}(x),&x\in\mathbb{R}.\end{cases} \tag{1.1}\] They proved the existence of minimal wave speed \(c_{0}^{*}>0\) such that (1.1) possessed a traveling wave solution \[u(x,t)=\omega(x-ct)\text{ if and only if }c>c_{0}^{*}.\] Weinberger, Lewis, and their collaborators investigated the traveling waves and spreading speeds of cooperative models [27, 28] and competition models [29] in a series of meaningful works. Recently, Liang and Zhou [30] discussed the spreading speed of the positive and negative directions for local KPP equations with advection in an almost periodic environment. For the nonlocal Fisher-KPP model with free boundaries, Du et al. in [31] first investigated the boundary spreading speeds by applying the semi-wave and traveling wave and they found the threshold condition on the kernel function which determines when the spreading speed is finite. A more interesting question that attracts us is the difference in the asymptotic spreading speed between the leftward front and the rightward front when the spreading occurs. Many of these previous studies have made great progress, whereas the dynamics and spreading speeds for the nonlocal diffusion problem with the advection and free boundaries have not been systematically explored yet. In this paper, we first investigate the propagation dynamics and spreading speeds of the following nonlocal diffusion problem with the advection term and free boundaries. \[\left\{\begin{array}{ll}u_{t}=d\int_{g(t)}^{h(t)}J\left(x-y\right)u\left(t,y \right)\mathrm{d}y-d\ u-\nu u_{x}+f(t,x,u),&t>0,\ g\left(t\right)<x<h\left(t \right),\\ h^{\prime}\left(t\right)=\mu\int_{g(t)}^{h(t)}\int_{h(t)}^{\infty}J\left(x-y \right)u\left(t,x\right)\mathrm{d}y\mathrm{d}x,&t\geq 0,\\ g^{\prime}\left(t\right)=-\mu\int_{g(t)}^{h(t)}\int_{-\infty}^{g(t)}J\left(x- y\right)u\left(t,x\right)\mathrm{d}y\mathrm{d}x,&t\geq 0,\\ u\left(t,g\left(t\right)\right)=u\left(t,h\left(t\right)\right)=0,&t\geq 0,\\ u\left(0,x\right)=u_{0}\left(x\right),&x\in[-h_{0},h_{0}],\\ h\left(0\right)=h_{0},\ g\left(0\right)=-h_{0}.\end{array}\right. \tag{1.2}\] The model (1.2) can be seen as a nonlocal extension for the diffusion-advection model with free boundaries in [15]. Here, \(u(t,x)\) represents the population density; \(d\) is the diffusion rate; \(\nu\) represents the advection rate; \(\mu\) denotes the boundary expanding ability. \(g(t)\) and \(h(t)\) denote the leftward spreading front and rightward spreading front, respectively. \(\int_{g(t)}^{h(t)}\int_{-\infty}^{g(t)}J\left(x-y\right)u\left(t,x\right) \mathrm{d}y\mathrm{d}x\) and \(\int_{g(t)}^{h(t)}\int_{h(t)}^{\infty}J\left(x-y\right)u\left(t,x\right) \mathrm{d}y\mathrm{d}x\) are the outward flux of the double fronts. We always assume that \(u(t,x)=0,\text{ for }x\notin[g(t),h(t)]\), and the boundary expanding ratio is proportional to the outward flux. Further, assume that the initial function \(u_{0}(x)\) satisfies \[u_{0}(x)\in C^{1}\left(\left[-h_{0},h_{0}\right]\right),\ u_{0}\left(-h_{0} \right)=u_{0}\left(h_{0}\right)=0,\ u_{0}(x)>0,\ x\in\left(-h_{0},h_{0}\right). \tag{1.3}\] The kernel function \(J(x):\mathbb{R}\rightarrow\mathbb{R}\) satisfies the following conditions. 
\(\left(\mathbf{J}\right)\)\(J\in C^{1}(\mathbb{R})\cap L^{\infty}(\mathbb{R})\), \(J\geq 0\), \(J(x)=J(-x)\), \(J(0)>0\), and \(\int_{\mathbb{R}}J(x)dx=1\). Assumptions on the reaction term \(f:\mathbb{R}^{+}\times\mathbb{R}\times\mathbb{R}^{+}\rightarrow\mathbb{R}\). \(\left(\mathbf{f1}\right)\)\(f(t,x,0)=0\) and \(f(t,x,u)\) is locally Lipschitz continuous in \(u\in\mathbb{R}^{+}\). That is, for any \(U^{*}>0\), there is a constant \(k=k(U^{*})>0\) such that \(\left|f\left(t,x,u_{1}\right)-f\left(t,x,u_{2}\right)\right|\leq k\left|u_{1} -u_{2}\right|\text{ for }u_{1},u_{2}\in[0,U^{*}]\). \(\left(\mathbf{f2}\right)\) There is \(k_{0}>0\) such that \(f(t,x,u)<0\) for \(u\geq k_{0}\). \(\left(\mathbf{f3}\right)\) For any given \(\tau\), \(l\), \(u^{*}>0\), there is a constant \(\tilde{k}\left(\tau,l,u^{*}\right)\) and \(\alpha\in(0,1)\) such that \[\left\|f(\cdot,x,u)\right\|_{C^{\alpha}([0,\tau])}\leq\tilde{k}\left(\tau,l,u ^{*}\right)\] for \(x\in[-l,l],\ u\in[0,u^{*}]\). Assume that \(f\) is of Fisher-KPP type as exploring the long-time asymptotic behaviors, that is, the following additional conditions for \(f\) hold. \(\left(\mathbf{f4}\right)\)\(f=f(u)\in C^{1}\) independent of \((t,x)\). \(f(0)=f(1)=0<f(x)\), for \(x\in(0,1)\). \( f^{\prime}(0):=f_{0}>0>f^{\prime}(1)\), and \(\dfrac{f(u)}{u}\) is nonincreasing in \(u>0\). Additionally, considering the small advection rate and the spreading propagation criterion in Section 5, we always assume that \[0<\nu<\tilde{c}, \tag{1.4}\] where \(\tilde{c}\) is exactly the minimal spreading speed of the traveling wave solution of the following equation \[\begin{cases}u_{t}=d\int_{\mathbb{R}}J\left(x-y\right)u(t,y)\mathrm{d}y-d\ u(t,x)+f(t,x,u),&t>0,\ x\in\mathbb{R},\\ u(0,x)=u_{0}(x),&x\in\mathbb{R}.\end{cases} \tag{1.5}\] In this paper, we mainly investigate the propagation dynamics and the spreading speed of a nonlocal diffusion model (1.2) with the advection and free boundaries which are firstly systematically discussed. The existence, uniqueness, and regularity, the long-time dynamical behavior, and the asymptotic spreading speeds for (1.2) are investigated. In general, the solution \(u(t,x)\) for a nonlocal model may not be differentiable in \(x\). In our model, considering the effect of the advection described by a gradient term, we can obtain the regularity of the solution \(u(t,x)\) in \(x\). Further, we define the corresponding principal eigenvalue with the advection term and show that the advection can influence the spreading or vanishing by affecting the value of the principal eigenvalue. Especially, the spreading speed plays an important role in predicting the epidemic propagation speed and scale. Under the effects of advection and nonlocal diffusion, we construct sharp estimates of the spreading speeds of the double free boundaries. We show that the leftward front and the rightward front move at different spreading speeds compared with the non-advection case. The remainder of the paper is organized as follows. In Section 2, based on the contraction mapping theorem and the associated Maximum principle, we prove that (1.2) admits a unique global solution defined for all \(t>0\). In Section 3, we define the principal eigenvalue associated with the corresponding nonlocal operators and then obtain the asymptotic properties. 
In Section 4, under the conditions \(\left(\mathbf{J}\right)\) and \(\left(\mathbf{f1}\right)-\left(\mathbf{f4}\right)\), we study the long-time dynamical behavior of the solution to (1.2), and obtain the spreading-vanishing dichotomy regimes. In Section 5, we investigate the effects of the advection on the spreading speeds and get sharp estimates by constructing the upper solution and the lower solution and applying the corresponding semi-wave solution. ## 2 Existence and uniqueness For given \(h_{0}>0\), and any \(0<T<\infty\), denote \[\begin{split}&\mathscr{D}_{h_{0}}:=[0,T]\times[-h_{0},h_{0}]\,,\\ &\mathbb{H}_{h_{0},T}:=\left\{h\in C([0,T])\mid h(0)=h_{0},\inf_{ 0\leq t_{1}<t_{2}\leq T}\frac{h\left(t_{2}\right)-h\left(t_{1}\right)}{t_{2}- t_{1}}>0\right\},\\ &\mathbb{G}_{h_{0},T}:=\left\{g\in C([0,T])\mid-g\in\mathbb{H}_{ h_{0},T}\right\},\\ &\mathcal{D}_{T}:=D_{g,h}^{T}=\left\{(t,x)\in\mathbb{R}^{2}\mid 0 <t\leq T,\ g(t)<x<h(t)\right\}.\end{split} \tag{2.1}\] Now we present the existence and uniqueness of the solution for (1.2) in this section. **Theorem 2.1**.: _Assume that \((\mathbf{J})\) and \((\mathbf{f1})-(\mathbf{f3})\) hold, for any initial function satisfies (1.3) and for \(T\in(0,\infty)\), the problem (1.2) admits a unique solution_ \[(u,g,h)\in C^{1+\alpha,1}(\overline{D}_{g,h}^{T})\times C^{1+\alpha}([0,T]) \times C^{1+\alpha}([0,T]).\] We first show the following Maximum principle, which plays a vital part in proving the positivity of \(u(t,x)\). **Lemma 2.1** (Maximum principle).: _Assume that \((\mathbf{J})\) holds, and \(g\in\mathbb{G}_{h_{0},T}\), \(h\in\mathbb{H}_{h_{0},T}\) for some \(h_{0},\,T>0\). Suppose that \(u_{x}\), \(u_{t}\in C(\overline{D}_{g,h}^{T})\) and \(c(t,x)\in L^{\infty}(D_{g,h}^{T})\), satisfy that_ \[\begin{cases}u_{t}(t,x)\geq d\int_{g(t)}^{h(t)}J\left(x-y\right)u(t,y)\mathrm{ d}y-d\ u-\nu u_{x}+c(t,x)u,&t\in(0,T],\ x\in(g(t),h(t)),\\ u(t,g(t))\geq 0,\ u(t,h(t))\geq 0,&t>0,\\ u(0,x)\geq 0,&x\in[-h_{0},h_{0}],\end{cases} \tag{2.2}\] _Then \(u(t,x)\geq 0\) for all \(0\leq t\leq T\) and \(x\in[g(t),h(t)]\). Moreover, if \(u(0,x)\not\equiv 0\) in \([-h_{0},h_{0}]\), it follows that \(u(t,x)>0\) in \(\mathcal{D}_{T}\)._ Proof.: Let \(w(t,x)=e^{-\xi t}u(t,x)\) with \(\xi>\|c(t,x)\|_{L^{\infty}(\mathcal{D}_{T})}\). Then \[\begin{split}& w_{t}(t,x)+\nu w_{x}(t,x)-d\int_{g(t)}^{h(t)}J \left(x-y\right)w(t,y)\mathrm{d}y+dw(t,x)\\ &\geq-\xi w(t,x)+c(t,x)w(t,x),\ \text{for}\ (t,x)\in\mathcal{D}_{T}. \end{split} \tag{2.3}\] Now we wish to prove that \(w\geq 0\) in \(\mathcal{D}_{T}\). On the contrary, for \(0<T^{*}\leq T\), assume that \(\min_{(t,x)\in\overline{\mathcal{D}}_{T^{*}}}w(t,x)<0\). According to (2.2), \(w\geq 0\) on the parabolic boundary of \(\mathcal{D}_{T}\), and then there exists \((t^{*},x^{*})\in\mathcal{D}_{T^{*}}\) such that \(w\left(t^{*},x^{*}\right)=\min_{(t,x)\in\overline{\mathcal{D}}_{T^{*}}}w(t,x)\). Thus, \[\begin{split}& 0\geq w_{t}(t^{*},x^{*})+\nu w_{x}(t^{*},x^{*})-d \int_{g(t^{*})}^{h(t^{*})}J(x^{*}-y)w(t^{*},y)\mathrm{d}y+dw(t^{*},x^{*})\\ &\geq-\xi w\left(t^{*},x^{*}\right)+c(t^{*},x^{*})w\left(t^{*},x^ {*}\right),\\ &>0,\ \text{for}\ (t,x)\in\mathcal{D}_{T^{*}},\end{split} \tag{2.4}\] which yields a contradiction. It follows that \(w(t,x)\geq 0\) and thus \(u(t,x)\geq 0\) for all \((t,x)\in\mathcal{D}_{T^{*}}\). 
If \(T^{*}=T\), then \(u(t,x)\geq 0\) in \(\mathcal{D}_{T}\); while if \(T^{*}<T\), we can finitely repeat this process with \(u_{0}(x)\) replaced by \(u\left(T_{*},x\right)\) and \((0,T]\) replaced by \(\left(T^{*},T\right]\) to complete the proof. If \(u(0,x)\not\equiv 0,\ \text{in}\ \left[-h_{0},h_{0}\right],\) the remaining only needs to show that \(w>0\) in \(\mathcal{D}_{T}\). Motivated by the arguments in proving Lemma 2.2 [32], it is not difficult to complete the proof by contradiction. The following result can be proved by the arguments in the proof of Maximum principle. **Lemma 2.2** (Comparison criterion).: _Assume that \(\left(\mathbf{J}\right)\) holds. For any \(h_{0}>0\), \(T>0\), suppose that \(u_{x}(t,x)\), \(u_{t}(t,x)\in C(\mathscr{D}_{h_{0}})\), and \(c(t,x)\in L^{\infty}\left(\mathscr{D}_{h_{0}}\right)\), satisfy that_ \[\begin{cases}u_{t}(t,x)\geq d\int_{-h_{0}}^{h_{0}}J\left(x-y\right)u(t,y) \mathrm{d}y-d\ u-\nu u_{x}+c(t,x)u,&t\in(0,T],\ x\in\left[-h_{0},h_{0}\right], \\ u(0,x)\geq 0,&x\in\left[-h_{0},h_{0}\right].\end{cases}\] _Then \(u(t,x)\geq 0\) for all \((t,x)\in\mathscr{D}_{h_{0}}\). Moreover, if \(u(0,x)\not\equiv 0\) in \([-h_{0},h_{0}]\), then \(u(t,x)>0\) in \((0,T]\times[-h_{0},h_{0}]\)._ According to Lemmas 2.1 and 2.2, we have the following result. **Lemma 2.3** (Comparison principle).: _Assume that \(\left(\mathbf{J}\right)\) holds. Let \(T>0,\ \bar{h},\,\bar{g}\in C^{1}([0,T]).\) Suppose that \(\bar{u}_{x},\)\(\bar{u}_{t}\in C(D^{T}_{\bar{g},\bar{h}})\) and satisfy_ \[\begin{cases}\bar{u}_{t}\geq d\int_{\bar{g}(t)}^{\bar{h}(t)}J\left(x-y\right) \bar{u}(t,y)\mathrm{d}y-d\ \bar{u}-\nu\bar{u}_{x}+f(\bar{u}),&(t,x)\in D^{T}_{\bar{g},\bar{h}},\\ \bar{h}^{\prime}(t)\geq\mu\int_{\bar{g}(t)}^{\bar{h}(t)}\int_{h(t)}^{\infty}J \left(x-y\right)\bar{u}(t,x)\mathrm{d}y\mathrm{d}x,&0<t\leq T,\\ \bar{g}^{\prime}(t)\leq\mu\int_{\bar{g}(t)}^{\bar{h}(t)}\int_{-\infty}^{\bar{g }(t)}J\left(x-y\right)\bar{u}(t,x)\mathrm{d}y\mathrm{d}x,&0<t\leq T,\\ \bar{u}(t,\bar{g}(t))\geq 0,\ \bar{u}(t,\bar{h}(t))\geq 0,&0<t\leq T,\\ \bar{u}(0,x)\geq u_{0}(x),&x\in[-h_{0},h_{0}],\\ \bar{g}(0)\leq-h_{0},\ \bar{h}(0)\geq h_{0}.\end{cases} \tag{2.5}\] _Let \((u,g,h)\) be the unique solution of \(\left(\ref{eq:1}\right)\), we have_ \[u\leq\bar{u},\ g\geq\bar{g},\ h\leq\bar{h}\ \ \text{in}\ D^{T}_{g,h}.\] **Remark 2.1**.: \((\bar{u},\bar{g},\bar{h})\) _is called as the upper solution of \(\left(\ref{eq:1}\right)\). Clearly, the lower solution \(\left(\underline{u},\underline{g},\underline{h}\right)\) of \(\left(\ref{eq:1}\right)\) can be similarly defined by reversing the inequalities of \(\left(\ref{eq:2}\right)\)._ Now we complete the proof of Theorem 2.1. _The proof of Theorem 2.1._ Let \[y=\frac{2x}{h(t)-g(t)}-\frac{h(t)+g(t)}{h(t)-g(t)},\ w(t,y)=u(t,x), \tag{2.6}\] which transforms \(x\in(g(t),h(t))\) into \(y\in(-1,1)\), then \[\begin{split}\frac{\partial y}{\partial x}&=\frac{2 }{h(t)-g(t)}\\ &:=\mathscr{A}(t,g(t),h(t)),\\ \frac{\partial y}{\partial t}&=-\frac{y\left(h^{\prime}(t )-g^{\prime}(t)\right)+\left(h^{\prime}(t)+g^{\prime}(t)\right)}{h(t)-g(t)}\\ &:=\mathscr{B}\left(y,g(t),g^{\prime}(t),h(t),h^{\prime}(t)\right). 
\end{split} \tag{2.7}\] Denote \[h_{*}:=\mu\int_{-1}^{1}\int_{1}^{\infty}J(h_{0}(y-z))u_{0}(h_{0}y)h_{0}^{2}{\rm d}z {\rm d}y,\] \[g_{*}:=-\mu\int_{-1}^{1}\int_{-\infty}^{-1}J(h_{0}(y-z))u_{0}(h_{0}y)h_{0}^{2}{ \rm d}z{\rm d}y.\] For \(0<T<\min\left\{1,\dfrac{h_{0}}{4(1+h_{*})},\dfrac{h_{0}}{4(1-g_{*})}\right\}\), set \(\Delta_{T}:=[0,T]\times[-1,1]\), \[\mathcal{D}_{h,T}:=\left\{h\in C^{1}([0,T])\mid h(0)=h_{0},\ h^{ \prime}(0)=h_{*},\ \left\|h^{\prime}-h_{*}\right\|_{C([0,T])}\leq 1\right\},\] \[\mathcal{D}_{g,T}:=\left\{g\in C^{1}([0,T])\mid g(0)=-h_{0},\ g^{ \prime}(0)=g_{*},\ \left\|g^{\prime}-g_{*}\right\|_{C([0,T])}\leq 1\right\}.\] Given \(h_{i}\in\mathcal{D}_{h,T}\), \(g_{i}\in\mathcal{D}_{g,T},\ i=1,2\), for \(h_{i}(0)=h_{0}\), \(g_{i}(0)=-h_{0}\), we can obtain that \[\left\|h_{1}-h_{2}\right\|_{C([0,T])}\leq T\left\|h_{1}^{\prime}-h_{2}^{ \prime}\right\|_{C([0,T])}, \tag{2.8}\] \[\left\|g_{1}-g_{2}\right\|_{C([0,T])}\leq T\left\|g_{1}^{\prime}- g_{2}^{\prime}\right\|_{C([0,T])}.\] For \(h\in\mathcal{D}_{h,T},\ g\in\mathcal{D}_{g,T}\), it follows \[\left\|h-h_{0}\right\|\leq T(1+h_{*})\leq\dfrac{h_{0}}{4},\ \ \text{and}\ \left\|g+h_{0}\right\|\leq T(1-g_{*})\leq\dfrac{h_{0}}{4},\] then the translations \((\ref{eq:h_1})-(\ref{eq:h_1})\) are well defined. Further, one can see that \(w(t,y)\) satisfies \[\begin{cases}w_{t}=d\int_{-1}^{1}J(\frac{y-z}{\mathscr{A}^{\prime}})w(t,z) \frac{1}{\mathscr{A}^{\prime}}{\rm d}z-d\ w(t,y)+\left(\nu\mathscr{A}+ \mathscr{B}\right)w_{y}+f(t,y,w),&t>0,\ y\in(-1,1),\\ w(t,1)=w(t,-1)=0,&t>0,\\ w(0,y)=u_{0}\left(h_{0}y\right),&y\in[-1,1].\end{cases} \tag{2.9}\] According to the standard \(L^{p}\) theory and the Sobolev embedding theorem [34] with the arguments of Section 4 in [2], in view of \((\mathbf{J})\) and \((\mathbf{f}\mathbf{1})-(\mathbf{f}\mathbf{3})\), the equation \((\ref{eq:h_1})\) admits a unique solution \[w(t,y)\in C^{1+\alpha,1}\left(\Delta_{T}\right)\ \text{and}\ \|w(t,y)\|_{C^{1+\alpha,1}( \Delta_{T})}\leq C,\] where positive constant \(C\) depends on \(h_{0},\alpha\) and \(\|u_{0}\|_{C^{1}([-h_{0},h_{0}])}\). Moreover, for the above \(w\), define \[h(t) =h_{0}+\mu\int_{0}^{t}\int_{-1}^{1}\int_{1}^{\infty}J(\frac{y-z}{ \mathscr{A}_{0}})w(\tau,y)\frac{1}{\mathscr{A}_{0}^{\prime 2}}{\rm d}z{\rm d}y{\rm d}\tau, \tag{2.10}\] \[g(t) =-h_{0}-\mu\int_{0}^{t}\int_{-1}^{1}\int_{-\infty}^{-1}J(\frac{y- z}{\mathscr{A}_{0}})w(\tau,y)\frac{1}{\mathscr{A}_{0}^{\prime 2}}{\rm d}z{\rm d}y{\rm d}\tau,\] where \(\mathscr{A}_{0}=\dfrac{2}{h(\tau)-g(\tau)},\ \tau\in[0,t]\). 
Since \(w\) depends on \(h\) and \(g\), the formula (2.10) determines a unique pair \((\tilde{h},\tilde{g})\), where \(\tilde{h}(t)=\tilde{h}(t;g,h)\) and \(\tilde{g}(t)=\tilde{g}(t;g,h)\). Then \(\tilde{h}(0)=h_{0},\ \tilde{g}(0)=-h_{0}\),

\[\begin{split}&\tilde{h}^{\prime}(t)=\mu\int_{-1}^{1}\int_{1}^{\infty}J(\frac{y-z}{\mathscr{A}})w(t,y)\frac{1}{\mathscr{A}^{2}}\mathrm{d}z\mathrm{d}y,\\ &\tilde{h}^{\prime}(0)=\mu\int_{-1}^{1}\int_{1}^{\infty}J(\frac{y-z}{\mathscr{A}})w(0,y)\frac{1}{\mathscr{A}^{2}}\mathrm{d}z\mathrm{d}y,\\ &\tilde{g}^{\prime}(t)=-\mu\int_{-1}^{1}\int_{-\infty}^{-1}J(\frac{y-z}{\mathscr{A}})w(t,y)\frac{1}{\mathscr{A}^{2}}\mathrm{d}z\mathrm{d}y,\\ &\tilde{g}^{\prime}(0)=-\mu\int_{-1}^{1}\int_{-\infty}^{-1}J(\frac{y-z}{\mathscr{A}})w(0,y)\frac{1}{\mathscr{A}^{2}}\mathrm{d}z\mathrm{d}y.\end{split} \tag{2.11}\]

Moreover, for any \(h\in\mathcal{D}_{h,T}\), \(g\in\mathcal{D}_{g,T}\), one sees that \(\tilde{h}\in C^{1+\alpha}([0,T])\) and that it depends continuously on \(w\in C^{1+\alpha,1}\left(\Delta_{T}\right)\), \(h\in\mathcal{D}_{h,T}\) and \(g\in\mathcal{D}_{g,T}\); so does \(\tilde{g}\).
Then there is \(\hat{C}>0\) such that \[\begin{split}&\tilde{h}^{\prime}\in C^{\alpha}([0,T])\text{ and }\left\|\tilde{h}^{\prime}\right\|_{C^{\alpha}([0,T])}\leq\hat{C},\\ &\tilde{g}^{\prime}\in C^{\alpha}([0,T])\text{ and }\left\|\tilde{g}^{\prime}\right\|_{C^{ \alpha}([0,T])}\leq\hat{C}.\end{split} \tag{2.12}\] Define \(\mathcal{F}:\mathcal{D}_{h,T}\times\mathcal{D}_{g,T}\longrightarrow C^{1}\left( [0,T]\right)\times C^{1}([0,T])\) by \[\mathcal{F}(h,g)=(\tilde{h},\tilde{g}).\] It can be seen that \(\mathcal{F}\) is continuous in \(\mathcal{D}_{h,T}\times\mathcal{D}_{g,T}\), and \((g,h)\) is a fixed point of \(\mathcal{F}\) if and only if \((w;h,g)\) solves (2.9) with (2.10). According to (2.11)-(2.12), it follows that \(\mathcal{F}\) is compact and satisfies \[\|\tilde{h}^{\prime}-h^{*}\|_{C([0,T])}+\|\tilde{g}^{\prime}-g^{*}\|_{C([0,T]) }\leq\left(\|h^{\prime}\|_{C^{\alpha}([0,T])}+\|g^{\prime}\|_{C^{\alpha}([0,T] )}\right)T^{\alpha}\leq\mathscr{C}T^{\alpha},\] where positive constant \(\mathscr{C}\geq 2\hat{C}\). Therefore, if \[T<\min\left\{1,\ \frac{h_{0}}{4(1+h_{*})},\ \frac{h_{0}}{4(1-g_{*})},\ \mathscr{C}^{- \frac{1}{\alpha}}\right\}, \tag{2.13}\] we can obtain that \(\mathcal{F}\) maps \(\mathcal{D}_{h,T}\times\mathcal{D}_{g,T}\) into itself. Next, we need to show that \(\mathcal{F}\) is a contraction mapping on \(\mathcal{D}_{h,T}\times\mathcal{D}_{g,T}\). For any \(w_{1}\), \(w_{2}\) defined on \(\Delta_{T}\) and satisfying (2.9), set \(\omega:=w_{1}-w_{2}\), it follows \[\begin{cases}\omega_{t}=d\int_{-1}^{1}J(\frac{y-z}{\mathscr{A}})\omega(t,z) \frac{1}{\mathscr{A}}\mathrm{d}z-d\ \omega(t,y)+\left(\nu\mathscr{A}+\mathscr{B}\right)\omega_{y}+f(t,y,w_{1})-f(t, y,w_{2}),&t>0,\ y\in(-1,1),\\ \omega(t,1)=\omega(t,-1)=0,&t>0,\\ \omega(0,y)=0,&y\in[-1,1].\end{cases}\] Using the \(L^{p}\) estimates for partial differential equations and Sobolev embedding theorem, we obtain \[\|w_{1}-w_{2}\|_{C^{1+\alpha,1}(\Delta_{T})}\leq\tilde{C}\left(\|w_{1}-w_{2}\| _{C(\Delta_{T})}+\|h_{1}-h_{2}\|_{C^{1}([0,T])}+\|g_{1}-g_{2}\|_{C^{1}([0,T])} \right), \tag{2.14}\] where \(\tilde{C}\) depends on \(\hat{C}\) and the functions \(\mathscr{A}\) and \(\mathscr{B}\) in (2.7). According to (2.11) and (2.12), it gives \[\begin{split}&\left\|\tilde{h}_{1}^{\prime}-\tilde{h}_{2}^{ \prime}\right\|_{C^{\alpha}([0,T])}\leq\mu\mathscr{C}_{*}\left\|w_{1}-w_{2}\right\| _{C(\Delta_{T})},\\ &\left\|\tilde{g}_{1}^{\prime}-\tilde{g}_{2}^{\prime}\right\|_{C^{ \alpha}([0,T])}\leq\mu\mathscr{C}_{*}\left\|w_{1}-w_{2}\right\|_{C(\Delta_{T}) },\end{split} \tag{2.15}\] where positive constant \(\mathscr{C}_{*}\) depends on \(T\) and \(\alpha\). Next, we give an estimate of \(\|w_{1}-w_{2}\|_{C(\Delta_{T})}\). 
By (**f1**) and (2.8) with (2.14), direct calculus gives \[\begin{split} w_{1}-w_{2}&=d\int_{0}^{t}\int_{-1}^{ 1}J\big{(}\frac{y-z}{\mathscr{A}}\big{)}(w_{1}-w_{2})(\tau,z)\frac{1}{\mathscr{ A}}\mathrm{d}z\mathrm{d}\tau-d\int_{0}^{t}(w_{1}-w_{2})(\tau,y)\mathrm{d}\tau\\ &+\int_{0}^{t}(\nu\mathscr{A}+\mathscr{B})(w_{1,y}-w_{2,y}) \mathrm{d}\tau+\int_{0}^{t}f(\tau,y,w_{1})-f(\tau,y,w_{2})\mathrm{d}\tau\\ &\leq l(d,\mathscr{A})T\|w_{1}-w_{2}\|_{C(\Delta_{T})}+(\nu \mathscr{A}+\mathscr{B})T\left\|w_{1,y}-w_{2,y}\right\|_{C(\Delta_{T})}+kT\|w_ {1}-w_{2}\|_{C(\Delta_{T})}\\ &\leq\mathscr{P}T\left(\|h_{1}^{\prime}-h_{2}^{\prime}\|_{C([0, T])}+\|g_{1}^{\prime}-g_{2}^{\prime}\|_{C([0,T])}\right),\end{split} \tag{2.16}\] where positive constants \(l(d,\mathscr{A})\) depends on \(d,\mathscr{A}\) and \(\mathscr{P}\) depends on \(d\), \(\nu\), \(\mathscr{A}\), \(\mathscr{B}\) and \(T\). Combining (2.15) and (2.16), we then can obtain that \[\|\tilde{h}_{1}^{\prime}-\tilde{h}_{2}^{\prime}\|_{C([0,T])}+\|\tilde{g}_{1}^{ \prime}-\tilde{g}_{2}^{\prime}\|_{C([0,T])}\leq 2\mu\mathscr{P}\mathscr{C}_{*}T \left(\|h_{1}^{\prime}-h_{2}^{\prime}\|_{C([0,T])}+\|g_{1}^{\prime}-g_{2}^{ \prime}\|_{C([0,T])}\right). \tag{2.17}\] Choose \(T\) which satisfies (2.13) such that \(2\mu\mathscr{P}\mathscr{C}_{*}T<1\), we can get that \(\mathcal{F}\) is a contraction mapping. By the contraction mapping theorem, \(\mathcal{F}\) admits a unique fixed point \((h,g)\in\mathcal{D}_{h,T}\times\mathcal{D}_{g,T}\), and problem (2.9) with (2.10) has a unique solution \((w;h,g)\). Moreover, by the Maximum principle, \[w(t,y)>0,\text{ for }t>0,y\in(-1,1),\] it follows \[h^{\prime}(t)>0\text{ and }g^{\prime}(t)<0,\text{ for }t>0. \tag{2.18}\] Therefore, the function \(u(t,x)=w\left(t,y\right)\) satisfies \[u\in C^{1+\alpha,1}(\overline{D}_{g,h}^{T}),\ u>0\text{ in }D_{g,h}^{T},\] and \((u;h,g)\) solves (1.2) with \(h,\ g\in C^{1+\alpha}([0,T])\). According to the proof of the Theorem 2.1, we immediately have the following result. **Theorem 2.2**.: _For the assumptions given by Theorem 2.1, let \((u,g,h)\) be the solution of (1.2), then_ \[h^{\prime}(t)>0\text{ and }g^{\prime}(t)<0,\text{ for }t>0. \tag{2.19}\] Moreover, we intend to complete the global existence of this nonlocal diffusion equation with the advection. **Theorem 2.3**.: _Under the assumptions of Theorem 2.1, the solution for (1.2) exists for all \(t>0\)._ Proof.: Now we prove that the unique solution of (2.9) with (2.10) defined over \(0<t\leq T\) can be uniquely extended to all \(t>0\). This extension can be done in a similar method as in Step 2 of the proof of Theorem 2.1 in [35] arguing by contradiction. Since only obvious modifications are needed, the details are omitted. ## 3 Principal eigenvalue For any \(h>0\), \(a_{0}>0\), \(\varphi\in C^{1}([-h,h])\), we define the operator \(\tilde{\mathcal{L}}:=\mathcal{L}_{(-h,h)}^{d}+a_{0}-\nu\nabla\) by \[\tilde{\mathcal{L}}[\varphi](x):=d\int_{-h}^{h}J\left(x-y\right)\varphi\left(y \right)\mathrm{d}y-d\varphi\left(x\right)-\nu\varphi^{\prime}\left(x\right)+a _{0}\varphi(x),\ x\in[-h,h].\] The principal eigenvalue of \(\mathcal{L}_{(-h,h)}^{d}+a_{0}-\nu\nabla\) is given by \[\lambda_{p}(\mathcal{L}_{(-h,h)}^{d}+a_{0}-\nu\nabla)=\inf\left\{\lambda\in \mathbb{R}\mid\left(\mathcal{L}_{(-h,h)}^{d}+a_{0}-\nu\nabla\right)\Phi\leq \lambda\Phi\ \text{in}\ [-h,h]\ \text{for some}\ \Phi\in C^{1}([-h,h]),\ \Phi>0\right\}. 
\tag{3.1}\] Especially, let \(\mathbb{Y}=\{u\in C^{1}([0,h])\mid u(0)=0,u>0\ \text{on}\ (0,h]\}\), equipped with the \(C^{1}\)-norm. Considering the following operator \(\tilde{\mathcal{L}}:\mathbb{Y}\longrightarrow C([0,h])\) defined by \[\tilde{\mathcal{L}}[u](x)=(\mathcal{L}_{(0,h)}^{d}+a_{0}-\nu\nabla)[u](x),\ x \in(0,h], \tag{3.2}\] according to the Theorem 4.1 by Li et al. in [36], the operator \(\tilde{\mathcal{L}}\) admits a real principal eigenvalue \[\lambda_{p}(\tilde{\mathcal{L}})=\sup_{0<u\in C^{1}((0,h])}\inf_{x\in(0,h]} \frac{\tilde{\mathcal{L}}[u](x)}{u(x)} \tag{3.3}\] with a positive eigenfunction \(\phi_{h}(x)\in Y.\) Moreover, the following vital properties hold. **Property 3.1**.: _Assume that \(\left(\mathbf{J}\right)\) holds, \(a_{0}>0\) and \(0<h<+\infty,\) the following properties hold\(:\)_ \((1)\ \ \lambda_{p}(\tilde{\mathcal{L}})\) _is strictly increasing and continuous in \(h\in(0,\infty);\)_ \((2)\ \lambda_{p}(\tilde{\mathcal{L}})\) _is strictly increasing in \(a_{0};\)_ \((3)\lim\limits_{h\rightarrow+\infty}\lambda_{p}(\tilde{\mathcal{L}})=a_{0};\)__ \((4)\ \limsup\limits_{h\to 0^{+}}\lambda_{p}(\tilde{\mathcal{L}})\leq a_{0}-d.\)__ Proof.: Motivated by Proposition 3.4 in [32], the property \((1)\) can be obtained. Clearly, we can get the strict monotonicity of \(\lambda_{p}(\mathcal{L}_{(-h,h)}^{d}+a_{0}-\nu\nabla)\) in \(a_{0}\) by the definition of \((3.1).\) Now we aim to prove the asymptotic properties \((3)\) and \((4).\) The proof of property \((3)\). Denote \(\mathcal{D}(h):=H^{1}\left([-h,h]\right),\) according to the variational method, \(\lambda_{p}(\tilde{\mathcal{L}})\) can be expressed as \[\begin{array}{ll}\lambda_{p}(\tilde{\mathcal{L}})&=\sup_{0\neq\psi\in \mathcal{D}(h)}\frac{d\int_{-h}^{h}\int_{-h}^{h}J\left(x-y\right)\psi(y)\psi(x )\mathrm{d}y\mathrm{d}x}{\int_{-h}^{h}\psi^{2}(x)\mathrm{d}x}-\frac{\nu\int_ {-h}^{h}\psi(x)\psi^{\prime}(x)\mathrm{d}x}{\int_{-h}^{h}\psi^{2}(x)\mathrm{d }x}-d+a_{0}\\ &=\sup_{0\neq\psi\in\mathcal{D}(h)}\frac{d\int_{-h}^{h}\int_{-h}^{h}J\left(x-y \right)\psi(y)\psi(x)\mathrm{d}y\mathrm{d}x}{\int_{-h}^{h}\psi^{2}(x) \mathrm{d}x}-\frac{\frac{\nu}{2}\left(\psi^{2}(h)-\psi^{2}(-h)\right)}{\int_{-h }^{h}\psi^{2}(x)\mathrm{d}x}-d+a_{0}\\ &\leq\frac{d\int_{-h}^{h}\psi^{2}(x)\mathrm{d}x}{\int_{-h}^{h}\psi^{2}(x) \mathrm{d}x}-\frac{\frac{\nu}{2}\left(\psi^{2}(h)-\psi^{2}(-h)\right)}{2h\psi^ {2}(\tilde{h})}-d+a_{0}\\ &\leq a_{0}+\frac{\frac{\nu}{2}\left(\psi^{2}(h)+\psi^{2}(-h)\right)}{2h\psi^ {2}(\tilde{h})}\\ &\longrightarrow a_{0},\ \text{as}\ h\rightarrow\infty,\end{array} \tag{3.4}\] where \(\tilde{h}\in(-h,h)\), then \[\limsup\limits_{h\rightarrow\infty}\lambda_{p}(\tilde{\mathcal{L}})\leq a_{0}. \tag{3.5}\] By (**J**), for any small \(\epsilon>0\), there exists \(L=L(\epsilon)>0\) such that \[\int_{-L}^{L}J\left(x\right)\mathrm{d}x>1-\epsilon.\] Taking \(\Phi\equiv 1\) as the test function in the variational characterization of \(\lambda_{p}(\tilde{\mathcal{L}})\). 
We obtain \[\lambda_{p}(\tilde{\mathcal{L}}) \geq\frac{d\int_{-h}^{h}\int_{-h}^{h}J\left(x-y\right)\mathrm{d}y \mathrm{d}x}{2h}-d+a_{0} \tag{3.6}\] \[\geq\frac{d\int_{-h+L}^{h-L}\int_{-h}^{h}J\left(x-y\right)\mathrm{ d}y\mathrm{d}x}{2h}-d+a_{0}\] \[\geq\frac{d(2h-2L)\int_{-L}^{L}J\left(s\right)\mathrm{d}s}{2h}-d +a_{0}\] \[\geq\frac{d(2h-2L)(1-\epsilon)}{2h}-d+a_{0}\] \[\to-\epsilon d+a_{0},\ \ \text{as}\ h\to\infty.\] Since small \(\epsilon>0\) is arbitrarily chosen, we can obtain \[\liminf_{h\to+\infty}\lambda_{p}(\tilde{\mathcal{L}})\geq a_{0}.\] Combining the above inequality with (3.5), it follows \[\lim_{h\to+\infty}\lambda_{p}(\tilde{\mathcal{L}})=a_{0}.\] The proof of property (4). Since \(a_{0}\) and \(\nu\) are fixed constants, \(\lambda_{p}(\mathcal{L}_{(-h,h)}^{d}+a_{0}-\nu\nabla)\) only depends on the integral interval \((-h,h)\) from the definition (3.1). Without loss of generality, denote \[\lambda_{p}(\mathcal{L}_{(-h,h)}^{d}+a_{0}-\nu\nabla):=\lambda_{p}(\mathcal{L }_{(0,\hat{h})}^{d}+a_{0}-\nu\nabla).\] According to (3.3), \(\lambda_{\hat{h}}:=\lambda_{p}(\mathcal{L}_{(0,\hat{h})}^{d}+a_{0}-\nu\nabla)\) is the principal eigenvalue with an eigenfunction \(\Phi_{\hat{h}}\) satisfying \(\Phi_{\hat{h}}(x)>0\), \(x\in(0,\hat{h}]\) and \(\Phi_{\hat{h}}(0)=0\) such that \[d\int_{0}^{\hat{h}}J\left(x-y\right)\Phi_{\hat{h}}(y)\mathrm{d}y-d\ \Phi_{\hat{h}}(x)+a_{0}\Phi_{\hat{h}}(x)-\nu\Phi_{\hat{h}}^{\prime}(x)= \lambda_{\hat{h}}\Phi_{\hat{h}},\ \text{for}\ x\in(0,\hat{h}).\] Therefore, \[\lambda_{\hat{h}}-a_{0}+d =\frac{d\int_{0}^{\hat{h}}\int_{0}^{\hat{h}}J\left(x-y\right) \Phi_{\hat{h}}(y)\Phi_{\hat{h}}(x)\mathrm{d}y\mathrm{d}x}{\int_{0}^{\hat{h}} \Phi_{\hat{h}}^{2}(x)\mathrm{d}x}-\frac{\nu\int_{0}^{\hat{h}}\Phi_{\hat{h}}(x) \Phi_{\hat{h}}^{\prime}(x)\mathrm{d}x}{\int_{0}^{\hat{h}}\Phi_{\hat{h}}^{2}(x) \mathrm{d}x} \tag{3.7}\] \[\leq\frac{d\|J\|_{\infty}\left(\int_{0}^{\hat{h}}\Phi_{\hat{h}}(x )\mathrm{d}x\right)^{2}}{\int_{0}^{\hat{h}}\Phi_{\hat{h}}^{2}(x)\mathrm{d}x}- \frac{\frac{\nu}{2}\left(\Phi_{\hat{h}}^{2}(\hat{h})-\Phi_{\hat{h}}^{2}(0) \right)}{\int_{0}^{\hat{h}}\Phi_{\hat{h}}^{2}(x)\mathrm{d}x}\] \[\leq\frac{d\|J\|_{\infty}\hat{h}\int_{0}^{\hat{h}}\Phi_{\hat{h}}^{ 2}(x)dx}{\int_{0}^{\hat{h}}\Phi_{\hat{h}}^{2}(x)\mathrm{d}x}-\frac{\frac{\nu}{ 2}\left(\Phi_{\hat{h}}^{2}(\hat{h})-\Phi_{\hat{h}}^{2}(0)\right)}{\int_{0}^{ \hat{h}}\Phi_{\hat{h}}^{2}(x)\mathrm{d}x}\] \[=d\|J\|_{\infty}\hat{h}-\frac{\frac{\nu}{2}\left(\Phi_{\hat{h}}^{ 2}(\hat{h})-\Phi_{\hat{h}}^{2}(0)\right)}{\int_{0}^{\hat{h}}\Phi_{\hat{h}}^{2}(x )\mathrm{d}x}.\] Since \[\frac{\frac{\nu}{2}\left(\Phi_{\hat{h}}^{2}(\hat{h})-\Phi_{\hat{h}}^{2}(0) \right)}{\int_{0}^{\hat{h}}\Phi_{\hat{h}}^{2}(x)\mathrm{d}x}=\frac{\frac{\nu}{ 2}\Phi_{\hat{h}}^{2}(\hat{h})}{\hat{h}\Phi_{\hat{h}}^{2}(\hat{h}_{0})}, \tag{3.8}\] where \(\hat{h}_{0}\in\left(0,\hat{h}\right)\). According to the choice of \(\Phi_{\hat{h}}(x)\) and \(\hat{h}_{0}\), we can see that \(\Phi_{\hat{h}}(\hat{h})\geq\Phi_{\hat{h}}(\hat{h}_{0})>0\) as \(\hat{h}\to 0\). Then the right side of (3.8) is greater than \(0\). For \(\nu>0\), it follows \(\lambda_{\hat{h}}<a_{0}-d.\) Thus, \[\limsup_{h\to 0^{+}}\lambda_{p}(\mathcal{L}_{(-h,h)}^{d}+a_{0}-\nu\nabla)\leq a _{0}-d.\] **Remark 3.1**.: _As we can see that when the advection rate \(\nu>0,\ \lambda_{p}(\mathcal{L}_{(-h,h)}^{d}+a_{0}-\nu\nabla)\) can be strictly less than \(a_{0}-d\) as \(h\) is taken small enough. 
This implies the nonnegligible effect of the advection on the principal eigenvalue compared with the case without the advection._ ## 4 Long-time asymptotic behavior Given (2.19), according to the strict monotonicity of \(g(t)\) and \(h(t)\) in \(t\), denote \[g_{\infty}:=\lim_{t\to\infty}g(t),\ \ h_{\infty}:=\lim_{t\to\infty}h(t),\] then \(g_{\infty}\in\left[-\infty,-h_{0}\right),\ \text{and}\ h_{\infty}\in\left(h_{0}, \infty\right].\) **Definition 4.1**.: _The vanishing happens if_ \[h_{\infty}-g_{\infty}<\infty\ \text{and}\ \limsup_{t\to\infty}u\left(t,x \right)=0;\] _the spreading happens if_ \[h_{\infty}-g_{\infty}=\infty\ \text{and}\ \liminf_{t\to\infty}u(t,x)>0.\] For \(-\infty<h_{1}<h_{2}<+\infty\), denote \(\mathscr{D}:=\left(h_{1},h_{2}\right).\) Considering the following problem over \(\mathscr{D}\) \[\begin{cases}u_{t}=d\int_{h_{1}}^{h_{2}}J\left(x-y\right)u(t,y)\mathrm{d}y-d \ u(t,x)-\nu u_{x}+f(u),&t>0,\ x\in\mathscr{D},\\ u(0,x)=u_{0}(x),&x\in\mathscr{D},\end{cases} \tag{4.1}\] According to the arguments in [34] and [37], it implies **Proposition 4.1**.: _Suppose that \(\left(\mathbf{J}\right)\) and \(\left(\mathbf{f1}\right)-\left(\mathbf{f4}\right)\) hold. The problem (4.1) has a unique positive steady state \(u_{\mathscr{D}}\) in \(C^{1}(\mathscr{D})\) if and only if_ \[\lambda_{p}\left(\mathcal{L}_{\mathscr{D}}^{d}+f_{0}-\nu\nabla\right)>0.\] _Moreover, for \(u_{0}(x)\in C^{1}(\bar{\mathscr{D}})\) and \(u_{0}\) nonnegative and not always equal to \(0\), the problem (4.1) admits a unique solution \(u(t,x)\) for all \(t>0\), and \(u(t,x)\to u_{\mathscr{D}}\) in \(C(\bar{\mathscr{D}})\) as \(t\to+\infty\) when \(\lambda_{p}\left(\mathcal{L}_{\mathscr{D}}^{d}+f_{0}-\nu\nabla\right)>0;\) when \(\lambda_{p}\left(\mathcal{L}_{\mathscr{D}}^{d}+f_{0}-\nu\nabla\right)\leq 0,\)\(u(t,x)\to 0\) in \(C(\bar{\mathscr{D}})\) as \(t\to+\infty.\)_ **Theorem 4.1**.: _Assume that \(\left(\mathbf{J}\right)\) and \(\left(\mathbf{f1}\right)-\left(\mathbf{f4}\right)\) hold. If \(h_{\infty}-g_{\infty}<\infty\), then_ \[\lambda_{p}(\mathcal{L}_{(g_{\infty},h_{\infty})}^{d}+f_{0}-\nu\nabla)\leq 0 \tag{4.2}\] _and \(\lim_{t\to\infty}u(t,x)=0\) uniformly in \([g(t),h(t)].\)_ Proof.: Now we divide our proof into two steps. Step 1: We aim to show \[\lambda_{p}(\mathcal{L}_{(g_{\infty},h_{\infty})}^{d}+f_{0}-\nu\nabla)\leq 0.\] Suppose that \(\lambda_{p}(\mathcal{L}_{(g_{\infty},h_{\infty})}^{d}+f_{0}-\nu\nabla)>0\), according to Property 3.1, there is small \(\epsilon_{0}>0\) such that \[\lambda_{p}(\mathcal{L}_{(g_{\infty}+\epsilon,h_{\infty}-\epsilon)}^{d}+f_{0} -\nu\nabla)>0 \tag{4.3}\] for \(\epsilon\in(0,\epsilon_{0})\). Further, for the above \(\epsilon\), there is \(T_{\epsilon}>0\) such that \[h(t)>h_{\infty}-\epsilon,\ g(t)<g_{\infty}+\epsilon,\ \text{for}\ t>T_{\epsilon}.\] Considering the following problem \[\begin{cases}w_{t}=d\int_{g_{\infty}+\epsilon}^{h_{\infty}-\epsilon}J\left(x- y\right)w(t,y)\mathrm{d}y-d\ w-\nu w_{x}+f(w),&t>T_{\epsilon},\ x\in[g_{\infty}+\epsilon,h_{\infty}-\epsilon]\,,\\ w\left(T_{\epsilon},x\right)=u\left(T_{\epsilon},x\right),&x\in[g_{\infty}+ \epsilon,h_{\infty}-\epsilon]\,,\end{cases} \tag{4.4}\] according to the assumption of (4.3), by Proposition 4.1, the solution \(w_{\epsilon}(t,x)\) of (4.4) converges to the unique steady state \(w_{\epsilon}(x)\) of (4.4) uniformly in \([g_{\infty}+\epsilon,h_{\infty}-\epsilon]\) as \(t\to+\infty\). 
Further, by the Maximum principle and comparison argument, it follows \[u(t,x)\geq w_{\epsilon}(t,x),\ \text{for}\ t>T_{\epsilon}\ \text{and}\ x\in[g_{\infty}+\epsilon,h_{\infty}-\epsilon]\,.\] Then we can find a positive \(\tilde{T}_{\epsilon}>T_{\epsilon}\) such that \[u(t,x)\geq\frac{1}{2}w_{\epsilon}(x)>0,\ \text{for}\ t>\tilde{T}_{\epsilon}\ \text{and}\ x\in[g_{\infty}+\epsilon,h_{\infty}-\epsilon]\,.\] By (**J**), there exist \(\tilde{\epsilon}>0\) and \(\delta_{0}>0\) such that \(J(x)>\delta_{0}\) for \(x\in(-\tilde{\epsilon},\tilde{\epsilon})\). Then for \(0<\epsilon<\min\left\{\epsilon_{0},\tilde{\epsilon}/2\right\}\), we have \[h^{\prime}(t) =\mu\int_{g(t)}^{h(t)}\int_{h(t)}^{+\infty}J\left(x-y\right)u(t,x )\mathrm{d}y\mathrm{d}x \tag{4.5}\] \[\geq\mu\int_{g_{\infty}+\epsilon}^{h_{\infty}-\epsilon}\int_{h_{ \infty}}^{+\infty}J\left(x-y\right)u(t,x)\mathrm{d}y\mathrm{d}x\] \[\geq\frac{1}{2}\mu\int_{h_{\infty}-\tilde{\epsilon}/2}^{h_{ \infty}-\epsilon}\int_{h_{\infty}}^{h_{\infty}+\tilde{\epsilon}/2}\delta_{0}w _{\epsilon}(x)\mathrm{d}y\mathrm{d}x\] \[>0,\] for all \(t>\tilde{T}_{\epsilon}\), which implies \(h_{\infty}=+\infty\), contradicting the assumption that \(h_{\infty}-g_{\infty}<+\infty\). Thus, it follows \[\lambda_{p}(\mathcal{L}_{(g_{\infty},h_{\infty})}^{d}+f_{0}-\nu\nabla)\leq 0.\] Step 2: It turns to show that \(u(t,x)\) converges to \(0\) uniformly in \([g(t),h(t)]\) as \(t\to+\infty\). Let \(\bar{u}(t,x)\) be the solution of the following equation \[\begin{cases}\bar{u}_{t}=d\int_{g_{\infty}}^{h_{\infty}}J\left(x-y\right)\bar {u}(t,y)\mathrm{d}y-d\ \bar{u}(t,x)-\nu\bar{u}_{x}+f(\bar{u}),&t>0,\ x\in[g_{\infty},h_{\infty}]\,,\\ \bar{u}(0,x)=v_{0}(x),&x\in[g_{\infty},h_{\infty}]\,,\end{cases}\] where \[v_{0}(x)=\begin{cases}u_{0}(x),&\text{if }-h_{0}\leq x\leq h_{0},\\ 0,&\text{if }x>h_{0}\text{ or }x<-h_{0}.\end{cases}\] By Maximum principle, we can obtain that \[u(t,x)\leq\bar{u}(t,x),\text{ for }t>0\text{ and }x\in[g(t),h(t)].\] Furthermore, when \(\lambda_{p}(\mathcal{L}_{(g_{\infty},h_{\infty})}^{d}+f_{0}-\nu\nabla)\leq 0\), by Proposition 4.1, it follows that \(\bar{u}(t,x)\to 0\) uniformly in \(x\in[g_{\infty},h_{\infty}]\) as \(t\to+\infty\). Hence, \(\lim\limits_{t\to\infty}u(t,x)=0\) uniformly in \(x\in[g(t),h(t)]\). Moreover, as the spreading happens, we discuss the long-time asymptotic behaviors of the solution for system (1.2). **Lemma 4.1**.: _Let \((u,g,h)\) be the unique solution of (1.2), if \(h_{\infty}-g_{\infty}=\infty\), then \(g_{\infty}=-\infty\) and \(h_{\infty}=\infty\)_ Proof.: It suffices to show \(h_{\infty}<+\infty\) if and only if \(-g_{\infty}<+\infty\). Without loss of generality, we assume by contradiction that \(h_{\infty}<+\infty\) and \(g_{\infty}=-\infty\). According to Property 3.1, there exists \(l_{*}>0\) such that \[\lambda_{p}(\mathcal{L}_{(-l_{*},h_{0})}^{d}+f_{0}-\nu\nabla)>0.\] Further, for any \(0<\epsilon\ll 1\) small, there exists \(T_{\epsilon}>0\) such that \[h(t)>h_{\infty}-\epsilon>h_{0},\ g(t)<-l_{*},\] for \(t>T_{\epsilon}\). 
Especially, \[\lambda_{p}(\mathcal{L}_{(-l_{*},h_{\infty}-\epsilon)}^{d}+f_{0}-\nu\nabla)> \lambda_{p}(\mathcal{L}_{(-l_{*},h_{0})}^{d}+f_{0}-\nu\nabla)>0.\] Consider \[\begin{cases}w_{t}=d\int_{-l_{*}}^{h_{\infty}-\epsilon}J\left(x-y\right)w(t,y) \mathrm{d}y-d\ w-\nu w_{x}+f(w),&t>T_{\epsilon},\ x\in[-l_{*},h_{\infty}- \epsilon]\,,\\ w\left(T_{\epsilon},x\right)=u\left(T_{\epsilon},x\right),&x\in[-l_{*},h_{ \infty}-\epsilon]\,,\end{cases} \tag{4.6}\] applying the similar method in proving (4.5) of the proof for Theorem 4.1, it can be shown that \(h^{\prime}(t)>0\) for all \(t\) large enough, which contradicts \(h_{\infty}<+\infty\). Thus, the theorem has been proved. Assuming that \(\mathbf{(f1)-(f4)}\) hold, it can be seen that there is unique \(\tilde{u}_{0}\in(0,k_{0})\) such that \(f(\tilde{u}_{0})=0\). Motivated by Proposition 3.6 in [32], we have the following result. **Proposition 4.2**.: _Assume that \(\mathbf{(J)}\) and \(\mathbf{(f1)-(f4)}\) hold. Then there exists \(L>0\) such that for every interval \((h_{1},h_{2})\) with \(h_{2}-h_{1}>L\), we have \(\lambda_{p}(\mathcal{L}_{(h_{1},h_{2})}^{d}+f_{0}-\nu\nabla)>0\) and hence (4.1) admits a unique positive steady state \(u_{(h_{1},h_{2})}\). Moreover,_ \[\lim\limits_{h_{2}-h_{1}\to+\infty}u_{(h_{1},h_{2})}=\tilde{u}_{0}\text{ locally uniformly in }\mathbb{R}. \tag{4.7}\] Further, **Theorem 4.2** (Asymptotic limit).: _Let \((u,g,h)\) be the unique solution of (1.2), if \(h_{\infty}-g_{\infty}=\infty\), then \(\lim\limits_{t\to\infty}u(t,x)=\tilde{u}_{0}\) locally uniformly in \(\mathbb{R}\)._ Proof.: According to Lemma 4.1, \(h_{\infty}-g_{\infty}=+\infty\) implies that \(h_{\infty}\) and \(-g_{\infty}\) are equal to \(\infty\) simultaneously. Take an increasing sequence \(\left\{t_{n}\right\}_{n\geq 1}\) satisfying \(t_{n}\to\infty\) as \(n\to\infty\) and \[\lambda_{p}(\mathcal{L}^{d}_{(g(t_{n}),h(t_{n}))}+f_{0}-\nu)>0,\] for all \(n\geq 1.\) Let \(\underline{u}_{n}(t,x)\) be the unique solution of the following problem \[\begin{cases}\underline{u}_{t}=d\int_{g(t_{n})}^{h(t_{n})}J\left(x-y\right) \underline{u}(t,y)\mathrm{d}y-d\ \underline{u}(t,x)-\nu\underline{u}_{x}+f(\underline{u}),&t>t_{n},\ x\in[g(t_{n }),h(t_{n})]\,,\\ \underline{u}\left(t_{n},x\right)=u\left(t_{n},x\right),&x\in[g(t_{n}),h(t_{n })]\,.\end{cases} \tag{4.8}\] By Proposition 4.1 and the comparison argument, it follows that \[u(t,x)\geq\underline{u}_{n}(t,x)\text{ in }\left[t_{n},+\infty\right)\times \left[g(t_{n}),h(t_{n})\right]. \tag{4.9}\] Since \(\lambda_{p}(\mathcal{L}^{d}_{[g(t_{n}),h(t_{n})]}+f_{0}-\nu\nabla)>0\), by Proposition 4.2, the problem (4.8) admits a unique positive steady state \(\underline{u}_{n}(x)\) and \[\lim_{t\to+\infty}\underline{u}_{n}(t,x)=\underline{u}_{n}(x)\text{ uniformly in }\left[g(t_{n}),h(t_{n})\right]. \tag{4.10}\] By Proposition 4.2, \[\lim_{n\to\infty}\underline{u}_{n}(x)=\tilde{u}_{0}\text{ uniformly for }x\text{ in any compact subset of }\mathbb{R}. 
\tag{4.11}\]

According to (4.9)–(4.11), together with the upper bound obtained by comparing \(u\) with the solution of the ODE problem \(\hat{u}^{\prime}(t)=f(\hat{u}),\ \hat{u}(0)=\|u_{0}\|_{\infty}\), we obtain \(\lim\limits_{t\to\infty}u(t,x)=\tilde{u}_{0}\) locally uniformly in \(\mathbb{R}\), which completes the proof.

We next look for the critical conditions that determine whether vanishing or spreading happens for (1.2). By Property 3.1, under the assumption

\[f_{0}<d, \tag{4.14}\]

\(\lambda_{p}(\mathcal{L}_{(-h,h)}^{d}+f_{0}-\nu\nabla)<0\) when \(h>0\) is small enough, while if \(h\) is large enough, then

\[\lambda_{p}(\mathcal{L}_{(-h,h)}^{d}+f_{0}-\nu\nabla)>0.\]

Hence, there is a constant \(h^{*}>0\) such that

\[\lambda_{p}(\mathcal{L}_{(-h^{*},h^{*})}^{d}+f_{0}-\nu\nabla)=0 \tag{4.15}\]

and \(\lambda_{p}(\mathcal{L}_{(-h,h)}^{d}+f_{0}-\nu\nabla)>0\) for \(h>h^{*}\). In the next discussions, we will further find out the critical conditions determining the vanishing or spreading for (1.2). Assume that (4.14) always holds. We have the following results.

**Theorem 4.3**.: _Assume that \(\mathbf{(J)}\) and \(\mathbf{(f1)}-\mathbf{(f4)}\) hold. If \(h_{0}\geq h^{*}\), then spreading always occurs for (1.2). If \(h_{0}<h^{*}\), then there exists \(\tilde{\mu}_{*}>0\) such that vanishing occurs for (1.2) if \(0<\mu\leq\tilde{\mu}_{*}\)._

Proof.: When \(h_{0}\geq h^{*}\), assume by contradiction that vanishing occurs; then \(-\infty<g_{\infty}<h_{\infty}<\infty\) and \(h_{\infty}-g_{\infty}>2h^{*}\). According to Property 3.1 and (4.15),

\[\lambda_{p}(\mathcal{L}_{(g_{\infty},h_{\infty})}^{d}+f_{0}-\nu\nabla)>0,\]

contradicting Theorem 4.1. Thus, if \(h_{0}\geq h^{*}\), spreading always happens for (1.2).

Now we mainly consider the case \(h_{0}<h^{*}\). Fix \(\tilde{h}\in(h_{0},h^{*})\) and let \(\omega(t,x)\) be the unique solution of the following problem

\[\begin{cases}w_{t}(t,x)=d\int_{-\tilde{h}}^{\tilde{h}}J\left(x-y\right)w(t,y)\mathrm{d}y-d\ w-\nu w_{x}+f(w),&t>0,\ x\in[-\tilde{h},\tilde{h}],\\ w(0,x)=u_{0}(x),&x\in[-h_{0},h_{0}]\,,\\ w(0,x)=0,&x\in[-\tilde{h},-h_{0})\cup(h_{0},\tilde{h}],\\ w(t,\tilde{h})=0,\ w(t,-\tilde{h})=0,&t>0.\end{cases} \tag{4.16}\]

The choice of \(\tilde{h}\) makes

\[\lambda:=\lambda_{p}(\mathcal{L}_{(-\tilde{h},\tilde{h})}^{d}+f_{0}-\nu\nabla)<0.\]

Let \(\Phi>0\) be an eigenfunction of \(\lambda\) with \(\|\Phi\|_{L^{\infty}}=1\); then

\[(\mathcal{L}_{(-\tilde{h},\tilde{h})}^{d}+f_{0}-\nu\nabla)[\Phi](x)=\lambda\Phi(x),\ \text{for}\ x\in[-\tilde{h},\tilde{h}].\]

By (**f4**), we have

\[\begin{split}\omega_{t}(t,x)&=d\int_{-\tilde{h}}^{\tilde{h}}J(x-y)\omega(t,y)\mathrm{d}y-d\ \omega-\nu\omega_{x}+f(\omega)\\ &\leq d\int_{-\tilde{h}}^{\tilde{h}}J(x-y)\omega(t,y)\mathrm{d}y-d\ \omega-\nu\omega_{x}+f_{0}\ \omega.\end{split} \tag{4.17}\]

For \(\gamma>0\) to be determined later, set \(\tilde{w}=\gamma e^{\lambda t/2}\Phi\); then

\[\begin{split}& d\int_{-\tilde{h}}^{\tilde{h}}J\left(x-y\right)\tilde{w}(t,y)\mathrm{d}y-d\ \tilde{w}-\nu\tilde{w}_{x}+f_{0}\ \tilde{w}-\tilde{w}_{t}(t,x)\\ =&\gamma e^{\lambda t/2}\left\{d\int_{-\tilde{h}}^{\tilde{h}}J\left(x-y\right)\Phi(y)\mathrm{d}y-d\ \Phi-\nu\Phi^{\prime}+f_{0}\ \Phi-\frac{\lambda}{2}\Phi\right\}\\ =&\frac{\lambda}{2}\gamma e^{\lambda t/2}\Phi<0.\end{split}\]

Take \(\gamma>0\) large such that \(\gamma\Phi>u_{0}\) in \([-\tilde{h},\tilde{h}]\). Applying the Maximum principle to \(\tilde{w}-\omega\), we can get that

\[\omega(t,x)\leq\tilde{w}(t,x)=\gamma e^{\lambda t/2}\Phi\leq\gamma e^{\lambda t/2},\text{ for }t>0\text{ and }x\in[-\tilde{h},\tilde{h}]. \tag{4.18}\]

Denote

\[\hat{h}(t)=h_{0}+2\mu\tilde{h}\gamma\int_{0}^{t}e^{\lambda s/2}\mathrm{d}s,\text{ for }t\geq 0,\]

and \(\hat{g}(t)=-\hat{h}(t),\ t\geq 0\). Next, we show that \((\omega,\hat{h},\hat{g})\) is an upper solution of (1.2).
Take \[0<\mu\leq\tilde{\mu}_{*}:=\frac{\lambda(h_{0}-\tilde{h})}{4\tilde{h}\gamma},\] since \(\lambda<0\), for any \(t>0\), we have \[\hat{h}(t)\leq h_{0}-\frac{4\mu\tilde{h}\gamma}{\lambda}(1-e^{\lambda t/2})<h _{0}-\frac{4\mu\tilde{h}\gamma}{\lambda}\leq\tilde{h}.\] Similarly, \(-\tilde{h}<\hat{g}(t)<0\), for any \(t>0\). Thus (4.17) gives \[\omega_{t}(t,x)\geq d\int_{\hat{g}(t)}^{\hat{h}(t)}J\left(x-y\right)\omega(t, y)\mathrm{d}y-d\ \omega-\nu\omega_{x}+f(\omega),\text{ for }t>0,\ x\in[\hat{g}(t),\hat{h}(t)].\] Due to (4.18), it can be obtained that \[\int_{\hat{g}(t)}^{\hat{h}(t)}\int_{\hat{h}(t)}^{+\infty}J\left(x-y\right) \omega(t,x)\mathrm{d}y\mathrm{d}x<2\tilde{h}\gamma e^{\lambda t/2}.\] Thus, \[\hat{h}^{\prime}(t)=2\mu\tilde{h}\gamma e^{\lambda t/2}>\mu\int_{\hat{g}(t)}^ {\hat{h}(t)}\int_{\hat{h}(t)}^{+\infty}J\left(x-y\right)\omega(t,x)\mathrm{d} y\mathrm{d}x.\] And \[\hat{g}^{\prime}(t)<-\mu\int_{\hat{g}(t)}^{\hat{h}(t)}\int_{-\infty}^{\hat{g}( t)}J\left(x-y\right)\omega(t,x)\mathrm{d}y\mathrm{d}x.\] Thus \((\omega,\hat{h},\hat{g})\) is an upper solution of (1.2). By comparison principle, it follows \[u(t,x)\leq\hat{w}(t,x),\ g(t)\geq\hat{g}(t)\text{ and }h(t)\leq\hat{h}(t), \text{ for }t>0,\ x\in[g(t),h(t)].\] Thus, \[h_{\infty}-g_{\infty}\leq\underset{t\rightarrow+\infty}{\limsup}(\hat{h}(t)- \hat{g}(t))\leq 2\tilde{h}<+\infty.\] **Theorem 4.4**.: _Assume that \((\mathbf{J})\) and \((\mathbf{f1})-(\mathbf{f4})\) hold. If \(h_{0}<h^{*}\), then there exists \(\tilde{\mu}^{*}>0\) such that spreading happens to (1.2) if \(\mu>\tilde{\mu}^{*}\)._ Proof.: Suppose that \(h_{\infty}-g_{\infty}<+\infty\) for any \(\mu>0\), we will give a proof by contradiction. According to Theorem 4.1, we have \(\lambda_{p}(\mathcal{L}^{d}_{(g_{\infty},h_{\infty})}+f_{0}-\nu\nabla)\leq 0\), it then follows \[h_{\infty}-g_{\infty}\leq 2h^{*}.\] Let \((u_{\mu},g_{\mu},h_{\mu})\) represent the solution of (1.2) to stress the dependence on \(\mu\). By the comparison principle, we can show the strict monotonicity of \(u_{\mu},g_{\mu}\) and \(h_{\mu}\) in \(\mu\). Denote \[h_{\mu}(\infty):=\lim_{t\rightarrow+\infty}h_{\mu}(t)\text{ and }g_{\mu}(\infty):= \lim_{t\rightarrow+\infty}g_{\mu}(t).\] Then \(h_{\mu}(\infty)\) is increasing in \(\mu\) and \(g_{\mu}(\infty)\) is decreasing in \(\mu\). Denote \[\mathscr{H}_{\infty}:=\lim_{\mu\rightarrow+\infty}h_{\mu}(\infty)\in(0,\infty] \text{ and }\mathscr{G}_{\infty}:=\lim_{\mu\rightarrow+\infty}g_{\mu}(\infty)\in[- \infty,0).\] Since \(J(0)>0\), there exist \(0<\epsilon_{0}<h_{0}\) and \(\delta_{0}>0\) such that \[J(x)\geq\delta_{0},\text{ for }x\in(-\epsilon_{0},\epsilon_{0}).\] And there exist constants \(\tilde{\mu}\) and \(\tilde{t}\) such that \[h_{\mu}(t)>\mathscr{H}_{\infty}-\frac{\epsilon_{0}}{2},\text{ and }g_{\mu}(t)<\mathscr{G}_{\infty}+\frac{\epsilon_{0}}{2}\] for any \(\mu\geq\tilde{\mu}\) and \(t\geq\tilde{t}\). 
Then, by (1.2), it follows

\[\begin{split}&\mu=\frac{h_{\mu}(\infty)-h_{\mu}\left(\tilde{t}\right)}{\int_{\tilde{t}}^{+\infty}\int_{g_{\mu}(\tau)}^{h_{\mu}(\tau)}\int_{h_{\mu}(\tau)}^{+\infty}J\left(x-y\right)u_{\mu}(\tau,x)\mathrm{d}y\mathrm{d}x\mathrm{d}\tau}\\ &\leq\frac{2h^{*}}{\int_{\tilde{t}}^{2\tilde{t}}\int_{g_{\tilde{\mu}}(\tau)}^{h_{\tilde{\mu}}(\tau)}\int_{h_{\tilde{\mu}}(\tau)+\frac{\epsilon_{0}}{2}}^{+\infty}J\left(x-y\right)u_{\tilde{\mu}}(\tau,x)\mathrm{d}y\mathrm{d}x\mathrm{d}\tau}\\ &\leq\frac{2h^{*}}{\delta_{0}\int_{\tilde{t}}^{2\tilde{t}}\int_{h_{\tilde{\mu}}(\tau)-\epsilon_{0}}^{h_{\tilde{\mu}}(\tau)+\epsilon_{0}}u_{\tilde{\mu}}(\tau,x)\mathrm{d}y\mathrm{d}x\mathrm{d}\tau}\\ &=\frac{2h^{*}}{\frac{1}{2}\delta_{0}\epsilon_{0}\int_{\tilde{t}}^{2\tilde{t}}\int_{h_{\tilde{\mu}}(\tau)-\epsilon_{0}}^{h_{\tilde{\mu}}(\tau)}u_{\tilde{\mu}}(\tau,x)\mathrm{d}x\mathrm{d}\tau}\\ &<+\infty.\end{split} \tag{4.19}\]

Since \(\mu\) can be taken arbitrarily large, this yields a contradiction. Therefore, we can take

\[\tilde{\mu}^{*}=1+\frac{4h^{*}}{\delta_{0}\epsilon_{0}\int_{\tilde{t}}^{2\tilde{t}}\int_{h_{\tilde{\mu}}(\tau)-\epsilon_{0}}^{h_{\tilde{\mu}}(\tau)}u_{\tilde{\mu}}(\tau,x)\mathrm{d}x\mathrm{d}\tau},\]

and the proof is completed.

Now we can give a more explicit dichotomy description of \(\mu\) as follows.

**Theorem 4.5**.: _Assume that \((\mathbf{J})\) and \((\mathbf{f1})-(\mathbf{f4})\) hold. When \(h_{0}<h^{*}\), there exists \(\mu^{*}>0\) such that vanishing happens for (1.2) if \(0<\mu\leq\mu^{*}\) and spreading happens for (1.2) if \(\mu>\mu^{*}\)._

Proof.: Denote \(\mu^{*}:=\sup\Omega\), where \(\Omega=\left\{\mu\mid\mu>0\text{ such that }h_{\infty}-g_{\infty}<+\infty\right\}.\) Given Theorems 4.3 and 4.4, we see that \(0<\mu^{*}<+\infty\). Let \((u_{\mu},g_{\mu},h_{\mu})\) be the solution of (1.2) and set

\[h_{\mu,\infty}:=\lim_{t\rightarrow+\infty}h_{\mu}(t)\text{ and }g_{\mu,\infty}:=\lim_{t\rightarrow+\infty}g_{\mu}(t).\]

Since \(u_{\mu}\), \(-g_{\mu}\) and \(h_{\mu}\) are increasing in \(\mu\), it can be obtained that if \(\mu_{0}\in\Omega\), then \(\mu\in\Omega\) for any \(\mu<\mu_{0}\), and if \(\mu_{0}\notin\Omega\), then \(\mu\notin\Omega\) for any \(\mu>\mu_{0}\). Thus,

\[(0,\mu^{*})\subseteq\Omega\text{ and }\,(\mu^{*},+\infty)\cap\Omega=\emptyset. \tag{4.20}\]

Next we show that \(\mu^{*}\in\Omega\) by contradiction. Suppose that \(\mu^{*}\notin\Omega\). Then \(h_{\mu^{*},\infty}=-g_{\mu^{*},\infty}=+\infty\). Thus there is \(T>0\) such that \(-g_{\mu^{*}}(t)>h^{*}\), \(h_{\mu^{*}}(t)>h^{*}\) for \(t\geq T\). Hence there exists \(\epsilon>0\) such that \(-g_{\mu}(T)>h^{*}\) and \(h_{\mu}(T)>h^{*}\) for \(\mu\in(\mu^{*}-\epsilon,\mu^{*}+\epsilon)\), which implies \(\mu\notin\Omega\). It contradicts (4.20). Therefore \(\mu^{*}\in\Omega\).

Given the above theorems, we can get the following spreading-vanishing theorems.

**Theorem 4.6** (Spreading-vanishing criteria).: _Assume that \(J\) and \(f\) satisfy the conditions of Theorem 4.1. Let \((u,g,h)\) be the unique solution of (1.2). If \(f_{0}<d\), then there exists a unique \(h^{*}>0\) such that_

\((1)\) _if vanishing occurs, then \(h_{\infty}-g_{\infty}\leq 2h^{*};\)_

\((2)\) _spreading occurs when \(h_{0}\geq h^{*};\)_

\((3)\) _if \(h_{0}<h^{*},\) then there exists a positive constant \(\mu^{*}>0\) such that vanishing occurs when \(\mu\leq\mu^{*}\) and spreading occurs when \(\mu>\mu^{*}\)._

Furthermore, we also have the following spreading and vanishing dichotomy regimes.
**Theorem 4.7** (Spreading-vanishing dichotomy).: _Let \((u,g,h)\) be the solution of (1.2); then one of the following regimes holds for (1.2):_

\((1)\) _vanishing_: \(h_{\infty}-g_{\infty}<\infty\) _and \(\lim\limits_{t\to\infty}u(t,x)=0\) uniformly in \([g(t),h(t)];\)_

\((2)\) _spreading_: \(h_{\infty}-g_{\infty}=\infty\) _and \(\lim\limits_{t\to\infty}u(t,x)=\tilde{u}_{0}\) locally uniformly in \(\mathbb{R}\)._

## 5 Spreading speed

In this section, we mainly investigate the effects of the advection on the spreading speeds of the double free boundaries for the problem (1.2). We aim to find out the explicit differences between the leftward and rightward asymptotic spreading speeds induced by the advection term. For the sake of simplicity and without loss of generality, we take \(f(u):=u(1-u)\) in problem (1.2) in the next discussions; our results can be applied to other Fisher-KPP type or monostable type models.

The following theorem is our main result in this section. It shows that the propagation speed of the leftward front is strictly less than that of the rightward front when \(\nu>0\). Simultaneously, it shows that the spreading speed for (1.2) is finite if and only if the following assumption is satisfied:

\((\mathbf{J}_{*})\) \(\int_{0}^{\infty}xJ(x)\mathrm{d}x<\infty.\)

**Theorem 5.1**.: _Assume that \((\mathbf{J})\) and \((\mathbf{f1})-(\mathbf{f4})\) hold. If \((\mathbf{J}_{*})\) is also satisfied, then, when the spreading occurs, the asymptotic spreading speeds of the leftward front and the rightward front for the problem (1.2) satisfy_

\[\lim\limits_{t\to\infty}\frac{-g(t)}{t}=c_{l}^{*},\ \ \ \lim\limits_{t\to\infty}\frac{h(t)}{t}=c_{r}^{*}. \tag{5.1}\]

_Moreover,_

\[0<c_{l}^{*}<c^{*}<c_{r}^{*},\ \ \ \lim\limits_{\nu\to 0}c_{l}^{*}=\lim\limits_{\nu\to 0}c_{r}^{*}=c^{*}, \tag{5.2}\]

_where \(c^{*}\) is exactly the finite asymptotic spreading speed of the double free boundaries for the problem (1.2) without the advection._

However, under the assumptions \((\mathbf{J})\) and \((\mathbf{f1})-(\mathbf{f4})\) of the above theorem, the following also holds.

**Theorem 5.2**.: _If \(\mathbf{(J_{*})}\) is not satisfied, then, as the spreading occurs,_

\[\lim_{t\to\infty}\frac{h(t)}{t}=\lim_{t\to\infty}\frac{-g(t)}{t}=\infty.\]

For the Cauchy problem

\[\begin{cases}u_{t}=d\int_{\mathbb{R}}J\left(x-y\right)u(t,y)\mathrm{d}y-d\ u(t,x)-\nu u_{x}+f(u),&t>0,\ x\in\mathbb{R},\\ u(0,x)=u_{0}(x),&x\in\mathbb{R},\end{cases} \tag{5.3}\]

assume that the kernel function \(J(x)\) satisfies

\(\mathbf{(J_{**})}\) \(\int_{-\infty}^{\infty}e^{\lambda x}J(x)\mathrm{d}x<\infty\ \text{for some}\ \lambda>0.\)

It can be easily shown that \(\mathbf{(J_{**})}\) implies \(\mathbf{(J_{*})}\), while the reverse is not true. For instance, \(J(x)=(1+|x|)^{-3}\) satisfies \(\mathbf{(J_{*})}\) but does not satisfy \(\mathbf{(J_{**})}\): indeed, \(\int_{0}^{\infty}x(1+x)^{-3}\mathrm{d}x=\tfrac{1}{2}<\infty\), while \(\int_{0}^{\infty}e^{\lambda x}(1+x)^{-3}\mathrm{d}x=\infty\) for every \(\lambda>0\). According to Theorem 1 in [38], we can get the following key result.

**Proposition 5.1**.: _Suppose that \(J\) satisfies \(\mathbf{(J)}\), \(\mathbf{(J_{*})}\) and \(f\) satisfies \(\mathbf{(f4)}\). If, additionally, \(J\) satisfies \(\mathbf{(J_{**})}\), then there is a constant \(\hat{c}_{*}>0\) such that (5.3) has a traveling wave solution with speed \(c\) if and only if \(c\geq\hat{c}_{*}\). In fact, the following problem_

\[\left\{\begin{array}{l}d\int_{\mathbb{R}}J\left(x-y\right)\Phi(y)\mathrm{d}y -d\ \Phi(x)+(c-\nu)\Phi^{\prime}(x)+f(\Phi(x))=0,\ x\in\mathbb{R},\\ \Phi(-\infty)=1,\ \Phi(+\infty)=0,\end{array}\right.
\tag{5.4}\] _has a solution \(\Phi\in L^{\infty}(\mathbb{R})\) which is nonincreasing if and only if \(c\geq\hat{c}_{*}.\) Moreover, for each \(c\geq\hat{c}_{*}\), the solution satisfies \(:\Phi\in C^{1}(\mathbb{R})\). Meanwhile, if \(J\) does not satisfy \(\mathbf{(J_{**})}\), then \(\left(\ref{1.3}\right)\) does not have a traveling wave solution._ For the problem \[\left\{\begin{array}{l}d\int_{-\infty}^{0}J\left(x-y\right)\Phi(y)\mathrm{d }y-d\ \Phi+c\Phi^{\prime}(x)+f(\Phi(x))=0,\ -\infty<x<0,\\ \Phi(-\infty)=1,\ \Phi(0)=0,\end{array}\right. \tag{5.5}\] Assume that the kernel function \(J(x)\) satisfies \(\mathbf{(J_{**})}\), it follows **Proposition 5.2** (Theorem 2.6, [31]).: _Suppose that \(\mathbf{(J)},\mathbf{(J_{**})}\) and \(\mathbf{(f4)}\) hold. There is a constant \(\tilde{c}>0\) such that for any \(c\in(0,\tilde{c})\), the problem \(\left(\ref{1.5}\right)\) has a unique solution \(\Phi=\Phi_{r}^{c}\), and \(\Phi_{r}^{c}(x)\) is strictly decreasing in \(c\in(0,\tilde{c})\) for fixed \(x<0\), and is strictly decreasing in \(x\in(-\infty,0]\) for fixed \(c\in(0,\tilde{c})\)._ Consider the following problem \[\left\{\begin{array}{l}d\int_{-\infty}^{0}J\left(x-y\right)\Phi(y)\mathrm{d }y-d\ \Phi+(c-\nu)\Phi^{\prime}(x)+f(\Phi(x))=0,\ -\infty<x<0,\\ \Phi(-\infty)=1,\ \Phi(0)=0,\end{array}\right. \tag{5.6}\] with \[c=\mu\int_{-\infty}^{0}\int_{0}^{\infty}J\left(x-y\right)\Phi(x)\mathrm{d}y \mathrm{d}x. \tag{5.7}\] To investigate the asymptotic spreading speeds of \(\left(\ref{1.2}\right)\), motivated by Theorem 2.7 [31], we first propose the following result. **Lemma 5.1**.: _Denote \(\tilde{c}_{r}:=\tilde{c}+\nu\). Assume that \(\mathbf{(J),(J_{\ast\ast})}\) and \(\mathbf{(f4)}\) hold, for any \(c_{r}\in(0,\tilde{c}_{r})\), the problem (5.6) admits a semi-wave solution \(\Phi^{c_{r}}(x)\) satisfying_ \[\lim_{c_{r}\to\tilde{c}_{r}^{-}}\Phi^{c_{r}}(x)=0\text{ locally uniformly in }x\in(-\infty,0]. \tag{5.8}\] _Further, for any \(\mu>0,\) there is unique \(c=c_{r}^{*}\in(0,\tilde{c}_{r})\) such that_ \[c_{r}^{*}=\mu\int_{-\infty}^{0}\int_{0}^{\infty}J\left(x-y\right)\Phi^{c_{r}^{ *}}(x)\mathrm{d}y\mathrm{d}x. \tag{5.9}\] Proof.: The proof will be completed in two steps. Step 1: We aim to prove (5.8). Considering the following problem \[\left\{\begin{array}{l}d\int_{-\infty}^{0}J\left(x-y\right)\Phi(y)\mathrm{d }y-d\ \Phi+(c-\nu)\Phi^{\prime}(x)+f(\Phi(x))=0,\ -\infty<x<0,\\ \Phi(-\infty)=1,\ \Phi(0)=0,\end{array}\right. \tag{5.10}\] according to Proposition 5.2, there exists a \(c_{0}=\tilde{c}+\nu\) such that (5.10) admits a semi-wave solution pair \((c,\Phi^{c}(x))\) for any \(c<c_{0}\) with \(\Phi^{c}(x)\) strictly decreasing. Let \(\{c_{n}\}\subset(0,c_{0})\) be an arbitrary sequence which increasingly converges to \(c_{0}\) as \(n\to\infty\). Denote \(\Phi_{n}(x):=\Phi^{c_{n}}(x)\). We can see that \(\Phi_{n}(x)\) is uniformly bounded, and \(\Phi_{n}^{\prime}(x)\) is also uniformly bounded in view of (5.10). Therefore there is a subsequence \(\{\Phi_{n_{k}}\}\) of \(\Phi_{n}\) such that \[\Phi_{n_{k}}(x)\to\Phi(x)\text{ in }C_{loc}((-\infty,0])\text{ as }k\to\infty.\] Without loss of generality, we still denote \(\Phi_{n_{k}}\) by \(\Phi_{n}\). By Proposition 5.2, the function \(\Phi\) satisfies \[\left\{\begin{array}{l}d\int_{-\infty}^{0}J\left(x-y\right)\Phi(y)\mathrm{d }y-d\ \Phi+c_{0}\Phi^{\prime}(x)-\nu\Phi^{\prime}(x)+f(\Phi)=0,\ -\infty<x<0,\\ \Phi(0)=0.\end{array}\right.\] And \(0\leq\Phi(x)<\Phi_{n}(x)\) for \(x<0\). We extend \(\Phi(x)\) for \(x>0\) by \(0\). 
For fixed \(\delta\in(0,1)\), let \(\Phi_{0}(x)=\delta\Phi(x)\). By \(\mathbf{(f3)}\), we obtain that \(f(\delta\Phi)\geq\delta f(\Phi)\) and \[\begin{cases}d\left(J\ast\Phi_{0}\right)(x)-d\ \Phi_{0}(x)+c_{0}\Phi_{0}{}^{ \prime}(x)-\nu\Phi_{0}{}^{\prime}(x)+f\left(\Phi_{0}(x)\right)\geq 0,&- \infty<x<0,\\ \Phi_{0}(x)=0,&x\geq 0.\end{cases}\] Let \(\Phi_{*}\) denote the traveling wave solution with minimal speed \(c_{0}\) given by Proposition 5.1. For any \(\sigma>0\), it follows \[\Phi_{*}(x-\sigma)\geq\Phi_{*}(-\sigma),\text{ for }x\leq 0.\] Denote \(\theta_{\sigma}(x):=\Phi_{*}(x-\sigma)-\Phi_{0}\left(x\right)\). Since \(\Phi_{*}(-\infty)=1\) and \(\Phi_{0}(x)\leq\delta<1\), for all large \(\sigma>0\), we can obtain that \(\theta_{\sigma}(x)\geq 0,\text{ for }x\leq 0.\) Denote \[\sigma_{*}:=\inf\left\{\xi\in\mathbb{R}:\theta_{\sigma}(x)\geq 0\text{ for }x\leq 0 \text{ and all }\sigma\geq\xi\right\}. \tag{5.11}\] If \(\sigma_{*}=-\infty\), then \(\Phi_{0}(x)\leq\Phi_{*}(x-\sigma)\) for all \(\sigma\in\mathbb{R}\). Since \(\Phi_{*}(+\infty)=0\), then \(\Phi_{0}(x)\leq 0\) as \(\sigma\to-\infty\), which implies \(\Phi(x)\equiv 0\). If \(\sigma_{*}>-\infty\), then \(\theta_{\sigma_{*}}(x)\geq 0\) for \(x\leq 0.\) Since \(\theta_{\sigma_{*}}(-\infty)\geq 1-\delta>0\) and \(\theta_{\sigma_{*}}(0)=\Phi_{*}\left(-\sigma_{*}\right)>0\) with (5.11), there is \(x_{*}\in(-\infty,0)\) such that \(\theta_{\sigma_{*}}\left(x_{*}\right)=0\). Considering \[d\int_{-\infty}^{+\infty}J\left(x-y\right)\Phi_{*}\left(y-\sigma_{*}\right)\mathrm{ d}y-d\Phi_{*}\left(x-\sigma_{*}\right)+\left(c_{0}-\nu\right)\left(\Phi_{*} \right)^{\prime}\left(x-\sigma_{*}\right)+f\left(\Phi_{*}\left(x-\sigma_{*} \right)\right)=0,\ x\in\mathbb{R},\] we obtain \[\begin{cases}d\left(J*\Phi_{*}\right)\left(x-\sigma_{*}\right)-d\Phi_{*}\left( x-\sigma_{*}\right)+\left(c_{0}-\nu\right)\left(\Phi_{*}\right)^{\prime}\left(x- \sigma_{*}\right)+f\left(\Phi_{*}\left(x-\sigma_{*}\right)\right)=0,&-\infty< x<0,\\ \Phi_{*}\left(x-\sigma_{*}\right)>0,&x\geq 0.\end{cases}\] Applying the Maximum principle to \(\theta_{\sigma_{*}}\), we can conclude that \(\theta_{\sigma_{*}}(x)>0\) for \(x<0\), which yields a contradiction to \(\theta_{\sigma_{*}}\left(x_{*}\right)=0\). Thus it always holds that \(\Phi(x)\equiv 0\). Since \(c_{n}\) is arbitrary and increasingly converges to \(c_{0}\), it follows that (5.8) holds. Step 2: We will prove that there is a unique \(c_{**}\) such that \(\left(c_{**},\Phi^{c_{**}}(x)\right)\) satisfies the problem (5.6) with (5.7). For any \(c\in(\nu,c_{0})\), define \[\mathscr{M}(c):=\mu\int_{-\infty}^{0}\int_{0}^{\infty}J\left(x-y\right)\Phi^{ c}(x)\mathrm{d}y\mathrm{d}x.\] By Proposition 5.2, we see that \(\Phi^{c}(x)\) is strictly decreasing in \(c\), so does \(\mathscr{M}(c)\). According to the uniqueness of \(\Phi^{c}\), using the same arguments to show the convergence of \(\Phi_{n}(x)\) in Step 1, we can obtain that \(\Phi^{c}(x)\) is continuous in \(c\) uniformly for \(x\) in any compact set of \((-\infty,0]\). It follows that \(\mathscr{M}(c)\) is also continuous in \(c\). Considering the function \[\Theta:c\mapsto c-\mathscr{M}(c),\ \text{for}\ c\in\left(\nu,c_{0}\right),\] one can see that \(\Theta\) is continuous and strictly increasing in \(c\). By (5.8) and the Lebesgue dominated convergence theorem, it follows that \(\Theta(c)\to c_{0}>0\) as \(c\to{c_{0}}^{-}\). 
For fixed \(\mathscr{M}\left(2\nu\right)>0\), set \[c_{min}=\min\{2\nu,\mathscr{M}(2\nu)\},\] for any small \(c\in(0,c_{min})\), according to the strict monotonicity of \(\Phi^{c}(x)\) in \(c\), it follows that \(\Theta(c)\leq c-\mathscr{M}\left(2\nu\right)<0\). Then \(\Theta(c)\) admits a unique root \(c=c_{**}\in(0,c_{0})\) satisfying \(c_{**}=\mathscr{M}\left(c_{**}\right)\). Thus, (5.9) holds. Moreover, for the problem \[\left\{\begin{array}{l}d\int_{-\infty}^{0}J\left(x-y\right)\Phi(y)\mathrm{d} y-d\ \Phi+\left(c-\nu\right)\Phi^{\prime}(x)+f(\Phi(x))=0,\ -\infty<x<0,\\ \Phi(-\infty)=1,\ \Phi(0)=0,\\ c=\mu\int_{-\infty}^{0}\int_{0}^{\infty}J\left(x-y\right)\Phi(x)\mathrm{d}y \mathrm{d}x,\end{array}\right. \tag{5.12}\] we have the following important result. **Theorem 5.3**.: _Assume that \((\mathbf{J}),(\mathbf{J}_{*})\) and (\(\mathbf{f4}\)) hold, for \(\tilde{c}_{r}:=\tilde{c}+\nu>0\), the problem (5.12) admits a unique solution pair \(\left(c_{r}^{*},\Phi^{c_{r}^{*}}\right)\) with \(\Phi^{c_{r}^{*}}(x)\) strictly decreasing and \(c_{r}^{*}\in(0,\tilde{c}_{r})\)._ Proof.: Since \((\mathbf{J}_{**})\) implies \((\mathbf{J}_{*})\), according to Lemma 5.1, it suffices to explore the case in which \((\mathbf{J}_{**})\) is not satisfied. The detailed proof is similar to the proof of Lemma 2.9 in [31]. Here, the proof can be obtained only by making some minor modifications. For the following problem \[\left\{\begin{array}{l}d\int_{-\infty}^{0}J\left(x-y\right)\Phi\left(y\right) \mathrm{d}y-d\ \Phi+(c+\nu)\Phi^{\prime}(x)+f\left(\Phi(x)\right)=0,\ -\infty<x<0,\\ \Phi(-\infty)=1,\ \Phi(0)=0,\\ c=\mu\int_{-\infty}^{0}\int_{0}^{\infty}J\left(x-y\right)\Phi\left(x\right) \mathrm{d}y\mathrm{d}x,\end{array}\right. \tag{5.13}\] we also have **Theorem 5.4**.: _Assume that \(\left(\mathbf{J}\right)\), \(\left(\mathbf{J}_{*}\right)\) and \(\left(\mathbf{f4}\right)\) hold, for \(\tilde{c}_{l}:=\tilde{c}-\nu>0\), the problem \(\left(\ref{eq:1}\right)\) admits a unique solution pair \(\left(c_{l}^{*},\Phi^{c_{l}^{*}}\right)\) with \(\Phi^{c_{l}^{*}}(x)\) strictly decreasing and \(c_{l}^{*}\in(0,\tilde{c}_{l})\)._ For any given \(0<\epsilon\ll 1\), take \[f_{1}(u)=f(u)+\frac{\epsilon}{1+\epsilon}u^{2}=u\left(1-\frac{1 }{1+\epsilon}u\right),\] \[f_{2}(u)=f(u)-\frac{\epsilon}{1-\epsilon}u^{2}=u\left(1-\frac{1 }{1-\epsilon}u\right),\] \[\Phi_{1}=1+\epsilon,\ \Phi_{2}=1-\epsilon,\] then \(0\) and \(\Phi_{i}\) are the zero solutions of \(f_{i}\), for \(i=1,2\). Let \(\left(c_{r,i},\Phi_{r,i}(x)\right)\) be the solution pair of the problem \(\left(\ref{eq:1}\right)\) with \(f\) replaced by \(f_{i}\) and \(\Phi_{r,i}(-\infty)=\Phi_{i}\), and \(\left(c_{l,i},\Phi_{l,i}(x)\right)\) be the solution pair of the problem \(\left(\ref{eq:1}\right)\) with \(f\) replaced by \(f_{i}\) and \(\Phi_{l,i}(-\infty)=\Phi_{i}\). Then \(\Phi_{r,i}(x)\) and \(\Phi_{l,i}(x)\) are strictly decreasing for \(i=1,2\). And we obtain **Proposition 5.3**.: \[c_{r,2}<c_{r}^{*}<c_{r,1},\ \lim_{\epsilon\to 0}c_{r,1}=\lim_{\epsilon\to 0}c_{r,2}=c_{r}^{*},\] (5.14) _and_ \[c_{l,2}<c_{l}^{*}<c_{l,1},\ \lim_{\epsilon\to 0}c_{l,1}=\lim_{\epsilon\to 0}c_{l,2}=c_{l}^{*}. \tag{5.15}\] Proof.: First, we prove \(c_{r,2}<c_{r}^{*}\). 
Take \(\left(c_{r,2},\Phi_{r,2}\right)\) into \(\left(\ref{eq:1}\right)\), we have \[d\int_{-\infty}^{0}J\left(x-y\right)\Phi_{r,2}(y)\mathrm{d}y-d\ \Phi_{r,2}+(c_{r,2}-\nu)\Phi_{r,2}^{\prime}(x)+f\left(\Phi_{r,2}(x)\right)>0, \ \text{for}\ x\in(-\infty,0).\] Since \(\Phi_{r,2}\) is strictly decreasing, it implies \(c_{r,2}<c_{r}^{*}\). Using the similar techniques to take \(\left(c_{r,1},\Phi_{r,1}\right)\) into \(\left(\ref{eq:1}\right)\), we get \[d\int_{-\infty}^{0}J\left(x-y\right)\Phi_{r,1}(y)\mathrm{d}y-d\ \Phi_{r,1}+(c_{r,1}-\nu)\Phi_{r,1}^{\prime}(x)+f(\Phi_{r,1}(x))<0,\ \text{for}\ x\in(-\infty,0).\] According to the strict monotonicity of \(\Phi_{r,1}\) in \(x\), it implies \(c_{r,1}>c_{r}^{*}\). Meanwhile, \(c_{r,1}\) is continuous and increasing in \(\epsilon\) and \(c_{r,2}\) is continuous and decreasing in \(\epsilon\). It follows that \[\lim_{\epsilon\to 0}c_{r,1}=\lim_{\epsilon\to 0}c_{r,2}=c_{r}^{*}.\] Similarly, \(\left(\ref{eq:1}\right)\) is satisfied. To complete the proof of the Theorem 5.1, we first show the following lemmas. **Lemma 5.2**.: \[\liminf_{t\to\infty}\frac{h(t)}{t}\geq c_{r,2}.\] Proof.: For any given \(\epsilon>0\), define \[\underline{U}\left(t,x\right):=\Phi_{r,2}\left(x-c_{r,2}t\right),\;\underline{h }\left(t\right)=c_{r,2}t,\text{ for }t>0,\ x\in\left(-\infty,c_{r,2}t\right],\] where \(\left(c_{r,2},\Phi_{r,2}\right)\) is the solution pair of the following problem \[\left\{\begin{array}{l}d\int_{-\infty}^{0}J\left(x-y\right)\Phi\left(y \right)\mathrm{d}y-d\ \Phi\left(x\right)+\left(c-\nu\right)\Phi^{\prime}\left(x\right)+f_{2}\left( \Phi\left(x\right)\right)=0,\ -\infty<x<0,\\ \Phi\left(-\infty\right)=1-\epsilon,\ \Phi\left(0\right)=0,\\ c=\mu\int_{-\infty}^{0}\int_{0}^{\infty}J\left(x-y\right)\Phi(x)\mathrm{d}y \mathrm{d}x.\end{array}\right. \tag{5.16}\] Then we can get that \(\underline{U}(t,x)\leq 1-\epsilon,\text{ for }t>0,\ x\in\left(-\infty,c_{r,2}t\right]\). And \[\underline{U}_{t}= -c_{r,2}\Phi_{r,2}^{\prime}(x-c_{r,2}t) \tag{5.17}\] \[= \int_{-\infty}^{0}J\left(x-c_{r,2}t-y\right)\Phi_{r,2}(y) \mathrm{d}y-d\ \Phi_{r,2}\left(x-c_{r,2}t\right)-\nu\Phi_{r,2}^{\prime}(x-c_{r,2}t)+f_{2} \left(\Phi_{r,2}\left(x-c_{r,2}t\right)\right)\] \[= \int_{-\infty}^{c_{r,2}t}J\left(x-y\right)\Phi_{r,2}\left(y-c_{r, 2}t\right)\mathrm{d}y-d\ \Phi_{r,2}(x-c_{r,2}t)-\nu\Phi_{r,2}^{\prime}(x-c_{r,2}t)+f_{2} \left(\Phi_{r,2}(x-c_{r,2}t)\right)\] \[\leq \int_{-\infty}^{c_{r,2}t}J\left(x-y\right)\underline{U}\left(t,y \right)\mathrm{d}y-d\ \underline{U}(t,x)-\nu\underline{U}_{x}\left(t,x\right)+f\left( \underline{U}\left(t,x\right)\right).\] By Theorem 4.2, when the spreading happens, we have \(\lim_{t\to\infty}u(t,x)=1\) uniformly in any compact subset of \(\mathbb{R}\). Thus, there is \(T>0\) such that \[u(t,x)>1-\epsilon/2>\underline{U}(T,x),\ t\geq T.\] Moreover, \[c_{r,2}= \mu\int_{-\infty}^{0}\int_{0}^{\infty}J\left(x-y\right)\Phi_{r,2 }(x)\mathrm{d}y\mathrm{d}x \tag{5.18}\] \[= \mu\int_{-\infty}^{c_{r,2}t}\int_{c_{r,2}t}^{\infty}J\left(x-y \right)\Phi_{r,2}(x-c_{r,2}t)\mathrm{d}y\mathrm{d}x\] \[= \mu\int_{-\infty}^{c_{r,2}t}\int_{c_{r,2}t}^{\infty}J\left(x-y \right)\underline{U}(t,x)\mathrm{d}y\mathrm{d}x.\] Then by comparison principle, it follows \[\underline{U}(t,x)\leq u\left(t+T,x\right),\ c_{r,2}t\leq h(t+T),\text{ for }t>0,\ x\in\left(-\infty,c_{r,2}t\right].\] Thus, \[\liminf_{t\to\infty}\frac{h(t)}{t}\geq c_{r,2}. 
\tag{5.19}\] **Lemma 5.3**.: \[\limsup_{t\to\infty}\frac{h(t)}{t}\leq c_{r,1}.\] Proof.: For the following problem \[\begin{cases}\tilde{u}^{\prime}(t)=f(\tilde{u}),&t>0,\\ \tilde{u}(0)=\|u_{0}\|_{\infty},\end{cases}\] we can get \(u\leq\tilde{u}\), which implies for any \(\epsilon>0\), there is \(\tilde{T}>0\) such that \[u(t,x)\leq 1+\epsilon/2,\text{ for }t\geq\tilde{T},\ x\in\left(-\infty,h(t) \right].\] In view that \(\left(c_{r,1},\Phi_{r,1}(x)\right)\) is a solution of problem (5.12) with \(f\) replaced by \(f_{1}\) and \(\Phi_{r,1}(-\infty)=1+\epsilon\). Hence there exists \(\tilde{x}>h(\tilde{T})\) large enough such that \[u(\tilde{T},x)\leq 1+\epsilon/2<\Phi_{r,1}\left(x-\tilde{x}\right),\ \text{ for }x\in(-\infty,h(\tilde{T})].\] Let \[\overline{U}(t,x):=\Phi_{r,1}\left(x-c_{r,1}t-\tilde{x}\right),\ \overline{h}(t)=c_{r,1}t+\tilde{x},\ \text{ for }t>0,\ x\in(-\infty,c_{r,1}t+\tilde{x}]\,,\] then we have \[\overline{U}_{t}= -c_{r,1}\Phi_{r,1}^{\prime}\left(x-c_{r,1}t-\tilde{x}\right)\] \[= \int_{-\infty}^{0}J\left(x-c_{r,1}t-\tilde{x}-y\right)\Phi_{r,1}( y)\mathrm{d}y-d\ \Phi_{r,1}(x-c_{r,1}t-\tilde{x})-\nu\Phi_{r,1}^{\prime}(x-c_{r,1}t-\tilde{x})+f _{1}\left(\Phi_{r,1}(x-c_{r,1}t)\right)\] \[= \int_{-\infty}^{c_{r,1}t+\tilde{x}}J\left(x-y\right)\Phi_{r,1} \left(y-c_{r,1}t-\tilde{x}\right)\mathrm{d}y-d\ \Phi_{r,1}(x-c_{r,1}t-\tilde{x})-\nu\Phi_{r,1}^{\prime} \left(x-c_{r,1}t-\tilde{x}\right)+f_{1}\left(\Phi_{r,1}\left(x-c_{r,1}t-\tilde {x}\right)\right)\] \[\geq \int_{-\infty}^{c_{r,1}t+\tilde{x}}J\left(x-y\right)\overline{U} \left(t,y\right)\mathrm{d}y-d\ \overline{U}(t,x)-\nu\overline{U}_{x}(t,x)+f(\overline{U}(t,x)). \tag{5.20}\] Moreover, \[\overline{h}^{\prime}(t)=c_{r,1}= \mu\int_{-\infty}^{0}\int_{0}^{\infty}\ J\left(x-y\right)\ \Phi_{r,1}\left(x\right)\mathrm{d}y\mathrm{d}x \tag{5.21}\] \[= \mu\int_{-\infty}^{c_{r,1}t+\tilde{x}}\ \int_{c_{r,1}t+\tilde{x}}^{ \infty}\ J\left(x-y\right)\ \Phi_{r,1}\left(x-c_{r,1}t-\tilde{x}\right)\mathrm{d}y\mathrm{d}x\] \[= \mu\int_{-\infty}^{c_{r,1}t+\tilde{x}}\ \int_{c_{r,1}t+\tilde{x}}^{ \infty}\ J\left(x-y\right)\ \overline{U}\left(t,x\right)\mathrm{d}y\mathrm{d}x.\] Thus, applying the comparison principle, we have \[u(t+\tilde{T},x)\leq\overline{U}\left(t,x\right),\ h(t+\tilde{T})\leq c_{r,1 }t+\tilde{x},\] for \(t>0\) and \(x\in(-\infty,h(t+\tilde{T})]\). It yields \[\limsup_{t\to\infty}\frac{h(t)}{t}\leq c_{r,1}. 
\tag{5.22}\]

**Theorem 5.5**.: _Since \(\epsilon>0\) is arbitrarily chosen and small enough, combining (5.19) and (5.22), it follows that_ \[\lim_{t\to\infty}\frac{h(t)}{t}=c_{r}^{*}.\]

**Lemma 5.4**.: \[\liminf_{t\to\infty}\frac{-g(t)}{t}\geq c_{l,2}.\]

Proof.: Let \[\underline{V}(t,x)=\Phi_{l,2}(-x-c_{l,2}t),\ \underline{g}(t)=-c_{l,2}t,\ \ \text{for}\ t>0,\ x\in[-c_{l,2}t,\infty),\] then \(\underline{V}(t,x)\leq 1-\epsilon.\) By (5.13), explicit calculations give \[\begin{split}\underline{V}_{t}=&-c_{l,2}\Phi_{l,2}^{\prime}\left(-x-c_{l,2}t\right)\\ =&d\int_{-\infty}^{0}J\left(-x-c_{l,2}t-y\right)\ \Phi_{l,2}\left(y\right)\mathrm{d}y-d\ \Phi_{l,2}\left(-x-c_{l,2}t\right)+\nu\ \Phi_{l,2}^{\prime}\left(-x-c_{l,2}t\right)+f_{2}\left(\Phi_{l,2}\left(-x-c_{l,2}t\right)\right)\\ =&d\int_{-c_{l,2}t}^{\infty}J\left(x-y\right)\ \Phi_{l,2}\left(-y-c_{l,2}t\right)\mathrm{d}y-d\ \Phi_{l,2}\left(-x-c_{l,2}t\right)+\nu\ \Phi_{l,2}^{\prime}\left(-x-c_{l,2}t\right)+f_{2}\left(\Phi_{l,2}\left(-x-c_{l,2}t\right)\right)\\ \leq&d\int_{-c_{l,2}t}^{\infty}J\left(x-y\right)\ \underline{V}\left(t,y\right)\mathrm{d}y-d\ \underline{V}\left(t,x\right)-\nu\ \underline{V}_{x}\left(t,x\right)+f\left(\underline{V}\left(t,x\right)\right).\end{split} \tag{5.23}\] Since \(\lim\limits_{t\to\infty}u(t,x)=1\) uniformly in any compact subset of \(\mathbb{R}\) as the spreading occurs, there is \(\bar{T}>0\) such that \[u(t,x)>1-\epsilon/2>\underline{V}\left(\bar{T},x\right),\ t\geq\bar{T}.\] Moreover, \[\begin{split}-\underline{g}^{\prime}(t)=c_{l,2}=&\mu\int_{-\infty}^{0}\int_{0}^{\infty}\ J\left(x-y\right)\ \Phi_{l,2}\left(x\right)\mathrm{d}y\mathrm{d}x\\ =&\mu\int_{-c_{l,2}t}^{\infty}\int_{-\infty}^{-c_{l,2}t}\ J\left(x-y\right)\ \Phi_{l,2}\left(-x-c_{l,2}t\right)\mathrm{d}y\mathrm{d}x\\ =&\mu\int_{-c_{l,2}t}^{\infty}\int_{-\infty}^{-c_{l,2}t}\ J\left(x-y\right)\ \underline{V}\left(t,x\right)\mathrm{d}y\mathrm{d}x.\end{split} \tag{5.24}\] Then by the comparison principle, it follows that \[\underline{V}(t,x)\leq u(t+\bar{T},x),\ -c_{l,2}t\geq g(t+\bar{T}),\ \text{for}\ t>0,\ x\in[-c_{l,2}t,\infty).\] Thus, \[\liminf\limits_{t\to\infty}\frac{-g(t)}{t}\geq c_{l,2}. \tag{5.25}\]

**Lemma 5.5**.: \[\limsup\limits_{t\to\infty}\frac{-g(t)}{t}\leq c_{l,1}.\]

Proof.: For the following problem \[\begin{cases}\hat{u}^{\prime}(t)=f(\hat{u}),\qquad t>0,\\ \hat{u}(0)=\|u_{0}\|_{\infty},\end{cases} \tag{5.26}\] we have \(u\leq\hat{u}\), which implies that for any \(\epsilon>0\), there is \(\hat{T}>0\) such that \[u(t,x)\leq 1+\epsilon/2,\ \text{for}\ t\geq\hat{T},\ x\in[g(t),\infty).\] Recall that \((c_{l,1},\Phi_{l,1}(x))\) is a solution of problem (5.13) with \(f\) replaced by \(f_{1}\) and \(\Phi_{l,1}(-\infty)=1+\epsilon\).
Hence there exists \(\hat{x}>-g(\hat{T})\) large enough such that \[u(\hat{T},x)\leq 1+\epsilon/2<\Phi_{l,1}(-x-\hat{x}),\ \text{ for}\ x\in[g(\hat{T}),\infty).\] Let \[\overline{V}(t,x):=\Phi_{l,1}\left(-x-c_{l,1}t-\hat{x}\right),\ \overline{g}(t)=-c_{l,1}t-\hat{x},\ \ \text{for}\ t>0,\ x\in[-c_{l,1}t-\hat{x},\infty)\,,\] then we have \[\overline{V}_{t}= -c_{l,1}\Phi_{l,1}^{\prime}\left(-x-c_{l,1}t-\hat{x}\right) \tag{5.27}\] \[= d\int_{-\infty}^{0}J\left(-x-c_{l,1}t-\hat{x}-y\right)\Phi_{l,1}(y)\mathrm{d}y-d\ \Phi_{l,1}\left(-x-c_{l,1}t-\hat{x}\right)+\nu\Phi_{l,1}^{\prime}\left(-x-c_{l,1}t-\hat{x}\right)+f_{1}\left(\Phi_{l,1}(-x-c_{l,1}t-\hat{x})\right)\] \[= d\int_{-c_{l,1}t-\hat{x}}^{\infty}J\left(x-y\right)\Phi_{l,1}\left(-y-c_{l,1}t-\hat{x}\right)\mathrm{d}y-d\ \Phi_{l,1}\left(-x-c_{l,1}t-\hat{x}\right)+\nu\Phi_{l,1}^{\prime}\left(-x-c_{l,1}t-\hat{x}\right)+f_{1}(\Phi_{l,1}\left(-x-c_{l,1}t-\hat{x}\right))\] \[\geq d\int_{-c_{l,1}t-\hat{x}}^{\infty}J\left(x-y\right)\overline{V}\left(t,y\right)\mathrm{d}y-d\ \overline{V}(t,x)-\nu\overline{V}_{x}\left(t,x\right)+f\left(\overline{V}(t,x)\right).\] Moreover, \[-\overline{g}^{\prime}(t)=c_{l,1}= \mu\int_{-\infty}^{0}\int_{0}^{\infty}J\left(x-y\right)\Phi_{l,1}(x)\mathrm{d}y\mathrm{d}x \tag{5.28}\] \[= \mu\int_{-c_{l,1}t-\hat{x}}^{\infty}\int_{-\infty}^{-c_{l,1}t-\hat{x}}J\left(x-y\right)\overline{V}(t,x)\mathrm{d}y\mathrm{d}x.\] Thus, applying the comparison principle, we have \[u(t+\hat{T},x)\leq\overline{V}(t,x),\ g(t+\hat{T})\geq-c_{l,1}t-\hat{x},\] for \(t>0\) and \(x\in[g(t+\hat{T}),\infty)\). This yields \[\limsup_{t\to\infty}\frac{-g(t)}{t}\leq c_{l,1}. \tag{5.29}\]

**Theorem 5.6**.: _Since \(\epsilon>0\) is arbitrarily chosen and small enough, combining (5.25) with (5.29), we have_ \[\lim_{t\to\infty}\frac{-g(t)}{t}=c_{l}^{*}.\]

With the above results in hand, we can now complete the proof of Theorem 5.1.

Proof of Theorem 5.1.: According to Proposition 5.3, Theorems 5.5 and 5.6, it suffices to prove \[0<c_{l}^{*}<c^{*}<c_{r}^{*}.\] For the problem \[\left\{\begin{array}{l}d\int_{-\infty}^{0}J\left(x-y\right)\Phi(y)\mathrm{d}y-d\ \Phi+c\ \Phi^{\prime}(x)+f(\Phi(x))=0,\ -\infty<x<0,\\ \Phi(-\infty)=1,\ \Phi(0)=0,\end{array}\right. \tag{5.30}\] as stated in Theorem 2.7 of [31], there is a \(\tilde{c}>0\) such that problem (5.30) admits a solution \((c,\Phi^{c}(x))\) for any \(c\in(0,\tilde{c})\), with \(\Phi^{c}(x)\) nonincreasing in \(c\) for fixed \(x\in(-\infty,0]\), and \[\lim_{c\to\tilde{c}^{-}}\Phi^{c}(x)=0\ \text{locally uniformly in}\ x\in(-\infty,0]. \tag{5.31}\] Further, for any \(\mu>0\), there exists a unique \(c^{*}=c^{*}(\mu)\in(0,\tilde{c})\) such that \[c^{*}=\mu\int_{-\infty}^{0}\int_{0}^{\infty}J\left(x-y\right)\Phi^{c^{*}}(x)\mathrm{d}y\mathrm{d}x. \tag{5.32}\] In fact, when \(\left(\mathbf{J}\right)\), \(\left(\mathbf{J}_{*}\right)\) and \(\left(\mathbf{f4}\right)\) hold, \(c^{*}\) is the asymptotic spreading speed of the rightward front and the leftward front for (1.2) without the advection term \((\nu=0)\). According to Lemma 5.1, for any \(c\in(0,\tilde{c}_{r})\), let \((c,\Phi_{r}^{c})\) be the solution of (5.6). Similarly, \((c,\Phi^{c})\) and \((c,\Phi_{l}^{c})\) satisfy the corresponding equations, where \(c\in(0,\tilde{c})\) and \(c\in(0,\tilde{c}_{l})\), respectively.
Denote \[\mathscr{F}_{l}(c) :=\int_{-\infty}^{0}\int_{0}^{\infty}J\left(x-y\right)\Phi_{l}^{c}(x)\mathrm{d}y\mathrm{d}x,\,\text{for }c\in(0,\tilde{c}_{l}),\] \[\mathscr{F}(c) :=\int_{-\infty}^{0}\int_{0}^{\infty}J\left(x-y\right)\Phi^{c}(x)\mathrm{d}y\mathrm{d}x,\,\text{for }c\in(0,\tilde{c}),\] \[\mathscr{F}_{r}(c) :=\int_{-\infty}^{0}\int_{0}^{\infty}J\left(x-y\right)\Phi_{r}^{c}(x)\mathrm{d}y\mathrm{d}x,\,\text{for }c\in(0,\tilde{c}_{r}).\] Given (5.8) and (5.31), applying the Lebesgue dominated convergence theorem, we have the following facts: \[\begin{split}&(1)\,\lim_{c\to\tilde{c}_{l}^{-}}\mathscr{F}_{l}\left(c\right)=0,\,\,\,\lim_{c\to\tilde{c}^{-}}\mathscr{F}(c)=0,\,\,\,\lim_{c\to\tilde{c}_{r}^{-}}\mathscr{F}_{r}(c)=0;\\ &(2)\,\,\mathscr{F}_{l}(c-\nu)=\mathscr{F}(c)=\mathscr{F}_{r}(c+\nu),\,\,\text{for }c\in(\nu,\tilde{c});\\ &(3)\,\,\mathscr{F}_{l}(c)\,\,(\text{resp. }\mathscr{F}(c),\,\mathscr{F}_{r}(c))\,\,\text{is continuous and strictly decreasing in }c\in(0,\tilde{c}_{l})\,\,(\text{resp. }c\in(0,\tilde{c}),\,\,c\in(0,\tilde{c}_{r}));\\ &(4)\,\,\mathscr{F}_{l}(c)<\mathscr{F}(c)\,\,\text{for }c\in(0,\tilde{c}_{l})\,\,\text{and}\,\,\mathscr{F}(c)<\mathscr{F}_{r}(c)\,\,\text{for }c\in[\nu,\tilde{c}).\end{split}\] Therefore, the curve \(\mathscr{F}_{l}(c)\) (resp. \(\mathscr{F}(c)\) and \(\mathscr{F}_{r}(c)\)) intersects the line \(\mathscr{F}=\frac{c}{\mu}\) at the point \((c_{l}^{*},\mathscr{F}_{l}(c_{l}^{*}))\) (resp. \((c^{*},\mathscr{F}(c^{*}))\) and \((c_{r}^{*},\mathscr{F}_{r}(c_{r}^{*}))\)), with \(c_{l}^{*}<c^{*}<c_{r}^{*}\). For fixed \(\mu\), according to the above analysis, it can be easily seen that \(c_{r}^{*}\) is strictly increasing in \(\nu\) and \(c_{l}^{*}\) is strictly decreasing in \(\nu\), and \[\lim_{\nu\to 0}c_{r}^{*}=\lim_{\nu\to 0}c_{l}^{*}=c^{*}.\] Thus, Theorem 5.1 is proved.

**Remark 5.1**.: _According to Theorem 5.1, the double free boundaries of problem (1.2) move at different finite speeds as \(t\to\infty\) under the effect of the advection, in contrast with the non-advection case, in which the leftward front and the rightward front spread at the same speed \(c^{*}\)._

In fact, let \((u,g,h)\) be the solution of problem (1.2); when the spreading happens, we can further show

**Proposition 5.4**.: \[c_{l}^{*}\geq c^{*}-\nu\,\,\text{and}\,\,c_{r}^{*}\leq c^{*}+\nu.\]

Proof.: Set \[\overline{h}(t)=\left(c^{*}+\nu\right)t+L,\,\,\overline{U}(t,x)=(1+\epsilon)\Phi^{c^{*}}\left(x-\overline{h}(t)\right),\] where \(\left(c^{*},\Phi^{c^{*}}(x)\right)\) denotes the solution pair of (5.30) with (5.32), and \(L\), \(\epsilon\) will be chosen later.
In view of (4.13), \(\limsup_{t\to\infty}U(t,x)\leq 1\), there exists \(T\gg 1\) such that \[U(t+T,x)\leq 1+\epsilon/3,\,\,\text{for}\,\,t\geq 0,\,\,x\in\left[g\left(t+T \right),h\left(t+T\right)\right].\] Take \(L\) large enough such that \(\overline{h}(0)=L>h(T)\), and \[\overline{U}(0,x)=(1+\epsilon)\Phi^{c^{*}}\left(x-\overline{h}(0)\right)=(1+ \epsilon)\Phi^{c^{*}}\left(x-L\right)>(1+\epsilon)\left(1-\epsilon/3\right)>U(T,x ),\ x\in\left[g\left(T\right),h\left(T\right)\right].\] Next, we prove \[\overline{U}_{t}\geq d\int_{g(t+T)}^{\overline{h}(t)}J\left(x-y\right) \overline{U}\left(t,y\right)\mathrm{d}y-d\ \overline{U}(t,x)-\nu\overline{U}_{x}+f\left(\overline{U}\left(t,x\right) \right),\] for \(t>0\) and \(x\in\left[g\left(t+T\right),\overline{h}\left(t\right)\right].\) In view of (**f4**), it follows \[f\left(\overline{U}(t,x)\right)=f\left((1+\epsilon)\Phi^{c^{*}}\left(x- \overline{h}(t)\right)\right)\leq(1+\epsilon)f\left(\Phi^{c^{*}}\left(x- \overline{h}(t)\right)\right).\] Since \(\Phi(x)\) is nonincreasing, then direct calculations give \[\overline{U}_{t}= -\left(1+\epsilon\right)\Phi^{c^{*}\prime}\left(x-\overline{h}( t)\right)\left(c^{*}+\nu\right) \tag{5.33}\] \[= (1+\epsilon)\left[d\int_{-\infty}^{\overline{h}(t)}J\left(x-y \right)\Phi^{c^{*}}\left(y-\overline{h}(t)\right)\mathrm{d}y-d\ \Phi^{c^{*}}(x-\overline{h}(t))+f\left(\Phi^{c^{*}}(x- \overline{h}(t))\right)\right]\] \[- (1+\epsilon)\nu\Phi^{c^{*}\prime}\left(x-\overline{h}(t)\right)\] \[\geq d\int_{-\infty}^{\overline{h}(t)}J\left(x-y\right)\overline{U} \left(t,y\right)\mathrm{d}y-d\ \overline{U}(t,x)-\nu\overline{U}_{x}(t,x)+f\left(\overline{U}(t,x)\right)\] \[\geq d\int_{g(t+T)}^{\overline{h}(t)}J\left(x-y\right)\overline{U}(t, y)\mathrm{d}y-d\ \overline{U}\left(t,x\right)-\nu\overline{U}_{x}(t,x)+f\left(\overline{U}(t,x) \right).\] And if we take \(\epsilon<\nu/c^{*}\), then \[\overline{h}^{\prime}(t)= c^{*}+\nu>c^{*}(1+\epsilon) \tag{5.34}\] \[= \mu(1+\epsilon)\int_{-\infty}^{0}\int_{0}^{\infty}J\left(x-y \right)\Phi^{c^{*}}(x)\mathrm{d}y\mathrm{d}x\] \[\geq \mu\int_{-\infty}^{\overline{h}(t)}\int_{\overline{h}(t)}^{\infty }J\left(x-y\right)\overline{U}(t,x)\mathrm{d}y\mathrm{d}x\] \[\geq \mu\int_{g(t+T)}^{\overline{h}(t)}\int_{\overline{h}(t)}^{\infty }J\left(x-y\right)\overline{U}(t,x)\mathrm{d}y\mathrm{d}x.\] Next, we aim to prove \[\overline{h}(t)>h(t+T),\ \overline{U}\left(t,x\right)>U\left(t+T,x\right),\ \text{for}\ t>0,\ x\in\left[g\left(t+T\right),h\left(t+T\right)\right]. \tag{5.35}\] Suppose that the above inequalities do not hold for all \(t>0\), there exists a first time \(t^{*}>0\) such that \(\overline{h}\left(t^{*}\right)=h\left(t^{*}+T\right),\) or \(\overline{h}(t^{*})>h(t+t^{*})\) but \(\overline{U}\left(t^{*},x^{*}\right)=U\left(t^{*}+T,x^{*}\right),\) for some \(x^{*}\in\left[g\left(t^{*}+T\right),h\left(t^{*}+T\right)\right].\) If \(\overline{h}\left(t^{*}\right)=h\left(t^{*}+T\right),\) then \[\overline{h}^{\prime}\left(t^{*}\right)\leq h^{\prime}\left(t^{*}+T\right). 
\tag{5.36}\] Considering that \(\overline{U}\left(t^{*},x\right)\geq U\left(t^{*}+T,x\right),\ x\in\left[g\left(t^{ *}+T\right),h\left(t^{*}+T\right)\right],\) we can get that \[\begin{split}\overline{h}^{\prime}\left(t^{*}\right)&> \mu\int_{g\left(t^{*}+T\right)}^{\overline{h}\left(t^{*}\right)}\int_{ \overline{h}\left(t^{*}\right)}^{\infty}J\left(x-y\right)\overline{U}\left(t^ {*},x\right)\mathrm{d}y\mathrm{d}x\\ &=\mu\int_{g\left(t^{*}+T\right)}^{h\left(t^{*}+T\right)}\int_{h \left(t^{*}+T\right)}^{\infty}J\left(x-y\right)\overline{U}\left(t^{*},x\right) \mathrm{d}y\mathrm{d}x\\ &\geq\mu\int_{g\left(t^{*}+T\right)}^{h\left(t^{*}+T\right)}\int_ {h\left(t^{*}+T\right)}^{\infty}J\left(x-y\right)U\left(t^{*}+T,x\right) \mathrm{d}y\mathrm{d}x\\ &=h^{\prime}\left(t^{*}+T\right),\end{split} \tag{5.37}\] which yields a contradiction to (5.36). If \(\overline{h}(t^{*})>h(t+t^{*})\) and \[\overline{U}\left(t^{*},x^{*}\right)=U\left(t^{*}+T,x^{*}\right), \tag{5.38}\] since \(\overline{U}(t,x)>0,\ x=g\left(t+T\right)\) or \(x=h\left(t+T\right),\ t\in\left(0,t^{*}\right]\), and \(\overline{U}\left(0,x\right)>U\left(T,x\right),\ x\in\left[g\left(T\right),h \left(T\right)\right]\), by comparison principle, we can obtain that \[\overline{U}\left(t^{*},x\right)>U\left(t^{*}+T,x\right),\ x\in\left[g\left(t +T\right),h\left(t+T\right)\right], \tag{5.39}\] which contradicts (5.38). Thus, \[c_{r}^{*}=\limsup_{t\to\infty}\frac{h(t)}{t}\leq\lim_{t\to\infty}\frac{ \overline{h}(t-T)}{t}=c^{*}+\nu.\] The case about \(c_{l}^{*}\geq c^{*}-\nu\) can be easily proved by constructing the corresponding lower solution. _The proof of Theorem 5.2_.: If \(J(x)\) does not satisfy \(\left(\mathbf{J}_{*}\right)\), according to the Theorem 5.1, based on the Section 3.3 in [39], we can complete the proof of this theorem by some subtle constructions. Now we only provide several important sketches. Firstly, construct a series of cut-off functions \(J_{n}(x)\) which satisfies the condition \(\left(\mathbf{J}_{*}\right)\). choose a nonnegative, even function sequence \(\left\{J_{n}\right\}\) such that each \(J_{n}(x)\in C^{1}\) has nonempty compact support, and \[J_{n}(x)\leq J_{n+1}(x)\leq J(x),\ \text{and}\ J_{n}(x)\to J(x),\ \text{in}\ L^{1}( \mathbb{R})\ \text{as}\ n\to\infty. \tag{5.40}\] where \(J_{n}(x)=J(x)\chi_{n}(x)\) and \(\left\{\chi_{n}\right\}\) is a properly smooth cut-off function sequences such that \(J_{n}(x)\) satisfies \(\left(\mathbf{J}_{*}\right)\). Next, rewrite the problem (1.2) with \(J(x)\) replaced by \(J_{n}(x)\), and we can get the spreading speed \(c_{n}\) of the corresponding semi-wave problem. Similar to prove Lemmas 5.2 and 5.4, we can show that \[\liminf_{t\to\infty}\frac{h_{n}(t)}{t}\geq c_{r,2}^{n},\ \text{and}\ \liminf_{t\to\infty}\frac{g_{n}(t)}{t}\geq c_{l,2}^{n}.\] Where \(c_{r,2}^{n}\) and \(c_{l,2}^{n}\) satisfy the equations (5.12) and (5.13) with \(J\) replaced by \(J_{n}\) and \(f\) replaced by \(f_{2}\), respectively. In view that \(J(x)\) does not satisfy \(\left(\mathbf{J}_{*}\right)\), it can be proved that \(\lim_{n\to\infty}c_{r,2}^{n}=\infty\) and \(\lim_{n\to\infty}c_{l,2}^{n}=\infty\) by contradiction. Here, we omit the detailed steps. **Remark 5.2**.: _According to Theorems 5.1 and 5.2, as the spreading occurs, the assumption \(\left(\mathbf{J}_{*}\right)\) is a threshold condition to determine whether the spreading speed is finite or not._ ## Acknowledgements This work is supported by the China Postdoctoral Science Foundation (No. 
2022M710426) and the Postdoctoral Science Foundation Project of Beijing Normal University at Zhuhai.
In this paper, we investigate a Fisher-KPP nonlocal diffusion model incorporating advection and free boundaries, aiming to analyze the dynamics of the model and the influence of advection on propagation. Taking the advection into account, we obtain the global existence, uniqueness and regularity of the solution. We introduce the principal eigenvalue of the nonlocal operator associated with the diffusion term and discuss the asymptotic behavior of the solution as time evolves. Furthermore, we give several sufficient conditions for spreading and vanishing and derive a spreading-vanishing dichotomy. Finally, by means of semi-wave solutions, we explicitly describe the finite asymptotic spreading speeds of the double free boundaries under the effects of nonlocal diffusion and advection.
2308.16901
Eclectic flavor group $Δ(27)\rtimes S_3$ and lepton model building
We have performed a systematical study of the eclectic flavor group $\Delta(27)\rtimes S_3$ which is the extension of the traditional flavor symmetry $\Delta(27)$ by the finite modular symmetry $S_3$. Consistency between $\Delta(27)$ and $S_3$ requires that the eight nontrivial singlet representations of $\Delta(27)$ should be arranged into four reducible doublets. The modular transformation matrices are determined for various $\Delta(27)$ multiplets, and the CP-like symmetry compatible with $\Delta(27)\rtimes S_3$ are discussed. We study the general form of the K\"ahler potential and superpotential invariant under $\Delta(27)\rtimes S_3$, and the corresponding fermion mass matrices are presented. We propose a bottom-up model for lepton masses and mixing based on $\Delta(27)\rtimes S_{3}$, a numerical analysis is performed and the experimental data can be accommodated.
Cai-Chang Li, Gui-Jun Ding
2023-08-31T17:58:30
http://arxiv.org/abs/2308.16901v2
# **Eclectic flavor group \(\Delta(27)\rtimes S_{3}\) and lepton model building** ###### Abstract We have performed a systematical study of the eclectic flavor group \(\Delta(27)\rtimes S_{3}\) which is the extension of the traditional flavor symmetry \(\Delta(27)\) by the modular symmetry group \(S_{3}\). Consistency between \(\Delta(27)\) and \(S_{3}\) requires that the eight nontrivial singlet representations of \(\Delta(27)\) should be arranged into four reducible doublets. The modular transformation matrices are determined for various \(\Delta(27)\) multiplets, and the generalized CP symmetry compatible with \(\Delta(27)\rtimes S_{3}\) are discussed. We study the general form of the Kahler potential and superpotential invariant under \(\Delta(27)\rtimes S_{3}\), and the corresponding fermion mass matrices are presented. We propose a bottom-up model for lepton masses and mixing based on \(\Delta(27)\rtimes S_{3}\), a numerical analysis is performed and the experimental data can be accommodated. Introduction The standard model (SM) of particle physics based on gauge symmetry gives an excellent description of interactions between the fundamental fermions (quarks and leptons), and it has been precisely tested by a lot of experiments up to TeV scale. The masses of quarks and charged leptons arise from the Yukawa interactions between the charged fermions and Higgs boson in SM, the intergenerational interaction strength is not subject to the constraint of gauge symmetry. Hence the SM can qualitatively explain the fermion masses and flavor mixing but cannot make any precise prediction for their values. The origin of fermion mass hierarchy and flavor mixing is one of most fascinating mysteries of particle physics. The discovery of neutrino oscillation provides new clue to understand this puzzle. Many years of neutrino oscillation experiments have established that neutrinos have tiny masses. The two neutrino mass squared differences and the three lepton mixing angles have been measured with the accuracy of percent level. The mixing pattern of lepton sector is drastically different from that of quark sector. All the three quark mixing angles are small, while the solar and atmospheric neutrino mixing angles are large and the magnitude of the reactor neutrino mixing angle is similar to that of the quark Cabibbo angle [1]. Flavor symmetry acting on the three generations of fermions has been extensively studied to address the pattern of fermion mass hierarchy and mixing angles. In particular, the non-Abelian discrete flavor symmetry could help to naturally explain the large lepton mixing angles [2, 3, 4, 5]. From the top-down perspective, the superstring theory is a promising framework of unifying all four fundamental interactions. The consistency of the theory requires six-dimensional extra compact space besides our lived four-dimensional spacetime. It is remarkable that the non-Abelian discrete flavor symmetry such as \(D_{4}\) and \(\Delta(54)\) can arise in certain compactification scheme [6, 7, 8]. Moreover, the string duality transformations generate the modular symmetry. The matter fields transform nontrivially under the modular symmetry, consequently the modular symmetry could constrain the flavor structure of quarks and leptons and it enforces the Yukawa couplings to be modular forms [9]. 
In bottom-up models with modular symmetry alone, the finite modular groups \(\Gamma_{N}\) and \(\Gamma^{\prime}_{N}\) (\(N=2,3,4,5,6,7\)) play the role of flavor symmetry, and more generally the finite modular groups can be expressed as the quotient groups of \(SL(2,\mathbb{Z})\) over its normal subgroups with finite index [10, 11]. In the minimal scenario, the complex modulus \(\tau\) is the unique source of modular symmetry breaking. One can construct quite predictive models with modular symmetry, and the masses and mixing angles of quarks and leptons can be described in terms of a few free parameters. It is remarkable that the modular symmetry models exhibit a universal behavior in the vicinity of fixed points [12, 13], independently from details of the models such as the modular weights and representation assignments of matter field under the finite modular groups. See [14] and references therein for various aspects of modular flavor symmetry. Usually only the minimal Kahler potential is adopted in concrete modular models. However, in principle the Kahler potential has many terms compatible with modular symmetry, and they could leads to reduction of the predictability [15, 16]. How to control the Kahler potential is an open question of the modular flavor symmetry approach. As explained in previous paragraph, the top-down constructions motivated by string theory generally gives rise to both modular symmetry and traditional flavor symmetry. This leads to the idea of eclectic flavor group (EFG) which combines the traditional flavor symmetry with modular symmetry [17, 18, 19, 20]. The traditional flavor symmetry and modular symmetry are distinguished by their action on the modulus \(\tau\). The interplay of flavor symmetry and modular symmetry can strongly restrict both the Kahler potential and superpotential. The consistency of the theory implies that the mathematica structure of EFG is a semi-direct product \(G_{f}\rtimes\Gamma_{N}\) (or \(G_{f}\rtimes\Gamma^{\prime}_{N}\)) of the traditional flavor group \(G_{f}\) and the finite modular group \(\Gamma_{N}\) (or \(\Gamma_{N}^{\prime}\)), and each the modular transformation corresponds to an automorphism of traditional flavor symmetry group. In the simplest case that every modular transformation is the trivial identity automorphism of traditional flavor group, the flavor symmetry transformations and modular transformations would be commutable. Then the EFG would reduce to the direct product \(G_{f}\times\Gamma_{N}\) (or \(G_{f}\times\Gamma_{N}^{\prime}\)), it is the so-called quasi-eclectic flavor group [21], and one can freely choose both traditional flavor symmetry and finite modular group in this case. The EFG can be consistently combined with generalized CP (gCP) symmetry [17], and the corresponding gCP transformation has to be compatible with both \(G_{f}\) and \(\Gamma_{N}\) (\(\Gamma_{N}^{\prime}\)). The EFG should be broken to obtain realistic fermion masses and mixing angles, consequently both flavon fields and complex modulus \(\tau\) are required in EFG models and their vacuum expectation values (VEVs) spontaneously break \(G_{f}\) and \(\Gamma_{N}\) (\(\Gamma_{N}^{\prime}\)) respectively. EFG is an interesting approach to control the Kahler potential. The orbifold \(\mathbb{T}^{2}/\mathbb{Z}_{3}\) can give rise to the EFG group \(\Omega(1)\cong[648,533]\) which is the combination of \(\Delta(54)\) flavor symmetry and \(T^{\prime}\) modular symmetry [18]. 
Based on the EFG \(\Omega(2)\cong[1944,3448]\) consisting of the traditional flavor group \(\Delta(54)\), the finite modular group \(T^{\prime}\) and a \(\mathbb{Z}_{9}^{R}\)\(R\)-symmetry, the first string-derived model was constructed [22]. The interplay of flavon alignment and the modulus in the vicinity of modular fixed point leads to naturally protected fermion mass hierarchies, and the flavor observables of both quarks and leptons are reproduced [22]. Furthermore, two typical bottom-up models for leptons are constructed with the EFG \(\Omega(1)\cong\Delta(27)\rtimes T^{\prime}\cong[648,533]\)[23]. The experimental data of lepton masses and mixing parameters can be successfully described in terms of five real parameters in the case of gCP symmetry and \(\Re\tau=0\) being imposed, and the \(\mu-\tau\) reflection symmetry is reproduced exactly. In the present work, we shall study the EFG \(\Delta(27)\rtimes S_{3}\cong[162,46]\) in a bottom-up way. This EFG is an extension of the traditional flavor group \(\Delta(27)\) by the finite modular group \(S_{3}\) which is a subgroup of the automorphism group of \(\Delta(27)\). The modular transformations \(S\) and \(T\) which are generators of \(S_{3}\) correspond to the outer automorphisms of \(\Delta(27)\), while the modular transformations \(ST\) and \(TS\) correspond to inner automorphisms. In order to consistently combine the modular symmetry \(S_{3}\) with traditional flavor symmetry \(\Delta(27)\), the eight nontrivial singlet representations of \(\Delta(27)\) should be arranged into four doublets \(\mathbf{2_{\textit{i}}}\) (\(i=1,2,3,4\)). Considering the modular symmetry further, we find that twelve irreducible two-dimensional representations \(\mathbf{2_{\textit{i,m}}}\) (\(m=0,1,2\)) of the EFG \(\Delta(27)\rtimes S_{3}\) can be induced from them. Moreover, the three-dimensional irreducible representations \(\mathbf{3}\) and \(\mathbf{\bar{3}}\) of \(\Delta(27)\) can be decomposed into a singlet plus a doublet of \(S_{3}\). The matter fields and flavon fields should be assigned to different multiplets of the EFG, and they have definite transformations under \(\Delta(27)\) and \(S_{3}\). The superpotential and Kahler potential are strongly constrained by \(\Delta(27)\) and \(S_{3}\). If the three generations of quark/lepton fields transform as \(\mathbf{3}\) or \(\mathbf{\bar{3}}\) under \(\Delta(27)\), the minimal Kahler potential is reproduced at leading order. For the singlet plus doublet assignment of matter fields under \(\Delta(27)\), the Kahler metric is a diagonal yet non-universal matrix at leading order. Consequently normalization of the kinetic terms are expected to give corrections to the flavor observables. Furthermore, we apply the above general results of EFG \(\Delta(27)\rtimes S_{3}\) to construct an example models of lepton masses and mixings. This paper is organized as follows. We recapitulate the approach of EFG in section 2, the consistency conditions between \(\Delta(27)\) flavor symmetry and \(S_{3}\) modular symmetry are analyzed, and we determine the modular transformation matrices of \(S\) and \(T\) for different representations of \(\Delta(27)\), and the gCP transformations compatible with \(\Delta(27)\rtimes S_{3}\) are fixed. We present the most general form of the Kahler potential and superpotential invariant under the EFG \(\Delta(27)\rtimes S_{3}\) in section 3. We give an example model for neutrino masses and mixing based on the EFG \(\Delta(27)\rtimes S_{3}\) in section 4. 
We draw the conclusion and make a summary in section 5. The group theory of \(\Delta(27)\) and \(S_{3}\) are presented in Appendix A and Appendix B respectively. The invariant contractions of two \(\Delta(27)\) doublets are given in Appendix C. ## 2 Eclectic flavor groups \(\Delta(27)\rtimes S_{3}\) The so-called eclectic flavor group is a nontrivial product of a traditional flavor group \(G_{f}\) and a finite modular group \(\Gamma_{N}\) (\(\Gamma_{N}^{\prime}\)), where the finite modular group \(\Gamma_{N}\) (\(\Gamma_{N}^{\prime}\)) is the quotient group of the modular group \(\Gamma\cong SL(2,\mathbb{Z})\) over \(\pm\Gamma(N)\) (\(\Gamma(N)\)), and \(\Gamma(N)\) is the principal congruence subgroup of level \(N\). The full modular group \(SL(2,\mathbb{Z})\) is the group of \(2\times 2\) matrices with integer coefficients and unit determinant, \[SL(2,\mathbb{Z})=\left\{\begin{pmatrix}a&b\\ c&d\end{pmatrix}\middle|ad-bc=1,a,b,c,d\in\mathbb{Z}\right\}\,, \tag{2.1}\] which can be generated by two generators \(S\) and \(T\) with \[S=\begin{pmatrix}0&1\\ -1&0\end{pmatrix},\qquad T=\begin{pmatrix}1&1\\ 0&1\end{pmatrix}\,. \tag{2.2}\] The two generators satisfy the multiplication rules \[S^{4}=(ST)^{3}=\mathbb{1}_{2},\hskip 14.226378ptS^{2}T=TS^{2}\,. \tag{2.3}\] where \(\mathbb{1}_{2}\) denotes \(2\times 2\) unit matrix. For a positive integer \(N\), the principal congruence subgroup of level \(N\) is defined as \[\Gamma(N)=\left\{\gamma\in SL(2,\mathbb{Z})\ \ \Big{|}\ \ \gamma\equiv \begin{pmatrix}1&0\\ 0&1\end{pmatrix}\mod N\right\}\,, \tag{2.4}\] which implies \(T^{N}\in\Gamma(N)\). For \(N\leq 5\), the multiplication rules of the finite modular group \(\Gamma_{N}\equiv\Gamma/(\pm\Gamma(N))\) and its double covering \(\Gamma_{N}^{\prime}\equiv\Gamma/\Gamma(N)\) are given by [9, 24] \[S^{N_{s}}=(ST)^{3}=T^{N}=1\,,\qquad S^{2}T=TS^{2}\,, \tag{2.5}\] with \(N_{s}=4\) for \(\Gamma_{N}^{\prime}\) and \(N_{s}=2\) for \(\Gamma_{N}\), and additional relations are necessary for level \(N\geq 6\)[25]. Under the action of a traditional flavor transformation \(g\) or a modular transformation \(\gamma\), the complex modulus \(\tau\) and a generic matter field multiplet \(\psi\) transform as follow [9, 24] \[\left\{\begin{array}{l}\tau\stackrel{{ g}}{{ \longrightarrow}}\tau,\quad\psi\stackrel{{ g}}{{\longrightarrow}} \rho(g)\psi,\hskip 14.226378ptg\in G_{f}\,,\\ \tau\stackrel{{\gamma}}{{\longrightarrow}}\gamma\tau\equiv\frac{a \tau+b}{c\tau+d},\quad\psi\stackrel{{\gamma}}{{\longrightarrow}}( c\tau+d)^{-k_{\psi}}\rho(\gamma)\psi,\quad\gamma=\begin{pmatrix}a&b\\ c&d\end{pmatrix}\in\Gamma\,,\end{array}\right. \tag{2.6}\] where \(k_{\psi}\) is the modular weight of the matter field multiplet \(\psi\), and \(\rho(g)\) and \(\rho(\gamma)\) are unitary representations of traditional flavor from \(G_{f}\) and the finite modular group \(\Gamma_{N}\) or \(\Gamma_{N}^{\prime}\), respectively. Notice that the flavor symmetry transformation leaves the modulus \(\tau\) invariant. The modular forms \(Y^{(k_{Y})}(\tau)\) of level \(N\) and weight \(k_{Y}\) can be arranged into multiplets of \(\Gamma_{N}\) (\(\Gamma_{N}^{\prime}\)) [9, 24]: \[Y^{(k_{Y})}(\tau)\stackrel{{\gamma}}{{\longrightarrow}}Y^{(k_{Y} )}\left(\gamma\tau\right)=\left(c\tau+d\right)^{k_{Y}}\rho_{Y}(\gamma)\,Y^{(k_ {Y})}(\tau)\,, \tag{2.7}\] where \(\rho_{Y}(\gamma)\) is a unitary representation of \(\Gamma_{N}\) (\(\Gamma_{N}^{\prime}\)). 
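The generator relations quoted above can be confirmed directly. The following minimal sketch (assuming only numpy; the matrices are those of Eq. (2.2)) checks the \(SL(2,\mathbb{Z})\) relations of Eq. (2.3) and, by reducing the entries modulo 2, the level-2 relations \(S^{2}=T^{2}=1\) that underlie \(\Gamma_{2}\cong S_{3}\) in Eq. (2.5):

```python
# Numerical check of the SL(2,Z) generator relations of Eqs. (2.2)-(2.3),
# S^4 = (ST)^3 = 1 and S^2 T = T S^2, and of the level-2 reduction behind Gamma_2 = S_3.
import numpy as np

S = np.array([[0, 1], [-1, 0]])
T = np.array([[1, 1], [0, 1]])
I2 = np.eye(2, dtype=int)

assert np.array_equal(np.linalg.matrix_power(S, 4), I2)          # S^4 = 1
assert np.array_equal(np.linalg.matrix_power(S @ T, 3), I2)      # (ST)^3 = 1
assert np.array_equal(S @ S @ T, T @ S @ S)                      # S^2 T = T S^2

# Reducing entries mod 2 identifies +-Gamma(2) with the identity, so that
# S^2 = T^2 = 1 hold in Gamma_2 = S_3 (cf. Eq. (2.5) with N = N_s = 2).
assert np.array_equal(np.linalg.matrix_power(S, 2) % 2, I2)
assert np.array_equal(np.linalg.matrix_power(T, 2) % 2, I2)
print("SL(2,Z) and Gamma_2 generator relations verified.")
```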
As the modular multiplet \(Y^{(k_{Y})}(\tau)\) is holomorphic functions of the complex \(\tau\) which is invariant under the action of traditional flavor transformation. Thus, \(Y^{(k_{Y})}(\tau)\) is invariant under the action of \(G_{f}\). In the scheme of eclectic flavor group, in order to consistently combine a finite modular group with a traditional flavor group, the following consistency condition has to be fulfilled [17, 23] \[\rho(\gamma)\rho(g)\rho^{-1}(\gamma)=\rho(u_{\gamma}(g))\,,\qquad\forall g\in G_{ f}\,, \tag{2.8}\] where \(\rho(\gamma)\) represents the automorphism \(u_{\gamma}:G_{f}\to G_{f}\). In the case that \(u_{\gamma}\) is the trivial identity automorphism with \(u_{\gamma}(g)=g\) for any \(\gamma\in\Gamma\) and any \(g\in G_{f}\), the modular transformation and flavor symmetry transformation would be commutable. This is the so-called quasi-eclectic flavor symmetry [21], and the Kahler potential is also constrained by the simultaneous presence of traditional flavor symmetry and modular symmetry. However, the modular symmetry and traditional flavor symmetry can be freely combined together in the quasi-eclectic flavor symmetry, and the resulting models are more complex than these models with either modular symmetry or flavor symmetry alone. Hence we shall be concerned with the case that \(u_{\gamma}\) is nontrivial at least for some modular transformation \(\gamma\). Then the mathematical structure of the traditional flavor group \(G_{f}\) and the finite modular group \(\Gamma_{N}\) (\(\Gamma_{N}^{\prime}\)) is a semidirect product \(G_{f}\rtimes\Gamma_{N}\) (\(G_{f}\rtimes\Gamma_{N}^{\prime}\)) [23]. It implies that the finite modular group \(\Gamma_{N}\) (\(\Gamma_{N}^{\prime}\)) must be a subgroup of the automorphism group of the traditional flavor group \(G_{f}\) and the traditional flavor group \(G_{f}\) is a normal subgroup of eclectic flavor group. In general, the automorphism \(u_{\gamma}\) can be outer or inner automorphism of \(G_{f}\) in the scheme of eclectic flavor symmetry. Here \(\rho(g)\) should be the direct sum of all irreducible representations related by the automorphism \(u_{\gamma}\), then one could determine the modular transformation \(\rho(\gamma)\) by solving the consistency condition of Eq. (2.8). The resulting \(\rho(g)\) would form a irreducible representation of the eclectic flavor group, although the restriction to the subgroup \(G_{f}\) is usually reducible. As the finite modular groups \(\Gamma_{N}\) and \(\Gamma_{N}^{\prime}\) can be generated by the two generators \(S\) and \(T\), it is sufficient to impose the consistency condition in Eq. (2.8) on the two outer automorphisms \(u_{S}\) and \(u_{T}\) \[\rho(S)\,\rho(g)\,\rho^{-1}(S)=\rho(u_{S}(g)),\qquad\rho(T)\,\rho(g)\,\rho^{- 1}(T)\ =\ \rho(u_{T}(g))\,, \tag{2.9}\] where \(\rho(S)\) and \(\rho(T)\) are matrix representations of the two automorphisms \(u_{S}\) and \(u_{T}\), and they should satisfy the multiplication rules of the finite modular group \(\Gamma_{N}\) or \(\Gamma_{N}^{\prime}\) in Eq. (2.5). In other words, the outer automorphisms \(u_{S}\) and \(u_{T}\) should also satisfy the multiplication rules of the finite modular group \(\Gamma_{N}\) or \(\Gamma_{N}^{\prime}\): \[(u_{S})^{N_{s}}=(u_{T})^{N}=(u_{S}\circ u_{T})^{3}=1,\qquad(u_{S})^{2}\circ u _{T}=u_{T}\circ(u_{S})^{2}\, \tag{2.10}\] with \(N_{s}=4\) for \(\Gamma_{N}^{\prime}\) and \(N_{s}=2\) for \(\Gamma_{N}\). 
### Traditional flavor group \(\Delta(27)\) extended by modular symmetry \(\Gamma_{2}\cong S_{3}\) In the present work, we shall consider the so-called eclectic flavor group which is the extension of the traditional flavor symmetry \(\Delta(27)\) by a finite modular group. Following the discussion above, we find that the corresponding finite modular group must be a subgroup of the automorphism group of the traditional flavor group \(\Delta(27)\). The group theory of \(\Delta(27)\) is discussed in Appendix A. The automorphism group of \(\Delta(27)\) is \(\text{Aut}\,(\Delta(27))\cong[432,734]\). As the full automorphism group of \(\Delta(27)\) only contains two finite modular groups generated by the outer automorphisms, i.e., \(\Gamma_{2}\cong S_{3}\) and \(\Gamma_{3}^{\prime}\cong T^{\prime}\cong\text{SL}(2,3)\). Hence the traditional flavor group \(\Delta(27)\) can be extended in two ways: by the finite modular group \(\Gamma_{2}\cong S_{3}\) and \(\Gamma_{3}^{\prime}\cong T^{\prime}\) in the case without CP, and the corresponding two eclectic flavor groups are \(\Delta(27)\rtimes S_{3}\) and \(\Delta(27)\rtimes T^{\prime}\), respectively. A comprehensive analysis of eclectic flavor symmetry models based on \(\Omega(1)\cong\Delta(27)\rtimes T^{\prime}\) is performed in Ref. [23]. In the present work, we are concerned with the traditional flavor group \(\Delta(27)\) and its eclectic extension by \(\Gamma_{2}\cong S_{3}\), and the scenarios without/with gCP will be studied. The group theory of the finite modular group \(\Gamma_{2}\cong S_{3}\) is given in Appendix B. Two automorphisms that generate the finite modular group \(S_{3}\) can be taken to be [17] \[u_{S}(A)=A^{2},\qquad u_{T}(A)=A^{2}\,,\qquad u_{S}(B)=B^{2},\qquad u_{T}(B)=A^{2 }B^{2}A\,, \tag{2.11}\] where \(A\) and \(B\) are the two generators of \(\Delta(27)\), please see Eq. (A.1). Note both automorphisms \(u_{S}\) and \(u_{T}\) are outer automorphisms of \(\Delta(27)\)1. It is easy to check that the outer automorphisms \(u_{S}\) and \(u_{T}\) in Eq. (2.11) satisfy the multiplication rules of the finite modular group \(S_{3}\) Footnote 1: One can easy to check that \(u_{ST}(A)=A\) and \(u_{ST}(B)=ABA^{2}\). It implies that the automorphism \(u_{ST}\) is an inner automorphism of \(\Delta(27)\). Analogously \(u_{TS}=u_{ST}^{2}\) is another inner automorphism, and \(u_{TST}\) is an outer automorphism with \(u_{TST}(A)=A^{2}\) and \(u_{TST}(B)=AB^{2}A^{2}\). \[(u_{S})^{2}=(u_{T})^{2}=(u_{S}\circ u_{T})^{3}=1\,, \tag{2.12}\] which can be obtained from Eq. (2.10) by taking \(N_{s}=2\) and \(N=2\). If one consider the outer automorphisms \(u_{S}\) and \(u_{T}\) in Eq. (2.11), the traditional flavor group \(\Delta(27)\) shall be extended to the eclectic flavor group \(\Delta(27)\rtimes S_{3}\). In order to determine the explicit expressions of the modular transformations \(\rho(S)\) and \(\rho(T)\) corresponding to the outer automorphisms \(u_{S}\) and \(u_{T}\), we should analyse how the two outer automorphisms \(u_{S}\) and \(u_{T}\) act on the conjugacy classes and irreducible representations of \(\Delta(27)\). From the conjugacy classes of \(\Delta(27)\) in Eq. 
(A.3), we see that the outer automorphisms \(u_{S}:(A,\,B)\to(A^{2},\,B^{2})\) and \(u_{T}:(A,\,B)\to(A^{2},\,A^{2}B^{2}A)\) act on all conjugacy classes as follows \[u_{S},\ u_{T}\ : 1C_{1}\leftrightarrow 1C_{1},\qquad 3C_{3}^{(1)}\leftrightarrow 3 C_{3}^{(5)},\qquad 3C_{3}^{(2)}\leftrightarrow 3C_{3}^{(4)},\qquad 3C_{3}^{(3)} \leftrightarrow 3C_{3}^{(6)}, \tag{2.13}\] \[3C_{3}^{(7)}\leftrightarrow 3C_{3}^{(8)},\qquad 1C_{3}^{(1)} \leftrightarrow 1C_{3}^{(1)},\qquad 1C_{3}^{(2)}\leftrightarrow 1C_{3}^{(2)}\,,\] which is displayed in table 4. As we know, the outer automorphisms of a group not only map one conjugacy class to another but also map one irreducible representation to another, while the character table is invariant. The consistency condition Eq. (2.8) may be understood as the action of an outer automorphism \(u_{\gamma}\) on an representation \(\rho\) of traditional flavor symmetry as follow \[u_{\gamma}:\rho\to\rho^{\prime}=\rho\circ u_{\gamma}\,, \tag{2.14}\] where \(\rho\) could be any reducible or irreducible representation of \(\Delta(27)\). The consistency condition (2.8) requires that \(\rho\) and \(\rho^{\prime}\) should be equivalent representations and the modular transformation \(\rho(\gamma)\) is the similarity transformation. The solution for \(\rho(\gamma)\) exists if and only if \(\rho\) contain all those irreducible representations of \(\Delta(27)\) related by the outer automorphism \(u_{\gamma}\) as Eq. (2.14). In fact, then \(\rho\) would be the restriction of certain representation of the eclectic flavor group \(\Delta(27)\rtimes S_{3}\) on the flavor symmetry group \(\Delta(27)\). Then we proceed to consider the actions of outer automorphisms \(u_{S}\) and \(u_{T}\) on all irreducible representations of \(\Delta(27)\). It is straightforward to verify that both of the two outer automorphisms \(u_{S}\) and \(u_{T}\) act on the eight nontrivial singlet irreducible representations \(\mathbf{1_{r,s}}\) of \(\Delta(27)\) with \(\boldsymbol{r,s}\neq 0\) as \[u_{S},\ u_{T}\ :\ \mathbf{1_{0,1}}\leftrightarrow\mathbf{1_{0,2}}\,,\quad \mathbf{1_{1,0}}\leftrightarrow\mathbf{1_{2,0}}\,,\quad\mathbf{1_{1,1}} \leftrightarrow\mathbf{1_{2,2}}\,,\quad\mathbf{1_{1,2}}\leftrightarrow\mathbf{1 _{2,1}}\,, \tag{2.15}\] which indicates that each one of the eight nontrivial one-dimensional representations is related to another one by the \(S_{3}\) modular symmetry. As a consequence, consistency between the modular symmetry \(S_{3}\) and flavor symmetry \(\Delta(27)\) requires that the eight non-trivial singlet representations of \(\Delta(27)\) should be arranged into the following four reducible doublets of \(\Delta(27)\): \[\mathbf{2_{1}}\equiv(\mathbf{1_{0,1}},\mathbf{1_{0,2}})^{T},\quad\mathbf{2_{2}} \equiv(\mathbf{1_{1,1}},\mathbf{1_{2,2}})^{T},\quad\mathbf{2_{3}}\equiv( \mathbf{1_{1,0}},\mathbf{1_{2,0}})^{T},\quad\mathbf{2_{4}}\equiv(\mathbf{1_{1,2 }},\mathbf{1_{2,1}})^{T}\,. 
\tag{2.16}\] The representation matrices of the \(\Delta(27)\) generators \(A\) and \(B\) read off as \[\mathbf{2_{1}}: \rho_{\mathbf{2_{1}}}(A)=\mathrm{diag}(1,1),\qquad\rho_{\mathbf{2_ {1}}}(B)=\mathrm{diag}(\omega,\omega^{2})\,,\] \[\mathbf{2_{2}}: \rho_{\mathbf{2_{2}}}(A)=\mathrm{diag}(\omega,\omega^{2}),\qquad \rho_{\mathbf{2_{2}}}(B)=\mathrm{diag}(\omega,\omega^{2})\,,\] \[\mathbf{2_{3}}: \rho_{\mathbf{2_{3}}}(A)=\mathrm{diag}(\omega,\omega^{2}),\qquad \rho_{\mathbf{2_{3}}}(B)=\mathrm{diag}(1,1)\,,\] \[\mathbf{2_{4}}: \rho_{\mathbf{2_{4}}}(A)=\mathrm{diag}(\omega,\omega^{2}),\qquad \rho_{\mathbf{2_{4}}}(B)=\mathrm{diag}(\omega^{2},\omega)\,. \tag{2.17}\] Analogously we find that the trivial singlet representation \(\mathbf{1_{0,0}}\), and the two three-dimensional representations \(\mathbf{3}\) and \(\mathbf{\bar{3}}\) of \(\Delta(27)\) are all invariant under the actions of \(u_{S}\) and \(u_{T}\), i.e. \[u_{S},\;u_{T}\;:\;\mathbf{1_{0,0}}\to\mathbf{1_{0,0}}\,,\quad\mathbf{3}\to \mathbf{3},\quad\mathbf{\bar{3}}\to\mathbf{\bar{3}}\,. \tag{2.18}\] Hence the three irreducible representations \(\mathbf{1_{0,0}}\), \(\mathbf{3}\) and \(\mathbf{\bar{3}}\) need not be extended to include other irreducible representations of \(\Delta(27)\). We summarize the actions of the automorphisms \(u_{S}\) and \(u_{T}\) on the irreducible representations of \(\Delta(27)\) in table 4. Accordingly the modular transformations of \(S\) and \(T\) are fixed by the consistency condition in Eq. (2.9) for \(g=A,B\), i.e., \[\rho_{\mathbf{r}}(S)\,\rho_{\mathbf{r}}(A)\,\rho_{\mathbf{r}}^{-1}(S)\;=\; \rho_{\mathbf{r}}(A^{2}),\qquad\rho_{\mathbf{r}}(T)\,\rho_{\mathbf{r}}(A)\,\rho_{\mathbf{r}}^ {-1}(T)\;=\;\rho_{\mathbf{r}}(A^{2})\,,\] \[\rho_{\mathbf{r}}(S)\,\rho_{\mathbf{r}}(B)\,\rho_{\mathbf{r}}^{-1}(S)\;=\; \rho_{\mathbf{r}}(B^{2}),\qquad\rho_{\mathbf{r}}(T)\,\rho_{\mathbf{r}}(B)\,\rho_{\mathbf{r}}^ {-1}(T)\;=\;\rho_{\mathbf{r}}(A^{2}B^{2}A)\,, \tag{2.19}\] where \(\mathbf{r}\) can be the three irreducible representations \(\mathbf{1_{0,0}}\), \(\mathbf{3}\) and \(\mathbf{\bar{3}}\), and the four reducible two-dimensional representations \(\mathbf{2_{i}}\) (\(i=1,2,3,4\)) of \(\Delta(27)\). Furthermore, as elements \(S\) and \(T\) are the generators of the finite modular group \(S_{3}\), the modular transformations \(\rho_{\mathbf{r}}(S)\) and \(\rho_{\mathbf{r}}(T)\) have to satisfy the multiplication rules of the finite modular group \(S_{3}\): \[\rho_{\mathbf{r}}^{2}(S)=\rho_{\mathbf{r}}^{2}(T)=\rho_{\mathbf{r}}^{3}(ST)=\mathbb{1_{\mathbf{ r}}}\,. \tag{2.20}\] For the trivial singlet \(\mathbf{r}=\mathbf{1_{0,0}}\), it is easy to check that the solutions for \(\rho_{\mathbf{r}}(S)\) and \(\rho_{\mathbf{r}}(T)\) are the two one-dimensional representations of the finite modular group \(S_{3}\), i.e. \[\rho_{\mathbf{1_{0,0}^{a}}}(S)=(-1)^{a},\qquad\rho_{\mathbf{1_{0,0}^{a}}}(T)=( -1)^{a}\,. \tag{2.21}\] where \(a=0,1\). For the triplet representation \(\mathbf{3}\) of \(\Delta(27)\), the consistency condition and multiplication rules of the finite modular group \(S_{3}\) fix the modular transformations to be: \[\rho_{\mathbf{3^{a}}}(S)=(-1)^{a}\left(\begin{array}{ccc}1&0&0\\ 0&0&1\\ 0&1&0\end{array}\right)\;,\quad\rho_{\mathbf{3^{a}}}(T)=(-1)^{a}\left( \begin{array}{ccc}0&1&0\\ 1&0&0\\ 0&0&1\end{array}\right)\;, \tag{2.22}\] with \(a=0,1\). From the three-dimensional matrices \(\rho_{\mathbf{3}}(A)\), \(\rho_{\mathbf{3}}(B)\) in Eq. 
(A.4), one can directly obtain \[\rho_{\mathbf{3^{a}}}(ST)=\rho_{\mathbf{3}}(A)\,. \tag{2.23}\] The matrices \(\rho_{\bf 3}(A)\), \(\rho_{\bf 3}(B)\), \(\rho_{\bf 3^{a}}(S)\) and \(\rho_{\bf 3^{a}}(T)\) generate a eclectic flavor group \(\Delta(54)\cong[54,8]\). The three-dimensional representation in Eq. (2.22) is a reducible representation of \(S_{3}\) and it is the direct sum of a singlet and a doublet representations of \(S_{3}\), \[{\bf 3^{a}}={\bf 1^{a}}\oplus{\bf 2}\,. \tag{2.24}\] For a triplet \(\Phi_{\bf 3^{a}}=(\phi_{1},\phi_{2},\phi_{3})^{T}\) transforming as \(\rho_{\bf 3^{a}}\) will decompose to one singlet \({\bf 1^{a}}\) and one doublet \({\bf 2}\) of \(S_{3}\) as follow \[{\bf 1^{a}}\ :\ \frac{1}{\sqrt{3}}(\phi_{1}+\phi_{2}+\phi_{3}),\qquad\qquad{ \bf 2}\ :\ \frac{P_{2}^{a}}{\sqrt{6}}\left(\begin{array}{c}\phi_{1}+\phi_{2}-2\phi_{3} \\ \sqrt{3}(\phi_{2}-\phi_{1})\end{array}\right)\,, \tag{2.25}\] with \[P_{2}=\left(\begin{array}{cc}0&-1\\ 1&0\end{array}\right)\,. \tag{2.26}\] For the representation \(\bf\bar{3}\) of \(\Delta(27)\), the corresponding modular transformations \(\rho_{\bf 3^{a}}(S)\) and \(\rho_{\bf 3^{a}}(T)\) coincide with \(\rho_{\bf 3^{a}}(S)\) and \(\rho_{\bf 3^{a}}(T)\) in Eq. (2.22) respectively. As a consequence, an \(S_{3}\) triplet in \(\bf\bar{3}^{a}\) can be decomposed into a singlet and a doublet of \(S_{3}\) modular symmetry, as shown in Eq. (2.25). For each one reducible two-dimensional representation \({\bf 2_{i}}\), there are three independent solutions for the modular transformations that fulfils both the consistency condition (2.19) and the multiplication rules (2.20). The representation matrices of the two generators \(S\) and \(T\) are denoted by \(\rho_{{\bf 2_{i}},{\bf m}}(S)\) and \(\rho_{{\bf 2_{i}},{\bf m}}(T)\) respectively with \(m=0,1,2\). In our working basis, they are determined to be \[\rho_{{\bf 2_{i}},{\bf m}}(S)=\left(\begin{array}{cc}0&1\\ 1&0\end{array}\right)\,\qquad\rho_{{\bf 2_{i}},{\bf m}}(T)=\left(\begin{array}{ cc}0&\omega^{m}\\ \omega^{-m}&0\end{array}\right)\,, \tag{2.27}\] We find that the matrices \(\rho_{{\bf 2_{i}}}(A)\), \(\rho_{{\bf 2_{i}}}(B)\), \(\rho_{{\bf 2_{i}},{\bf m}}(S)\) and \(\rho_{{\bf 2_{i}},{\bf m}}(T)\) expand into a matrix group of \(S_{3}\). Similar to Eq. (2.23) for the triplet representation, there is at least one solution of \(\rho_{{\bf 2_{i}},{\bf m}}(ST)\) identical with the representation matrix \(\rho_{{\bf 2_{i}}}(A)\) for each two-dimensional representation \({\bf 2_{i}}\), i.e. \[\rho_{{\bf 2_{i}},{\bf m}}(ST)=\rho_{{\bf 2_{i}}}(A),\ \ \mbox{for}\ \ (i,m)=(1,0),(2,2),(3,2),(4,2)\,, \tag{2.28}\] while2 Footnote 2: For the direct sum of \({\bf 3^{a}}\oplus{\bf 2_{i}},{\bf m}\) with the values of the indices \(i,m\) in Eq. (2.29), the representation matrices of \(A\), \(B\), \(S\), \(T\) generate a group isomorphic to the eclectic flavor group \(\Delta(27)\rtimes S_{3}\). \[\rho_{{\bf 2_{i}},{\bf m}}(ST)\neq\rho_{{\bf 2_{i}}}(A),\ \ \mbox{for}\ \ (i,m)=(1,1),(1,2),(2,0),(2,1),(3,0),(3,1),(4,0),(4,1)\,. \tag{2.29}\] Since \(u_{ST}\) is an inner automorphism of \(\Delta(27)\), consequently the modular transformation \(\rho(ST)\) must coincide with certain flavor symmetry transformation \(\rho(g)\) with \(g=A\in\Delta(27)\), as shown in Eq. (2.23,2.28). Then the modular forms in the Yukawa couplings must be invariant under the action of \(ST\), consequently only modular form singlets of \(S_{3}\) are allowed and they can absorbed into the coupling constants. 
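The doublet statements above are straightforward to verify explicitly. The short sketch below (assuming numpy; all matrices are taken from Eqs. (2.17) and (2.27)) checks, for the reducible doublet \(\mathbf{2_{2}}\), the \(S_{3}\) multiplication rules of Eq. (2.20), the consistency conditions of Eq. (2.19), and the claim of Eqs. (2.28)-(2.29) that \(\rho_{\mathbf{2_{2}},\mathbf{m}}(ST)=\rho_{\mathbf{2_{2}}}(A)\) holds only for \(m=2\):

```python
# Consistency check for the doublet 2_2: Eqs. (2.19), (2.20) and (2.28)-(2.29).
import numpy as np

w = np.exp(2j * np.pi / 3)                      # omega = e^{2 pi i / 3}
A = np.diag([w, w**2])                          # rho_{2_2}(A), Eq. (2.17)
B = np.diag([w, w**2])                          # rho_{2_2}(B), Eq. (2.17)
close = lambda X, Y: np.allclose(X, Y)

for m in range(3):
    S = np.array([[0, 1], [1, 0]], dtype=complex)            # Eq. (2.27)
    T = np.array([[0, w**m], [w**(-m), 0]], dtype=complex)   # Eq. (2.27)
    # S3 multiplication rules, Eq. (2.20)
    assert close(S @ S, np.eye(2)) and close(T @ T, np.eye(2))
    assert close(np.linalg.matrix_power(S @ T, 3), np.eye(2))
    # consistency conditions, Eq. (2.19): u_S(A)=u_T(A)=A^2, u_S(B)=B^2, u_T(B)=A^2 B^2 A
    assert close(S @ A @ np.linalg.inv(S), A @ A)
    assert close(T @ A @ np.linalg.inv(T), A @ A)
    assert close(S @ B @ np.linalg.inv(S), B @ B)
    assert close(T @ B @ np.linalg.inv(T), A @ A @ B @ B @ A)
    # Eqs. (2.28)-(2.29): rho(ST) coincides with rho(A) only for m = 2
    assert close(S @ T, A) == (m == 2)
print("Consistency conditions verified for 2_2 with m = 0, 1, 2.")
```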
For the solutions of \(\rho_{{\bf 2_{i}},{\bf m}}(ST)\neq\rho_{{\bf 2_{i}}}(A)\) in Eq. (2.29), modular form doublets of \(S_{3}\) can enter into the Yukawa couplings and this provides intriguing possibility for model building. Moreover, the representation \({\bf 2_{i}},{\bf_{m}}\) can be decomposed into \(\ {\bf 1}\oplus{\bf 1^{\prime}}\) of \(S_{3}\) for \(m=0\) while it is equivalent to the doublet representation \({\bf 2}\) for \(m=1,2\). For the doublet fields \(\Phi_{{\bf 2_{i}},{\bf m}}=(\phi_{1},\phi_{2})^{T}\), we find the following decomposition \[\Phi_{{\bf 2_{i}},{\bf 0}}\ :\ \frac{1}{\sqrt{2}}\left(\phi_{1}+\phi_{2} \right)\sim{\bf 1},\qquad\qquad\frac{1}{\sqrt{2}}\left(\phi_{1}-\phi_{2} \right)\sim{\bf 1^{\prime}}\,,\] \[\Phi_{{\bf 2_{i}},{\bf 1}}\ :\ \frac{1}{\sqrt{2}}\left(\begin{array}{c} \phi_{1}+\omega\phi_{2}\\ -i(\phi_{1}-\omega\phi_{2})\end{array}\right)\ \sim\ {\bf 2}\,,\] \[\Phi_{{\bf 2_{i}},{\bf 2}}\ :\ \frac{1}{\sqrt{2}}\left(\begin{array}{c} \phi_{1}+\omega^{2}\phi_{2}\\ i(\phi_{1}-\omega^{2}\phi_{2})\end{array}\right)\ \sim\ {\bf 2}\,. \tag{2.30}\] In short, from the representations of \(\Delta(27)\) flavor symmetry we have reached the irreducible multiplets of the EFG \(\Delta(27)\rtimes S_{3}\) denoted as \({\bf 1}^{a}_{{\bf 0},{\bf 0}}\), \({\bf 3}^{a}\), \(\bar{\bf 3}^{a}\), \({\bf 2}_{i,{\bf m}}\) with \(a=0,1\), \(i=1,2,3,4\), \(m=0,1,2\). Another irreducible two-dimensional representation of the EFG can be induced from the \(S_{3}\) irreducible representation \({\bf 2}\) and in the following it is labelled as \({\bf 2}_{\bf 0}\), in which the representation matrices of generators \(A\), \(B\), \(S\) and \(T\) are \[\rho_{{\bf 2}_{\bf 0}}(A)=\mathbb{1}_{2},\quad\rho_{{\bf 2}_{\bf 0}}(B)= \mathbb{1}_{2},\quad\rho_{{\bf 2}_{\bf 0}}(S)=\rho_{{\bf 2}}(S),\quad\rho_{{\bf 2}_{ \bf 0}}(T)=\rho_{{\bf 2}}(T)\,. \tag{2.31}\] Furthermore the remain two irreducible representations of EFG \(\Delta(27)\rtimes S_{3}\) are of dimension six and they are given by [26] \[{\bf 6}={\bf 3}^{\bf 0}\otimes{\bf 2}_{\bf 0}\cong{\bf 3}^{\bf 1}\otimes{\bf 2 }_{\bf 0},\qquad{\bf\bar{6}}={\bf\bar{3}}^{\bf 0}\otimes{\bf 2}_{\bf 0}\cong{ \bf\bar{3}}^{\bf 1}\otimes{\bf 2}_{\bf 0}\,, \tag{2.32}\] where the operation \(\otimes\) denotes as Kronecker product of matrix. Then one can easily write out the representation matrices of the EFG generators \(A\), \(B\), \(S\) and \(T\) from the representations \({\bf 3}^{\bf 0}\) and \({\bf 2}_{\bf 0}\). ### Including gCP One can combine the eclectic flavor group \(G_{f}\rtimes\Gamma_{N}\) (\(G_{f}\rtimes\Gamma_{N}^{\prime}\)) with the gCP symmetry by introducing a new generator \(K_{*}\)[17], which corresponds to an automorphism of both the traditional flavor symmetry \(G_{f}\) and the finite modular group \(\Gamma_{N}\) (\(\Gamma_{N}^{\prime}\)). The gCP transformation \(K_{*}\) acts on the modulus \(\tau\), the matter field and the modular form multiplets of level \(N\) and weight \(k_{Y}\) as follows, \[\tau\stackrel{{ K_{*}}}{{\longrightarrow}}-\bar{\tau},\quad \psi(x)\stackrel{{ K_{*}}}{{\longrightarrow}}\rho(K_{*})[\psi^{ \dagger}(t,-{\boldsymbol{x}})]^{T},\quad Y^{(k_{Y})}(\tau)\stackrel{{ K_{*}}}{{\longrightarrow}}Y^{(k_{Y})}(-\bar{\tau})= \rho(K_{*})(Y^{(k_{Y})}(\tau))^{*}\,, \tag{2.33}\] where the gCP transformation \(\rho(K_{*})\) is a unitary matrix, and the obvious action of CP on the spinor indices is omitted for the case of \(\psi\) being spinor. 
Requiring that the gCP transformation \(K_{*}\) be of order 2 with \((K_{*})^{2}=1\), we can obtain \[\rho^{*}(K_{*})=\rho^{-1}(K_{*})\,. \tag{2.34}\] The gCP transformation has to be compatible with both the traditional flavor symmetry and the finite modular group, and its allowed form is strongly constrained by the corresponding restricted consistency conditions. The consistency between the modular symmetry and gCP symmetry requires the following consistency conditions have to be satisfied [27, 28]: \[\rho(K_{*})\rho^{*}(g)\rho^{-1}(K_{*})=\rho(u_{K_{*}}(g))\,, \qquad\forall g\in G_{f}\,, \tag{2.35}\] \[\rho(K_{*})\rho^{*}(S)\rho^{-1}(K_{*})=\rho^{-1}(S),\qquad\rho(K_ {*})\rho^{*}(T)\rho^{-1}(K_{*})=\rho^{-1}(T)\,, \tag{2.36}\] where \(u_{K_{*}}\) is an automorphism of the traditional flavor symmetry group \(G_{f}\). It is sufficient to consider the element \(g\) being the generators of \(G_{f}\), and one can fix the explicit form of the gCP transformation \(\rho(K_{*})\) up to an overall irrelevant phase by solving Eqs. (2.35,2.36). Hence the automorphism \(u_{K_{*}}\) of the traditional flavor group \(G_{f}\) should satisfy the following relations: \[(u_{K_{*}})^{2}=1,\ \ u_{K_{*}}\circ u_{S}\circ u_{K_{*}}=u_{S}^{-1},\ \ u_{K_{*}} \circ u_{T}\circ u_{K_{*}}=u_{T}^{-1}\,. \tag{2.37}\] As regards the concerned eclectic flavor group \(\Delta(27)\rtimes S_{3}\), the actions of the automorphism \(u_{K_{*}}\) on the \(\Delta(27)\) generators \(A\) and \(B\) can be taken to be [17] \[u_{K_{*}}(A)\ =\ A\,,\qquad u_{K_{*}}(B)\ =\ A\,B^{2}\,A\,. \tag{2.38}\] It implies that \(u_{K_{\star}}\) is an outer automorphism of \(\Delta(27)\). If gCP transformation is imposed, the group ID of the resulting eclectic flavor group is [324, 121] in GAP [29, 30], and we need to discuss the actions of the outer automorphism \(u_{K_{\star}}\) on the conjugacy classes and the irreducible representations of \(\Delta(27)\). The outer automorphism \(u_{K_{\star}}\) acts on the conjugacy classes as \[u_{K_{\star}}:\ 1C_{3}^{(1)}\leftrightarrow 1C_{3}^{(2)},\quad 3C_{3}^{(1)} \leftrightarrow 3C_{3}^{(8)},\quad 3C_{3}^{(2)}\leftrightarrow 3C_{3}^{(4)},\quad 3C_{3}^{ (5)}\leftrightarrow 3C_{3}^{(7)}\,. \tag{2.39}\] The remaining three conjugacy classes of \(\Delta(27)\) are invariant under \(u_{K_{\star}}\). Then we shall discuss the actions of the automorphism \(u_{K_{\star}}\) on the eleven representations of \(\Delta(27)\). Similar to the automorphisms \(u_{S}\) and \(u_{T}\), the consistency condition Eq. (2.35) can be understood as a similarity transformation between the representations \(\rho^{*}\) and \(\rho\circ u_{K_{\star}}\). The action of the outer automorphism \(u_{K_{\star}}\) on a irreducible representation of \(\Delta(27)\) is defined as \(u_{K_{\star}}:\rho\to\rho\circ u_{K_{\star}}\). The automorphism \(u_{K_{\star}}\) in Eq. (2.38) acts on the eleven irreducible representations of \(\Delta(27)\) as \[u_{K_{\star}}: \mathbf{1_{0,0}}\leftrightarrow\mathbf{1_{0,0}},\qquad\mathbf{1_ {0,1}}\leftrightarrow\mathbf{1_{0,2}},\qquad\mathbf{1_{1,1}}\leftrightarrow \mathbf{1_{1,1}},\qquad\mathbf{1_{2,2}}\leftrightarrow\mathbf{1_{2,2}}, \tag{2.40}\] \[\mathbf{1_{1,0}}\leftrightarrow\mathbf{1_{1,2}},\qquad\mathbf{1_ {2,0}}\leftrightarrow\mathbf{1_{2,1}},\qquad\mathbf{3}\leftrightarrow\mathbf{ \bar{3}}\,,\] which are shown in table 4. 
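The maps in Eqs. (2.15) and (2.40) can be recovered directly from the defining automorphisms. In the sketch below (plain Python, no external packages), a nontrivial singlet \(\mathbf{1_{r,s}}\) is encoded by its characters \(A\mapsto\omega^{r}\), \(B\mapsto\omega^{s}\) (cf. Eq. (2.17)); for one-dimensional representations only the total powers of \(A\) and \(B\) in \(u(A)\), \(u(B)\) matter, so composing with \(u_{S}\), \(u_{T}\) of Eq. (2.11) and \(u_{K_{*}}\) of Eq. (2.38) permutes the labels \((r,s)\):

```python
# Induced action of u_S, u_T (Eq. (2.15)) and u_{K_*} (Eq. (2.40)) on the
# nontrivial singlets 1_{r,s} of Delta(27), encoded by characters A -> w^r, B -> w^s.

def act(auto, r, s):
    # auto = ((aA, bA), (aB, bB)) means u(A) contains A^aA B^bA in total,
    # and u(B) contains A^aB B^bB in total (ordering is irrelevant for 1D reps).
    (aA, bA), (aB, bB) = auto
    return (aA * r + bA * s) % 3, (aB * r + bB * s) % 3

u_S = ((2, 0), (0, 2))    # A -> A^2,  B -> B^2          (Eq. (2.11))
u_T = ((2, 0), (3, 2))    # A -> A^2,  B -> A^2 B^2 A    (Eq. (2.11))
u_K = ((1, 0), (2, 2))    # A -> A,    B -> A B^2 A      (Eq. (2.38))

singlets = [(r, s) for r in range(3) for s in range(3) if (r, s) != (0, 0)]
for name, u in (("u_S", u_S), ("u_T", u_T), ("u_K*", u_K)):
    print(name, {(r, s): act(u, r, s) for r, s in singlets})

# spot checks against Eqs. (2.15) and (2.40)
assert act(u_S, 0, 1) == (0, 2) and act(u_T, 1, 2) == (2, 1)
assert act(u_K, 1, 1) == (1, 1) and act(u_K, 1, 0) == (1, 2)
```

The printed dictionaries reproduce the pairings of Eq. (2.15), where \(u_{S}\) and \(u_{T}\) act identically on the singlets, and the pairings of Eq. (2.40) for \(u_{K_{*}}\).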
Therefore, in order to implement the gCP symmetry in the context of the \(\Delta(27)\) flavor symmetry, the fields in the representations \(\mathbf{1_{1,1}}\) and \(\mathbf{1_{2,2}}\) have to appear in pairs, and the same holds true for the representations \(\mathbf{1_{1,0}}\), \(\mathbf{1_{2,1}}\) as well as \(\mathbf{1_{1,2}}\), \(\mathbf{1_{2,0}}\). Furthermore, considering the action of \(u_{S}\), \(u_{T}\) given in Eq. (2.15), we find that the eight non-trivial singlet representations of \(\Delta(27)\) can be classified into three categories: \(\mathbf{1_{0,1}}\oplus\mathbf{1_{0,2}}=\mathbf{2_{1}}\), \(\mathbf{1_{1,1}}\oplus\mathbf{1_{2,2}}=\mathbf{2_{2}}\) and \(\mathbf{1_{1,0}}\oplus\mathbf{1_{2,0}}\oplus\mathbf{1_{1,2}}\oplus\mathbf{1_{2,1}}=\mathbf{2_{3}}\oplus\mathbf{2_{4}}\), as shown in figure 1.

Figure 1: The action of the automorphisms \(u_{S}\), \(u_{T}\) and \(u_{K_{\star}}\) on the non-trivial singlet representations of \(\Delta(27)\).

As a consequence, if the gCP symmetry is imposed in the eclectic flavor group \(\Delta(27)\rtimes S_{3}\), one has to combine the doublets \(\mathbf{2_{3,m}}\) and \(\mathbf{2_{4,n}}\) to form a quartet \(\mathbf{4_{m,n}}\equiv\mathbf{2_{3,m}}\oplus\mathbf{2_{4,n}}\); the corresponding representation matrices of the generators are \[\mathbf{4_{m,n}} : \rho_{\mathbf{4_{m,n}}}(A)=\left(\begin{array}{cc}\rho_{\mathbf{2_{3,m}}}(A)&\mathbb{0}_{2}\\ \mathbb{0}_{2}&\rho_{\mathbf{2_{4,n}}}(A)\end{array}\right),\quad\rho_{\mathbf{4_{m,n}}}(B)=\left(\begin{array}{cc}\rho_{\mathbf{2_{3,m}}}(B)&\mathbb{0}_{2}\\ \mathbb{0}_{2}&\rho_{\mathbf{2_{4,n}}}(B)\end{array}\right), \tag{2.41}\] \[\rho_{\mathbf{4_{m,n}}}(S)=\left(\begin{array}{cc}\rho_{\mathbf{2_{3,m}}}(S)&\mathbb{0}_{2}\\ \mathbb{0}_{2}&\rho_{\mathbf{2_{4,n}}}(S)\end{array}\right),\quad\rho_{\mathbf{4_{m,n}}}(T)=\left(\begin{array}{cc}\rho_{\mathbf{2_{3,m}}}(T)&\mathbb{0}_{2}\\ \mathbb{0}_{2}&\rho_{\mathbf{2_{4,n}}}(T)\end{array}\right)\,.\] Solving the consistency conditions of Eqs. (2.35, 2.36) for \(g=A,B\), we find the expressions of the gCP transformation as follows, \[\rho_{\mathbf{1_{0,0}}}(K_{*})=1,\qquad\rho_{\bf 2_{1,0}}(K_{*})=\left(\begin{array}{cc}1&0\\ 0&1\end{array}\right)\,,\qquad\rho_{\bf 2_{2,m}}(K_{*})=\left(\begin{array}{cc}0&1\\ 1&0\end{array}\right)\,,\] \[\rho_{\bf 3^{a}}(K_{*})=\frac{1}{\sqrt{3}}\left(\begin{array}{ccc}\omega^{2}&1&1\\ 1&\omega^{2}&1\\ 1&1&\omega^{2}\end{array}\right)\,,\qquad\rho_{\bf\bar{3}^{a}}(K_{*})=\frac{1}{\sqrt{3}}\left(\begin{array}{ccc}\omega&1&1\\ 1&\omega&1\\ 1&1&\omega\end{array}\right)\,,\] \[\rho_{\bf 4_{m,m}}(K_{*})=\left(\begin{array}{cccc}0&0&0&1\\ 0&0&1&0\\ 0&1&0&0\\ 1&0&0&0\end{array}\right)\,, \tag{2.42}\] where the overall phase is dropped. Note that the multiplets \(\bf 2_{1,1}\) and \(\bf 2_{1,2}\) are mapped into the complex conjugate of each other under the action of gCP, consequently they should be arranged into a quartet for a consistent definition of gCP. Analogously one has to arrange the four-dimensional multiplets \(\bf 4_{m,n}\) (\(m\neq n\)) into three octets \(\bf 4_{0,1}\oplus\bf 4_{1,0}\), \(\bf 4_{0,2}\oplus\bf 4_{2,0}\) and \(\bf 4_{1,2}\oplus\bf 4_{2,1}\). The corresponding gCP transformation matrices can be straightforwardly determined and they are too large to be presented here. We turn to discuss the CP transformation of modular form multiplets. As the representations of the finite modular group generators \(S\) and \(T\) are unitary and symmetric in our basis given in Eq. (B.3), the consistency condition of Eq.
(2.36) fixes the gCP transformation matrix \(\rho(K_{*})\) to be an identity matrix up to an overall phase, i.e. \[\rho_{\mathbf{r}}(K_{*})=\mathbb{1}_{\mathbf{r}},\qquad\mathbf{r}=\mathbf{1},\,\mathbf{1^{\prime}},\,\mathbf{2}\,. \tag{2.43}\] From the transformation property of modular multiplets in Eq. (2.33) and the representation matrix \(\rho(K_{*})\) above, one can obtain [27, 28] \[Y^{(k_{Y})}_{\mathbf{r}}(\tau)\ \stackrel{{ K_{*}}}{{\longrightarrow}}\ Y^{(k_{Y})}_{\mathbf{r}}(-\bar{\tau})=(Y^{(k_{Y})}_{\mathbf{r}}(\tau))^{*}\,, \tag{2.44}\] where \(Y^{(k_{Y})}_{\mathbf{r}}(\tau)\) denotes any level 2 modular form multiplet of weight \(k_{Y}\) in the irreducible representation \(\mathbf{r}\) of the finite modular group \(S_{3}\). The modular multiplets of level 2 up to weight 8 are shown in Appendix B.

## 3 Kahler potential and superpotential invariant under EFG \(\Delta(27)\rtimes S_{3}\)

From Eq. (2.15), we find that any one of the eight nontrivial singlets of \(\Delta(27)\) is mapped to another under the action of the \(S_{3}\) modular symmetry. If one field is assigned to be a nontrivial singlet of \(\Delta(27)\), the EFG \(\Delta(27)\rtimes S_{3}\) requires the presence of another field in the nontrivial singlet related by the outer automorphisms \(u_{S}\) and \(u_{T}\). As a result, the eight nontrivial one-dimensional representations of \(\Delta(27)\) have to be arranged into the four doublets \(\mathbf{2_{i}}\) (\(i=1,2,3,4\)) shown in Eq. (2.16). Notice that the two component fields of \(\mathbf{2_{i}}\) should transform in the same way under both the SM gauge group and the auxiliary cyclic group. Hence the three generations of quarks and leptons can be assigned to transform as a triplet \(\mathbf{3}\) (or \(\mathbf{\bar{3}}\)), one trivial singlet \(\mathbf{1_{0,0}}\) plus one reducible doublet \(\mathbf{2_{i}}\), or three trivial singlets \(\mathbf{1_{0,0}}\) of \(\Delta(27)\). The Higgs fields \(H_{u}\) and \(H_{d}\) are invariant under \(\Delta(27)\) if no additional Higgs doublets are introduced. Nevertheless, flavons can be assigned to the trivial singlet, the triplets and the reducible doublets \(\mathbf{2_{i}}\) of the traditional flavor symmetry \(\Delta(27)\). If gCP is imposed on the EFG \(\Delta(27)\rtimes S_{3}\), the available \(\Delta(27)\) multiplets comprise the singlet \(\mathbf{1_{0,0}}\), two doublets \(\mathbf{2_{1}}\), \(\mathbf{2_{2}}\), two triplets \(\mathbf{3}\), \(\mathbf{\bar{3}}\), and a quartet \(\mathbf{2_{3}}\oplus\mathbf{2_{4}}\). In this section, we shall perform a general analysis of the Kahler potential \(\mathcal{K}\) and the superpotential \(\mathcal{W}\) which are invariant under the EFG \(\Delta(27)\rtimes S_{3}\) in the framework of \(\mathcal{N}=1\) global supersymmetry. In the approach of EFG, the general forms of \(\mathcal{K}\) and \(\mathcal{W}\) depend on level 2 modular forms which can be arranged into multiplets of \(S_{3}\). To obtain the general forms of the Kahler potential and the superpotential, we assume that the modular multiplets of level 2 and weight \(k_{Y}\) comprise all possible irreducible multiplets of \(S_{3}\), i.e. \[Y_{\mathbf{1}}^{(k_{Y})}(\tau)=Y_{1}\,,\qquad Y_{\mathbf{1^{\prime}}}^{(k_{Y})}(\tau)=Y_{2}\,,\qquad Y_{\mathbf{2}}^{(k_{Y})}(\tau)=\begin{pmatrix}Y_{3}\\ Y_{4}\end{pmatrix}\,. \tag{3.1}\] If some modular multiplets are absent for a given weight, the corresponding modular forms \(Y_{i}\) must be set to zero. Notice that the contributions of linearly independent modular form multiplets in the same representation of \(S_{3}\) take a similar form.
In the following, we give the most general form of the Kahler potential for the different possible representation assignments of the three generations of matter fields under \(\Delta(27)\).

### Kahler potential

It is known that the Kahler potential admits many terms compatible with modular symmetry, and they could induce sizable corrections to the fermion masses and mixing parameters [15]. The EFG provides a scheme to control the Kahler potential through the interplay of modular symmetry and traditional flavor symmetry [17, 18]. In the present work, the Kahler potential is required to be invariant under the actions of the eclectic flavor symmetry \(\Delta(27)\rtimes S_{3}\). The traditional flavor group \(\Delta(27)\) can impose severe constraints on the Kahler potential, and the corresponding higher order corrections are suppressed by powers of \(\langle\Phi\rangle/\Lambda\), where \(\langle\Phi\rangle\) and \(\Lambda\) represent the VEVs of the flavon fields and the cutoff scale, respectively. If the three generations of matter fields \(\psi\) transform as \(\mathbf{1_{0,0}^{a}}\) of \(\Delta(27)\rtimes S_{3}\), they are generally distinguished by their different charges under the auxiliary abelian symmetry, so that the Kahler metric would be diagonal and the kinetic terms can be brought to canonical form by field redefinitions. Hence in this section we shall concentrate on the assignments in which the three generations of matter fields \(\psi\) transform as \(\mathbf{3^{a}}\), \(\mathbf{\bar{3}^{a}}\) or \(\mathbf{1_{0,0}^{a}}\oplus\mathbf{2_{i,m}}\) under the \(\Delta(27)\rtimes S_{3}\) eclectic flavor group.

#### 3.1.1 Kahler potential for \(\psi\sim\mathbf{3^{a}}\) or \(\mathbf{\bar{3}^{a}}\)

From Eq. (2.22), we see that the representations \(\mathbf{3^{0}}\) and \(\mathbf{3^{1}}\) differ only in the overall sign of the modular generators \(S\) and \(T\). Hence the same Kahler potential invariant under the EFG would be obtained for both assignments \(\psi\sim\mathbf{3^{0}}\) and \(\psi\sim\mathbf{3^{1}}\). Without loss of generality, we shall consider \(\psi\sim\mathbf{3^{0}}\) in the following. Then the general form of the leading order (LO) Kahler potential is given by \[\mathcal{K}_{\text{LO}}=\sum_{k,\boldsymbol{r_{1}},\boldsymbol{r_{2}},s}(-i\tau+i\bar{\tau})^{-k_{\psi}+k}\left(Y_{\boldsymbol{r_{1}}}^{(k)\dagger}Y_{\boldsymbol{r_{2}}}^{(k)}\psi^{\dagger}\psi\right)_{\mathbf{1_{0,0}^{0}},s}\,, \tag{3.2}\] where we have to sum over the even weights \(k\in 2\mathbb{N}\), the representations \(\boldsymbol{r}\) of all linearly independent modular multiplets \(Y_{\boldsymbol{r}}^{(k)}(\tau)\) and all \(\Delta(27)\rtimes S_{3}\) singlet contractions labelled by the index \(s\). We have omitted the coupling constant of each contraction in Eq. (3.2), and the modular form of weight 0 is taken to be \(Y_{\boldsymbol{r}}^{(0)}=1\). The terms with \(k=0\) give the minimal Kahler potential. The modulus \(\tau\) as well as the modular forms are invariant under the action of the flavor symmetry \(\Delta(27)\); consequently invariance under \(\Delta(27)\) requires that \(\psi^{\dagger}\psi\) contract to a trivial singlet of \(\Delta(27)\), i.e.
\[\sum_{\boldsymbol{\mathcal{R}}}\left(\psi^{\dagger}\psi\right)_{\boldsymbol{\mathcal{R}}}=\left(\psi^{\dagger}\psi\right)_{\mathbf{1^{0}_{0,0}}}=\psi_{1}^{\dagger}\psi_{1}+\psi_{2}^{\dagger}\psi_{2}+\psi_{3}^{\dagger}\psi_{3}\,, \tag{3.3}\] where \((\ldots)_{\boldsymbol{\mathcal{R}}}\) denotes \(\Delta(27)\) invariant contractions, and the subscript \(\boldsymbol{\mathcal{R}}\) refers to those EFG irreducible multiplets in which the generators \(A\) and \(B\) are represented by unit matrices. Then \(\boldsymbol{\mathcal{R}}\) can be the representations \(\mathbf{1^{0}_{0,0}}\), \(\mathbf{1^{1}_{0,0}}\) and \(\mathbf{2_{0}}\). We will adopt this convention in the following. Because the combination \(\psi_{1}^{\dagger}\psi_{1}+\psi_{2}^{\dagger}\psi_{2}+\psi_{3}^{\dagger}\psi_{3}=\left(\psi^{\dagger}\psi\right)_{\mathbf{1^{0}_{0,0}}}\) in Eq. (3.3) is invariant under the finite modular group \(S_{3}\), the contraction of modular forms \(Y^{(k)\dagger}_{\mathbf{r_{1}}}Y^{(k)}_{\mathbf{r_{2}}}\) must be invariant under \(S_{3}\) as well. From the Kronecker products of \(S_{3}\) in Eq. (B.4), we find that \(\mathbf{r_{1}}\) and \(\mathbf{r_{2}}\) should be the same representation \(\mathbf{r_{1}}=\mathbf{r_{2}}=\mathbf{r}\) of \(S_{3}\). Thus the EFG \(\Delta(27)\rtimes S_{3}\) constrains the general form of the LO Kahler potential to be \[\mathcal{K}_{\text{LO}}=\left(\psi_{1}^{\dagger}\psi_{1}+\psi_{2}^{\dagger}\psi_{2}+\psi_{3}^{\dagger}\psi_{3}\right)\sum_{k,\mathbf{r}}(-i\tau+i\bar{\tau})^{-k_{\psi}+k}\left(Y^{(k)\dagger}_{\mathbf{r}}Y^{(k)}_{\mathbf{r}}\right)_{\mathbf{1^{0}_{0,0}}}\,. \tag{3.4}\] One can straightforwardly read off the Kahler metric, which is proportional to a unit matrix, and the minimal Kahler potential is reproduced. We need to rescale the supermultiplets of the theory in order to get canonical kinetic terms. The effect of such a rescaling can be compensated by redefining the couplings of the superpotential. The next-to-leading-order (NLO) corrections to the Kahler potential contain a flavon \(\Phi\), and they can be written as \[\mathcal{K}_{\text{NLO}}=\frac{1}{\Lambda}\sum_{k_{1},\mathbf{r_{1}},\mathbf{r_{2}},s}(-i\tau+i\bar{\tau})^{-k_{\psi}+k_{1}}\left(Y^{(k_{1})\dagger}_{\mathbf{r_{1}}}Y^{(k_{1}+k_{\Phi})}_{\mathbf{r_{2}}}\psi^{\dagger}\psi\Phi\right)_{\mathbf{1^{0}_{0,0}},s}+\text{h.c.}\,, \tag{3.5}\] where \(k_{\Phi}\) is the modular weight of the flavon \(\Phi\). Comparing the expressions of \(\mathcal{K}_{\text{LO}}\) in Eq. (3.2) and \(\mathcal{K}_{\text{NLO}}\) in Eq. (3.5), we find that the flavon \(\Phi\) can contribute to \(\mathcal{K}_{\text{NLO}}\) if and only if \(\Phi\) is invariant under the auxiliary symmetry group. From the \(\Delta(27)\) tensor product \(\mathbf{3}\otimes\mathbf{\bar{3}}=\sum_{r,s=0}^{2}\mathbf{1}_{\mathbf{r},s}\) and the transformation properties of the nine contraction singlets of \(\psi^{\dagger}\psi\), we find that they can be arranged into one trivial singlet and four doublets of \(\Delta(27)\rtimes S_{3}\), i.e.
\[\left(\psi^{\dagger}\psi\right)_{\mathbf{1}^{\mathbf{0}}_{\mathbf{0},\mathbf{0}}} =\psi_{1}^{\dagger}\psi_{1}+\psi_{2}^{\dagger}\psi_{2}+\psi_{3}^ {\dagger}\psi_{3}\,,\] \[\left(\psi^{\dagger}\psi\right)_{\mathbf{2}_{\mathbf{1},\mathbf{0}}} =\left(\begin{array}{c}\psi_{1}^{\dagger}\psi_{2}+\psi_{2}^{ \dagger}\psi_{3}+\psi_{3}^{\dagger}\psi_{1}\\ \psi_{1}^{\dagger}\psi_{3}+\psi_{2}^{\dagger}\psi_{1}+\psi_{3}^{\dagger}\psi_{2 }\end{array}\right)\,,\] \[\left(\psi^{\dagger}\psi\right)_{\mathbf{2}_{\mathbf{2},\mathbf{2}}} =\left(\begin{array}{c}\psi_{1}^{\dagger}\psi_{2}+\omega^{2}\psi_ {2}^{\dagger}\psi_{3}+\omega\psi_{3}^{\dagger}\psi_{1}\\ \omega^{2}\psi_{1}^{\dagger}\psi_{3}+\psi_{2}^{\dagger}\psi_{1}+\omega\psi_{3}^ {\dagger}\psi_{2}\end{array}\right)\,,\] \[\left(\psi^{\dagger}\psi\right)_{\mathbf{2}_{\mathbf{3},\mathbf{2}}} =\left(\begin{array}{c}\psi_{1}^{\dagger}\psi_{1}+\omega^{2}\psi_ {2}^{\dagger}\psi_{2}+\omega\psi_{3}^{\dagger}\psi_{3}\\ \omega^{2}\psi_{1}^{\dagger}\psi_{1}+\psi_{2}^{\dagger}\psi_{2}+\omega\psi_{3}^ {\dagger}\psi_{3}\end{array}\right)\,,\] \[\left(\psi^{\dagger}\psi\right)_{\mathbf{2}_{\mathbf{4},\mathbf{2}}} =\left(\begin{array}{c}\omega\psi_{1}^{\dagger}\psi_{3}+\psi_{2}^{ \dagger}\psi_{1}+\omega^{2}\psi_{3}^{\dagger}\psi_{2}\\ \psi_{1}^{\dagger}\psi_{2}+\omega\psi_{2}^{\dagger}\psi_{3}+\omega^{2}\psi_{3}^ {\dagger}\psi_{1}\end{array}\right)\,. \tag{3.6}\] The contractions of \(\psi^{\dagger}\psi\Phi\) should be invariant under \(\Delta(27)\), so that the flavon \(\Phi_{i}\) must transform as \(\mathbf{1}_{\mathbf{0},\mathbf{0}}\) or \(\mathbf{2}_{i}\) under \(\Delta(27)\). For \(\Phi\sim\mathbf{1}^{\mathbf{0}}_{\mathbf{0},\mathbf{0}}\), we find \[\left(Y^{(k_{1})\dagger}_{\mathbf{r_{1}}}Y^{(k_{1}+k_{\Phi})}_{\mathbf{r_{2}}}\psi^{ \dagger}\psi\Phi\right)_{\mathbf{1}^{\mathbf{0}}_{\mathbf{0},\mathbf{0}}}=\left(Y^{(k_{1})\dagger}_ {\mathbf{r_{1}}}Y^{(k_{1}+k_{\Phi})}_{\mathbf{r_{2}}}\right)_{\mathbf{1}^{\mathbf{2}-\text{a} \text{j}}}\left(\psi^{\dagger}\psi\right)_{\mathbf{1}^{\mathbf{0}}_{\mathbf{0},\mathbf{0}}}\Phi\,, \tag{3.7}\] where the notation \([2-a]\) is defined as \(2-a\) modulo 2 and this contraction result is proportional to \(\left(\psi^{\dagger}\psi\right)_{\mathbf{1}_{\mathbf{0},\mathbf{0}}^{0}}=\psi_{1}^ {\dagger}\psi_{1}+\psi_{2}^{\dagger}\psi_{2}+\psi_{3}^{\dagger}\psi_{3}\). It can be absorbed into the LO Kahler potential in Eq. (3.4), and provides no correction to fermion masses and mixing parameters. 
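The statement below Eq. (3.4) that the leading order Kahler metric is proportional to a unit matrix can be made explicit with a few lines of sympy. The snippet below is a minimal sketch that keeps only the \(k=0\) term and treats \(\psi_{i}\) and \(\bar{\psi}_{i}\) as independent variables.

```python
import sympy as sp

tau, taubar = sp.symbols('tau taubar')
k_psi = sp.symbols('k_psi', integer=True)
psi = sp.symbols('psi1:4')
psib = sp.symbols('psibar1:4')   # treat psi_i and bar(psi)_i as independent variables

# k = 0 term of Eq. (3.4): the Delta(27) singlet psi^dagger psi times the automorphy factor
K_LO = (-sp.I*tau + sp.I*taubar)**(-k_psi) * sum(pb*p for p, pb in zip(psi, psib))

metric = sp.Matrix(3, 3, lambda i, j: sp.diff(K_LO, psi[i], psib[j]))

assert all(metric[i, j] == 0 for i in range(3) for j in range(3) if i != j)
assert metric[0, 0] == metric[1, 1] == metric[2, 2]
print("Kahler metric =", metric[0, 0], "* identity")
```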
If a model contains a flavon \(\Phi=(\phi_{1},\phi_{2})^{T}\sim\mathbf{2_{i,m}}\) which is chargeless under the auxiliary group, the NLO Kahler potential \(\mathcal{K}_{\rm NLO}\) invariant under the eclectic flavor group \(\Delta(27)\rtimes S_{3}\) can be expanded as \[\mathcal{K}_{\rm NLO} = \sum_{k_{1},\mathbf{r_{1}},\mathbf{r_{2}}}(-i\tau+i\bar{\tau})^{-k_{\psi}+k_{1}}\left(Y_{\mathbf{r_{1}}}^{(k_{1})\dagger}Y_{\mathbf{r_{2}}}^{(k_{1}+k_{\Phi})}\right)_{\boldsymbol{\mathcal{R}}}\left(\left(\psi^{\dagger}\psi\right)_{\mathbf{2_{1,0}}}\Phi\right)_{\boldsymbol{\mathcal{R}}} \tag{3.8}\] \[+\sum_{k_{1},\mathbf{r_{1}},\mathbf{r_{2}}}(-i\tau+i\bar{\tau})^{-k_{\psi}+k_{1}}\left(Y_{\mathbf{r_{1}}}^{(k_{1})\dagger}Y_{\mathbf{r_{2}}}^{(k_{1}+k_{\Phi})}\right)_{\boldsymbol{\mathcal{R}}}\left(\left(\psi^{\dagger}\psi\right)_{\mathbf{2_{2,2}}}\Phi\right)_{\boldsymbol{\mathcal{R}}}\] \[+\sum_{k_{1},\mathbf{r_{1}},\mathbf{r_{2}}}(-i\tau+i\bar{\tau})^{-k_{\psi}+k_{1}}\left(Y_{\mathbf{r_{1}}}^{(k_{1})\dagger}Y_{\mathbf{r_{2}}}^{(k_{1}+k_{\Phi})}\right)_{\boldsymbol{\mathcal{R}}}\left(\left(\psi^{\dagger}\psi\right)_{\mathbf{2_{3,2}}}\Phi\right)_{\boldsymbol{\mathcal{R}}}\] \[+\sum_{k_{1},\mathbf{r_{1}},\mathbf{r_{2}}}(-i\tau+i\bar{\tau})^{-k_{\psi}+k_{1}}\left(Y_{\mathbf{r_{1}}}^{(k_{1})\dagger}Y_{\mathbf{r_{2}}}^{(k_{1}+k_{\Phi})}\right)_{\boldsymbol{\mathcal{R}}}\left(\left(\psi^{\dagger}\psi\right)_{\mathbf{2_{4,2}}}\Phi\right)_{\boldsymbol{\mathcal{R}}}\,,\] where the subscript \(\boldsymbol{\mathcal{R}}\) is defined below Eq. (3.3). For each possible flavon \(\Phi=(\phi_{1},\phi_{2})^{T}\sim\mathbf{2_{i,m}}\), only the contractions in the \(i\)th row of Eq. (3.8) are nonvanishing, and the nonvanishing contractions \((\psi^{\dagger}\psi\Phi)_{\boldsymbol{\mathcal{R}}}\) can be obtained from Eq. (C.1). Then the explicit expression of \(\mathcal{K}_{\rm NLO}\) can be written out. If the corrections from this \(\mathcal{K}_{\rm NLO}\) are considered, one can check that the corresponding Kahler metric is not proportional to a unit matrix, and the non-canonical corrections are suppressed by \(\langle\Phi\rangle/\Lambda\) in comparison with \(\mathcal{K}_{\rm LO}\). If a model does not contain a flavon \(\Phi\) which is a doublet of \(\Delta(27)\) and invariant under the auxiliary cyclic symmetries, the corrections to fermion masses and mixings arise from the next-to-next-to-leading order (NNLO) terms of the Kahler potential. Without loss of generality, the most general form of the NNLO corrections to the Kahler potential can be written as \[\mathcal{K}_{\rm NNLO}=\frac{1}{\Lambda^{2}}\sum_{k_{1},\mathbf{r_{1}},\mathbf{r_{2}},s}(-i\tau+i\bar{\tau})^{-k_{\psi}-k_{\Theta}+k_{1}}\left(Y_{\mathbf{r_{1}}}^{(k_{1})\dagger}Y_{\mathbf{r_{2}}}^{(k_{1}+k_{\Phi}-k_{\Theta})}\psi^{\dagger}\psi\Theta^{\dagger}\Phi\right)_{\mathbf{1^{0}_{0,0}},s}+{\rm h.c.}\,, \tag{3.9}\] where each term involves two generic flavons \(\Phi\) and \(\Theta\), which could be identical fields.
Analogously the traditional flavor symmetry \(\Delta(27)\) requires that \(\psi^{\dagger}\psi\Theta^{\dagger}\Phi\) be invariant under \(\Delta(27)\), thus only the following contractions are allowed, \[\left(\psi^{\dagger}\psi\Theta^{\dagger}\Phi\right)_{\boldsymbol{\mathcal{R}}}=\left(\psi^{\dagger}\psi\right)_{\mathbf{1^{0}_{0,0}}}\left(\Theta^{\dagger}\Phi\right)_{\boldsymbol{\mathcal{R}}}+\sum_{i}\left[\left(\psi^{\dagger}\psi\right)_{\mathbf{2_{i,m}}}\left(\Theta^{\dagger}\Phi\right)_{\mathbf{2_{i,n}}}\right]_{\boldsymbol{\mathcal{R}}}\,, \tag{3.10}\] which contract with the modular forms \(Y_{\mathbf{r_{1}}}^{(k_{1})\dagger}Y_{\mathbf{r_{2}}}^{(k_{1}+k_{\Phi}-k_{\Theta})}\) to form \(S_{3}\) invariants. We see that \(\Theta^{\dagger}\Phi\) should be the invariant singlet \(\mathbf{1_{0,0}}\) or a reducible doublet \(\mathbf{2_{i}}\) of the \(\Delta(27)\) flavor symmetry. The flavons \(\Theta\) and \(\Phi\) can transform as \(\mathbf{1_{0,0}}\), \(\mathbf{2_{i}}\), \(\mathbf{3}\) or \(\mathbf{\bar{3}}\) under \(\Delta(27)\). The expressions of \(\left(\psi^{\dagger}\psi\right)_{\mathbf{1^{0}_{0,0}}}\) and \(\left(\psi^{\dagger}\psi\right)_{\mathbf{2_{i,m}}}\) are given in Eq. (3.6). From Eq. (3.6), we find that the first term \(\left(\psi^{\dagger}\psi\right)_{\mathbf{1^{0}_{0,0}}}\left(\Theta^{\dagger}\Phi\right)_{\boldsymbol{\mathcal{R}}}\) leads to a Kahler metric proportional to a unit matrix and its contribution can be absorbed by \(\mathcal{K}_{\rm LO}\), while the second contraction \(\left[\left(\psi^{\dagger}\psi\right)_{\mathbf{2_{i,m}}}\left(\Theta^{\dagger}\Phi\right)_{\mathbf{2_{i,n}}}\right]_{\boldsymbol{\mathcal{R}},s}\) can be obtained from Eq. (C.1): \[\sum_{\boldsymbol{\mathcal{R}}}\left[\left(\psi^{\dagger}\psi\right)_{\mathbf{2_{i,m}}}\left(\Theta^{\dagger}\Phi\right)_{\mathbf{2_{i,n}}}\right]_{\boldsymbol{\mathcal{R}}} = \left[\left(\psi^{\dagger}\psi\right)_{\mathbf{2_{i,m}}}\left(\Theta^{\dagger}\Phi\right)_{\mathbf{2_{i,n}}}\right]_{\mathbf{1^{0}_{0,0}}}+\left[\left(\psi^{\dagger}\psi\right)_{\mathbf{2_{i,m}}}\left(\Theta^{\dagger}\Phi\right)_{\mathbf{2_{i,n}}}\right]_{\mathbf{1^{1}_{0,0}}}\qquad\mbox{for}\quad m=n\,,\] \[\sum_{\boldsymbol{\mathcal{R}}}\left[\left(\psi^{\dagger}\psi\right)_{\mathbf{2_{i,m}}}\left(\Theta^{\dagger}\Phi\right)_{\mathbf{2_{i,n}}}\right]_{\boldsymbol{\mathcal{R}}} = \left[\left(\psi^{\dagger}\psi\right)_{\mathbf{2_{i,m}}}\left(\Theta^{\dagger}\Phi\right)_{\mathbf{2_{i,n}}}\right]_{\mathbf{2_{0}}}\qquad\mbox{for}\quad m\neq n\,. \tag{3.11}\] These terms will give rise to off-diagonal elements of the Kahler metric in a general model. Hence the Kahler potential \(\mathcal{K}_{\rm NNLO}\) generally yields deviations from canonical kinetic terms of the quark/lepton fields after the flavons develop VEVs, unless all the flavons \(\Theta\) and \(\Phi\) are invariant singlets of \(\Delta(27)\). However, the induced corrections to the quark/lepton mixing parameters are suppressed by \(\langle\Phi\rangle^{2}/\Lambda^{2}\) and they are negligible. In the case of \(\psi\sim\mathbf{\bar{3}^{a}}\), we reach the same results as those for \(\psi\sim\mathbf{3^{a}}\) except that the Kahler metric becomes the transpose.

#### 3.1.2 Kahler potential for \(\psi\sim\mathbf{1_{0,0}^{a}}\oplus\mathbf{2_{i,m}}\)

The three generations of matter fields could be assigned to transform as a reducible triplet of \(\Delta(27)\).
For instance, the first generation is an invariant singlet \(\psi_{1}\sim\mathbf{1_{0,0}^{a}}\) under the EFG, and the other two generations form a doublet \(\psi_{d}=(\psi_{2},\psi_{3})^{T}\sim\mathbf{2_{i,m}}\). At leading order, the Kahler potential for the matter fields \(\psi\) can be written as \[\mathcal{K}_{\rm LO} = \sum_{k,\mathbf{r_{1}},\mathbf{r_{2}},s}(-i\tau+i\bar{\tau})^{-k_{\psi_{1}}+k}\left(Y^{(k)\dagger}_{\mathbf{r_{1}}}Y^{(k)}_{\mathbf{r_{2}}}\psi_{1}^{\dagger}\psi_{1}\right)_{\mathbf{1^{0}_{0,0}},s}+(-i\tau+i\bar{\tau})^{-k_{\psi_{d}}+k}\left(Y^{(k)\dagger}_{\mathbf{r_{1}}}Y^{(k)}_{\mathbf{r_{2}}}\psi_{d}^{\dagger}\psi_{d}\right)_{\mathbf{1^{0}_{0,0}},s} \tag{3.12}\] \[= \psi_{1}^{\dagger}\psi_{1}\sum_{k,\mathbf{r_{1}},\mathbf{r_{2}}}(-i\tau+i\bar{\tau})^{-k_{\psi_{1}}+k}\left(Y^{(k)\dagger}_{\mathbf{r_{1}}}Y^{(k)}_{\mathbf{r_{2}}}\right)_{\mathbf{1^{0}_{0,0}}}\] \[+\psi_{2}^{\dagger}\psi_{2}\sum_{k,\mathbf{r_{1}},\mathbf{r_{2}}}(-i\tau+i\bar{\tau})^{-k_{\psi_{d}}+k}\left[\left(Y^{(k)\dagger}_{\mathbf{r_{1}}}Y^{(k)}_{\mathbf{r_{2}}}\right)_{\mathbf{1^{0}_{0,0}}}+\left(Y^{(k)\dagger}_{\mathbf{r_{1}}}Y^{(k)}_{\mathbf{r_{2}}}\right)_{\mathbf{1^{1}_{0,0}}}\right]\] \[+\psi_{3}^{\dagger}\psi_{3}\sum_{k,\mathbf{r_{1}},\mathbf{r_{2}}}(-i\tau+i\bar{\tau})^{-k_{\psi_{d}}+k}\left[\left(Y^{(k)\dagger}_{\mathbf{r_{1}}}Y^{(k)}_{\mathbf{r_{2}}}\right)_{\mathbf{1^{0}_{0,0}}}-\left(Y^{(k)\dagger}_{\mathbf{r_{1}}}Y^{(k)}_{\mathbf{r_{2}}}\right)_{\mathbf{1^{1}_{0,0}}}\right]\,.\] Hence the resulting Kahler metric is diagonal, while the diagonal entries are all different. When transforming to the basis with canonical kinetic terms, we have to rescale the matter fields \(\psi_{1,2,3}\). The effect of the rescaling on the second and third generation fermion masses cannot be absorbed into the parameters of the superpotential. As a consequence, if the matter fields \(\psi\) are assigned to be one trivial singlet \(\mathbf{1_{0,0}}\) plus one reducible doublet \(\mathbf{2_{i}}\) of \(\Delta(27)\), the EFG \(\Delta(27)\rtimes S_{3}\) does not efficiently restrict the Kahler potential, so that the predictive power of the modular symmetry would be reduced. Hence we shall not consider these reducible assignments for the matter fields in the EFG model construction.

### Superpotential for fermion masses

As has been shown in the previous section, the corrections from the Kahler potential to the fermion masses and flavor mixing are kept under control by the EFG \(\Delta(27)\rtimes S_{3}\) if the matter fields are assigned to transform as trivial singlets or triplets under \(\Delta(27)\). The EFG \(\Delta(27)\rtimes S_{3}\) would play less of a role for the invariant singlet assignment of the left-handed (LH) matter fields \(\psi\) and right-handed (RH) matter fields \(\psi^{c}\). Hence we will analyze the assignments in which both \(\psi\) and \(\psi^{c}\) are \(\Delta(27)\) triplets \(\mathbf{3}\) or \(\mathbf{\bar{3}}\); their modular weights are denoted as \(k_{\psi}\) and \(k_{\psi^{c}}\) respectively. In the paradigm of EFG, flavon fields are usually necessary to break the traditional flavor symmetry. For the concerned EFG \(\Delta(27)\rtimes S_{3}\), the flavon \(\Phi\) with modular weight \(k_{\Phi}\) can transform as a singlet \(\mathbf{1_{0,0}}\), doublets \(\mathbf{2_{i}}\) or triplets \(\mathbf{3}\) and \(\mathbf{\bar{3}}\) under \(\Delta(27)\). Next we study the mass matrix of \(\psi\) (\(\psi^{c}\)) for each possible representation assignment of the matter fields and flavons.
**(i)**: \(\psi,\ \psi^{c},\ \Phi\sim\mathbf{3^{a}}\) or \(\mathbf{\bar{3}^{a}}\) In the first case, both \(\psi\) and \(\psi^{c}\) are triplets \(\mathbf{3}\) (or \(\mathbf{\bar{3}}\)) of \(\Delta(27)\). Invariance under the \(\Delta(27)\) flavor symmetry entails the introduction of a triplet flavon \(\Phi=(\phi_{1},\phi_{2},\phi_{3})^{T}\) which transforms as \(\mathbf{3}\) or \(\mathbf{\bar{3}}\) under \(\Delta(27)\). For illustration, we proceed to analyze the superpotential for the representation assignment with \(\psi\equiv\left(\psi_{1},\psi_{2},\psi_{3}\right)^{T}\sim\mathbf{3^{0}}\), \(\psi^{c}\equiv\left(\psi_{1}^{c},\psi_{2}^{c},\psi_{3}^{c}\right)^{T}\sim \mathbf{3^{0}}\), \(\Phi\equiv(\phi_{1},\phi_{2},\phi_{3})^{T}\sim\mathbf{3^{0}}\), the superpotential of other assignments can be discussed analogously. Then the charged lepton/quark mass terms invariant under \(\Delta(27)\times S_{3}\) can be generally written as \[\mathcal{W}_{D}=\frac{1}{\Lambda}\sum_{\mathbf{r},s}c_{\mathbf{r},s}\left(Y_{\mathbf{r}}^{ \left(k_{Y}\right)}\Phi\psi^{c}\psi\right)_{\mathbf{1_{0,0}^{0}},s}H_{u/d}\,, \tag{3.13}\] where one must sum over all modular multiplets of weight \(k_{Y}\) and all independent singlet contractions labelled by the index \(s\), and the modular weight \(k_{Y}\) should fulfill \(k_{Y}=k_{\psi}+k_{\psi^{c}}+k_{\Phi}\). It is convenient to firstly consider the constraints of the traditional flavor group \(\Delta(27)\), the general form of the superpotential \(\mathcal{W}_{D}\) compatible with \(\Delta(27)\) is given by \[\mathcal{W}_{D}=\frac{1}{\Lambda}\sum_{\mathbf{r}}\left(Y_{\mathbf{r}}^{\left(k_{Y} \right)}\big{(}c_{\mathbf{r},1}\mathcal{O}_{1}+c_{\mathbf{r},2}\mathcal{O}_{2}+c_{\bm {r},3}\mathcal{O}_{3}\big{)}\right)_{\mathbf{1_{0,0}^{0}}}H_{u,d}\,, \tag{3.14}\] where \(\mathcal{O}_{1}\), \(\mathcal{O}_{2}\) and \(\mathcal{O}_{3}\) are three \(\Delta(27)\) invariant contractions: \[\mathcal{O}_{1} = \Big{(}\Phi\left(\psi^{c}\psi\right)_{\mathbf{\bar{3}}\mathbf{s}_{ \mathbf{1}}}\Big{)}_{\mathbf{1}_{0,0}}=\psi_{1}^{c}\psi_{1}\phi_{1}+\psi_{2}^ {c}\psi_{2}\phi_{2}+\psi_{3}^{c}\psi_{3}\phi_{3}\,,\] \[\mathcal{O}_{2} = \Big{(}\Phi\left(\psi^{c}\psi\right)_{\mathbf{\bar{3}}\mathbf{s}_{ \mathbf{2}}}\Big{)}_{\mathbf{1}_{0,0}}=\psi_{1}^{c}\psi_{2}\phi_{3}+\psi_{1}^{c }\psi_{3}\phi_{2}+\psi_{2}^{c}\psi_{3}\phi_{1}+\psi_{2}^{c}\psi_{1}\phi_{3}+ \psi_{3}^{c}\psi_{1}\phi_{2}+\psi_{3}^{c}\psi_{2}\phi_{1}\,,\] \[\mathcal{O}_{3} = \Big{(}\Phi\left(\psi^{c}\psi\right)_{\mathbf{\bar{3}}\mathbf{A}}\Big{)} _{\mathbf{1}_{0,0}}=\psi_{1}^{c}\psi_{2}\phi_{3}-\psi_{1}^{c}\psi_{3}\phi_{2}+ \psi_{2}^{c}\psi_{3}\phi_{1}-\psi_{2}^{c}\psi_{1}\phi_{3}+\psi_{3}^{c}\psi_{1} \phi_{2}-\psi_{3}^{c}\psi_{2}\phi_{1}\,. \tag{3.15}\] The modular transformation of the fields \(\psi\), \(\psi^{c}\), \(\Phi\) is given in Eq. (2.22), then we can obtain the action the modular generators \(S\) and \(T\) on the above combinations \(\mathcal{O}_{1,2,3}\) as follows \[\mathcal{O}_{1}\stackrel{{ S}}{{\longrightarrow}} \mathcal{O}_{1}, \mathcal{O}_{2}\stackrel{{ S}}{{\longrightarrow}} \mathcal{O}_{2}, \mathcal{O}_{3}\stackrel{{ S}}{{\longrightarrow}} \mathcal{-O}_{3}\,,\] \[\mathcal{O}_{1}\stackrel{{ T}}{{\longrightarrow}} \mathcal{O}_{1}, \mathcal{O}_{2}\stackrel{{ T}}{{\longrightarrow}} \mathcal{O}_{2}, \mathcal{O}_{3}\stackrel{{ T}}{{\longrightarrow}} \mathcal{-O}_{3}\,. 
\tag{3.16}\] This implies that both \(\mathcal{O}_{1}\) and \(\mathcal{O}_{2}\) are invariant under \(S_{3}\), while the combination \(\mathcal{O}_{3}\) transforms as \(\mathbf{1^{\prime}}\) under \(S_{3}\). Therefore only the modular form multiplets \(Y_{\mathbf{1}}^{\left(k_{Y}\right)}=Y_{1}\) and \(Y_{\mathbf{1^{\prime}}}^{\left(k_{Y}\right)}=Y_{2}\) can contract with \(\mathcal{O}_{1,2}\) and \(\mathcal{O}_{3}\) respectively to produce mass terms invariant under \(S_{3}\). Thus the most general superpotential which is invariant under the eclectic flavor group \(\Delta(27)\rtimes S_{3}\) is determined to be \[\mathcal{W}_{D}=\frac{1}{\Lambda}\Big{(}\alpha_{1}Y_{1}\mathcal{O}_{1}+\alpha_{ 2}Y_{1}\mathcal{O}_{2}+\alpha_{3}Y_{2}\mathcal{O}_{3}\Big{)}H_{u,d}\,, \tag{3.17}\] with \(\alpha_{1}\equiv c_{1,1}\), \(\alpha_{2}\equiv c_{1,2}\) and \(\alpha_{3}\equiv c_{1^{\prime},3}\). Note the first two terms arise from the coupling with \(Y_{\mathbf{1}}^{(k_{Y})}\) and the third term proportional to \(\alpha_{3}\) arises from the coupling with \(Y_{\mathbf{1}^{\prime}}^{(k_{Y})}\). If \(Y_{\mathbf{1}^{\prime}}^{(k_{Y})}\) is absent in certain modular weight \(k_{Y}\), then \(Y_{2}=0\) and the third term is vanishing. Therefore we get the following fermion mass matrix in the right-left basis of \(\psi^{c}M_{\psi}\psi\), \[M_{\psi}=\frac{v_{u,d}}{\Lambda}\left[\alpha_{1}Y_{1}\left(\begin{array}{ccc }\phi_{1}&0&0\\ 0&\phi_{2}&0\\ 0&0&\phi_{3}\end{array}\right)+\alpha_{2}Y_{1}\left(\begin{array}{ccc}0& \phi_{3}&\phi_{2}\\ \phi_{3}&0&\phi_{1}\\ \phi_{2}&\phi_{1}&0\end{array}\right)+\alpha_{3}Y_{2}\left(\begin{array}{ccc }0&-\phi_{3}&\phi_{2}\\ \phi_{3}&0&-\phi_{1}\\ -\phi_{2}&\phi_{1}&0\end{array}\right)\right]\,, \tag{3.18}\] where \(v_{u,d}=\langle H_{u,d}\rangle\) denote the VEVs of the Higgs fields. If one doesn't impose the gCP symmetry, all couplings \(\alpha_{1,2,3}\) are generally complex so that the modular forms \(Y_{1}\) and \(Y_{2}\) in the mass matrix of Eq. (3.18) can be absorbed into the coupling constants. On the other hand, if the gCP symmetry is included, the modular forms can not be absorbed by the coupling constants. With the gCP transformation matrices in Eq. (2.42), we find that the gCP invariance leads to the following constraints on the couplings \(\alpha_{1}\), \(\alpha_{2}\) and \(\alpha_{3}\): \[\alpha_{2}=\frac{\omega}{2}(\sqrt{3}\alpha_{1}^{*}-\alpha_{1})\,,\qquad{\rm Arg }(\alpha_{3})=-\frac{\pi}{12}\,\,\left({\rm mod}\,\pi\right). \tag{3.19}\] Consequently \(\alpha_{1}\), \(\alpha_{2}\) and \(\alpha_{3}\) can be parameterized as \[\alpha_{1}=\alpha_{r}+i\alpha_{i},\qquad\alpha_{2}=\frac{\omega}{2}\left[( \sqrt{3}-1)\alpha_{r}-i(1+\sqrt{3})\alpha_{i}\right],\qquad\alpha_{3}=e^{-i \frac{\pi}{12}}\alpha_{3}^{\prime}\,, \tag{3.20}\] where \(\alpha_{r}\), \(\alpha_{i}\) and \(\alpha_{3}^{\prime}\) are real. Thus the mass matrix \(M_{\psi}\) including the phases of couplings explicitly is given by \[M_{\psi} = \frac{v_{u,d}}{\Lambda}\left\{\frac{Y_{1}}{2}\left[\alpha_{r} \left(\begin{array}{ccc}2\phi_{1}&\left(\sqrt{3}-1\right)\omega\phi_{3}& \left(\sqrt{3}-1\right)\omega\phi_{2}\\ \left(\sqrt{3}-1\right)\omega\phi_{3}&2\phi_{2}&\left(\sqrt{3}-1\right)\omega \phi_{1}\\ \left(\sqrt{3}-1\right)\omega\phi_{2}&\left(\sqrt{3}-1\right)\omega\phi_{1}&2 \phi_{3}\end{array}\right)\right.\right. 
\tag{3.21}\] \[\left.\left.+i\alpha_{i}\left(\begin{array}{ccc}2\phi_{1}&- \left(\sqrt{3}+1\right)\omega\phi_{3}&-\left(\sqrt{3}+1\right)\omega\phi_{2}\\ -\left(\sqrt{3}+1\right)\omega\phi_{3}&2\phi_{2}&-\left(\sqrt{3}+1\right) \omega\phi_{1}\\ -\left(\sqrt{3}+1\right)\omega\phi_{2}&-\left(\sqrt{3}+1\right)\omega\phi_{1}& 2\phi_{3}\end{array}\right)\right]\right.\] \[\left.+e^{-i\frac{\pi}{12}}\alpha_{3}^{\prime}Y_{2}\left( \begin{array}{ccc}0&-\phi_{3}&\phi_{2}\\ \phi_{3}&0&-\phi_{1}\\ -\phi_{2}&\phi_{1}&0\end{array}\right)\right\}\,.\] For the other three assignments of \(\psi\), \(\psi^{c}\) and \(\Phi\) which are all triplets of \(\Delta(27)\), we can analogously obtain the corresponding fermion mass matrices shown in table 1, and they are named as \(W_{\psi 2}\), \(W_{\psi 3}\) and \(W_{\psi 4}\) respectively. Note that the modular forms \(Y_{1}\) and \(Y_{2}\) in the mass matrices of \(W_{\psi 1}\sim W_{\psi 4}\) can be absorbed by the coupling constants for models without gCP symmetry, hence we essentially get the same fermion mass matrix and the modulus \(\tau\) plays no role. If gCP symmetry is imposed, the phases of couplings would be fixed and then the effect of the modular forms \(Y_{1}\) and \(Y_{2}\) can not be ignored. For the cases \(W_{\psi 3}\) and \(W_{\psi 4}\) in which \(\psi\), \(\psi^{c}\) and \(\Phi\) transform as triplet \(\mathbf{\bar{3}}\) under \(\Delta(27)\), the gCP symmetry enforces the couplings \(\alpha_{1,2,3}\) to satisfy \[\alpha_{2}=\frac{\omega^{2}}{2}(\sqrt{3}\,\alpha_{1}^{*}-\alpha_{1})\,,\qquad {\rm Arg}(\alpha_{3})=\frac{\pi}{12}\,\,\left({\rm mod}\,\pi\right). \tag{3.22}\] \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline \hline \(\mathcal{W}\) & \(\Delta(27)\rtimes S_{3}\) & \(Y_{\mathbf{r}}^{(k_{\mathrm{Y}})}\) & \(M_{\psi}\) & Constraints of gCP \\ \hline \(\mathcal{W}_{\psi 1}\) & \(\psi\), \(\psi^{c}\sim\mathbf{3^{0}}\), \(\Phi\sim\mathbf{3^{0}}\) & \(Y_{1}^{(k_{\mathrm{Y}})}\), \(Y_{1^{\prime}}^{(k_{\mathrm{Y}})}\) & \(M_{\psi}\) in Eq. (3.18) & Eq. (3.19) \\ \hline \(\mathcal{W}_{\psi 2}\) & \(\psi\), \(\psi^{c}\sim\mathbf{3^{0}}\), \(\Phi\sim\mathbf{3^{1}}\) & \(Y_{1}^{(k_{\mathrm{Y}})}\), \(Y_{1^{\prime}}^{(k_{\mathrm{Y}})}\) & \(M_{\psi}\left(Y_{1}\to Y_{2},Y_{2}\to Y_{1}\right)\) & Eq. (3.19) \\ \hline \(\mathcal{W}_{\psi 3}\) & \(\psi\), \(\psi^{c}\sim\mathbf{3^{0}}\), \(\Phi\sim\mathbf{3^{0}}\) & \(Y_{1}^{(k_{\mathrm{Y}})}\), \(Y_{1^{\prime}}^{(k_{\mathrm{Y}})}\) & \(M_{\psi}\) & Eq. (3.22) \\ \hline \(\mathcal{W}_{\psi 4}\) & \(\psi\), \(\psi^{c}\sim\mathbf{\bar{3}^{0}}\), \(\Phi\sim\mathbf{\bar{3}^{1}}\) & \(Y_{1^{\prime}}^{(k_{\mathrm{Y}})}\), \(Y_{1^{\prime}}^{(k_{\mathrm{Y}})}\) & \(M_{\psi}\left(Y_{1}\to Y_{2},Y_{2}\to Y_{1}\right)\) & Eq. (3.22) \\ \hline \(\mathcal{W}_{\psi 5}\) & \(\psi\sim\mathbf{3^{0}}\), \(\psi^{c}\sim\mathbf{\bar{3}^{0}}\), \(\Phi\sim\mathbf{\bar{3}^{0}}\), \(\Phi\sim\mathbf{2_{1,0}}\) & \(Y_{1}^{(k_{\mathrm{Y}})}\), \(Y_{1^{\prime}}^{(k_{\mathrm{Y}})}\) & \(M_{\psi}^{\prime}\) in Eq. (3.28) & Eq. (3.30) \\ \hline \(\mathcal{W}_{\psi 6}\) & \(\psi\sim\mathbf{3^{0}}\), \(\psi^{c}\sim\mathbf{\bar{3}^{0}}\), \(\Phi\sim\mathbf{2_{2,2}}\) & \(Y_{1}^{(k_{\mathrm{Y}})}\), \(Y_{1^{\prime}}^{(k_{\mathrm{Y}})}\) & \(D_{L}M_{\psi}^{\prime}\left(Y_{1}\to\omega^{2}Y_{1},Y_{2}\to i\omega Y_{2} \right)D_{R}\) & Eq. 
(3.30) \\ \hline \(\mathcal{W}_{\psi 7}\) & \(\psi\sim\mathbf{3^{0}}\), \(\psi^{c}\sim\mathbf{\bar{3}^{0}}\), \(\Phi\sim\mathbf{2_{3,2}}\) & \(Y_{1}^{(k_{\mathrm{Y}})}\), \(Y_{1^{\prime}}^{(k_{\mathrm{Y}})}\) & \(\begin{array}{c}\phi_{1}(\alpha_{1}Y_{1}+\alpha_{2}Y_{2})\mathrm{diag}( \omega,\omega^{2},1)\\ +\phi_{2}(\alpha_{1}Y_{1}-\alpha_{2}Y_{2})\mathrm{diag}(\omega,1,\omega^{2}) \end{array}\) & — \\ \hline \(\mathcal{W}_{\psi 8}\) & \(\psi\sim\mathbf{3^{0}}\), \(\psi^{c}\sim\mathbf{\bar{3}^{0}}\), \(\Phi\sim\mathbf{2_{4,2}}\) & \(Y_{1}^{(k_{\mathrm{Y}})}\), \(Y_{1^{\prime}}^{(k_{\mathrm{Y}})}\) & \(P_{23}D_{R}M_{\psi}^{\prime}(P_{23}D_{R})^{\dagger}\) & — \\ \hline \(\mathcal{W}_{\psi 9}\) & \(\psi\sim\mathbf{3^{0}}\), \(\psi^{c}\sim\mathbf{\bar{3}^{0}}\), \(\Phi\sim\mathbf{2_{1,1}}\) & \(Y_{2}^{(k_{\mathrm{Y}})}\) & \(M_{\psi}^{\prime\prime}\) in Eq. (3.34) & — \\ \hline \(\mathcal{W}_{\psi 10}\) & \(\psi\sim\mathbf{3^{0}}\), \(\psi^{c}\sim\mathbf{\bar{3}^{0}}\), \(\Phi\sim\mathbf{2_{1,2}}\) & \(Y_{2}^{(k_{\mathrm{Y}})}\) & \(M_{\phi}^{\prime\prime}\left(\phi_{1}\to\omega^{2}\phi_{1},Y_{4}\to-Y_{4}\right)\) & — \\ \hline \(\mathcal{W}_{\psi 11}\) & \(\psi\sim\mathbf{3^{0}}\), \(\psi^{c}\sim\mathbf{\bar{3}^{0}}\), \(\Phi\sim\mathbf{2_{2,0}}\) & \(Y_{2}^{(k_{\mathrm{Y}})}\) & \(\omega^{2}D_{L}M_{\psi}^{\prime\prime}D_{R}\) & \(\Im\alpha=0\) \\ \hline \(\mathcal{W}_{\psi 12}\) & \(\psi\sim\mathbf{3^{0}}\), \(\psi^{c}\sim\mathbf{\bar{3}^{0}}\), \(\Phi\sim\mathbf{2_{2,1}}\) & \(Y_{2}^{(k_{\mathrm{Y}})}\) & \(D_{L}M_{\phi}^{\prime\prime}\left(\phi_{1}\to\phi_{1},\phi_{2}\to\omega\phi_{2},Y _{4}\to-Y_{4}\right)D_{R}\) & \(\Im\alpha=0\) \\ \hline \(\mathcal{W}_{\psi 13}\) & \(\psi\sim\mathbf{3^{0}}\), \(\psi^{c}\sim\mathbf{\bar{3}^{0}}\), \(\Phi\sim\mathbf{2_{3,0}}\) & \(Y_{2}^{(k_{\mathrm{Y}})}\) & \(\alpha[\phi_{1}(Y_{3}-iY_{4})\mathrm{diag}(\omega,\omega^{2},1)]\) & — \\ \hline \(\mathcal{W}_{\psi 14}\) & \(\psi\sim\mathbf{3^{0}}\), \(\psi^{c}\sim\mathbf{\bar{3}^{0}}\), \(\Phi\sim\mathbf{2_{3,1}}\) & \(Y_{2}^{(k_{\mathrm{Y}})}\) & \(\alpha[\phi_{1}(Y_{3}+iY_{4})\mathrm{diag}(1,\omega,\omega^{2})\) & — \\ \hline \(\mathcal{W}_{\psi 15}\) & \(\psi\sim\mathbf{3^{0}}\), \(\psi^{c}\sim\mathbf{\bar{3}^{0}}\), \(\Phi\sim\mathbf{2_{4,0}}\) & \(Y_{2}^{(k_{\mathrm{Y}})}\) & \(P_{23}D_{L}^{\dagger}M_{\psi}^{\prime\prime}D_{R}^{\dagger}P_{23}\) & — \\ \hline \(\mathcal{W}_{\psi 16}\) & \(\psi\sim\mathbf{3^{0}}\), \(\psi^{c}\sim\mathbf{\bar{3}^{0}}\), \(\Phi\sim\mathbf{2_{4,1}}\) & \(Y_{2}^{(k_{\mathrm{Y}})}\) & \(P_{23}D_{L}^{\dagger}M_{\psi}^{\prime\prime}\left(\phi_{1}\to\omega^{2}\phi_{1},Y _{4}\to-Y_{4}\right)D_{R}^{\dagger}P_{23}\) & — \\ \hline \hline \end{tabular} \end{table} Table 1: The fermion mass matrices for different representation assignments of matter fields \(\psi\), \(\psi^{c}\) and flavon \(\Phi\), where \(D_{L}=\mathrm{diag}(1,\omega^{2},\omega^{2})\), \(D_{R}=\mathrm{diag}(\omega^{2},1,1)\) and \(P_{23}\) is the permutation matrix interchanging the 2nd and 3rd rows of the \(3\times 3\) identity matrix. The last column gives the constraints of gCP on the coupling constants. **(ii)**: \(\psi\sim{\bf 3^{a}}\,(\bar{\bf 3^{a}})\), \(\psi^{c}\sim\bar{\bf 3^{b}}\,({\bf 3^{b}})\), \(\Phi\sim{\bf 2_{i,m}}\) for \(\rho_{{\bf 2_{i,m}}}(A)=\rho_{{\bf 2_{i,m}}}(ST)\) For the second case, the matter fields \(\psi\) and \(\psi^{c}\) are assigned to triplets \({\bf 3}\,(\bar{\bf 3})\) and \(\bar{\bf 3}({\bf 3})\) of \(\Delta(27)\), respectively. 
From the Kronecker product \({\bf 3}\otimes\bar{\bf 3}=\sum_{r,s=0}^{2}{\bf 1_{r,s}}\) in Eq. (A.5), one sees that the flavon could be absent; if it is present in a model, it should transform as \({\bf 1_{0,0}}\) or a reducible doublet \({\bf 2_{i}}\) under \(\Delta(27)\). If there is no flavon or the flavon is invariant under \(\Delta(27)\), the mass matrix of the fermion \(\psi\) is proportional to a unit matrix. For the case of a doublet flavon \(\Phi=(\phi_{1},\phi_{2})^{T}\) transforming as \({\bf 2_{i,m}}\), in which the representation matrices of \(A\) and of the modular transformation \(ST\) are identical, the corresponding values of the indices \((i,m)\) can be found in Eq. (2.28). As an example, we analyze the superpotential and the corresponding mass matrix for the assignment \(\psi\sim{\bf 3^{0}}\), \(\psi^{c}\sim\bar{\bf 3}^{\bf 0}\) and \(\Phi\sim{\bf 2_{1,0}}\). Invariance under the action of \(\Delta(27)\) requires that the superpotential \({\cal W}_{D}\) be of the following form \[{\cal W}_{D}=\frac{1}{\Lambda}\sum_{\mathbf{r}}\left(Y_{\mathbf{r}}^{(k_{Y})}\big{[}c_{\mathbf{r},1}{\cal O}_{4}+c_{\mathbf{r},2}{\cal O}_{5}\big{]}\right)_{{\bf 1^{0}_{0,0}}}H_{u/d}\,, \tag{3.23}\] where the two \(\Delta(27)\) invariant combinations \({\cal O}_{4}\) and \({\cal O}_{5}\) take the following form \[{\cal O}_{4}=\phi_{1}(\psi_{1}^{c}\psi_{3}+\psi_{2}^{c}\psi_{1}+\psi_{3}^{c}\psi_{2})\,,\qquad{\cal O}_{5}=\phi_{2}(\psi_{1}^{c}\psi_{2}+\psi_{2}^{c}\psi_{3}+\psi_{3}^{c}\psi_{1})\,. \tag{3.24}\] From the transformation properties of \(\psi\), \(\psi^{c}\) and \(\Phi\) under the actions of \(S_{3}\), we find that the modular transformations of \({\cal O}_{4,5}\) under \(S\) and \(T\) are given by \[{\cal O}_{4}\stackrel{{ S}}{{\longrightarrow}}{\cal O}_{5},\qquad{\cal O}_{4}\stackrel{{ T}}{{\longrightarrow}}{\cal O}_{5}\,,\qquad{\cal O}_{5}\stackrel{{ S}}{{\longrightarrow}}{\cal O}_{4},\qquad{\cal O}_{5}\stackrel{{ T}}{{\longrightarrow}}{\cal O}_{4}\,. \tag{3.25}\] Therefore \({\cal O}_{4}\) and \({\cal O}_{5}\) can be arranged into the two singlets \({\bf 1}\) and \({\bf 1^{\prime}}\) of the \(S_{3}\) modular symmetry, \[{\cal O}_{4}^{\prime}={\cal O}_{4}+{\cal O}_{5} \sim {\bf 1},\qquad\qquad{\cal O}_{5}^{\prime}={\cal O}_{4}-{\cal O}_{5} \sim {\bf 1^{\prime}}\,. \tag{3.26}\] Consequently only the singlet modular forms \(Y_{\bf 1}^{(k_{Y})}(\tau)\) and \(Y_{\bf 1^{\prime}}^{(k_{Y})}(\tau)\) can combine with \({\cal O}_{4}^{\prime}\) and \({\cal O}_{5}^{\prime}\) respectively to form an EFG invariant superpotential, \[{\cal W}_{D}=\frac{1}{\Lambda}\Big{(}\alpha_{1}Y_{1}{\cal O}_{4}^{\prime}+\alpha_{2}Y_{2}{\cal O}_{5}^{\prime}\Big{)}H_{u/d}\,, \tag{3.27}\] which leads to the following mass matrix, \[M_{\psi}^{\prime} = \frac{v_{u,d}}{\Lambda}\left[\alpha_{1}Y_{1}\left(\begin{array}{ccc}0&\phi_{2}&\phi_{1}\\ \phi_{1}&0&\phi_{2}\\ \phi_{2}&\phi_{1}&0\end{array}\right)+\alpha_{2}Y_{2}\left(\begin{array}{ccc}0&-\phi_{2}&\phi_{1}\\ \phi_{1}&0&-\phi_{2}\\ -\phi_{2}&\phi_{1}&0\end{array}\right)\right] \tag{3.28}\] \[= \frac{v_{u,d}}{\Lambda}\left[\alpha_{1}^{\prime}\phi_{1}\left(\begin{array}{ccc}0&0&1\\ 1&0&0\\ 0&1&0\end{array}\right)+\alpha_{2}^{\prime}\phi_{2}\left(\begin{array}{ccc}0&1&0\\ 0&0&1\\ 1&0&0\end{array}\right)\right]\,,\] with \[\alpha_{1}^{\prime}=\alpha_{1}Y_{1}+\alpha_{2}Y_{2}\,,\qquad\alpha_{2}^{\prime}=\alpha_{1}Y_{1}-\alpha_{2}Y_{2}\,. \tag{3.29}\] Consequently the effect of the complex modulus \(\tau\) can be absorbed by the couplings in models without gCP.
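The rewriting in the second line of Eq. (3.28) with the effective couplings of Eq. (3.29) can be verified symbolically. The following sympy sketch checks the identity for generic values of the couplings, the modular forms and the flavon components.

```python
import sympy as sp

a1, a2, Y1, Y2, p1, p2 = sp.symbols('alpha1 alpha2 Y1 Y2 phi1 phi2')

# First line of Eq. (3.28)
M_first = a1*Y1*sp.Matrix([[0, p2, p1], [p1, 0, p2], [p2, p1, 0]]) \
        + a2*Y2*sp.Matrix([[0, -p2, p1], [p1, 0, -p2], [-p2, p1, 0]])

# Second line of Eq. (3.28), with the effective couplings of Eq. (3.29)
a1p, a2p = a1*Y1 + a2*Y2, a1*Y1 - a2*Y2
M_second = a1p*p1*sp.Matrix([[0, 0, 1], [1, 0, 0], [0, 1, 0]]) \
         + a2p*p2*sp.Matrix([[0, 1, 0], [0, 0, 1], [1, 0, 0]])

assert (M_first - M_second).expand() == sp.zeros(3, 3)
print("the two forms of M'_psi in Eq. (3.28) agree")
```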
If gCP invariance is included in the model, both couplings \(\alpha_{1}\) and \(\alpha_{2}\) would be real, i.e., \[\alpha_{1}=\alpha_{1}^{*}\,,\qquad\alpha_{2}=\alpha_{2}^{*}\,. \tag{3.30}\] Then the effect of the modular forms cannot be removed. The mass matrix and the gCP constraint are shown as case \(W_{\psi 5}\) in table 1. For the remaining three cases of a doublet flavon, \(\Phi\sim\mathbf{2_{2,2}}\), \(\Phi\sim\mathbf{2_{3,2}}\) and \(\Phi\sim\mathbf{2_{4,2}}\), the predictions for the fermion mass matrices are shown in table 1 and are labelled as \(W_{\psi 6}\), \(W_{\psi 7}\) and \(W_{\psi 8}\). In these cases the EFG constrains only the singlet modular forms \(Y_{\mathbf{1}}^{(k_{Y})}(\tau)\) and \(Y_{\mathbf{1^{\prime}}}^{(k_{Y})}(\tau)\) to appear in the superpotential, and the modular forms can be absorbed into the coupling constants in the scenario without gCP. The representation assignment \(\psi\sim\mathbf{3^{1}}\), \(\psi^{c}\sim\mathbf{\bar{3}^{1}}\) leads to the same mass matrix as that of \(\psi\sim\mathbf{3^{0}}\), \(\psi^{c}\sim\mathbf{\bar{3}^{0}}\). Furthermore, the fermion mass matrix for \(\psi\sim\mathbf{3^{0}}\left(\mathbf{3^{1}}\right)\), \(\psi^{c}\sim\mathbf{\bar{3}^{1}}\left(\mathbf{\bar{3}^{0}}\right)\) is related to that of \(\psi\sim\mathbf{3^{0}}\), \(\psi^{c}\sim\mathbf{\bar{3}^{0}}\) through the permutation \(Y_{1}\leftrightarrow Y_{2}\).

**(iii)**: \(\psi\sim\mathbf{3^{a}}\left(\mathbf{\bar{3}^{a}}\right)\), \(\psi^{c}\sim\mathbf{\bar{3}^{b}}\left(\mathbf{3^{b}}\right)\), \(\Phi\sim\mathbf{2_{i,m}}\) for \(\rho_{\mathbf{2_{i,m}}}(A)\neq\rho_{\mathbf{2_{i,m}}}(ST)\)

This case differs from case **(ii)** above in the representation assignment of the flavon \(\Phi\sim\mathbf{2_{i,m}}\). Although \(\Phi\) transforms as a reducible doublet \(\mathbf{2_{i,m}}\) under the EFG, the representation matrix of the modular transformation \(\rho_{\mathbf{2_{i,m}}}(ST)\) is not identical to the flavor transformation \(\rho_{\mathbf{2_{i,m}}}(A)\). The corresponding eight combinations of the indices \((i,m)\) are shown in Eq. (2.29). Similar to the previous cases, let us consider a model in which the matter fields \(\psi\) and \(\psi^{c}\) are assigned to the EFG triplets \(\mathbf{3^{0}}\) and \(\mathbf{\bar{3}^{0}}\) respectively, and the flavon \(\Phi=(\phi_{1},\phi_{2})^{T}\) transforms as \(\mathbf{2_{1,1}}\) under the EFG. The \(\Delta(27)\) invariant superpotential \(\mathcal{W}_{D}\) is of the same form as that of Eq. (3.23). For this assignment, the modular transformations of the \(\Delta(27)\) invariant contractions \(\mathcal{O}_{4,5}\) in Eq. (3.24) under \(S\) and \(T\) are \[\mathcal{O}_{4}\stackrel{{ S}}{{\longrightarrow}}\mathcal{O}_{5},\qquad\mathcal{O}_{4}\stackrel{{ T}}{{\longrightarrow}}\omega\mathcal{O}_{5}\,,\qquad\mathcal{O}_{5}\stackrel{{ S}}{{\longrightarrow}}\mathcal{O}_{4},\qquad\mathcal{O}_{5}\stackrel{{ T}}{{\longrightarrow}}\omega^{2}\mathcal{O}_{4}\,. \tag{3.31}\] One can check that \(\mathcal{O}_{4}\) and \(\mathcal{O}_{5}\) can be arranged into an \(S_{3}\) doublet \(\mathbf{2}\), i.e. \[\begin{pmatrix}\mathcal{O}_{4}+\omega\mathcal{O}_{5}\\ -i(\mathcal{O}_{4}-\omega\mathcal{O}_{5})\end{pmatrix}\sim\mathbf{2}\,.
\tag{3.32}\] Therefore only the modular form multiplet \(Y_{\mathbf{2}}^{(k_{Y})}=\left(Y_{3},Y_{4}\right)^{T}\) is relevant, and the eclectic invariant superpotential is given by \[\mathcal{W}_{D}=\frac{\alpha}{\Lambda}\left[Y_{3}\left(\mathcal{O}_{4}+\omega\mathcal{O}_{5}\right)-iY_{4}\left(\mathcal{O}_{4}-\omega\mathcal{O}_{5}\right)\right]H_{u/d}\,. \tag{3.33}\] Then one can read off the mass matrix of the fermion \(\psi\) as follows, \[M_{\psi}^{\prime\prime}=\frac{\alpha v_{u,d}}{\Lambda}\left(\begin{array}{ccc}0&\omega\phi_{2}(Y_{3}+iY_{4})&\phi_{1}(Y_{3}-iY_{4})\\ \phi_{1}(Y_{3}-iY_{4})&0&\omega\phi_{2}(Y_{3}+iY_{4})\\ \omega\phi_{2}(Y_{3}+iY_{4})&\phi_{1}(Y_{3}-iY_{4})&0\end{array}\right)\,. \tag{3.34}\] There is only an overall parameter \(\alpha\), which can be taken to be real whether or not the gCP symmetry is included. The superpotential and fermion mass matrix can be determined in a similar way for the other seven possible assignments of the doublet flavon \(\Phi\), and the results are summarized in table 1. It is remarkable that only the doublet modular forms contribute to the Yukawa couplings. If the matter fields \(\psi\) and \(\psi^{c}\) transform as \(\mathbf{3^{1}}\) and \(\mathbf{\bar{3}^{1}}\) respectively under the EFG while the representation assignment of the flavon \(\Phi\) remains unchanged, one would obtain the same superpotential and mass matrix. If one assigns the matter fields \(\psi\sim\mathbf{3^{a}}\) and \(\psi^{c}\sim\mathbf{\bar{3}^{b}}\) with \(a+b=1\), the \(\Delta(27)\) invariant combinations \(\mathcal{O}_{4,5}\) transform under \(S\) and \(T\) as follows, \[\mathcal{O}_{4}\stackrel{{ S}}{{\longrightarrow}}-\mathcal{O}_{5},\qquad\mathcal{O}_{4}\stackrel{{ T}}{{\longrightarrow}}-\omega\mathcal{O}_{5}\,,\qquad\qquad\mathcal{O}_{5}\stackrel{{ S}}{{\longrightarrow}}-\mathcal{O}_{4},\qquad\mathcal{O}_{5}\stackrel{{ T}}{{\longrightarrow}}-\omega^{2}\mathcal{O}_{4}\,, \tag{3.35}\] for \(\Phi=(\phi_{1},\phi_{2})^{T}\sim\mathbf{2_{1,1}}\). Consequently we can organize \(\mathcal{O}_{4}\) and \(\mathcal{O}_{5}\) into a doublet of the modular symmetry \(S_{3}\), \[\begin{pmatrix}\mathcal{O}_{4}-\omega\mathcal{O}_{5}\\ -i(\mathcal{O}_{4}+\omega\mathcal{O}_{5})\end{pmatrix}\sim\mathbf{2}\,. \tag{3.36}\] The corresponding fermion mass matrix can be obtained from that of Eq. (3.34) through the replacement \(\phi_{2}\to-\phi_{2}\). Moreover, the fermion mass matrix would be transposed if the representation assignments of \(\psi\) and \(\psi^{c}\) were interchanged, i.e. \(\psi\sim\mathbf{\bar{3}^{b}}\) and \(\psi^{c}\sim\mathbf{3^{a}}\). Above, we have performed a comprehensive analysis of the Dirac mass terms which are invariant under the eclectic flavor symmetry \(\Delta(27)\rtimes S_{3}\). If the field \(\psi^{c}\) is a SM singlet, for instance a right-handed neutrino or the combination of the left-handed leptons and the Higgs, a Majorana mass term is allowed and it is of the following form \[\mathcal{W}_{M}=\sum_{\boldsymbol{r},s}c_{\boldsymbol{r},s}\left(Y_{\boldsymbol{r}}^{(k_{Y})}\Phi\psi^{c}\psi^{c}\right)_{\mathbf{1^{0}_{0,0}},s}\,. \tag{3.37}\] By dropping the antisymmetric contributions, the Majorana mass matrix of \(\psi^{c}\) can be easily obtained from the cases in which \(\psi\) and \(\psi^{c}\) transform in the same way under the EFG.
For the triplet assignment \(\psi^{c}\sim\mathbf{3^{a}}\) or \(\mathbf{\bar{3}^{a}}\), the Majorana mass matrix of \(\psi^{c}\) can be obtained from the mass matrices of \(\mathcal{W}_{\psi 1}\sim\mathcal{W}_{\psi 4}\) in table 1 by taking \(\alpha_{3}=0\) and \(v_{u,d}/\Lambda=1\).

## 4 An example model based on the EFG \(\Delta(27)\rtimes S_{3}\)

In the following, we present a lepton model based on the EFG \(\Delta(27)\rtimes S_{3}\), in which a \(Z_{3}\) symmetry is employed to forbid unwanted operators. We formulate the model in the framework of the type-I seesaw mechanism with three RH neutrinos. We assign the three generations of LH lepton doublets \(L\) and of RH charged leptons to two triplets \(\mathbf{\bar{3}}\) of \(\Delta(27)\), while the RH neutrinos \(N^{c}\) furnish the three-dimensional irreducible representation \(\mathbf{3}\) of \(\Delta(27)\). The Higgs doublets \(H_{u}\) and \(H_{d}\) are assumed to transform trivially under \(\Delta(27)\), and their modular weights vanish. The fields of the model and their classification under the EFG \(\Delta(27)\rtimes S_{3}\) and \(Z_{3}\) are summarized in table 2. In the traditional discrete flavor symmetry approach, a number of scalar fields (flavons) are generally required to break the flavor symmetry [2, 4, 5]. The flavons are standard model singlets, and yet they transform non-trivially under the flavor symmetry group. The vacuum expectation values (VEVs) of the flavons should be aligned along particular directions in flavor space, and they typically break the traditional flavor symmetry down to certain abelian subgroups. In the present model, we introduce three flavons \(\phi\), \(\varphi\) and \(\chi\) which transform as \(\mathbf{\bar{3}}\), \(\mathbf{2_{2}}\) and \(\mathbf{3}\) under \(\Delta(27)\), respectively. We assume that the VEV of the flavon \(\phi\) breaks the traditional flavor symmetry \(\Delta(27)\) down to \(Z_{3}^{BAB^{2}}\equiv\left\{1,BAB^{2},BA^{2}B^{2}\right\}\), while the subgroup \(Z_{2}^{TST}\equiv\left\{1,TST\right\}\) of \(S_{3}\) is preserved by the vacuum of the flavons \(\varphi\) and \(\chi\). Hence the flavons \(\phi\), \(\varphi\) and \(\chi\) develop VEVs along the following directions: \[\langle\phi\rangle=(1,\omega,\omega^{2})^{T}v_{\phi},\qquad\langle\varphi\rangle=(1,1)^{T}v_{\varphi},\qquad\langle\chi\rangle=(1,x,1)^{T}v_{\chi}\,, \tag{4.1}\] where the parameter \(x\) is undetermined and generally complex. In the following numerical analysis, we shall take \(x\) to be real for simplicity. In our model, the phases of the VEVs \(v_{\phi}\), \(v_{\varphi}\) and \(v_{\chi}\) are unphysical since they are the overall phases of the charged lepton mass matrix, the neutrino Dirac mass matrix and the neutrino Majorana mass matrix, respectively. It is notoriously difficult to realize the vacuum alignment of flavons: one has to construct a rather complicated flavon potential, and additional symmetries such as \(U(1)_{R}\) and new fields are generally required. Both flavons and the complex modulus are present in the paradigm of EFG, and the interplay between them makes the dynamical determination of the flavon vacuum alignment even more difficult. Moreover, the modular invariant potential of the modulus \(\tau\) contains a large number of independent terms [31, 32, 33]. Hence it is very challenging to determine the modulus VEV from a dynamical principle. We will not address the vacuum alignment problem here, and we rely on some unknown vacuum selection mechanism.
Given the field content and the symmetry assignment in table 2, we find that the superpotential for the lepton masses, which is invariant under the eclectic flavor symmetry \(\Delta(27)\rtimes S_{3}\), is of the form \[\mathcal{W}=\mathcal{W}_{l}+\mathcal{W}_{\nu} \tag{4.2}\] with \[\mathcal{W}_{l} = \frac{\alpha}{\Lambda}\left(E^{c}L\phi Y_{\mathbf{1}}^{(6)} \right)_{(\mathbf{10},\mathbf{0},\mathbf{1}),1}H_{d}+\frac{\beta}{\Lambda} \left(E^{c}L\phi Y_{\mathbf{1}}^{(6)}\right)_{(\mathbf{10},\mathbf{0},\mathbf{1 }),2}H_{d}+\frac{\gamma}{\Lambda}\left(E^{c}L\phi Y_{\mathbf{1}^{\prime}}^{(6) }\right)_{(\mathbf{10},\mathbf{0},\mathbf{1})}H_{d}\,,\] \[\mathcal{W}_{\nu} = \frac{h}{\Lambda}\left(N^{c}L\varphi Y_{\mathbf{2}}^{(2)}\right) _{(\mathbf{10},\mathbf{0},\mathbf{1})}H_{u}+\frac{g_{1}}{2}\left(N^{c}N^{c} \chi\right)_{(\mathbf{10},\mathbf{0},\mathbf{1}),1}+\frac{g_{2}}{2}\left(N^{c }N^{c}\chi\right)_{(\mathbf{10},\mathbf{0},\mathbf{1}),2}\,, \tag{4.3}\] where \(\mathcal{W}_{l}\) is the Yukawa superpotential of the charged leptons, \(\mathcal{W}_{\nu}\) is the neutrino superpotential in type-I seesaw mechanism. The gCP symmetry is imposed on the model, thus the couplings should fulfill the following relations: \[\beta=\frac{\omega^{2}}{2}(\sqrt{3}\alpha^{*}-\alpha)\,,\qquad\text{Arg}( \gamma)=\frac{\pi}{12}\ \ (\text{mod}\,\pi)\,,\qquad g_{2}=\frac{\omega}{2}(\sqrt{3}g_{1}^{*}-g_{1})\,. \tag{4.4}\] From the results in table 1, we find that the charged lepton mass terms correspond to the case of \(\mathcal{W}_{\psi 3}\). With the vacuum configuration of \(\phi\) in Eq. (4.1), the charged lepton mass matrix is given by \[M_{l}=\frac{v_{d}v_{\phi}}{\Lambda}\left[Y_{\mathbf{1}}^{(6)}\left(\begin{array} []{ccc}\alpha&\beta\omega&\beta\omega^{2}\\ \beta\omega&\alpha\omega^{2}&\beta\\ \beta\omega^{2}&\beta&\alpha\omega\end{array}\right)+\gamma Y_{\mathbf{1}^{ \prime}}^{(6)}\left(\begin{array}{ccc}0&-\omega&\omega^{2}\\ \omega&0&-1\\ -\omega^{2}&1&0\end{array}\right)\right]\,.\] We see that the charged lepton sector involves a single flavon \(\phi\) whose VEV preserves the subgroup \(Z_{3}^{BAB}{}^{2}\). Therefore the hermitian combination \(M_{l}^{\dagger}M_{l}\) is invariant under the transformation \(L\to\rho_{\mathbf{\bar{3}}}(BAB^{2})L\) and it satisfies the identity \(\rho_{\mathbf{\bar{3}}}^{\dagger}(BAB^{2})M_{l}^{\dagger}M_{l}\rho_{\mathbf{ \bar{3}}}(BAB^{2})=M_{l}^{\dagger}M_{l}\). 
As a consequence, \(M_{l}^{\dagger}M_{l}\) is diagonalized by the following constant unitary matrix \[U_{l}=\frac{1}{\sqrt{3}}\left(\begin{array}{ccc}1&-\omega^{2}&1\\ \omega^{2}&-1&1\\ \omega&-\omega&1\end{array}\right)\,, \tag{4.5}\] \begin{table} \begin{tabular}{|c|c|c|c|c|c||c|c|} \hline \hline Fields & \(L\) & \(E^{c}\) & \(N^{c}\) & \(H_{u}\) & \(H_{d}\) & \(\phi\) & \(\varphi\) & \(\chi\) \\ \hline \(\text{SU}(2)_{L}\times\text{U}(1)_{Y}\) & \((\mathbf{2},-\frac{1}{2})\) & \((\mathbf{1},1)\) & \((\mathbf{1},0)\) & \((\mathbf{2},\frac{1}{2})\) & \((\mathbf{2},-\frac{1}{2})\) & \((\mathbf{1},0)\) & \((\mathbf{1},0)\) & \((\mathbf{1},0)\) \\ \hline \(\Delta(27)\rtimes S_{3}\) & \(\mathbf{\bar{3}^{0}}\) & \(\mathbf{\bar{3}^{0}}\) & \(\mathbf{\bar{3}^{0}}\) & \(\mathbf{10}_{\mathbf{0},\mathbf{0}}\) & \(\mathbf{10}_{\mathbf{0},\mathbf{0}}\) & \(\mathbf{\bar{3}^{0}}\) & \(\mathbf{2}_{\mathbf{2},\mathbf{0}}\) & \(\mathbf{3^{0}}\) \\ \hline Modular weight & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(6\) & \(4\) & \(0\) \\ \hline \(Z_{3}\) & \(\omega\) & \(\omega\) & \(1\) & \(1\) & \(1\) & \(\omega\) & \(\omega^{2}\) & \(1\) \\ \hline \hline \end{tabular} \end{table} Table 2: Transformation properties of matter fields, Higgs doublets and flavons under the EFG \(\Delta(27)\rtimes S_{3}\) and the auxiliary symmetry \(Z_{3}\). with \(U_{l}^{\dagger}M_{l}^{\dagger}M_{l}U_{l}={\rm diag}(m_{e}^{2},m_{\mu}^{2},m_{\tau}^ {2})\) and the charged lepton masses \[m_{e} = \left|(\alpha+2\beta)Y_{\bf 1}^{(6)}\right|\frac{v_{\phi}v_{d}}{ \Lambda},\] \[m_{\mu} = \left|(\alpha-\beta)Y_{\bf 1}^{(6)}+\sqrt{3}i\gamma Y_{\bf 1^{ \prime}}^{(6)}\right|\frac{v_{\phi}v_{d}}{\Lambda},\] \[m_{\tau} = \left|(\alpha-\beta)Y_{\bf 1}^{(6)}-\sqrt{3}i\gamma Y_{\bf 1^{ \prime}}^{(6)}\right|\frac{v_{\phi}v_{d}}{\Lambda}\,. \tag{4.6}\] In the neutrino sector, we see that the neutrino Dirac mass term corresponds to \({\cal W}_{\psi 11}\) of table 1 and the neutrino Majorana mass terms can be obtained from \({\cal W}_{\psi 1}\) by taking \(\alpha_{3}=0\). The Dirac neutrino mass matrix and the RH Majorana neutrino mass matrix read as \[M_{D} = \frac{hv_{\varphi}v_{u}}{\Lambda}\left(\begin{array}{ccc}0&Y_ {\bf 2,1}^{(2)}-iY_{\bf 2,2}^{(2)}&\omega(Y_{\bf 2,1}^{(2)}+iY_{\bf 2,2}^{(2)})\\ Y_{\bf 2,1}^{(2)}+iY_{\bf 2,2}^{(2)}&0&\omega(Y_{\bf 2,1}^{(2)}-iY_{\bf 2,2}^{(2)} )\\ \omega^{2}(Y_{\bf 2,1}^{(2)}-iY_{\bf 2,2}^{(2)})&\omega^{2}(Y_{\bf 2,1}^{(2)}+iY_{ \bf 2,2}^{(2)})&0\end{array}\right)\,,\] \[M_{N} = v_{\chi}\left(\begin{array}{ccc}g_{1}&g_{2}&g_{2}x\\ g_{2}&g_{1}x&g_{2}\\ g_{2}x&g_{2}&g_{1}\end{array}\right)\,, \tag{4.7}\] where \(Y_{\bf r,i}^{(k_{Y})}\) denotes the \(i\)th component of the modular form multiple \(Y_{\bf r}^{(k_{Y})}(\tau)\). It is notable that \(M_{D}\) is completely determined by the modulus \(\tau\) up to the overall scale \(hv_{u}v_{\varphi}/\Lambda\). The light neutrino mass matrix is then given by the seesaw formula \(M_{\nu}=-M_{D}^{T}M_{N}^{-1}M_{D}\). As explained early, the complex modulus \(\tau\) is treated as spurion and its value is freely varied in the fundamental domain \({\cal D}\,=\,\left\{\tau\in\mathbb{C}|-1/2\leq\Re(\tau)\leq 1/2,|\tau|\geq 1\right\}\) to adjust the agreement with the experimental data. 
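As a consistency check of the charged lepton sector, the short numpy sketch below builds \(M_{l}\) for random complex couplings \(\alpha\), \(\beta\), \(\gamma\) and random values of the modular forms \(Y_{\mathbf{1}}^{(6)}\), \(Y_{\mathbf{1^{\prime}}}^{(6)}\) (the overall factor \(v_{d}v_{\phi}/\Lambda\) is set to one), and verifies numerically that the constant matrix \(U_{l}\) of Eq. (4.5) diagonalizes \(M_{l}^{\dagger}M_{l}\) with the masses of Eq. (4.6). Note that the gCP relations of Eq. (4.4) are not needed for this property.

```python
import numpy as np

rng = np.random.default_rng(7)
w = np.exp(2j * np.pi / 3)

def rand_c():
    return complex(rng.normal(), rng.normal())

# Random couplings and modular form values; the overall factor v_d v_phi / Lambda is set to 1.
alpha, beta, gamma, Y1, Y2 = (rand_c() for _ in range(5))

Ml = Y1 * np.array([[alpha,       beta*w,      beta*w**2],
                    [beta*w,      alpha*w**2,  beta],
                    [beta*w**2,   beta,        alpha*w]]) \
   + gamma * Y2 * np.array([[0,    -w,   w**2],
                            [w,     0,  -1],
                            [-w**2, 1,   0]])

Ul = np.array([[1, -w**2, 1],
               [w**2, -1, 1],
               [w,   -w,  1]]) / np.sqrt(3)

D = Ul.conj().T @ Ml.conj().T @ Ml @ Ul

# Off-diagonal entries vanish for arbitrary couplings ...
assert np.allclose(D, np.diag(np.diag(D)), atol=1e-10)

# ... and the diagonal reproduces the squared masses of Eq. (4.6)
expected = np.abs([(alpha + 2*beta) * Y1,
                   (alpha - beta) * Y1 + np.sqrt(3) * 1j * gamma * Y2,
                   (alpha - beta) * Y1 - np.sqrt(3) * 1j * gamma * Y2])
assert np.allclose(np.sqrt(np.diag(D).real), expected, atol=1e-10)
print("U_l diagonalizes M_l^dagger M_l with masses", expected)
```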
Taking into account the gCP symmetry, we see that the lepton mass matrices depend on six dimensionless real parameters, \(\Re\tau\), \(\Im\tau\), \(\arg(\alpha)\), \(\gamma/|\alpha|\), \(\arg(g_{1})\) and \(x\), together with two overall scales \(\left|\alpha v_{d}v_{\phi}/\Lambda\right|\) and \(\left|h^{2}v_{u}^{2}v_{\varphi}^{2}/(g_{1}v_{\chi}\Lambda^{2})\right|\) which can be determined by the measured electron mass and the solar neutrino mass squared difference \(\Delta m_{21}^{2}\). In order to quantitatively determine how well the model can describe the experimental data on lepton masses and mixing parameters, we perform a conventional \(\chi^{2}\) analysis, and the \(\chi^{2}\) function is constructed with the data listed in table 3. We search for the minimum of the \(\chi^{2}\) function to obtain the best fit values of the free input parameters as well as the predictions for the lepton masses and mixing parameters. It is remarkable that the model can accommodate both the NO and the IO neutrino mass spectrum. A good agreement between the model and the experimental data can be achieved for the following values of the free parameters: \[\Re(\tau)=-0.0621\,(0.496),\qquad\Im(\tau)=0.998\,(0.965),\qquad\arg(\alpha)=0.100\pi\,(0.0833\pi),\] \[\gamma/|\alpha|=0.207\,(-0.778),\qquad\arg(g_{1})=0.119\,(0.0111),\qquad x=1.882\,(0.987),\] \[|\alpha|v_{d}v_{\phi}/\Lambda=739.4\,{\rm GeV}\,(159.3\,{\rm GeV})\,,\qquad h^{2}v_{u}^{2}v_{\varphi}^{2}/(|g_{1}|v_{\chi}\Lambda^{2})=0.815\,{\rm eV}\,(1.803\,{\rm eV})\,, \tag{4.8}\] for NO (IO). The minimum value of the \(\chi^{2}\) function is found to be \(\chi^{2}_{\rm min}=0.751\,(4.132)\). The best fit values of the lepton masses and mixing parameters are given by \[\sin^{2}\theta_{13}=0.02233\,(0.02216),\qquad\sin^{2}\theta_{12}=0.3041\,(0.3025),\qquad\sin^{2}\theta_{23}=0.4645\,(0.5945),\] \[\delta_{CP}=1.226\pi\,(1.498\pi)\,,\qquad\alpha_{21}=0.729\pi\,(1.998\pi),\qquad\alpha_{31}=1.923\pi\,(0.994\pi),\] \[m_{1}=1.173\,{\rm meV}\,(72.26\,{\rm meV}),\quad m_{2}=8.676\,{\rm meV}\,(72.77\,{\rm meV})\,,\quad m_{3}=50.13\,{\rm meV}\,(52.84\,{\rm meV}),\] \[\sum_{i=1}^{3}m_{i}=59.98\,{\rm meV}\,(197.9\,{\rm meV})\,,\qquad m_{\beta\beta}=1.298\,{\rm meV}\,(71.98\,{\rm meV})\,,\] \[m_{e}=0.511\,{\rm MeV}\,(0.511\,{\rm MeV}),\quad m_{\mu}=107.9\,{\rm MeV}\,(107.9\,{\rm MeV}),\quad m_{\tau}=1.837\,{\rm GeV}\,(1.836\,{\rm GeV})\,. \tag{4.9}\] We see that the value of \(\tau\) is very close to the self-dual point \(\tau=i\) for NO. Notice that the sum of the three neutrino masses is determined to be \(59.98\,\mathrm{meV}\) for NO, which is well below the most stringent bound \(\sum m_{\nu}<120\,\mathrm{meV}\) from Planck [36]. In the case of IO, we find \(\sum_{i=1}^{3}m_{i}=197.9\,\mathrm{meV}\), which is above the stringent Planck bound \(\sum m_{\nu}<120\,\mathrm{meV}\) [36], although it may still be allowed by more conservative cosmological limits. Moreover, the best fit value of the effective Majorana mass is \(m_{\beta\beta}=1.298\,\mathrm{meV}\,(71.98\,\mathrm{meV})\), which is compatible with the latest result \(m_{\beta\beta}<36-156\) meV of KamLAND-Zen [37]. The prediction of neutrinoless double beta decay for IO can be tested by next generation experiments such as LEGEND [38] and nEXO [39], which are expected to explore the full IO region. In order to show the viability and predictions of the model, we numerically scan over the parameter space of the model and require all the observables to lie in their experimentally preferred \(3\sigma\) regions.
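Each point of such a scan involves an evaluation of the seesaw formula. The sketch below illustrates that step with toy numerical inputs: they are NOT the best-fit values of Eq. (4.8), and the placeholder numbers for \(Y_{\mathbf{2},1}^{(2)}\), \(Y_{\mathbf{2},2}^{(2)}\) stand in for the actual modular form components at some \(\tau\). It imposes the gCP relation of Eq. (4.4) on \(g_{2}\), builds \(M_{D}\) and \(M_{N}\) of Eq. (4.7), and extracts the light neutrino masses from the singular values of \(M_{\nu}=-M_{D}^{T}M_{N}^{-1}M_{D}\).

```python
import numpy as np

w = np.exp(2j * np.pi / 3)

# Toy inputs only -- NOT the best-fit point of Eq. (4.8).  Y21 and Y22 are placeholders
# for the two components of the weight-2 doublet modular form Y_2^(2)(tau).
Y21, Y22 = 1.02 + 0.21j, -0.33 + 0.08j
dirac_scale = 0.1                                  # h v_u v_varphi / Lambda (arbitrary units)
g1 = 0.8 * np.exp(0.3j)
g2 = 0.5 * w * (np.sqrt(3) * np.conj(g1) - g1)     # gCP relation of Eq. (4.4)
x, vchi = 1.5, 1.0

plus, minus = Y21 + 1j * Y22, Y21 - 1j * Y22
MD = dirac_scale * np.array([[0,            minus,        w * plus],
                             [plus,         0,            w * minus],
                             [w**2 * minus, w**2 * plus,  0]])
MN = vchi * np.array([[g1,     g2,     g2 * x],
                      [g2,     g1 * x, g2],
                      [g2 * x, g2,     g1]])

# Type-I seesaw formula quoted below Eq. (4.7)
Mnu = -MD.T @ np.linalg.inv(MN) @ MD

# Light neutrino masses are the singular values of the complex symmetric matrix Mnu
masses = np.sort(np.linalg.svd(Mnu, compute_uv=False))
print("toy light neutrino masses:", masses)
```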
Then some interesting correlations among the input parameters and observables are obtained and the corresponding correlations among different observables are shown in figure 2 for the NO spectrum. As can be seen from table 2, all the lepton fields are assigned to triplets of \(\Delta(27)\), and the model doesn't contain a doublet flavon invariant under the extra symmetry \(Z_{3}\). From the general results about the Kahler potential in section 3.1.1, we know that there is no NLO corrections to the minimal Kahler potential in this model. The off-diagonal contributions of the Kahler metric arise at NNLO and they are suppressed by \(|\langle\Phi\rangle|^{2}/\Lambda^{2}\), where \(\Phi\) denotes any flavon of the model. Hence the contributions of the Kahler potential to the lepton masses and mixing parameters are suppressed by \(|\langle\Phi\rangle|^{2}/\Lambda^{2}\), they are small enough to be negligible. ## 5 Conclusion and outlook Usually the minimal Kahler potential is adopted in modular symmetry model building. However, it is not the most general one compatible with modular symmetry, the non-minimal and flavor-dependent terms are allowed. How to restrict the Kahler is an open question in modular symmetry. The top-down approach to modular flavor symmetry in string inspired constructions can give rise to both traditional flavor symmetry and modular symmetry. This results in the idea of eclectic flavor group which is the nontrivial product the modular and traditional flavor symmetries. The eclectic \begin{table} \begin{tabular}{|c|c|c||c|c|} \hline \hline \multirow{2}{*}{Observables} & \multicolumn{2}{c|}{NO} & \multicolumn{2}{c|}{IO} \\ \cline{2-5} & \(\mathrm{bf}\pm 1\sigma\) & \(3\sigma\) region & \(\mathrm{bf}\pm 1\sigma\) & \(3\sigma\) region \\ \hline \(\sin^{2}\theta_{13}\) & \(0.02225^{+0.00056}_{-0.00059}\) & \([0.02052,0.02398]\) & \(0.02223^{+0.00058}_{-0.00058}\) & \([0.02048,0.02416]\) \\ \(\sin^{2}\theta_{12}\) & \(0.303^{+0.012}_{-0.012}\) & \([0.270,0.341]\) & \(0.303^{+0.012}_{-0.011}\) & \([0.270,0.341]\) \\ \(\sin^{2}\theta_{23}\) & \(0.451^{+0.019}_{-0.016}\) & \([0.408,0.603]\) & \(0.569^{+0.016}_{-0.021}\) & \([0.412,0.613]\) \\ \(\delta_{CP}/\pi\) & \(1.289^{+0.20}_{-0.14}\) & \([0.8,1.944]\) & \(1.533^{+0.122}_{-0.161}\) & \([1.08,1.911]\) \\ \(\Delta m_{21}^{2}/\Delta m_{3\ell}^{2}\) & \(0.0294^{+0.00088}_{-0.00088}\) & \([0.0263,0.0331]\) & \(-0.0296^{+0.00064}_{-0.00088}\) & \([-0.0331,-0.0263]\) \\ \(m_{e}/m_{\mu}\) & \(0.004737^{+0.00004}_{-0.00004}\) & — & \(0.004737^{+0.00004}_{-0.00004}\) & — \\ \(m_{\mu}/m_{\tau}\) & \(0.05876^{+0.000465}_{-0.000465}\) & — & \(0.05876^{+0.000465}_{-0.000465}\) & — \\ \hline \hline \end{tabular} \end{table} Table 3: The global best fit values and \(1\sigma\) ranges and \(3\sigma\) ranges of the lepton mixing parameters and mass ratios, where the experimental data and errors of the lepton mixing parameters and neutrino masses are obtained from NuFIT5.2 with SK atmospheric data [34], the charged lepton mass ratios are taken from [35] with \(M_{\mathrm{SUSY}}=10\,\mathrm{TeV}\) and \(\tan\beta=5\). Note that \(\Delta m_{3\ell}^{2}=\Delta m_{31}^{2}>0\) for normal ordering (NO) and \(\Delta m_{3\ell}^{2}=\Delta m_{32}^{2}<0\) for inverted ordering (IO). flavor group can severely restrict both Kahler potential and superpotential. In the present work, we have studied the traditional flavor group \(\Delta(27)\) extended by the finite modular group \(\Gamma_{2}\cong S_{3}\), and the resulting EFG is \(\Delta(27)\rtimes S_{3}\). 
Note that \(S_{3}\) is a subgroup of the automorphism group of \(\Delta(27)\). The modular transformations \(S\), \(T\), \(TST\) correspond to outer automorphisms of \(\Delta(27)\), while the others are inner automorphisms. In order to consistently combine the modular symmetry \(S_{3}\) with the traditional flavor symmetry \(\Delta(27)\), we find that the eight nontrivial singlet representations of \(\Delta(27)\) should be arranged into four reducible doublets \(\mathbf{2_{i}}\) (\(i=1,\cdots,4\)), while the remaining three irreducible representations \(\mathbf{1_{0,0}}\), \(\mathbf{3}\) and \(\mathbf{\bar{3}}\) need not be extended. Taking the modular symmetry into account, we find that the EFG \(\Delta(27)\rtimes S_{3}\) has two one-dimensional representations \(\mathbf{1_{0,0}^{a}}\), twelve two-dimensional representations \(\mathbf{2_{i,m}}\) and four three-dimensional representations \(\mathbf{3^{a}}\) and \(\mathbf{\bar{3}^{a}}\). Furthermore, we extend the EFG \(\Delta(27)\rtimes S_{3}\) to include the gCP symmetry and give the explicit form of the gCP transformation matrices. We have performed a comprehensive analysis of the superpotential and Kahler potential which are invariant under the action of the EFG \(\Delta(27)\rtimes S_{3}\). We find that the Kahler potential is under control when the chiral superfields of the quarks/leptons are assigned to the trivial singlet or to triplets of \(\Delta(27)\), and the minimal Kahler potential is reproduced at leading order. The flavor-dependent terms of the Kahler potential are suppressed by \(\langle\Phi\rangle^{2}/\Lambda^{2}\) unless the model contains a \(\Delta(27)\) doublet flavon invariant under the auxiliary cyclic symmetries, where \(\Phi\) denotes a generic flavon. On the other hand, if the quark/lepton fields are assigned to a singlet plus a doublet of \(\Delta(27)\), the Kahler metric is diagonal but flavor-dependent, and the contributions to flavor observables are not negligible. Moreover, we analyze the fermion mass superpotentials for the various possible representation assignments of the matter fields and flavons; the predictions for the fermion mass matrices are summarized in table 1.

Figure 2: The predicted correlations among the input free parameters, neutrino mixing angles, CP violation phases and the effective mass in neutrinoless double beta decay.

Furthermore, we propose a bottom-up model for lepton masses and mixing based on the EFG \(\Delta(27)\rtimes S_{3}\). In contrast with top-down models, we freely assign the representations and modular weights of the fields, although the modular transformation is fixed by the transformation under the \(\Delta(27)\) flavor symmetry. We introduce an extra \(Z_{3}\) symmetry to forbid the undesired operator, and two triplet flavons and one doublet flavon are introduced to break the \(\Delta(27)\) flavor symmetry. It is assumed that the subgroup \(Z_{3}^{BAB^{2}}\) of \(\Delta(27)\) and \(Z_{2}^{TST}\) of \(S_{3}\) are preserved by the VEVs of the flavons in the charged lepton and neutrino sectors, respectively. When the gCP symmetry consistent with the EFG \(\Delta(27)\rtimes S_{3}\) is imposed, all six lepton masses and six mixing parameters depend on eight real input parameters. A comprehensive numerical analysis is performed, and we find that the model is in excellent agreement with the experimental data for certain values of the free parameters. The predictions for the sum of the three neutrino masses and for the effective mass \(m_{\beta\beta}\) of neutrinoless double beta decay are safely below the present upper limits.
In this work, we explicitly show it is not obligatory that all elements of finite modular group need correspond to outer automorphisms of the flavor symmetry group. Even some modular symmetry elements are inner automorphisms of the flavor symmetry group, we could still get nontrivial results and non-singlet modular forms could appear in the Yukawa couplings if the corresponding modular transformation matrices don't coincide with the flavor symmetry transformations. ## Acknowledgements CCL is supported by the National Natural Science Foundation of China under Grant Nos. 12005167, 12247103, and the Young Talent Fund of Association for Science and Technology in Shaanxi, China. GJD is supported by the National Natural Science Foundation of China under Grant Nos. 11975224, 11835013. ## Appendix A Traditional flavor symmetry \(\Delta(27)\) The group structure of the traditional flavor group \(\Delta(27)\) is \(\Delta(27)\cong(Z_{3}\times Z_{3})\rtimes Z_{3}\) which is a non-Abelian group of order \(27\) with GAP ID [27, 3] in GAP [29, 30]. In detail, \(\Delta(27)\) can be generated by two generators \(A\) and \(B\) obeying the relations \[A^{3}=B^{3}=(AB)^{3}=(AB^{2})^{3}=1\,.\] (A.1) The center of the traditional flavor group \(\Delta(27)\), denoted by \(Z(\Delta(27))\), is of the following form \[Z(\Delta(27))=\{1,BAB^{2}A^{2},ABA^{2}B^{2}\}\,,\] (A.2) which is a normal abelian \(Z_{3}\) subgroup of \(\Delta(27)\). The \(27\) group elements of \(\Delta(27)\) can be divided into the eleven conjugacy classes as follows \[1C_{1}=\{1\}\,, 3C_{3}^{(1)}=\left\{A^{2}B^{2},B^{2}A^{2},AB^{2}A\right\}\,,\] \[3C_{3}^{(2)}=\left\{A^{2}B,ABA,BA^{2}\right\}\,, 3C_{3}^{(3)}=\left\{A,BAB^{2},B^{2}AB\right\}\,,\] \[3C_{3}^{(4)}=\left\{AB^{2},BAB,B^{2}A\right\}\,, 3C_{3}^{(5)}=\left\{AB,BA,A^{2}BA^{2}\right\}\,,\] \[3C_{3}^{(6)}=\left\{A^{2},B^{2}A^{2}B,BA^{2}B^{2}\right\}\,, 3C_{3}^{(7)}=\left\{B^{2},AB^{2}A^{2},A^{2}B^{2}A\right\}\,,\] \[3C_{3}^{(8)}=\left\{B,A^{2}BA,ABA^{2}\right\}\,, 1C_{3}^{(1)}=\left\{BAB^{2}A^{2}\right\}\,, 1C_{3}^{(2)}=\left\{ABA^{2}B^{2}\right\}\,,\] (A.3) where \(kC_{n}\) denotes a conjugacy class which contains \(k\) elements with order \(n\). Since the number of conjugacy class is equal to the number of irreducible representation, \(\Delta(27)\) has eleven inequivalent irreducible representations which contain nine singlets labeled as \(\mathbf{1}_{\boldsymbol{(r,s)}}\)\((r,s=0,1,2)\) and two triplets labeled as \(\mathbf{3}\) and \(\mathbf{\bar{3}}\). In our working basis, the explicit forms of the generators \(A\) and \(B\) in the eleven irreducible representations of \(\Delta(27)\) are as follows \[\mathbf{1}_{\boldsymbol{r,s}} : \rho_{\mathbf{1}_{\boldsymbol{r,s}}}(A)=\omega^{r},\qquad\rho_{ \mathbf{1}_{\boldsymbol{r,s}}}(B)=\omega^{s}\,,\quad\text{with}\quad r,s=0,1,2\,,\] \[\mathbf{3} : \rho_{\mathbf{3}}(A)=\left(\begin{array}{ccc}0&1&0\\ 0&0&1\\ 1&0&0\end{array}\right),\qquad\rho_{\mathbf{3}}(B)=\left(\begin{array}{ccc}1 &0&0\\ 0&\omega&0\\ 0&0&\omega^{2}\end{array}\right)\,,\] \[\mathbf{\bar{3}} : \rho_{\mathbf{\bar{3}}}(A)=\left(\begin{array}{ccc}0&1&0\\ 0&0&1\\ 1&0&0\end{array}\right),\qquad\rho_{\mathbf{\bar{3}}}(B)=\left(\begin{array}{ cccc}1&0&0\\ 0&\omega^{2}&0\\ 0&0&\omega\end{array}\right)\,.\] (A.4) We see that \(\mathbf{3}\) and \(\mathbf{\bar{3}}\) are complex conjugate to each other. 
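The defining relations (A.1) and the central elements (A.2) can be verified directly from the explicit triplet matrices in Eq. (A.4); a short numerical check is given below.

```python
import numpy as np

w = np.exp(2j * np.pi / 3)
# Triplet representation matrices of Eq. (A.4)
A = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]], dtype=complex)
B = np.diag([1, w, w**2])

I3, mpow = np.eye(3), np.linalg.matrix_power
# Defining relations of Eq. (A.1): A^3 = B^3 = (AB)^3 = (AB^2)^3 = 1
for M in (A, B, A @ B, A @ B @ B):
    assert np.allclose(mpow(M, 3), I3)

# The central elements of Eq. (A.2) act as pure phases on the triplet.
Z = B @ A @ mpow(B, 2) @ mpow(A, 2)          # B A B^2 A^2
print(np.allclose(Z, w**2 * I3))              # True: proportional to the identity
```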
For a triplet \(\Phi=(\phi_{1},\phi_{2},\phi_{3})^{T}\sim\mathbf{3}\), its complex conjugate \(\Phi^{*}=(\phi_{1}^{*},\phi_{2}^{*},\phi_{3}^{*})^{T}\) transforms as \(\mathbf{\bar{3}}\) under \(\Delta(27)\). The character table of \(\Delta(27)\) and the transformation properties of the conjugacy classes and irreducible representations under the actions of the outer automorphisms \(u_{S}\), \(u_{T}\) and \(u_{K_{*}}\) are summarized in table 4, whose second line lists a representative element of each of the eleven conjugacy classes. From the character table, the Kronecker products between different irreducible representations read as
\[\mathbf{1}_{\boldsymbol{r,s}}\otimes\mathbf{1}_{\boldsymbol{r^{\prime},s^{\prime}}}=\mathbf{1}_{\boldsymbol{[r+r^{\prime}]},\,[s+s^{\prime}]},\qquad\mathbf{1}_{\boldsymbol{r,s}}\otimes\mathbf{3}=\mathbf{3},\qquad\mathbf{1}_{\boldsymbol{r,s}}\otimes\mathbf{\bar{3}}=\mathbf{\bar{3}}\,,\]
\[\mathbf{3}\otimes\mathbf{3}=\mathbf{\bar{3}}_{\boldsymbol{S,1}}\oplus\mathbf{\bar{3}}_{\boldsymbol{S,2}}\oplus\mathbf{\bar{3}}_{\boldsymbol{A}}\,,\qquad\mathbf{\bar{3}}\otimes\mathbf{\bar{3}}=\mathbf{3}_{\boldsymbol{S,1}}\oplus\mathbf{3}_{\boldsymbol{S,2}}\oplus\mathbf{3}_{\boldsymbol{A}}\,,\qquad\mathbf{3}\otimes\mathbf{\bar{3}}=\sum_{r,s=0}^{2}\mathbf{1}_{\boldsymbol{r,s}}\,,\] (A.5)
where \(r,s,r^{\prime},s^{\prime}=0,1,2\) and the integer \([n]\) stands for \(n\) mod 3. In the following, we present the CG coefficients in the chosen basis. All CG coefficients are reported in the form \(\alpha\otimes\beta\), where \(\alpha_{i}\) denotes the elements of the left basis vector \(\alpha\) and \(\beta_{j}\) stands for the elements of the right basis vector \(\beta\). Throughout, we adopt the convention \(\beta_{[3]}=\beta_{0}\equiv\beta_{3}\).
We first report the CG coefficients associated with the singlet representations \({\bf 1_{r,s}}\),
\[{\bf 1_{r,s}}\otimes{\bf 3}={\bf 3}\ \sim\ \alpha_{1}\left(\begin{array}{c}\beta_{[1-s]}\\ \omega^{r}\beta_{[2-s]}\\ \omega^{2r}\beta_{[3-s]}\end{array}\right),\qquad{\bf 1_{r,s}}\otimes{\bf\overline{3}}={\bf\overline{3}}\ \sim\ \alpha_{1}\left(\begin{array}{c}\beta_{[1+s]}\\ \omega^{r}\beta_{[2+s]}\\ \omega^{2r}\beta_{[3+s]}\end{array}\right)\,.\]
Finally, for the products of the triplet representations \({\bf 3}\) and \({\bf\overline{3}}\), we find
\[{\bf 3}\otimes{\bf\overline{3}}=\sum_{r,s=0}^{2}{\bf 1_{r,s}}\quad\mbox{with}\quad{\bf 1_{r,s}}:\ \alpha_{1}\beta_{[1-s]}+\omega^{-r}\alpha_{2}\beta_{[2-s]}+\omega^{r}\alpha_{3}\beta_{[3-s]}\,,\]
\[{\bf 3}\otimes{\bf 3}={\bf\overline{3}_{S_{1}}}\oplus{\bf\overline{3}_{S_{2}}}\oplus{\bf\overline{3}_{A}}\ \big({\bf\overline{3}}\otimes{\bf\overline{3}}={\bf 3_{S_{1}}}\oplus{\bf 3_{S_{2}}}\oplus{\bf 3_{A}}\big)\quad\mbox{with}\quad\left\{\begin{array}{l}{\bf\overline{3}_{S_{1}}}({\bf 3_{S_{1}}}):\left(\begin{array}{c}\alpha_{1}\beta_{1}\\ \alpha_{2}\beta_{2}\\ \alpha_{3}\beta_{3}\end{array}\right)\,,\\[2ex] {\bf\overline{3}_{S_{2}}}({\bf 3_{S_{2}}}):\left(\begin{array}{c}\alpha_{2}\beta_{3}+\alpha_{3}\beta_{2}\\ \alpha_{3}\beta_{1}+\alpha_{1}\beta_{3}\\ \alpha_{1}\beta_{2}+\alpha_{2}\beta_{1}\end{array}\right)\,,\\[2ex] {\bf\overline{3}_{A}}({\bf 3_{A}}):\left(\begin{array}{c}\alpha_{2}\beta_{3}-\alpha_{3}\beta_{2}\\ \alpha_{3}\beta_{1}-\alpha_{1}\beta_{3}\\ \alpha_{1}\beta_{2}-\alpha_{2}\beta_{1}\end{array}\right)\,.\end{array}\right.\]

\begin{table}
\begin{tabular}{c|ccccccccccc}
 & \(1C_{1}\) & \(3C_{3}^{(1)}\) & \(3C_{3}^{(2)}\) & \(3C_{3}^{(3)}\) & \(3C_{3}^{(4)}\) & \(3C_{3}^{(5)}\) & \(3C_{3}^{(6)}\) & \(3C_{3}^{(7)}\) & \(3C_{3}^{(8)}\) & \(1C_{3}^{(1)}\) & \(1C_{3}^{(2)}\) \\
\hline
 & \(1\) & \(A^{2}B^{2}\) & \(A^{2}B\) & \(A\) & \(AB^{2}\) & \(AB\) & \(A^{2}\) & \(B^{2}\) & \(B\) & \(BAB^{2}A^{2}\) & \(ABA^{2}B^{2}\) \\
\hline
\(\mathbf{1_{0,0}}\) & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
\(\mathbf{1_{0,1}}\) & 1 & \(\omega^{2}\) & \(\omega\) & 1 & \(\omega^{2}\) & \(\omega\) & 1 & \(\omega^{2}\) & \(\omega\) & 1 & 1 \\
\(\mathbf{1_{0,2}}\) & 1 & \(\omega\) & \(\omega^{2}\) & 1 & \(\omega\) & \(\omega^{2}\) & 1 & \(\omega\) & \(\omega^{2}\) & 1 & 1 \\
\(\mathbf{1_{1,0}}\) & 1 & \(\omega^{2}\) & \(\omega^{2}\) & \(\omega\) & \(\omega\) & \(\omega\) & \(\omega^{2}\) & 1 & 1 & 1 & 1 \\
\(\mathbf{1_{1,1}}\) & 1 & \(\omega\) & 1 & \(\omega\) & 1 & \(\omega^{2}\) & \(\omega^{2}\) & \(\omega^{2}\) & \(\omega\) & 1 & 1 \\
\(\mathbf{1_{1,2}}\) & 1 & 1 & \(\omega\) & \(\omega\) & \(\omega^{2}\) & 1 & \(\omega^{2}\) & \(\omega\) & \(\omega^{2}\) & 1 & 1 \\
\(\mathbf{1_{2,0}}\) & 1 & \(\omega\) & \(\omega\) & \(\omega^{2}\) & \(\omega^{2}\) & \(\omega^{2}\) & \(\omega\) & 1 & 1 & 1 & 1 \\
\(\mathbf{1_{2,1}}\) & 1 & 1 & \(\omega^{2}\) & \(\omega^{2}\) & \(\omega\) & 1 & \(\omega\) & \(\omega^{2}\) & \(\omega\) & 1 & 1 \\
\(\mathbf{1_{2,2}}\) & 1 & \(\omega^{2}\) & 1 & \(\omega^{2}\) & 1 & \(\omega\) & \(\omega\) & \(\omega\) & \(\omega^{2}\) & 1 & 1 \\
\(\mathbf{3}\) & 3 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \(3\omega^{2}\) & \(3\omega\) \\
\(\mathbf{\bar{3}}\) & 3 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \(3\omega\) & \(3\omega^{2}\) \\
\end{tabular}
\end{table} Table 4: The character table of \(\Delta(27)\), where \(\omega\) is the cube root of unity \(\omega=e^{2\pi i/3}\). The second line lists a representative element of each conjugacy class. The arrowed lines show the transformation of the irreducible representations and conjugacy classes of \(\Delta(27)\) under the actions of the outer automorphisms \(u_{S}\) (blue), \(u_{T}\) (also blue) and \(u_{K_{*}}\) (green).
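As a cross-check of the \(\mathbf{3}\otimes\mathbf{\bar{3}}\) contractions listed above, one can verify numerically that the combination labeled \(\mathbf{1_{r,s}}\) picks up the phases \(\omega^{r}\) and \(\omega^{s}\) under the generators \(A\) and \(B\), using the triplet matrices of Eq. (A.4). The short sketch below does exactly this for random triplet vectors.

```python
import numpy as np

w = np.exp(2j * np.pi / 3)
A3  = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]], dtype=complex)   # rho_3(A) = rho_3bar(A)
B3  = np.diag([1, w, w**2])                                        # rho_3(B)
B3b = np.diag([1, w**2, w])                                        # rho_3bar(B)

def one_rs(alpha, beta, r, s):
    """1_{r,s} component of 3 x 3bar, with the convention beta_[0] = beta_3 (0-based indices)."""
    i1, i2, i3 = (-s) % 3, (1 - s) % 3, (2 - s) % 3
    return alpha[0]*beta[i1] + w**(-r)*alpha[1]*beta[i2] + w**r*alpha[2]*beta[i3]

rng = np.random.default_rng(7)
a = rng.normal(size=3) + 1j * rng.normal(size=3)    # random alpha ~ 3
b = rng.normal(size=3) + 1j * rng.normal(size=3)    # random beta  ~ 3bar

for r in range(3):
    for s in range(3):
        c = one_rs(a, b, r, s)
        assert np.isclose(one_rs(A3 @ a, A3 @ b, r, s), w**r * c)    # picks up omega^r under A
        assert np.isclose(one_rs(B3 @ a, B3b @ b, r, s), w**s * c)   # picks up omega^s under B
print("all 3 x 3bar -> 1_{r,s} contractions transform as expected")
```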
Note that the action of \(u_{S}\) is identical to that of \(u_{T}\).

## Appendix B The finite modular group \(\Gamma_{2}\cong S_{3}\) and modular forms of level 2

The group \(\Gamma_{2}\cong S_{3}\) is the permutation group of three objects and contains 6 elements. It can be expressed in terms of two generators \(S\) and \(T\) which satisfy the multiplication rules [3]
\[S^{2}=T^{2}=(ST)^{3}=1\,.\] (B.1)
The six elements of \(\Gamma_{2}\cong S_{3}\) can be divided into three conjugacy classes
\[1C_{1}=\{1\}\,,\quad 3C_{2}=\{S,T,TST\},\quad 2C_{3}=\{ST,TS\}\,,\] (B.2)
where a conjugacy class is denoted by \(kC_{n}\), with \(k\) the number of elements it contains and the subscript \(n\) the order of those elements. The finite modular group \(S_{3}\) has two singlet representations \(\mathbf{1}\) and \(\mathbf{1^{\prime}}\), and one doublet representation \(\mathbf{2}\). In the present work, we shall work in the basis where the representation matrix of the generator \(T\) is diagonal. The representation matrices of the generators \(S\) and \(T\) in the three irreducible representations are taken to be
\[\mathbf{1} : \rho_{\mathbf{1}}(S)=1,\qquad\rho_{\mathbf{1}}(T)=1\,,\]
\[\mathbf{1^{\prime}} : \rho_{\mathbf{1^{\prime}}}(S)=-1,\qquad\rho_{\mathbf{1^{\prime}}}(T)=-1\,,\]
\[\mathbf{2} : \rho_{\mathbf{2}}(S)=-\frac{1}{2}\left(\begin{array}{cc}1&\sqrt{3}\\ \sqrt{3}&-1\end{array}\right),\qquad\rho_{\mathbf{2}}(T)=\left(\begin{array}{cc}1&0\\ 0&-1\end{array}\right)\,.\] (B.3)
The Kronecker products between the different irreducible representations can be obtained from the character table,
\[\mathbf{1}\otimes\mathbf{1^{\prime}}=\mathbf{1^{\prime}},\qquad\mathbf{1^{a}}\otimes\mathbf{2}=\mathbf{2},\qquad\mathbf{2}\otimes\mathbf{2}=\mathbf{1}\oplus\mathbf{1^{\prime}}\oplus\mathbf{2}\,,\] (B.4)
where \(a=0,1\) and we denote \(\mathbf{1^{0}}\equiv\mathbf{1}\) and \(\mathbf{1^{1}}\equiv\mathbf{1^{\prime}}\). We now list the CG coefficients in our working basis. For the product of the singlet \(\mathbf{1^{\prime}}\) with a doublet, we have
\[\mathbf{1^{\prime}}\otimes\mathbf{2}=\mathbf{2}\ \sim\ \alpha\left(\begin{array}{c}\beta_{2}\\ -\beta_{1}\end{array}\right)\,.\] (B.5)
The CG coefficients for the products involving the doublet representation \(\mathbf{2}\) are found to be
\[\mathbf{2}\otimes\mathbf{2}=\mathbf{1}\oplus\mathbf{1^{\prime}}\oplus\mathbf{2},\qquad\qquad\text{with}\qquad\left\{\begin{array}{l}\mathbf{1}\ =\alpha_{1}\beta_{1}+\alpha_{2}\beta_{2}\\ \mathbf{1^{\prime}}=\alpha_{1}\beta_{2}-\alpha_{2}\beta_{1}\\ \mathbf{2}\ =\left(\begin{array}{c}\alpha_{2}\beta_{2}-\alpha_{1}\beta_{1}\\ \alpha_{1}\beta_{2}+\alpha_{2}\beta_{1}\end{array}\right)\end{array}\right.\]
In this eclectic approach, the Yukawa couplings are modular forms, which are holomorphic functions of the complex modulus \(\tau\). In our model, all of the couplings entering the lepton masses must be modular forms of even weight and level 2. There are two linearly independent modular forms of the lowest non-trivial weight 2. They have been derived in Ref.
[40], and they can be written explicitly in terms of the Dedekind eta function as
\[Y_{1}(\tau) = \frac{i}{4\pi}\left[\frac{\eta^{\prime}(\tau/2)}{\eta(\tau/2)}+\frac{\eta^{\prime}((\tau+1)/2)}{\eta((\tau+1)/2)}-8\frac{\eta^{\prime}(2\tau)}{\eta(2\tau)}\right],\]
\[Y_{2}(\tau) = \frac{\sqrt{3}i}{4\pi}\left[\frac{\eta^{\prime}(\tau/2)}{\eta(\tau/2)}-\frac{\eta^{\prime}((\tau+1)/2)}{\eta((\tau+1)/2)}\right],\] (B.6)
which can be arranged into a doublet \({\bf 2}\) of \(S_{3}\), defined as
\[Y_{\bf 2}^{(2)}=\left(\begin{array}{c}Y_{1}(\tau)\\ Y_{2}(\tau)\end{array}\right)\,.\] (B.7)
The \(q\)-expansions of the modular forms \(Y_{1,2}(\tau)\) are given by
\[Y_{1}(\tau) = 1/8+3q+3q^{2}+12q^{3}+3q^{4}+18q^{5}+12q^{6}+24q^{7}+3q^{8}+39q^{9}+18q^{10}+\cdots\,,\]
\[Y_{2}(\tau) = \sqrt{3}q^{1/2}(1+4q+6q^{2}+8q^{3}+13q^{4}+12q^{5}+14q^{6}+24q^{7}+18q^{8}+20q^{9}+\cdots)\,.\] (B.8)
The linearly independent modular multiplets of higher weight can be obtained from tensor products of the lower weight modular multiplets. The explicit expressions of the modular multiplets of level \(N=2\) up to weight \(8\) are
\[Y_{\bf 1}^{(4)}=\left(Y_{\bf 2}^{(2)}Y_{\bf 2}^{(2)}\right)_{\bf 1}=(Y_{\bf 2,1}^{(2)})^{2}+(Y_{\bf 2,2}^{(2)})^{2}\,,\qquad Y_{\bf 2}^{(4)}=\left(Y_{\bf 2}^{(2)}Y_{\bf 2}^{(2)}\right)_{\bf 2}=\left(\begin{array}{c}(Y_{\bf 2,2}^{(2)})^{2}-(Y_{\bf 2,1}^{(2)})^{2}\\ 2Y_{\bf 2,1}^{(2)}Y_{\bf 2,2}^{(2)}\end{array}\right)\,,\]
\[Y_{\bf 1}^{(6)}=\left(Y_{\bf 2}^{(2)}Y_{\bf 2}^{(4)}\right)_{\bf 1}=Y_{\bf 2,1}^{(2)}Y_{\bf 2,1}^{(4)}+Y_{\bf 2,2}^{(2)}Y_{\bf 2,2}^{(4)},\quad Y_{\bf 1^{\prime}}^{(6)}=\left(Y_{\bf 2}^{(2)}Y_{\bf 2}^{(4)}\right)_{\bf 1^{\prime}}=Y_{\bf 2,1}^{(2)}Y_{\bf 2,2}^{(4)}-Y_{\bf 2,2}^{(2)}Y_{\bf 2,1}^{(4)}\,,\]
\[Y_{\bf 2}^{(6)}=\left(Y_{\bf 2}^{(2)}Y_{\bf 1}^{(4)}\right)_{\bf 2}=\left(\begin{array}{c}Y_{\bf 2,1}^{(2)}Y_{\bf 1}^{(4)}\\ Y_{\bf 2,2}^{(2)}Y_{\bf 1}^{(4)}\end{array}\right)\,,\qquad\qquad Y_{\bf 1}^{(8)}=\left(Y_{\bf 1}^{(4)}Y_{\bf 1}^{(4)}\right)_{\bf 1}=(Y_{\bf 1}^{(4)})^{2}\,,\]
\[Y_{\bf 2a}^{(8)}=\left(Y_{\bf 1}^{(4)}Y_{\bf 2}^{(4)}\right)_{\bf 2}=\left(\begin{array}{c}Y_{\bf 1}^{(4)}Y_{\bf 2,1}^{(4)}\\ Y_{\bf 1}^{(4)}Y_{\bf 2,2}^{(4)}\end{array}\right)\,,\qquad\qquad Y_{\bf 2b}^{(8)}=\left(Y_{\bf 2}^{(4)}Y_{\bf 2}^{(4)}\right)_{\bf 2}=\left(\begin{array}{c}(Y_{\bf 2,2}^{(4)})^{2}-(Y_{\bf 2,1}^{(4)})^{2}\\ 2Y_{\bf 2,1}^{(4)}Y_{\bf 2,2}^{(4)}\end{array}\right)\,.\]

## Appendix C The \(\Delta(27)\) invariant contractions of two doublets

In this section, we show the contractions of two \(\Delta(27)\) doublet fields \(\Phi=(\phi_{1},\phi_{2})^{T}\sim{\bf 2_{\it i,m}}\) and \(\Theta=(\theta_{1},\theta_{2})^{T}\sim{\bf 2_{\it j,n}}\), where \(i,j=1,\cdots,4\) and \(m,n=0,1,2\). The \(\Delta(27)\) invariant contraction \((\Phi\Theta)_{\bf\cal R}\) requires \(i=j\). Without loss of generality, we only consider the assignments of \(\Phi\) and \(\Theta\) with \(m\leq n\). There are then in total \(24\) different possible assignments for the fields \(\Phi\) and \(\Theta\).
The contractions for the \(24\) assignments are summarized as follows:
\[n=m : \left\{\begin{array}{l}(\Phi\Theta)_{\bf 1_{0,0}^{0}}=\phi_{1}\theta_{2}+\phi_{2}\theta_{1}\,,\\ (\Phi\Theta)_{\bf 1_{0,0}^{1}}=\phi_{1}\theta_{2}-\phi_{2}\theta_{1}\,,\end{array}\right.\]
\[n=m+1 : (\Phi\Theta)_{\bf 2_{0}}=\left(\begin{array}{c}\phi_{1}\theta_{2}+\omega^{2}\phi_{2}\theta_{1}\\ i\phi_{1}\theta_{2}-i\omega^{2}\phi_{2}\theta_{1}\end{array}\right)\,,\]
\[n=m+2 : (\Phi\Theta)_{\bf 2_{0}}=\left(\begin{array}{c}\phi_{1}\theta_{2}+\omega\phi_{2}\theta_{1}\\ -i\phi_{1}\theta_{2}+i\omega\phi_{2}\theta_{1}\end{array}\right)\,.\] (C.1)
When the two \(\Delta(27)\) doublet fields \(\Phi=(\phi_{1},\phi_{2})^{T}\) and \(\Theta=(\theta_{1},\theta_{2})^{T}\) transform as \({\bf 2_{\it i,m}}\) and \({\bf 2_{\it i,n}}\) under the eclectic flavor group \(\Delta(27)\rtimes S_{3}\), respectively, the \(\Delta(27)\) invariant contractions of the two fields are given by
\[n=m : \quad\left\{\begin{array}{l}(\Theta^{\dagger}\Phi)_{\bf 1_{0,0}^{0}}=\theta_{1}^{\dagger}\phi_{1}+\theta_{2}^{\dagger}\phi_{2}\,,\\ (\Theta^{\dagger}\Phi)_{\bf 1_{0,0}^{1}}=\theta_{1}^{\dagger}\phi_{1}-\theta_{2}^{\dagger}\phi_{2}\,,\end{array}\right.\]
\[n=m+1 : \quad(\Theta^{\dagger}\Phi)_{\bf 2_{0}}=\left(\begin{array}{c}\theta_{1}^{\dagger}\phi_{1}+\omega^{2}\theta_{2}^{\dagger}\phi_{2}\\ i\theta_{1}^{\dagger}\phi_{1}-i\omega^{2}\theta_{2}^{\dagger}\phi_{2}\end{array}\right)\,,\]
\[n=m+2 : \quad(\Theta^{\dagger}\Phi)_{\bf 2_{0}}=\left(\begin{array}{c}\theta_{1}^{\dagger}\phi_{1}+\omega\theta_{2}^{\dagger}\phi_{2}\\ -i\theta_{1}^{\dagger}\phi_{1}+i\omega\theta_{2}^{\dagger}\phi_{2}\end{array}\right)\,,\] (C.2)
which can be obtained from Eq. (C.1) through the replacements \(\theta_{1}\rightarrow\theta_{2}^{\dagger}\) and \(\theta_{2}\rightarrow\theta_{1}^{\dagger}\).
We have performed a systematic study of the eclectic flavor group \(\Delta(27)\rtimes S_{3}\), in which the traditional flavor symmetry \(\Delta(27)\) is extended by the finite modular symmetry \(S_{3}\). Consistency between \(\Delta(27)\) and \(S_{3}\) requires the eight nontrivial singlet representations of \(\Delta(27)\) to be arranged into four reducible doublets. The modular transformation matrices of the various \(\Delta(27)\) multiplets are determined, and the CP-like symmetries compatible with \(\Delta(27)\rtimes S_{3}\) are discussed. We study the form of the Kähler potential and superpotential consistent with \(\Delta(27)\rtimes S_{3}\) and present the corresponding fermion mass matrices. A bottom-up model of lepton masses and mixing based on \(\Delta(27)\rtimes S_{3}\) is proposed, and a numerical analysis is performed showing agreement with the experimental data.
2309.10081
Quantum Computational Complexity and Symmetry
Testing the symmetries of quantum states and channels provides a way to assess their usefulness for different physical, computational, and communication tasks. Here, we establish several complexity-theoretic results that classify the difficulty of symmetry-testing problems involving a unitary representation of a group and a state or a channel that is being tested. In particular, we prove that various such symmetry-testing problems are complete for BQP, QMA, QSZK, QIP(2), QIP_EB(2), and QIP, thus spanning the prominent classes of the quantum interactive proof hierarchy and forging a non-trivial connection between symmetry and quantum computational complexity. Finally, we prove the inclusion of two Hamiltonian symmetry-testing problems in QMA and QAM, while leaving it as an intriguing open question to determine whether these problems are complete for these classes.
Soorya Rethinasamy, Margarite L. LaBorde, Mark M. Wilde
2023-09-18T18:48:44
http://arxiv.org/abs/2309.10081v1
# Quantum Computational Complexity and Symmetry ###### Abstract Testing the symmetries of quantum states and channels provides a way to assess their usefulness for different physical, computational, and communication tasks. Here, we establish several complexity-theoretic results that classify the difficulty of symmetry-testing problems involving a unitary representation of a group and a state or a channel that is being tested. In particular, we prove that various such symmetry-testing problems are complete for BQP, QMA, QSZK, QIP(2), QIP\({}_{\text{EB}}\)(2), and QIP, thus spanning the prominent classes of the quantum interactive proof hierarchy and forging a non-trivial connection between symmetry and quantum computational complexity. Finally, we prove the inclusion of two Hamiltonian symmetry-testing problems in QMA and QAM, while leaving it as an intriguing open question to determine whether these problems are complete for these classes. ###### Contents * I Introduction * II Notions of Symmetry * II.1 Review of Existing Notions of Symmetries * II.2 Separably Extendible Symmetries * III Review of Quantum Computational Complexity Theory and Classes * III.1 BQP * III.2 QIP * III.3 QMA * III.4 QSZK * III.5 QIP\({}_{\text{EB}}\)(2) * III.6 QAM * IV Results: Symmetry-Testing Problems and Quantum Computational Complexity * IV.1 Testing \(G\)-Bose Symmetry of a State is BQP-Complete * IV.2 Testing \(G\)-Symmetry of a State Using Hilbert-Schmidt Norm is BQP-Complete * IV.3 Testing \(G\)-Bose Symmetry of the Output of a Channel is QMA-Complete * IV.4 Testing \(G\)-Symmetry of a State using Trace Norm is QSZK-Complete * IV.5 Testing \(G\)-Symmetry of a State using Fidelity is QSZK-Complete * IV.6 Testing \(G\)-Bose Symmetric Extendibility of a State is QIP\({}_{\text{EB}}\)(2)-Complete * IV.7 Testing \(G\)-Bose Symmetric Extendibility is QIP\({}_{\text{EB}}\)(2)-Complete * IV.8 Testing \(G\)-Bose Symmetric Extendibility of the Output of a Channel is QIP-Complete * IV.9 Testing Hamiltonian Symmetry Using Average Spectral Norm is in QAM * V Conclusion * A Error and Number of Samples in State-HS-Symmetry ## I Introduction Computational complexity theory was born from the Church-Turing thesis, which informally states that "any real-world computation can be translated into an equivalent computation involving a Turing machine." Turing machines soon became the standard method to measure the difficulty or computational complexity of a problem. However, they are classical machines and, as such, are limited by classical physics. With the advent of quantum mechanics as a prevalent model for explaining the universe, the question of whether Turing machines capture all that is computable arose naturally. To address this issue, a new computational model, called the quantum Turing machine, was developed to incorporate quantum mechanics [1, 2, 3]. This led to the development of quantum computational complexity theory, which classifies computational problems by means of quantum Turing machines [1]. Later on, it was shown that the quantum Turing machine and quantum circuit computational models are equivalent [1] (see also [25]), and the bulk of the larger research community's focus has been on the quantum circuit model since then. 
Quantum computational complexity theory provides an important contextualization for quantum algorithms; namely, determining the complexity class of a computational problem allows for a meaningful discussion of how difficult it is for a quantum computer to solve the problem as its scale increases [22, 23]. This classification allows us to clearly separate problems into broad categories, like those that are efficiently solvable on a quantum computer, those that are efficiently solvable on a classical computer, etc. The various classical and quantum complexity classes can be arranged into a hierarchy, which gives insight into their relative difficulty. For instance, an NP-hard problem is believed to be inefficiently solvable, but a PSPACE-hard problem is believed to be significantly more so. The complexity-theoretic perspective becomes relevant whenever attempting to solve computationally difficult tasks, such as in machine-learning applications. In this paper, we connect several major quantum complexity classes to a hierarchy of symmetry-testing tasks. Much interest has arisen in using symmetry in quantum information science for various applications, linked with its fundamental role in physics [14, 15]. To give a few examples, symmetries can be used to test for separability of pure states, as in [1, 1, 2, 3, 4, 5, 6]. In both classical and quantum machine learning tasks, symmetries can vastly improve performance by limiting the relevant search space [13, 14, 15, 16], and machine learning can help to identify hidden symmetries manifested only in certain coordinate systems [11]. Symmetries in POVMs have been utilized in state discrimination [11] and estimation [12] applications. Additionally, recent work has been conducted to test symmetries of states, channels, measurements [1, 15], Hamiltonians [10], and Lindbladians [15] using quantum algorithms. It is on these last three works that we expand, tying symmetry-testing language to a complexity-theoretic structure. In more detail, we provide a number of complexity-theoretic characterizations for symmetry-testing tasks. See Section III for a review of various complexity classes, as well as [22, 23] for much more detail. Table 1 and Figure 1 provide all the results and details of our paper at a glance, and Figure 1 places some of them in a containment diagram for ease of access. These results span a symmetry-testing problem complete for BQP to another complete for QIP. More specifically, for the classes BQP, QMA, QIP(2), QIP\({}_{\text{EB}}\)(2), QSZK, and QIP, we provide symmetry-testing problems that are complete for each class, meaning any promise problem in those classes can be mapped in polynomial time to a respective symmetry-testing problem for some group \(G\). In particular, we prove complexity-theoretic results for the various symmetry definitions given in [15]--for a given input state, testing for \(G\)-symmetry with respect to Hilbert-Schmidt norm is BQP-Complete, testing for \(G\)-Bose symmetry is BQP-Complete, testing for \(G\)-symmetry with respect to trace norm is QSZK-Complete, and testing for \(G\)-Bose symmetric extendibility is QIP(2)-Complete. The last aforementioned contribution, on QIP(2)-completeness, addresses a problem that has been open in quantum computational complexity theory for quite some time [21] -- namely, to identify a non-trivial problem complete for the class other than the known Close Image problem. 
In addition, we establish a symmetry-testing problem that is complete for QIP\({}_{\text{EB}}\)(2), and we show that testing whether there exists an input to a channel such that its output is \(G\)-Bose symmetric or \(G\)-Bose symmetric extendible is QMA-Complete and QIP-Complete, respectively. Furthermore, we show that two different versions of Hamiltonian symmetry-testing problems are in QMA and QAM. We accomplish these findings in the following ways. To establish containment of a given symmetry-testing problem in a given class, we either employ the algorithms put forward in [15, 15], which involve a verifier acting with or without a prover, or we propose new algorithms. To establish hardness of a given symmetry-testing problem, the key concept is to embed the circuits involved in a general computation either into the preparation of the state or channel being tested or into a unitary representation of a group. This approach to proving hardness results is common in the literature on quantum interactive proofs [22, 23, 24, 25, 26], as well as in a recent paper [10] establishing DQC1-hardness of a Hamiltonian symmetry-testing problem. The outlook and findings of our paper complement those of [1, 12] and [22, 23, 24, 25, 26], which connected entanglement theory and distinguishability, respectively, to quantum computational complexity theory. Here, in the same spirit, we connect symmetry to quantum computational complexity theory. To recall the prior developments, the authors of [15, 12] showed that, for most quantum complexity classes that involve interaction with a computationally-unbounded prover, including BQP, QMA, QSZK, QMA(2), and QIP, there exists a corresponding natural separability testing problem that is complete for the class. These findings are summarized in [12, Figure 1]. Similarly, in [22, 23, 24, 25, 26, 27], these authors showed that there exist natural distinguishability problems that are complete for the same aforementioned classes, as well as QIP(2), as summarized in [26, Figure 17]. Here, we show that there are multiple symmetry-testing problems that are complete for a range of quantum complexity classes. The rest of our paper is structured as follows. Section II begins with some preliminaries that are helpful for understanding the remainder of the work. In Section III, we review the class definitions for the relevant promise problems. Section IV contains a variety of complexity-theoretic results for different symmetry tests. In Section IV.1, we prove that testing \(G\)-Bose symmetry of a state \(\rho\) is BQP-Complete. In Section IV.2, we show that testing \(G\)-symmetry of a state, according to a Hilbert-Schmidt norm measure, is BQP-Complete. In Section IV.3, we go on to show that testing if there exists an input to a channel such that its output is \(G\)-Bose symmetric is QMA-Complete. Following this, Sections IV.4 and IV.5 show that the problems of testing \(G\)-symmetry of a state, according to a trace-norm measure and fidelity, respectively, are QSZK-Complete. In Section IV.6, testing if a state \(\rho\) is \(G\)-Bose symmetric extendible is shown to be \(\text{QIP}(2)\)-Complete; while in Section IV.8, we show that the channel version of the problem is QIP-Complete. In Section IV.7, we show that the problem of testing whether a state has a separable, \(G\)-Bose symmetric extension is \(\text{QIP}_{\text{EB}}(2)\)-Complete. 
Finally, in Sections IV.9 and IV.10, the problem of estimating the maximal and average spectral norm of the commutator between a Hamiltonian and a group representation \(\{U(g)\}_{g\in G}\) is shown to be in QMA and QAM, respectively. We conclude in Section V with a summary and some suggestions for future directions. ## II Notions of symmetry In this section, we review the notions of symmetry presented in [23, 21, 20] with respect to some finite group \(G\) and a corresponding unitary representation \(\{U(g)\}_{g\in G}\). We also introduce two notions of symmetry not previously considered in [23, 21, 20], which are related to and generalize the symmetry considered recently in [24]. Our task in this work will be to contextualize these symmetry definitions in a quantum complexity-theoretic framework. ### Review of Existing Notions of Symmetries **Definition 1** (\(G\)-symmetric): _Let \(G\) be a group with projective unitary representation \(\{U_{S}(g)\}_{g\in G}\), and let \(\rho_{S}\) be a state of system \(S\). The state \(\rho_{S}\) is symmetric with respect to \(G\)[23, 21] if_ \[\rho_{S}=U_{S}(g)\rho_{S}U_{S}(g)^{\dagger}\quad\forall g\in G. \tag{1}\] \(G\)-symmetry is the usual notion of symmetry considered in most physical contexts. For example, in [20, 21, 22], the authors use \(G\)-symmetric states in various quantum machine learning applications, primarily in classification algorithms where the labeling of the state should remain invariant. Additionally, testing the incoherence of a state in the vein of [1, 1] is a special case of a \(G\)-symmetry test where the group is the cyclic group of order \(|G|\). Expanding upon this definition, we recall the definition of \(G\)-Bose symmetry, a stronger notion of symmetry. \(G\)-Bose symmetry implies \(G\)-symmetry, though the reverse implication is not true in general. \(G\)-Bose symmetry checks if a state belongs to the symmetric subspace induced by the group representation. This more mathematical notion of symmetry has proven useful in deriving important results, such as the quantum de Finetti theorem [17]. As a practical application, a circuit construction for projecting onto the symmetric subspace corresponding to the standard symmetric group [2] has been used in a number of quantum computational tests of entanglement [14, 15, 16, 21]. We give the definition of \(G\)-Bose symmetry below. **Definition 2** (\(G\)-Bose-symmetric): _Let \(G\) be a group with unitary representation \(\{U_{S}(g)\}_{g\in G}\). A state \(\rho_{S}\) is Bose-symmetric with respect to \(G\) if_ \[\rho_{S}=U_{S}(g)\rho_{S}\quad\forall g\in G. \tag{2}\] _The condition in (2) is equivalent to the condition_ \[\rho_{S}=\Pi_{S}^{G}\rho_{S}\Pi_{S}^{G}, \tag{3}\] _where the projector \(\Pi_{S}^{G}\) is defined as_ \[\Pi_{S}^{G}\coloneqq\frac{1}{|G|}\sum_{g\in G}U_{S}(g). \tag{4}\] Both of the aforementioned symmetry notions can be expanded to scenarios in which the tester has limited access to the state of interest. For example, one can test whether, given a part of a state, there exists an extension that is symmetric. These notions lead to further, pertinent symmetry tests. For instance, when the group in question is the permutation group, \(G\)-Bose extendibility is relevant for detecting entanglement [15] and efficiently bounding quantum discord [17]. Similarly, \(G\)-symmetric extendible states have been studied in the context of entanglement distillability [16] and \(k\)-extendibility [16, 17, 2, 2, 2, 18]. 
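For a concrete feel for Definition 2, the following small numerical sketch checks the condition \(\rho=\Pi^{G}\rho\,\Pi^{G}\) of Eqs. (2)-(4) for the two-element group \(\{I,\mathrm{SWAP}\}\) acting on two qubits; the choice of group and test states here is only illustrative.

```python
import numpy as np

# Unitary representation {I, SWAP} of the order-two group on two qubits.
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=complex)
Pi_G = (np.eye(4) + SWAP) / 2                  # group-averaged projector of Eq. (4)

def is_bose_symmetric(rho, tol=1e-9):
    """Test the G-Bose symmetry condition rho = Pi^G rho Pi^G of Eq. (3)."""
    return np.allclose(rho, Pi_G @ rho @ Pi_G, atol=tol)

def dm(psi):
    return np.outer(psi, psi.conj())

phi_plus = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)   # lies in the symmetric subspace
ket01    = np.array([0, 1, 0, 0], dtype=complex)                # does not

print(is_bose_symmetric(dm(phi_plus)))   # True
print(is_bose_symmetric(dm(ket01)))      # False
```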
**Definition 3** (\(G\)-symmetric extendible): _Let \(G\) be a group with unitary representation \(\{U_{RS}(g)\}_{g\in G}\). A state \(\rho_{S}\) is \(G\)-symmetric extendible if there exists a state \(\omega_{RS}\) such that_ 1. _the state_ \(\omega_{RS}\) _is an extension of_ \(\rho_{S}\)_, i.e.,_ \[\operatorname{Tr}_{R}[\omega_{RS}]=\rho_{S},\] (5) 2. _the state_ \(\omega_{RS}\) _is_ \(G\)_-symmetric, in the sense that_ \[\omega_{RS}=U_{RS}(g)\omega_{RS}U_{RS}(g)^{\dagger}\qquad\forall g\in G.\] (6) **Definition 4** (\(G\)-Bose symmetric extendible): _A state \(\rho_{S}\) is \(G\)-Bose symmetric extendible (G-BSE) if there exists a state \(\omega_{RS}\) such that_ 1. _the state_ \(\omega_{RS}\) _is an extension of_ \(\rho_{S}\)_, i.e.,_ \[\operatorname{Tr}_{R}[\omega_{RS}]=\rho_{S},\] (7) 2. _the state_ \(\omega_{RS}\) _is Bose symmetric, i.e., satisfies_ \[\omega_{RS}=\Pi_{RS}^{G}\omega_{RS}\Pi_{RS}^{G},\] (8) _where_ \[\Pi_{RS}^{G}\coloneqq\frac{1}{|G|}\sum_{g\in G}U_{RS}(g).\] (9) In [14, Section 5], we discussed how each of the above symmetry definitions can be expressed as semi-definite programs (SDPs), thus implying that it is efficient to test for them when the state and unitary representations are given as matrices. That is, the runtime of these SDP algorithms is polynomial in the dimension of the states and unitaries. In this paper, we consider the complexity of testing these symmetries when circuit descriptions are given for preparing these states and executing these unitary representations on quantum computers. ### Separably Extendible Symmetries Let us finally introduce two other notions of symmetry, one of which represents a generalization of a symmetry recently considered in [15]. Before doing so, let us first recall that a bipartite state \(\rho_{AB}\) is separable with respect to the partition \(A,B\), denoted as \(\rho_{AB}\in\mathrm{SEP}(A\!:\!B)\), if it can be written in the following form [16]: \[\rho_{AB}=\sum_{x\in\mathcal{X}}p(x)\psi_{A}^{x}\otimes\phi_{B}^{x}, \tag{10}\] where \(\mathcal{X}\) is a finite alphabet, \(\{p(x)\}_{x\in\mathcal{X}}\) is a probability distribution, and \(\{\psi_{A}^{x}\}_{x\in\mathcal{X}}\) and \(\{\phi_{B}^{x}\}_{x\in\mathcal{X}}\) are sets of pure states. States that cannot be written in this form are entangled. **Definition 5** (\(G\)-symmetric separably extendible): _Let \(G\) be a group with projective unitary representation \(\{U_{RS}(g)\}_{g\in G}\), and let \(\rho_{S}\) be a state. The state \(\rho_{S}\) is \(G\)-symmetric separably extendible if there exists a state \(\omega_{RS}\) such that_ 1. _the state_ \(\omega_{RS}\) _is a separable extension of_ \(\rho_{S}\)_, i.e.,_ \[\mathrm{Tr}_{R}[\omega_{RS}] =\rho_{S},\] (11) \[\omega_{RS} \in\mathrm{SEP}(R\!:\!S),\] (12) 2. _the state_ \(\omega_{RS}\) _is_ \(G\)_-symmetric, in the sense that_ \[\omega_{RS}=U_{RS}(g)\omega_{RS}U_{RS}(g)^{\dagger}\qquad\forall g\in G.\] (13) **Definition 6** (\(G\)-Bose-sympm. separably extendible): _Let \(G\) be a group with unitary representation \(\{U_{RS}(g)\}_{g\in G}\), and let \(\rho_{S}\) be a state. The state \(\rho_{S}\) is \(G\)-Bose-symmetric separably extendible if there exists a state \(\omega_{RS}\) such that_ 1. _the state_ \(\omega_{RS}\) _is a separable extension of_ \(\rho_{S}\)_, i.e.,_ \[\mathrm{Tr}_{R}[\omega_{RS}] =\rho_{S},\] (14) \[\omega_{RS} \in\mathrm{SEP}(R\!:\!S),\] (15) 2. 
_the state_ \(\omega_{RS}\) _is Bose symmetric, i.e., satisfies_ \[\omega_{RS}=\Pi_{RS}^{G}\omega_{RS}\Pi_{RS}^{G},\] (16) _where_ \(\Pi_{RS}^{G}\) _is defined in (_9_)._ By comparing Definitions 3 and 4 with Definitions 5 and 6, respectively, we see that the main additional constraint in the latter definitions is that the extension is required to be a separable state. As such, when the state and unitary representations are given as matrices, this additional constraint makes the search for an extension more computationally difficult than those needed for Definitions 3 and 4, because optimizing over the set of separable states is computationally difficult [17, 18] and it is not possible to perform this search by means of SDPs [19]. Here, we consider the complexity of testing the symmetry in Definition 6 when the state and unitary representations are given as circuit descriptions. Let us comment briefly on the connection between Definition 6 and the symmetry considered in [15]. In [15], the goal was to test whether a given bipartite state \(\rho_{AB}\) is separable. It was shown that one can equivalently do so by testing whether there exists a separable extension \(\rho_{A^{\prime}AB}\in\mathrm{SEP}(A^{\prime}\!:\!AB)\) of \(\rho_{AB}\) that is Bose symmetric with respect to the unitary representation \(\{I_{A^{\prime}A},F_{A^{\prime}A}\}\) of the symmetric group of order two, where \(F_{AA^{\prime}}\) is the unitary swap operator. More concretely, the test checks whether there exists \(\rho_{A^{\prime}AB}\in\mathrm{SEP}(A^{\prime}\!:\!AB)\) such that \[\mathrm{Tr}_{A^{\prime}}[\rho_{A^{\prime}AB}] =\rho_{AB}, \tag{17}\] \[\rho_{A^{\prime}AB} =\Pi_{A^{\prime}A}\rho_{A^{\prime}AB}\Pi_{A^{\prime}A}, \tag{18}\] where \(\Pi_{A^{\prime}A}\coloneqq(I_{AA^{\prime}}+F_{AA^{\prime}})/2\). As such, this represents a non-trivial example of the symmetry presented in Definition 6. Quantum algorithms to test the symmetries in Definitions 1-4 were given in [14], and we provide a new quantum algorithm to test for the symmetry in Definition 6. In the following sections, we also discuss the complexity of several corresponding symmetry-testing problems. ## III Review of quantum computational complexity theory and classes Before we delve into the different results of this work, we present a short review of quantum computational complexity theory. As mentioned previously, comprehensive reviews can be found in [16, 17]. Complexity theory is the branch of study that broadly seeks to classify the difficulty of computational problems [1, 2]. This field of study has led to answers to several questions of the form: "Is Problem \(A\) harder to solve than Problem \(B\)?" or "Given a potential solution to Problem \(A\), is it efficient to verify its authenticity?" Broadly speaking, computational problems are placed into computational complexity classes. All problems within a class can be thought of as having similar complexity or difficulty. Examples of such complexity classes are P (the class of problems efficiently solvable on a classical computer), NP (the class of problems whose solutions can be verified efficiently), etc. Quantum complexity theory is a generalization of classical complexity theory where the input resources and the underlying computation are allowed to be quantum mechanical. 
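Returning briefly to the symmetric-extension conditions of Eqs. (17)-(18), a small numerical illustration may help: for the product (hence separable) state \(\rho_{AB}=|0\rangle\!\langle 0|_{A}\otimes|+\rangle\!\langle+|_{B}\), the extension obtained by copying the pure \(A\) part onto \(A^{\prime}\) satisfies both conditions. The example below is illustrative only and is not the verification procedure of the cited works.

```python
import numpy as np

def dm(psi):
    return np.outer(psi, psi.conj())

ket0 = np.array([1, 0], dtype=complex)
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)

# A separable (indeed product) state rho_AB and a candidate extension on A'AB.
rho_AB = np.kron(dm(ket0), dm(plus))
omega  = np.kron(dm(ket0), rho_AB)                 # |0><0|_{A'} (x) rho_AB

# Swap of A' and A, acting trivially on B, and the projector Pi_{A'A}.
SWAP = np.array([[1,0,0,0],[0,0,1,0],[0,1,0,0],[0,0,0,1]], dtype=complex)
F_ApA = np.kron(SWAP, np.eye(2))
Pi = (np.eye(8) + F_ApA) / 2

# Condition (17): tracing out A' recovers rho_AB.
omega_r = omega.reshape(2, 4, 2, 4)                # indices (A', AB ; A', AB)
print(np.allclose(np.trace(omega_r, axis1=0, axis2=2), rho_AB))   # True
# Condition (18): the extension is supported on the A'A symmetric subspace.
print(np.allclose(omega, Pi @ omega @ Pi))                         # True
```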
An im \begin{table} \begin{tabular}{|c|c|c|c|} \hline **Problem** & **Description** & **Mathematical Description** & **Complexity** \\ \hline State \(G\)-Bose & & & \\ Symmetry & Decide if a state is \(G\)-Bose symmetric & & \\ State \(G\)-Symmetry in & & & \\ Hilbert–Schmidt & & Hilbert–Schmidt norm & \(\frac{1}{|G|}\sum\limits_{g\in G}\left\|\left[|U(g),\rho\right]\right\|_{2}^{2}\) & \\ \hline Channel \(G\)-Bose & & & \\ Symmetry & Decide if the output of a channel with optimized input is \(G\)-Bose symmetric & & \\ State \(G\)-Symmetric & & & \\ Trace Distance & & & \\ Trace Distance & & & \\ \hline State \(G\)-Symmetric & & & \\ Fidelity & & using fidelity & \\ \hline State \(G\)-Bose & & & \\ Symmetric & Decide if a state is \(G\)-Bose & & \\ Extendibility & & & \\ \hline State Separable & & & \\ Extension \(G\)-Bose & & & \\ Symmetry & & & \\ \hline Channel \(G\)-Bose & & & \\ Symmetric & & & \\ Extendibility & & & \\ \hline Hamiltonian & & & \\ Symmetry Maximal & & & \\ Spectral Norm & & & \\ \hline Hamiltonian & & & \\ Symmetry Average & & & \\ Spectral Norm & & & \\ \hline \end{tabular} \end{table} Table 1: List of all complexity-theoretic results from this work. In this table, \(\rho\) and \(\sigma\) are mixed states, and \(\mathcal{N}\) is a quantum channel. \(\{U(g)\}_{g\in G}\) is a unitary representation of the group \(G\). The shorthands \(\text{Sym}_{G}\), \(\text{B-Sym}_{G}\), and \(\text{BSE}_{G}\) denote the sets of \(G\)-symmetric, \(G\)-Bose symmetric, and \(G\)-Bose-symmetric extendible states for the chosen unitary representation of \(G\), respectively (from Definitions 1, 2, and 4). \(\text{SEP}(R\!:\!S)\) denotes the set of separable states across the \(R,S\) partition. Finally, \(\Pi^{G}\) denotes the projector onto the symmetric subspace of the group representation, as defined in (4). portant example is BQP, the class of problems efficiently solvable using quantum computers. In anticipation of the results of this paper, we define two more concepts that will be essential - 'hardness' and 'completeness'. A problem is said to be hard for a computational class if it is at least as hard to solve as the hardest problem of the class. A problem is complete for a class if it is hard for the class, and additionally, belongs to the class. A property of complete problems is that every other problem in the class can be efficiently mapped to a complete problem. In other words, the ability to solve a complete problem for a class can be efficiently repurposed to solve any other problem in that class. Therefore, a complete problem indeed completely characterizes the difficulty of the class. Two different methods exist to show that a problem is hard for a given class. First, we pick another problem that is known to be complete for the class and efficiently map that problem to the problem of interest. Another method is to take the definition of the class itself and show that an arbitrary problem in the class can be efficiently mapped to the problem of interest. Another important concept needed to fully specify the complexity of a problem is a polynomial-time generated family of circuits. Given a classical description/encoding of a quantum circuit, \(x\in S\), where \(S\subseteq\{0,1\}^{*}\) is a set of binary strings, the set of quantum circuits \(\{Q_{x}\mid x\in S\}\) is said to be polynomial-time generated if there exists a Turing machine that takes in as input the string \(x\) and outputs an encoding of the quantum circuit \(Q_{x}\) in polynomial time. 
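As a toy illustration of a polynomial-time generated family (the gate names and encoding below are invented for the example and are not from the paper), one can think of a classical program that, given the input string \(x\), writes down a circuit description whose size grows only polynomially with \(|x|\):

```python
from typing import List, Tuple

def Q_x(x: str) -> List[Tuple[str, ...]]:
    """Toy generator for a circuit family {Q_x}: for an n-bit string x, emit a
    gate list preparing an (n+1)-qubit GHZ state.  The generator runs in time
    linear in |x|, so the family is polynomial-time generated in the sense above."""
    n = len(x)
    gates: List[Tuple[str, ...]] = [("H", "q0")]
    gates += [("CNOT", "q0", f"q{i + 1}") for i in range(n)]
    return gates

print(Q_x("0110"))
```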
This particular definition allows us to limit the power of the computational model to circuits that are "polynomially complex" by limiting the process by which such circuits are created. Lastly, we define promise problems. A promise problem can be thought of as a yes-no rewriting of a general decision problem. More concretely, a promise problem is a pair \(L=(L_{\text{yes}},L_{\text{no}})\), where \(L_{\text{yes}},L_{\text{no}}\) are subsets of all possible inputs such that \(L_{\text{yes}}\cap L_{\text{no}}=\emptyset\). The inputs of the two subsets are called yes-instances and no-instances, respectively. An algorithm is said to "decide" a promise problem if, given an input from \(L_{\text{yes}}\cup L_{\text{no}}\), it can determine to which subset the input belongs. Throughout this work, we will make reference to various established complexity classes via their archetypal, complete promise problems. In view of both conciseness and comprehensiveness, those relevant promise problems are presented here. ### Bqp The class of bounded-error quantum polynomial time (BQP) promise problems is often referred to as the class of problems efficiently solvable on a quantum computer [17, Chapter 4]. The classical analog of BQP is the class of bounded-error probabilistic polynomial time (BPP) problems, which is the class of problems efficiently solvable on a classical computer with access to random bits. A promise problem is a member of BQP if there exists an efficient quantum algorithm solving it in polynomial time with a success probability of at least \(2/3\). Here, we give the definition of BQP for convenience, wherein we restrict the circuits considered to be unitary circuits; it is known that the computational power of BQP does not change under this restriction. Let \(L=(L_{\text{yes}},L_{\text{no}})\) be a promise problem, \(\alpha,\beta:\mathbb{N}\rightarrow[0,1]\) arbitrary functions, and \(p\) a polynomial function. Then \(L\in\text{BQP}_{p}(\alpha,\beta)\) if there exists a polynomial-time generated family \(Q=\{Q_{n}:n\in\mathbb{N}\}\) of unitary circuits, where each circuit \(Q_{n}\) * the first \(n\) qubits are used for the input \(x\in L\), and the next \(p(n)\) input qubits are extra ancilla qubits that the verifier is allowed, * produces as output one decision qubit labeled by \(D\) and \(n+p(n)-1\) garbage qubits labeled by \(G\). In what follows, we write each \(Q_{n}\) as \(Q_{SA\to DG}\), thereby suppressing the dependence on the input length \(n=|x|\) and explicitly indicating the systems involved at the input and output of the unitary. In addition, the circuit \(Q_{n}\) has the following properties: 1. Completeness: For all \(x\in L_{\text{yes}}\), \(\Pr[Q\text{ accepts }x]\) Figure 1: List of complete symmetry-testing problems and the corresponding quantum complexity class. The notations are the same as described in Table 1. The cells are organized such that if a cell is connected to a cell above it, the complexity class for the lower cell is a subset of that for the higher cell. For example, QMA is a subset of QIP(2). \[\coloneqq\|(\langle 1|_{D}\otimes I_{G})Q_{SA\to DG}(|x \rangle_{S}\otimes|0\rangle_{A})\|_{2}^{2}\] \[\geq\alpha(|x|). \tag{19}\] 2. Soundness: For all \(x\in L_{\text{no}}\), \[\Pr[Q\text{ accepts }x]\leq\beta(|x|),\] (20) where acceptance is defined as obtaining the outcome one upon measuring the decision qubit register \(D\) of the state \(Q_{SA\to DG}(|x\rangle_{S}\otimes|0\rangle_{A})\). 
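The acceptance probability of Eqs. (19)-(20) can be computed directly for a toy verifier; the two-qubit example below, with a randomly chosen unitary standing in for \(Q_{SA\to DG}\), is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unitary(d):
    """Random unitary via QR decomposition (illustration only)."""
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

# Toy verifier on n + p(n) = 1 + 1 qubits; the first output qubit is the decision qubit D.
Q = random_unitary(4)
x_ket   = np.array([1, 0], dtype=complex)     # input |x> = |0>
ancilla = np.array([1, 0], dtype=complex)     # ancilla |0>
out = Q @ np.kron(x_ket, ancilla)

# Pr[Q accepts x] = ||(<1|_D (x) I_G) Q (|x> (x) |0>)||^2, as in Eq. (19).
proj1_D = np.kron(np.diag([0.0, 1.0]), np.eye(2))
print("Pr[Q accepts x] =", float(np.real(out.conj() @ proj1_D @ out)))
```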
Then \(\text{BQP}=\bigcup_{p}\text{BQP}_{p}(2/3,1/3)\), where the union is over every polynomial-bounded function \(p\). ### Qip Quantum interactive proof systems (QIP) denote a powerful complexity class in quantum computational complexity theory. Indeed, a landmark result of the field is \(\text{QIP}=\text{PSPACE}\)[15]. The interactive proof system model involves messages between a computationally-bounded verifier and a prover with limitless computational power. These interactions may consist of some number of rounds \(m\), in which case these models can be classified by the number of exchanges as \(\text{QIP}(m)\). After all the messages have been exchanged, the verifier makes a decision to either accept or reject based on these interactions. Thus, the class QIP refers to all such promise problems that can be framed in this manner. More formally, the definition both of \(\text{QIP}(m)\) and QIP are given in [11, 12] to be as follows: Let \(m\in\mathbb{N}\), and let \(\alpha,\beta:\mathbb{N}\rightarrow[0,1]\) be functions. Then let \(\text{QIP}(m,\alpha,\beta)\) denote the class of promise problems \(L\) for which there exists an \(m\)-message verifier \(V\) such that 1. for all \(x\in L\), \(\exists\) a prover \(P\) such that the pair \((V,P)\) accepts with probability at least \(\alpha(|x|)\), and, 2. for all \(x\notin L\), \(\forall\) provers \(P\), the pair \((V,P)\) accepts with probability at most \(\beta(|x|)\). Usually, interactive proof classes are denoted solely by the number of messages exchanged, \(\text{QIP}(m)\). An important finding for QIP is that \(\text{QIP}=\text{QIP}(3)\), which implies that no further computational power is afforded by increasing the number of messages exchanged beyond three [11]. A general \(\text{QIP}(3)\) algorithm can be seen in Figure 2. The problem of close images was the first QIP-Complete problem to be proposed [11], and it is stated as follows: **Definition 7** (Problem of Close Images): _For constants \(0\leq\beta<\alpha\leq 1\), the input consists of two polynomial-time computable quantum circuits that agree on the number of output qubits and realize the quantum channels \(\mathcal{N}_{1}\) and \(\mathcal{N}_{2}\). Decide which of the following holds:_ \[\text{Yes:}\quad\max_{\rho_{1},\rho_{2}}F(\mathcal{N}_{1}(\rho_{1}),\mathcal{ N}_{2}(\rho_{2}))\geq\alpha, \tag{21}\] _No:_ \[\max_{\rho_{1},\rho_{2}}F(\mathcal{N}_{1}(\rho_{1}),\mathcal{N}_{2}(\rho_{2}) )\leq\beta,\] (22) _where the optimization is over all input states \(\rho_{1}\) and \(\rho_{2}\)._ ### Qma The quantum Merlin-Arthur (QMA) class is equivalent to \(\text{QIP}(1)\); that is, this model consists of a single message exchanged between a computationally unbounded prover and a computationally limited verifier. The definition of QMA can be found in [11], reproduced here for convenience. Let \(L=(L_{\text{yes}},L_{\text{no}})\) be a promise problem, let \(p,q\) be polynomially-bounded functions, and let \(\alpha,\beta:\mathbb{N}\rightarrow[0,1]\) be functions. Then \(L\in\text{QMA}_{p,q}(\alpha,\beta)\) if there exists a polynomial-time generated family of unitary circuits \(Q=\{Q_{n}:n\in\mathbb{N}\}\), where each circuit \(Q_{n}\) * the first \(n\) qubits are used for the input \(x\in L\), the next \(p(n)\) input qubits are extra ancilla qubits that the verifier is allowed, and the last \(q(n)\) qubits are given by the prover, * produces as output one decision qubit labeled by \(D\) and \(n+p(n)+q(n)-1\) garbage qubits labeled by \(G\). 
As before, we write \(Q_{n}\) as \(Q_{SAP\to DG}\), thereby suppressing the dependence on the input length \(n=|x|\) and explicitly indicating the systems involved at the input and output of the unitary. In addition, the circuit \(Q_{n}\) has the following properties: 1. Completeness: For all \(x\in L_{\text{yes}}\), there exists a \(q(|x|)\)-qubit state \(\sigma_{P}\) such that \[\Pr[Q\text{ accepts }(x,\sigma)] =\langle 1|_{D}\operatorname{Tr}_{G}[\omega_{DG}]|1\rangle_{D}\] (23) \[\geq\alpha(|x|),\] (24) where \[\omega_{DG}\coloneqq Q_{n}(|x\rangle\!\langle x|_{S}\otimes|0\rangle\!\langle 0|_{A}\otimes\sigma_{P})Q_{n}^{\dagger}.\] (25) 2. Soundness: For all \(x\in L_{\text{no}}\) and every \(q(|x|)\)-qubit state \(\sigma_{P}\), the following inequality holds: \[\Pr[Q\text{ accepts }(x,\sigma)]\leq\beta(|x|).\] (26) Then \(\text{QMA}=\bigcup_{p,q}\text{QMA}_{p,q}(2/3,1/3)\), where the union is over all polynomial-bounded functions \(p\) and \(q\). ### Qszk The complexity class quantum statistical zero-knowledge (QSZK) gives a quantum analog of the classical statistical zero-knowledge class [11, 12], which can be phrased in terms of an interactive proof system. We reproduce the definition of QSZK here for convenience. Let \(V\) be a verifier and \(P\) a prover that acts on some input \(x\). Define the mixed state of the verifier and message qubits after \(j\) messages to be \(\rho_{V,M}(x,j)\). Then the pair \((V,P)\) is a quantum statistical zero-knowledge proof system for a promise problem \(L\) if 1. \((V,P)\) is an interactive proof system for \(L\), and 2. there exists a polynomial-time preparable set \(\{\sigma_{x,j}\}_{j}\) of states such that \[x\in L\Rightarrow\forall j\quad\left\lVert\sigma_{x,j}-\rho_{V,M}(x,j)\right\rVert_{1}\leq\delta(|x|),\] (27) for some \(\delta\) such that \(\delta(n)<1/p(n)\) for sufficiently large \(n\) and every polynomial \(p\). The completeness and soundness requirements of this class come from the underlying proof system; for the definition of QSZK, we restrict the completeness and soundness errors to be at most \(1/3\). For this class, as with many class definitions, it can be helpful to look at a QSZK-Complete promise problem. The quantum state distinguishability problem was originally proposed alongside the class definition in [23], and so it is a natural choice. The problem statement is as follows: **Definition 8** (Quantum State Distinguishability): _Let \(L=(L_{\mathrm{yes}},L_{\mathrm{no}})\) be a promise problem, and let \(\alpha\) and \(\beta\) be constants satisfying \(0\leq\beta<\alpha\leq 1\). Given two quantum circuits \(Q_{0}\) and \(Q_{1}\) acting on \(m\) qubits each and having \(k\) specified output qubits, let \(\rho_{i}\) denote the output mixed state obtained by running \(Q_{i}\) on an input \(|0\rangle^{\otimes m}\). Decide whether_ \[\text{Yes:}\quad\quad\frac{1}{2}\left\lVert\rho_{0}-\rho_{1}\right\rVert_{1}\geq\alpha, \tag{28}\] \[\text{No:}\quad\quad\frac{1}{2}\left\lVert\rho_{0}-\rho_{1}\right\rVert_{1}\leq\beta. \tag{29}\] In [23], it was shown that \((\alpha,\beta)\)-Quantum State Distinguishability is QSZK-Complete for \(0\leq\beta<\alpha^{2}\leq 1\). ### Qip(2) The complexity class QIP(2) denotes the set of promise problems decidable by a quantum interactive proof system in which two messages are exchanged between the prover and verifier. The formal definition can be inferred from Section III.2 by setting the number of messages to two, i.e., \(m=2\). A general QIP(2) algorithm can be seen in Figure 3.
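Referring back to Definition 8, the quantity being decided is the normalized trace distance between the two circuit outputs; a minimal numerical sketch, with two hand-picked single-qubit states standing in for the circuit outputs, is given below.

```python
import numpy as np

def trace_distance(rho0, rho1):
    """(1/2) || rho0 - rho1 ||_1, the quantity bounded in Eqs. (28)-(29)."""
    eigs = np.linalg.eigvalsh(rho0 - rho1)
    return 0.5 * float(np.sum(np.abs(eigs)))

# Stand-ins for the outputs of Q_0 and Q_1: a pure state and a slightly mixed version.
rho0 = np.diag([1.0, 0.0])
rho1 = 0.9 * rho0 + 0.1 * np.eye(2) / 2
print(trace_distance(rho0, rho1))   # 0.05
```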
We now reproduce the canonical QIP(2)-Complete problem [23, 24] as follows: **Definition 9** (Problem of Close Image): _Given is a circuit to realize a unitary extension \(U_{AE^{\prime}\to BE}\) of a channel \(\mathcal{N}_{A\to B}\), such that_ \[\mathcal{N}_{A\to B}(\omega_{A})=\\ \mathrm{Tr}_{E}[U_{AE^{\prime}\to BE}(\omega_{A}\otimes|0\rangle \!\langle 0|_{E^{\prime}})(U_{AE^{\prime}\to BE})^{\dagger}] \tag{30}\] _for every input state \(\omega_{A}\), and a circuit to realize a purification of the state \(\rho_{B}\). Decide which of the following holds:_ \[\text{Yes:}\quad\quad\max_{\sigma_{A}}F(\rho_{B},\mathcal{N}_{A \to B}(\sigma_{A}))\geq\alpha, \tag{31}\] \[\text{No:}\quad\quad\max_{\sigma_{A}}F(\rho_{B},\mathcal{N}_{A \to B}(\sigma_{A}))\leq\beta, \tag{32}\] _where the optimization is over every input state \(\sigma_{A}\)._ Note that the Problem of Close Image is different from the Problem of Close Images (see Definition 7). In the former, we bound the fidelity between a channel and a state, whereas in the latter, we bound the fidelity between two channels. ### Qip\({}_{\text{EB}}(2)\) The complexity class QIP\({}_{\text{EB}}(2)\) was introduced in [22] and represents a modification of QIP(2). By inspecting Figure 3 and recalling the Stinespring dilation theorem (see, e.g., [23]), we see that the prover's action in a QIP(2) protocol is equivalent to performing a quantum channel that has input system \(R\) and output system \(R^{\prime}\) (see also [11, Figure 1]). The idea behind QIP\({}_{\text{EB}}(2)\) is that the prover is constrained to performing Figure 2: A general QIP(3) algorithm. The unitaries \(P_{x}^{1}\) and \(P_{x}^{2}\) are implemented by an all-powerful prover, and the probability of measuring the decision qubit to be in the state \(|1\rangle\) is the acceptance probability of the algorithm. an entanglement-breaking channel. Such a channel has the following form [10]: \[\rho\rightarrow\sum_{x}\operatorname{Tr}[\mu_{x}\rho]\phi_{x}, \tag{33}\] where \(\{\mu_{x}\}_{x}\) is a rank-one positive operator-valued measure (i.e., each \(\mu_{x}\) is a rank-one positive semi-definite operator and \(\sum_{x}\mu_{x}=I\)) and \(\{\phi_{x}\}_{x}\) is a set of pure states. The canonical \(\operatorname{QIP}_{\mathrm{EB}}(2)\)-Complete problem is as follows [10, Theorem 11]: **Definition 10**: _Given circuits to generate a unitary extension of a channel \(\mathcal{N}_{G\to S}\) and a purification of a state \(\rho_{S}\), decide which of the following holds:_ \[\begin{array}{l}\text{Yes:}\quad\max_{\begin{subarray}{c}\{(p(x),\psi^{x}) \}_{x},\\ \{\varphi^{x}\}_{x},\\ \sum_{x}p(x)\psi^{x}_{S}=\rho_{S}\end{subarray}}\sum_{x}p(x)F(\psi^{x}_{S}, \mathcal{N}_{G\to S}(\varphi^{x}_{G}))\geq\alpha,\\ \text{No:}\quad\max_{\begin{subarray}{c}\{(p(x),\psi^{x})\}_{x},\\ \{\varphi^{x}\}_{x},\\ \sum_{x}p(x)\psi^{x}_{S}=\rho_{S}\end{subarray}}\sum_{x}p(x)F(\psi^{x}_{S}, \mathcal{N}_{G\to S}(\varphi^{x}_{G}))\leq\beta,\\ \end{array} \tag{34}\] \[\begin{array}{l}\text{$\sum_{x}p(x)\psi^{x}_{S}=\rho_{S}$}\end{array} \tag{35}\] _where the optimization is over every pure-state decomposition of \(\rho_{S}\), as \(\sum_{x}p(x)\psi^{x}_{S}=\rho_{S}\), and \(\left\{\varphi^{x}\right\}_{x}\) is a set of pure states._ ### Qam The quantum Arthur-Merlin (QAM) class was introduced in [14], and it can be understood as a variation of QMA in which the verifier and prover are given access to shared randomness in advance. 
It can also be understood as a restricted version of \(\operatorname{QIP}(2)\) in which the first message of the verifier is restricted to being a uniformly random classical bitstring. As such, the following containments hold: \(\operatorname{QMA}\subseteq\operatorname{QAM}\subseteq\operatorname{QIP}(2)\). Let us recall its definition here. Let \(L=(L_{\mathrm{yes}},L_{\mathrm{no}})\) be a promise problem, let \(p,q,r\) be polynomially-bounded functions, and let \(\alpha,\beta:\mathbb{N}\rightarrow[0,1]\) be functions. Then \(L\in\operatorname{QAM}_{p,q,r}(\alpha,\beta)\) if there exists a polynomial-time generated family of unitary circuits \(Q=\{Q_{n,y}:n\in\mathbb{N},y\in\mathcal{Y}\}\), where \(y\) is a uniformly random bitstring consisting of \(r(n)\) bits, so that \(\log_{2}|\mathcal{Y}|=r(n)\), and each circuit \(Q_{n,y}\) * the first \(n\) qubits are used for the input \(x\in L\), the next \(p(n)\) input qubits are extra ancilla qubits that the verifier is allowed, and the last \(q(n)\) qubits are given by the prover, * produces as output one decision qubit labeled by \(D\) and \(n+p(n)+q(n)-1\) garbage qubits labeled by \(G\). We write \(Q_{n,y}\) as \(Q^{y}_{SAP\to DG}\), thereby suppressing the dependence on the input length \(n=|x|\) and explicitly indicating the systems involved at the input and output of the unitary. We also use the shorthand \(Q^{y}\equiv Q^{y}_{SAP\to DG}\). In addition, each set \(\{Q_{n,y}\}_{y\in\mathcal{Y}}\) of circuits has the following properties: 1. Completeness: For all \(x\in L_{\mathrm{yes}}\), there exists a set \(\{\sigma^{y}_{P}\}_{y\in\mathcal{Y}}\) of \(q(|x|)\)-qubit states such that \[\frac{1}{|\mathcal{Y}|}\sum_{y\in\mathcal{Y}}\Pr[Q^{y}\text{ accepts }(x, \sigma^{y})]\] \[=\frac{1}{|\mathcal{Y}|}\sum_{y\in\mathcal{Y}}\langle 1|_{D} \operatorname{Tr}_{G}[\omega^{y}_{DG}]|1\rangle_{D}\] (36) \[\geq\alpha(|x|),\] (37) where \[\omega^{y}_{DG}:=Q^{y}(|x\rangle\!\langle x|_{S}\otimes|0\rangle\!\langle 0 |_{A}\otimes\sigma^{y}_{P})(Q^{y})^{\dagger}.\] (38) 2. Soundness: For all \(x\in L_{\mathrm{no}}\), and every set \(\{\sigma^{y}_{P}\}_{y\in\mathcal{Y}}\) of \(q(|x|)\)-qubit states, the following inequality holds: \[\frac{1}{|\mathcal{Y}|}\sum_{y\in\mathcal{Y}}\Pr[Q^{y}\text{ accepts }(x, \sigma^{y})]\leq\beta(|x|).\] (39) The acceptance probability \[\frac{1}{|\mathcal{Y}|}\sum_{y\in\mathcal{Y}}\Pr[Q^{y}\text{ accepts }(x, \sigma^{y})] \tag{40}\] Figure 3: A general \(\operatorname{QIP}(2)\) algorithm. The unitary \(P_{x}\) is implemented by an all-powerful prover, and the probability of measuring the decision qubit to be in the state \(|1\rangle\) is the acceptance probability of the algorithm. can be understood as the probability of acceptance conditioned on a fixed value of \(y\), which is then averaged over the shared uniform randomness (i.e., here we are applying the law of total probability). Then \(\mathrm{QAM}=\bigcup_{p,q,r}\mathrm{QAM}_{p,q,r}(2/3,1/3)\), where the union is over all polynomial-bounded functions \(p\), \(q\), and \(r\). ## IV Results: Symmetry-testing problems and quantum computational complexity ### Testing \(G\)-Bose Symmetry of a State is BQP-Complete In this section, we show that testing the \(G\)-Bose Symmetry of a state is BQP-Complete. We begin now by specifying this problem statement in precise terms. **Problem 1** (\((\alpha,\beta)\)-State-\(G\)-Bose-Symmetry): _Let \(\alpha\) and \(\beta\) be such that \(0\leq\beta<\alpha\leq 1\). 
Given are descriptions of a circuit \(U_{RS}^{\rho}\) that generates a purification of a state \(\rho_{S}\) and circuit descriptions of a representation \(\{U_{S}(g)\}_{g\in G}\) of a group \(G\). Decide which of the following holds:_ \[\text{Yes:}\quad\quad\mathrm{Tr}\big{[}\Pi_{S}^{G}\rho_{S}\big{]}\geq\alpha, \tag{41}\] \[\text{No:}\quad\quad\mathrm{Tr}\big{[}\Pi_{S}^{G}\rho_{S}\big{]}\leq\beta, \tag{42}\] _where the group representation projector \(\Pi_{S}^{G}\) is defined in (4)._ As observed in [14, Section 3.1], the measure \(\mathrm{Tr}\big{[}\Pi_{S}^{G}\rho_{S}\big{]}\) is a faithful symmetry measure, in the sense that it is equal to one if and only if the state \(\rho_{S}\) is Bose-symmetric. **Theorem 1**: _The promise problem State-\(G\)-Bose-Symmetry is BQP-Complete._ 1. \((\alpha,\beta)\)_-State-_\(G\)_-Bose-Symmetry is in BQP for all_ \(\beta<\alpha\)_, whenever the gap between_ \(\alpha\) _and_ \(\beta\) _is larger than an inverse polynomial in the input length._ 2. \((1-\varepsilon,\varepsilon)\)_-State-_\(G\)_-Bose-Symmetry is BQP-Hard, even when_ \(\varepsilon\) _decays exponentially in the input length._ _Thus, \((\alpha,\beta)\)-State-\(G\)-Bose-Symmetry is BQP-Complete for all \((\alpha,\beta)\) such that \(0\leq\beta<\alpha\leq 1\)._ **Remark 1**: _In the statement of Theorem 1, the first part indicates the largest range of parameters for which we can show that the problem is contained in BQP. Similarly, the second part indicates the largest range of parameters for which we can show that the problem is BQP-hard. Both of these parameter ranges include the case when \(\alpha\) and \(\beta\) are constants. As such, this leads to the final statement above that the problem is BQP-complete for constant values of \(\alpha\) and \(\beta\) satisfying the inequality constraint given. We present all subsequent theorems in a similar way._ **Proof of Theorem 1.** To show that the problem is BQP-Complete, we need to demonstrate two facts: first, that the problem is in BQP, and second, that it is BQP-Hard. Let us begin by proving that the problem is in BQP. In [14, Chapter 8] (see also [14, Algorithm 1]), an algorithm was proposed to test for \(G\)-Bose symmetry of a state \(\rho_{S}\) given a circuit description of a unitary that generates a purification of the state and circuit descriptions of a unitary representation of a group \(G\), \(\{U(g)\}_{g\in G}\). Since the algorithm can be performed efficiently, the problem is contained in BQP. Next, we show that any problem in the BQP class can be reduced to an instance of this problem. Consider a general BQP algorithm, as described in Section III.1; its acceptance and rejection probabilities are encoded in the state of the decision qubit, with the outcome one indicating acceptance. Now, we need to map this problem to an instance of State-\(G\)-Bose-Symmetry; i.e., using the circuit descriptions for a general BQP algorithm, we need to define a state \(\rho_{S^{\prime}}\) and a unitary representation \(\{U_{S^{\prime}}(g)\}_{g\in G}\), and also show how the symmetry-testing condition \(\mathrm{Tr}[\Pi_{S^{\prime}}^{G}\rho_{S^{\prime}}]\) can be written in terms of the BQP algorithm's acceptance probability. To this end, we define the group \(G\) to be the cyclic group on two elements \(C_{2}=\{I,V\}\) such that \(V^{2}=I\), where \(V\) is simply given by \[V_{D}=-Z_{D}, \tag{43}\] and the input state to be \[\rho_{D}=\mathrm{Tr}_{G}[Q_{SA\to DG}(|x\rangle\!\langle x|_{S}\otimes|0\rangle\!\langle 0|_{A})(Q_{SA\to DG})^{\dagger}]. 
\tag{44}\] As such, we are making the identification \(S^{\prime}\leftrightarrow D\) between the system label \(S^{\prime}\) of a general symmetry-testing problem and the system \(D\) for a BQP algorithm. The group representation and circuit to generate \(\rho_{D}\) are thus efficiently implementable. Furthermore, the state of interest is just the state of the decision qubit, and the group projector for the unitary representation above is given by \[\Pi_{D}^{G}=\frac{1}{2}(I_{D}-Z_{D})=|1\rangle\!\langle 1|_{D}. \tag{45}\] Furthermore, we find that the symmetry-testing condition \(\mathrm{Tr}[\Pi_{D}^{G}\rho_{D}]\) maps to the BQP algorithm's acceptance probability as follows: \[\mathrm{Tr}[\Pi_{D}^{G}\rho_{D}]\] \[=\mathrm{Tr}[|1\rangle\!\langle 1|_{D}\rho_{D}]\] \[=\mathrm{Tr}[(|1\rangle\!\langle 1|_{D}\otimes I_{G})\times\] \[\qquad(Q_{SA\to DG}(|x\rangle\!\langle x|_{S}\otimes|0\rangle\!\langle 0|_{A})(Q_{SA\to DG})^{\dagger})]\] \[=\|(\langle 1|_{D}\otimes I_{G})Q_{SA\to DG}|x\rangle_{S}|0\rangle_{A}\|^{2}_{2}. \tag{46}\] Comparing with (19), we observe that the acceptance probability of the BQP algorithm exactly matches the symmetry-testing condition of the constructed \(G\)-Bose symmetry-testing problem. As such, we have proven that any BQP problem can be efficiently mapped to a \(G\)-Bose symmetry test, concluding the proof. ### Testing \(G\)-Symmetry of a State Using Hilbert-Schmidt Norm is BQP-Complete In this section, we show that testing the \(G\)-symmetry of a state using the Hilbert-Schmidt norm is BQP-Complete, and it is thus emblematic of the class of problems efficiently solvable using quantum computers. **Problem 2** (\((\alpha,\beta)\)-State-HS-Symmetry): _Given are a circuit description of a unitary \(U_{RS}^{\rho}\) that generates a purification of the state \(\rho_{S}\) and circuit descriptions of a unitary representation \(\{U_{S}(g)\}_{g\in G}\) of a group \(G\). Let \(\alpha\) and \(\beta\) be such that \(0\leq\beta<\alpha\leq\gamma\), where_ \[\gamma\coloneqq 2\left(1-\frac{1}{|G|}\right). \tag{47}\] _Decide which of the following holds:_ \[\text{Yes:}\quad\frac{1}{|G|}\sum_{g\in G}\left\|[U(g),\rho]\right\|_{2}^{2}\leq\beta, \tag{48}\] \[\text{No:}\quad\frac{1}{|G|}\sum_{g\in G}\left\|[U(g),\rho]\right\|_{2}^{2}\geq\alpha, \tag{49}\] _where the Hilbert-Schmidt norm of a matrix \(A\) is defined as \(\left\|A\right\|_{2}\coloneqq\sqrt{\operatorname{Tr}[A^{\dagger}A]}\)._ As observed in [3, Section 1], the quantity in (48) is a faithful symmetry measure in the sense that it is equal to zero if and only if the state \(\rho\) is \(G\)-symmetric, as in Definition 1. The quantity \(\gamma\) in (47) arises as a natural upper bound for \(\frac{1}{|G|}\sum\limits_{g\in G}\left\|[U(g),\rho]\right\|_{2}^{2}\) because \[\frac{1}{|G|}\sum_{g\in G}\left\|[U(g),\rho]\right\|_{2}^{2}\] \[=\frac{1}{|G|}\sum_{g\in G}2\left(\operatorname{Tr}[\rho^{2}]-\operatorname{Tr}[\rho U(g)\rho U(g)^{\dagger}]\right)\] \[=\frac{1}{|G|}\sum_{g\in G,g\neq e}2\left(\operatorname{Tr}[\rho^{2}]-\operatorname{Tr}[\rho U(g)\rho U(g)^{\dagger}]\right)\] \[\leq\gamma, \tag{50}\] where the first equality follows from (51) below, the second equality holds because the \(g=e\) term vanishes, and the inequality follows because \(\operatorname{Tr}[\rho^{2}]\leq 1\) and \(\operatorname{Tr}[\rho U(g)\rho U(g)^{\dagger}]\geq 0\). 
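To make this bound concrete, the following minimal numerical sketch (an illustration added here, not a procedure from the works cited above) checks the commutator identity invoked in the first equality and confirms that the averaged measure never exceeds \(\gamma\); the dimension, random seed, and the cyclic-group representation by shift operators are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4

# Random density matrix rho = A A† / Tr[A A†].
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = A @ A.conj().T
rho /= np.trace(rho).real

# Unitary representation of the cyclic group C_d: powers of the shift X|j> = |j+1 mod d>.
X = np.roll(np.eye(d), 1, axis=0)
reps = [np.linalg.matrix_power(X, k) for k in range(d)]

# Averaged Hilbert-Schmidt asymmetry measure (1/|G|) sum_g ||[U(g), rho]||_2^2.
measure = np.mean([np.linalg.norm(U @ rho - rho @ U, 'fro') ** 2 for U in reps])

# Equivalent form 2(Tr[rho^2] - Tr[rho T_G(rho)]) with T_G the group twirl.
twirl = sum(U @ rho @ U.conj().T for U in reps) / len(reps)
measure_alt = 2 * (np.trace(rho @ rho) - np.trace(rho @ twirl)).real

gamma = 2 * (1 - 1 / len(reps))
print(measure, measure_alt, gamma)
assert np.isclose(measure, measure_alt) and measure <= gamma + 1e-12
```

Replacing `rho` by the pure state \(|0\rangle\!\langle 0|\) in this sketch reproduces the saturating example discussed next.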
Furthermore, the upper bound is saturated by choosing \(\rho\) to be \(|0\rangle\!\langle 0|\) in a \(d\)-dimensional space and the unitary representation to be the Heisenberg-Weyl shift operators \(\{X(x)\}_{x=0}^{d-1}\), such that \(X(x)|0\rangle=|x\rangle\). **Theorem 2**: _The promise problem State-HS-Symmetry is BQP-Complete._ 1. \((\alpha,\beta)\)_-State-HS-Symmetry is in BQP for all_ \(\beta<\alpha\)_. (It is implicit that the gap between_ \(\alpha\) _and_ \(\beta\) _is larger than an inverse polynomial in the input length.)_ 2. \((\gamma-\varepsilon,\varepsilon)\)_-State-HS-Symmetry is BQP-Hard, even when_ \(\varepsilon\) _decays exponentially in the input length._ _Thus, \((\alpha,\beta)\)-State-HS-Symmetry is BQP-Complete for all \((\alpha,\beta)\) such that \(0\leq\beta<\alpha\leq\gamma\)._ **Proof.** To show that the problem is BQP-Complete, we first show that it is in the BQP class and then show that it is BQP-Hard. For the first part, let us briefly recall the development from [3, Section 3.1]. Consider the following equalities: \[\left\|[U(g),\rho]\right\|_{2}^{2}\] \[=\left\|\rho U(g)-U(g)\rho\right\|_{2}^{2}\] \[=\operatorname{Tr}[\rho^{2}]+\operatorname{Tr}[(U(g)\rho U(g)^{\dagger})^{2}]-2\operatorname{Tr}[\rho U(g)\rho U(g)^{\dagger}]\] \[=2\left(\operatorname{Tr}[\rho^{2}]-\operatorname{Tr}[\rho U(g)\rho U(g)^{\dagger}]\right), \tag{51}\] where the second equality is due to the unitary invariance of the Hilbert-Schmidt norm. Thus, we see that \[\frac{1}{|G|}\sum_{g\in G}\left\|[U(g),\rho]\right\|_{2}^{2}\] \[=\frac{1}{|G|}\sum_{g\in G}2\left(\operatorname{Tr}[\rho^{2}]-\operatorname{Tr}[\rho U(g)\rho U(g)^{\dagger}]\right)\] \[=2\left(\operatorname{Tr}[\rho^{2}]-\operatorname{Tr}[\rho\mathcal{T}_{G}(\rho)]\right)\] \[=2\left(\operatorname{Tr}[\operatorname{SWAP}(\rho\otimes\rho)]-\operatorname{Tr}[\operatorname{SWAP}(\rho\otimes\mathcal{T}_{G}(\rho))]\right), \tag{52}\] where \(\mathcal{T}_{G}\) is the twirl channel given by \[\mathcal{T}_{G}(\cdot)\coloneqq\frac{1}{|G|}\sum_{g\in G}U(g)(\cdot)U(g)^{\dagger} \tag{53}\] and SWAP is the unitary swap operator. The two terms can be individually estimated by means of the destructive SWAP test [1]. To realize the twirl, one can pick an element \(g\) uniformly at random and apply \(U(g)\) to the state \(\rho\). Since the twirl and the SWAP test can be efficiently performed, it follows that the problem is in the BQP class. Next, we show that the problem is BQP-Hard by providing an efficient mapping from a general BQP problem to our problem of interest. Consider a general BQP algorithm as described in Section III.1. The output state of the BQP algorithm is given by \[Q_{SA\to DG}|x\rangle_{S}|0\rangle_{A}. \tag{54}\] Then, the acceptance and rejection probabilities of the BQP algorithm are given by \[p_{\text{acc}} =\left\|(\langle 1|_{D}\otimes I_{G})Q_{SA\to DG}|x\rangle_{S}|0\rangle_{A}\right\|_{2}^{2}, \tag{55}\] \[p_{\text{rej}} =1-p_{\text{acc}}\] \[=\left\|(\langle 0|_{D}\otimes I_{G})Q_{SA\to DG}|x\rangle_{S}|0\rangle_{A}\right\|_{2}^{2}. \tag{56}\] Now, we need to map this problem to an instance of State-HS-Symmetry; i.e., we need to define a state \(\rho\) and a unitary representation \(\{U(g)\}_{g\in G}\). 
To this end, let us define the group \(G\) to be the cyclic group on two elements, \(C_{2}=\{I,V\}\), such that \(V^{2}=I\), where \(V\) is given by \[V_{SAC}=(Q_{SA\to DG})^{\dagger}\,\text{CNOT}_{DC}\,Q_{SA\to DG}, \tag{57}\] and the input state to be \[\rho_{SAC}=|x\rangle\!\langle x|_{S}\otimes|0\rangle\!\langle 0|_{AC}. \tag{58}\] From (51), we see that \[\frac{1}{|G|}\sum_{g\in G}\left\|[U(g),\rho]\right\|_{2}^{2}\] \[=\frac{1}{|G|}\sum_{g\in G}2(\text{Tr}[\rho^{2}]-\text{Tr}[\rho U(g)\rho U(g)^{\dagger}])\] \[=1-\text{Tr}[\rho V\rho V^{\dagger}]\] \[=1-\left|(\langle x|_{S}\otimes\langle 0|_{AC})V(|x\rangle_{S}\otimes|0\rangle_{AC})\right|^{2}. \tag{59}\] To show the equivalence, we now expand \(V\) as follows: \[V(|x\rangle_{S}\otimes|0\rangle_{AC})=\\ (Q_{SA\to DG})^{\dagger}\,\text{CNOT}_{DC}\,Q_{SA\to DG}(|x\rangle_{S}\otimes|0\rangle_{AC}). \tag{60}\] Next, we insert an identity operator \(I_{D}\) to simplify: \[(Q_{SA\to DG})^{\dagger}\,\text{CNOT}_{DC}\,Q_{SA\to DG}(|x\rangle_{S}\otimes|0\rangle_{AC})\\ =(Q_{SA\to DG})^{\dagger}\,\text{CNOT}_{DC}(|0\rangle\!\langle 0|_{D}\otimes I_{GC}\\ +|1\rangle\!\langle 1|_{D}\otimes I_{GC})Q_{SA\to DG}(|x\rangle_{S}\otimes|0\rangle_{AC}). \tag{61}\] Expanding, this reduces to \[(Q_{SA\to DG})^{\dagger}(|0\rangle\!\langle 0|_{D}\otimes I_{GC}\\ +|1\rangle\!\langle 1|_{D}\otimes I_{G}\otimes X_{C})Q_{SA\to DG}(|x\rangle_{S}\otimes|0\rangle_{AC})\\ =(Q_{SA\to DG})^{\dagger}(|0\rangle\!\langle 0|_{D})Q_{SA\to DG}(|x\rangle_{S}\otimes|0\rangle_{AC})\\ +(Q_{SA\to DG})^{\dagger}(|1\rangle\!\langle 1|_{D})Q_{SA\to DG}(|x\rangle_{S}\otimes|0\rangle_{A}|1\rangle_{C}). \tag{62}\] Thus, by expanding \(p_{\text{rej}}\) as \[p_{\text{rej}} =\left\|(\langle 0|_{D}\otimes I_{G})Q_{SA\to DG}(|x\rangle_{S}\otimes|0\rangle_{A})\right\|_{2}^{2}\] \[=(\langle x|_{S}\otimes\langle 0|_{A})(Q_{SA\to DG})^{\dagger}(|0\rangle\!\langle 0|_{D})\times\] \[Q_{SA\to DG}(|x\rangle_{S}\otimes|0\rangle_{A}), \tag{63}\] we find that \[(\langle x|_{S}\otimes\langle 0|_{AC})V(|x\rangle_{S}\otimes|0\rangle_{AC})\] \[=(\langle x|_{S}\otimes\langle 0|_{A})(Q_{SA\to DG})^{\dagger}(|0\rangle\!\langle 0|_{D})\times\] \[Q_{SA\to DG}(|x\rangle_{S}\otimes|0\rangle_{A})\] \[=p_{\text{rej}}. \tag{64}\] We then finally see that \[q\coloneqq\frac{1}{|G|}\sum_{g\in G}\left\|[U(g),\rho]\right\|_{2}^{2}=1-p_{\text{rej}}^{2}. \tag{65}\] Thus, given a method to estimate \(q\) within additive error \(\varepsilon\), we can estimate \(p_{\text{rej}}\) within an additive error of \(\sqrt{\varepsilon}\). A proof of this can be found in Appendix A. We can then estimate \(p_{\text{acc}}=1-\sqrt{1-q}\) within an additive error of \(\sqrt{\varepsilon}\) as well. As such, a general BQP problem can be efficiently mapped to our problem of interest, showing that State-HS-Symmetry is BQP-Hard. This, along with the fact that the problem lies in BQP, completes the proof of BQP-Completeness. ### Testing \(G\)-Bose Symmetry of the Output of a Channel is QMA-Complete In this section, we show that testing the \(G\)-Bose symmetry of the output of a channel with optimized input is QMA-Complete. **Problem 3** (\((\alpha,\beta)\)-Channel-\(G\)-Bose-Symmetry): _Let \(\alpha\) and \(\beta\) be such that \(0\leq\beta<\alpha\leq 1\). 
Given is a circuit description of a unitary \(U_{BD^{\prime}\to SD}^{\mathcal{N}}\) that realizes a unitary dilation of a channel_ \[\mathcal{N}_{B\to S}(\cdot)\coloneqq\\ \text{Tr}_{D}[U_{BD^{\prime}\to SD}^{\mathcal{N}}((\cdot)_{B} \otimes|0\rangle\!\langle 0|_{D^{\prime}})(U_{BD^{\prime}\to SD}^{\mathcal{N}})^{ \dagger}] \tag{66}\] _and circuit descriptions of a unitary representation \(\{U_{S}(g)\}_{g\in G}\) of a group \(G\). Decide which of the following holds:_ \[\text{Yes:}\quad\max_{\rho_{B}}\ \text{Tr}\big{[}\Pi_{S}^{G} \mathcal{N}_{B\to S}(\rho_{B})\big{]}\geq\alpha, \tag{67}\] \[\text{No:}\quad\max_{\rho_{B}}\ \text{Tr}\big{[}\Pi_{S}^{G} \mathcal{N}_{B\to S}(\rho_{B})\big{]}\leq\beta, \tag{68}\] _where the optimization is over every input state \(\rho_{B}\)._ Let us observe that the measure in (67) is a faithful symmetry measure, in the sense that it is equal to one if and only if there exists an input state \(\rho_{B}\) such that the output state \(\mathcal{N}_{B\to S}(\rho_{B})\) is Bose-symmetric. This follows from continuity of \(\text{Tr}\big{[}\Pi_{S}^{G}\mathcal{N}_{B\to S}(\rho_{B})\big{]}\) and from the arguments in [17, Section 3.1]. **Theorem 3**: _The promise problem Channel-\(G\)-Bose-Symmetry is QMA-Complete._ 1. \((\alpha,\beta)\)_-Channel-_\(G\)_-Bose-Symmetry is in QMA for all_ \(\beta<\alpha\)_. (It is implicit that the gap between_ \(\alpha\) _and_ \(\beta\) _is larger than an inverse polynomial in the input length.)_ 2. \((1-\varepsilon,\varepsilon)\)_-Channel-_\(G\)_-Bose-Symmetry is QMA-Hard, even when_ \(\varepsilon\) _decays exponentially in the input length._ _Thus, \((\alpha,\beta)\)-Channel-Bose-Symmetry is QMA-Complete for all \((\alpha,\beta)\) such that \(0\leq\beta<\alpha\leq 1\)._ **Proof.** To show that the problem is QMA-Complete, we need to demonstrate two facts: first, that the problem is in QMA, and second, that it is QMA-Hard. Let us begin by proving that the problem is in QMA. Let \(\rho_{B}\) be a state sent by the prover. This state is input to the channel \(\mathcal{N}_{B\to S}\) defined in (66). This leads to the output state \(\mathcal{N}_{B\to S}(\rho_{B})\), on which the \(G\)-Bose symmetry test is conducted. From [11, Algorithm 1], we see that testing \(G\)-Bose symmetry can be done efficiently when circuit descriptions of the unitaries \(\{U(g)\}_{g\in G}\) are provided. The acceptance probability of this test, for an input state \(\rho_{B}\), is equal to \(\operatorname{Tr}\big{[}\Pi_{S}^{G}\mathcal{N}_{B\to S}(\rho_{B})\big{]}\). Since the prover can optimize this probability over all possible input states, the acceptance probability is then \[\max_{\rho_{B}}\;\operatorname{Tr}\big{[}\Pi_{S}^{G}\mathcal{N}_{B\to S}( \rho_{B})\big{]}\,. \tag{69}\] Thus, the entire estimation can be done efficiently when aided by an all-powerful prover, establishing that the problem lies in QMA. To show that the problem is QMA-Hard, we pick an arbitrary QMA problem and map it to Channel-\(G\)-Bose-Symmetry. We use a similar construction proposed above in Section IV.1. 
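Before giving the mapping, it may help to see the quantity decided in Problem 3 computed directly on a small example. For a fixed channel, the prover's optimal acceptance probability \(\max_{\rho_{B}}\operatorname{Tr}\big{[}\Pi_{S}^{G}\mathcal{N}_{B\to S}(\rho_{B})\big{]}\) is attained on a pure state and equals the largest eigenvalue of the adjoint map applied to the group projector. The sketch below is our illustration only: the random isometry, the dimensions, and the representation \(\{I,Z\}\) of \(C_{2}\) are stand-ins, not the circuits of an actual QMA instance.

```python
import numpy as np

rng = np.random.default_rng(1)
dB, dS, dE = 3, 2, 4  # input, output, and environment dimensions (illustrative choices)

# Random isometry V : B -> S (x) E via QR, standing in for a unitary dilation of N_{B->S}.
M = rng.normal(size=(dS * dE, dB)) + 1j * rng.normal(size=(dS * dE, dB))
V, _ = np.linalg.qr(M)  # columns are orthonormal, so V.conj().T @ V = I_B

# Group projector on S for the representation {I, Z}: the average (I + Z)/2 = |0><0|.
Pi = np.diag([1.0, 0.0])

# N(rho) = Tr_E[V rho V†], so Tr[Pi N(rho)] = Tr[V†(Pi ⊗ I_E)V rho]; the prover's best
# input is the top eigenvector of the Hermitian operator V†(Pi ⊗ I_E)V.
adj = V.conj().T @ np.kron(Pi, np.eye(dE)) @ V
p_opt = np.linalg.eigvalsh(adj)[-1]
print("max_rho Tr[Pi N(rho)] =", p_opt)

# Random pure inputs can only do worse than the optimum.
for _ in range(200):
    psi = rng.normal(size=dB) + 1j * rng.normal(size=dB)
    psi /= np.linalg.norm(psi)
    out = (V @ psi).reshape(dS, dE)
    rho_out = out @ out.conj().T  # partial trace over E
    assert np.trace(Pi @ rho_out).real <= p_opt + 1e-9
```

In an actual QMA protocol the verifier cannot diagonalize this operator directly; instead, the all-powerful prover supplies the optimizing input state as the witness.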
For the mapping, we need to define a channel \(\mathcal{N}\) and a unitary representation \(\{U(g)\}_{g\in G}\); i.e., using the circuit descriptions for a general QMA algorithm, we need to define a channel \(\mathcal{N}_{P\to S^{\prime}}\) and a unitary representation \(\{U_{S^{\prime}}(g)\}_{g\in G}\), and also show how the symmetry-testing condition \[\max_{\rho_{P}}\operatorname{Tr}[\Pi_{S}^{G}\mathcal{N}_{P\to S^{\prime}}( \rho_{P})] \tag{70}\] can be written in terms of the QMA algorithm's acceptance probability. To this end, we define the group \(G\) to be the cyclic group on two elements \(C_{2}=\{I,V\}\) such that \(V^{2}=I\), where \(V\) is simply given by \[V_{D}=-Z_{D}, \tag{71}\] and we define the channel \(\mathcal{N}_{P\to D}\) to be \[\mathcal{N}_{P\to D}(\cdot)\coloneqq\] \[\operatorname{Tr}_{G}[Q_{SAP\to DG}(|x\rangle\!\langle x|_{S} \otimes|0\rangle\!\langle 0|_{A}\otimes(\,\cdot\,)_{P})(Q_{SAP\to DG})^{\dagger}]. \tag{72}\] As such, we are making the identification \(S^{\prime}\leftrightarrow D\) between the system label \(S^{\prime}\) of a general symmetry-testing problem and the system \(D\) for a QMA algorithm. The group representation and the channel are thus efficiently implementable. Thus, the channel output state of interest is just the state of the decision qubit, and the group projector for the given unitary representation is given by \[\Pi_{D}^{G}=\frac{1}{2}(I_{D}-Z_{D})=|1\rangle\!\langle 1|_{D}. \tag{73}\] We then find, for a fixed input state \(\rho_{P}\), the following equalities relating the symmetry-testing condition to the QMA algorithm's acceptance probability: \[\operatorname{Tr}\big{[}\Pi_{D}^{G}\mathcal{N}_{P\to D}(\rho_{P}) \big{]}\] \[=\operatorname{Tr}[|1\rangle\!\langle 1|_{D}\mathcal{N}_{P\to D}( \rho_{P})]\] \[=\operatorname{Tr}[(|1\rangle\!\langle 1|_{D}\otimes I_{G})(Q_{SAP \to DG}(|x\rangle\!\langle x|_{A}\otimes\] \[|0\rangle\!\langle 0|_{A}\otimes\rho_{P})(Q_{SAP\to DG})^{ \dagger})]\] \[=\operatorname{Pr}[Q\text{ accepts }(x,\rho_{P})]. \tag{74}\] The prover then optimizes this probability over every input state \(\rho_{P}\), and we observe that the acceptance probability of the QMA algorithm exactly matches the symmetry-testing condition of the constructed \(G\)-Bose symmetry testing algorithm. As such, we have proven that any QMA problem can be efficiently mapped to an instance of a Channel-\(G\)-Bose symmetry-testing problem, concluding the proof. ### Testing \(G\)-Symmetry of a State using Trace Norm is QSZK-Complete In Section IV.2, we showed that testing \(G\)-symmetry of a state using the Hilbert-Schmidt norm is BQP-Complete. In this section, we show that testing the \(G\)-symmetry of a state using the trace norm is QSZK-Complete. As such, the complexity of a \(G\)-symmetry test depends on the measure being used, much like what was observed in [14, Section V]. **Problem 4** (\((\alpha,\beta)\)-State-\(G\)-Sym-TD): _Let \(\alpha\) and \(\beta\) be such that \(0\leq\beta<\alpha\leq 1\). Given are a circuit description of a unitary \(U_{RS}^{\rho}\) that generates a purification of a state \(\rho_{S}\) and circuit descriptions of a unitary representation \(\{U_{S}(g)\}_{g\in G}\) of a group \(G\). 
Decide which of the following holds:_ \[\text{Yes:}\quad\min_{\sigma\in\operatorname{Sym}_{G}}\frac{1}{2} \left\|\rho-\sigma\right\|_{1}\geq\alpha, \tag{75}\] \[\text{No:}\quad\min_{\sigma\in\operatorname{Sym}_{G}}\frac{1}{2} \left\|\rho-\sigma\right\|_{1}\leq\beta, \tag{76}\] _where the set \(\operatorname{Sym}_{G}\) is defined as follows:_ \[\operatorname{Sym}_{G}\coloneqq\{\sigma\in\mathcal{D}(\mathcal{H}):U(g) \sigma U(g)^{\dagger}=\sigma\quad\forall g\in G\}. \tag{77}\] Let us observe that the asymmetry measure in (75) is faithful, in the sense that it is equal to zero if and only if the state \(\rho\) is \(G\)-symmetric. This follows from the faithfulness of the trace norm and its continuity properties. **Theorem 4**: _The promise problem State-\(G\)-Sym-TD is QSZK-Complete._ 1. \((\alpha,\beta)\)_-State-_\(G\)_-Sym-TD is in QSZK for all_ \(2\beta<\alpha\)_. (It is implicit that the gap between_ \(\alpha\) _and_ \(2\beta\) _is larger than an inverse polynomial in the input length.)_ 2. \((1-\varepsilon,\varepsilon)\)_-State-_\(G\)_-Sym-TD is QSZK-Hard, even when_ \(\varepsilon\) _decays exponentially in the input length._ _Thus, \((\alpha,\beta)\)-State-\(G\)-Sym-TD is QSZK-Complete for all \((\alpha,\beta)\) such that \(0<2\beta<\alpha<1\)._ **Proof.** To prove that the problem is QSZK-Complete, we need to show two facts. First, we need to show that the problem is in the QSZK class. Next, we show that the problem is QSZK-Hard; i.e., every problem in QSZK can be efficiently mapped to this problem. First, we show that it is in QSZK. We begin with a simple calculation. Consider the case of a No instance, for which the following inequality holds \[\min_{\sigma\in\operatorname{Sym}_{G}}\frac{1}{2}\left\lVert\rho-\sigma \right\rVert_{1}\leq\beta. \tag{78}\] Let \(\sigma^{*}\in\operatorname{Sym}_{G}\) be a state achieving the minimum, so that \[\frac{1}{2}\left\lVert\rho-\sigma^{*}\right\rVert_{1}\leq\beta. \tag{79}\] Define \(\overline{\rho}\) as the result of twirling \(\rho\) with respect to the group elements of \(G\). More concretely, \[\overline{\rho}\coloneqq\mathcal{T}_{G}(\rho)=\frac{1}{\left\lvert G\right\rvert }\sum_{g\in G}U(g)\rho U^{\dagger}(g). \tag{80}\] From the data-processing inequality for the trace distance [13, Chapter 9], and the fact that \(\mathcal{T}_{G}(\sigma^{*})=\sigma^{*}\), we conclude that \[\frac{1}{2}\left\lVert\overline{\rho}-\sigma^{*}\right\rVert_{1} =\frac{1}{2}\left\lVert\mathcal{T}_{G}(\rho)-\mathcal{T}_{G}( \sigma^{*})\right\rVert_{1}\] \[\leq\frac{1}{2}\left\lVert\rho-\sigma^{*}\right\rVert_{1}\] \[\leq\beta. \tag{81}\] Now, using the triangle inequality for the trace distance, (79), and (81), we find that \[\frac{1}{2}\left\lVert\rho-\overline{\rho}\right\rVert_{1} \leq\frac{1}{2}\left\lVert\rho-\sigma^{*}\right\rVert_{1}+\frac{1 }{2}\left\lVert\sigma^{*}-\overline{\rho}\right\rVert_{1} \tag{82}\] \[\leq\beta+\beta=2\beta. \tag{83}\] Having established the above, we now construct a QSZK algorithm consisting of the following steps: 1. The verifier randomly prepares the state \(\rho\) or \(\overline{\rho}\). The verifier can prepare the latter state by preparing \(\rho\) and performing the group twirl \(\mathcal{T}_{G}\). 2. The verifier sends the state to the prover, who performs an optimal measurement to distinguish \(\rho\) from \(\overline{\rho}\). 3. 
The verifier accepts if the prover can guess the state that was prepared, and the maximum acceptance probability of the prover is given by [11, 12] \[\frac{1}{2}\left(1+\frac{1}{2}\left\lVert\rho-\overline{\rho}\right\rVert_{1 }\right). \tag{84}\] In the case of a No instance, by applying (82)-(83), this probability is bounded from above as \[\frac{1}{2}\left(1+\frac{1}{2}\left\lVert\rho-\overline{\rho} \right\rVert_{1}\right) \leq\frac{1}{2}\left(1+2\beta\right)=\frac{1}{2}+\beta. \tag{85}\] In the case of a Yes instance, we find that \[\frac{1}{2}\left\lVert\rho-\overline{\rho}\right\rVert_{1} \geq\frac{1}{2}\left\lVert\rho-\sigma^{*}\right\rVert_{1} \tag{86}\] \[\geq\alpha. \tag{87}\] This then implies, in this case, that the acceptance probability satisfies \[\frac{1}{2}\left(1+\frac{1}{2}\left\lVert\rho-\overline{\rho} \right\rVert_{1}\right) \geq\frac{1}{2}\left(1+\alpha\right) \tag{88}\] \[=\frac{1}{2}+\frac{1}{2}\alpha. \tag{89}\] Thus, there is a gap as long as \[\frac{1}{2}+\frac{1}{2}\alpha>\frac{1}{2}+\beta, \tag{90}\] which is the same as \(\alpha>2\beta\). The interactive proof system is quantum statistical zero-knowledge because, in the case of a Yes instance, the verifier can efficiently simulate the whole interaction on their own, and the statistical difference between the simulation and the actual protocol is negligible. Thus, the problem is in the QSZK class. Next, we show that an arbitrary problem in the QSZK class can be efficiently mapped to this problem. We do so by mapping a known QSZK-Complete problem to this problem. We pick the \((\alpha_{s},\beta_{s})\)-State Distinguishability Problem (see Definition 8). Given circuits to generate the states \(\rho_{B}^{0}\) and \(\rho_{B}^{1}\) with the following soundness and completeness parameters \[\text{\it Yes:}\quad\frac{1}{2}\left\lVert\rho_{B}^{0}-\rho_{B}^ {1}\right\rVert_{1} \geq\alpha_{s}, \tag{91}\] \[\text{\it No:}\quad\frac{1}{2}\left\lVert\rho_{B}^{0}-\rho_{B}^ {1}\right\rVert_{1} \leq\beta_{s}, \tag{92}\] we use the construction from [13, Theorem 1] to create circuits that generate states \(\omega^{0}\), \(\omega^{1}\) such that \[\text{\it Yes:}\quad\frac{1}{2}\left\lVert\omega^{0}-\omega^{1} \right\rVert_{1}\geq 1-2^{-n} \tag{93}\] _No:_ \[\frac{1}{2}\left\|\omega^{0}-\omega^{1}\right\|_{1}\leq 2^{-n}.\] (94) The value of \(n\) will be chosen later in the proof. The procedure from [24, Theorem 1] runs in time polynomial in \(n\) and the size of the circuits that generate \(\rho^{i}\), and thus it is efficient. Next, we define the following state \[\tau_{FB}\coloneqq\frac{1}{2}\left(|0\rangle\!\langle 0|_{F}\otimes\omega_{B}^{0 }+|1\rangle\!\langle 1|_{F}\otimes\omega_{B}^{1}\right), \tag{95}\] and define the group \(G\) to be \(\{I_{F}\otimes I_{B},X_{F}\otimes I_{B}\}\). A circuit to create the state \(\tau_{FB}\) is given in Figure 4. 
Twirling this state with respect to the group elements results in the state \[\overline{\tau}_{FB}\coloneqq\mathcal{T}_{G}(\tau_{FB})=\pi_{F}\otimes\frac{ 1}{2}(\omega_{B}^{0}+\omega_{B}^{1}), \tag{96}\] where \(\pi_{F}\coloneqq\frac{1}{2}(|0\rangle\!\langle 0|_{F}+|1\rangle\!\langle 1|_{F}).\) Then we find that \[\tau_{FB}-\overline{\tau}_{FB}\] \[=\frac{1}{2}|0\rangle\!\langle 0|\otimes\omega^{0}+\frac{1}{2}|1 \rangle\!\langle 1|\otimes\omega^{1}-\pi\otimes\frac{1}{2}\left(\omega^{0}+ \omega^{1}\right)\] \[=\frac{1}{2}|0\rangle\!\langle 0|\otimes\omega^{0}+\frac{1}{2}|1 \rangle\!\langle 1|\otimes\omega^{1}\] \[\qquad-\left(\frac{1}{2}|0\rangle\!\langle 0|+\frac{1}{2}|1 \rangle\!\langle 1|\right)\otimes\frac{1}{2}\left(\omega^{0}+\omega^{1}\right)\] \[=\frac{1}{2}|0\rangle\!\langle 0|\otimes\left(\omega^{0}-\frac{1}{2 }\left(\omega^{0}+\omega^{1}\right)\right)\] \[\qquad+\frac{1}{2}|1\rangle\!\langle 1|\otimes\left(\omega^{1}- \frac{1}{2}\left(\omega^{0}+\omega^{1}\right)\right)\] \[=\frac{1}{2}|0\rangle\!\langle 0|\otimes\frac{1}{2}\left(\omega^{0} -\omega^{1}\right)+\frac{1}{2}|1\rangle\!\langle 1|\otimes\frac{1}{2}\left( \omega^{1}-\omega^{0}\right), \tag{97}\] which implies that \[\frac{1}{2}\left\|\tau_{FB}-\overline{\tau}_{FB}\right\|_{1}=\frac{1}{4}\left\| \omega^{0}-\omega^{1}\right\|_{1}. \tag{98}\] We now map Yes instances of Quantum-State-Distinguishability to Yes instances of State-\(G\)-Sym-TD. For a Yes instance, \[\frac{1}{2}\left\|\rho^{0}-\rho^{1}\right\|_{1}\geq\alpha_{s}\implies\frac{1} {2}\left\|\omega^{0}-\omega^{1}\right\|_{1}\geq 1-2^{-n}. \tag{99}\] Define \(\sigma^{*}\) to be a state that achieves the following minimum: \[\frac{1}{2}\left\|\tau_{FB}-\sigma^{*}\right\|_{1}\coloneqq\min_{\sigma\in \operatorname{Sym}_{G}}\frac{1}{2}\left\|\tau_{FB}-\sigma\right\|_{1}. \tag{100}\] Using the triangle inequality and the data-processing inequality (the latter being similar to how it was used before in (81)), we find that \[\frac{1}{2}\left\|\tau_{FB}-\overline{\tau}_{FB}\right\|_{1} \leq\frac{1}{2}\left\|\tau_{FB}-\sigma^{*}\right\|_{1}+\frac{1}{2 }\left\|\sigma^{*}-\overline{\tau}_{FB}\right\|_{1}\] \[\leq 2\left(\frac{1}{2}\left\|\tau_{FB}-\sigma^{*}\right\|_{1}\right)\] \[=2\left(\min_{\sigma\in\operatorname{Sym}_{G}}\frac{1}{2}\left\| \tau_{FB}-\sigma\right\|_{1}\right). \tag{101}\] Thus, using (98), we see that \[\min_{\sigma\in\operatorname{Sym}_{G}}\frac{1}{2}\left\|\tau_{FB }-\sigma\right\|_{1} \geq\frac{1}{2}\left(\frac{1}{2}\left\|\tau_{FB}-\overline{\tau}_{ FB}\right\|_{1}\right)\] \[=\frac{1}{4}\left(\frac{1}{2}\left\|\omega^{0}-\omega^{1}\right\|_ {1}\right)\] \[\geq\frac{1-2^{-n}}{4}. \tag{102}\] As such, the Yes instances are mapped as follows: \[\frac{1}{2}\left\|\rho^{0}-\rho^{1}\right\|_{1}\geq\alpha_{s}\Rightarrow\min_{ \sigma\in\operatorname{Sym}_{G}}\frac{1}{2}\left\|\tau_{FB}-\sigma\right\|_{1} \geq\frac{1-2^{-n}}{4}. \tag{103}\] Similarly, consider a NO instance of Quantum-State-Distinguishability, \[\frac{1}{2}\left\|\rho^{0}-\rho^{1}\right\|_{1}\leq\beta_{s}. \tag{104}\] Then using (94) and (98), \[\min_{\sigma\in\operatorname{Sym}_{G}}\frac{1}{2}\left\|\tau_{FB }-\sigma\right\|_{1} \leq\frac{1}{2}\left\|\tau_{FB}-\overline{\tau}_{FB}\right\|_{1}\] \[=\frac{1}{2}\left(\frac{1}{2}\left\|\omega^{0}-\omega^{1}\right\|_ {1}\right)\] \[\leq 2^{-n-1}. \tag{105}\] As such, we have shown that \((\alpha_{s},\beta_{s})\)-Quantum-State-Distinguishability is efficiently mapped to \(\left(\frac{1}{4}(1-2^{-n}),2^{-n-1}\right)\)-State-\(G\)-Sym-TD. 
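The identity (98) that drives this reduction is straightforward to confirm numerically. The following minimal sketch (our illustration; the two states \(\omega^{0}\) and \(\omega^{1}\) are random stand-ins rather than the polarized states constructed above) builds \(\tau_{FB}\), twirls it over \(\{I_{F}\otimes I_{B},X_{F}\otimes I_{B}\}\), and compares the two trace distances.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 3

def random_state(d):
    """Random density matrix of dimension d."""
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = A @ A.conj().T
    return rho / np.trace(rho).real

def trace_norm(M):
    """||M||_1 = sum of singular values."""
    return np.linalg.svd(M, compute_uv=False).sum()

omega0, omega1 = random_state(d), random_state(d)

ket0 = np.array([[1.0, 0.0], [0.0, 0.0]])  # |0><0| on the flag register F
ket1 = np.array([[0.0, 0.0], [0.0, 1.0]])  # |1><1| on the flag register F
tau = 0.5 * (np.kron(ket0, omega0) + np.kron(ket1, omega1))

# Twirl over G = {I ⊗ I, X ⊗ I}: average of tau and (X ⊗ I) tau (X ⊗ I)†.
X = np.array([[0.0, 1.0], [1.0, 0.0]])
XI = np.kron(X, np.eye(d))
tau_bar = 0.5 * (tau + XI @ tau @ XI.conj().T)

lhs = 0.5 * trace_norm(tau - tau_bar)       # (1/2)||tau - tau_bar||_1
rhs = 0.25 * trace_norm(omega0 - omega1)    # (1/4)||omega^0 - omega^1||_1
print(lhs, rhs)
assert np.isclose(lhs, rhs)
```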
Thus, a gap exists between the soundness and completeness conditions if \[\frac{(1-2^{-n})}{4}>2^{-n-1}, \tag{106}\] which is equivalent to \(n>\log_{2}(3)\). To conclude that State-\(G\)-Sym-TD is QSZK-Complete for arbitrary constants \(\alpha\) and \(\beta\), one can use the constructions from Lemmas 2 and 3 of [24] to manipulate the parameters \(\frac{1}{4}(1-2^{-n})\) and \(2^{-n-1}\) as desired. The reasoning for this latter statement is similar to that given at the end of the proof of [24, Theorem 6]. ### Testing \(G\)-Symmetry of a State using Fidelity is QSZK-Complete In this section, we show that testing \(G\)-Symmetry of a state using fidelity is QSZK-Complete, where the fidelity of states \(\rho\) and \(\sigma\) is defined as [23] \[F(\rho,\sigma)\coloneqq\left\|\sqrt{\rho}\sqrt{\sigma}\right\|_{1}^{2}. \tag{107}\] To show hardness, we provide an efficient mapping from the problem State-\(G\)-Sym-TD, defined in Section IV.4, to the problem of interest State-\(G\)-Sym-Fid **Problem 5** (\((\alpha,\beta)\)-State-\(G\)-Sym-Fid): _Let \(\alpha\) and \(\beta\) be such that \(0\leq\beta<\alpha\leq 1\). Given are a circuit description of a unitary \(U^{\rho}_{RS}\) that generates a purification of a state \(\rho_{S}\) and circuit descriptions of a unitary representation \(\{U(g)\}_{g\in G}\) of a group \(G\). Decide which of the following holds:_ \[\text{Yes:}\quad\max_{\sigma\in\mathrm{Sym}_{G}}F(\rho,\sigma) \geq\alpha, \tag{108}\] \[\text{No:}\quad\max_{\sigma\in\mathrm{Sym}_{G}}F(\rho,\sigma) \leq\beta, \tag{109}\] _where the set \(\mathrm{Sym}_{G}\) is defined in (77)._ Let us observe that the symmetry measure in (108) is faithful, in the sense that it is equal to one if and only if the state \(\rho\) is \(G\)-symmetric. This follows from the faithfulness and continuity properties of fidelity. **Theorem 5**: _The promise problem State-\(G\)-Sym-Fid is QSZK-Complete._ 1. \((\alpha,\beta)\)_-State-_\(G\)_-Sym-Fid is in QSZK for all_ \(\beta<4\alpha-3\)_. (It is implicit that the gap between_ \(4\alpha-3\) _and_ \(\beta\) _is larger than an inverse polynomial in the input length.)_ 2. \((1-\varepsilon,\varepsilon)\)_-State-_\(G\)_-Sym-Fid is QSZK-Hard, even when_ \(\varepsilon\) _decays exponentially in the input length._ _Thus, \((\alpha,\beta)\)-State-\(G\)-Sym-Fid is QSZK-Complete for all \((\alpha,\beta)\) such that \(0\leq\beta<4\alpha-3\leq 1\)._ **Proof.** Before we get into the proof, let us recall the sine distance of two quantum states \(\rho,\sigma\)[10]: \[P(\rho,\sigma)\coloneqq\sqrt{1-F(\rho,\sigma)}. \tag{110}\] The sine distance has a triangle-inequality property for states \(\rho,\sigma,\omega\): \[P(\rho,\sigma)\leq P(\rho,\omega)+P(\sigma,\omega). \tag{111}\] Furthermore, it has a data-processing inequality inherited from that of fidelity: \[P(\rho,\sigma)\geq P(\mathcal{N}(\rho),\mathcal{N}(\sigma)). \tag{112}\] To prove that State-\(G\)-Sym-Fid is QSZK-Complete, we need to show two results. First, we need to show that the problem belongs to QSZK and, second, that the problem is QSZK-Hard. We first show that the problem belongs to QSZK. To this end, we propose an algorithm to estimate the quantity \(F(\rho,\overline{\rho})\). 
The underlying principle is Uhlmann's theorem [12], which can be simply understood as follows: \[F(\rho_{S},\sigma_{S})=\max_{V_{R\to R^{\prime}}}\left|\langle\psi^{\sigma}| _{R^{\prime}S}(V_{R\to R^{\prime}}\otimes I_{S})|\psi^{\rho}\rangle_{RS} \right|^{2}, \tag{113}\] where the maximization is over every isometry \(V_{R\to R^{\prime}}\) and \(|\psi^{\rho}\rangle_{RS}\) and \(|\psi^{\sigma}\rangle_{RS}\) are purifications of \(\rho_{S}\) and \(\sigma_{S}\), respectively. In other words, the fidelity of two states is given by the maximum squared overlap between their purifications. To calculate the fidelity of \(\rho\) and \(\overline{\rho}\), we then need purifications of both states. We are given a unitary \(U^{\rho}\) to prepare a purification of \(\rho\) that is used in the following manner: \[|\psi^{\rho}\rangle_{RS} =U^{\rho}_{RS}|0\rangle_{RS},\] \[\rho_{S} =\mathrm{Tr}_{R}[|\psi^{\rho}\rangle\!\langle\psi^{\rho}|_{RS}]. \tag{114}\] The following unitary generates a purification of \(\overline{\rho}\): \[U^{\overline{\rho}}_{CRS}\coloneqq\left(\sum_{g}|g\rangle\!\langle g|_{C} \otimes U_{S}(g)\right)\mathrm{QFT}_{C}\,U^{\rho}_{RS}, \tag{115}\] as follows: \[|\psi^{\overline{\rho}}\rangle =U^{\overline{\rho}}_{CR^{\prime\prime}S}|0\rangle_{CR^{\prime \prime}S}\] \[=\left(\sum_{g\in G}|g\rangle\!\langle g|_{C}\otimes U_{S}(g) \right)\frac{1}{\sqrt{|G|}}\sum_{g^{\prime}\in G}|g^{\prime}\rangle_{C}|\psi^{ \rho}\rangle_{R^{\prime\prime}S}\] \[=\frac{1}{\sqrt{|G|}}\sum_{g\in G}|g\rangle_{C}\otimes U_{S}(g)| \psi^{\rho}\rangle_{R^{\prime\prime}S}. \tag{116}\] Thus, performing the partial trace over \(R^{\prime\prime}C\), we see that \[\mathrm{Tr}_{R^{\prime\prime}C}[|\psi^{\overline{\rho}}\rangle\! \langle\psi^{\overline{\rho}}|_{R^{\prime\prime}C}] =\frac{1}{|G|}\sum_{g}U_{S}(g)\rho_{S}U^{\dagger}_{S}(g)\] \[=\overline{\rho}. \tag{117}\] Therefore, using the unitaries \(U^{\rho}_{RS}\) and \(U^{\overline{\rho}}_{R^{\prime}S}\) (where \(R^{\prime}\equiv R^{\prime\prime}C\)), we can apply Uhlmann's theorem in tandem with an all-powerful prover (to implement the isometry \(V\)) to estimate the fidelity. The construction for the algorithm can be seen in Figure 5. In the case of a Yes-instance, \[\max_{\sigma\in\mathrm{Sym}_{G}}F(\rho,\sigma)\geq\alpha. \tag{118}\] Let \(\sigma^{*}\) be a state achieving the maximum, i.e., \[F(\rho,\sigma^{*}) \geq\alpha,\] \[\Leftrightarrow P(\rho,\sigma^{*}) \leq\sqrt{1-\alpha}. \tag{119}\] Then \[P(\rho,\overline{\rho}) \leq P(\rho,\sigma^{*})+P(\sigma^{*},\overline{\rho})\] \[\leq 2P(\rho,\sigma^{*})\] \[\leq 2\sqrt{1-\alpha},\] \[\Leftrightarrow F(\rho,\overline{\rho}) \geq 4\alpha-3, \tag{120}\] where the second inequality follows from the data-processing inequality (see (112)) under the application of the twirling channel \(\mathcal{T}_{G}\) and the fact that \(\sigma^{*}\) is unchanged under the application of this channel. In the case of a No-instance, \[F(\rho,\overline{\rho})\leq\max_{\sigma\in\mathrm{Sym}_{G}}F(\rho,\sigma)\leq\beta. \tag{121}\] Thus, there exists a gap as long as \[\alpha>\frac{3+\beta}{4}. \tag{122}\] The interactive proof system is quantum statistical zero-knowledge because, in the case of a Yes instance, the input state \(\rho\) is close to the twirled state \(\overline{\rho}\). Thus, the verifier can efficiently simulate the whole interaction on their own, and the statistical difference between the simulation and the actual protocol is negligible. As such, the problem is in the QSZK class. 
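The verifier's side of this construction can be checked classically on small instances. The sketch below is our illustration only: the purification, the dimensions, and the representation \(\{I,Z\}\) of \(C_{2}\) are arbitrary stand-ins. It builds the purification of \(\overline{\rho}\) exactly as in (115)–(116), confirms via (117) that discarding the control and reference registers returns the twirled state, and evaluates \(F(\rho,\overline{\rho})\) directly, the quantity the prover helps estimate through Uhlmann's theorem.

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(3)
dR, dS = 2, 2  # reference and system dimensions (illustrative)

def reduced_on_last(vec, d_keep):
    """Reduced density matrix on the last tensor factor of a pure state vector."""
    m = vec.reshape(-1, d_keep)
    return m.T @ m.conj()

# Random purification |psi_rho> on R (x) S, playing the role of U^rho |0...0>.
psi = rng.normal(size=dR * dS) + 1j * rng.normal(size=dR * dS)
psi /= np.linalg.norm(psi)
rho = reduced_on_last(psi, dS)

# Unitary representation {I, Z} of C_2 on S, and the twirled state rho_bar.
reps = [np.eye(dS), np.diag([1.0, -1.0])]
rho_bar = sum(U @ rho @ U.conj().T for U in reps) / len(reps)

# Purification of rho_bar as in (115)-(116): put the control register C into the uniform
# superposition (the QFT acting on |0>), then apply sum_g |g><g|_C (x) I_R (x) U_S(g).
G = len(reps)
ctrl = np.zeros((G * dR * dS, G * dR * dS), dtype=complex)
for g in range(G):
    ket_g = np.zeros(G)
    ket_g[g] = 1.0
    ctrl += np.kron(np.kron(np.outer(ket_g, ket_g), np.eye(dR)), reps[g])
psi_bar = ctrl @ np.kron(np.ones(G) / np.sqrt(G), psi)

# Discarding C and R leaves exactly the twirled state, as in (117).
assert np.allclose(reduced_on_last(psi_bar, dS), rho_bar)

# Fidelity F(rho, rho_bar) = (Tr sqrt(sqrt(rho) rho_bar sqrt(rho)))^2.
s = sqrtm(rho)
F = np.trace(sqrtm(s @ rho_bar @ s)).real ** 2
print("F(rho, rho_bar) =", F)
```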
Next, we show that the problem is QSZK-Hard. To do this, we map the QSZK-Complete problem State-\(G\)-Sym-TD to our problem. To show this, we make use of two standard inequalities that relate the trace distance and fidelity of two states [23]: \[1-\sqrt{F(\rho,\sigma)} \leq\frac{1}{2}\left\|\rho-\sigma\right\|_{1}, \tag{123}\] \[\sqrt{1-F(\rho,\sigma)} \geq\frac{1}{2}\left\|\rho-\sigma\right\|_{1}. \tag{124}\] Consider a No-instance of State-\(G\)-Sym-TD. Then, \[\frac{1}{2}\min_{\sigma\in\mathrm{Sym}_{G}}\left\|\rho-\sigma \right\|_{1}\leq\beta. \tag{125}\] Using (123), we see that \[\min_{\sigma\in\mathrm{Sym}_{G}}1-\sqrt{F(\rho,\sigma)}\leq\min_{\sigma\in \mathrm{Sym}_{G}}\frac{1}{2}\left\|\rho-\sigma\right\|_{1}\leq\beta. \tag{126}\] Therefore, after some basic algebra, \[\max_{\sigma\in\mathrm{Sym}_{G}}F(\rho,\sigma)\geq(1-\beta)^{2}. \tag{127}\] Similarly, consider a Yes-instance of State-\(G\)-Sym-TD. Then, \[\min_{\sigma\in\mathrm{Sym}_{G}}\frac{1}{2}\left\|\rho-\sigma \right\|_{1}\geq\alpha. \tag{128}\] Using (124), we see that \[\alpha\leq\min_{\sigma\in\mathrm{Sym}_{G}}\frac{1}{2}\left\|\rho- \sigma\right\|_{1}\leq\min_{\sigma\in\mathrm{Sym}_{G}}\sqrt{1-F(\rho,\sigma)}. \tag{129}\] Therefore, after some basic algebra, \[\max_{\sigma\in\mathrm{Sym}_{G}}F(\rho,\sigma)\leq 1-\alpha^{2}. \tag{130}\] Thus, \((\alpha,\beta)\)-State-\(G\)-Sym-TD reduces to \(((1-\beta)^{2},1-\alpha^{2})\)-State-\(G\)-Sym-Fid. Since we mapped Yes (No) instances to No (Yes) instances, this proves that State-\(G\)-Sym-Fid belongs to co-QSZK. Since QSZK is closed under complement, the problem belongs to QSZK [20]. Thus, State-\(G\)-Sym-Fid is QSZK-Hard. ### Testing \(G\)-Bose Symmetric Extendibility of a State is QIP(2)-Complete In this section, we show that testing \(G\)-Bose symmetric extendibility (G-BSE) of a state is QIP(2)-Complete. To our knowledge, the proposed problem is the first non-trivial problem, other than Close Image, that is QIP(2)-Complete, addressing an open problem posed in [15], wherein the authors called QIP(2) the "most mysterious" quantum complexity class. **Problem 6** (\((\alpha,\beta)\)-State-\(G\)-Bse): _Let \(\alpha\) and \(\beta\) be such that \(0\leq\beta<\alpha\leq 1\). Given are a circuit description of a unitary \(U_{RS}^{\rho}\) that generates a purification of a state \(\rho_{S}\) and circuit descriptions of a unitary representation \(\{U_{RS}(g)\}_{g\in G}\) of a group \(G\). Decide which of the following holds:_ \[\text{Yes:}\quad\max_{\sigma_{S}\in\mathrm{BSE}_{G}}F(\rho_{S},\sigma_{S}) \geq\alpha, \tag{131}\] Figure 5: QSZK algorithm to estimate the fidelity \(F(\rho,\overline{\rho})\) given a unitary \(U^{\rho}\) that prepares a purification of \(\rho\) and a unitary representation \(\{U(g)\}_{g\in G}\) over which the twirled state \(\overline{\rho}\) is defined. The probability of measuring the all-zeros state gives an estimate of the required fidelity. The isometry \(V\) is implemented by an all-powerful prover. QFT is an abbreviation of the standard quantum Fourier transform, which in this case takes \(\left|0\right\rangle\) to \(\left|G\right|^{-1/2}\sum_{g\in G}\left|g\right\rangle\). _No:_ \[\max_{\sigma_{S}\in\mathrm{BSE}_{G}}F(\rho_{S},\sigma_{S})\leq\beta,\] (132) _where the set \(\mathrm{BSE}_{G}\) is defined as_ \[\mathrm{BSE}_{G}\coloneqq\\ \left\{\begin{array}{c}\sigma_{S}:\exists\ \omega_{RS}\in \mathcal{D}(\mathcal{H}_{RS}),\mathrm{Tr}_{R}[\omega_{RS}]=\sigma_{S},\\ \omega_{RS}=U_{RS}(g)\omega_{RS},\ \forall g\in G\end{array}\right\}. 
\tag{133}\] Let us observe that the symmetry measure in (131) is faithful, in the sense that it is equal to one if and only if the state \(\rho\) is \(G\)-Bose symmetric extendible. This follows from the faithfulness and continuity properties of fidelity. **Theorem 6**: _The promise problem State-\(G\)-BSE is QIP(2)-Complete._ 1. \((\alpha,\beta)\)_-State-_\(G\)_-BSE is in QIP(2) for all_ \(\beta<\alpha\)_. (It is implicit that the gap between_ \(\alpha\) _and_ \(\beta\) _is larger than an inverse polynomial in the input length.)_ 2. \((1-\varepsilon,\varepsilon)\)_-State-_\(G\)_-BSE is QIP(2)-Hard, even when_ \(\varepsilon\) _decays exponentially in the input length._ _Thus, \((\alpha,\beta)\)-State-\(G\)-BSE is QIP(2)-Complete for all \((\alpha,\beta)\) such that \(0\leq\beta<\alpha\leq 1\)._ **Proof.** To show that the problem is QIP(2)-Complete, we need to demonstrate two facts: first, that the problem is in QIP(2), and second, that it is QIP(2)-Hard. Let us begin by proving that the problem is in QIP(2). In our previous work [13, Algorithm 3], we proposed an algorithm to test for \(G\)-Bose symmetric extendibility of a state \(\rho_{S}\) given a circuit description of unitary that generates a purification of the state and circuit descriptions of a unitary representation \(\{U_{RS}(g)\}_{g\in G}\) of a group \(G\) (see also [13, Figure 6]). By inspection, the algorithm can be conducted efficiently given two messages exchanged with an all-powerful prover; therefore, this promise problem is clearly in QIP(2). To show that the problem is QIP(2)-Hard, we map an arbitrary QIP(2) problem to a \(G\)-BSE problem. Specifically, from the circuit descriptions for a QIP(2) algorithm, we will identify a state \(\rho_{S^{\prime}}\) and a unitary representation \(\{V_{R^{\prime}S^{\prime}}(g)\}_{g\in G}\) corresponding to a \(G\)-BSE problem, and we will show how the symmetry-testing condition \[\max_{\sigma_{S^{\prime}}\in\mathrm{BSE}_{G}}F(\rho_{S^{\prime}},\sigma_{S^{ \prime}}) \tag{134}\] can be written in terms of the QIP(2) algorithm's acceptance probability. To begin, recall that a QIP(2) problem consists of a first verifier circuit \(U^{1}_{SA\to S^{\prime}R}\), a prover circuit \(P_{RE\to R^{\prime}E^{\prime}}\), and a second verifier circuit \(U^{2}_{S^{\prime}R^{\prime}\to DG}\), where \(D\) is the decision qubit. (Here and in what follows, we keep implicit the dependence of the prover's unitary on the problem input \(x\).) The acceptance probability is given by \[p_{\mathrm{acc}}=\max_{P_{RE\to R^{\prime}E^{\prime}}}\left\|( \langle 1|_{D}\otimes I_{E^{\prime}G})U^{2}_{S^{\prime}R^{\prime}\to DG}\times\right.\] \[\left.P_{RE\to RE^{\prime}}U^{1}_{SA\to S^{\prime}R}|x \rangle_{S}|0\rangle_{A}|0\rangle_{E}\right\|^{2}_{2}. \tag{135}\] Define \(|\psi\rangle_{S^{\prime}R}\coloneqq U^{1}_{SA\to S^{\prime}R}|x\rangle_{S}|0 \rangle_{A}\), and we identify the aforementioned state \(\rho_{S^{\prime}}\) as \[\rho_{S^{\prime}}\coloneqq\mathrm{Tr}_{R}[|\psi\rangle\!\langle\psi|_{S^{ \prime}R}]. \tag{136}\] The acceptance probability can then be rewritten as \[p_{\mathrm{acc}}=\max_{P_{RE\to R^{\prime}E^{\prime}}}\mathrm{Tr}[(|1 \rangle\!\langle 1|_{D}\otimes I_{E^{\prime}G})U^{2}\times\\ P(|0\rangle\!\langle 0|_{E}\otimes|\psi\rangle\!\langle\psi|_{S^{ \prime}R})P^{\dagger}(U^{2})^{\dagger}], \tag{137}\] where we omitted system labels for brevity. 
Using the cyclicity of trace, consider that \[p_{\mathrm{acc}}=\max_{P_{RE\to R^{\prime}E^{\prime}}}\mathrm{Tr}[(U^{2})^{ \dagger}(|1\rangle\!\langle 1|_{D}\otimes I_{E^{\prime}G})U^{2}\times\\ P(|0\rangle\!\langle 0|_{E}\otimes|\psi\rangle\!\langle\psi|_{S^{ \prime}R})P^{\dagger}]. \tag{138}\] Motivated by this, we pick the group \(G\) to be \(C_{2}\) with unitary representation \(\{I_{R^{\prime}S^{\prime}},V_{R^{\prime}S^{\prime}}\}\), where \[V_{R^{\prime}S^{\prime}}\coloneqq\left(U^{2}_{R^{\prime}S^{\prime}\to DG} \right)^{\dagger}\left(-Z_{D}\otimes I_{G}\right)U^{2}_{R^{\prime}S^{\prime} \to DG}, \tag{139}\] We note that \(V^{2}_{RS}=I_{RS}\), establishing that this is indeed a representation of \(C_{2}\). The resulting group projection is then \[\Pi^{G}_{R^{\prime}S^{\prime}} =\frac{1}{2}(I_{R^{\prime}S^{\prime}}+V_{R^{\prime}S^{\prime}})\] \[=\frac{1}{2}\left({U^{2}}^{\dagger}(I_{D}\otimes I_{G})U^{2}+{U^ {2}}^{\dagger}\left(-Z_{D}\otimes I_{G}\right)U^{2}\right)\] \[=\frac{1}{2}\left({U^{2}}^{\dagger}\left(I_{DG}-\left(Z_{D}\otimes I _{G}\right)\right)U^{2}\right)\] \[={U^{2}}^{\dagger}(|1\rangle\!\langle 1|_{D}\otimes I_{G})U^{2}, \tag{140}\] which is precisely the acceptance projection in the first line of (138). That is, \[p_{\mathrm{acc}}=\max_{P_{RE\to R^{\prime}E^{\prime}}}\mathrm{Tr}[\Pi^{G}_{R^{ \prime}S^{\prime}}P(|0\rangle\!\langle 0|_{E}\otimes|\psi\rangle\!\langle\psi|_{S^{ \prime}R})P^{\dagger}]. \tag{141}\] Now invoking [13, Theorem III.3], we conclude that \[\max_{P_{RE\to R^{\prime}E^{\prime}}}\mathrm{Tr}[\Pi^{G}_{R^{ \prime}S^{\prime}}P(|0\rangle\!\langle 0|_{E}\otimes|\psi\rangle\!\langle\psi|_{S^{ \prime}R})P^{\dagger}]\] \[=\max_{P_{RE\to R^{\prime}E^{\prime}}}\left\|\Pi^{G}_{R^{ \prime}S^{\prime}}P(|0\rangle_{E}\otimes|\psi\rangle_{S^{\prime}R})\right\|^{2}_{2}\] \[=\max_{\sigma_{S^{\prime}}\in\mathrm{BSE}_{G}}F(\rho_{S^{\prime}}, \sigma_{S^{\prime}}), \tag{142}\] where \(\mathrm{BSE}_{G}\) in this case is \[\mathrm{BSE}_{G}\coloneqq\\ \left\{\begin{array}{c}\sigma_{S^{\prime}}:\exists\ \omega_{R^{\prime}S^{ \prime}}\in\mathcal{D}(\mathcal{H}_{R^{\prime}S^{\prime}}),\mathrm{Tr}_{R^{ \prime}}[\omega_{R^{\prime}S^{\prime}}]=\sigma_{S^{\prime}},\\ \omega_{R^{\prime}S^{\prime}}=V_{R^{\prime}S^{\prime}}\omega_{R^{\prime}S^{\prime}}, \end{array}\right\}. \tag{143}\] To help visualize the reduction, the \(G\)-Bose symmetric extendibility test corresponding to a general QIP(2) algorithm is depicted in Figure 6. Thus, the acceptance probability of the QIP(2) algorithm exactly matches the symmetry-testing condition of the constructed \(G\)-BSE problem. As such, any QIP(2) problem can be efficiently mapped to a \(G\)-BSE problem, proving that the problem State-\(G\)-BSE is QIP(2)-Hard. Along with the fact that the problem lies in the QIP(2) class, this concludes the proof. ### Testing \(G\)-Bose Symmetric Separable Extendibility is \(\text{QIP}_{\text{EB}}(2)\)-Complete In this section, we introduce the following problem: decide whether a state has a separable extension that is \(G\)-Bose symmetric. We also prove that it is a \(\text{QIP}_{\text{EB}}(2)\)-Complete problem. **Problem 7**: **(\((\alpha,\beta)\)-Sep-Ext-\(G\)-Bose-Symmetry)** _Let \(\alpha\) and \(\beta\) be such that \(0\leq\beta<\alpha\leq 1\). Given is a circuit description of a unitary \(U_{S^{\prime}}^{\rho}\) that generates a purification of a state \(\rho_{S}\), as well as circuit descriptions of a unitary representation \(\{U_{RS}(g)\}_{g\in G}\) of a group \(G\). 
Decide which of the following holds:_ _Yes:_ \[\max_{\begin{subarray}{c}\omega_{RS}\in\text{SEP}(R:S),\\ \text{Tr}_{R}[\omega_{RS}]=\rho_{S}\end{subarray}}\text{Tr}[\Pi_{RS}^{G}\omega_{RS}]\geq\alpha,\] (144) _No:_ \[\max_{\begin{subarray}{c}\omega_{RS}\in\text{SEP}(R:S),\\ \text{Tr}_{R}[\omega_{RS}]=\rho_{S}\end{subarray}}\text{Tr}[\Pi_{RS}^{G}\omega_{RS}]\leq\beta,\] (145) _where \(\Pi_{RS}^{G}\) is defined in (9)._ Let us observe that the following equality holds: \[\max_{\begin{subarray}{c}\omega_{RS}\in\text{SEP}(R:S),\\ \text{Tr}_{R}[\omega_{RS}]=\rho_{S}\end{subarray}}\text{Tr}[\Pi_{RS}^{G}\omega_{RS}]=\\ \max_{\begin{subarray}{c}\omega_{RS}\in\text{SEP}(R:S),\\ \text{Tr}_{R}[\omega_{RS}]=\rho_{S},\\ \sigma_{RS}\in\text{B-Sym}_{G}\end{subarray}}F(\omega_{RS},\sigma_{RS}), \tag{146}\] where the set \(\text{B-Sym}_{G}\) in this case is defined as \[\text{B-Sym}_{G}\coloneqq\{\sigma_{RS}:\sigma_{RS}=U_{RS}(g)\sigma_{RS},\ \forall g\in G\}\,. \tag{147}\] The identity in (146) follows as a consequence of [11, Theorem 3.1]. Rewriting the expression in this way allows for a fidelity interpretation of the symmetry condition, which implies that the symmetry-testing condition in (144) is equal to one if and only if there exists a separable extension of \(\rho_{S}\) that is Bose-symmetric according to the unitary representation \(\{U_{RS}(g)\}_{g\in G}\). As such, this is a faithful measure of \(G\)-Bose symmetric separable extendibility. Figure 6: Circuit to map an arbitrary QIP(2) computation to a \(G\)-Bose symmetric extendibility test. **Theorem 7**: _The promise problem Sep-Ext-\(G\)-Bose-Symmetry is \(\text{QIP}_{EB}(2)\)-Complete._ 1. \((\alpha,\beta)\)_-Sep-Ext-_\(G\)_-Bose-Symmetry is in_ \(\text{QIP}_{EB}(2)\) _for all_ \(\beta<\alpha\)_. (It is implicit that the gap between_ \(\alpha\) _and_ \(\beta\) _is larger than an inverse polynomial in the input length.)_ 2. \((1-\varepsilon,\varepsilon)\)_-Sep-Ext-_\(G\)_-Bose-Symmetry is_ \(\text{QIP}_{EB}(2)\)_-Hard, even when_ \(\varepsilon\) _decays exponentially in the input length._ _Thus, \((\alpha,\beta)\)-Sep-Ext-\(G\)-Bose-Symmetry is \(\text{QIP}_{EB}(2)\)-Complete for all \((\alpha,\beta)\) such that \(0\leq\beta<\alpha\leq 1\)._ **Proof.** To show that the problem is \(\text{QIP}_{\text{EB}}(2)\)-Complete, we need to demonstrate two facts: first, that the problem is in \(\text{QIP}_{\text{EB}}(2)\), and second, that it is \(\text{QIP}_{\text{EB}}(2)\)-Hard. Let us begin by proving that the problem is in \(\text{QIP}_{\text{EB}}(2)\). The algorithm to estimate the quantity in (144) is given in Figure 7. The general form of an entanglement-breaking channel is given by \[\mathcal{E}_{S^{\prime}\to R}(\cdot)=\sum_{x\in\mathcal{X}}\text{Tr}[\mu_{S^{\prime}}^{x}(\cdot)]\phi_{R}^{x}, \tag{148}\] as discussed in (33). Thus, the acceptance probability of the algorithm for a fixed channel \(\mathcal{E}_{S^{\prime}\to R}\) is given by \[\text{Tr}[\Pi_{RS}^{G}\mathcal{E}_{S^{\prime}\to R}(|\psi^{\rho}\rangle\!\langle\psi^{\rho}|_{SS^{\prime}})]\] \[=\sum_{x\in\mathcal{X}}\text{Tr}[\Pi_{RS}^{G}\,\text{Tr}_{S^{\prime}}[\mu_{S^{\prime}}^{x}(|\psi^{\rho}\rangle\!\langle\psi^{\rho}|_{SS^{\prime}})]\otimes\phi_{R}^{x}]\] 
\[=\sum_{x\in\mathcal{X}}p(x)\operatorname{Tr}[\Pi^{G}_{RS}(\phi^{x}_{R} \otimes\psi^{x}_{S})], \tag{149}\] where \[p(x) \coloneqq\operatorname{Tr}[\mu^{x}_{\mathcal{S}^{\prime}}(|\psi^{ \rho}\rangle\!\langle\psi^{\rho}|_{SS^{\prime}})], \tag{150}\] \[\psi^{x}_{S} \coloneqq\frac{1}{p(x)}\operatorname{Tr}_{S^{\prime}}[\mu^{x}_{ \mathcal{S}^{\prime}}(|\psi^{\rho}\rangle\!\langle\psi^{\rho}|_{SS^{\prime}})]. \tag{151}\] Note that \[\sum_{x\in\mathcal{X}}p(x)\psi^{x}_{S} =\sum_{x\in\mathcal{X}}\operatorname{Tr}_{S^{\prime}}[\mu^{x}_{ \mathcal{S}^{\prime}}(|\psi^{\rho}\rangle\!\langle\psi^{\rho}|_{SS^{\prime}})]\] \[=\operatorname{Tr}_{S^{\prime}}[|\psi^{\rho}\rangle\!\langle\psi^ {\rho}|_{SS^{\prime}}]\] \[=\rho_{S}. \tag{152}\] Thus, maximizing over all possible entanglement-breaking channels, the acceptance probability is given by \[\max_{\{(p(x),\psi^{x}_{S},\phi^{x}_{R})\}_{x}} \sum_{x\in\mathcal{X}}p(x)\operatorname{Tr}[\Pi^{G}_{RS}(\phi^{x }_{R}\otimes\psi^{x}_{S})]\] \[=\max_{\omega_{RS}\in\operatorname{SEP}(R:S)}\operatorname{Tr}[ \Pi^{G}_{RS}\omega_{RS}], \tag{153}\] with the condition that \(\operatorname{Tr}_{R}[\omega_{RS}]=\sum_{x}p(x)\psi^{x}_{S}=\rho_{S}\). Since the entire algorithm can be performed efficiently when augmented by an entanglement-breaking prover, the problem is in \(\operatorname{QIP}_{\text{EB}}(2)\). Next, we show that the problem is \(\operatorname{QIP}_{\text{EB}}(2)\)-Hard. To do this, we need to map a general \(\operatorname{QIP}_{\text{EB}}(2)\) problem to an instance of \(\operatorname{Sep-Ext}(G\)-Bose-Symmetry; i.e., using the circuit descriptions for a general \(\operatorname{QIP}_{\text{EB}}(2)\) algorithm, we need to define a state \(\rho_{S}\) and a unitary representation \(\{U_{RS}(g)\}_{g\in G}\). Consider a general interactive proof system in \(\operatorname{QIP}_{\text{EB}}(2)\) that begins with the verifier preparing a bipartite pure state \(\psi_{RS}\), followed by the system \(R\) being sent to the prover, who subsequently performs an entanglement-breaking channel \(\mathcal{E}_{R\to R^{\prime}}\) and sends the \(R^{\prime}\) register to the verifier. The verifier then performs a unitary \(V_{R^{\prime}S\to DG}\), measures the decision qubit, and accepts if the outcome \(|1\rangle\) is observed. Indeed, the acceptance probability is given by \[\max_{\vec{\mathcal{E}}\in\text{EB}}\operatorname{Tr}[(|1\rangle\!\langle 1 |_{D}\otimes I_{G})\mathcal{V}_{R^{\prime}S\to DG}(\mathcal{E}_{R\to R^{ \prime}}(\psi_{RS}))], \tag{154}\] where \(\mathcal{V}_{R^{\prime}S\to DG}\) is the unitary channel corresponding to the unitary operator \(V_{R^{\prime}S\to DG}\) and EB denotes the set of entanglement-breaking channels. Following the reasoning in (149), the output of an entanglement-breaking channel can be written in the form \[\mathcal{E}_{R\to R^{\prime}}(\psi_{RS})=\sum_{x}p(x)\phi^{x}_{R^{\prime}} \otimes\psi^{x}_{S}, \tag{155}\] with the condition that \(\sum_{x}p(x)\psi^{x}_{S}=\operatorname{Tr}_{R}[\psi_{RS}]=\rho_{S}\). In other words, the output of the entanglement-breaking channel is a separable extension of the state \(\rho_{S}\). 
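This observation is easy to verify numerically. The following minimal sketch (our illustration; the measurement, prepared states, and dimensions are random stand-ins for an actual prover strategy) applies a measure-and-prepare channel of the form (148) to one share of a pure bipartite state and checks that the joint output is a separable state whose marginal on the untouched register is unchanged, as in (152) and (155).

```python
import numpy as np

rng = np.random.default_rng(4)
dS, dM, dP = 2, 3, 2  # untouched system, measured register, and prepared-output dimensions

def rand_ket(d):
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    return v / np.linalg.norm(v)

# Pure bipartite state on S (x) M, written as a dS x dM matrix, and its marginal rho_S.
psi = rand_ket(dS * dM).reshape(dS, dM)
rho_S = psi @ psi.conj().T

# Rank-one POVM {mu_x} on M from the columns of a random unitary (a projective measurement),
# and pure states {phi_x} to be prepared on the output register.
U, _ = np.linalg.qr(rng.normal(size=(dM, dM)) + 1j * rng.normal(size=(dM, dM)))
povm = [np.outer(U[:, x], U[:, x].conj()) for x in range(dM)]
prep = []
for _ in range(dM):
    k = rand_ket(dP)
    prep.append(np.outer(k, k.conj()))

# Measure-and-prepare (entanglement-breaking) channel acting on the M share:
# output = sum_x p(x) * psi_S^x (x) phi^x, with p(x) psi_S^x = Tr_M[(I (x) mu_x)|psi><psi|].
out = np.zeros((dS * dP, dS * dP), dtype=complex)
marginal = np.zeros((dS, dS), dtype=complex)
for mu, phi in zip(povm, prep):
    unnorm = psi @ mu.T @ psi.conj().T  # p(x) * psi_S^x
    out += np.kron(unnorm, phi)
    marginal += unnorm

assert np.isclose(np.trace(out).real, 1.0)  # the output is a normalized state
assert np.allclose(marginal, rho_S)         # and it extends rho_S, as required
print("separable extension reproduces rho_S:", np.allclose(marginal, rho_S))
```

The converse direction, that every such separable extension of \(\rho_{S}\) can be reached by some prover strategy, is what underlies the equality in (153).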
Thus, the acceptance probability is given by \[\max_{\omega_{R^{\prime}S}\in\operatorname{SEP}(R^{\prime}:S)} \operatorname{Tr}[(|1\rangle\!\langle 1|_{D}\otimes I_{G})V(\omega_{R^{ \prime}S})V^{\dagger}]\] \[=\max_{\omega_{R^{\prime}S}\in\operatorname{SEP}(R^{\prime}:S)} \operatorname{Tr}[(V^{\dagger}(|1\rangle\!\langle 1|_{D}\otimes I_{G})V)\omega_{R^{ \prime}S}], \tag{156}\] subject to the constraint \(\operatorname{Tr}_{R^{\prime}}[\omega_{R^{\prime}S}]=\rho_{S}\), where we have used the shorthand \(V\equiv V_{R^{\prime}S\to DG}\). The second inequality results from the cyclicity of trace. Let us then define the group \(G\) to be the cyclic group on two elements \(C_{2}=\{I,W\}\) such that \(W^{2}=I\), where \(W\) is simply given by \[W_{R^{\prime}S}:=(V_{R\mathcal{S}^{\prime}\to DG})^{\dagger}\left(-Z_{D} \otimes I_{G}\right)V_{R^{\prime}S\to DG}, \tag{157}\] We note that \(W^{2}_{R^{\prime}S}=I_{R^{\prime}S}\), establishing that this is indeed a representation of \(C_{2}\). The resulting group projection is then \[\Pi^{G}_{R^{\prime}S} =\frac{1}{2}(I_{R^{\prime}S}+W_{R^{\prime}S})\] \[=\frac{1}{2}\left(V^{\dagger}(I_{D}\otimes I_{G})V+V^{\dagger} \left(-Z_{D}\otimes I_{G}\right)V\right)\] \[=\frac{1}{2}\left(V^{\dagger}\left(I_{DG}-\left(Z_{D}\otimes I_{G }\right)\right)V\right)\] \[=V^{\dagger}(|1\rangle\!\langle 1|_{D}\otimes I_{G})V. \tag{158}\] Next, we define the state \(\rho_{S}\) to be \[\rho_{S}\coloneqq\operatorname{Tr}_{R}[\psi_{RS}]. \tag{159}\] Thus, the symmetry-testing condition of the instance of \(\operatorname{Sep-Ext}(G\)-Bose-Symmetry is given by \[\max_{\omega_{R^{\prime}S}\in\operatorname{SEP}(R^{\prime}:S)} \operatorname{Tr}[\Pi^{G}_{R^{\prime}S}\omega_{R^{\prime}S}]=\] \[\max_{\omega_{R^{\prime}S}\in\operatorname{SEP}(R^{\prime}:S)} \operatorname{Tr}[V^{\dagger}(|1\rangle\!\langle 1|_{D}\otimes I_{G})V\omega_{R^{\prime}S}], \tag{160}\] where the maximization over \(\omega_{R^{\prime}S}\) is subject to the constraint that \(\operatorname{Tr}_{R^{\prime}}[\omega_{R^{\prime}S}]=\rho_{S}\). This exactly matches the acceptance probability of the \(\operatorname{QIP}_{\text{EB}}(2)\) problem, establishing that \(\operatorname{Sep-Ext}(G\)-Bose-Symmetry is \(\operatorname{QIP}_{\text{EB}}(2)\)-Hard, thus completing the proof. ### Testing \(G\)-Bose Symmetric Extendibility of the Output of a Channel is QIP-Complete In this section, we show that testing the \(G\)-Bose symmetric extendibility (G-BSE) of the output of a channel state is QIP-Complete. Figure 7: \(\operatorname{QIP}_{\text{EB}}(2)\) algorithm to test for \(G\)-Bose symmetry of a separable extension of the state \(\rho_{S}\), where the prover’s actions are depicted in the dashed box. The prover’s channel \(\mathcal{E}_{S^{\prime}\to R}\) is entanglement breaking. **Problem 8** (\((\alpha,\beta)\)-Channel-\(G\)-Bse): _Let \(\alpha\) and \(\beta\) be such that \(0\leq\beta<\alpha\leq 1\). Given are descriptions of circuits \(U^{\mathcal{N}}_{BC^{\prime}\to S^{\prime}S}\) that prepare a unitary dilation of a channel_ \[\mathcal{N}_{B\to S}(\cdot)\coloneqq\\ \operatorname{Tr}_{S^{\prime}}[U^{\mathcal{N}}_{BC^{\prime}\to S^{ \prime}S}((\cdot)_{B}\otimes|0\rangle\!\langle 0|_{C^{\prime}})(U^{\mathcal{N}}_{BC^{ \prime}\to S^{\prime}S})^{\dagger}] \tag{161}\] _and descriptions of a unitary representation \(\{U_{S}(g)\}_{g\in G}\) of a group \(G\). 
Decide which of the following holds:_ _Yes:_ \[\max_{\begin{subarray}{c}\rho_{B}\in\mathcal{D}(\mathcal{H}_{B}),\\ \sigma_{S}\in\operatorname{BSE}_{G}\end{subarray}}F(\mathcal{N}_{B\to S}( \rho_{B}),\sigma_{S})\geq\alpha,\] (162) _No:_ \[\max_{\begin{subarray}{c}\rho_{B}\in\mathcal{D}(\mathcal{H}_{B}),\\ \sigma_{S}\in\operatorname{BSE}_{G}\end{subarray}}F(\mathcal{N}_{B\to S}( \rho_{B}),\sigma_{S})\leq\beta,\] (163) _where the set \(\operatorname{BSE}_{G}\) is defined to be:_ \[\operatorname{BSE}_{G}\coloneqq\\ \left\{\begin{array}{c}\sigma_{S}:\exists\ \omega_{RS}\in \mathcal{D}(\mathcal{H}_{RS}),\operatorname{Tr}_{R}[\omega_{RS}]=\sigma_{S},\\ \omega_{RS}=U_{RS}(g)\omega_{RS},\ \forall g\in G\end{array}\right\}. \tag{164}\] Let us observe that the symmetry measure in (162) is faithful, in the sense that it is equal to one if and only if there is a channel input state \(\rho_{B}\) such that the output state \(\mathcal{N}_{B\to S}(\rho_{B})\) is \(G\)-Bose symmetric extendible. **Theorem 8**: _The promise problem Channel-\(G\)-BSE is QIP-Complete._ 1. \((\alpha,\beta)\)_-Channel-_\(G\)_-BSE is in QIP for all_ \(\beta<\alpha\)_. (It is implicit that the gap between_ \(\alpha\) _and_ \(\beta\) _is larger than an inverse polynomial in the input length.)_ 2. \((1-\varepsilon,\varepsilon)\)_-Channel-_\(G\)_-BSE is QIP-Hard, even when_ \(\varepsilon\) _decays exponentially in the input length._ _Thus, \((\alpha,\beta)\)-Channel-\(G\)-BSE is QIP-Complete for all \((\alpha,\beta)\) such that \(0\leq\beta<\alpha\leq 1\)._ **Proof.** To show that the problem is QIP-Complete, we need to demonstrate two facts: first, that the problem is in QIP, and second, that it is QIP-Hard. Let us begin by proving that the problem is in QIP. In our previous work [11, Algorithm 3], we proposed an algorithm to test for \(G\)-Bose Symmetric Extendibility of a state \(\rho_{S}\) given a circuit description of unitary that generates a purification of the state and circuit descriptions of a unitary representation \(\{U_{RS}(g)\}_{g\in G}\) of a group \(G\). By inspection, the algorithm can be executed efficiently given two messages exchanged with an all-powerful prover. The optimal input state to the channel is sent by another message of the prover, thus adding up to three messages in total. As such, the algorithm is clearly in QIP. To show that the problem is QIP-Hard, we map an arbitrary QIP problem to an instance \((\mathcal{N},\{U_{RS}(g)\}_{g\in G})\) of Channel-\(G\)-BSE. Since \(\operatorname{QIP}(3)\equiv\operatorname{QIP}\)[10], our goal is to find a correspondence between an arbitrary QIP(3) protocol and a choice of channel and group, \((\mathcal{N},\{U_{RS}(g)\}_{g\in G})\). Specifically, from the circuit descriptions for a QIP(3) algorithm, we will identify a channel \(\mathcal{N}_{R^{\prime\prime}\to S^{\prime}}\) and a unitary representation \(\{V_{R^{\prime}S^{\prime}}(g)\}_{g\in G}\) corresponding to a \(G\)-BSE problem, and we will show how the symmetry-testing condition \[\max_{\begin{subarray}{c}\rho_{B^{\prime\prime}}\in\mathcal{D}(\mathcal{H}_{R^ {\prime\prime}}),\\ \sigma_{S^{\prime}}\in\operatorname{BSE}_{G}\end{subarray}}F(\mathcal{N}_{R^{ \prime\prime}\to S^{\prime}}(\rho_{R^{\prime\prime}}),\sigma_{S^{\prime}}) \tag{165}\] can be written in terms of the QIP(3) algorithm's acceptance probability. 
To begin, recall that an arbitrary QIP(3) problem consists of three messages exchanged and involves a first prover unitary \(P^{1}_{E^{\prime\prime}\to R^{\prime\prime}E}\), a first verifier unitary \(U^{1}_{SAR^{\prime\prime}\to S^{\prime}R}\), a second prover unitary \(P^{2}_{RE\to R^{\prime}E^{\prime}}\), and a second verifier unitary \(U^{2}_{S^{\prime}R^{\prime}\to DG}\), where \(D\) is the decision qubit. (Here we leave the dependence of the prover unitaries on \(x\) to be implicit.) The acceptance probability is thus, \[p_{\text{acc}}=\max_{\begin{subarray}{c}P^{1}_{E^{\prime\prime} \to R^{\prime\prime}E},\\ P^{2}_{RE\to R^{\prime}E^{\prime}}\end{subarray}}\left\|(\langle 1|_{D}\otimes I_{GE^{ \prime}})U^{2}_{S^{\prime}R^{\prime}\to DG}P^{2}_{RE\to R^{\prime}E^{\prime}}\right.\] \[\times\left.U^{1}_{SAR^{\prime\prime}\to S^{\prime}R}P^{1}_{E^{ \prime\prime}\to R^{\prime\prime}E}|x\rangle_{S}|0\rangle_{A}|0\rangle_{E^{ \prime\prime}}\right\|^{2}_{2}. \tag{166}\] Defining the first state after the action of the prover's unitary \(P^{1}\) to be \[|\psi\rangle_{R^{\prime\prime}}\coloneqq P^{1}_{E^{\prime\prime} \to R^{\prime\prime}E}|0\rangle_{E^{\prime\prime}}, \tag{167}\] and the isometry \[W_{R^{\prime\prime}\to S^{\prime}R}\coloneqq U^{1}_{SAR^{\prime} \to S^{\prime}R}|x\rangle_{S}|0\rangle_{A}, \tag{168}\] the acceptance probability can then be written as \[p_{\text{acc}} =\max_{\begin{subarray}{c}\psi_{R^{\prime\prime}E},P^{2}_{RE\to R ^{\prime\prime}E^{\prime}}\end{subarray}}\operatorname{Tr}[(|1\rangle\! \langle 1|_{D}\otimes I_{E^{\prime}G})U^{2}\] \[\quad\times P^{2}W\psi_{R^{\prime\prime}E}W^{\dagger}{P^{2}}^{ \dagger}{U^{2}}^{\dagger}]\] \[=\max_{\begin{subarray}{c}\psi_{R^{\prime\prime}E},P^{2}_{RE\to R ^{\prime\prime}E^{\prime}}\end{subarray}}\operatorname{Tr}[{U^{2}}^{\dagger}(|1 \rangle\!\langle 1|_{D}\otimes I_{E^{\prime}G})U^{2}\] \[\quad\times P^{2}W\psi_{R^{\prime\prime}E}W^{\dagger}{P^{2}}^{ \dagger}], \tag{169}\] where we have used cyclicity of trace in the last line. We can also identify the aforementioned channel \(\mathcal{N}_{R^{\prime\prime}\to S^{\prime}}\) as follows: \[\mathcal{N}_{R^{\prime\prime}\to S^{\prime}}(\cdot)\coloneqq \operatorname{Tr}_{R}[W(\cdot)_{R^{\prime\prime}}W^{\dagger}]. \tag{170}\] Motivated by the expression in (169), we pick the group \(G\) to be \(C_{2}\) with unitary representation \(\{I_{R^{\prime}S^{\prime}},V_{R^{\prime}S^{\prime}}\}\), where \[V_{R^{\prime}S^{\prime}}\coloneqq\left(U^{2}_{R^{\prime}S^{\prime }\to DG}\right)^{\dagger}\left(-Z_{D}\otimes I_{G}\right)U^{2}_{R^{\prime}S^{ \prime}\to DG}. \tag{171}\] We note that \(V^{2}_{RS}=I_{RS}\), proving that this is indeed a representation of \(C_{2}\). The resulting group projection is then \[\Pi^{G}_{R^{\prime}S^{\prime}}=\frac{1}{2}(I_{R^{\prime}S^{\prime}}+V_{R^{ \prime}S^{\prime}})\] \[=\frac{1}{2}\left({U^{2}}^{\dagger}(I_{D}\otimes I_{G})U^{2}+{U^{2}}^ {\dagger}\left(-Z_{D}\otimes I_{G}\right)U^{2}\right)\] \[=\frac{1}{2}\left({U^{2}}^{\dagger}\left(I_{DG}-\left(Z_{D}\otimes I _{G}\right)\right)U^{2}\right)\] \[={U^{2}}^{\dagger}(|1\rangle\!\langle 1|_{D}\otimes I_{G})U^{2}, \tag{172}\] which is precisely the acceptance projection in (169). 
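As a quick numerical sanity check of this construction (our own illustration; the register dimensions are arbitrary toy choices), one can verify (171)–(172) directly for a randomly drawn verifier unitary; the same check applies verbatim to (157)–(158).

```python
# Small numpy check (illustrative, not from the paper) of Eqs. (171)-(172):
# conjugating -Z on the decision qubit by the verifier's final unitary yields an
# order-two unitary whose group projection is exactly the acceptance projection
# V^dag (|1><1|_D (x) I_G) V.
import numpy as np

rng = np.random.default_rng(1)
dim_D, dim_G = 2, 4            # decision qubit D and a toy 4-dimensional register G
n = dim_D * dim_G

# Random "verifier" unitary U2 acting on D (x) G, obtained via QR decomposition.
X = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
U2, _ = np.linalg.qr(X)

Z = np.diag([1.0, -1.0])
proj1 = np.diag([0.0, 1.0])    # |1><1| on the decision qubit

V = U2.conj().T @ np.kron(-Z, np.eye(dim_G)) @ U2      # Eq. (171)
assert np.allclose(V @ V, np.eye(n))                    # V^2 = I, so {I, V} represents C_2

Pi_G = 0.5 * (np.eye(n) + V)                            # group projection for C_2
accept = U2.conj().T @ np.kron(proj1, np.eye(dim_G)) @ U2
assert np.allclose(Pi_G, accept)                        # Eq. (172)
print("C_2 group projection equals the acceptance projection")
```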
In terms of this projection, the acceptance probability in (169) becomes \[p_{\rm acc}=\max_{\psi_{R^{\prime\prime}E},\,P^{2}_{RE\to R^{\prime}E^{\prime}}}\operatorname{Tr}[\Pi_{R^{\prime}S^{\prime}}^{G}P^{2}W\psi_{R^{\prime\prime}E}{W^{\dagger}}{P^{2}}^{\dagger}]. \tag{173}\] Now invoking [16, Theorem III.3], we conclude that \[p_{\rm acc} =\max_{\psi_{R^{\prime\prime}E},\,P^{2}_{RE\to R^{\prime}E^{\prime}}}\operatorname{Tr}[\Pi_{R^{\prime}S^{\prime}}^{G}P^{2}W\psi_{R^{\prime\prime}E}{W^{\dagger}}{P^{2}}^{\dagger}]\] \[=\max_{\psi_{R^{\prime\prime}E},\,P^{2}_{RE\to R^{\prime}E^{\prime}}}\left\|\Pi_{R^{\prime}S^{\prime}}^{G}P^{2}W|\psi\rangle_{R^{\prime\prime}E}\right\|_{2}^{2}\] \[=\max_{\begin{subarray}{c}\rho_{R^{\prime\prime}}\in\mathcal{D}(\mathcal{H}_{R^{\prime\prime}}),\\ \sigma_{S^{\prime}}\in\operatorname{BSE}_{G}\end{subarray}}F(\mathcal{N}_{R^{\prime\prime}\to S^{\prime}}(\rho_{R^{\prime\prime}}),\sigma_{S^{\prime}}), \tag{174}\] where in this case \(\operatorname{BSE}_{G}\) is defined in the same way as in (143). To help visualize the reduction, the \(G\)-Bose symmetric extendibility test corresponding to a general QIP algorithm is depicted in Figure 8. Thus, the acceptance probability of the QIP algorithm now exactly matches the symmetry-testing condition of the constructed \(G\)-BSE problem. As such, any QIP problem can be efficiently mapped to testing \(G\)-BSE of the output of a channel, proving that the problem Channel-\(G\)-BSE is QIP-Hard. Along with the fact that the problem lies in the QIP class, this concludes the proof. ### Testing Hamiltonian Symmetry Using Maximum Spectral Norm is in QMA In this section, we show that testing whether a Hamiltonian is symmetric with respect to a group representation and the maximum spectral norm is in QMA. In particular, we consider the following task: given a group \(G\) with unitary representation \(\{U(g)\}_{g\in G}\), a time \(t\in\mathbb{R}\), and a classical description of a local or sparse Hamiltonian \(H\), estimate the following quantity: \[\max_{g\in G}\left\|[U(g),e^{-iHt}]\right\|_{\infty}^{2}, \tag{175}\] where the spectral norm of a matrix \(A\) is defined as \[\left\|A\right\|_{\infty}\coloneqq\sup_{|\psi\rangle\in\mathcal{H}}\left\{\left\|A|\psi\rangle\right\|_{2}:\left\||\psi\rangle\right\|_{2}=1\right\}. \tag{176}\] The quantity in (175) is a faithful measure of asymmetry in the following sense: \[\max_{g\in G}\left\|\left[U(g),e^{-iHt}\right]\right\|_{\infty}^{2}=0\quad\forall t\in(-\delta,\delta),\] \[\Leftrightarrow\qquad[U(g),e^{-iHt}]=0\quad\forall g\in G,\,t\in(-\delta,\delta),\] \[\Leftrightarrow\qquad[U(g),H]=0\quad\forall g\in G, \tag{177}\] where \(\delta>0\). The first equivalence follows from faithfulness of the spectral norm, and the second equivalence follows by taking the derivative of the second line at \(t=0\). **Problem 9** (Ham-Sym-Max-Spec): _Let \(\alpha\) and \(\beta\) be such that \(0\leq\beta<\alpha\leq 2\), and fix \(t\in\mathbb{R}\). Given are circuit descriptions of a unitary representation \(\{U(g)\}_{g\in G}\) of a group \(G\) and a classical description of a \(k\)-local or sparse Hamiltonian \(H\). Decide which of the following holds:_ \[\text{Yes:}\quad\max_{g\in G}\left\|[U(g),e^{-iHt}]\right\|_{\infty}^{2}\geq\alpha, \tag{178}\] \[\text{No:}\quad\max_{g\in G}\left\|[U(g),e^{-iHt}]\right\|_{\infty}^{2}\leq\beta. \tag{179}\] In what follows, we show that Ham-Sym-Max-Spec is in QMA, and it remains an interesting open question to determine whether this problem is QMA-Hard or hard for some other complexity class.
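For intuition, the quantity in (175) can be evaluated directly at the matrix level for small examples. The sketch below is our own illustration (the two-element group generated by SWAP and the toy Hamiltonians are arbitrary choices) and also exhibits the faithfulness property in (177).

```python
# Illustrative sketch (ours, not from the paper): directly evaluating the asymmetry
# measure of Eq. (175) for the two-element group {I, SWAP} acting on two qubits,
# using scipy's matrix exponential for e^{-iHt}.
import numpy as np
from scipy.linalg import expm

def max_commutator_norm_sq(unitaries, H, t):
    """max_g || [U(g), e^{-iHt}] ||_inf^2, with ||.||_inf the spectral norm."""
    U_t = expm(-1j * H * t)
    return max(np.linalg.norm(U @ U_t - U_t @ U, 2) ** 2 for U in unitaries)

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
SWAP = np.eye(4)[[0, 2, 1, 3]]
group = [np.eye(4), SWAP]

H_sym = np.kron(Z, Z)    # invariant under SWAP -> measure is 0 (a "No" instance)
H_asym = np.kron(Z, I2)  # not SWAP-invariant -> measure is strictly positive

print(max_commutator_norm_sq(group, H_sym, t=0.7))   # ~0
print(max_commutator_norm_sq(group, H_asym, t=0.7))  # > 0
```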
**Theorem 9**: _The promise problem Ham-Sym-Max-Spec is in QMA._ **Proof.** Consider the following steps of a QMA interactive proof (see Figure 9): 1. The prover sends a state in registers \(C\) and \(P\), with the dimension of \(C\) being equal to \(|G|\) and the dimension of \(P\) being equal to the dimension of \(H\). 2. The verifier measures the register \(C\) and obtains the outcome \(g\in G\). 3. The verifier adjoins a qubit \(C^{\prime}\) in the state \(|+\rangle\), performs the Hamiltonian evolution \(e^{iHt}\), the controlled unitary \(|0\rangle\!\langle 0|\otimes I+|1\rangle\!\langle 1|\otimes U^{\dagger}(g)\), the Hamiltonian evolution \(e^{-iHt}\), and the controlled unitary \(|0\rangle\!\langle 0|\otimes I+|1\rangle\!\langle 1|\otimes U(g)\). 4. The verifier measures the qubit \(C^{\prime}\) in the Hadamard basis \(\{|+\rangle,|-\rangle\}\) and accepts if the outcome \(|-\rangle\) occurs. As noted in the previous section, there exist multiple methods to realize an efficient circuit for the Hamiltonian evolutions \(e^{-iHt}\) and \(e^{iHt}\) (see [18] and references therein). We also note that there are some similarities, as well as key differences, between this algorithm and that given in Figure 3 of [17]. Let us now analyze the acceptance probability of this interactive proof. It suffices for the prover to send a pure state, as this maximizes the acceptance probability. Let us expand a fixed pure state \(|\psi\rangle_{CP}\) of registers \(C\) and \(P\) as follows: \[|\psi\rangle_{CP}=\sum_{g\in G}\sqrt{p(g)}|g\rangle_{C}|\psi_{g}\rangle_{P}, \tag{180}\] where \(\{p(g)\}_{g}\) is a probability distribution and \(\{|\psi_{g}\rangle_{P}\}_{g}\) is a set of states. After the verifier's measurement in Step 2, the probability of obtaining outcome \(g\in G\) is \(p(g)\) and the post-measurement state of register \(P\) is \(\left|\psi_{g}\right\rangle_{P}\). Conditioned on the outcome \(g\) and defining the unitary \(W(g,t)\equiv U(g)e^{-iHt}U^{\dagger}(g)e^{iHt}\), the acceptance probability of Steps 3-4 is then given by \[\left\|(\langle-|_{C^{\prime}}\otimes I_{P}\rangle\frac{1}{\sqrt{ 2}}(|0\rangle_{C^{\prime}}|\psi_{g}\rangle_{P}+|1\rangle_{C^{\prime}}W(g,t)| \psi_{g}\rangle_{P})\right\|_{2}^{2}\\ =\frac{1}{4}\left\|(I-W(g,t))|\psi_{g}\rangle_{P}\right\|_{2}^{2}. 
\tag{181}\] Thus, for a fixed state \(|\psi\rangle_{CP}\) of the prover, the acceptance probability is given by \[\frac{1}{4}\sum_{g\in G}p(g)\left\|(I-W(g,t))|\psi_{g}\rangle_{P}\right\|_{2}^{2}, \tag{182}\] and finally maximizing over all such states leads to the following expression for the acceptance probability: \[\max_{|\psi\rangle_{CP}}\frac{1}{4}\sum_{g\in G}p(g)\left\|(I-W(g,t))|\psi_{g}\rangle_{P}\right\|_{2}^{2}\\ =\frac{1}{4}\max_{\begin{subarray}{c}\{p(g)\}_{g},\\ \{|\psi_{g}\rangle_{P}\}_{g}\end{subarray}}\sum_{g\in G}p(g)\left\|(I-W(g,t))|\psi_{g}\rangle_{P}\right\|_{2}^{2}\\ =\frac{1}{4}\max_{\{p(g)\}_{g}}\sum_{g\in G}p(g)\max_{|\psi_{g}\rangle_{P}}\left\|(I-W(g,t))|\psi_{g}\rangle_{P}\right\|_{2}^{2}\\ =\frac{1}{4}\max_{\{p(g)\}_{g}}\sum_{g\in G}p(g)\left\|I-W(g,t)\right\|_{\infty}^{2}\\ =\frac{1}{4}\max_{g\in G}\left\|I-W(g,t)\right\|_{\infty}^{2}\\ =\frac{1}{4}\max_{g\in G}\left\|U(g)e^{-iHt}-e^{-iHt}U(g)\right\|_{\infty}^{2}\\ =\frac{1}{4}\max_{g\in G}\left\|[U(g),e^{-iHt}]\right\|_{\infty}^{2}. \tag{183}\] The third equality follows from the definition of the spectral norm. The fourth equality follows because the optimal distribution is a point mass on the largest value of \(\left\|I-W(g,t)\right\|_{\infty}^{2}\). The penultimate equality follows from unitary invariance of the spectral norm. In light of the above analysis, the best strategy of the prover is to compute \(\max_{g\in G}\left\|[U(g),e^{-iHt}]\right\|_{\infty}^{2}\) in advance, send the maximizing value of \(g\) in register \(C\), and send the corresponding state that achieves the spectral norm in register \(P\). As the acceptance probability of this QMA interactive proof is precisely related to the decision criteria in Problem 9, this concludes the proof. Figure 8: Circuit to map an arbitrary QIP algorithm to a \(G\)-Bose symmetric extendibility test on the output of a channel. Figure 9: Circuit depicting a QMA test for Hamiltonian symmetry with respect to a group, where it is understood that the unitary \(P\) is implemented by an all-powerful prover. The final measurement is in the Hadamard basis, and the algorithm accepts if the \(|-\rangle\) outcome occurs. ### Testing Hamiltonian Symmetry Using Average Spectral Norm is in QAM In this section, we show that testing whether a Hamiltonian is symmetric with respect to a group representation and the average spectral norm is in QAM. In particular, we consider the following task: given a group \(G\) with unitary representation \(\{U(g)\}_{g\in G}\), a time \(t\in\mathbb{R}\), and a classical description of a local or sparse Hamiltonian \(H\), estimate the following quantity: \[\frac{1}{\left|G\right|}\sum_{g\in G}\left\|[U(g),e^{-iHt}]\right\|_{\infty}^{2}. \tag{184}\] This is a faithful measure of symmetry in the following sense: \[\frac{1}{\left|G\right|}\sum_{g\in G}\left\|[U(g),e^{-iHt}]\right\|_{\infty}^{2}=0\quad\forall t\in(-\delta,\delta),\] \[\Leftrightarrow\qquad[U(g),e^{-iHt}]=0\quad\forall g\in G,\,t\in(-\delta,\delta),\] \[\Leftrightarrow\qquad[U(g),H]=0\quad\forall g\in G, \tag{185}\] where \(\delta>0\). The first equivalence follows from faithfulness of the spectral norm, and the second equivalence follows by taking the derivative of the second line at \(t=0\).
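The analysis of Steps 3–4 can likewise be checked numerically. The following sketch is our own illustration (with arbitrary toy choices of \(H\), \(U(g)\), and \(t\)): it simulates the controlled-interference test and confirms the acceptance probability formula in (181).

```python
# Numerical sketch (ours) of the single-qubit interference test in Steps 3-4:
# preparing (|0>|psi> + |1> W(g,t)|psi>)/sqrt(2) and measuring the control in the
# Hadamard basis reproduces the acceptance probability (1/4)||(I - W(g,t))|psi>||^2
# of Eq. (181).  All concrete choices (H, U(g), t) are arbitrary toy values.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
d = 4
H = rng.normal(size=(d, d))
H = (H + H.T) / 2                                          # toy real symmetric Hamiltonian
U = np.eye(d)[[0, 2, 1, 3]]                                # U(g) = SWAP on two qubits
t = 0.9
W = U @ expm(-1j * H * t) @ U.conj().T @ expm(1j * H * t)  # W(g,t)

psi = rng.normal(size=d) + 1j * rng.normal(size=d)
psi /= np.linalg.norm(psi)

# State after the controlled operations, with the control qubit written first.
state = (np.kron([1, 0], psi) + np.kron([0, 1], W @ psi)) / np.sqrt(2)
minus = np.array([1, -1]) / np.sqrt(2)
amp = np.kron(minus.conj(), np.eye(d)) @ state             # (<-| (x) I) |state>
p_accept_circuit = np.linalg.norm(amp) ** 2

p_accept_formula = 0.25 * np.linalg.norm((np.eye(d) - W) @ psi) ** 2
assert np.isclose(p_accept_circuit, p_accept_formula)
print(p_accept_circuit)
```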
**Problem 10** (Ham-Sym-Avg-Spec): _Let \(\alpha\) and \(\beta\) be such that \(0\leq\beta<\alpha\leq\gamma\), where \(\gamma\) is defined in (47), and fix \(t\in\mathbb{R}\). Given are circuit descriptions of a unitary representation \(\{U(g)\}_{g\in G}\) of a group \(G\) and a classical description of a \(k\)-local or sparse Hamiltonian \(H\). Decide which of the following holds:_ \[\text{Yes:}\quad\frac{1}{|G|}\sum_{g\in G}\left\|[U(g),e^{-iHt}]\right\|_{\infty}^{2}\geq\alpha, \tag{186}\] \[\text{No:}\quad\frac{1}{|G|}\sum_{g\in G}\left\|[U(g),e^{-iHt}]\right\|_{\infty}^{2}\leq\beta. \tag{187}\] In what follows, we show that Ham-Sym-Avg-Spec is in QAM, and it remains an interesting open question to determine whether this problem is QAM-Hard or hard for some other complexity class. **Theorem 10**: _The promise problem Ham-Sym-Avg-Spec is in QAM._ **Proof.** Consider the following steps of a QAM interactive proof (see Figure 10): 1. The verifier and prover are given a value \(g\in G\) chosen uniformly at random. 2. The prover prepares a state \(|\psi_{g}\rangle\) in register \(P\), which depends on the value \(g\) and which has dimension equal to that of \(H\). 3. The verifier adjoins a qubit \(C^{\prime}\) in the state \(|+\rangle\), performs the Hamiltonian evolution \(e^{iHt}\), the controlled unitary \(|0\rangle\!\langle 0|\otimes I+|1\rangle\!\langle 1|\otimes U^{\dagger}(g)\), the Hamiltonian evolution \(e^{-iHt}\), and the controlled unitary \(|0\rangle\!\langle 0|\otimes I+|1\rangle\!\langle 1|\otimes U(g)\). 4. The verifier measures the qubit \(C^{\prime}\) in the Hadamard basis \(\{|+\rangle,|-\rangle\}\) and accepts if the outcome \(|-\rangle\) occurs. Let us now analyze the acceptance probability of this interactive proof. We define the set of states \(\{|\psi_{g}\rangle=P_{g}|0\rangle\}_{g\in G}\). Conditioned on the value \(g\), the prover's state is \(|\psi_{g}\rangle\), and defining the unitary \(W(g,t)\equiv U(g)e^{-iHt}U^{\dagger}(g)e^{iHt}\), the acceptance probability of Steps 3-4 is then given by \[\left\|(\langle-|_{C^{\prime}}\otimes I_{P})\frac{1}{\sqrt{2}}(|0\rangle_{C^{\prime}}|\psi_{g}\rangle_{P}+|1\rangle_{C^{\prime}}W(g,t)|\psi_{g}\rangle_{P})\right\|_{2}^{2}\\ =\frac{1}{4}\left\|(I-W(g,t))|\psi_{g}\rangle_{P}\right\|_{2}^{2}. \tag{188}\] Thus, for a fixed set \(\{P_{g}\}_{g}\) of prover unitaries and averaging over the shared uniform randomness, the acceptance probability is given by \[\frac{1}{4|G|}\sum_{g\in G}\left\|(I-W(g,t))|\psi_{g}\rangle_{P}\right\|_{2}^{2}. \tag{189}\] Finally, maximizing over all such prover unitaries leads to the following expression for the acceptance probability: \[\max_{\{P_{g}\}_{g}}\frac{1}{4|G|}\sum_{g\in G}\left\|(I-W(g,t))|\psi_{g}\rangle_{P}\right\|_{2}^{2}\] \[=\frac{1}{4|G|}\sum_{g\in G}\left\|I-W(g,t)\right\|_{\infty}^{2}\] \[=\frac{1}{4|G|}\sum_{g\in G}\left\|[U(g),e^{-iHt}]\right\|_{\infty}^{2}, \tag{190}\] where the reasoning is the same as that in (183). As the acceptance probability of this QAM interactive proof is precisely related to the decision structure in Problem 10, this concludes the proof. Figure 10: Circuit depicting a QAM test for Hamiltonian symmetry with respect to a group, where it is understood that the unitary \(P\) is implemented by an all-powerful prover. The final measurement is in the Hadamard basis, and the algorithm accepts if the \(|-\rangle\) outcome occurs. ## V Conclusion In summary, we established the computational complexity of various symmetry-testing problems. In particular, we showed that the various problems are complete for BQP, QMA, QSZK, QIP(2), QIP\({}_{\text{EB}}\)(2), and QIP, encompassing much of the known suite of quantum interactive proof models. In some cases, we employed the interactive proof algorithms from [11] to establish containment, and in other cases, we devised new algorithms. We proved hardness results by embedding various circuits involved in a given computation into the preparation of a state or channel or into a unitary representation of a group. Finally, we introduced two Hamiltonian symmetry-testing problems and proved that they are contained in QMA and QAM. Going forward from here, there are several directions to consider: * Let us observe that several key resources such as entanglement and distinguishability have been connected to the quantum interactive proof hierarchy, through the findings of [11, 12] and [22, 23, 24, 25, 26], respectively. Our work makes a nontrivial link between this hierarchy and asymmetry, another key resource. These connections make us wonder whether other resources in quantum mechanics, such as coherence, magic, athermality, etc. [19], can be linked with the same hierarchy. * We are curious whether the two aforementioned Hamiltonian symmetry-testing problems could be shown to be complete for QMA and QAM, respectively, or complete for some other quantum complexity class of interest. * Several multipartite separability problems were identified in [23] and related to a quantum interactive proof setting in which there is a prover who performs a measurement, sends the classical outcome to multiple provers, who then send states to the verifier. One could thus try to find a symmetry-testing problem that is complete for this class. * Various quantum algorithms for testing symmetries of channels, measurements, and Lindbladians under the Hilbert-Schmidt norm were proposed recently in [1]. One could attempt to show that corresponding symmetry-testing problems are complete for BQP. ###### Acknowledgements. We acknowledge the guiding role that Professor A. Ravi P. Rau has played in our academic lives, through many influential scientific discussions and interactions. We take this occasion to dedicate our paper, in which symmetry has played an essential role, to Prof. Rau. We thank Aby Philip and Vishal Singh for discussions. MLL acknowledges support from the DoD SMART Scholarship program. She also acknowledges support from NSF Grant No. 2315398 for a visit to Cornell University during November 2022, when this work was initiated. MMW and SR acknowledge support from NSF Grant No. 2315398.
Testing the symmetries of quantum states and channels provides a way to assess how useful they are for different physical, computational, and communication tasks. Here, we establish several complexity-theoretic results that classify the difficulty of symmetry-testing problems involving a unitary representation of a group and a state or channel being tested. In particular, we prove that various such symmetry-testing problems are complete for BQP, QMA, QSZK, QIP(2), QIP_EB(2), and QIP, spanning the prominent classes of the quantum interactive proof hierarchy and forging a non-trivial connection between symmetry and quantum computational complexity. Finally, we prove the inclusion of two Hamiltonian symmetry-testing problems in QMA and QAM, and we leave it as an intriguing open question to determine whether these problems are complete for these classes.
2309.08816
EgoObjects: A Large-Scale Egocentric Dataset for Fine-Grained Object Understanding
Object understanding in egocentric visual data is arguably a fundamental research topic in egocentric vision. However, existing object datasets are either non-egocentric or have limitations in object categories, visual content, and annotation granularities. In this work, we introduce EgoObjects, a large-scale egocentric dataset for fine-grained object understanding. Its Pilot version contains over 9K videos collected by 250 participants from 50+ countries using 4 wearable devices, and over 650K object annotations from 368 object categories. Unlike prior datasets containing only object category labels, EgoObjects also annotates each object with an instance-level identifier, and includes over 14K unique object instances. EgoObjects was designed to capture the same object under diverse background complexities, surrounding objects, distance, lighting and camera motion. In parallel to the data collection, we conducted data annotation by developing a multi-stage federated annotation process to accommodate the growing nature of the dataset. To bootstrap the research on EgoObjects, we present a suite of 4 benchmark tasks around the egocentric object understanding, including a novel instance level- and the classical category level object detection. Moreover, we also introduce 2 novel continual learning object detection tasks. The dataset and API are available at https://github.com/facebookresearch/EgoObjects.
Chenchen Zhu, Fanyi Xiao, Andres Alvarado, Yasmine Babaei, Jiabo Hu, Hichem El-Mohri, Sean Chang Culatana, Roshan Sumbaly, Zhicheng Yan
2023-09-15T23:55:43
http://arxiv.org/abs/2309.08816v1
# EgoObjects: A Large-Scale Egocentric Dataset for Fine-Grained Object Understanding ###### Abstract Object understanding in egocentric visual data is arguably a fundamental research topic in egocentric vision. However, existing object datasets are either non-egocentric or have limitations in object categories, visual content, and annotation granularities. In this work, we introduce EgoObjects, a large-scale egocentric dataset for fine-grained object understanding. Its Pilot version contains over 9K videos collected by 250 participants from 50+ countries using 4 wearable devices, and over 650K object annotations from 368 object categories. Unlike prior datasets containing only object category labels, EgoObjects also annotates each object with an instance-level identifier, and includes over 14K unique object instances. EgoObjects was designed to capture the same object under diverse background complexities, surrounding objects, distance, lighting and camera motion. In parallel to the data collection, we conducted data annotation by developing a multi-stage federated annotation process to accommodate the growing nature of the dataset. To bootstrap the research on EgoObjects, we present a suite of 4 benchmark tasks around the egocentric object understanding, including a novel instance level- and the classical category level object detection. Moreover, we also introduce 2 novel continual learning object detection tasks. The dataset and API are available at [https://github.com/facebookresearch/EgoObjects](https://github.com/facebookresearch/EgoObjects). ## 1 Introduction Object understanding tasks, such as classification and detection, are arguably fundamental research topics in computer vision. Enormous advances achieved so far have been accelerated by the availability of large-scale datasets, such as ImageNet [17], COCO [35], LVIS [23], Open Images [31] and Objectron [2]. Figure 1: **EgoObjects dataset.** **Left**: It contains videos of objects captured from the first-person viewpoint under 10 diverse conditions (only 5 are shown for clarity). Multiple objects in each video are annotated with instance ID and category label. In each row, we visualize the annotations of one instance track (“can opener”) in one video captured under one set of condition variable choices. For clarity, we use shorthand notations: \(D-\) Distance, \(B-\) Background, \(L-\) Lighting, \(M-\) Camera Motion. Also annotations on other objects are not shown. **Right**: A visualization of a subset of non-leaf nodes in our hierarchical object taxonomy, covering diverse object categories. Leaf nodes and other non-leaf nodes are omitted for clarity. Those datasets often contain images captured from a third-person or exocentric viewpoint and curated from given sources (e.g. Flickr). Despite their large volume, they often only capture individual object instances in a single image or video, and do not capture the same object under diverse settings, which are important for fine-grained object understanding tasks, such as instance-level object detection. In contrast, object understanding tasks in egocentric vision process visual data containing objects captured from a first-person or egocentric viewpoint. The approaches to those tasks have wide applications in augmented reality and robotics, such as robustly anchoring virtual content at a real world object under various conditions (e.g. background, lighting, distance), and are often required to perform well from the egocentric viewpoint and distinguish objects at both category- (e.g.
mug vs kettle) and instance level (e.g. my mug vs your mug) under various conditions. Therefore, there are clear gaps in adopting existing exocentric datasets for egocentric object understanding. On the other hand, several egocentric datasets containing object annotations have been built. A family of such datasets are focused on capturing human activities and hand-object interactions. Ego4D [22] contains a large number of egocentric videos of human activities. However, according to the PACO-Ego4D [47] which mines the objects from Ego4D, there are only 75 object categories with at least 20 samples, and each object instance often only appears in one video. Epic-Kitchens-100 [14] contains over 700 videos depicting human activities in the kitchen, but only annotates objects within the kitchen. HOI4D [37] is collected for category-level human-object interaction, and only contains 800 different object instances from 16 categories. There are several other datasets that are more object-centric, including TREK-150 [18], FPHA [20] and CO3D [49], but only contain objects from a limited set of categories (\(<\)50). Objects there are often captured in a single setup or few setups with limited variations in surrounding objects, background, distances and camera motions. Moreover, semantic granularity of the object annotations are often limited at category-level, and object instances from the same category are not distinguished, which impedes the development of instance-level object understanding approaches. Therefore, there are still significant gaps with existing egocentric datasets in the dataset scale, visual content variations around individual objects, object semantic diversity, and instance-level object annotation. To address these gaps, we introduce _EgoObjects_, a new large-scale egocentric video dataset for fine-grained object understanding (Figure 1). Unlike prior egocentric datasets which are limited to a small dataset scale, a specific domain or a small number of object categories, EgoObjects includes a large number of videos containing objects from hundreds of object categories commonly seen in the households and offices worldwide. For video capture, 4 wearable devices with various field-of-view are used, including Vuzix Blade smart glasses1, Aria glasses2, Ray-Ban Stories smart glasses3 and mobile phones with ultra-wide lens4, which provide representative media formats of egocentric visual data. Each main object is captured in multiple videos with different choices of nearby secondary objects, background complexity, lighting, viewing distance and camera motion. We annotate both the main and secondary objects in the sampled frames with bounding boxes, category level semantic labels and instance-level object identifiers (ID). In current Pilot version release, it contains over \(9,200\) videos of over 30 hours collected by 250 participants from 50+ countries and regions, and 654K object annotations with 368 object categories and 14K unique object instance IDs from 3.8K hours of annotator efforts. To our best knowledge, EgoObjects is the largest egocentric video dataset of objects in terms of object categories, videos with object annotations, and object instances captured in multiple conditions. Comparisons between EgoObjects and other datasets can be seen in Table 1. 
Footnote 1: [https://www.vuzix.com](https://www.vuzix.com) Footnote 2: [https://about.meta.com/realitylabs/projectaria](https://about.meta.com/realitylabs/projectaria) Footnote 3: [https://www.meta.com/glasses](https://www.meta.com/glasses) Footnote 4: Participants are asked to hold mobile phone close to their eyes to simulate egocentric viewpoints To bootstrap the research on EgoObjects, we introduce 4 benchmark tasks spanning over both non-continual learning and continual learning settings. For non-continual learning setting, we include a novel instance-level object detection task, largely under-explored previously due to the lack of a dataset with object ID annotations, as well as conventional category-level object detection task. For continual learning setting, we present novel object detection tasks at instance- and category level. Evaluations of different approaches to all tasks are also presented to establish the baseline benchmarks. In particular, for instance-level object detection task, a novel target-aware instance detection approach is proposed and validated to outperform a baseline target-agnostic object detection method. To summarize, we make the following contributions. * We created a large-scale egocentric dataset for object understanding, which features videos captured by various wearable devices at worldwide locations, objects from a diverse set of categories commonly seen in indoor environments, and videos of the same object instance captured under diverse conditions. * We proposed a multi-stage federated annotation process for the continuously growing dataset to accompany the parallel data collection at scale. Rich annotations at video level (e.g. location, background description) and object-level (e.g. bounding box, object instance ID, category level semantic label) are collected from 3.8K hours of human annotator efforts. * We introduced 4 benchmark tasks on EgoObjects, including the novel instance-level and the conventional category-level object detection tasks as well as their continual learning variants. We evaluated multiple approaches on all tasks, and also proposed a novel target-aware approach for instance-level object detection task. ## 2 Related Work **Egocentric object understanding datasets.** Given the growing needs of egocentric object understanding in augmented reality and robotics, several egocentric datasets focused on objects have been built. TEgO [32] contains egocentric images of only 19 distinct objects for training object recognizers. TREK-150 [18] consists of 150 annotated videos for tracking objects from 34 categories merely. Despite the availability of object annotations, other larger egocentric datasets are more focused on human activities and hand-object interactions. For example, Epic-KitchenS-100 [14] captures 700 videos of nearly 100 human activities involving 300 object categories in the kitchen, but is limited to the kitchen scenario. The ADL [45] dataset features people performing everyday activities in kitchens, which has object boxes, object track ID, action labels. However, it only has 42 object categories and the track ID is not used for analysis. The MECCANO [46] is a multimodal dataset of egocentric videos to study humans behavior understanding in industrial-like settings with object, depth, and gaze annotations, supporting a suite of 5 tasks. However, the diversity of participants and locations is limited. FPHA [20] captures 45 different daily hand-object action categories involving only 26 different objects. 
HOI4D [37] contains videos of human-object interaction with only 800 different object instances from 16 categories. Despite its large number of human activity videos, the recent Ego4D [22] only contains object annotations from around 75 object categories with at least 20 samples. Object instances often only appear in a single video, and only 870 instances have more than 5 occurrences. Meanwhile, synthetic egocentric datasets are built to scale up the data collection. xR-EgoPose [55] is a large-scale synthetic dataset containing realistic renderings of people in various poses and serves as a benchmark of 3D human pose estimation. It is focused on the ego-body and simulates a fisheye lens, under which the surrounding environment, including objects, is largely distorted. EHOI [33] is also a synthetic dataset, consisting of 20K images and 124K object instances from 19 categories with interactions with human hands. Its fidelity is low compared with real data, and it has limited complexities in lighting, background and viewpoints. To summarize, existing egocentric datasets have limitations in the number of object categories, the variations in the setting of capturing the same object, the granularity of object semantic labeling where instance-level object ID is not available, and photorealism in synthetic datasets. **Instance-level object detection and datasets.** Being able to localize and recognize different object instances is critical to applications in augmented reality and robotics, such as detecting a specific toy or a custom industrial part. However, such a task has been far less explored due to the lack of object ID annotations at scale in existing datasets. In the case of a growing number of object instances to detect, which is arguably a realistic setup, instance-level detection approaches are often required to adapt with little-to-no fine-tuning time. Mercier et al [41] proposed a template-based detector that uses example viewpoints of the target object to detect it in query images without extra training, and evaluated it on a small exocentric dataset of 20 object instances only. Hu et al [28] proposed a template-based detection approach, which incorporated a multi-level correlation model and a similarity-refine module, for category-agnostic instance detection. On the dataset side, T-less [26] is an object dataset with 6D pose annotation for only 30 industry-relevant object instances. In [51], a small dataset of 10K RGBD images of 24 object instances was created for object detection and pose estimation. The BOP dataset [27] combines 8 public datasets, and consists of 89 object instances with 3D groundtruth and over 330K RGBD images from different viewpoints. All the aforementioned datasets are not egocentric and only contain a small number of object instances. In contrast, EgoObjects contains over 14K unique object instances captured under diverse settings. We also propose a target-agnostic baseline approach and a novel target-aware approach, and evaluate them on EgoObjects. \begin{table} \begin{tabular}{l|c c c|c c c c} & \multicolumn{3}{c|}{Exocentric} & \multicolumn{4}{c}{Egocentric} \\ & Objectron & CO3D & BOP & Epic-K\({}^{*}\) & HOI4D & Ego4D\({}^{**}\) & EgoObjects \\ \hline \#category & 9 & 50 & 89 & 300\({}^{*}\) & 16 & 75 & 368+ \\ \#participant & int’l. & - & - & 45 & 9 & 859 int’l. & 250 int’l.
\\ \#image & 4M & 1.5M & 330K & 20M & 2.4M & 23.9K & 114K+ \\ \#instance & 17K & 19K & 89 & - & 800 & 17K & 14K+ \\ \#bbox & - & - & - & 38M & - & 50K & 654K+ \\ inst ID & ✗ & ✗ & ✓ & ✗ & ✗ & ✗ & ✓ \\ device & M & M & PC,K & G & K,I & G,V,Z,W,PP & R,A,V,M \\ \end{tabular} \end{table} Table 1: _Comparing EgoObjects with other datasets. For EgoObjects, we report statistics of the current Pilot version, which is estimated to account for 10% of the full dataset (thus the “+” notation). \({}^{*}\)Epic-Kitchen-100 [14] only contain object categories in the kitchen. \({}^{**}\)Ego4D statistics are reported by the PACO-Ego4D [47], which annotates objects in the Ego4D [22]. Abbreviation for devices: M=Mobile, K=Kinect, A=Aria, G=GoPro, PC=Primesense Carmine, I=Intel RealSense, V=Vazix Blade, R=Ray-Ban Stories, PP=Pupil, Z=Zetronix zShades, W=Weview._ **Continual learning.** Conventional object understanding approaches build static models incapable of adapting their predictive behaviors over time. In contrast, continual learning models can learn from an infinite stream of data and grow their predicative capabilities while reducing catastrophic forgetting of previous knowledge [40, 8, 4, 12, 43, 3, 48, 11, 56]. Broadly speaking, they can be categorized into 3 classes [15] with increasing complexities and practicalities. In _Task Incremental Learning_, individual tasks with respective training data arrive sequentially, and the model is often built with separate heads for individual tasks. At inference time, a task ID is required for each sample. In the _Class Incremental Learning_, no task ID is provided at any time, and the model often has only one head. In the most general _Data Incremental Learning_[16], more assumptions on the stationary data distribution and the paradigm of sequentially growing tasks and classes are removed. In this work, we use EgoObjects to set up 2 new continual learning tasks, which covers both Class- and Data Incremental Learning paradigms. Moreover, existing approaches are often assessed on small object classification datasets, such as Core50 [39] and OpenLORIS-Object [53]. To our best knowledge, EgoObjects is the first dataset to support the benchmarking of _continual learning of object detection_ at both instance and category level. **Category-level object detection.** Early CNN-based approaches include both two-stage methods, which tackles object proposal generation and object recognition separately [50, 9], and single-stage methods which remove the explicit object proposal generation for simplicity and efficiency [36, 6, 54]. Recent transformer-based methods introduce attention building blocks into both the backbone and the detection head to significantly improve the detection performance [10, 59, 13]. However, those approaches are often only evaluated on exocentric datasets, such as COCO [35] and LVIS [23], while their performance on egocentric datasets are largely unknown. EgoObjects contains nearly 400 object categories, and we assess both CNN and transformer models on it. ## 3 EgoObjects Dataset ### An Overview In current Pilot version, EgoObjects contains over 9K videos collected by 250 participants. A total of 114K frames are sampled and annotated. **Object instances captured under diverse conditions.** A total of 14.4K unique object instances from 368 categories are annotated. Among them, there are 1.3K main object instances from 206 categories and 13.1K secondary object instances (_i.e_., objects accompanying the main object) from 353 categories. 
On average, each image is annotated with 5.6 instances from 4.8 categories, and each object instance appears in 44.8 images, which ensures diverse viewing directions for the object. Figure 2: **Dataset statistics.** **(a) Left**: the number of instances per category in the log scale. **Right**: the word cloud highlights the head categories, including box, bottle. **(b) Left**: the number of annotations per category in the log scale. **Right**: the word cloud is similar to (a), but a few new head categories emerge, including mug, boot. **(c)** Spatial distribution of the main objects’ center coordinates, confirming the diverse locations of main objects. **(d)** Relative bounding box sizes compared between EgoObjects, LVIS, and COCO. EgoObjects has more objects of medium and large sizes in the egocentric view. **(e)** Diverse distribution of participants’ geographic locations in 50+ countries from 5 continents. **(f)** Distribution of video metadata including lighting (left) and background (right). Most objects are collected indoors, where lighting is more likely either artificial or low light. The background is uniformly distributed across rooms within the household.
In either background, we ask participants to capture the main object in natural settings (vs intentional setting). We also instruct the participants to move the camera around and capture different views of the main object, while avoiding the bias that the main object always stays in the center of the view. We define three 3 of camera motion: 1) "horizontal": move the camera from left to right or right to left. 2) "vertical": move the camera upwards or downwards and 3) "combined": rotate the camera both horizontally and vertically. The object distance also has three levels, i.e. "near", "medium", and "far". We define the object scale and the frame scale as the longer object dimension and the shorter frame edge, respectively. Near distance refer to those that the object-scale/frame-scale ratio is larger than \(30\%\), whereas the medium distance has the ratio fall in between \(20\%\) and \(30\%\). All remaining images are considered as having far object distances to the camera. For lighting conditions, there are two levels bright and dim. Lighting is considered as bright when a light meter reads above 250 lux and dim otherwise. Given these 4 variables, participants are instructed to collect 10 videos of each main object according to 10 predefined configurations (see details in supplement), and each video lasts at least 10 seconds. Finally, videos are further tagged with rich metadata including the associated participant ID, main object category, location, background description and capture time. ### Federated Annotation of the Growing Dataset EgoObjects data collection was planned to operate at large scale and lasted for 14 months. To reduce the overall dataset creation time, we conducted data annotation in parallel to the data collection, which continuously grew the dataset and introduced more complexities to the data annotation. Inspired by LVIS [23], we adopt the idea of federated annotation to achieve a balance between annotation cost and annotation exhaustiveness, and further propose a 3-stage annotation pipeline tailored to the continuously growing nature of our dataset. Figure 3 illustrates our annotation pipeline, which is used to annotate video frames evenly sampled at 1 FPS. **Stage 1: category discovery.** The annotators are instructed to identify object categories from a predefined vocabulary \(\mathcal{V}\) of 600+ categories commonly seen in indoor egocentric view. Annotators are asked to find at least 5 categories per each image if possible, including the main object category and other salient secondary objects. **Stage 2: exhaustive instance labeling.** For each image, 3 annotators exhaustively annotate _all_ object instances of the discovered categories with bounding box and category label \(c\). To enable instance-level object understanding, we further enhance the bounding box annotation with a unique Figure 3: _EgoObjects multi-stage annotation. See text for details at each stage._ object instance ID5 that is consistent across the dataset. To reconcile the work from 3 annotators, we compare the annotations from one annotator to all other annotators to get an averaged IoU based consensus score for each annotator. Then, we select the annotator with the highest consensus score as the source of truth for final annotations. Footnote 5: We exclude objects from categories that have indistinguishable appearances between instances, such as those related to animals and food. 
**Stage 3: negative category verification.** By the design of federated annotation [23], not all object categories are handled in each image. However, for evaluation purpose for each image we would need to collect a set of negative categories, defined as categories that do not appear in the image. To operationalize this, we randomly sample several categories from the vocabulary \(\mathcal{V}\) as the candidates of the negative categories, and ask annotators to verify. We remove a candidate category from the negative set if any annotator flags any corresponding object instance in the image. Finally, we get a set of negative categories per image. ## 4 Benchmark Tasks on EgoObjects We introduce 4 benchmark tasks on EgoObjects, starting with a novel instance-level object detection task, which has been under-explored due to the lack of object ID annotations on individual objects captured in various conditions in existing datasets. We further present 2 novel continual learning object detection tasks, which are newly enabled by EgoObjects. Finally, we assess the performance of classical category-level object detection models on EgoObjects. ### Instance-Level Detection In the applications of egocentric object understanding in AR and robotics, we are often confronted with the situation where the model is presented with few examples of object instances unseen during training and needs to detect those novel objects on-the-fly. Inspired by this, we introduce the instance-level detection below, and present two models, including a novel target aware- and a baseline target agnostic instance detector. #### 4.1.1 Task Specification At training time, the model has access to instance-level annotations of objects captured under diverse conditions. At inference time, the user can use the model to register a novel target instance \(T\), regardless of whether its category is seen during training, by providing one or more 2D bounding boxes on reference images containing the target instance. After that, on a query image \(I\), the detector needs to predict the bounding box of \(T\) with \(T\)'s ID, or no box if \(T\) is absent. To simulate the realistic setup where model back-propagation is difficult for deployed models, we _disallow_ model fine-tuning on the target object instance annotations. The model is required to allow the user to continuously register more model targets, and all registered targets should be considered during detection. Figure 4 contains an example where the user registers 3 targets sequentially and the model gradually detects more target objects in the same image. **Dataset split.** We divide the dataset into 4 splits: train/target/val/test. The train split contains 9.6k instances with a total of 450k annotations from 79k images. The target, val, test splits share the remaining 4.1k instances which do not appear in the train images, and their categories can also be unseen during training. In the target split, there is a single reference image and one annotation for each instance. The val and test splits have 5.7K and 29.5K images with 3.8K and 4.1K instances, respectively. **Evaluation protocols.** Under various IoU thresholds, we report Average Precision (AP) metrics, which are averaged across instances. Furthermore, we break down the metrics into two buckets for object instances from categories seen and unseen during training to assess the model capability of generalizing to instances from novel categories. 
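To make the evaluation protocol concrete, here is a small sketch (ours; the per-instance AP values, the category map, and all names are illustrative, not the official evaluation code) of how per-instance AP values are averaged overall and broken down into the seen- and unseen-category buckets.

```python
# Illustrative sketch (not the official EgoObjects evaluation code): averaging
# per-instance AP values and breaking them down by whether an instance's category
# was seen during training, as described in the evaluation protocol.
from statistics import mean
from typing import Dict, Set

def summarize_instance_ap(ap_per_instance: Dict[str, float],
                          instance_to_category: Dict[str, str],
                          seen_categories: Set[str]) -> Dict[str, float]:
    seen = [ap for inst, ap in ap_per_instance.items()
            if instance_to_category[inst] in seen_categories]
    unseen = [ap for inst, ap in ap_per_instance.items()
              if instance_to_category[inst] not in seen_categories]
    return {
        "AP": mean(ap_per_instance.values()),
        "AP_seen": mean(seen) if seen else float("nan"),
        "AP_unseen": mean(unseen) if unseen else float("nan"),
    }

# Toy example: 4 registered target instances, 2 from categories seen in training.
print(summarize_instance_ap(
    ap_per_instance={"mug#7": 0.41, "kettle#2": 0.35, "toy#9": 0.18, "lamp#4": 0.22},
    instance_to_category={"mug#7": "mug", "kettle#2": "kettle", "toy#9": "toy", "lamp#4": "lamp"},
    seen_categories={"mug", "kettle"},
))
```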
Figure 4: **Instance detection at inference time.** Continuously registering more targets leads to more detected objects, while previously registered targets are not forgotten. The targets can be from either seen (target 1 and 2) or unseen (target 3) categories. #### 4.1.2 Target-aware Instance Detector We propose an instance-level object detector aware of target objects during object localization, and refer to it as _Target-Aware Instance Detector_ (TA-IDet). It supports 2 modes, namely target registration and target detection (Figure 5). **Target registration**. To register a new target object, we feed the reference image into a ResNet based FPN backbone [34], generate target features at different pyramid levels by using the ROIAlign operator [24] according to the target bounding box annotation, and average them over pyramid levels. Target features of two different resolutions are obtained from ROIAlign. The \(T^{loc}\) feature of resolution \(1\times 1\) is used to efficiently localize the bounding box. The \(T^{cls}\) feature has higher resolution \(S\times S\) (\(S=5\) by default), and will be used to compute a confidence score of classifying the localized target object. If several reference images per target are provided, the target features are averaged. **Target detection**. At detection time, TA-IDet uses the same FPN backbone to extract the query image feature map \(F\) of size \(C\times H\times W\) where \(C\) denotes feature channels, and \(\{H,W\}\) the feature map size. A feature modulation block will transform \(F\) according to the target localization feature \(T^{loc}\) of size \(C\times 1\times 1\), which attenuates the features in regions where the target object is less likely to appear. The detection head takes as input the modulated query feature map, and processes it using a _Score_ module, which consists of 4 convolutional layers with ReLU activations, to gradually reduce the channels from 256 to 1. The resulting score map is normalized by a _Softmax_ operation, and the target object center \((C_{y},C_{x})\) is predicted as the weighted sum of spatial coordinates according to the normalized score map. \[\begin{split} F^{mod}=(T^{loc}\circledast F)\odot F\\ P=\text{Softmax}(\text{Score}(F^{mod}).\text{reshape}(-1))\\ Y^{g}=\text{ls}(0,H-1,\text{steps}=H).\text{view}(H,1).\text{repeat}(1,W)\\ X^{g}=\text{ls}(0,W-1,\text{steps}=W).\text{view}(1,W).\text{repeat}(H,1)\\ C_{y}=\text{sum}(P\odot Y^{g}.\text{reshape}(-1))\\ C_{x}=\text{sum}(P\odot X^{g}.\text{reshape}(-1))\end{split} \tag{1}\] where \(\circledast\) denotes convolution, \(\odot\) element-wise multiplication and \(\text{ls}\) denotes torch.linspace. To refine the object center and predict the target object size, we sample a target feature at \((C_{y},C_{x})\) in \(F\) via bilinear interpolation, and employ a 3-layer MLP with hidden dimension \(256\) to predict the spatial offset \((\delta C_{y},\delta C_{x})\) and target object size \((S_{y},S_{x})\) with a ReLU activation. After predicting the target object box, we use ROIAlign to sample a spatial feature of the resolution \(S\times S\) in \(F\), and compute its dot product with \(T^{cls}\) using a sigmoid activation function as the box confidence score. **Model training**. During training, we sample three images for each instance: one reference image containing the instance, one positive image containing the instance captured in a different setting, and one negative image that does not contain the instance. In the positive image, we consider both bounding box localization loss and classification loss.
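Before turning to the specific losses, the detection-head computation in Eq. (1) can be summarized in a short PyTorch sketch. This is our own reading of the equation, not the authors' code; the intermediate channel widths of the Score module and other implementation details are assumptions, since only the 256-to-1 reduction over 4 layers is specified.

```python
# Minimal PyTorch sketch of TA-IDet's target-aware detection head (Eq. (1)): the
# query feature map is modulated by the 1x1 target feature, a small Score CNN
# produces a spatial map, and the target center is read off as a softmax-weighted
# average of spatial coordinates (soft-argmax).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TargetAwareHead(nn.Module):
    def __init__(self, channels: int = 256):
        super().__init__()
        # "Score" module: 4 conv layers with ReLU, reducing channels 256 -> 1.
        # Intermediate widths (128, 64, 32) are assumed for illustration.
        self.score = nn.Sequential(
            nn.Conv2d(channels, 128, 3, padding=1), nn.ReLU(),
            nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, feat: torch.Tensor, t_loc: torch.Tensor):
        # feat: (C, H, W) query feature map; t_loc: (C, 1, 1) target feature.
        C, H, W = feat.shape
        # (T^loc (*) F): correlate the target feature with every location -> (1, H, W).
        sim = F.conv2d(feat.unsqueeze(0), t_loc.unsqueeze(0)).squeeze(0)
        f_mod = sim * feat                                   # element-wise modulation
        p = torch.softmax(self.score(f_mod.unsqueeze(0)).reshape(-1), dim=0)
        y_grid = torch.linspace(0, H - 1, steps=H).view(H, 1).repeat(1, W)
        x_grid = torch.linspace(0, W - 1, steps=W).view(1, W).repeat(H, 1)
        c_y = torch.sum(p * y_grid.reshape(-1))              # soft-argmax center
        c_x = torch.sum(p * x_grid.reshape(-1))
        return c_y, c_x

head = TargetAwareHead()
center = head(torch.randn(256, 32, 32), torch.randn(256, 1, 1))
print([round(float(v), 2) for v in center])
```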
For localization loss, we use a linear combination of \(L_{1}\) loss and generalized IoU loss [52]. For classification loss, we use the binary cross entropy loss between the predicted box confidence score and groundtruth box label, which is positive when IoU is above \(IoU^{pos}\), negative when IoU is below \(IoU^{neg}\) and ignored otherwise. By default, (\(IoU^{pos}\), \(IoU^{neg}\)) = \((0.7,0.3)\). In negative image, only the classification loss is used and groundtruth label is negative. See more studies in the supplement. #### 4.1.3 Baseline Target-agnostic Instance Detector We also consider a simple baseline approach _RPN+SFNet_ which consists of a Region Proposal Network (RPN) [50] for object localization and a SFNet model [57], commonly used in metric learning, for object classification. We briefly review its target registration, detection and model training below, and include more details in the supplement. **Target registration**. We crop the target object from reference images and feed it through the SFNet model to obtain the target object feature, which is then added to an index of target object features. **Target detection**. For a given query image, the RPN generates a large number of object proposals _agnostic_ to the target objects in the index. Each object proposal is cropped from the query image and fed into the SFNet model to extract the feature. These object features are then matched against all the added target features in the index. We pick the target object in the index with the highest matching score. The final confidence score of an object proposal against the top target object is the product of RPN object proposal confidence score and its matching score with the target object. **Model training**. The RPN is trained on the train split using all bounding box annotations. The SFNet model is trained with SphereFace2 [57] loss function using all instance-level annotations, which encourages small distances between features of multiple views of the same instance, and large distances for features of different instances. #### 4.1.4 Results The benchmark results of both models are presented in Table 2. For both approaches _TA-IDet_ and _RPN+SFNet_, we build models with ResNet-50 and ResNet-101 backbones. There are several intriguing observations. First, _TA-IDet_ substantially outperforms the _RPN+SFNet_ on all metrics. Figure 5: _**Architecture of target-aware instance detector TA-IDet. Top:** in target registration, localization and classification feature for each target are generated. Bottom: during target detection, the model predicts 1 bounding box per target and computes a confidence score to decide whether the prediction should be rejected via thresholding._ For example, the gains in AP50 are large (\(+6\%\) on val and \(+10\%\) on test split). We attribute this to the design that _TA-IDet_ localizes the target object by using query image feature maps modulated by the target feature, and does not rely on the target-agnostic RPN to generate object proposals. Second, the best _TA-IDet_ model with ResNet-101 backbone only achieves less than \(23\%\) AP on both val and test split which have around 4K novel instances each, indicating the unique challenges in the large-scale instance-level object detection, such as large changes in viewing direction, lighting, background and distance as well as less distinguishable appearance between instances from the same category. See more examples in the supplement. 
Third, there are significant gaps between AP\({}_{sc}\) and AP\({}_{un}\), reflecting the challenges in generalizing the models to detect instances from categories unseen during training. ### Continual Learning Existing continual learning (CL) approaches often tackle object classification problem while continual learning object detection task is not well explored due to the lack of a large-scale dataset that is annotated with instance- and category-level labels, and contains individual objects in multiple images captured under diverse conditions. We introduce 2 novel CL tasks, namely _CL instance detection_ and _CL category detection_, on a subset of EgoObjects which contains 100K images with 250K box annotations for 1.1K main object instances from 277 categories. There are 3.4K and 3.1K instances in the train- and test set, respectively. **CL Instance Detection.** In this task, we simulate the setting when a system continuously encounters new batches of instance detection training data, where each batch indexed at \(i\) is called an experience \(E_{i}\). In \(E_{i}\), each image only carries the annotation of its main object with instance ID being the class. The system can only access data in the latest experience, which means no access to previous experiences apart from the use of a limited replay memory. Previous experiences share no common main object instances with later experiences, which makes it a _Class-Incremental Learning_ setup. We evenly split \(1110\) main instances into 5 experiences, hence 222 instances per experience. For evaluation, the system is benchmarked after each experience on a fixed testing set with all the main instances, making it a \(1110\)-way detection problem. The evaluation metric for each experience is mean average precision (mAP). The final performance is the averaged metric across all the experiences. **CL Category Detection.** In this task, the goal is to predict the object category labels instead of instance IDs. We create 5 experiences by applying the class-incremental ordering on the 277 categories of the main object instances. Additionally, we also include the annotations of secondary objects, which makes it a _Data-Incremental Learning_ setup, i.e. previous experiences share no common images or annotations with later ones. This differentiates our task from other CL object detection tasks focusing on annotation incrementality, where the same images are repeatedly encountered in successive experiences but with a different set of annotations (usually class-incremental). We believe our task provides a more realistic setting. The evaluation metric for each experience is also mAP. **Results.** We benchmark the methods from the top submissions of the 3rd CLVision workshop challenge [44] and report the results on above CL tasks in Table 3. In general, these submissions build upon classic 1-stage/2-stage detectors and adopt techniques to mitigate catastrophic forgetting of early experiences when trained on later experiences, such as sampling data from previous experiences cached in a replay buffer, and distilling models from early experiences to the model for the current experience. However, these winning methods still have limitations. They treat the instance detection as a close-set problem same as the category detection, which cannot scale up to flexibly accommodate more instances. Additionally, there is no change to the detector architecture to better tailor to the CL tasks. See more discussions in the supplement. 
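To make the continual learning protocol above concrete, the sketch below illustrates how the class-incremental experiences and the final EAP metric could be formed. The random assignment of instances to experiences is an assumption for illustration; only the sizes (1110 main instances, 5 experiences of 222) come from the text.

```python
import random

def make_experiences(main_instance_ids, num_experiences=5, seed=0):
    """Split main object instances into class-incremental experiences.
    With 1110 main instances and 5 experiences, each experience holds 222 classes."""
    ids = list(main_instance_ids)
    random.Random(seed).shuffle(ids)  # assumed random assignment; any fixed split works
    per_exp = len(ids) // num_experiences
    return [ids[i * per_exp:(i + 1) * per_exp] for i in range(num_experiences)]

def experience_average_precision(map_after_each_experience):
    """EAP: the mAP measured on the fixed test set after each experience, averaged."""
    return sum(map_after_each_experience) / len(map_after_each_experience)

# For example, the 1st-place CL instance detection row of Table 3 averages its five
# per-experience mAP values into the reported EAP (up to rounding):
# experience_average_precision([23.3, 39.5, 54.6, 70.2, 85.6])
```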
### Category-Level Detection

EgoObjects also supports the classical category-level object detection task given the nearly 400 object categories in the current Pilot version. **Evaluation protocols.** We use the same dataset splits as the instance-level detection task. In total, there are 447K/31K/164K object annotations from 368 categories in the train/val/test split, respectively. Due to its federated annotation process, we only penalize false positive predictions on an image if the predicted class is in the list of negative categories for that image.

\begin{table} \begin{tabular}{c|c|c c c c|c c c c} \hline \multirow{2}{*}{backbone} & \multirow{2}{*}{method} & \multicolumn{4}{c|}{val} & \multicolumn{4}{c}{test} \\ & & AP & AP50 & AP50\({}_{sc}\) & AP50\({}_{un}\) & AP & AP50 & AP50\({}_{sc}\) & AP50\({}_{un}\) \\ \hline \multirow{2}{*}{R50} & RPN+SFNet & 17.8 & 29.0 & 29.1 & 19.8 & 15.7 & 25.4 & 25.5 & 16.8 \\ & TA-IDet & 18.7 & 35.0 & 35.0 & 21.7 & 18.5 & 35.2 & 35.2 & 24.8 \\ \hline \multirow{2}{*}{R101} & RPN+SFNet & 19.3 & 32.0 & 32.0 & 22.3 & 17.0 & 27.7 & 27.8 & 20.0 \\ & TA-IDet & 22.6 & 37.9 & 38.0 & 28.5 & 21.9 & 37.9 & 38.0 & 26.4 \\ \hline \end{tabular} \end{table} Table 2: _Instance-level detection benchmarking results on EgoObjects. The proposed_ TA-IDet _model significantly outperforms the baseline_ RPN +SFNet _approach. AP50\({}_{sc}\) and AP50\({}_{un}\) are computed for instances with categories seen and unseen during training. On the more challenging test split with more target object instances, TA-IDet can maintain the performance whereas the_ RPN +SFNet _baseline has a significant performance drop. R50 and R101 denote ResNet-50/101 backbones._

\begin{table} \begin{tabular}{c|c c c c c c|c c c c c c} \hline \multicolumn{1}{c|}{} & \multicolumn{6}{c|}{CL Instance Detection} & \multicolumn{6}{c}{CL Category Detection} \\ rank & \(E_{0}\) & \(E_{1}\) & \(E_{2}\) & \(E_{3}\) & \(E_{4}\) & \(EAP\) & \(E_{0}\) & \(E_{1}\) & \(E_{2}\) & \(E_{3}\) & \(E_{4}\) & \(EAP\) \\ \hline 1st & 23.3 & 39.5 & 54.6 & 70.2 & 85.6 & 54.7 & 30.6 & 47.2 & 58.1 & 67.5 & 76.2 & 55.9 \\ 2nd & 15.1 & 30.4 & 45.5 & 60.8 & 75.4 & 45.4 & 28.4 & 44.7 & 57.6 & 67.9 & 78.2 & 55.4 \\ 3rd & 14.7 & 29.1 & 42.3 & 55.4 & 66.9 & 41.7 & 19.5 & 34.5 & 43.9 & 52.7 & 61.5 & 42.4 \\ \hline \end{tabular} \end{table} Table 3: _CL detection benchmarks on EgoObjects. We report detection accuracy (mAP) after each experience and the final Experience Average Precision (EAP), which is the averaged mAP over experiences._

**Benchmarking models**. We consider 3 types of performant object detectors. The first one is FasterRCNN [50], which is a two-stage detector. Next, we include the representative single-stage detector FCOS [54], which skips explicit proposal generation to accelerate model inference. Finally, we also consider the recent transformer-based detectors (_i.e_., DETR [10]). Specifically, we adopt the Deformable-DETR [59] due to its stable and fast training. For both FasterRCNN and FCOS, we use the ResNet50/101 backbone pretrained on ImageNet-1K, whereas for Deformable-DETR, we use the Swin-Transformers [38] backbone pretrained on ImageNet-22K. **Results**. The results are presented in Table 4. Notably, single-stage FCOS models outperform two-stage FasterRCNN detectors, particularly at high IoU thresholds (_e.g_., AP75), while DeformDETR-Swin models significantly outperform both types of CNN detectors at the cost of large model size and significantly more compute.
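The federated evaluation rule above can be stated compactly in code. The sketch below is illustrative only (the names are not from the released evaluation API): predictions for categories that were neither verified present nor verified absent are simply dropped, so they can never be counted as false positives.

```python
def keep_scoreable_predictions(predictions, positive_categories, negative_categories):
    """Filter one image's detections for federated evaluation.

    predictions: list of (category, box, score) tuples predicted on the image
    positive_categories: set of categories verified to appear in the image
    negative_categories: set of categories verified to be absent from the image
    Predictions on unverified categories are dropped, so only predictions on
    negative categories can become false positives in the AP computation.
    """
    scoreable = positive_categories | negative_categories
    return [(c, box, score) for c, box, score in predictions if c in scoreable]
```

Only the detections that survive this filter enter the AP computation behind Table 4.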
However, even for the largest DeformDETR-SwinL model, its AP metrics on EgoObjects are still \(10\%\) lower than its AP of \(43.7\%\) on LVIS reported in Table 3 of prior work [29]. We hypothesize that, due to the egocentric view and its data capture setting, EgoObjects contains larger variations in background, viewpoints, lighting and object distances, which together render it more difficult even for category-level detection. We also implement metrics broken down into buckets of experimental conditions (e.g., object scale, lighting, background complexity) in our evaluation API. We observe that model performance is lower under the more challenging conditions (small scale, dim lighting, busy background).

## 5 Conclusions

We present EgoObjects, a large-scale egocentric dataset containing tens of thousands of videos and more than half a million object annotations. By design, it captures the same object under diverse conditions while annotating it with both a category label and a consistent object ID across multiple images. To stimulate egocentric object understanding research, we introduce 4 tasks and also provide the benchmarking results of various models, including a novel target-aware instance-level detector which largely outperforms an off-the-shelf baseline based on RPN and SFNet.
Object understanding in egocentric visual data is a foundational research problem for egocentric vision. However, existing object datasets have many limitations: they are either non-egocentric or restricted in object categories, visual content, and annotation fidelity. In this work we introduce EgoObjects, a large-scale egocentric dataset for fine-grained object understanding. Its Pilot version contains over 9,000 videos collected by 250 participants across more than 50 countries using 4 wearable devices, and over 650,000 object annotations from 368 object categories. Whereas prior datasets annotate only the object category label, EgoObjects annotates each object with an instance-level identifier and contains over 14,000 object instances. EgoObjects captures each object under diverse conditions, including varying background complexity.
2306.06066
Multi-level Cross-modal Feature Alignment via Contrastive Learning towards Zero-shot Classification of Remote Sensing Image Scenes
Zero-shot classification of image scenes, which can recognize image scenes that are not seen in the training stage, holds great promise for lowering the dependence on large numbers of labeled samples. To address zero-shot image scene classification, cross-modal feature alignment methods have been proposed in recent years. These methods mainly focus on matching the visual features of each image scene with their corresponding semantic descriptors in the latent space. Less attention has been paid to the contrastive relationships between different image scenes and between different semantic descriptors. In light of the challenge of large intra-class difference and inter-class similarity among image scenes and the potential noisy samples, these methods are susceptible to the influence of instances which are far from those of the same classes and close to those of other classes. In this work, we propose a multi-level cross-modal feature alignment method via contrastive learning for zero-shot classification of remote sensing image scenes. While promoting the single-instance level positive alignment between each image scene and its corresponding semantic descriptor, the proposed method takes the cross-instance contrastive relationships into consideration, and learns to keep the visual and semantic features of different classes in the latent space apart from each other. Extensive experiments have been done to evaluate the performance of the proposed method. The results show that our proposed method outperforms state-of-the-art methods for zero-shot remote sensing image scene classification. All the code and data are available at github https://github.com/masuqiang/MCFA-Pytorch
Chun Liu, Suqiang Ma, Zheng Li, Wei Yang, Zhigang Han
2023-05-31T10:00:45
http://arxiv.org/abs/2306.06066v1
Multi-level Cross-modal Feature Alignment via Contrastive Learning towards Zero-shot Classification of Remote Sensing Image Scenes ###### Abstract Zero-shot classification of image scenes which can recognize the image scenes that are not seen in the training stage holds great promise of lowering the dependence on large numbers of labeled samples. To address the zero-shot image scene classification, the cross-modal feature alignment methods have been proposed in recent years. These methods mainly focus on matching the visual features of each image scene with their corresponding semantic descriptors in the latent space. Less attention has been paid to the contrastive relationships between different image scenes and different semantic descriptors. In light of the challenge of large intra-class difference and inter-class similarity among image scenes and the potential noisy samples, these methods are susceptible to the influence of the instances which are far from these of the same classes and close to these of other classes. In this work, we propose a multi-level cross-modal feature alignment method via contrastive learning for zero-shot classification of remote sensing image scenes. While promoting the single-instance level positive alignment between each image scene with their corresponding semantic descriptors, the proposed method takes the cross-instance contrastive relationships into consideration, and learns to keep the visual and semantic features of different classes in the latent space apart from each other. Extensive experiments have been done to evaluate the performance of the proposed method. The results show that our proposed method outperforms state of the art methods for zero-shot remote sensing image scene classification. All the code and data are available at github [https://github.com/masuqiang/MCFA-Pytorch](https://github.com/masuqiang/MCFA-Pytorch). Remote sensing image scene classification, Zero-shot learning, Contrastive learning, Cross-modal feature alignment. ## I Introduction With the rapid development of remote sensing observation technology, remote sensing images have been greatly improved in spatial resolution. The images with high spatial resolution express more detailed features of the ground objects and their surrounding environments. Different ground objects with different spatial distribution relationships will form different high-level semantic scenes, which help people cross the gap between the underlying image features (e.g., color and texture) and the high-level semantics [1]. Therefore, as an important manner to mine the higher-level scene semantics, image scene classification is of great significance for remote sensing image understanding, and plays an important role in the fields of remote sensing image retrieval, disaster monitoring, and urban planning [2]. Because it is difficult to prepare sufficient samples for each class of image scenes for training, zero-shot learning (ZSL) methods which can recognize the image scenes that are not seen in the training stage, have extensive prospects for image scene classification [3]. ZSL aims to build a mapping between the visual space and the semantic space based on a large number of samples from seen classes, and transfers the model from seen classes to unseen classes in a data-free manner. In light of the difficulty to know whether the samples to be classified is seen or unseen in the real world, it is more applicable for the learned model to have the ability of classifying both seen and unseen classes. 
This more challenging problem is often called the generalized zero-shot learning (GZSL). To address the zero-shot image scene classification, cross-modal feature alignment methods [4, 5] have been proposed in recent years. These methods learn to match the image scenes with their semantic descriptors in the latent space. From the perspective of each image scene, they mainly focus on reducing the distance from the image scene to its corresponding semantic descriptor in the latent space. While, less attention has been paid to the contrastive relationships between different image scenes and different semantic descriptors, e.g., the distance from one image scene to other image scenes and from one semantic descriptor to other semantic descriptors. However, due to the specific characteristics of remote sensing image scenes, there often exist large intra-class difference and inter-class similarity among image scenes [5]. In addition, there may be also noisy image scenes. All these may affect the learning of the cross-modal alignment between the image scenes with their semantic descriptors in the latent space. In this work, we propose a multi-level cross-modal feature alignment method, named MCFA, via contrastive learning [6] for zero-shot classification of remote sensing image scenes. While promoting the alignment between each image scene with their corresponding semantic descriptor, we also take the cross-instance contrastive relationships into consideration to further improve the cross-modal feature alignment performance. Following the aligned VAEs method [7], the proposed method integrates the single-instance level positive alignment losses and the cross-instance level contrastive alignment losses to constrain the model learning process. While learning to align the visual features of image scenes with their corresponding semantic features, the model can also learn to keep the visual and semantic features of different classes apart from each other in the latent space. The remainder of this paper is structured as follows: Section 2 details the proposed method; Section 3 presents our experiments and the results; and Section 4 are the conclusions of our work. ## II Methodology In this section, we detail the proposed method by introducing the model framework and the losses used. ### _The Multi-level Cross-modal Feature Alignment Framework_ **Overview:** As shown in Fig. 1, there are two VAEs corresponding to two modalities in our model. Each VAE consists of an encoder and a decoder. The inputs are the visual features of image scenes and the semantic features of scene classes, which are extracted respectively by the models such as ResNet [8] and Bert [9] in advance. With the inputs, the encoders project the visual features and the semantic features into the latent space. Then, the decoders reconstruct the visual features and the semantic features from the projected latent features. Using the training set, it is expected that the VAEs can learn to enforce the cross-modal alignment between the two kinds of latent features embedded by the encoders. Once trained, the model can be used to generate the latent semantic features for unseen classes from their text descriptors. Under the ZSL setting, a classifier can be trained by using the generated latent semantic features of unseen classes. While for the GZSL setting, the classifier will be trained with the latent semantic features from unseen classes but also the latent visual features from seen classes. 
Then, for one image scene from the testing set, its class can be inferred by generating its corresponding latent feature with the encoder and subsequently inputting the latent visual feature into the classifier. **Two-level cross-modal feature alignment constraints:** For each class of image scenes, there are many visual instances but only one semantic descriptor projected into the latent space, as shown in the right part of Fig. 1. The visual instances of one class are supposed to center around the semantic descriptor of the class in the latent space. But there may be some noisy instances which are farther from their own semantic descriptors but closer to the semantic descriptors of other classes. Particularly, from the perspective of one image scene, there are four kinds of relationships between the visual instances (V) and the semantic descriptors (S): * \(\mathbf{VtoS^{+}}\): The relationship between the visual instance and its corresponding semantic descriptor. * \(\mathbf{VtoS^{-}}\): The relationship between the visual instance and the semantic descriptors of other classes. * \(\mathbf{VtoV^{+}}\): The relationship between the visual instance and other instances from the same class. * \(\mathbf{VtoV^{-}}\): The relationship between the visual instance and the instances from other classes. Among these relationships, it is expected that the two items associated by the relationships \(VtoS^{+}\) and \(VtoV^{+}\) are as close as possible in the latent space. On the contrary, it is better to keep the items in the relationships \(VtoS^{-}\) and \(VtoV^{-}\) apart from each other as much as possible, reducing the possibility of mis-matching between the items in the relationships \(VtoS^{+}\) and \(VtoV^{+}\). Moreover, from the perspective of one semantic descriptor, there are two more kinds of relationships (since each class has only one semantic descriptor, the class-to-class relationship is omitted): * \(\mathbf{StoV^{+}}\): The relationship between the semantic descriptor and the visual instances from the same class. Obviously, this kind of relationship is equivalent to \(VtoS^{+}\). * \(\mathbf{StoV^{-}}\): The relationship between the semantic descriptor and the visual instances from other classes. This is contrastive to \(StoV^{+}\), i.e., it is better to keep the two items associated by the relationship apart from each other.

Fig. 1: The overview of the proposed method.

In current works like [5], much attention has been paid to the relationship \(VtoS^{+}\). They focus on positively promoting the single-instance level cross-modal feature alignment, i.e., each visual instance matches with its semantic descriptor. In this paper, we propose to take all these relationships into consideration. It is obvious that these relationships form three contrastive pairs, i.e., \(VtoS^{+}\) vs. \(VtoS^{-}\), \(VtoV^{+}\) vs. \(VtoV^{-}\), and \(StoV^{+}\) vs. \(StoV^{-}\), which cross multiple instances or their corresponding class descriptors. With these contrastive relationships as new constraints, we can deal with the challenge of large intra-class difference and inter-class similarity among remote sensing image scenes. It also constrains the influence of noisy samples or outliers on the learning of cross-modal feature alignment. Therefore, we propose the **cross-instance level contrastive alignment losses** to complement the **single-instance level positive alignment losses**. Next, we detail these two levels of loss constraints.
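To make the two-branch architecture described above concrete, here is a minimal PyTorch sketch of one VAE per modality. The 64-dimensional latent space is our assumption (not stated here); the input and hidden dimensions follow the experimental setup given later in Section III, and the class is illustrative rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class ModalityVAE(nn.Module):
    """One-hidden-layer VAE used for either modality (sketch)."""

    def __init__(self, in_dim, hidden_dim, latent_dim=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
        self.mu = nn.Linear(hidden_dim, latent_dim)
        self.logvar = nn.Linear(hidden_dim, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
                                 nn.Linear(hidden_dim, in_dim))

    def encode(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return z, mu, logvar

    def decode(self, z):
        return self.dec(z)

# Two VAEs, one per modality: 512-d ResNet18 visual features and 1024-d Bert
# semantic features, with hidden sizes 512 and 256 as described in Section III.
visual_vae = ModalityVAE(in_dim=512, hidden_dim=512)
semantic_vae = ModalityVAE(in_dim=1024, hidden_dim=256)
```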
### _Single-instance Level Positive Alignment Losses_ The single-instance level loss constraints promote the alignment between each image scene and its semantic descriptor in the latent space. Following the aligned VAEs method [7], there are three kinds of losses for this purpose: the VAE loss, the cross-modal feature reconstruction loss and the cross-modal distribution alignment loss. Their definitions are described as follows. #### III-B1 The VAE Loss For each VAE shown in Fig. 1, the encoder maps the input features into the latent space, and the decoder strives to reconstruct them from the mapped latent features. The VAE loss measures the reconstruction quality, and also expects that the data distribution in the latent space obeys the standard Gaussian distribution, as defined in Eq.(1). \[\begin{split}\ell_{VAE}&=E_{p_{E_{v}}(z_{v}|v)}[logp_{D_{v}}(v|z_{v})]+E_{p_{E_{s}}(z_{s}|s)}[logp_{D_{s}}(s|z_{s})]-\\ &\quad D_{KL}(p_{E_{v}}(z_{v}|v)||p(z_{v}))-D_{KL}(p_{E_{s}}(z_{s}|s)||p(z_{s}))\end{split} \tag{1}\] \(E_{p_{E_{v}}(z_{v}|v)}[logp_{D_{v}}(v|z_{v})]\) and \(E_{p_{E_{s}}(z_{s}|s)}[logp_{D_{s}}(s|z_{s})]\) are the reconstruction losses, measuring the difference between the input data and the reconstructed data. The terms \(D_{KL}(p_{E_{v}}(z_{v}|v)||p(z_{v}))\) and \(D_{KL}(p_{E_{s}}(z_{s}|s)||p(z_{s}))\) are the Kullback-Leibler (KL) divergence losses, referring to the difference between the distribution \(p_{E}(z|x)\) of the generated latent variable \(z\) in the latent space and the unit Gaussian distribution \(p(z)\). #### III-B2 The Cross-modal Feature Reconstruction Loss The cross-modal feature reconstruction loss shown in Eq.(2) aims to constrain the latent features to align with each other in the latent space. \(N\) denotes the number of training instances in one batch, and \(v^{i}\) and \(s^{i}\) represent the visual feature and semantic feature of the \(i^{th}\) instance. \[\ell_{CMFR}=\sum_{i=1}^{N}|v^{i}-D_{v}(E_{s}(s^{i}))|+|s^{i}-D_{s}(E_{v}(v^{i}))| \tag{2}\] #### III-B3 The Cross-modal Distribution Alignment Loss The cross-modal distribution alignment loss constrains the distribution alignment between each image scene and its semantic descriptor. Its definition is shown in Eq.(3), where \(\mu^{i}\) and \(\sqrt{E^{i}}\) represent the mean and standard deviation of the feature distribution in the latent space corresponding to the \(i^{th}\) image scene. \[\ell_{CMDA}=\sum_{i=1}^{N}\sqrt{||\mu^{i}_{v}-\mu^{i}_{s}||^{2}_{F}+||\sqrt{E^{i}_{v}}-\sqrt{E^{i}_{s}}||^{2}_{F}} \tag{3}\] ### _Cross-instance Level Contrastive Alignment Losses_ The cross-instance level loss constraints aim to promote the positive feature matching, i.e., \(VtoS^{+}\), \(VtoV^{+}\) and \(StoV^{+}\), and weaken the negative feature matching, i.e., \(VtoS^{-}\), \(VtoV^{-}\) and \(StoV^{-}\). Corresponding to these three kinds of contrastive relationships, three cross-instance level losses are introduced based on supervised contrastive learning [6]: the visual-to-visual contrastive loss, the visual-to-semantic contrastive loss, and the semantic-to-visual contrastive loss. They are defined as follows. #### III-C1 The Visual-to-visual Contrastive Loss Taking an image scene as the anchor, the visual-to-visual contrastive loss tends to make it close to the image scenes from the same class in the latent space, and apart from the image scenes from other classes.
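As a worked illustration of Eqs. (1)–(3), the following sketch assembles the three single-instance losses for one batch. It reuses the hypothetical ModalityVAE sketch from earlier; the choice of an L1 criterion for the VAE reconstruction term is our assumption (Eq. (2) itself is an L1 loss), and nothing here is the authors' released code.

```python
import torch
import torch.nn.functional as F

def single_instance_losses(v, s, visual_vae, semantic_vae):
    """Sketch of Eqs. (1)-(3): VAE, cross-modal feature reconstruction (CMFR)
    and cross-modal distribution alignment (CMDA) losses for one batch."""
    z_v, mu_v, logvar_v = visual_vae.encode(v)
    z_s, mu_s, logvar_s = semantic_vae.encode(s)

    # Eq. (1): within-modality reconstruction + KL divergence to a unit Gaussian.
    recon = F.l1_loss(visual_vae.decode(z_v), v) + F.l1_loss(semantic_vae.decode(z_s), s)
    kl = -0.5 * torch.mean(1 + logvar_v - mu_v.pow(2) - logvar_v.exp()) \
         - 0.5 * torch.mean(1 + logvar_s - mu_s.pow(2) - logvar_s.exp())
    l_vae = recon + kl

    # Eq. (2): decode each modality from the other modality's latent code.
    l_cmfr = F.l1_loss(visual_vae.decode(z_s), v) + F.l1_loss(semantic_vae.decode(z_v), s)

    # Eq. (3): distance between the two latent Gaussians (means and std deviations).
    std_v, std_s = torch.exp(0.5 * logvar_v), torch.exp(0.5 * logvar_s)
    l_cmda = torch.sqrt(((mu_v - mu_s) ** 2).sum(dim=1)
                        + ((std_v - std_s) ** 2).sum(dim=1)).sum()

    return l_vae, l_cmfr, l_cmda
```

The cross-instance contrastive terms that complement these losses are formalized next.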
Given a batch of image scenes whose normalized latent visual features are \(z^{v}\), the visual-to-visual contrastive loss is defined by Eq.(4). \[\ell_{VtoV}=-\sum_{i\in I}\frac{1}{|P(i)|}\sum_{p\in P(i)}log\frac{exp(z^{v}_{i}\bullet z^{v}_{p}/\tau)}{\sum_{a\in I\setminus i}exp(z^{v}_{i}\bullet z^{v}_{a}/\tau)} \tag{4}\] In the above definition, \(i\in I=|z^{v}|\) is the index of an arbitrary visual instance in the batch. \(P(i)=\{p\in I\setminus i|y_{p}=y_{i}\}\) is the set of indices, excluding \(i\), whose instances are from the same class as the \(i^{th}\) instance. \(\tau\in R^{+}\) is a temperature parameter controlling the concentration of the contrastive losses on hard samples. #### III-C2 The Visual-to-semantic Contrastive Loss Also taking an image scene as the anchor, the visual-to-semantic contrastive loss aims to promote its alignment with its corresponding semantic descriptor, and weaken its alignment with the semantic descriptors of other classes in the latent space. Given a batch of image scenes whose normalized latent visual features are \(z^{v}\), and whose normalized corresponding semantic features are \(z^{s}\), the visual-to-semantic contrastive loss is defined by Eq.(5). \[\ell_{VtoS}=-\sum_{i\in I}log\frac{exp(z^{v}_{i}\bullet z^{s}_{s(i)}/\tau)}{\sum_{k=1}^{|z^{s}|}exp(z^{v}_{i}\bullet z^{s}_{k}/\tau)} \tag{5}\] where \(s(i)\) denotes the index of the corresponding semantic descriptor of the \(i^{th}\) image scene. #### III-C3 The Semantic-to-visual Contrastive Loss The semantic-to-visual contrastive loss shifts the focus to the alignment from the semantic descriptors to the image scenes. It promotes the alignment of each semantic descriptor with the image scenes from the same class, and weakens the alignment with the image scenes of other classes in the latent space. Its definition is shown in Eq.(6). \[\ell_{StoV}=-\sum_{i\in J}\frac{1}{|P(i)|}\sum_{p\in P(i)}log\frac{exp(z_{i}^{s}\bullet z_{p}^{v}/\tau)}{\sum_{a\in I}exp(z_{i}^{s}\bullet z_{a}^{v}/\tau)} \tag{6}\] where \(i\in J=|z^{s}|\) is the index of an arbitrary semantic feature in the batch, and \(P(i)=\{p\in I|y_{p}=i\}\) is the set of indices of the visual features belonging to the class of the \(i^{th}\) semantic descriptor. ### _The Overall Loss_ With all the above defined loss functions, the overall loss function of our model is defined as Eq.(7), where \(\lambda_{i}(i=1,2,3,4,5)\) are the loss weight factors. \[\begin{split}\ell_{total}=\ell_{VAE}+\lambda_{1}\ell_{CMFR}+\lambda_{2}\ell_{CMDA}+\lambda_{3}\ell_{VtoV}\\ +\lambda_{4}\ell_{VtoS}+\lambda_{5}\ell_{StoV}\end{split} \tag{7}\] ## III Experiments To validate the proposed method, several experiments have been done in our work. In this section, we describe the experimental setup and the experimental results. ### _Experimental Setup_ **Dataset and Preprocessing:** We take the dataset which has been used in the work of Li et al. [5] for our experiments. There are 70 classes in the dataset, and 800 image scenes per class with a size of 256 \(\times\) 256 pixels. As can be seen in Fig. 1, instead of directly inputting the image scenes into our model, we first use an extractor to extract their visual features. In our work, we use the classical ResNet18 model [8] for this purpose, which has been pretrained on the ImageNet dataset and can be accessed in the Python environment. The extracted visual features have 512 dimensions.
Similarly, the semantic features of scene classes also need to be extracted in advance. We adopt the semantic features which were extracted by Bert [9] and have been used in the work of Li et al. [5] for experiments. They are 1024-dimensional features extracted from a set of sentences describing the image scene classes. **Implementation and Configuration:** The model used in our work comprises two VAEs and a softmax classifier. The two VAEs are for the two different modalities, i.e., the visual and the semantic, respectively. All the encoders and decoders in both VAEs are neural networks with only one hidden layer. For the visual modality, the hidden layers of the encoder and decoder have 512 dimensions. Meanwhile, the hidden layers of the encoder and decoder for the semantic modality have 256 dimensions. When using the generated latent features to train a classifier for predicting the classes of testing image scenes, the softmax classifier is applied. The settings of the parameters in our proposed method are shown in Table I, and 50 epochs are taken to train the model. We have divided the dataset according to the ratios of 60/10, 50/20 and 40/30 to obtain the seen and unseen classes of image scenes. For each ratio, the average of the classification accuracies over five seen/unseen splits is taken as the final result. Under the GZSL setting, we select 600 image scenes of each seen class for training, and the remaining 200 image scenes are used for testing. We report the overall accuracy for the ZSL setting, and the harmonic mean accuracy for the GZSL setting. ### _Comparison with Related Methods_ To validate the performance of our method, we have taken several classical zero-shot or generalized zero-shot methods as baselines for comparison. The results of the comparisons are shown in Table II. It can be seen that for both the zero-shot and the generalized zero-shot classification of remote sensing image scenes, our proposed method outperforms these baseline methods. Particularly, when compared with CE-GZSL [12], which also introduced contrastive learning into the GZSL task, our method shows better performance. CE-GZSL extends a GAN model by selecting several samples from different classes as the contrastive samples and several class descriptors of different classes as the contrastive descriptors for each instance. Different from that, we follow the cross-modal feature alignment approach to generate intermediate latent features for unseen classes and use them to train the classifier, instead of generating visual features for unseen classes directly from their semantic descriptors. The generated intermediate latent features may contain not only the important visual information but also the semantic information. Moreover, more contrastive relationships under the cross-modal feature alignment setting are also taken into account. These factors may explain why our proposed method resolves the zero-shot image scene classification better. ### _Ablation Experiments_ As aforementioned, three new contrastive alignment loss functions have been introduced to constrain the learning process in our method. They are the visual-to-visual contrastive loss \(\ell_{VtoV}\), the visual-to-semantic contrastive loss \(\ell_{VtoS}\), and the semantic-to-visual contrastive loss \(\ell_{StoV}\). To investigate the contribution of these components to the performance of image scene classification, ablation experiments have been done.
Table III shows the results under the seen/unseen ratio of 60/10. When comparing the variants \(v_{1}\), \(v_{2}\) and \(v_{3}\) to \(v_{0}\), the results show that all these contrastive losses are useful for the zero-shot classification of remote sensing image scenes: as shown in Table III, applying them raises the performance from \(31.34\%\) to over \(33\%\) under the ZSL setting, and from \(22.43\%\) to over \(23\%\) under the GZSL setting. It can also be seen that the visual-to-semantic contrastive loss \(\ell_{VtoS}\) contributes more under the ZSL setting, while under the GZSL setting the semantic-to-visual contrastive loss \(\ell_{StoV}\) plays a more important role. This indicates that there are more benefits from the contrastive losses considering the cross-modal relationships between the image scenes and their semantic descriptors. Based on variant \(v_{1}\), variant \(v_{4}\) further includes the visual-to-semantic contrastive loss \(\ell_{VtoS}\), and variant \(v_{5}\) uses all three contrastive losses simultaneously. The results show that the classification performance improves further when using these contrastive losses together. For example, when applying the contrastive losses \(\ell_{VtoV}\) and \(\ell_{VtoS}\) together, the performance rises from \(33.6\%\) to \(35.58\%\) under the ZSL setting, and from \(23.19\%\) to \(23.28\%\) under the GZSL setting. When applying all three contrastive losses, the performance further improves to \(36.61\%\) under the ZSL setting and \(23.52\%\) under the GZSL setting. This demonstrates that these losses do not conflict with each other and can be used collaboratively to obtain better zero-shot classification performance for remote sensing image scenes. ## IV Conclusion This paper proposes a multi-level cross-modal feature alignment method via contrastive learning for zero-shot classification of remote sensing image scenes. It uses two VAEs to project the visual features of image scenes and the semantic features of scene classes into a shared latent space, and learns to achieve the alignment between them in that space. Taking the cross-instance contrastive relationships into consideration, it proposes cross-instance level contrastive alignment losses to complement the single-instance level positive alignment losses. While learning to align the visual features of image scenes with their corresponding semantic features, it also learns to keep the visual and semantic features of different classes apart from each other in the latent space. It shows promise in addressing the challenge of large intra-class difference and inter-class similarity among image scenes and the potential noisy samples. Based on a widely used dataset, extensive experiments have been done to evaluate the performance of the proposed method. The results show that our proposed method outperforms state-of-the-art methods for the zero-shot classification of remote sensing image scenes.
Zero-shot classification of image scenes, i.e., the ability to recognize scenes that were not seen in the training set, holds the potential to reduce the dependence on large numbers of labeled samples. To address zero-shot image scene classification, cross-modal feature alignment methods have been proposed in recent years. These methods mainly focus on matching the visual features of each image scene with their corresponding semantic descriptors in the latent space. However, less attention has been paid to the contrastive relationships between different image scenes and between different semantic descriptors. The large intra-class differences and inter-class similarity among image scenes, together with potential noisy samples, can make these methods susceptible to the influence of samples that lie near the boundary between their own class and other classes. In this work, we propose a multi-level cross-modal feature alignment method via contrastive learning for the zero-shot classification of remote sensing image scenes.
2309.12696
Counterfactual Conservative Q Learning for Offline Multi-agent Reinforcement Learning
Offline multi-agent reinforcement learning is challenging due to the coupling effect of the distribution shift issue common in the offline setting and the high-dimensionality issue common in the multi-agent setting, making the action out-of-distribution (OOD) and value overestimation phenomena excessively severe. To mitigate this problem, we propose a novel multi-agent offline RL algorithm, named CounterFactual Conservative Q-Learning (CFCQL), to conduct conservative value estimation. Rather than regarding all the agents as a high-dimensional single one and directly applying single-agent methods to it, CFCQL calculates conservative regularization for each agent separately in a counterfactual way and then linearly combines them to realize an overall conservative value estimation. We prove that it still enjoys the underestimation property and the performance guarantee as those single-agent conservative methods do, but the induced regularization and safe policy improvement bound are independent of the agent number, which is therefore theoretically superior to the direct treatment referred to above, especially when the agent number is large. We further conduct experiments on four environments including both discrete and continuous action settings on both existing and our man-made datasets, demonstrating that CFCQL outperforms existing methods on most datasets and even with a remarkable margin on some of them.
Jianzhun Shao, Yun Qu, Chen Chen, Hongchang Zhang, Xiangyang Ji
2023-09-22T08:10:25
http://arxiv.org/abs/2309.12696v1
# Counterfactual Conservative Q Learning for Offline Multi-agent Reinforcement Learning ###### Abstract Offline multi-agent reinforcement learning is challenging due to the coupling effect of the distribution shift issue common in the offline setting and the high-dimensionality issue common in the multi-agent setting, making the action out-of-distribution (OOD) and value overestimation phenomena excessively severe. To mitigate this problem, we propose a novel multi-agent offline RL algorithm, named Counterfactual Conservative Q-Learning (CFCQL), to conduct conservative value estimation. Rather than regarding all the agents as a high-dimensional single one and directly applying single-agent methods to it, CFCQL calculates conservative regularization for each agent separately in a counterfactual way and then linearly combines them to realize an overall conservative value estimation. We prove that it still enjoys the underestimation property and the performance guarantee as those single-agent conservative methods do, but the induced regularization and safe policy improvement bound are independent of the agent number, which is therefore theoretically superior to the direct treatment referred to above, especially when the agent number is large. We further conduct experiments on four environments including both discrete and continuous action settings on both existing and our man-made datasets, demonstrating that CFCQL outperforms existing methods on most datasets, and even with a remarkable margin on some of them. ## 1 Introduction Online Reinforcement Learning (Online RL) requires frequently deploying untested policies to the environment for data collection and policy optimization, making it dangerous and inefficient to apply in real-world scenarios (e.g., autonomous vehicle teams). Offline Reinforcement Learning (Offline RL), in contrast, aims to learn policies from a fixed dataset rather than from interacting with the environment, and is therefore suitable for real applications with high safety requirements or without efficient simulators [25]. Directly applying off-policy RL to the offline setting may fail due to overestimation [13; 24]. Existing works usually tackle this problem by pessimism. They either utilize behavior regularization to constrain the learning policy to stay close to the behavior policy induced by the dataset [49; 20; 11], or conduct conservative (pessimistic) value iteration to mitigate unexpected overestimation [18; 21]. It has been demonstrated both theoretically and empirically that these methods can achieve comparable performance to their online counterparts under some conditions [21].
Offline multi-agent reinforcement learning is challenging because the coupling of the distribution shift issue common in the offline setting and the high-dimensionality issue common in the multi-agent setting makes the out-of-distribution (OOD) action and value overestimation phenomena excessively severe. To mitigate this problem, we propose CounterFactual Conservative Q-Learning (CFCQL), a novel multi-agent offline RL algorithm for conservative value estimation. Rather than regarding all agents as a single high-dimensional one and directly applying single-agent methods, CFCQL computes a conservative regularization for each agent separately in a counterfactual way, and then combines them linearly to realize an overall conservative value estimation. We prove that CFCQL still enjoys the same underestimation property and performance guarantee as single-agent conservative methods; however, the induced regularization and safe policy improvement bound are independent of the number of agents.
2309.03943
Cosmological Constraints from the eBOSS Lyman-$α$ Forest using the PRIYA Simulations
We present new cosmological parameter constraints from the eBOSS Lyman-$\alpha$ forest survey. We use a new theoretical model and likelihood based on the PRIYA simulation suite. PRIYA is the first suite to resolve the Lyman-$\alpha$ forest in a ($120$~Mpc/h~)$^3$ volume, using a multi-fidelity emulation technique. We use PRIYA to predict Lyman-$\alpha$ forest observables with $\lesssim 1\%$ interpolation error over an $11$ dimensional ($9$ simulated, $2$ in post-processing) parameter space. We identify an internal tension within the flux power spectrum data. Once the discrepant data is removed, we find the primeval scalar spectral index measured at a pivot scale of $k_0 = 0.78$ Mpc$^{-1}$ to be $n_P = 1.009^{+0.027}_{-0.018}$ at 68\% confidence. This measurement from the Lyman-$\alpha$ forest flux power spectrum alone is in reasonable agreement with Planck, and in tension with earlier eBOSS analyses. The amplitude of matter fluctuations is $\sigma_8 = 0.733^{+0.026}_{-0.029}$ at 68\% confidence, in agreement with Dark Energy Survey weak lensing measurements and other small-scale structure probes and in tension with CMB measurements from Planck and ACT. The effective optical depth to Lyman-$\alpha$ photons from our pipeline is in good agreement with earlier high resolution measurements. We find a linear power at $z=3$ and $k = 0.009$ s/km of $\Delta_L^2 = 0.302^{+0.024}_{-0.027}$ with a slope $n_\mathrm{eff} = -2.264^{+0.026}_{-0.018}$. Our flux power spectrum only chains prefer a low level of heating during helium reionization. When we add IGM temperature data we find $n_P = 0.983\pm 0.020$ and $\sigma_8 = 0.703^{+0.023}_{-0.027}$. Our chains prefer an early and long helium reionization event, as suggested by measurements from the helium Lyman-$\alpha$ forest. In the near future we will use our pipeline to infer cosmological parameters from the DESI Lyman-$\alpha$ data.
M. A. Fernandez, Simeon Bird, Ming-Feng Ho
2023-09-07T18:00:01
http://arxiv.org/abs/2309.03943v2
# Cosmological Constraints from the eBOSS Lyman-\(\alpha\) Forest using the PRIYA Simulations ###### Abstract We present new cosmological parameter constraints from the eBOSS Lyman-\(\alpha\) forest survey. We use a new theoretical model and likelihood based on the PRIYA simulation suite. PRIYA is the first suite to resolve the Lyman-\(\alpha\) forest in a (120 Mpc/h )\({}^{3}\) volume, using a multi-fidelity emulation technique. We use PRIYA to predict Lyman-\(\alpha\) forest observables with \(\lesssim 1\%\) interpolation error over an 11 dimensional (9 simulated, 2 in post-processing) parameter space. We identify an internal tension within the flux power spectrum data. Once the discrepant data is removed, we find the scalar spectral index at \(k=0.78\)h/Mpc to be \(n_{P}=0.97-0.995\) at 68% confidence from the Lyman-\(\alpha\) forest flux power spectrum alone, in good agreement with Planck. The amplitude of matter fluctuations is \(\sigma_{8}=0.733\pm 0.026\) at 68% confidence, in agreement with Dark Energy Survey weak lensing measurements and other small-scale structure probes and in tension with CMB measurements from Planck and ACT. The effective optical depth to Lyman-\(\alpha\) photons from our pipeline is in good agreement with earlier measurements. We add measurements of the mean temperature of the intergalactic gas from \(z=3.8-2.2\) and use them to constrain the duration and heating rate of helium reionization, finding a preference for an early, hot, helium reionization event, as suggested by measurements from the helium Lyman-\(\alpha\) forest. Adding the mean IGM temperature data also increases the significance of the \(\sigma_{8}\) tension. In the near future we will use our pipeline to infer cosmological parameters from the DESI Lyman-\(\alpha\) data. ###### Contents * 1 Introduction * 2 Simulation Suite and Emulator * 2.1 Cosmological & Astrophysical Parameters * 2.2 Summary Statistics: Flux Power and IGM Temperature * 2.3 Gaussian Process Emulators * 3 Inference Scheme and Likelihood Function * 3.1 Flux Power Spectrum Data * 3.2 IGM Mean Temperature Data * 3.3 Covariance Matrix * 3.4 Likelihood * 3.4.1 Priors * 3.5 Inference Using Simulation Data * 4 Results * 4.1 Cosmological Parameters * 4.2 Reionization and Other Parameters * 4.3 Parameter Correlations * 5 Discussion * 5.1 The Tension in the Lowest Redshift Bins * 5.2 Comparison of the Posterior Constraints to Other Lyman-alpha Analyses * 5.3 Likelihood Modifications * 6 Conclusions * A Leave-one-out versus Emulator Error * B BOSS DR9 Data * C Mean Temperature Only Posteriors * D Full Posteriors ## 1 Introduction The Lyman-\(\alpha\) forest [1; 2; 3; 4; 5; 6; 7; 8; 9] measures the distribution of neutral gas at relatively low densities. This gas traces the growth of cosmic structure, making the Lyman-\(\alpha\) forest an exceptionally powerful cosmological probe, sensitive to the distribution of dark matter deep in the matter dominated era. Correlating absorption from different quasar sightlines has allowed detection of the baryon acoustic oscillations and constraints on the expansion of the universe [10; 11; 12; 13]. The densities probed by the Lyman-\(\alpha\) forest from redshift \(z=2-5\) are \(\sim 1-100\)\(\times\) the cosmological mean density. For these redshifts and densities stellar winds and star formation effects are negligible, though feedback from black holes can be important [14; 15]. 
Thus the Lyman-\(\alpha\) forest is also able to measure the primordial fluctuations on some of the smallest scales available, \(k\sim 1\) h/Mpc [16, 17, 18, 19, 20, 21, 22, 23]. In addition, the Lyman-\(\alpha\) forest is sensitive to the thermal and ionization history of the intergalactic medium (IGM) [24, 25, 26, 27, 28, 29, 30, 31], and by constraining the free-streaming length of small structures, the mass scale of thermal relic dark matter [32, 33, 34, 35, 36, 37]. The extended Baryon Oscillation Sky Survey (eBOSS), part of the Sloan Digital Sky Survey (SDSS) [38], has computed the 1D flux power spectrum along quasar sight lines for over \(43,000\) quasars, with a statistical error \(\sim 1\%\) at some redshifts. This exceptional statistical error means that the error budget is dominated by systematic uncertainty, especially uncertainty in the resolution of the spectrograph on small scales [38]. The Dark Energy Spectroscopic Instrument (DESI) has improved the spectrograph resolution by a factor of two [39]. Thus, early data from DESI has measured the flux power spectrum at smaller scales (\(k\gtrsim 0.035\) km\({}^{-1}\) s) than SDSS [40, 41]. Future releases will measure higher redshifts (\(z>4.6\)) and increase the number of Lyman-\(\alpha\) forest quasar spectra by a factor of four over SDSS [42]. There are other high resolution, small sample datasets of quasar spectra, from which Lyman-\(\alpha\) forest flux power measurements have been made [43, 44, 45]. Ref. [44] used spectra from multiple surveys (XQ-100, KODIAQ, and SQUAD) to measure the Lyman-\(\alpha\) forest flux power at redshifts \(z=2-4.6\) and scales \(k\approx 0.005-0.1\) km\({}^{-1}\) s (albeit with larger uncertainty than eBOSS). Modeling the Lyman-\(\alpha\) forest requires numerical simulations that are able to follow the distribution of gas on small scales. In this paper we present cosmological parameter inference using a new likelihood built on the PRIYA simulation suite [46]. The PRIYA simulations are in 120 Mpc/h boxes, and are comprised of 48 simulations with \(2\times 1536^{3}\) particles (mean inter-particle spacing of 78 kpc/h), as well as 3 simulations with \(2\times 3072^{3}\) particles (mean inter-particle spacing of 39 kpc/h). The higher of these two resolutions exceeds the resolution of state-of-the-art galaxy formation simulations such as Illustris-TNG [47]. PRIYA is run with the same highly scalable MP-Gadget code as the ASTRID simulation [48, 49]. PRIYA contains full hydrodynamic simulations with models of galaxy formation and black hole feedback to \(z=2.2\). PRIYA is thus the first cosmological simulation suite which achieves, in a single box, the required box size of 120 Mpc/h, capable of minimising sample variance in the Lyman-\(\alpha\) forest [50], and a resolution high enough to include the gas Jeans' scale. Importantly, this removes the need for the'splicing' correction used in earlier work to combine different boxsizes into a single whole [22, 50]. Here, the PRIYA simulations are used to build multi-fidelity emulators [51, 52, 53] for the flux power spectrum and the mean temperature of the IGM. Each emulator is a surrogate model, able to reproduce the 1D flux power spectrum or mean IGM temperature for cosmological parameters (within the prior simulation volume) to \(\sim 1\%\) accuracy. A multi-fidelity emulator combines two different resolution training samples. Many low fidelity samples are used to explore parameter space, and their output is corrected with a few high fidelity samples. 
A multi-fidelity emulator makes predictions for the highest resolution simulation at a fraction of the computational cost of a single fidelity emulator [52, 54]. Emulators have been used to study various cosmological probes: the matter power spectrum [55, 56, 57, 58, 59, 60, 61], weak lensing shear [62, 63], the halo mass function [64, 65, 66], the 21-cm signal [67, 68, 69, 70] and the Lyman-\(\alpha\) forest [71, 72, 73, 74, 75, 51, 76, 77, 36, 78, 79, 80]. Our multi-fidelity emulator is similar to that described in Ref. [53], but the simulation volume has been increased by a factor of 64, and the spatial resolution has been improved by a factor of 1.5. We also use mean IGM temperature data [30] to constrain the parameters of helium reionization, data which is ultimately derived from higher resolution quasar surveys [43; 44; 45]. In summary, our method is: (1) Construct an emulator for the 1D Lyman-\(\alpha\) flux power spectrum and mean IGM temperature using the PRIYA simulations [46], Section 2. (2) Augment observational errors with estimates of the residual theoretical uncertainty to build a covariance matrix, and correct the flux power spectra for metal contamination as described in Section 3.
(3) Use this emulator and likelihood to constrain cosmological parameters using Markov Chain Monte Carlo (MCMC), with results described in Section 4. We discuss some caveats and compare to earlier work in Section 5 and our conclusions are presented in Section 6. MCMC chains for all the results presented in this work along with files containing the training outputs used to construct the emulators1, as well as the code2, which includes the emulator, likelihood, and integration with the Cobaya MCMC package, are available publicly. Footnote 1: [https://github.com/mafern/InferenceLyData](https://github.com/mafern/InferenceLyData) Footnote 2: [https://github.com/sbird/lya_emulator_full](https://github.com/sbird/lya_emulator_full) ## 2 Simulation Suite and Emulator In this Section, we briefly describe the properties of the simulations and emulator, and refer the reader to Ref. [46] for the full details. The emulator allows predictions for the output of a simulation at an arbitrary set of cosmological parameters within our prior volume with an average interpolation error of 0.2% at low fidelity and 1% at high fidelity. Our multi-fidelity emulator combines simulations at different resolutions, following the scheme outlined in Ref. [53]. The emulator combines low fidelity (LF) and high fidelity (HF) simulations. Box volume, number of gas particles, and gas particle mass resolution are reported in Table 1. We performed a total of 48 low fidelity (LF) and 3 high fidelity (HF) simulations. Low fidelity simulations have \(1536^{3}\) particles, while high fidelity simulations have \(3072^{3}\) particles. Sampled parameters are chosen to maximise spread in parameter space, as described fully in Ref. [46]. The range given for the gas mass resolution is due to the varying value of \(h\) in our simulation suite (\(\Omega_{b}h^{2}\) is fixed at a value of 0.0224). We show in Ref. [46] that this gas mass is sufficient for the scales and redshifts probed by the eBOSS flux power spectrum. Our simulations include a full galaxy physics model with star formation, stellar and AGN feedback and inhomogeneous reionization models. Simulations were performed using MP-Gadget3, an N-body and smoothed particle hydrodynamics (SPH) code. MP-Gadget uses the gravitational timestepping algorithm from Gadget-4 [76], and various other algorithmic \begin{table} \begin{tabular}{|c|c|c|c|} \hline Simulation & Box Volume & N\({}_{\text{gas}}\) & M\({}_{\text{gas}}\) (M\({}_{\odot}\) h\({}^{-1}\)) \\ \hline LF & (120 Mpc h\({}^{-1}\))3 & \(1536^{3}\) & \([5.29,6.98]\times 10^{6}\) \\ HF & (120 Mpc h\({}^{-1}\))3 & \(3072^{3}\) & \([6.73,7.97]\times 10^{5}\) \\ \hline \end{tabular} \end{table} Table 1: Low-Fidelity (LF) and High-Fidelity (HF) simulation suite details. N\({}_{\text{gas}}\) is the number of gas particles simulated, M\({}_{\text{gas}}\) is the resulting mass resolution of those particles. improvements [77]. Simulations are initialised at \(z=99\) and finish at \(z=2.2\). The galaxy formation model is similar to the ASTRID simulation [48; 49] and is described fully in Ref. [46]. ### Cosmological & Astrophysical Parameters Table 2 summarises the parameters that are varied across our suite of simulations, as well as their limits. We model the primordial power spectrum \(P(k)\) using two parameters: a slope, \(n_{P}\), and an amplitude, \(A_{P}\): \[P(k)=A_{P}\left(\frac{k}{0.78\text{h/Mpc}}\right)^{n_{P}-1}\,. 
We also vary the Hubble parameter \(h\), and the total matter density through \(\Omega_{M}h^{2}\), although we will see these are not strongly constrained by the Lyman-\(\alpha\) forest. We add three parameters for the He ii reionization model [78]: \(z_{Hei}\) and \(z_{Hef}\) are the redshifts for the start and end of He ii reionization, and \(\alpha_{q}\) is the quasar spectral index (which scales the peak temperature during He ii reionization). \(z_{Hi}\) is the midpoint redshift of H i reionization. Finally, \(\epsilon_{AGN}\) is the black hole feedback factor, to which the Lyman-\(\alpha\) forest is insensitive. There are two further parameters for the Lyman-\(\alpha\) effective optical depth, varied by post-processing the artificial spectra. We parameterize the mean flux \(\mathcal{F}=\exp(-\tau)\) around the power law redshift evolution from Ref. [79], as \[\tau_{\text{H i}}^{\text{eff}}=0.0023\times\tau_{0}\times(1+z)^{3.65+d\tau_{0}}\,. \tag{2}\] The parameters varied are \(\tau_{0}\) and \(d\tau_{0}\), with \((1,0)\) corresponding to the redshift evolution of Ref. [79]. As the mean flux is chosen in post-processing, we can dramatically over-sample these parameters. The final set of flux power spectra is thus ten times the number of simulations, or 480 LF, and 30 HF simulated flux power spectra. We have freedom to vary the Lyman-\(\alpha\) mean flux as it is degenerate with the amplitude of the ultraviolet background (UVB).

Figure 1: Example Lyman-\(\alpha\) forest spectra and corresponding gas density and temperature (colors in top and bottom panels) from an LF and HF simulation at redshift \(z=4\). The top panel shows high resolution and the bottom panel shows low resolution. The middle panel shows the Lyman-\(\alpha\) forest spectra for the skewers passing through the middle of the top panel (high resolution, yellow line) and bottom panel (low resolution, red dashed line).

### Summary Statistics: Flux Power and IGM Temperature

Figure 1 shows an example of the gas density and temperature (colors) at \(z=4\) for both high and low resolution in our simulations, demonstrating how spectra connect to the matter density field. We generate a total of \(3\times 480^{2}=691,200\) spectra from each snapshot of each simulation, from \(z=4.6\) to \(z=2.2\) in increments of \(\Delta z=0.2\), with a pixel resolution of 10 km s\({}^{-1}\). We generate Lyman-\(\alpha\) forest absorption spectra using Fake Spectra Flux Extractor [80]4, described in Ref. [81].

Footnote 4: [https://github.com/sbird/fake_spectra](https://github.com/sbird/fake_spectra)

We compute the 1D flux power spectrum of the Lyman-\(\alpha\) forest flux, averaged over all sightlines. The flux power is defined as \[P_{F}(k)=L^{-1}\left|\tilde{\delta}_{F}(k)\right|^{2}\,. \tag{3}\] Here \(\tilde{\delta}_{F}(k)\) is the Fourier transform of the flux excess, \(\delta_{F}=F/\langle F\rangle-1\), and \(L\) is the length of the sightline. Our simulations contain a realistic population of DLAs, which we mask as in the observational pipeline. We extract the IGM mean temperatures directly from the simulation snapshots. First, the temperature and density for all the gas particles in the simulation are retrieved, then all particles that are within 5% of the critical density are retained. The median temperature of these retained particles is used as the IGM mean temperature.
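As a sketch of the post-processing steps just described (this is not the fake_spectra or PRIYA code; the bisection scheme, variable names and toy data are illustrative assumptions), the snippet below rescales per-pixel optical depths so the mean flux follows Eq. 2, and computes the 1D flux power spectrum of Eq. 3 for a single sightline.

```python
import numpy as np

def tau_eff(z, tau0=1.0, dtau0=0.0):
    """Effective optical depth model of Eq. 2."""
    return 0.0023 * tau0 * (1.0 + z) ** (3.65 + dtau0)

def rescale_tau(tau, z, tau0, dtau0):
    """Rescale per-pixel optical depths so <exp(-a*tau)> matches exp(-tau_eff).

    Uses a simple bisection on the scaling factor a; the production pipeline
    may solve for the mean flux differently.
    """
    target = np.exp(-tau_eff(z, tau0, dtau0))
    lo, hi = 1e-3, 1e3
    for _ in range(60):
        a = np.sqrt(lo * hi)
        if np.mean(np.exp(-a * tau)) > target:
            lo = a   # spectra too transparent: increase the scaling
        else:
            hi = a
    return a * tau

def flux_power_1d(flux, box_kms):
    """1D flux power spectrum of Eq. 3 for one sightline of velocity length box_kms.

    Equivalent to |Fourier transform of delta_F|^2 / L with a discrete transform.
    """
    delta_f = flux / np.mean(flux) - 1.0
    delta_k = np.fft.rfft(delta_f) / delta_f.size
    return box_kms * np.abs(delta_k) ** 2   # P_F at k = 2*pi*n / box_kms

# Toy example: random optical depths standing in for a simulated skewer.
tau = np.random.lognormal(mean=-1.0, sigma=1.0, size=2048)
tau = rescale_tau(tau, z=3.0, tau0=1.1, dtau0=0.0)
p_f = flux_power_1d(np.exp(-tau), box_kms=2048 * 10.0)   # 10 km/s pixels
```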
All of the Lyman-\(\alpha\) forest flux power spectra, IGM mean temperatures, trained emulators, as well as select MCMC chains are available at: [https://github.com/mafern/InferenceLyaData](https://github.com/mafern/InferenceLyaData).

### Gaussian Process Emulators

We use the Gaussian Process (GP) emulators described in Refs. [46, 53] for the 1D flux power spectra and mean IGM temperature extracted from our simulations. The emulators interpolate over simulation outputs and make predictions for arbitrary parameter sets within the parameter limits shown in Table 2.

\begin{table} \begin{tabular}{l l l l} \hline Parameter & Minimum & Maximum & Description \\ \hline \(n_{P}\) & 0.8 & 0.995 & Scalar spectral index \\ \(A_{P}\) & \(1.2\times 10^{-9}\) & \(2.6\times 10^{-9}\) & Power amplitude at \(k=0.78\) h/Mpc \\ \(h\) & 0.65 & 0.75 & Hubble parameter \\ \(\Omega_{M}h^{2}\) & 0.14 & 0.146 & Total matter density \\ \(z_{Hei}\) & 3.5 & 4.1 & Start redshift of HeII reionization \\ \(z_{Hef}\) & 2.6 & 3.2 & End redshift of HeII reionization \\ \(\alpha_{q}\) & 1.3 & 2.5 & Quasar spectral index during HeII reionization \\ \(z_{Hi}\) & 6.5 & 8 & Median redshift of HI reionization \\ \(\epsilon_{AGN}\) & 0.03 & 0.07 & Thermal efficiency of black hole feedback \\ \(\tau_{0}\) & 0.75 & 1.25 & Mean optical depth at \(z=3\) in Eq. 2. \\ \(d\tau_{0}\) & \(-0.4\) & 0.25 & Mean optical depth redshift evolution in Eq. 2. \\ \hline \end{tabular} \end{table} Table 2: Summary of likelihood function parameters, together with the ranges covered by the emulator. We vary a total of 11 parameters: 4 for cosmology, 3 for the helium reionization model, 1 for the hydrogen reionization model, 1 for the strength of AGN feedback and 2 for the mean optical depth.

We use a multi-fidelity model, which allows simulations with different particle loads, and thus costs, to be combined together. Specifically, we combine simulations run at two different resolutions, high fidelity (HF) and low fidelity (LF), specified in Table 1. The multi-fidelity prediction for the HF outputs (shown here for the Lyman-\(\alpha\) forest flux power spectrum, but equally valid for the mean temperature) is given by a linear multi-fidelity model: \[P_{F}^{\text{\tiny HF}}(k,\mathbf{\theta})=\rho(k)\cdot P_{F}^{\text{\tiny LF}}(k,\mathbf{\theta})+\delta(k,\mathbf{\theta}), \tag{4}\] where \(\rho\) is a cosmology-independent but scale-dependent parameter, and \(\delta(k,\mathbf{\theta})\) is a GP (independent of the LF output), both of which are optimized using the training samples. We implement our multi-fidelity models using Emukit [82]. Ref. [46] quantifies the accuracy of our emulator using a leave-one-out technique, in which one simulation is chosen as a 'left out' sample. A smaller emulator built excluding all samples from this simulation is used to predict the summary statistic for the left out sample. After computing leave-one-out interpolation errors for all potential test samples, we found on average 0.2% accuracy for the low-fidelity simulations and 1% for the high fidelity simulations. The latter is likely a significant over-estimate of the actual error since the leave-one-out procedure in this case is missing 1/3 of the total training data. The emulator accuracy indicates that the emulators are performing well, with potential errors significantly smaller than the 7% average uncertainty in mean IGM temperature measurements [38]. Our likelihood function and emulator code are publicly available5.
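As an illustration of the linear multi-fidelity model in Eq. 4 (a simplified stand-in for the Emukit-based implementation; the function, array shapes, and the least-squares fit for \(\rho\) are assumptions of this sketch), two standard Gaussian-process regressors can be combined per \(k\) bin:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def fit_multifidelity_bin(theta_lf, p_lf, theta_hf, p_hf):
    """Toy linear multi-fidelity model (Eq. 4) for a single k bin.

    theta_*: (N, n_params) arrays of sampled parameters; p_*: flux power values.
    Returns a callable giving the high-fidelity prediction at new parameter points
    (passed as a 2D array of shape (M, n_params)).
    """
    kernel = RBF(length_scale=np.ones(theta_lf.shape[1]))
    gp_lf = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(theta_lf, p_lf)

    # Scale factor rho: least-squares fit of the HF outputs against the LF prediction.
    p_lf_at_hf = gp_lf.predict(theta_hf)
    rho = np.sum(p_hf * p_lf_at_hf) / np.sum(p_lf_at_hf ** 2)

    # GP on the residual delta(theta) = P_HF - rho * P_LF.
    gp_delta = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(
        theta_hf, p_hf - rho * p_lf_at_hf)

    def predict(theta_new):
        return rho * gp_lf.predict(theta_new) + gp_delta.predict(theta_new)

    return predict
```

In the production analysis the scale-dependent \(\rho(k)\) and the GP \(\delta(k,\mathbf{\theta})\) are optimized jointly using Emukit, but the structure of the prediction follows the same form as this sketch.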
Footnote 5: [https://github.com/sbird/lya_emulator_full](https://github.com/sbird/lya_emulator_full) ## 3 Inference Scheme and Likelihood Function In this Section, we describe the inference scheme and likelihood function by which our cosmological parameter constraints are derived from our emulator, the eBOSS flux power spectrum [38] and the mean IGM temperature. The overall inference scheme is: 1. Use the emulator to predict the flux power spectrum and IGM mean temperature for a set of input parameters (see Table 2). 2. Calculate a likelihood comparing these predictions to their observational counterparts from eBOSS [38] and Ref. [30]. 3. Use Cobaya [83, 84] to run MCMC chains and compute posterior parameter constraints. Section 3.1 discusses the flux power spectrum data, while Section 3.2 discusses the IGM temperature data. We derive our covariance matrix in Section 3.3. Details of the likelihood calculation used in the MCMC sampling are given in Section 3.4. We validate our pipeline on simulated data in Section 3.5. ### Flux Power Spectrum Data We use the observed Lyman-\(\alpha\) forest flux power spectrum from [38], which is based on the Baryon Oscillation Spectroscopic Survey (BOSS) and extended-BOSS (eBOSS) quasar samples [85, 86]. In [38], the BOSS/eBOSS quasar samples are refined to remove spectra that have not been visually inspected, and to remove spectra with broad absorption lines. Sky lines and damped Lyman-\(\alpha\) absorbers (DLAs) are masked. Our simulations include a realistic population of DLAs, which are masked in the same way. The sample of Lyman-\(\alpha\) forests from the set of remaining quasar spectra is then further refined based on cuts to the spectral resolution, signal-to-noise ratio, number of masked pixels, and forest length, with a final sample of about \(43,000\) spectra. The redshifts and scales covered by these observations set the redshift range and scales we use in our flux power spectrum emulator, namely \(z=2.2-4.6\) (redshift bin size of \(\Delta z=0.2\)), and \(k\approx 0.001-0.02\) s/km (over 35 linearly spaced bins, \(\Delta k=5.42\times 10^{-4}\) s/km). Note, our emulator can easily be re-trained for the smaller scales probed by DESI. The average uncertainty in the Lyman-\(\alpha\) forest flux power from Ref. [38] ranges from \(\approx 6\%\) at low redshifts or large scales, to \(\approx 16\%\) for high redshifts or small scales, and is often dominated by systematic uncertainty. We apply correction terms to the Lyman-\(\alpha\) forest flux power spectrum predicted by our emulator to model DLAs and metal contamination. We correct for DLAs using the template from Ref. [87]. This allows us to account for differences in the DLA masking between our simulated pipeline and the observed pipeline. An example would be DLAs, or Lyman limit systems (LLS), which are not detected in the observational pipeline due to low spectral signal-to-noise. Note that our simulation includes a model that produces realistic populations of LLSs and DLAs, so the marginalised template allows for aspects in which the simulated model differs from the real Universe. In [87], there are four parameters, with sub-DLAs separate from LLSs, and DLAs divided into two categories. For each of the parameters, a redshift and scale dependent correction is applied, where a positive (negative) value for the parameter implies that our simulation has underestimated (overestimated) the number of absorbers in that category. 
We found that in practice our dataset was unable to measure separately all four of the column density bins. We thus simplify our likelihood by using only two additional free parameters, one parameter covering sub-DLAs and LLS, \(\alpha_{\rm lls}\), and one parameter covering DLAs, \(\alpha_{\rm dla}\). \(\alpha_{\rm lls}\) covers column densities between \(1.6\times 10^{17}-10^{20}\) cm\({}^{-2}\), and \(\alpha_{\rm dla}\) covers \(10^{21}-10^{22.5}\) cm\({}^{-2}\). We account for correlated Si iii absorption within the Lyman-\(\alpha\) forest following Ref. [9]. Our likelihood includes an additional nuisance parameter, \(f_{\rm SiIII}\), which measures the amplitude of the metal contamination.

### IGM Mean Temperature Data

We use the mean IGM temperatures from Ref. [30], derived from simulation modeling of high resolution quasar spectra from the KODIAQ survey [88]. Ultimately the dataset is a relatively small, visually inspected set of high resolution quasar spectra. Importantly, these spectra are independent of the eBOSS quasar sample, justifying our choice of separate likelihood functions. We include IGM mean temperature data for \(z=2.2-3.8\), for consistency with the available Lyman-\(\alpha\) forest flux power data. The average uncertainty for this data set is \(\approx 10\%\), whereas our mean temperature emulator has an average uncertainty of \(\sim 1\%\). Ref. [30] provides mean temperatures derived from four different statistics: the Lyman-\(\alpha\) forest flux power spectrum, curvature, wavelet decomposition, and Doppler width distribution. We use the Lyman-\(\alpha\) forest flux power spectrum derived temperatures in the main body of this work, but show results using these other data sets in Appendix C. To derive temperatures from the observed quasar spectra, Ref. [30] calculated several summary statistics and compared them to those derived from spectra drawn from simulations. The simulations they used were similar in resolution to our HF suite (gas mass resolution of \(\sim 10^{5}\) M\({}_{\odot}\)), though much smaller in volume (10 Mpc/h box side length). Because these observed mean temperatures are themselves derived using a suite of simulations, it would be optimal to remove the middle step in future work, i.e. calculate mean temperatures using the observed small-scale 1D flux power spectrum and our own set of simulations.

### Covariance Matrix

In this Section, we derive the covariance matrix, \(\mathbf{K}\), that is used for our inference. We decompose \(\mathbf{K}\) as: \[\mathbf{K}=\mathbf{K}_{\rm BOSS}+\mathbf{\sigma}_{GP}(\mathbf{p})\cdot\mathbf{\sigma}_{GP}^{T}(\mathbf{p})+\mathbf{\sigma}_{CV}\cdot\mathbf{\sigma}_{CV}^{T}\,. \tag{10}\] Here, \(\mathbf{K}_{\rm BOSS}\) is the covariance matrix from the eBOSS pipeline [38], and is the largest term in the covariance matrix on most scales. We also add two extra terms which model theoretical error in our model. \(\mathbf{\sigma}_{GP}(\mathbf{p})\) is the parameter dependent estimate of the interpolation error from the Gaussian process. We found that in some cases when a parameter was poorly constrained this could unphysically drive the chain towards the edge of parameter space where the interpolation error was large. Although this term appears in the decomposition above, we therefore omit it from the covariance matrix used for our fiducial chains. Appendix A shows that its addition has a small effect on our final results.
The second theoretical error in our simulation suite (which dominates) is \(\mathbf{\sigma}_{CV}\), which models residual sample variance from the finite box size, analogous to cosmic variance from the finite cosmological horizon6. We include an estimate of sample variance using the leave-one-out errors discussed in Ref. [46], a technique made possible by the inclusion of \(h\) in our simulation suite. The Hubble parameter does not directly affect the gravitational evolution in our simulations due to Gadget's use of Mpc/h units, and Ref. [46] showed that the effect on the thermal history is small on the scales probed by eBOSS. However, in our parameterization, \(h\) also changes \(\Omega_{M}\)7 and so the conversion of wavenumbers from h/Mpc to s/km. Individual Fourier modes thus move between bins depending on the value of \(h\), mimicking the sample variance from different initial phases. We thus approximate \(\mathbf{\sigma}_{CV}\) with the averaged variance of the leave-one-out errors using the low fidelity simulations. Leave-one-out errors are found by building a reduced emulator, which is trained on all but a single sample, then evaluating the prediction accuracy for that left-out sample using the reduced emulator. This is then repeated, such that every sample takes a turn being left out (see Figure 2 of [46]): \[\mathbf{\sigma}_{CV}^{2}=\frac{1}{N_{LF}}\sum_{i}\left(P_{F}^{\rm Predict}(k,z,p_{i})-P_{F}^{\rm True}(k,z,p_{i})\right)^{2}\,. \tag{10}\] Here the sum runs over all simulated low-fidelity parameter sets \(p_{i}\) and \(N_{LF}\) is the number of low-fidelity simulations.

Footnote 6: Ref. [89] reduced sample variance by interpolating the parameters of a higher order polynomial, rather than fitting the binned flux power spectrum directly. Our emulator is much less affected by sample variance as our simulated volume is 8 times larger.

Footnote 7: \(\Omega_{M}h^{2}\) is a separate parameter and so kept fixed when varying \(h\).

Figure 2 shows the magnitude of the \(\mathbf{\sigma}_{CV}\) term compared to the eBOSS errors. \(\mathbf{\sigma}_{CV}\) is significant only on the largest scales, \(k<2.5\times 10^{-3}\) s/km, as expected from an effect due to finite box size. In addition, there is a significant redshift dependence: \(\mathbf{\sigma}_{CV}\) is large compared to the eBOSS errors only for \(2.8<z<3.4\). For clarity, Figure 2 shows only \(z=2.8\) and \(z=3.4\). However, we have verified that \(\mathbf{\sigma}_{CV}\) reduces for lower redshifts. For the redshift range \(4.2>z>3.4\), \(\mathbf{\sigma}_{CV}\) remains approximately constant, but becomes less significant as the eBOSS statistical errors increase. These details reveal the physical source of this large-scale variance. The relevant scale is close to the 20 Mpc/h size of the helium reionization bubbles, and the relevant redshift range is when our model performs helium reionization. Helium reionization bubbles are placed randomly around rare large halos, which creates sample variance in a finite box.

Figure 2: Diagonal elements of the covariance matrix as a function of scale and for selected redshift bins. Shown are the square root of the diagonal elements of the eBOSS covariance matrix and the diagonal cosmic variance error estimated from leave-one-out errors, \(\mathbf{\sigma}_{CV}\).
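A minimal numerical sketch of this error model (illustrative variable names and a stand-in `train_emulator` callable; this is not the released likelihood code) estimates \(\mathbf{\sigma}_{CV}\) from leave-one-out errors, assembles the covariance of Equation 10, and evaluates the Gaussian log-likelihood used in the next subsection:

```python
import numpy as np

def sigma_cv_from_loo(theta_lf, p_lf, train_emulator):
    """Leave-one-out estimate of sigma_CV per (k, z) bin.

    theta_lf: (N_LF, n_params) sampled parameters; p_lf: (N_LF, n_bins) flux powers.
    train_emulator(theta, p) must return a predictor callable; it stands in for
    the actual Gaussian-process training step.
    """
    n_lf = len(theta_lf)
    sq_err = np.zeros(p_lf.shape[1])
    for i in range(n_lf):
        keep = np.arange(n_lf) != i
        predictor = train_emulator(theta_lf[keep], p_lf[keep])
        sq_err += (predictor(theta_lf[i:i + 1])[0] - p_lf[i]) ** 2
    return np.sqrt(sq_err / n_lf)

def build_covariance(k_boss, sigma_cv, sigma_gp=None):
    """K = K_BOSS + sigma_GP sigma_GP^T + sigma_CV sigma_CV^T for one redshift bin.

    sigma_gp is optional because the fiducial chains omit the GP interpolation term.
    """
    cov = k_boss + np.outer(sigma_cv, sigma_cv)
    if sigma_gp is not None:
        cov += np.outer(sigma_gp, sigma_gp)
    return cov

def gaussian_loglike(p_sim, p_obs, cov):
    """Contribution of one redshift bin to the Gaussian log-likelihood of Section 3.4."""
    diff = p_sim - p_obs
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (diff @ np.linalg.solve(cov, diff) + logdet)
```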
### Likelihood

We use a Gaussian log-likelihood, summed over all redshifts and, for the flux power, all scale bins: \[\log\!\mathcal{L}=-\frac{1}{2}\sum_{z=2.2}^{z=4.6}\left(\left(\mathbf{P}_{F}^{\rm diff}\right)^{\top}\mathbf{K}^{-1}\cdot\mathbf{P}_{F}^{\rm diff}+\log\left(\det(\mathbf{K})\right)\right) \tag{11}\] where \(\mathbf{P}_{F}^{\rm diff}=\mathbf{P}_{F}^{\rm sim}-\mathbf{P}_{F}^{\rm obs}\) is the vector difference between the simulation prediction and the observation. The covariance matrix, \(\mathbf{K}\), is described in Equation 10. The likelihood for the IGM mean temperature is similar, but single valued per redshift. We compute the Lyman-\(\alpha\) forest flux power and IGM mean temperature likelihoods separately. Because the flux power spectrum has larger magnitude likelihoods (due to the extra dimension of information, \(k\), and additional redshift bins), the mean temperature likelihood is scaled by a factor of \(\sim 10\), to ensure both information sources contribute roughly equally to the posterior. The factor of \(\sim 10\) was determined by calculating the mean likelihood from MCMC chains for the IGM mean temperature and the flux power separately. We make use of the Cobaya package [83, 84, 90, 91] to run MCMC chains using this likelihood. The MCMC sampler uses the Metropolis method discussed in [90], and uses a Gaussian + exponential proposal distribution that dynamically learns the proposal covariance. Convergence is determined using the Gelman-Rubin statistic, \(R\), also detailed in [90]. The chains presented here were all run until a convergence of \(R-1<0.01\), with plotted results using samples taken after the chains reached \(R-1<1\).

#### 3.4.1 Priors

We use the parameter limits shown in Table 2. As we showed in Ref. [46], the AGN feedback parameter \(\epsilon_{AGN}\) has minimal effect on the Lyman-\(\alpha\) forest 1D flux power spectrum. Preliminary chains indicated that it is indeed poorly constrained by the data and has minimal correlations with other parameters. We use a strong Gaussian prior with \(\mu=0.05\) and \(\sigma=0.005\), which dominates over data constraints, and will omit constraints on \(\epsilon_{AGN}\) from our results. We also place a weak Gaussian prior on the Hubble parameter, \(h\), with \(\mu=0.70\) and \(\sigma=0.015\), as it is weakly constrained and this prior avoids the inference straying into areas near the edge of parameter volume where the emulation is less accurate. For all other parameters we use uniform priors within the parameter limits.

### Inference Using Simulation Data

In this section we test our inference framework with simulation outputs in place of the observational data, confirming that we recover the known input parameters. We first used the flux power spectrum from one of the three high fidelity simulations, and confirmed that the maximum likelihood was indeed at the input parameter values for all parameters. All input parameters were recovered to better than one sigma. We next ran chains using data from a low-fidelity simulation with a different random seed from the main PRIYA suite. For these runs only, we used an emulator built using the low fidelity suite. This test was designed to quantify whether the finite box size of our simulations can affect our parameter constraints. Figure 3 shows the results, with dashed black lines indicating the correct parameters. We have performed three runs. The first (‘Seed FPS’) is our preferred error model, using the eBOSS covariance and a leave-one-out (LOO) error term.
The second (‘Seed FPS + GPERR’) adds an error term for the expected interpolation error from the Gaussian Process. Both of these runs use only information from the eBOSS flux power spectrum and thus do not provide strong constraints on the parameters of helium reionization. We therefore run a third chain (‘Seed FPS + \(T_{0}\)’) including constraints from the mean temperature. In all three runs, the optical depth parameters, \(\tau_{0}\) and \(d\tau_{0}\), are tightly constrained around the true value in our pipeline, despite the effect of a different structure seed. The GPERR chain increases the uncertainty, especially on \(d\tau_{0}\), but does not bias the measurement. The best estimate comes from the flux power spectrum data alone (‘Seed FPS’). We also consistently recover the true values of the cosmological parameters \(n_{P}\) and \(A_{P}\), although \(A_{P}\) is about \(1-\sigma\) high for the chain which includes the mean temperature. This is not unreasonable as we deliberately constructed our test data, with a different structure seed, to be different from the training data. \(\Omega_{M}h^{2}\) is poorly constrained in all chains, as expected given that our prior volume includes only a narrow range for \(\Omega_{M}h^{2}\), motivated by Planck results. All parts of the prior range are within the \(1-\sigma\) posteriors.

Figure 3: Posteriors using mock data, a simulation output with a different initial seed to the main PRIYA suite. The true parameter values for the input are indicated by the black dashed lines. Three chains are used: ‘Seed FPS’ uses the default error model, with the eBOSS covariance and leave-one-out errors. ‘Seed FPS + GPERR’ adds the covariance from the Gaussian Process. ‘Seed FPS + \(T_{0}\)’ uses the default error model but supplements the flux power spectrum data with information from the mean temperature. \(v_{scale}\) is the Hubble parameter \(h\), re-labelled for reasons explained in the text.

The redshift of hydrogen reionization, \(z_{HI}\), is estimated from the mean IGM temperature at \(z>3.6\) or from a large-scale increase in the flux power spectrum at \(z>4\) (see Ref. [46]). The second effect is due to a scale-dependent bias arising from placement of the reionization bubbles [92]. Figure 3 indicates that this bias is sensitive to sample variance from the finite box, and so the hydrogen reionization redshift is not well measured by the flux power spectrum data alone. The three parameters which govern helium reionization, \(z_{i}^{HeII}\), \(z_{f}^{HeII}\) and \(\alpha_{q}\), are well constrained by the mean temperature data. The runs which do not include mean temperature data have a preference for a larger \(\alpha_{q}\) than the input value. As discussed above, the main effect of a different structure seed is through the placement of helium reionization bubbles. \(\alpha_{q}\) is thus measured using a similar scale-dependent bias as \(z_{HI}\), and so is slightly sensitive to the finite box size in the same way. However, the mean IGM temperature is sensitive to \(\alpha_{q}\) through the peak temperature during helium reionization, and thus the chains including it correctly infer \(\alpha_{q}\). The chain including Gaussian Process errors sometimes produced incorrect parameter inferences, notably in \(\alpha_{q}\). It seems that GP errors create an implicit prior which sometimes shifts the results towards regions of higher interpolation accuracy.
The simulated data seems to exaggerate this effect, perhaps because of the exact choice of simulated parameters near the edge of the prior volume. We show in Appendix A that the posteriors from eBOSS data are not significantly changed by including GP error. Nevertheless, to be conservative our main results are reported with chains run omitting GP errors from the likelihood. As discussed above, the Hubble parameter, \(h\), does not affect the evolution of our simulations except through its effect on \(\Omega_{M}\) (at fixed \(\Omega_{M}h^{2}\)) and thus the scaling between velocity (km/s) and comoving (Mpc/h) units. Constraints on \(h\) (\(v_{\rm scale}\)) are incorrect in the chains shown in Figure 3, driven by sample variance in the finite box. We confirmed that gradients of the likelihood with respect to \(h\) are largest for the largest scales, particularly the first 4 \(k\)-bins. We computed the \(\chi^{2}\) per degree of freedom, which was \(\sim 0.9\) for \(h=0.65\) and \(\chi^{2}\sim 1\) for the true input value, indicating over-fitting to the noise created by different structure seeds. We confirmed that fixing \(h\) to the known true value results in very small changes to the other parameters and their confidence intervals. There is thus zero cosmological information in our \(h\) constraints. In order to avoid unwarranted conclusions, we will henceforth relabel \(h\) as \(v_{\rm scale}\), emphasising that it merely controls the mapping between the native Fourier-space binning of the simulations and the observed velocity space of the spectra, and its inference is dominated by modelling error from sample variance.

## 4 Results

In this Section, we report posterior constraints on the parameters listed in Table 2. Section 4.1 discusses the results for those parameters which are most strongly constrained by the flux power spectrum data. These are: the optical depth parameters, \(\tau_{0}\) and \(d\tau_{0}\), the power spectrum parameters, \(n_{P}\) and \(A_{P}\), and the growth function \(\Omega_{M}h^{2}\). Section 4.2 then discusses the constraints on the other parameters, and shows the best fit to the mean IGM temperature data. These are the three parameters defining the He ii reionization model, \({\rm z}_{i}^{\rm He\ II}\), \({\rm z}_{f}^{\rm He\ II}\), and \(\alpha_{q}\); the parameter for the midpoint of H i reionization, \({\rm z}^{\rm H\ I}\); the strong absorber models (\(\alpha_{LLS}\) and \(\alpha_{DLA}\)); the Silicon III correction (\(f_{\rm SiIII}\)); and the velocity to distance scale parameter \(v_{scale}\). The same chains are used in all sections: we split parameters into two sections merely for readability. We show the full corner plot, containing all constrained parameters, in Appendix D. Table 3 shows marginalised posterior parameter constraints, including the derived parameters \(A_{s}\) and \(\sigma_{8}\).

### Cosmological Parameters

Figure 4 shows the results of our chains for the cosmological parameters. We show three MCMC chains. Two chains are fit to the eBOSS flux power spectrum data only. The first fits to the full redshift range measured by eBOSS, \(z=2.2-4.6\) (black), while the second fits a limited redshift range \(z=2.6-4.6\) (red). The third chain uses the limited redshift range eBOSS dataset but adds the mean IGM temperature likelihood. The chain including the \(z<2.6\) data prefers lower \(n_{P}\), lower \(A_{P}\) and higher \(\tau_{0}\) than the reduced redshift range. Figure 5 shows that the shift in posterior parameters is driven by the fit.
The best-fit flux power spectrum to the data at \(z\geq 2.6\) is a poor fit to the flux power spectra measured at \(z=2.2\) and \(z=2.4\). Since the lowest redshift bins have the smallest statistical error, when they are included they drive the best-fit flux power spectrum to a region which is a poorer fit to the higher redshift data. We confirmed that a chain which included the \(z=2.4\) bin but not the \(z=2.2\) bin produced posterior constraints mid-way between the chain including \(z=2.2-4.6\) and the chain including \(z=2.6-4.6\). Table 4 shows this quantitatively. The chains which fit to \(z\geq 2.6\) are a poor fit to the lowest two redshift bins. The chain fitting to \(z=2.2\) is a better fit to these bins, at the cost of an overall worse fit to most higher redshifts. The total \(\chi^{2}\) per degree of freedom for the reduced redshift chains is close to 1.03, indicating a good fit, while the full redshift range has a \(\chi^{2}\)/dof of 1.15. There is thus an internal tension in the eBOSS dataset, when compared to our model, driven by the lowest two redshift bins. We discuss possible reasons for this tension in Section 5.1 and compare to the results of earlier analyses further in Section 5.2.

Figure 4: Posteriors for the optical depth and power spectrum parameters, \(\tau_{0}\), \(d\tau_{0}\), \(n_{P}\), \(A_{p}\), and \(\Omega_{M}h^{2}\). Results are from three MCMC chains: those labelled ‘FPS’ use the flux power spectrum only. These include a chain using the full eBOSS dataset, ‘FPS \(z=2.2-4.6\)’ (black) and a chain using a limited range eBOSS dataset, found to remove the internal tension (Section 5.1), ‘FPS \(z=2.6-4.6\)’ (red). The third chain, ‘FPS + \(T_{0}\) \(z=2.6-4.6\)’ (gold), uses the limited range eBOSS dataset but adds the mean IGM temperature constraints. Our preferred cosmological constraints are the red chain, ‘FPS \(z=2.6-4.6\)’.
\begin{table} \begin{tabular}{|l|c|c|c|} \hline & FPS \(z>2.6\) & FPS \(+\)\(T_{0}\) & FPS \(z>2.2\) \\ Parameter & 68\% (95\%) & 68\% (95\%) & 68\% (95\%) \\ \hline \(d\tau_{0}\) & \(-0.006\pm 0.037\)\({}^{+0.071}_{-0.076}\) & \(0.028\pm 0.036\)\({}^{+0.070}_{-0.071}\) & \(-0.240\pm 0.030\)\({}^{+0.058}_{-0.059}\) \\ \(\tau_{0}\) & \(1.098^{+0.016}_{-0.020}\)\({}^{+0.036}_{-0.033}\) & \(1.093\pm 0.017\)\({}^{+0.034}_{-0.033}\) & \(1.214^{+0.013}_{-0.010}\)\({}^{+0.022}_{-0.024}\) \\ \(n_{\rm P}\) & \(>0.969\)\((>0.949)\) & \(0.972^{+0.018}_{-0.0095}\)\((>0.948)\) & \(0.891\pm 0.011\)\({}^{+0.021}_{-0.020}\) \\ \(A_{\rm P}/10^{-9}\) & \(1.56\pm 0.12\)\({}^{+0.25}_{-0.23}\) & \(1.38^{+0.08}_{-0.11}\)\((<1.56)\) & \(<1.28\)\((<1.36)\) \\ \(\Omega_{M}h^{2}\) & \(0.1425^{+0.0011}_{-0.0015}\)\({}^{+0.0024}_{-0.0024}\) & \(0.1427^{+0.0013}_{-0.0018}\) (—) & \(0.1414^{+0.0006}_{-0.0010}\)\((<0.143)\) \\ \(z_{i}^{HeII}\) & \(>4.04\)\((>3.95)\) & \(4.04^{+0.05}_{-0.02}\)\((>3.97)\) & \(>4.07\)\((>4.02)\) \\ \(z_{f}^{HeII}\) & \(<2.76\)\((<2.98)\) & \(2.72\pm 0.04\)\({}^{+0.09}_{-0.09}\) & \(<2.68\)\((<2.80)\) \\ \(\alpha_{q}\) & \(>2.32\)\((>2.03)\) & \(1.40^{+0.04}_{-0.09}\)\((<1.52)\) & \(>2.41\)\((>2.28)\) \\ \(z^{HI}\) & \(>7.61\)\((>7.24)\) & \(7.58^{+0.37}_{-0.16}\)\((>7.09)\) & \(>7.79\)\((>7.54)\) \\ \(v_{\rm scale}\) & \(0.691^{+0.011}_{-0.009}\)\({}^{+0.021}_{-0.022}\) & \(0.684^{+0.016}_{-0.010}\)\({}^{+0.021}_{-0.026}\) & \(0.694^{+0.007}_{-0.009}\)\({}^{+0.018}_{-0.016}\) \\ \(\alpha_{lls}\) & \(0.162\pm 0.029\)\({}^{+0.056}_{-0.057}\) & \(0.191\pm 0.028\)\({}^{+0.054}_{-0.056}\) & \(0.039^{+0.017}_{-0.019}\)\({}^{+0.037}_{-0.034}\) \\ \(\alpha_{dla}/10^{-2}\) & \(-0.4\pm 0.6\)\({}^{+1.2}_{-1.2}\) & \(-1.3\pm 0.5\)\({}^{+1.0}_{-1.1}\) & \(-0.7\pm 0.3\)\({}^{+0.6}_{-0.6}\) \\ \(f_{SiIII}/10^{-3}\) & \(9.66\pm 0.51\)\({}^{+1.0}_{-1.0}\) & \(9.62\pm 0.51\)\({}^{+1.0}_{-0.99}\) & \(8.81\pm 0.40\)\({}^{+0.76}_{-0.77}\) \\ \hline \(A_{\rm s}/10^{-9}\) & \(1.67^{+0.12}_{-0.13}\)\({}^{+0.27}_{-0.24}\) & \(1.49^{+0.09}_{-0.12}\)\({}^{+0.21}_{-0.20}\) & \(1.71^{+0.06}_{-0.09}\)\({}^{+0.16}_{-0.15}\) \\ \(\sigma_{8}\) & \(0.733\pm 0.026\)\({}^{+0.052}_{-0.049}\) & \(0.696^{+0.021}_{-0.025}\)\({}^{+0.045}_{-0.043}\) & \(0.713^{+0.012}_{-0.018}\)\({}^{+0.032}_{-0.028}\) \\ \hline \end{tabular} \end{table} Table 3: Posterior parameter constraints, including the derived parameters \(A_{s}\) and \(\sigma_{8}\). Maximum posterior values, and 68% confidence limits are shown, with 95% confidence intervals in brackets. Each column shows a separate chain, from left to right: fits to the flux power spectrum alone from the reduced redshift range \(z=2.6-4.6\), fits to the flux power spectrum from the reduced redshift range \(z=2.6-4.6\) and the mean IGM temperature, and fits to the flux power spectrum alone from the full redshift range \(z=2.2-4.6\). Single sided limits are shown when one bound is larger than the prior volume of the emulator.

It is important not to over-interpret the posterior constraints from the \(z=2.2-4.6\) chain. When there is an internal tension in the data and the overall reduced \(\chi^{2}\) is poor, the posteriors can be driven by noise in the dataset and may not be meaningful. We also checked for other redshift bins where the fit is poor. Visually, Figure 5 suggests that the fit is poor for \(z=4.0\) and \(z=4.2\). The reduced \(\chi^{2}\) in Table 4 is moderately higher than 1, but not significantly so, as the statistical errors are also large.
It is possible that some element of the covariance matrix, theoretical or systematic, is moderately underestimated in these bins. The \(z=2.6\) bin has a slightly elevated \(\chi^{2}/\)dof in the reduced redshift range chains, which may suggest that whatever causes the low redshift tension still has a small effect at \(z=2.6\). Posterior constraints from the reduced redshift range flux power spectrum data show a mean optical depth in good agreement with other measurements. As a reminder, these parameters measure deviations from the mean flux relation of Ref. [79], so a value of \(\tau_{0}=1\) and \(d\tau_{0}=0\) corresponds to agreement with that model. The best-fit mean flux parameter at \(z=3\) is \(\tau_{0}\simeq 1.1\), and the redshift variation \(d\tau_{0}\) is consistent with 0. This implies a mean optical depth at \(z=3\) of \(\tau_{\rm H1}^{\rm eff}(z=3)=0.398\), which is extremely close to the best-fit value of \(\tau_{\rm H1}^{\rm eff}(z=3)=0.4\) from Ref. [93], and consistent within the error bars with \(\tau_{\rm H1}^{\rm eff}(z=3)=0.36\pm 0.1\) from [79].

Figure 5: Observed Lyman-\(\alpha\) forest flux power spectrum [38], from \(z=4.6\) to \(z=2.2\) (black lines and circles, with shading corresponding to one sigma uncertainty). Also shown are three predictions for the Lyman-\(\alpha\) forest flux power spectrum from our multi-fidelity emulator corresponding to the maximum posterior input parameters compiled in Table 3. The negative of the log-likelihood for these fits is compiled in Table 4.

The spectral index is \(n_{P}=0.972^{+0.018}_{-0.009}\) when using \(T_{0}\) and \(n_{P}>0.969\) when using only the flux power spectrum, where the upper limit of the prior is consistent at \(2\sigma\). This is consistent with the Planck value of \(n_{s}=0.965\pm 0.004\) [94]. The growth factor, \(\Omega_{M}h^{2}\), is weakly constrained and not strongly affected by including mean temperature data. Planck found \(\Omega_{M}h^{2}=0.1424\pm 0.001\), which is close to the posterior of our chains. The primordial power spectrum amplitude is \(A_{p}/10^{-9}=1.56\pm 0.12\) for the flux power spectrum reduced redshift result. The inclusion of the mean temperature shrinks the constraints moderately and shifts the posterior value down by about \(1.5\sigma\). Table 3 shows \(A_{s}\), the power spectrum amplitude measured by Planck on large scales, which is related to \(A_{P}\) via: \[A_{s}=\left(0.4/2\pi\right)^{n_{P}-1}A_{P}\,. \tag{4.1}\] We find \(A_{s}=\left(1.67\pm 0.12\right)\times 10^{-9}\) for the flux power spectrum alone and \(A_{s}=\left(1.49^{+0.09}_{-0.12}\right)\times 10^{-9}\) when including the mean IGM temperature. Planck [94] found a value of \(A_{s}=\left(2.101^{+0.031}_{-0.034}\right)\times 10^{-9}\). We also derived the value of \(\sigma_{8}\) implied by our parameters by using CLASS in post-processing [95]. For the flux power spectrum alone, we find \(\sigma_{8}=0.733\pm 0.026\), and when the mean IGM temperature is included, \(\sigma_{8}=0.696\pm 0.025\) (see Table 3). The Planck result is \(\sigma_{8}=0.811\pm 0.006\) [94]. We thus measure a power spectrum amplitude around \(3-\sigma\) lower than Planck or ACT CMB lensing [96]. Interestingly, the dark energy survey year 3 results measure \(\sigma_{8}=0.733^{+0.039}_{-0.049}\) [97], in good agreement with our results. Other small-scale structure probes vary [e.g. 98, 99, 100].
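To make the conversion of Eq. 4.1 concrete, the small sketch below (the function name is ours; \(\sigma_{8}\) itself requires a Boltzmann code such as CLASS, as noted above) maps a representative maximum-posterior \((A_{P},n_{P})\) from Table 3 to the Planck-convention \(A_{s}\).

```python
import numpy as np

def As_from_AP(A_P, n_P):
    """Convert the Lyman-alpha pivot amplitude A_P to A_s using Eq. 4.1."""
    return (0.4 / (2.0 * np.pi)) ** (n_P - 1.0) * A_P

# Representative values from the z = 2.6-4.6 flux power spectrum chain (Table 3).
print(As_from_AP(A_P=1.56e-9, n_P=0.97))   # ~1.7e-9, consistent with the quoted A_s
```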
Figure 5 shows the Lyman-\(\alpha\) forest flux power spectrum from [38], along with their estimated one sigma uncertainty (black). Also shown are predictions from our multi-fidelity emulator based on the maximum posterior input parameters from MCMC analysis with only the Lyman-\(\alpha\) forest flux power emulator in the full and reduced redshift ranges, and MCMC analysis using both the mean temperature and flux power emulators. The correlation between Lyman-\(\alpha\) and Si iii absorption is visible in the form of regular oscillations in the power spectrum (in Section 3.4 we describe the correction we make for Si iii). The best-fit flux power spectrum is not significantly affected by the inclusion of the \(T_{0}\) data in the likelihood, as is expected if the \(T_{0}\) data mainly breaks degeneracies.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Redshift & 4.6 & 4.4 & 4.2 & 4.0 & 3.8 & 3.6 & 3.4 \\ \hline FPS \(z=2.2-4.6\) & 1.08 & 1.03 & 1.46 & 1.79 & 0.87 & 0.51 & 0.63 \\ FPS \(z=2.6-4.6\) & 1.22 & 0.85 & 1.20 & 1.40 & 0.74 & 0.36 & 0.69 \\ FPS \(+T_{0}\)\(z=2.6-4.6\) & 1.19 & 0.85 & 1.28 & 1.46 & 0.80 & 0.31 & 0.73 \\ \hline Redshift & 3.2 & 3.0 & 2.8 & 2.6 & 2.4 & 2.2 & Total \\ \hline FPS \(z=2.2-4.6\) & 0.83 & 1.82 & 1.64 & 0.98 & 0.99 & 1.36 & 1.15 \\ FPS \(z=2.6-4.6\) & 0.76 & 1.35 & 1.29 & 1.45 & 2.97 & 4.17 & 1.03 \\ FPS \(+T_{0}\)\(z=2.6-4.6\) & 0.82 & 1.27 & 1.22 & 1.67 & 3.38 & 4.51 & 1.06 \\ \hline \end{tabular} \end{table} Table 4: \(\chi^{2}\) per degree of freedom for the flux power spectrum for each redshift bin. Shown is the \(\chi^{2}\) per degree of freedom at the best fit parameters in each chain. We show chains fitting to the flux power spectrum only at \(z=2.2-4.6\), fitting to the flux power spectrum only at \(z=2.6-4.6\), and fitting to the flux power spectrum and mean IGM temperature at \(z=2.6-4.6\). The column labelled ‘Total’ is the total \(\chi^{2}\) per degree of freedom for the redshift bins in the fit (i.e., excluding \(z=2.2\) and \(z=2.4\) for the last two chains).

### Reionization and Other Parameters

Figure 6 shows the other parameters of our model from the same chains as Figure 4. These are: three parameters of the helium reionization model (\(z_{i}^{\rm He\ II}\), \(z_{f}^{\rm He\ II}\), and \(\alpha_{q}\)), the midpoint of H i reionization \({\rm z^{H\ I}}\), the parameters of the strong absorber model (\(\alpha_{LLS}\), \(\alpha_{DLA}\)), the strength of the metal contamination \(f_{SiIII}\) and the box velocity scale \(v_{scale}\). The flux power spectrum data alone prefers an early start to helium reionization, \(z_{i}^{HeII}>3.9\). Interestingly, this is in agreement with constraints from the helium Lyman-\(\alpha\) forest, where regions of high transmission suggest that HeII reionization has already started at \(z=3.5\) [101, 102].

Figure 6: Posterior constraints for the parameters of the helium reionization model (\(z_{i}^{HeII}\), \(z_{f}^{HeII}\), \(\alpha_{q}\)), the hydrogen reionization model (\(z^{HI}\)), the strong absorber models (\(\alpha_{LLS}\), \(\alpha_{DLA}\)), the Silicon III correction (\(f_{\rm SiIII}\)), and the velocity to distance scale parameter \(v_{scale}\). Results are from three MCMC chains: those labelled ‘FPS’ use the flux power spectrum only. These are a chain with the full eBOSS dataset, ‘FPS \(z=2.2-4.6\)’ (black) and a chain with the limited range eBOSS dataset found to remove the internal tension ‘FPS \(z=2.6-4.6\)’ (red). The third chain, ‘FPS + \(T_{0}\) \(z=2.6-4.6\)’ (gold), uses the limited range eBOSS dataset but adds the mean IGM temperature constraints.
The mean temperature data reinforce this, but do not substantially shrink constraints, perhaps because the highest redshift mean IGM temperature we fit to is 3.8, and the flux power spectrum already constrains \(z_{i}^{HeII}>3.95\) at 95% confidence. The end of helium reionization, \(z_{f}^{HeII}\), is weakly constrained by the flux power spectrum data alone, but once the mean IGM temperature data is included (importantly, this is the only source of data at \(z<2.6\)), constraints tighten to \(z=2.7\pm 0.09\) at 95% confidence. This is consistent with the He ii Lyman-\(\alpha\) forest, which suggests an end at \(z\leq 2.7\)[103; 104; 105]. The most significant effect of the mean IGM temperature data is on the spectral index during helium reionization, \(\alpha_{q}\). Smaller values of \(\alpha_{q}\) correspond to a larger heating rate. The flux power spectrum data prefers a high value of \(\alpha_{q}\), although the constraints are weak. However, as shown in Figure 7, this high value of \(\alpha_{q}\) produces a mean IGM temperature which is extremely low and in clear disagreement with the data from Ref. [30]. Figure 5 shows that the flux power spectrum is not significantly different at the maximum posterior parameters of either chain. The chains are exploiting a multi-dimensional degeneracy between the helium reionization parameters, discussed in Section 5, to marginally improve the match to the flux power spectrum (as also seen in Table 4, where the \(\chi^{2}\) of the best-fits are similar). Appendix C shows the results of chains which include only the mean temperature data. They are consistent with the combined chains, and the helium reionization parameters are constrained at similar values. One difference is that while the combined chains prefer a value of \(\alpha_{q}\) marginally lower than allowed by our prior volume, the mean temperature prefers \(\alpha_{q}\sim 1.4\). This is because the mean temperature allows for a lower value of \(z_{i}^{HeII}\). Figure 7: IGM mean temperatures from [30] (black lines and circles, with shading corresponding to one sigma uncertainty). Specifically their temperatures derived from the flux power spectrum calculated using high resolution Lyman-\(\alpha\) forest spectra. Also shown are predictions for the mean temperature from our multi-fidelity emulator corresponding to the same three maximum posterior input parameters used in Figure 5. The midpoint of hydrogen reionization is poorly constrained in all models. The redshift range explored here (\(z=2.2-4.6\)) is well after the completion of hydrogen reionization, even in models where it ends late. All our chains suggest a midpoint \(\mathrm{z^{H\,{}^{1}}}\gtrsim 7.1\). This is well within the range allowed by other experiments: Planck suggests \(\mathrm{z^{H\,{}^{1}}}=7.7\pm 0.7\), while an analysis of the Lyman-\(\alpha\) emitter luminosity function finds \(\mathrm{z^{H\,{}^{1}}}\sim 7.25\)[106]. We show results for the nuisance parameters associated with the strong absorbers and the Lyman-\(\alpha\) -SiIII cross-correlation. As discussed in Ref. [46], our simulation suite includes strong absorbers self-consistently using a galaxy formation model. The strong absorber parameters measure the difference between the strong absorber model in the simulation and the model in the observed spectra, so that \(\alpha_{LLS}=0\) means that our galaxy formation model is a good match to the circumgalactic gas in the observed Universe. 
DLAs are subtracted from both the observed and simulated spectra, and so \(\alpha_{DLA}\) measures primarily the efficiency of the observational DLA finder. All our chains produce a posterior \(\alpha_{DLA}\) tightly peaked and close to 0. The chains using the flux power alone are centered on \(\alpha_{DLA}=0\), while the chain including the mean temperature prefers a slightly negative value, perhaps indicating that the eBOSS DLA finder includes some false positives. Note that DESI includes improved DLA finding algorithms based on machine learning [107, 108, 109]. For the restricted redshift chains, we measure \(\alpha_{LLS}\sim 0.16\), while for the full redshift range \(\alpha_{LLS}\sim 0.04\). The preference for a non-zero \(\alpha_{LLS}\) in the reduced redshift chains suggest that our simulations have fewer LLS than the real Universe. LLS are on the boundary of being optically thick and thus radiative transfer effects within the gas are important, making them the most difficult absorbers to model accurately. \(\alpha_{LLS}\) can affect the flux power spectrum normalisation, and so it is reasonable to interpret the preference of the full redshift chains for a low \(\alpha_{LLS}\) as an artifact of the fit. However, it is also possible that the low \(\alpha_{LLS}\) points to the origin of the internal tension, a possibility we discuss further in Section 5.1. The SiIII cross-correlation is \(f_{SiIII}=0.0095\pm 0.001\) from our \(z=2.6-4.6\) chains. The full redshift range prefers a slightly lower value of \(0.0085\pm 0.001\), which is in good agreement with the measurement of \(0.008\pm 0.001\) from DR9 by Ref. [110]. The effect of \(f_{SiIII}\) can be seen in the oscillations of the flux power spectrum in Figure 5. The results for \(v_{\mathrm{scale}}\) are dominated by the prior, as expected [111, 22]. Constraints are weaker than for the simulated data, likely because the simulated data did not include noise and so was over-fitting. Figure 7 shows the IGM mean temperature from the flux power spectrum on small scales from Ref. [30]. Also shown in Figure 7 are predictions from our multi-fidelity emulator based on the maximum posterior input parameters from the same chains used in Figure 5. Once the mean temperature data is included in the fit, the chains are in good agreement. However, when it is not included the chains prefer a lower mean IGM temperature, exploiting an internal degeneracy in the flux power spectrum. The thermal history preferred by the full redshift range of the flux power spectrum is very similar to that preferred by the restricted range flux power spectrum. ### Parameter Correlations Figure 8 shows the correlations between our parameters, for the chain using the flux power spectrum from \(z=2.6-4.6\), as well as the mean IGM temperature. Most correlations are weak. We have deliberately chosen our pivot scale of 0.78 h/Mpc to minimise the correlation between \(A_{P}\) and \(n_{P}\), and the correlation matrix confirms it is weak, with a correlation coefficient \(r=0.2\). There is a correlation between \(\tau_{0}\) and \(d\tau_{0}\) (\(r=-0.54\)), as the redshift bin which provides the strongest constraints on the optical depth is not exactly \(z=3\). The optical depth \(\tau_{0}\) is anti-correlated with both \(A_{P}\) (\(r=-0.7\)) and \(n_{P}\) (\(r=-0.62\)) as its main effect is to change the amplitude of the flux power spectrum. 
There is a three-dimensional degeneracy between \(\alpha_{q}\), \(z_{i}^{HeII}\) and \(z_{f}^{HeII}\) (see Figure 8), which allows a wide range of \(\alpha_{q}\) to fit the flux power spectrum data, and is only broken by information from the thermal history. Lower \(\alpha_{q}\) corresponds to more heating from quasars during He ii reionization. If He ii reionization starts earlier or ends later, the IGM requires more heating from quasars to match the observations, while the opposite is true for late starting, or early ending He ii reionization. Appendix C shows the results of chains which include only the mean temperature data, which clearly show this three-dimensional degeneracy: a slightly later start to helium reionization would require less total heating and thus a higher value of \(\alpha_{q}\). Several of these correlations could be broken by the inclusion of higher redshift thermal history data, or lower redshift flux power spectrum data. Finally, the abundance of Lyman Limit Systems, \(\alpha_{LLS}\), exhibits several interesting correlations. \(\alpha_{LLS}\) is anti-correlated with \(\alpha_{DLA}\) (\(r=-0.59\)), as the flux power spectrum templates for strong absorbers have similar shapes in neighbouring column density bins. \(\alpha_{LLS}\) is also correlated with \(n_{P}\) (\(r=0.61\)) and \(d\tau_{0}\) (\(r=0.42\)), due to similarities in the shapes of their flux power spectrum templates. The combination of the three-way correlation between \(n_{P}\), \(\alpha_{LLS}\) and \(\tau_{0}\) is exploited by the chains to explain the inconsistent \(z=2.2,2.4\) flux power spectrum bins and drives the discrepant constraints these chains show. This correlation may be reduced by the inclusion of extra small-scale data available in the DESI early data release.

Figure 8: Correlation matrix between parameters for the chain using the flux power spectrum and mean IGM temperature for \(z=2.6-4.6\).

## 5 Discussion

In this Section we discuss the implications of the results in Section 4. Section 5.1 discusses possible explanations for the internal tension in the data between \(z=2.2-2.4\) and \(z\geq 2.6\). Section 5.2 compares our results to other datasets and earlier Lyman-\(\alpha\) analyses. Section 5.3 discusses how our results are affected by modifications to our likelihood.

### The Tension in the Lowest Redshift Bins

In this Section, we discuss possible explanations for the internal tension between the flux power spectrum data at \(z=2.2-2.4\) and \(z\geq 2.6\). There are two generic possibilities: either an important physical effect is missing from our simulation model, or there is a systematic in the dataset not captured by the systematic error budget. To evaluate the possibility of systematic error, we can look at independent measurements of the flux power spectrum on similar scales and at similar redshifts. Figure 9 shows different measurements of the 1D flux power, \(P_{F}(k)\), at \(z=2.2\) and \(z=2.6\). We show the results from SDSS DR14 [38], SDSS DR9 [110], a recent analysis using high resolution spectra from KODIAQ/SQUAD [44], and from DESI Early Data Release data (DESI EDR) [40]. At \(z=2.6\) (and higher redshift bins) all analyses are in reasonably good agreement, given their respective statistical errors. However, this is not the case at \(z=2.2\), where there is some discrepancy between SDSS (DR14 and DR9), DESI and KODIAQ. SDSS DR14 and DR9 are in good agreement, and in Appendix B we show that the posterior parameter constraints also agree well.
The KODIAQ flux power spectrum is lower by around \(1-\sigma\) on the smallest scales measured by eBOSS, which could be due to the effect of continuum modelling in the KODIAQ data [44]. The DESI EDR data agrees well with eBOSS for \(k>0.01\) s/km, but is discrepant by \(>2\sigma\) for \(k<0.01\) s/km. This discrepancy is also present at \(z=2.4\), and is discussed in the DESI EDR papers, see Ref. [41], Appendix D. They ascribed 30% of the difference between eBOSS and DESI to continuum fitting, but the origin of the rest is currently unclear. Figure 9: Observational 1D power spectrum data from SDSS DR14 (solid black) [38], SDSS DR9 (green dotted) [110] and KODIAQ/SQUAD [44]. Filled bands show the range covered by diagonal elements of the covariance matrix. (Left) At \(z=2.2\). (Right) At \(z=2.6\). Given the fairly large disagreements between different measurements at \(z=2.2\), systematic error is a highly plausible explanation for the internal tension we find. In future work we will combine our likelihood function with the DESI flux power spectra and investigate their cosmological implications. We should also consider possible theoretical explanations. Explanations rooted in alternative early Universe models seem a priori unlikely as they would have to cause an effect only for \(z<2.6\), when the Universe is known to be matter dominated. However, there are a few possible astrophysical explanations. Feedback effects from AGN become increasingly important at low redshift. It is possible that a stronger AGN feedback prescription than we use, or than is implemented in current cosmological simulation suites, could efficiently disrupt gas at \(z<2.6\) on small scales and explain these results. Interestingly, such an AGN feedback model has recently been proposed as an explanation for the low value of \(S_{8}=\sigma_{8}(\Omega_{M}/0.3)^{0.5}\) preferred by some weak lensing surveys [112], which matches our results. However, Ref. [113] examined a wide range of AGN feedback models, including some much more aggressive than those in PRIYA (or ASTRID). None of these models can affect the \(z=2.2\) flux power spectrum at the level required to explain these results (Tillman, private communication). DLAs are important at low redshift, and do affect the slope of the flux power spectrum. However, our simulations include a population of DLAs in good agreement with observations [46], which are masked using the same procedure as the observational pipeline. In addition, we include a free parameter to model the residual power from any DLAs not detected by eBOSS, and the posterior value for this free parameter is consistent with zero even when the \(z=2.2\) data is included. At low redshift, the Lyman-\(\alpha\) forest is increasingly contaminated by metal lines. We include a simple prescription for Si III and the inclusion of low redshift data does not drive the best-fit parameter for this model, preferring slightly less metal contamination. However, it is possible that a more sophisticated model could help reduce the tension. One interesting but entirely speculative possibility is suggested by the LLS abundance, \(\alpha_{LLS}\), which is lower in the full redshift chains. [114] identified a systematic in the SDSS colour selection which causes quasar sightlines containing Lyman Limit Systems (LLS) to be preferentially selected for spectroscopic followup. 
[115] and [116] showed that, due to the width of the \(u\)-band filter in SDSS, LLS are over-sampled for \(z=2.5-3.6\) for all quasars in the redshift range \(z=3.0-3.6\). It is thus possible that \(\alpha_{LLS}\) depends on the quasar (not absorber) redshift. Note that the flux power spectrum we measure depends on the absorber redshift and so a simple redshift split would not detect this effect8. A check for colour selection systematics would be to compute the flux power spectrum in two different quasar redshift bins.

Footnote 8: We ran a chain with two \(\alpha_{LLS}\) parameters for \(z<2.6\) and \(z\geq 2.6\). The maximum likelihood for \(\alpha_{LLS}\) at \(z>2.6\) was \(\sim 2-\sigma\) larger than in the full redshift chain, and \(n_{P}\) increased by \(\sim 0.5\sigma\), but all other parameters were unchanged.

### Comparison of the Posterior Constraints to Other Lyman-\(\alpha\) Analyses

It is interesting to compare the results of our chains to those of Ref. [22] (for DR14) and Ref. [111] (for DR9). The most notable difference is that Ref. [22] tested excluding the lowest two redshift bins and found minimal change in the posteriors of their cosmological parameters. This disagrees with our results. We believe this discrepancy can be ascribed to our different treatment of nuisance parameters. Ref. [22] employed correction functions for supernova feedback from the OWLs simulation suite [14] and for AGN feedback from the Horizon-AGN suite [15]. Each correction function is most significant at low redshift, and is included with a free amplitude parameter which is marginalised over. In addition, earlier models were forced by computational limits to use the 'splicing' technique of Ref. [50]. In this model multiple simulation boxes with overlapping scale ranges are combined to model the scales probed by the Lyman-\(\alpha\) forest. A single larger simulation with \(2048^{3}\) particles was used to generate a scale and redshift dependent correction function, and the amplitude of this correction was marginalised over with a Gaussian prior. Our larger simulations can instead model all relevant scales in a single simulation and so do not need a free parameter for splicing. In addition, we self-consistently incorporate models for stellar and AGN feedback and star formation into our simulations. Thus the analysis of Ref. [22] differs from ours in that it has three nuisance parameters, each of which affects the lowest redshift bins most strongly and each of which is marginalised over in the chains. Ref. [117] mentions that removing splicing reduces \(n_{s}\) significantly (although the posterior value of the splicing correction is not reported), which is what we would expect if the splicing correction were absorbing an internal tension. Thus we believe that the \(z=2.2\) and \(z=2.4\) redshift bins contribute only marginally to the cosmological constraints in Ref. [22] and instead constrain splicing and AGN feedback. We should therefore compare the quantitative results of Ref. [22] to our reduced redshift chains. Ref. [111] find a mean optical depth power law of \(\tau_{eff}=(0.0025\pm 0.0001)\times(1+z)^{3.734\pm 0.015}\) in DR9. Ref. [22] do not report the mean optical depth from DR14, but we presume it is similar to that in DR9. We define the mean optical depth relative to the power law \(\tau_{eff}=0.0023(1+z)^{3.65}\), so that in our parameterization these constraints correspond to \(\tau_{0}=1.09\pm 0.05\) and \(d\tau_{0}=0.084\pm 0.015\).
Their optical depth measurements are thus in good agreement with our measurements for the reduced redshift range of \(z=2.6-4.6\). Ref. [22] found \(n_{s}=0.954\pm 0.006\) for DR14 and \(n_{s}=0.938\pm 0.010\) for DR9. Meanwhile the power spectrum amplitude as measured by \(\sigma_{8}\) is \(0.826\pm 0.02\) in DR14 and similar in DR9. We find \(n_{P}\sim 0.97\) and \(\sigma_{8}\sim 0.73\). Some of the differences between our constraints on \(A_{P}\) and \(\sigma_{8}\) are due to a lever arm effect: for \(n_{s}<1\) the power spectrum amplitude will be increased on larger scales and measuring \(n_{s}\) will induce a correlation between \(n_{s}\) and \(\sigma_{8}\). Ref. [118] constrain the slope and amplitude of the linear matter power spectrum at \(z=3\) using the reduced likelihood from Ref. [89]. They use the full redshift range of the data, but without adding nuisance parameters. Their best-fit parameters approximately correspond to \(n_{P}\sim 0.93\) and \(A_{P}\sim 1.5\times 10^{-9}\), a low slope and low amplitude compared to Planck, and reasonably close to the results from our full redshift chain. As shown in Appendix B, we do not reproduce the \(\sim 1\sigma\) shift in \(n_{s}\) from DR9 to DR14 observed by Ref. [22], and attributed by them to the different catalogues for masking DLAs and BAL. Our DR9 and DR14 constraints are in very good agreement (as expected, since the two datasets have a large fraction of the spectra in common). Our simulations include self-shielding of the gas following Ref. [119] and a realistic DLA model, masked from the flux power spectrum in the same way as the observational data. That the cosmological parameters are not affected by the change in DLA catalogue is reassuring, as it suggests that our analysis is indeed correctly marginalising over the uncertainty from DLAs. Rather than use effective broken power laws for the IGM thermal history as a function of redshift, we have explicit physical models for hydrogen and helium reionization, which include the scale-dependent effects of patchy reionization. As shown in Figure 7, the preferred IGM thermal history shows a temperature peak at \(z=2.8\) and thus \(T_{0}(z)\) cannot be described by a power law broken at \(z=3\) as assumed in Ref. [22] and many earlier works. Figure 5 shows that this does not affect the flux power spectrum on the scales measured by eBOSS. However, DESI data will probe smaller scales, so it is not clear that this will be the case in future. ### Likelihood Modifications We also considered modifications to the likelihood not shown here. First, we increased the observational uncertainty from eBOSS by a uniform factor of two. This increased the posterior uncertainties, but did not significantly resolve the internal tension at low redshift (as this is many \(\sigma\)). We also considered removing the high redshift data, with \(z>3.8\), as is done in some earlier analyses [120]. We found that this made little difference as the statistical errors in the high redshift data are large and so they provide little information. We considered removing the largest and smallest scales with cuts in \(k\). Several of the smallest scale bins are highly correlated, and so removing them either led to very poor constraints or had small effects, depending on the scale cut. Removing the largest bins on the largest scales increased the posterior uncertainty, but did not noticeably shift the posteriors. Thus none of these checks show any evidence for an internal tension between scales. 
We tested whether a second mean flux rescaling slope would improve the fit to the observed Lyman-\(\alpha\) forest flux power (Figure 5), especially at lower redshifts and smaller scales. To do this, we added a second mean flux slope to the MCMC sampled parameters, and assigned each to a specific redshift range (we tested this using a redshift pivot of \(z=3\) and \(z=3.6\)). Posterior constraints on the other parameters from a chain run using the second mean flux slope were unaffected and the fit was not improved.

## 6 Conclusions

We have developed a new likelihood and pipeline for the analysis of Lyman-\(\alpha\) forest data and run MCMC chains using the Cobaya package [83, 84]. Our likelihood is built on a percent-level accurate emulator using the PRIYA simulations [46]. We use the multi-fidelity emulation technique [53] and a set of high resolution simulations to avoid the need to 'splice' together multiple simulations resolving different scales. We model the Lyman-\(\alpha\) forest 1D flux power spectrum from eBOSS [38], and include several simulated and post-processed parameters. Our main cosmological constraints are on the slope and amplitude of the primordial power spectrum on the scales probed by the Lyman-\(\alpha\) forest (\(n_{P}\) and \(A_{P}\)). We augment our Lyman-\(\alpha\) flux power spectrum likelihood with information about the IGM thermal history [30]. With this information, we constrain the start, end, and heating rate for a patchy model of helium reionization (\(\mathrm{z}_{i}^{\mathrm{He\ II}}\), \(\mathrm{z}_{f}^{\mathrm{He\ II}}\), \(\alpha_{q}\)), as well as the mean optical depth and its evolution with redshift (\(\tau_{0}\) and \(d\tau_{0}\)). Our likelihood includes corrections to the Lyman-\(\alpha\) forest flux power spectrum from correlated Si iii absorption and from the presence of Damped Lyman-\(\alpha\) systems. We found that the lowest redshift bins in the eBOSS flux power spectrum, at \(z=2.2\) and \(z=2.4\), produced results which were discrepant with those from higher redshifts. The flux power spectrum from the DESI early data release is also discrepant with eBOSS at these redshifts. It thus seems likely that this discrepancy is due to an as-yet unidentified systematic in the eBOSS pipeline at \(z<2.6\), although an unmodelled astrophysical effect, perhaps connected with AGN feedback, is still possible. When removing the lowest redshift bins from the analysis, we find:

* A primordial power spectrum slope of \(1.0>n_{P}>0.975\), in good agreement with Planck. The upper limit is set by the prior volume of our emulator.
* A primordial power amplitude \(A_{P}=(1.56\pm 0.12)\times 10^{-9}\), which translates to \(A_{s}=(1.67\pm 0.13)\times 10^{-9}\) or \(\sigma_{8}=0.733\pm 0.026\), approximately \(2-3\sigma\) lower than measurements from Planck, but in agreement with some other measurements from weak lensing or galaxy surveys (an illustrative pivot-scale conversion is sketched after this list).
* When the IGM mean temperature data is included, we find an early start to helium reionization, beginning at \(z=3.9-4.0\) and ending around \(z=2.6-2.8\). The heating rate during reionization is at the higher end of our prior volume, with a spectral index of \(\alpha_{q}\sim 1.4\) from the IGM mean temperature.
* Weak constraints on the midpoint of hydrogen reionization and the growth function \(\Omega_{M}h^{2}\). 
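The translation between \(A_{P}\) and \(A_{s}\) quoted above is a pivot-scale rescaling. The short sketch below is illustrative only: it assumes that \(A_{s}\) is quoted at the conventional CMB pivot \(k=0.05\,\mathrm{Mpc}^{-1}\), uses the Lyman-\(\alpha\) pivot \(k_{0}=0.78\,\mathrm{Mpc}^{-1}\) given in the abstract, and takes \(n_{P}\simeq 0.975\) from the quoted range; it is not the pipeline code.

```python
# Illustrative pivot-scale rescaling A_P -> A_s (assumptions noted in the text above).
k_lya, k_cmb = 0.78, 0.05        # pivot scales in 1/Mpc (k_cmb is an assumption)
n_P = 0.975                      # slope, taken from the quoted range
A_P = 1.56e-9                    # amplitude at the Lyman-alpha pivot

A_s = A_P * (k_cmb / k_lya) ** (n_P - 1.0)
print(f"A_s ~ {A_s:.2e}")        # ~1.67e-9, consistent with the value quoted above
```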
In future work, we will combine our Lyman-\(\alpha\) likelihood with other cosmological information, in particular the Planck CMB and Baryon Acoustic Oscillation measurements, with which we can constrain several extensions to the \(\Lambda\)CDM model. While we defer quantitative constraints to later papers, we are able to qualitatively discuss the constraints we expect. We will be able to constrain the running of the spectral index, \(\alpha_{s}=\frac{dn_{s}}{d\ln k}\). Constraints on \(\alpha_{s}\) come from the difference between the spectral index measured by the CMB on large scales and the spectral index measured by the Lyman-\(\alpha\) forest on small scales. The sum of neutrino masses can be constrained via a comparison between the power spectrum amplitude on CMB and Lyman-\(\alpha\) scales [121]. Since our preferred power spectrum amplitude is lower than that of Planck, we will likely have a preference for a non-zero neutrino mass, although the strength of the preference and the value preferred is yet to be determined. We will also incorporate new data sets into our likelihood. We will examine the posterior parameter constraints from the DESI EDR flux power spectrum. The statistical power of DESI EDR is currently weaker than that of eBOSS, but it is able to measure smaller scales (\(k_{F}<0.05\) s/km rather than \(k_{F}<0.02\) s/km for eBOSS). The higher resolution data may also improve the internal consistency of the dataset at \(z<2.6\). Finally, we can perform a joint analysis of eBOSS and the high resolution Lyman-\(\alpha\) forest flux power spectra from Ref. [44]. These smaller scales would directly measure the parameters of helium reionization, without the intermediate step of the mean IGM temperature, allowing an end-to-end validation of the consistency of our modelling of large and small scales.

MAF is supported by a National Science Foundation Graduate Research Fellowship under grant No. DGE-1326120. SB was supported by NSF grant AST-1817256 and by NASA-80NSSC21K1840. MFH is supported by a NASA FINESST grant No. ASTRO20-0022. Computing resources were provided by Frontera LRAC AST21005. The authors acknowledge the Frontera computing project at the Texas Advanced Computing Center (TACC) for providing HPC and storage resources that have contributed to the research results reported within this paper. Frontera is made possible by National Science Foundation award OAC-1818253. URL: [http://www.tacc.utexas.edu](http://www.tacc.utexas.edu). Analysis computations were performed using the resources of the UCR HPCC, which were funded by grants from NSF (MRI-2215705, MRI-1429826) and NIH (1S10OD016290-01A1).

## Appendix A Leave-one-out versus Emulator Error

In this Appendix we evaluate the impact of including the Gaussian Process interpolation error, \(\sigma_{GP}\), on the posterior parameters, as discussed in Equation 3.1. Figure 10 compares the effect of including emulator errors, showing the training samples, GP emulator errors and scale-averaged leave-one-out errors. The leave-one-out error is independent of position in parameter space, whereas the GP error is larger towards the edge of parameter space. The effect of the GP error is less than \(2\sigma\) for all parameters. The largest effect is on the growth function \(\Omega_{M}h^{2}\) and the redshift of hydrogen reionization, which are both poorly constrained by the flux power spectrum data. The parameters of the helium reionization model also shift slightly. 
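One common way to include such an interpolation error in a Gaussian likelihood is to add it in quadrature to the diagonal of the data covariance. The sketch below illustrates that general approach only; it is not the project's actual implementation, and the function and argument names are placeholders.

```python
import numpy as np

# Sketch: Gaussian flux-power likelihood with an optional emulator-error term.
# Assumption: the emulator error sigma_gp enters as an extra diagonal variance.
def log_like(data, model, cov, sigma_gp=None):
    c = cov.copy()
    if sigma_gp is not None:
        c[np.diag_indices_from(c)] += np.asarray(sigma_gp) ** 2
    resid = data - model
    _, logdet = np.linalg.slogdet(c)
    return -0.5 * (resid @ np.linalg.solve(c, resid) + logdet)
```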
Figure 10: Emulator error and leave-one-out errors across parameter space. For eight of the input parameters, the training samples (grey crosses for LF, red circles for HF), GP emulator errors (yellow dots), and scale-averaged leave-one-out errors (red dashed) are shown. Also shown are 1D marginalised posteriors for the default flux power only chains (red) compared to chains run adding the GP emulator error to the likelihood (yellow).

## Appendix B BOSS DR9 Data

In this section we compare posteriors obtained using a previous observational data set, specifically the flux power spectrum from [110], which is based on BOSS DR9 quasar spectra. Figure 11 shows the posteriors for chains run with DR14, with both reduced and full redshift ranges, and chains with DR9, again with the reduced and full redshift ranges. We show chains using only the flux power spectrum likelihood, to emphasise any differences between the datasets. Differences between DR9 and DR14 are small, although, as expected, the posterior parameter ranges for DR9 are moderately larger than for DR14. The largest parameter shift is \(d\tau_{0}\), which shifts by \(<0.5\sigma\). The shift from the omission of the lowest redshift bins is consistent between the two observational data sets. Both exhibit the same internal tension.

Figure 11: Posteriors for chains run using observations from the earlier SDSS data release, DR9, for the reduced redshift range (gold) and full redshift range (blue), compared to our main chains using DR14 with the reduced redshift range (black) and full redshift range (red).

## Appendix C Mean Temperature Only Posteriors

In this section, we present results from chains run using only the mean temperature likelihood. Shown in Figure 12 are four chains, each using one of the observational mean temperatures, derived from different Lyman-\(\alpha\) forest summary statistics: the flux power spectrum, the Doppler width distribution, the curvature statistic, and a wavelet decomposition. Most of the cosmology parameters are entirely unconstrained by the mean temperature and are omitted from the figure. We show \(A_{P}\) for reference.

Figure 12: Posteriors for chains run with only the mean temperature likelihood. Shown are chains using each of the four observational measurements of the mean temperature: using the flux power spectrum (blue), using the Doppler width distribution (BPDF, yellow), using the curvature statistic (red), and using a wavelet decomposition (black). The main results of this work use the flux power derived mean temperatures.

The three He ii reionization parameters are well constrained by the mean temperature history. There is very little difference between the different mean temperature observations, with the BPDF derived temperature differing the most, specifically preferring a later start to He ii reionization, and less heating, corresponding to a lower mean IGM temperature. However, differences are well within \(1\sigma\). The midpoint of H i reionization shows a marginal preference for late values, but is very weakly constrained. The mean temperature provides information on \(z_{HI}\) as the IGM cools from the completion of H i reionization, setting the temperature before the onset of helium reionization. Incorporating measurements of the mean IGM temperature at \(z>3.8\) [e.g. 122] could substantially improve these constraints and we may do so in future work. 
## Appendix D Full Posteriors

Figure 13 presents the full posteriors, including the correlations between the cosmology and astrophysics parameter sets, using the same chains discussed extensively in Section 4. Correlations are discussed in Section 4.3.
We present new cosmological parameter constraints from the eBOSS Lyman-$\alpha$ forest survey. We use a new theoretical model and likelihood based on the PRIYA simulation suite. PRIYA is the first suite to resolve the Lyman-$\alpha$ forest in a $(120\,\mathrm{Mpc}/h)^3$ volume, using a multi-fidelity emulation technique. We use PRIYA to predict Lyman-$\alpha$ forest observables with $\lesssim 1\%$ interpolation error over an $11$ dimensional ($9$ simulated, $2$ in post-processing) parameter space. We identify an internal tension within the flux power spectrum data. Once the discrepant data is removed, we find the primeval scalar spectral index measured at a pivot scale of $k_0 = 0.78$ Mpc$^{-1}$ to be $n_P = 1.009^{+0.027}_{-0
2309.13588
A new class of partial orders
Let $R$ be a unital $*$-ring. For any $a,w,b\in R$, we apply the defined $w$-core inverse to define a new class of partial orders in $R$, called the $w$-core partial order. Suppose $a,b\in R$ are $w$-core invertible. We say that $a$ is below $b$ under the $w$-core partial order, denoted by $a\overset{\tiny{\textcircled{\#}}}\leq_w b$, if $a_w^{\tiny{\textcircled{\#}}} a=a_w^{\tiny{\textcircled{\#}}} b$ and $awa_w^{\tiny{\textcircled{\#}}} =bwa_w^{\tiny{\textcircled{\#}}}$, where $a_w^{\tiny{\textcircled{\#}}}$ denotes the $w$-core inverse of $a$. Characterizations of the $w$-core partial order are given. Also, the relationships with several types of partial orders are considered. In particular, we show that the core partial order coincides with the $a$-core partial order, and the star partial order coincides with the $a^*$-core partial order.
Huihui Zhu, Liyun Wu
2023-09-24T09:08:51
http://arxiv.org/abs/2309.13588v1
# A new class of partial orders

###### Abstract

Let \(R\) be a unital \(*\)-ring. For any \(a,w,b\in R\), we apply the defined \(w\)-core inverse to define a new class of partial orders in \(R\), called the \(w\)-core partial order. Suppose \(a,b\in R\) are \(w\)-core invertible. We say that \(a\) is below \(b\) under the \(w\)-core partial order, denoted by \(a\stackrel{\textcircled{\#}}{\leq}_{w}b\), if \(a_{w}^{\textcircled{\#}}a=a_{w}^{\textcircled{\#}}b\) and \(awa_{w}^{\textcircled{\#}}=bwa_{w}^{\textcircled{\#}}\), where \(a_{w}^{\textcircled{\#}}\) denotes the \(w\)-core inverse of \(a\). Characterizations of the \(w\)-core partial order are given. Also, the relationships with several types of partial orders are considered. In particular, we show that the core partial order coincides with the \(a\)-core partial order, and the star partial order coincides with the \(a^{*}\)-core partial order.

In Section 2, we define this new class of partial orders and establish its characterizations. In Section 3, the relationships between the \(w\)-core partial order and other partial orders are considered. It is proved that the \(w\)-core partial order is between the core partial order and the diamond partial order. We show in Theorem 3.9 that the star partial order and the core partial order are both instances of the \(w\)-core partial order. More precisely, the core partial order coincides with the \(a\)-core partial order, and the star partial order coincides with the \(a^{*}\)-core partial order. Then, the equivalence between \(a\stackrel{\textcircled{\#}}{\leq}_{w}b\) and \(b-a\stackrel{\textcircled{\#}}{\leq}_{w}b\) is derived, under certain conditions. In the end, the reverse order law for the \(w\)-core inverse is given.

Let us now recall several notions of generalized inverses in rings. Let \(R\) be an associative ring with the identity \(1\). An element \(a\in R\) is called (von Neumann) regular if there is some \(x\in R\) such that \(a=axa\). Such an \(x\) is called an inner inverse of \(a\), and is denoted by \(a^{-}\). If in addition \(xax=x\), then \(a\) is called \(\{1,2\}\)-invertible. Such an \(x\) is called a reflexive inverse of \(a\), and is denoted by \(a^{+}\). Further, an element \(a\in R\) is called group invertible if there exists a reflexive inverse \(a^{+}\) of \(a\) which commutes with \(a\). Such an element \(a^{+}\) is called a group inverse of \(a\). It is unique if it exists, and is denoted by \(a^{\#}\). We denote by \(R^{\#}\) the set of all group invertible elements in \(R\).

Given any \(a,d\in R\), \(a\) is invertible along \(d\) [9] if there exists some \(b\in R\) such that \(bad=d=dab\) and \(b\in dR\cap Rd\). Such an element \(b\) is called the inverse of \(a\) along \(d\). It is unique if it exists, and is denoted by \(a^{\|d}\). As usual, by the symbol \(R^{\|d}\) we denote the set of all elements invertible along \(d\). It is known from [10, Theorem 2.2] that \(a\in R^{\|d}\) if and only if \(d\in dadR\cap Rdad\). Mary in [9, Theorem 11] proved that \(a\in R^{\#}\) if and only if \(a\in R^{\|a}\) if and only if \(1\in R^{\|a}\). Moreover, \(a^{\#}=a^{\|a}\) and \(1^{\|a}=aa^{\#}\). More results on the inverse along an element can be found in [10, 16, 17].

Throughout this paper, we assume that \(R\) is a unital \(*\)-ring, that is, a ring with unity \(1\) and an involution \(*\) satisfying \((x^{*})^{*}=x\), \((xy)^{*}=y^{*}x^{*}\) and \((x+y)^{*}=x^{*}+y^{*}\) for all \(x,y\in R\). An element \(a\in R\) is Moore-Penrose invertible [12] if there is some \(x\in R\) such that (i) \(axa=a\), (ii) \(xax=x\), (iii) \((ax)^{*}=ax\), (iv) \((xa)^{*}=xa\). Such an \(x\) is called a Moore-Penrose inverse of \(a\). It is unique if it exists, and is denoted by \(a^{\dagger}\). We denote by \(R^{\dagger}\) the set of all Moore-Penrose invertible elements in \(R\). 
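As a concrete illustration of the inverse along an element (this toy instance is ours, not taken from [9] or [10]), let \(R=M_{2}(\mathbb{C})\), \(d=\begin{bmatrix}1&0\\ 0&0\end{bmatrix}\) and \(a=\begin{bmatrix}2&0\\ 0&3\end{bmatrix}\). Then
\[b=\begin{bmatrix}\frac{1}{2}&0\\ 0&0\end{bmatrix}\quad\text{satisfies}\quad bad=d=dab,\qquad b\in dR\cap Rd,\]
so \(a\) is invertible along \(d\) with \(a^{\|d}=b\).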
It was proved in [9, 17] that \(a\in R^{\dagger}\) if and only if \(a^{\|a^{*}}\) exists if and only if \((a^{*})^{\|a}\) exists. In this case, \(a^{\dagger}=a^{\|a^{*}}=((a^{*})^{\|a})^{*}\). If \(a\) and \(x\) satisfy the equations (i) \(axa=a\) and (iii) \((ax)^{*}=ax\), then \(x\) is called a \(\{1,3\}\)-inverse of \(a\), and is denoted by \(a^{(1,3)}\). If \(a\) and \(x\) satisfy the equations (i) \(axa=a\) and (iv) \((xa)^{*}=xa\), then \(x\) is called a \(\{1,4\}\)-inverse of \(a\), and is denoted by \(a^{(1,4)}\). We denote by \(R^{(1,3)}\) and \(R^{(1,4)}\) the sets of all \(\{1,3\}\)-invertible and \(\{1,4\}\)-invertible elements in \(R\), respectively. It is well known that \(a\in R^{\dagger}\) if and only if \(a\in R^{(1,3)}\cap R^{(1,4)}\) if and only if \(a\in aa^{*}R\cap Ra^{*}a\) if and only if \(a\in aa^{*}aR\) if and only if \(a\in Raa^{*}a\). In this case, \(a^{\dagger}=a^{(1,4)}aa^{(1,3)}\).

The core inverse of complex matrices was first introduced by Baksalary and Trenkler [1]. In 2014, Rakic et al. [13] extended the core inverse of a complex matrix to an element in a unital \(*\)-ring. They showed that the core inverse of \(a\in R\) is the solution of the following five equations: (1) \(axa=a\), (2) \(xax=x\), (3) \(ax^{2}=x\), (4) \(xa^{2}=a\), (5) \((ax)^{*}=ax\). The core inverse of \(a\in R\) is unique if it exists, and is denoted by \(a^{\textcircled{\#}}\). By \(R^{\textcircled{\#}}\) we denote the set of all core invertible elements in \(R\). In [15, Theorem 2.6], Xu et al. showed that \(a\in R^{\textcircled{\#}}\) if and only if \(a\in R^{\#}\cap R^{(1,3)}\). In this case, the core inverse can be expressed as \(a^{\textcircled{\#}}=a^{\#}aa^{(1,3)}\).

Recently, the present authors [18] defined the \(w\)-core inverse in a ring \(R\). Given any \(a,w\in R\), we say that \(a\) is \(w\)-core invertible if there exists some \(x\in R\) such that \(awx^{2}=x\), \(xawa=a\) and \((awx)^{*}=awx\). Such an \(x\) is called a \(w\)-core inverse of \(a\). It is unique if it exists, and is denoted by \(a_{w}^{\textcircled{\#}}\). Also, the \(w\)-core inverse \(x\) of \(a\) satisfies \(awxa=a\) and \(xawx=x\) (see [18, Lemma 2.2]). By \(R_{w}^{\textcircled{\#}}\) we denote the set of all \(w\)-core invertible elements in \(R\). It was proved in [18, Theorem 2.11] that \(a\in R_{w}^{\textcircled{\#}}\) if and only if \(w\in R^{\|a}\) and \(a\in R^{(1,3)}\). Moreover, \(a_{w}^{\textcircled{\#}}=w^{\|a}a^{(1,3)}\).

Several well known partial orders on a ring \(R\) are given below.

(1) The minus partial order: \(a\stackrel{-}{\leq}b\) if and only if there exists an inner inverse \(a^{-}\in R\) of \(a\) such that \(a^{-}a=a^{-}b\) and \(aa^{-}=ba^{-}\).

(2) The plus partial order: \(a\stackrel{+}{\leq}b\) if and only if there exists a reflexive inverse \(a^{+}\in R\) of \(a\) such that \(a^{+}a=a^{+}b\) and \(aa^{+}=ba^{+}\).

(3) The sharp partial order: \(a\stackrel{\#}{\leq}b\) if and only if there exists the group inverse \(a^{\#}\in R\) of \(a\) such that \(a^{\#}a=a^{\#}b\) and \(aa^{\#}=ba^{\#}\).

(4) The star partial order: \(a\stackrel{*}{\leq}b\) if and only if \(a^{*}a=a^{*}b\) and \(aa^{*}=ba^{*}\). In particular, if \(a\in R^{\dagger}\), then \(a\stackrel{*}{\leq}b\) if and only if \(a^{\dagger}a=a^{\dagger}b\) and \(aa^{\dagger}=ba^{\dagger}\). 
(5) The diamond partial order: \(a\stackrel{\diamond}{\leq}b\) if and only if \(aa^{*}a=ab^{*}a\), \(aR\subseteq bR\) and \(Ra\subseteq Rb\).

(6) The core partial order: \(a\stackrel{\textcircled{\#}}{\leq}b\) if and only if there exists the core inverse \(a^{\textcircled{\#}}\in R\) of \(a\) such that \(a^{\textcircled{\#}}a=a^{\textcircled{\#}}b\) and \(aa^{\textcircled{\#}}=ba^{\textcircled{\#}}\).

## 2 The \(w\)-core partial order

In this section, we aim to define a class of partial orders and to give its characterizations in a ring \(R\).

**Definition 2.1**: _Let \(a,b,w\in R\) with \(a,b\in R_{w}^{\textcircled{\#}}\). We say that \(a\) is below \(b\) under the \(w\)-core relation, and write \(a\stackrel{\textcircled{\#}}{\leq}_{w}b\), if \(a_{w}^{\textcircled{\#}}a=a_{w}^{\textcircled{\#}}b\) and \(awa_{w}^{\textcircled{\#}}=bwa_{w}^{\textcircled{\#}}\)._

We next show that the relation \(a\stackrel{\textcircled{\#}}{\leq}_{w}b\) is a partial order. First, an auxiliary lemma is given below. 
**Lemma 2.2**: _Let \(a,b,w\in R\) with \(a,b\in R_{w}^{\textcircled{\#}}\). If \(a_{w}^{\textcircled{\#}}a=a_{w}^{\textcircled{\#}}b\) and \(awa_{w}^{\textcircled{\#}}=bwa_{w}^{\textcircled{\#}}\), then we have_

(i) _\(a_{w}^{\textcircled{\#}}a=b_{w}^{\textcircled{\#}}a\)._

(ii) _\(awa_{w}^{\textcircled{\#}}=awb_{w}^{\textcircled{\#}}\)._

(iii) _\(awb_{w}^{\textcircled{\#}}a=a\)._

(iv) _\(b_{w}^{\textcircled{\#}}a_{w}^{\textcircled{\#}}=(a_{w}^{\textcircled{\#}})^{2}\)._

Proof. Write \(x=a_{w}^{\textcircled{\#}}\) and \(y=b_{w}^{\textcircled{\#}}\). Since \(awx=bwx\), we have \(x=awx^{2}=bwx^{2}\), and hence \(yawx=ybwx=ybw(bwx^{2})=(ybwb)wx^{2}=bwx^{2}=x\).

(i) Since \(awxa=a\), we get \(ya=y(awxa)=(yawx)a=xa\).

(ii) From \(xa=xb\) it follows that \(a=awxa=awxb\). Moreover, \(bwy\cdot awx=bwy\cdot bwx=(bwyb)wx=bwx=awx\); taking involutions and using that \(awx\) and \(bwy\) are Hermitian gives \(awx\cdot bwy=awx\). Therefore \(awy=(awxb)wy=awx\cdot bwy=awx\).

(iii) By (ii), \(awya=awxa=a\).

(iv) \(yx=y(awx\cdot x)=(yawx)x=x^{2}\). \(\square\) 
(1) The reflexivity is clear. (2) Suppose \(a\stackrel{{\text{\tiny\char 6}}}{{\leq_{w}}}b\), i.e., \(a_{w}^{\text{\tiny\char 6}}a=a_{w}^{\text{\tiny\char 6}}b\) and \(awa_{w}^{\text{\tiny\char 6}}=bwa_{w}^{\text{\tiny\char 6}}\). Then \(a=awa_{w}^{\text{\tiny\char 6}}a=bwa_{w}^{\text{\tiny\char 6}}a=bwb_{w}^{\text{ \tiny\char 6}}a\) by Lemma 2.2(i). Suppose in addition that \(b\stackrel{{\text{\tiny\char 6}}}{{\leq_{w}}}a\), i.e., \(b_{w}^{\text{\tiny\char 6}}b=b_{w}^{\text{\tiny\char 6}}a\) and \(bwb_{w}^{\text{\tiny\char 6}}=awb_{w}^{\text{\tiny\char 6}}\). Then \(b=bwb_{w}^{\text{\tiny\char 6}}b=bwb_{w}^{\text{\tiny\char 6}}a\), which together with \(a=bwb_{w}^{\text{\tiny\char 6}}a\) give \(a=b\). (3) Assume \(a\stackrel{{\text{\tiny\char 6}}}{{\leq_{w}}}b\) and \(b\stackrel{{\text{\tiny\char 6}}}{{\leq_{w}}}c\), i.e., \(a_{w}^{\text{\tiny\char 6}}a=a_{w}^{\text{\tiny\char 6}}b\), \(awa_{w}^{\text{\tiny\char 6}}=bwa_{w}^{\text{\tiny\char 6}}\), \(b_{w}^{\text{\tiny\char 6}}b=b_{w}^{\text{\tiny\char 6}}c\) and \(bwb_{w}^{\text{\tiny\char 6}}=cwb_{w}^{\text{\tiny\char 6}}\). Then, by the equality \(awa_{w}^{\text{\tiny\char 6}}=awb_{w}^{\text{\tiny\char 6}}\) of Lemma 2.2(ii), we get \(a_{w}^{\text{\tiny\char 6}}a=a_{w}^{\text{\tiny\char 6}}b=a_{w}^{\text{\tiny \char 6}}bwb_{w}^{\text{\tiny\char 6}}b=a_{w}^{\text{\tiny\char 6}}bwb_{w}^{\text{ \tiny\char 6}}c=a_{w}^{\text{\tiny\char 6}}awb_{w}^{\text{\tiny \char 6}}c=a_{w}^{\text{\tiny\char 6}}awa_{w}^{\text{\tiny\char 6}}c=a_{w}^{\text{\tiny \char 6}}c\). Similarly, we have \[awa_{w}^{\text{\tiny\char 6}} = bwa_{w}^{\text{\tiny\char 6}}=bwb_{w}^{\text{\tiny\char 6}}bwa_{w}^{ \text{\tiny\char 6}}=cwb_{w}^{\text{\tiny\char 6}}bwa_{w}^{\text{\tiny \char 6}}=cwb_{w}^{\text{\tiny\char 6}}awa_{w}^{\text{\tiny\char 6}}=cwa_{w}^{\text{ \tiny\char 6}}awa_{w}^{\text{\tiny\char 6}}\] \[= cwa_{w}^{\text{\tiny\char 6}}.\] The proof is completed. \(\square\) From now on, the partial order \(\stackrel{{\text{\tiny\char 6}}}{{\leq_{w}}}\) is called the \(w\)-core partial order. The \(w\)-core partial order can be seen as an extension of the core partial order [1]. However, the \(w\)-core partial order may not imply the core partial order in general. See Example 2.4 below. Specially, by fixing the element \(w\in R\), we in Theorem 3.9 below show that the \(w\)-core partial order \(\stackrel{{\text{\tiny\char 6}}}{{\leq_{w}}}\) coincides with the classical star partial order \(\stackrel{{*}}{{\leq}}\) and the core partial order \(\stackrel{{\text{\tiny\char 6}}}{{\leq}}\), respectively. **Example 2.4**.: Let \(R=M_{2}(\mathbb{C})\) be the ring of all \(2\times 2\) complex matrices and let the involution \(*\) be the conjugate transpose. Take, for example, \(a=\begin{bmatrix}1&1\\ 0&0\end{bmatrix}\), \(b=\begin{bmatrix}1&1\\ 2&-2\end{bmatrix}\), \(w=\begin{bmatrix}1&0\\ 1&0\end{bmatrix}\in R\), then \(a_{w}^{\text{\tiny\char 6}}=\begin{bmatrix}\frac{1}{2}&0\\ 0&0\end{bmatrix}\) and \(a^{\text{\tiny\char 6}}=\begin{bmatrix}1&0\\ 0&0\end{bmatrix}\). We have \(a_{w}^{\text{\tiny\char 6}}a=a_{w}^{\text{\tiny\char 6}}b=\begin{bmatrix}\frac{1}{2}&\frac{1}{2}\\ 0&0\end{bmatrix}\) and \(awa_{w}^{\text{\tiny\char 6}}=bwa_{w}^{\text{\tiny\char 6}}=\begin{bmatrix}1&0\\ 0&0\end{bmatrix}\). Hence, \(a\stackrel{{\text{\tiny\char 6}}}{{\leq_{w}}}b\). However, \(aa^{\text{\tiny\char 6}}=\begin{bmatrix}1&0\\ 0&0\end{bmatrix}\neq\begin{bmatrix}1&0\\ 2&0\end{bmatrix}=ba^{\text{\tiny\char 6}}\). 
We next give a characterization of the \(w\)-core partial order \(a\stackrel{\textcircled{\#}}{\leq}_{w}b\) in \(R\).

**Proposition 2.5**.: _Let \(a,b,w\in R\) with \(a,b\in R_{w}^{\textcircled{\#}}\). Then the following conditions are equivalent_:

(i) _\(a\stackrel{\textcircled{\#}}{\leq}_{w}b\)._

(ii) _\(a_{w}^{\textcircled{\#}}b=b_{w}^{\textcircled{\#}}a\), \(bwa_{w}^{\textcircled{\#}}=awb_{w}^{\textcircled{\#}}\) and \(awb_{w}^{\textcircled{\#}}a=a\)._

Proof. (i) \(\Rightarrow\) (ii) It follows from Lemma 2.2.

(ii) \(\Rightarrow\) (i) We have \(a_{w}^{\textcircled{\#}}a=a_{w}^{\textcircled{\#}}awb_{w}^{\textcircled{\#}}a=a_{w}^{\textcircled{\#}}awa_{w}^{\textcircled{\#}}b=a_{w}^{\textcircled{\#}}b\), and \(awa_{w}^{\textcircled{\#}}=(awb_{w}^{\textcircled{\#}}a)wa_{w}^{\textcircled{\#}}=(bwa_{w}^{\textcircled{\#}})awa_{w}^{\textcircled{\#}}=bw(a_{w}^{\textcircled{\#}}awa_{w}^{\textcircled{\#}})=bwa_{w}^{\textcircled{\#}}\), as required. \(\square\)

**Proposition 2.6**.: _Let \(a,b,w\in R\) with \(a,b\in R_{w}^{\textcircled{\#}}\). If \(a\stackrel{\textcircled{\#}}{\leq}_{w}b\), then_

(i) _\(a_{w}^{\textcircled{\#}}bwb_{w}^{\textcircled{\#}}=b_{w}^{\textcircled{\#}}bwa_{w}^{\textcircled{\#}}=a_{w}^{\textcircled{\#}}bwa_{w}^{\textcircled{\#}}=a_{w}^{\textcircled{\#}}\)._

(ii) _\(a_{w}^{\textcircled{\#}}awb_{w}^{\textcircled{\#}}=b_{w}^{\textcircled{\#}}awa_{w}^{\textcircled{\#}}=b_{w}^{\textcircled{\#}}awb_{w}^{\textcircled{\#}}=a_{w}^{\textcircled{\#}}\)._

Proof. (i) Given \(a\stackrel{\textcircled{\#}}{\leq}_{w}b\), then \(a_{w}^{\textcircled{\#}}b=a_{w}^{\textcircled{\#}}a=b_{w}^{\textcircled{\#}}a\) and \(bwa_{w}^{\textcircled{\#}}=awa_{w}^{\textcircled{\#}}=awb_{w}^{\textcircled{\#}}\) by Lemma 2.2. So \(a_{w}^{\textcircled{\#}}bwa_{w}^{\textcircled{\#}}=a_{w}^{\textcircled{\#}}awa_{w}^{\textcircled{\#}}=a_{w}^{\textcircled{\#}}\), \(a_{w}^{\textcircled{\#}}bwb_{w}^{\textcircled{\#}}=a_{w}^{\textcircled{\#}}awb_{w}^{\textcircled{\#}}=a_{w}^{\textcircled{\#}}awa_{w}^{\textcircled{\#}}=a_{w}^{\textcircled{\#}}\), and \(b_{w}^{\textcircled{\#}}bwa_{w}^{\textcircled{\#}}=b_{w}^{\textcircled{\#}}awa_{w}^{\textcircled{\#}}=(b_{w}^{\textcircled{\#}}a)wa_{w}^{\textcircled{\#}}=(a_{w}^{\textcircled{\#}}a)wa_{w}^{\textcircled{\#}}=a_{w}^{\textcircled{\#}}\).

(ii) By (i) and Lemma 2.2. \(\square\)

An element \(e\in R\) is idempotent if \(e=e^{2}\). If in addition \(e=e^{*}\), then \(e\) is called a projection. 
It follows from [18] that if \(a\in R_{w}^{\textcircled{\#}}\) then \(a_{w}^{\textcircled{\#}}=w^{\|a}a^{(1,3)}\). So \(a_{w}^{\textcircled{\#}}aw=w^{\|a}a^{(1,3)}aw=w^{\|a}w\), \(wa_{w}^{\textcircled{\#}}a=ww^{\|a}a^{(1,3)}a=ww^{\|a}\) and \(awa_{w}^{\textcircled{\#}}=aww^{\|a}a^{(1,3)}=aa^{(1,3)}\). Clearly, \(a_{w}^{\textcircled{\#}}aw\) and \(wa_{w}^{\textcircled{\#}}a\) are both idempotents, and \(awa_{w}^{\textcircled{\#}}\) is a projection. In [8, Lemma 2.1], Marovt derived several characterizations for the idempotent \(aa^{\#}\) in a ring. Inspired by Marovt's result, we establish several characterizations for the projection \(awa_{w}^{\textcircled{\#}}\) and the idempotents \(a_{w}^{\textcircled{\#}}aw\) and \(wa_{w}^{\textcircled{\#}}a\), respectively.

Given any \(a\in R\), the symbol \(a^{0}=\{x\in R:ax=0\}\) denotes the right annihilator of \(a\). Dually, \({}^{0}a=\{x\in R:xa=0\}\) denotes the left annihilator of \(a\). It should be noted that (see, e.g., [13, Lemmas 2.5 and 2.6]) \(aR=bR\) implies \({}^{0}a={}^{0}b\), and \(Ra=Rb\) implies \(a^{0}=b^{0}\), for any \(a,b\in R\).

**Lemma 2.7**.: _Let \(a,w\in R\) with \(a\in R_{w}^{\textcircled{\#}}\). Then the following conditions are equivalent_:

(i) _\(p=awa_{w}^{\textcircled{\#}}\)._

(ii) _\(aR=pR\) for some projection \(p\in R\)._

(iii) _\({}^{0}a={}^{0}p\) for some projection \(p\in R\)._

(iv) _\(a=pa\) and \({}^{0}a\subseteq{}^{0}p\) for some projection \(p\in R\)._

Proof. (i) \(\Rightarrow\) (ii) Given \(p=awa_{w}^{\textcircled{\#}}\), then \(pR=awa_{w}^{\textcircled{\#}}R\subseteq aR\). Also, \(aR=(awa_{w}^{\textcircled{\#}}a)R=(pa)R\subseteq pR\).

(ii) \(\Rightarrow\) (iii) is a tautology.

(iii) \(\Rightarrow\) (iv) As \((1-p)p=0\) and \({}^{0}p={}^{0}a\), then \((1-p)a=0\) and hence \(a=pa\).

(iv) \(\Rightarrow\) (i) Note that \((1-awa_{w}^{\textcircled{\#}})a=0\), so \(1-awa_{w}^{\textcircled{\#}}\in{}^{0}a\subseteq{}^{0}p\). Then we get \((1-awa_{w}^{\textcircled{\#}})p=0\), i.e., \(p=awa_{w}^{\textcircled{\#}}p\). Taking involutions, and using that \(p\) and \(awa_{w}^{\textcircled{\#}}\) are projections, gives \(p=pawa_{w}^{\textcircled{\#}}\). Since \(a=pa\), we also have \(awa_{w}^{\textcircled{\#}}=pawa_{w}^{\textcircled{\#}}=p\). \(\square\)

**Theorem 2.8**.: _Let \(a,b,w\in R\) with \(a\in R_{w}^{\textcircled{\#}}\). Then the following conditions are equivalent_:

(i) _\(a_{w}^{\textcircled{\#}}a=a_{w}^{\textcircled{\#}}b\)._

(ii) _\(a^{*}a=a^{*}b\)._

(iii) _There exists a projection \(p\in R\) such that \(aR=pR\) and \(pa=pb\)._

(iv) _There exists a projection \(p\in R\) such that \({}^{0}p={}^{0}a\) and \(pa=pb\)._

**Lemma 2.9**.: _Let \(a,w,e\in R\) with \(a\in R_{w}^{\textcircled{\#}}\)._ 
_Then the following conditions are equivalent_:

(i) _\(e=a_{w}^{\textcircled{\#}}aw\)._

(ii) _\(Re=Raw\) and \(ea=a\)._

(iii) _\(e^{0}=(aw)^{0}\) and \(ea=a\)._

(iv) _\((aw)^{0}\subseteq e^{0}\) and \(ea=a\)._

Proof. (i) \(\Rightarrow\) (ii) Since \(e=a_{w}^{\textcircled{\#}}aw\), we have \(ea=a_{w}^{\textcircled{\#}}awa=a\) and \(Re\subseteq Raw=R(awa_{w}^{\textcircled{\#}}a)w=R(aw)(a_{w}^{\textcircled{\#}}aw)\subseteq Re\).

(ii) \(\Rightarrow\) (iii) and (iii) \(\Rightarrow\) (iv) are clear.

(iv) \(\Rightarrow\) (i) Note that \(ea=a\) gives \(ea_{w}^{\textcircled{\#}}=a_{w}^{\textcircled{\#}}\), by \(aw(a_{w}^{\textcircled{\#}})^{2}=a_{w}^{\textcircled{\#}}\). Also, from \(aw(1-a_{w}^{\textcircled{\#}}aw)=0\) and \((aw)^{0}\subseteq e^{0}\), it follows that \(e=ea_{w}^{\textcircled{\#}}aw=a_{w}^{\textcircled{\#}}aw\). \(\square\)

A similar characterization for the idempotent \(wa_{w}^{\textcircled{\#}}a\) can also be obtained, whose proof is omitted.

**Lemma 2.10**: _Let \(a,w,f\in R\) with \(a\in R_{w}^{\textcircled{\#}}\). Then the following conditions are equivalent_:

(i) _\(f=wa_{w}^{\textcircled{\#}}a\)._

(ii) _\(Ra=Rf\) and \(fwa=wa\)._

(iii) _\(a^{0}=f^{0}\) and \(fwa=wa\)._

(iv) _\(a^{0}\subseteq f^{0}\) and \(fwa=wa\)._

The characterization for \(awa_{w}^{\textcircled{\#}}=bwa_{w}^{\textcircled{\#}}\) can also be derived similarly. First, a preliminary lemma is given.

**Lemma 2.11**: _[_10_, Theorem 2.1]_ _Let \(a,w\in R\). Then the following conditions are equivalent_:

(i) _\(w\in R^{\|a}\)._

(ii) _\(a\in awR\) and \(aw\in R^{\#}\)._

(iii) _\(a\in Rwa\) and \(wa\in R^{\#}\)._

_In this case, \(w^{\|a}=a(wa)^{\#}=(aw)^{\#}a\)._

**Theorem 2.12**: _Let \(a,b,w\in R\) with \(a\in R_{w}^{\textcircled{\#}}\)._ 
_Then the following conditions are equivalent_:

(i) _\(awa_{w}^{\textcircled{\#}}=bwa_{w}^{\textcircled{\#}}\)._

(ii) _\(a=bww^{\|a}\)._

(iii) _\(awa=bwa\)._

(iv) _\(a(wa)^{\#}=b(wa)^{\#}\)._

(v) _\(a=bwa_{w}^{\textcircled{\#}}a\)._

(vi) _There exists some \(e\in R\) such that \(Re=Raw\), \(ea=a\) and \(awe=bwe\)._

(vii) _There exists some \(e\in R\) such that \(e^{0}=(aw)^{0}\), \(ea=a\) and \(awe=bwe\)._

(viii) _There exists some \(e\in R\) such that \((aw)^{0}\subseteq e^{0}\), \(ea=a\) and \(awe=bwe\)._

(ix) _There exists some \(f\in R\) such that \(Ra=Rf\), \(af=bf\) and \(fwa=wa\)._

(x) _There exists some \(f\in R\) such that \(a^{0}=f^{0}\), \(af=bf\) and \(fwa=wa\)._

(xi) _There exists some \(f\in R\) such that \(a^{0}\subseteq f^{0}\), \(af=bf\) and \(fwa=wa\)._

Proof. (i) \(\Rightarrow\) (ii) Given \(awa_{w}^{\textcircled{\#}}=bwa_{w}^{\textcircled{\#}}\), then \(a=awa_{w}^{\textcircled{\#}}a=bwa_{w}^{\textcircled{\#}}a=bww^{\|a}a^{(1,3)}a=bww^{\|a}\).

(ii) \(\Rightarrow\) (iii) As \(a=bww^{\|a}\), then \(awa=(bww^{\|a})wa=bw(w^{\|a}wa)=bwa\).

(iii) \(\Rightarrow\) (iv) Note that \(a\in R_{w}^{\textcircled{\#}}\) implies \(wa\in R^{\#}\) by Lemma 2.11 (i) \(\Rightarrow\) (iii). Post-multiplying \(awa=bwa\) by \(((wa)^{\#})^{2}\) gives \(a(wa)^{\#}=b(wa)^{\#}\).

(iv) \(\Rightarrow\) (v) As \(a\in R_{w}^{\textcircled{\#}}\), then \(w^{\|a}=a(wa)^{\#}\) in terms of Lemma 2.11. Note that \(w^{\|a}\in aR\cap Ra\). Then \(a_{w}^{\textcircled{\#}}a=w^{\|a}a^{(1,3)}a=w^{\|a}\). Thus \(a=w^{\|a}wa=a(wa)^{\#}wa=b(wa)^{\#}wa=bwa(wa)^{\#}=bww^{\|a}=bwa_{w}^{\textcircled{\#}}a\).

(v) \(\Rightarrow\) (vi) Write \(e=a_{w}^{\textcircled{\#}}aw\); then \(Re\subseteq Raw=R(awa_{w}^{\textcircled{\#}}a)w=R(aw)(a_{w}^{\textcircled{\#}}aw)\subseteq Re\) and \(ea=a_{w}^{\textcircled{\#}}awa=a\). Also, \(awe=awa_{w}^{\textcircled{\#}}aw=aw=(bwa_{w}^{\textcircled{\#}}a)w=bwe\).

(vi) \(\Rightarrow\) (vii) and (vii) \(\Rightarrow\) (viii) are obvious.

(viii) \(\Rightarrow\) (i) It follows from Lemma 2.9 that \(e=a_{w}^{\textcircled{\#}}aw\), and hence \(awa_{w}^{\textcircled{\#}}=(awa_{w}^{\textcircled{\#}}aw)a_{w}^{\textcircled{\#}}=(awe)a_{w}^{\textcircled{\#}}=(bwe)a_{w}^{\textcircled{\#}}=bwa_{w}^{\textcircled{\#}}awa_{w}^{\textcircled{\#}}=bwa_{w}^{\textcircled{\#}}\).

(i) \(\Rightarrow\) (ix) Write \(f=wa_{w}^{\textcircled{\#}}a\); then \(Rf\subseteq Ra=R(af)\subseteq Rf\), \(af=awa_{w}^{\textcircled{\#}}a=bwa_{w}^{\textcircled{\#}}a=bf\) and \(fwa=(wa_{w}^{\textcircled{\#}}a)wa=w(a_{w}^{\textcircled{\#}}awa)=wa\).

(ix) \(\Rightarrow\) (x) and (x) \(\Rightarrow\) (xi) are clear. 
(xi) \(\Rightarrow\) (i) Given \(a^{0}\subseteq f^{0}\) and \(fwa=wa\), then, by Lemma 2.10, \(f=wa_{w}^{\textcircled{\#}}a\) and \(fwa_{w}^{\textcircled{\#}}=fw(aw(a_{w}^{\textcircled{\#}})^{2})=(fwa)w(a_{w}^{\textcircled{\#}})^{2}=(wa)w(a_{w}^{\textcircled{\#}})^{2}=w(aw(a_{w}^{\textcircled{\#}})^{2})=wa_{w}^{\textcircled{\#}}\). Thus \(awa_{w}^{\textcircled{\#}}=a(fwa_{w}^{\textcircled{\#}})=(af)wa_{w}^{\textcircled{\#}}=(bf)wa_{w}^{\textcircled{\#}}=b(fwa_{w}^{\textcircled{\#}})=bwa_{w}^{\textcircled{\#}}\). \(\square\)

Combining Theorems 2.8 and 2.12, we get several characterizations for the \(w\)-core partial order \(a\stackrel{\textcircled{\#}}{\leq}_{w}b\).

**Theorem 2.13**: _Let \(a,b,w\in R\) with \(a\in R_{w}^{\textcircled{\#}}\). Then the following conditions are equivalent_:

(i) _\(a\stackrel{\textcircled{\#}}{\leq}_{w}b\)._

(ii) _\(w^{\|a}=w^{\|a}a^{(1,3)}b\) and \(a=bww^{\|a}\)._

(iii) _\(a^{*}a=a^{*}b\) and \(bwa=awa\)._

(iv) _\(a^{*}a=a^{*}b\) and \(b(wa)^{\#}=a(wa)^{\#}\)._

(v) _\(a=awa_{w}^{\textcircled{\#}}b=bwa_{w}^{\textcircled{\#}}a\)._

(vi) _There exist a projection \(p\in R\) and an element \(e\in R\) such that \(aR=pR\), \(pa=pb\), \(Re=Raw\), \(ea=a\) and \(awe=bwe\)._

(vii) _There exist a projection \(p\in R\) and an element \(e\in R\) such that \({}^{0}p={}^{0}a\), \(pa=pb\), \(e^{0}=(aw)^{0}\), \(ea=a\) and \(awe=bwe\)._

(viii) _There exist a projection \(p\in R\) and an element \(f\in R\) such that \(aR=pR\), \(pa=pb\), \(Ra=Rf\), \(af=bf\) and \(fwa=wa\)._

(ix) _There exist a projection \(p\in R\) and an element \(f\in R\) such that \({}^{0}p={}^{0}a\), \(pa=pb\), \(a^{0}=f^{0}\), \(af=bf\) and \(fwa=wa\)._

(x) _There exist a projection \(p\in R\) and an element \(f\in R\) such that \({}^{0}p={}^{0}a\), \(pa=pb\), \(a^{0}\subseteq f^{0}\), \(af=bf\) and \(fwa=wa\)._

(xi) _There exist a projection \(p\in R\) and an element \(e\in R\) such that \(pb=a=ea\) and \(bwe=aw\)._

(xii) _There exist a projection \(p\in R\) and an element \(e\in R\) such that \(pb=a=ea\) and \(bwe=awe\)._

Proof. Conditions (i)-(x) are equivalent by Theorems 2.8 and 2.12. It next suffices to prove (i) \(\Leftrightarrow\) (xi) \(\Leftrightarrow\) (xii).

(i) \(\Rightarrow\) (xi) Set \(p=awa_{w}^{\textcircled{\#}}\) and \(e=a_{w}^{\textcircled{\#}}aw\); then \(p=p^{2}=p^{*}\). It is obvious that \(ea=a_{w}^{\textcircled{\#}}awa=a=awa_{w}^{\textcircled{\#}}a=awa_{w}^{\textcircled{\#}}b=pb\). Also, \(bwe=bwa_{w}^{\textcircled{\#}}aw=awa_{w}^{\textcircled{\#}}aw=aw\).

(xi) \(\Rightarrow\) (xii) is obvious.

(xii) \(\Rightarrow\) (i) It follows from \(a=pb\) that \(pa=ppb=pb=a\), and hence \(a^{*}b=(pa)^{*}b=a^{*}pb=a^{*}a\). Also, \(bwa=bw(ea)=(bwe)a=(awe)a=awa\). So \(a\stackrel{\textcircled{\#}}{\leq}_{w}b\) by (iii) \(\Rightarrow\) (i). \(\square\)

Note the fact that \(1^{\|a}=aa^{\#}\) provided that \(a\in R^{\#}\). Set \(w=1\) in Theorem 2.13; then condition (ii) reduces to \(aa^{\#}=aa^{\#}a^{(1,3)}b\) and \(a=ba^{\#}a\), which are equivalent to \(a=aa^{(1,3)}b=ba^{\#}a\). 
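Indeed, specializing the \(w\)-core inverse itself at \(w=1\) recovers the core inverse; using the facts \(1^{\|a}=aa^{\#}\), \(a_{w}^{\textcircled{\#}}=w^{\|a}a^{(1,3)}\) and \(a^{\textcircled{\#}}=a^{\#}aa^{(1,3)}\) recalled earlier, one has the short worked chain
\[a_{1}^{\textcircled{\#}}=1^{\|a}a^{(1,3)}=aa^{\#}a^{(1,3)}=a^{\#}aa^{(1,3)}=a^{\textcircled{\#}},\]
since \(a\) and \(a^{\#}\) commute.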
We hence get several characterizations for the core partial order, some of which were given in [14, Theorems 2.4 and 2.6].

**Corollary 2.14**.: _Let \(a,b\in R\) with \(a\in R^{\textcircled{\#}}\). Then the following conditions are equivalent_:

(i) _\(a\stackrel{\textcircled{\#}}{\leq}b\)._

(ii) _\(a=aa^{(1,3)}b=ba^{\#}a\)._

(iii) _\(a^{*}a=a^{*}b\) and \(ba=a^{2}\)._

(iv) _\(a^{*}a=a^{*}b\) and \(ba^{\#}=aa^{\#}\)._

(v) _\(a=aa^{\textcircled{\#}}b=ba^{\textcircled{\#}}a\)._

(vi) _There exist a projection \(p\in R\) and an element \(e\in R\) such that \(aR=pR\), \(pa=pb\), \(Re=Ra\), \(ea=a\) and \(ae=be\)._

(vii) _There exist a projection \(p\in R\) and an element \(e\in R\) such that \({}^{0}p={}^{0}a\), \(pa=pb\), \(e^{0}=a^{0}\), \(ea=a\) and \(ae=be\)._

(viii) _There exist a projection \(p\in R\) and an element \(f\in R\) such that \(aR=pR\), \(pa=pb\), \(Ra=Rf\), \(af=bf\) and \(fa=a\)._

(ix) _There exist a projection \(p\in R\) and an element \(f\in R\) such that \({}^{0}p={}^{0}a\), \(pa=pb\), \(a^{0}=f^{0}\), \(af=bf\) and \(fa=a\)._

(x) _There exist a projection \(p\in R\) and an element \(f\in R\) such that \({}^{0}p={}^{0}a\), \(pa=pb\), \(a^{0}\subseteq f^{0}\), \(af=bf\) and \(fa=a\)._

(xi) _There exist a projection \(p\in R\) and an element \(e\in R\) such that \(pb=a=ea\) and \(be=a\)._

(xii) _There exist a projection \(p\in R\) and an element \(e\in R\) such that \(pb=a=ea\) and \(be=ae\)._

## 3 Connections with other partial orders

For any \(a,b\in R\), recall that the left star partial order \(a\;{*\leq}\;b\) is defined by \(a^{*}a=a^{*}b\) and \(aR\subseteq bR\). The right sharp partial order \(a\leq_{\#}b\) is defined by \(aa^{\#}=ba^{\#}\) and \(Ra\subseteq Rb\), for \(a\in R^{\#}\). It is known from [1] that \(A\stackrel{\textcircled{\#}}{\leq}B\) if and only if \(A\;{*\leq}\;B\) and \(A\leq_{\#}B\), where \(A,B\) are \(n\times n\) complex matrices of index \(1\). This characterization of the core partial order indeed holds in a \(*\)-ring, namely, for any \(a,b\in R\) with \(a\in R^{\textcircled{\#}}\), \(a\stackrel{\textcircled{\#}}{\leq}b\) if and only if \(a\;{*\leq}\;b\) and \(a\leq_{\#}b\). As stated in Section 1, \(a\in R_{w}^{\textcircled{\#}}\) if and only if \(w\in R^{\|a}\) and \(a\in R^{(1,3)}\). In terms of Lemma 2.11, one knows that \(a\in R_{w}^{\textcircled{\#}}\) implies \(wa\in R^{\#}\). It is natural to ask whether the \(w\)-core partial order also has a similar characterization, i.e., if \(a\in R_{w}^{\textcircled{\#}}\), whether \(a\stackrel{\textcircled{\#}}{\leq}_{w}b\) is equivalent to \(a\;{*\leq}\;b\) and \(wa\leq_{\#}wb\). The following result gives the implication \(a\stackrel{\textcircled{\#}}{\leq}_{w}b\Rightarrow a\;{*\leq}\;b\) and \(wa\leq_{\#}wb\). For the converse part, Example 3.2 provides a counterexample illustrating that it is not true in general.

**Proposition 3.1**.: _For any \(a,b,w\in R\) and \(a\in R_{w}^{\textcircled{\#}}\), if \(a\stackrel{\textcircled{\#}}{\leq}_{w}b\), then \(a\;{*\leq}\;b\) and \(wa\leq_{\#}wb\)._

Proof. 
Given \(a\stackrel{\textcircled{\#}}{\leq}_{w}b\), i.e., \(a_{w}^{\textcircled{\#}}a=a_{w}^{\textcircled{\#}}b\) and \(awa_{w}^{\textcircled{\#}}=bwa_{w}^{\textcircled{\#}}\), then \(a=awa_{w}^{\textcircled{\#}}a=bwa_{w}^{\textcircled{\#}}a\) and hence \(aR\subseteq bR\). It follows from Theorem 2.13 (i) \(\Rightarrow\) (iii) that \(a^{*}a=a^{*}b\). So \(a\;{*\leq}\;b\). From \(a_{w}^{\textcircled{\#}}a=a_{w}^{\textcircled{\#}}b\), we have \(R(wa)=R(wawa_{w}^{\textcircled{\#}}a)=R(wawa_{w}^{\textcircled{\#}}b)=R(wawa_{w}^{\textcircled{\#}}(b_{w}^{\textcircled{\#}}bwb))\subseteq R(wb)\). Note also that \(awa_{w}^{\textcircled{\#}}=bwa_{w}^{\textcircled{\#}}\). Then, by Theorem 2.13, \(a(wa)^{\#}=b(wa)^{\#}\) and consequently \(wa(wa)^{\#}=wb(wa)^{\#}\). So \(wa\leq_{\#}wb\). \(\square\)

**Example 3.2**.: Let \(R\) and the involution be the same as that of Example 2.4. Set \(a=\begin{bmatrix}1&1\\ 0&0\end{bmatrix}\), \(w=\begin{bmatrix}1&0\\ 0&0\end{bmatrix}\), \(b=\begin{bmatrix}1&1\\ 2&0\end{bmatrix}\in R\); then \(a_{w}^{\textcircled{\#}}=\begin{bmatrix}1&0\\ 0&0\end{bmatrix}\). By a direct check, \(a^{*}a=\begin{bmatrix}1&0\\ 1&0\end{bmatrix}\begin{bmatrix}1&1\\ 0&0\end{bmatrix}=\begin{bmatrix}1&1\\ 1&1\end{bmatrix}=\begin{bmatrix}1&0\\ 1&0\end{bmatrix}\begin{bmatrix}1&1\\ 2&0\end{bmatrix}=a^{*}b\) and \(aR\subseteq bR\). So \(a\;{*\leq}\;b\). Note that \(wa=wb=\begin{bmatrix}1&1\\ 0&0\end{bmatrix}\) is idempotent. Then \(Rwa\subseteq Rwb\), \((wa)^{\#}=(wb)^{\#}=\begin{bmatrix}1&1\\ 0&0\end{bmatrix}\) and \(wa(wa)^{\#}=\begin{bmatrix}1&1\\ 0&0\end{bmatrix}=wb(wa)^{\#}\). So \(wa\leq_{\#}wb\). However, \(awa_{w}^{\textcircled{\#}}=\begin{bmatrix}1&0\\ 0&0\end{bmatrix}\neq\begin{bmatrix}1&0\\ 2&0\end{bmatrix}=bwa_{w}^{\textcircled{\#}}\). So \(a\stackrel{\textcircled{\#}}{\leq}_{w}b\) does not hold.

It is of interest to consider under what conditions \(a\;{*\leq}\;b\) and \(wa\leq_{\#}wb\) imply \(a\stackrel{\textcircled{\#}}{\leq}_{w}b\), provided that \(a\in R_{w}^{\textcircled{\#}}\). The result below shows that the implication is true under the hypothesis \(w\in U(R)\), where \(U(R)\) denotes the group of all units in \(R\).

**Theorem 3.3**.: _For any \(a,b,w\in R\) and \(a\in R_{w}^{\textcircled{\#}}\), if \(w\in U(R)\), then \(a\stackrel{\textcircled{\#}}{\leq}_{w}b\) if and only if \(a\;{*\leq}\;b\) and \(wa\leq_{\#}wb\)._

Proof. 
From the proof of Theorem 3.3, one knows that if \(w\in R\) is left invertible, then we also have the equivalence that \(a\stackrel{\tiny\textcircled{\#}}{\leq_{w}}b\) if and only if \(a\,*\leq b\) and \(wa\leq_{\#}wb\). With the next theorem we present further relationships between the \(w\)-core partial order and the left star partial order in a \(*\)-ring \(R\). **Theorem 3.4**.: _Let \(a,b,w\in R\) with \(a,b\in R^{\tiny\textcircled{\#}}_{w}\). Then the following conditions are equivalent_:__ (i)_\(a\stackrel{\tiny\textcircled{\#}}{\leq_{w}}b\)._ (ii)_\(a\,*\leq b\) and \(a=bwa^{\tiny\textcircled{\#}}_{w}b\)._ (iii)_\(a\,*\leq b\) and \(a^{\tiny\textcircled{\#}}_{w}=b^{\tiny\textcircled{\#}}_{w}awa^{\tiny\textcircled{\#}}_{w}\)._ (iv)_\(a\,*\leq b\) and \(a^{\tiny\textcircled{\#}}_{w}=b^{\tiny\textcircled{\#}}_{w}awb^{\tiny\textcircled{\#}}_{w}\)._ Proof.: (i) \(\Rightarrow\) (ii) follows from Theorem 2.13 and Proposition 3.1. (i) \(\Rightarrow\) (iii) and (i) \(\Rightarrow\) (iv) by Propositions 2.6 and 3.1. (ii) \(\Rightarrow\) (i) Given \(a\,*\leq b\), then \(a^{*}a=a^{*}b\) and hence \(a^{\tiny\textcircled{\#}}_{w}a=a^{\tiny\textcircled{\#}}_{w}b\) by Theorem 2.8. Also, one has \(awa^{\tiny\textcircled{\#}}_{w}=(bwa^{\tiny\textcircled{\#}}_{w}b)wa^{\tiny\textcircled{\#}}_{w}=bw(a^{\tiny\textcircled{\#}}_{w}a)wa^{\tiny\textcircled{\#}}_{w}=bw(a^{\tiny\textcircled{\#}}_{w}awa^{\tiny\textcircled{\#}}_{w})=bwa^{\tiny\textcircled{\#}}_{w}\). So, \(a\stackrel{\tiny\textcircled{\#}}{\leq_{w}}b\). (iii) \(\Rightarrow\) (i) As \(a\,*\leq b\), i.e., \(a^{*}a=a^{*}b\) and \(aR\subseteq bR\), there exists some \(x\in R\) such that \(a=bx=bwb^{\tiny\textcircled{\#}}_{w}bx=bwb^{\tiny\textcircled{\#}}_{w}a\). Hence, \(awa^{\tiny\textcircled{\#}}_{w}=(bwb^{\tiny\textcircled{\#}}_{w}a)wa^{\tiny\textcircled{\#}}_{w}=bw(b^{\tiny\textcircled{\#}}_{w}awa^{\tiny\textcircled{\#}}_{w})=bwa^{\tiny\textcircled{\#}}_{w}\). Since \(a^{*}a=a^{*}b\), \(a^{\tiny\textcircled{\#}}_{w}a=a^{\tiny\textcircled{\#}}_{w}b\) by Theorem 2.8. (iv) \(\Rightarrow\) (i) By (ii) \(\Rightarrow\) (i), \(a^{\tiny\textcircled{\#}}_{w}a=a^{\tiny\textcircled{\#}}_{w}b\). Next, we only need to show \(awa^{\tiny\textcircled{\#}}_{w}=bwa^{\tiny\textcircled{\#}}_{w}\). Note that \(aR\subseteq bR\) implies \(a=bwb^{\tiny\textcircled{\#}}_{w}a\). So, \(bwa^{\tiny\textcircled{\#}}_{w}=bw(b^{\tiny\textcircled{\#}}_{w}awb^{\tiny\textcircled{\#}}_{w})=(bwb^{\tiny\textcircled{\#}}_{w}a)wb^{\tiny\textcircled{\#}}_{w}=awb^{\tiny\textcircled{\#}}_{w}\). Hence, we have \[\begin{aligned} awa^{\tiny\textcircled{\#}}_{w}&=aw(b^{\tiny\textcircled{\#}}_{w}awb^{\tiny\textcircled{\#}}_{w})=awb^{\tiny\textcircled{\#}}_{w}aw(b^{\tiny\textcircled{\#}}_{w}bwb^{\tiny\textcircled{\#}}_{w})\\ &=aw(b^{\tiny\textcircled{\#}}_{w}awb^{\tiny\textcircled{\#}}_{w})bwb^{\tiny\textcircled{\#}}_{w}=awa^{\tiny\textcircled{\#}}_{w}bwb^{\tiny\textcircled{\#}}_{w}\\ &=aw(a^{\tiny\textcircled{\#}}_{w}a)wb^{\tiny\textcircled{\#}}_{w}=awb^{\tiny\textcircled{\#}}_{w}\\ &=bwa^{\tiny\textcircled{\#}}_{w}.\end{aligned}\] This completes the proof. Similarly, the connection between the \(w\)-core partial order and the right sharp partial order can be given. The proof is left to the reader. **Theorem 3.5**.: _Let \(a,b,w\in R\) with \(a,b\in R^{\tiny\textcircled{\#}}_{w}\).
If \(w\in U(R)\), then the following conditions are equivalent\(:\)_ (i)_\(a\overset{\textcircled{\char 60}}{\leq_{w}}b\)._ (ii)_\(wa\leq_{\#}wb\) and \(a=bwa^{\textcircled{\char 60}}_{w}b\)._ (iii)_\(wa\leq_{\#}wb\) and \(a^{\textcircled{\char 60}}_{w}=a^{\textcircled{\char 60}}_{w}awb^{\textcircled{ \char 60}}_{w}\)._ As is noted in Example 2.4, the \(w\)-core partial order may not imply the core partial order. We next consider under what conditions the \(w\)-core partial order gives a core partial order. Herein, a lemma is presented, which was indeed given in [18, Theorem 2.25]. For completeness, we give its proof. **Lemma 3.6**.: _Let \(a,w\in R\). Then \(a\in R^{\textcircled{\char 60}}_{w}\) if and only if \(aR=awR\) and \(aw\in R^{\textcircled{\char 60}}\). In this case, \(a^{\textcircled{\char 60}}_{w}=(aw)^{\textcircled{\char 60}}\)._ Proof.: Suppose \(a\in R^{\textcircled{\char 60}}_{w}\). Then \(w\in R^{\parallel a}\) and hence \(a\in awaR\subseteq awR\). Also, there exists some \(x\in R\) such that \(xawa=a\), \(awx^{2}=x\) and \((awx)^{*}=awx\), which guarantee \(xawaw=aw\), \(awx^{2}=x\) and \((awx)^{*}=awx\). So \(aw\in R^{\textcircled{\char 60}}\). Conversely, as \(aw\in R^{\textcircled{\char 60}}\), then there exists some \(y\in R\) such that \(awyaw=aw\), \(yawy=y\), \(y(aw)^{2}=aw\), \(awy^{2}=y\) and \(awy=(awy)^{*}\). Since \(aR=awR\), \(a=awt\) for some \(t\in R\) and hence \(a=awt=(awyaw)t=awya=(awy)^{*}a=(wy)^{*}a^{*}a\in Ra^{*}a\), i.e., \(a\in R^{(1,3)}\). Note the fact that \((aw)^{\textcircled{\char 60}}=(aw)^{\#}aw(aw)^{(1,3)}=w^{\parallel a}w(aw)^{(1,3)}\). To show that \(w^{\parallel a}w(aw)^{(1,3)}\) is the \(w\)-core inverse of \(a\), it suffices to prove that \(z=w(aw)^{(1,3)}\) is a \(\{1,3\}\)-inverse of \(a\). Since \(a\in awR\), it follows that \(a=awt\) for some \(t\in R\) and \(a=aw(aw)^{(1,3)}awt=aw(aw)^{(1,3)}a=aza\). Also, \(az=aw(aw)^{(1,3)}=(az)^{*}\), as required. **Theorem 3.7**.: _For any \(a,b,w\in R\) and \(a\in R^{\textcircled{\char 60}}_{w}\), if \(w\in U(R)\), then the following conditions are equivalent\(:\)_ (i)_\(a\overset{\textcircled{\char 60}}{\leq_{w}}b\)._ (ii)_\(aw\overset{\textcircled{\char 60}}{\leq}bw\)._ Proof. (i) \(\Rightarrow\) (ii) Suppose \(a\stackrel{{\mbox{\tiny\char 62}}}{{\leq_{w}}}b\), i.e., \(a_{w}^{\mbox{\tiny\char 62}}a=a_{w}^{\mbox{\tiny\char 62}}b\) and \(awa_{w}^{\mbox{\tiny\char 62}}=bwa_{w}^{\mbox{\tiny\char 62}}\). As \(a\in R_{w}^{\mbox{\tiny\char 62}}\), then \(aw\in R^{\mbox{\tiny\char 62}}\) and \(a_{w}^{\mbox{\tiny\char 62}}=(aw)^{\mbox{\tiny\char 62}}\) by Lemma 3.6. So, \(aw(aw)^{\mbox{\tiny\char 62}}=bw(aw)^{\mbox{\tiny\char 62}}\), \((aw)^{\mbox{\tiny\char 62}}a=(aw)^{\mbox{\tiny\char 62}}b\) and hence \((aw)^{\mbox{\tiny\char 62}}aw=(aw)^{\mbox{\tiny\char 62}}bw\). So, \(aw\stackrel{{\mbox{\tiny\char 62}}}{{\leq}}bw\). (ii) \(\Rightarrow\) (i) As \(aw\stackrel{{\mbox{\tiny\char 62}}}{{\leq}}bw\), then \((aw)^{\mbox{\tiny\char 62}}aw=(aw)^{\mbox{\tiny\char 62}}bw\). Pre-multiplying \((aw)^{\mbox{\tiny\char 62}}aw=(aw)^{\mbox{\tiny\char 62}}bw\) by \(aw\) gives \(aw=aw(aw)^{\mbox{\tiny\char 62}}bw\). Post-multiplying \(aw=aw(aw)^{\mbox{\tiny\char 62}}bw\) by \(w^{\|a}\) implies \(a=aww^{\|a}=aw(aw)^{\mbox{\tiny\char 62}}bww^{\|a}\in awR\). So, \(a_{w}^{\mbox{\tiny\char 62}}=(aw)^{\mbox{\tiny\char 62}}\) by Lemma 3.6. 
Thus, \(aw(aw)^{\tiny\textcircled{\#}}=bw(aw)^{\tiny\textcircled{\#}}\) guarantees that \(awa_{w}^{\tiny\textcircled{\#}}=bwa_{w}^{\tiny\textcircled{\#}}\), and \((aw)^{\tiny\textcircled{\#}}aw=(aw)^{\tiny\textcircled{\#}}bw\) gives \(a_{w}^{\tiny\textcircled{\#}}aw=a_{w}^{\tiny\textcircled{\#}}bw\). Since \(w\in U(R)\), \(a_{w}^{\tiny\textcircled{\#}}a=a_{w}^{\tiny\textcircled{\#}}b\), as required. \(\Box\) Combining with Theorems 3.3 and 3.7, we have the following result. **Theorem 3.8**.: _For any \(a,b,w\in R\) and \(a\in R_{w}^{\tiny\textcircled{\#}}\), if \(w\in U(R)\), then the following conditions are equivalent:_ (i)_\(a\stackrel{\tiny\textcircled{\#}}{\leq_{w}}b\)._ (ii)_\(aw\stackrel{\tiny\textcircled{\#}}{\leq}bw\)._ (iii)_\(a\,*\leq b\) and \(wa\leq_{\#}wb\)._ We next show that the star partial order \(\stackrel{*}{\leq}\) and the core partial order \(\stackrel{\tiny\textcircled{\#}}{\leq}\) are instances of the \(w\)-core partial order \(\stackrel{\tiny\textcircled{\#}}{\leq_{w}}\). **Theorem 3.9**.: _Let \(a,b\in R\). Then we have_ (i) _If \(a\in R^{\tiny\textcircled{\#}}\), then \(a\stackrel{\tiny\textcircled{\#}}{\leq}b\) if and only if \(a\stackrel{\tiny\textcircled{\#}}{\leq_{a}}b\) if and only if \(a\stackrel{\tiny\textcircled{\#}}{\leq_{1}}b\)._ (ii) _If \(a\in R^{\dagger}\), then \(a\stackrel{*}{\leq}b\) if and only if \(a\stackrel{\tiny\textcircled{\#}}{\leq_{a^{*}}}b\)._ Proof. (i) It is known that \(a\in R_{a}^{\tiny\textcircled{\#}}\) if and only if \(a\in R^{\tiny\textcircled{\#}}\) if and only if \(a\in R_{1}^{\tiny\textcircled{\#}}\). We first show that \(a\stackrel{\tiny\textcircled{\#}}{\leq}b\) if and only if \(a\stackrel{\tiny\textcircled{\#}}{\leq_{a}}b\). Suppose \(a\stackrel{\tiny\textcircled{\#}}{\leq}b\), i.e., \(a^{\tiny\textcircled{\#}}a=a^{\tiny\textcircled{\#}}b\) and \(aa^{\tiny\textcircled{\#}}=ba^{\tiny\textcircled{\#}}\). Then \(a^{\#}a=a^{\#}aa^{(1,3)}b\) and \(aa^{(1,3)}=ba^{\#}aa^{(1,3)}\), and consequently \(a_{a}^{\tiny\textcircled{\#}}a=a^{\#}a^{(1,3)}a=(a^{\#})^{2}aa^{(1,3)}a=a^{\#}a^{\#}a=a^{\#}a^{\#}aa^{(1,3)}b=a^{\#}a^{\tiny\textcircled{\#}}b=a_{a}^{\tiny\textcircled{\#}}b\). Similarly, \(a^{2}a_{a}^{\tiny\textcircled{\#}}=a^{2}a^{\#}a^{(1,3)}=aa^{(1,3)}=ba^{\#}aa^{(1,3)}=baa^{\#}a^{(1,3)}=baa_{a}^{\tiny\textcircled{\#}}\). Conversely, if \(a\stackrel{\tiny\textcircled{\#}}{\leq_{a}}b\), then \(a_{a}^{\tiny\textcircled{\#}}a=a_{a}^{\tiny\textcircled{\#}}b\) and \(a^{2}a_{a}^{\tiny\textcircled{\#}}=baa_{a}^{\tiny\textcircled{\#}}\), that is, \(a^{\#}a^{(1,3)}a=a^{\#}a^{(1,3)}b\) and \(a^{2}a^{\#}a^{(1,3)}=baa^{\#}a^{(1,3)}\). So, \(a^{\tiny\textcircled{\#}}a=a^{\#}aa^{(1,3)}a=aa^{\#}a^{(1,3)}a=aa^{\#}a^{(1,3)}b=a^{\tiny\textcircled{\#}}b\) and \(aa^{\tiny\textcircled{\#}}=aa^{(1,3)}=a^{2}a^{\#}a^{(1,3)}=baa^{\#}a^{(1,3)}=ba^{\tiny\textcircled{\#}}\). So, \(a\stackrel{\tiny\textcircled{\#}}{\leq}b\). Note that \(a^{\tiny\textcircled{\#}}=a_{1}^{\tiny\textcircled{\#}}\). Then \(a\stackrel{\tiny\textcircled{\#}}{\leq}b\) if and only if \(a\stackrel{\tiny\textcircled{\#}}{\leq_{1}}b\). So, the result follows. (ii) One knows that \(a\in R^{\dagger}\) if and only if \((a^{*})^{\parallel a}\) exists if and only if \(a\in R_{a^{*}}^{\tiny\textcircled{\#}}\). Moreover, \(a_{a^{*}}^{\tiny\textcircled{\#}}=(a^{*})^{\parallel a}a^{(1,3)}=(a^{\dagger})^{*}a^{(1,3)}=(a^{\dagger})^{*}a^{\dagger}\).
It is also known that, for any \(a\in R^{\dagger}\), \(a\stackrel{*}{\leq}b\) if and only if \(a^{\dagger}a=a^{\dagger}b\) and \(aa^{\dagger}=ba^{\dagger}\). We thus only need to show that \(a\stackrel{\tiny\textcircled{\#}}{\leq_{a^{*}}}b\) if and only if \(a^{\dagger}a=a^{\dagger}b\) and \(aa^{\dagger}=ba^{\dagger}\). Suppose \(a^{\dagger}a=a^{\dagger}b\) and \(aa^{\dagger}=ba^{\dagger}\). We hence have \(a_{a^{*}}^{\tiny\textcircled{\#}}a=(a^{\dagger})^{*}a^{\dagger}a=(a^{\dagger})^{*}a^{\dagger}b=a_{a^{*}}^{\tiny\textcircled{\#}}b\) and \(aa^{*}a_{a^{*}}^{\tiny\textcircled{\#}}=aa^{*}(a^{\dagger})^{*}a^{\dagger}=a(a^{\dagger}a)^{*}a^{\dagger}=aa^{\dagger}=ba^{\dagger}=ba^{*}(a^{\dagger})^{*}a^{\dagger}=ba^{*}a_{a^{*}}^{\tiny\textcircled{\#}}\). For the converse part, given \(a_{a^{*}}^{\tiny\textcircled{\#}}a=a_{a^{*}}^{\tiny\textcircled{\#}}b\) and \(aa^{*}a_{a^{*}}^{\tiny\textcircled{\#}}=ba^{*}a_{a^{*}}^{\tiny\textcircled{\#}}\), i.e., \((a^{\dagger})^{*}a^{\dagger}a=(a^{\dagger})^{*}a^{\dagger}b\) and \(aa^{*}(a^{\dagger})^{*}a^{\dagger}=ba^{*}(a^{\dagger})^{*}a^{\dagger}\), then \(a^{\dagger}a=a^{\dagger}aa^{\dagger}a=(a^{\dagger}a)^{*}a^{\dagger}a=a^{*}(a^{\dagger})^{*}a^{\dagger}a=a^{*}(a^{\dagger})^{*}a^{\dagger}b=(a^{\dagger}a)^{*}a^{\dagger}b=a^{\dagger}b\). Similarly, \(aa^{\dagger}=a(a^{\dagger}a)^{*}a^{\dagger}=aa^{*}(a^{\dagger})^{*}a^{\dagger}=ba^{*}(a^{\dagger})^{*}a^{\dagger}=b(a^{\dagger}a)^{*}a^{\dagger}=ba^{\dagger}\), as required. \(\square\) An element \(a\in R\) is called EP if \(a\in R^{\#}\cap R^{\dagger}\) and \(a^{\#}=a^{\dagger}\). A well known characterization for EP elements is that \(a\) is EP if and only if \(a\in R^{\dagger}\) and \(aa^{\dagger}=a^{\dagger}a\). According to Theorem 3.9, we have the following result, of which (iii) \(\Leftrightarrow\) (iv) \(\Leftrightarrow\) (v) were essentially given in [1]. **Theorem 3.10**.: _Let \(a,b\in R\). If \(a\) is EP, then the following conditions are equivalent_:__ (i)_\(a\stackrel{\tiny\textcircled{\#}}{\leq_{a}}b\)._ (ii)_\(a\stackrel{\tiny\textcircled{\#}}{\leq_{a^{*}}}b\)._ (iii)_\(a\stackrel{\tiny\textcircled{\#}}{\leq}b\)._ (iv)_\(a\stackrel{*}{\leq}b\)._ (v)_\(a\stackrel{\#}{\leq}b\)._ Given any \(a,b,w\in R\) with \(a\in R_{w}^{\tiny\textcircled{\#}}\), the \(w\)-core partial order gives the diamond partial order, i.e., \(a\stackrel{\tiny\textcircled{\#}}{\leq_{w}}b\Rightarrow a\stackrel{\diamond}{\leq}b\). Indeed, given \(a\stackrel{\tiny\textcircled{\#}}{\leq_{w}}b\), by Theorem 2.13, we have \(a^{*}a=a^{*}b=b^{*}a\) and hence \(aa^{*}a=ab^{*}a\). Moreover, \(a=bwa_{w}^{\tiny\textcircled{\#}}a=awa_{w}^{\tiny\textcircled{\#}}b\) imply \(aR\subseteq bR\) and \(Ra\subseteq Rb\), respectively. For any \(a,b,w\in R\) and \(a\in R^{\tiny\textcircled{\#}}\cap R_{w}^{\tiny\textcircled{\#}}\), we claim that the \(w\)-core partial order is between the core partial order and the diamond partial order, that is, \(a\stackrel{\tiny\textcircled{\#}}{\leq}b\Rightarrow a\stackrel{\tiny\textcircled{\#}}{\leq_{w}}b\Rightarrow a\stackrel{\diamond}{\leq}b\).
However, the converse implications may not be true in general, i.e., \(a\overset{\tiny\textcircled{\char 10}}{\leq}b\nRightarrow a\overset{\tiny\textcircled{ \char 10}}{\leq}_{w}b\nRightarrow a\overset{\tiny\textcircled{\char 10}}{\leq}b\) (see Examples 2.4 and 3.2). The equivalences between \(a\stackrel{{ x}}{{\leq}}b\) and \(b-a\stackrel{{ x}}{{\leq}}b\) were considered by several scholars [4, 8, 11], where \(\stackrel{{ x}}{{\leq}}\) denotes the minus partial order \(\stackrel{{-}}{{\leq}}\), the star partial order \(\stackrel{{*}}{{\leq}}\) or the sharp partial order \(\stackrel{{\#}}{{\leq}}\). The characterization fails to hold for the core partial order \(\stackrel{{\tiny\mbox{\char 37}}}{{\leq}}\) (see, e.g., [1, p. 695]). Recently, Ferreyra and Malik [4, Theorem 4.2] derived the equivalence between \(A\stackrel{{\tiny\mbox{\char 37}}}{{\leq}}B\) and \(B-A\stackrel{{\tiny\mbox{\char 37}}}{{\leq}}B\) in the ring of all \(n\times n\) complex matrices, under certain conditions. More precisely, if \(A\), \(B\) and \(B-A\) are group invertible complex matrices of \(n\) by \(n\) size, then \(A\stackrel{{\tiny\mbox{\char 37}}}{{\leq}}B\) and \(AB=BA\) if and only if \(B-A\stackrel{{\tiny\mbox{\char 37}}}{{\leq}}B\) and \(AB=BA\) if and only if \(A\ast\leq B\) and \(A\stackrel{{\#}}{{\leq}}B\). We next give a similar characterization for the \(w\)-core inverse by a pure algebraic method in a \(\ast\)-ring. **Theorem 3.11**.: _For any \(a,b,w\in R\) with \(awb=bwa\), let \(a,b\in R^{\tiny\mbox{\char 37}}_{w}\) such that \(a-b\in R^{\tiny\mbox{\char 37}}_{w}\). Then the following conditions are equivalent_:__ (i)_\(a\stackrel{{\tiny\mbox{\char 37}}}{{\leq}}_{w}b\)._ (ii)_\(b-a\stackrel{{\tiny\mbox{\char 37}}}{{\leq}}_{w}b\)._ (iii)_\(a\ast\leq b\) and \(wa\stackrel{{\#}}{{\leq}}wb\)._ Proof. (i) \(\Rightarrow\) (ii) Suppose \(a\stackrel{{\tiny\mbox{\char 37}}}{{\leq}}_{w}b\). Then, by Theorem 2.13, \(a^{*}a=a^{*}b\) and \(bwa=awa\). To show \(b-a\stackrel{{\tiny\mbox{\char 37}}}{{\leq}}_{w}b\), it suffices to prove \((b-a)^{*}(b-a)=(b-a)^{*}b\) and \((b-a)w(b-a)=bw(b-a)\). As \((b-a)^{*}a=0\), then \((b-a)^{*}(b-a)=(b-a)^{*}b-(b-a)^{*}a=(b-a)^{*}b\). Similarly, we get \((b-a)w(b-a)=bw(b-a)-aw(b-a)=bw(b-a)\). (ii) \(\Rightarrow\) (iii) Given \(b-a\stackrel{{\tiny\mbox{\char 37}}}{{\leq}}_{w}b\), then \((b-a)^{*}(b-a)=(b-a)^{*}b\), which implies \((b-a)^{*}a=0\) and so \(a^{*}a=a^{*}b\). Also, \((b-a)w(b-a)=bw(b-a)\) gives \(awb=awa\) and \(bwa=awa\). Post-multiplying \(bwa=awa\) by \(w(a^{\tiny\mbox{\char 37}}_{w})^{2}\) gives \(bwa^{\tiny\mbox{\char 37}}_{w}=awa^{\tiny\mbox{\char 37}}_{w}\). Hence, \(a=awa^{\tiny\mbox{\char 37}}_{w}a=bwa^{\tiny\mbox{\char 37}}_{w}a\in bR\) and \(a\ast\leq b\). Again, it follows from \(awa=awb=bwa\) that \(wawa=wawb=wbwa\). We hence have \((wa)^{\#}wa=(wa)^{\#}wb\) and \(wa(wa)^{\#}=wb(wa)^{\#}\). So, \(wa\stackrel{{\#}}{{\leq}}wb\). (iii) \(\Rightarrow\) (i) Note that \(a\ast\leq b\) implies \(a^{*}a=a^{*}b\). To prove \(a\stackrel{{\tiny\mbox{\char 37}}}{{\leq}}_{w}b\), we only need to verify that \(bwa=awa\) by Theorem 2.13. Since \(wa\stackrel{{\#}}{{\leq}}wb\), \((wa)^{\#}wa=(wa)^{\#}wb\) and \((wa)^{2}=wawb\). Pre-multiplying \((wa)^{2}=wawb\) by \(w^{\parallel a}\) implies \(awa=(w^{\parallel a}wa)wa=(w^{\parallel a}wa)wb=awb\), which together with \(awb=bwa\) to guarantee \(bwa=awa\), as required. 
\(\square\) **Remark 3.12**.: It should be noted that, in Theorem 3.11 above, condition (iii) cannot imply condition (i) without the assumption \(awb=bwa\) in general. Indeed, Example 3.2 illustrates this fact. However, for the case of \(w=1\), the implication (iii) \(\Rightarrow\) (i) is clear by the fact that \(a\stackrel{\#}{\leq}b\) gives \(a^{2}=ab=ba\). As is given in Theorem 3.9 above, for any \(a\in R\), \(a\in R^{\tiny\textcircled{\#}}\) if and only if \(a\in R_{a}^{\tiny\textcircled{\#}}\) if and only if \(a\in R^{\tiny\textcircled{\#}}_{1}\). By Theorem 3.9 and Remark 3.12, we get the following characterization for the core partial order \(a\stackrel{\tiny\textcircled{\#}}{\leq}b\). **Corollary 3.13**.: _For any \(a,b\in R\), let \(a,b\in R^{\tiny\textcircled{\#}}\) such that \(a-b\in R^{\tiny\textcircled{\#}}\). Then the following conditions are equivalent_:__ (i)_\(a\stackrel{\tiny\textcircled{\#}}{\leq}b\) and \(ab=ba\)._ (ii)_\(b-a\stackrel{\tiny\textcircled{\#}}{\leq}b\) and \(ab=ba\)._ (iii)_\(a\,*\leq b\) and \(a\stackrel{\#}{\leq}b\)._ Setting \(w=a\) in Theorem 3.11, another characterization for the core partial order \(a\stackrel{\tiny\textcircled{\#}}{\leq}b\) can be obtained as follows. **Corollary 3.14**.: _For any \(a,b\in R\) with \(a^{2}b=ba^{2}\), let \(a,b\in R^{\tiny\textcircled{\#}}\) such that \(a-b\in R^{\tiny\textcircled{\#}}\). Then the following conditions are equivalent_:__ (i)_\(a\stackrel{\tiny\textcircled{\#}}{\leq}b\)._ (ii)_\(b-a\stackrel{\tiny\textcircled{\#}}{\leq}b\)._ (iii)_\(a\,*\leq b\) and \(a^{2}\stackrel{\#}{\leq}ab\)._ For any \(a,b\in U(R)\), it is well known that \((ab)^{-1}=b^{-1}a^{-1}\); this formula is known as the reverse order law. Reverse order laws for the group inverse, the Moore-Penrose inverse and the core inverse do not hold in general. In the case of the reverse order law for the core inverse, a counterexample was constructed in [2] to show that \((ab)^{\tiny\textcircled{\#}}=b^{\tiny\textcircled{\#}}a^{\tiny\textcircled{\#}}\) does not hold. Later, Malik et al. [7] showed the reverse order law for the core inverse of \(AB\), under the core partial order \(A\stackrel{\tiny\textcircled{\#}}{\leq}B\), where \(A\) and \(B\) are two \(n\times n\) complex matrices. A natural question is whether the \(w\)-core inverse shares the reverse order law property under the \(w\)-core partial order, i.e., whether \((ab)^{\tiny\textcircled{\#}}_{w}=b^{\tiny\textcircled{\#}}_{w}a^{\tiny\textcircled{\#}}_{w}\) under the \(w\)-core partial order \(a\stackrel{\tiny\textcircled{\#}}{\leq_{w}}b\). Example 3.15 below illustrates that this is not true in general. **Example 3.15**.: Let \(R\) and the involution be the same as that of Example 2.4. Take \(a=b=\begin{bmatrix}1&1\\ 0&0\end{bmatrix}\), \(w=\begin{bmatrix}1&0\\ 1&0\end{bmatrix}\in R\), then \((ab)^{\tiny\textcircled{\#}}_{w}=a^{\tiny\textcircled{\#}}_{w}=b^{\tiny\textcircled{\#}}_{w}=\begin{bmatrix}\frac{1}{2}&0\\ 0&0\end{bmatrix}\) by Example 2.4.
By a direct check, \(a^{\tiny\textcircled{\#}}_{w}a=a^{\tiny\textcircled{\#}}_{w}b=\begin{bmatrix}\frac{1}{2}&\frac{1}{2}\\ 0&0\end{bmatrix}\) and \(awa^{\tiny\textcircled{\#}}_{w}=bwa^{\tiny\textcircled{\#}}_{w}=\begin{bmatrix}1&0\\ 0&0\end{bmatrix}\), i.e., \(a\stackrel{\tiny\textcircled{\#}}{\leq_{w}}b\). However, \(\begin{bmatrix}\frac{1}{2}&0\\ 0&0\end{bmatrix}=(ab)^{\tiny\textcircled{\#}}_{w}\neq b^{\tiny\textcircled{\#}}_{w}a^{\tiny\textcircled{\#}}_{w}=\begin{bmatrix}\frac{1}{2}&0\\ 0&0\end{bmatrix}\begin{bmatrix}\frac{1}{2}&0\\ 0&0\end{bmatrix}=\begin{bmatrix}\frac{1}{4}&0\\ 0&0\end{bmatrix}\). It is natural to ask which products have the reverse order law property under the assumption \(a\stackrel{\tiny\textcircled{\#}}{\leq_{w}}b\). The following result illustrates that the \(w\)-core inverse of \(awb\) has the reverse order law property. **Theorem 3.16**.: _Let \(a,b,w\in R\) with \(a,b\in R^{\tiny\textcircled{\#}}_{w}\). If \(a\stackrel{\tiny\textcircled{\#}}{\leq_{w}}b\), then \(awb\in R^{\tiny\textcircled{\#}}_{w}\) and \((awb)^{\tiny\textcircled{\#}}_{w}=b^{\tiny\textcircled{\#}}_{w}a^{\tiny\textcircled{\#}}_{w}\)._ Proof. It follows from Lemma 2.2 that \(b^{\tiny\textcircled{\#}}_{w}a^{\tiny\textcircled{\#}}_{w}=(a^{\tiny\textcircled{\#}}_{w})^{2}\). We next show that \(x=(a^{\tiny\textcircled{\#}}_{w})^{2}\) is the \(w\)-core inverse of \(awb\) by the following three steps. (1) Note that \(bwa=awa\) and \(a^{\tiny\textcircled{\#}}_{w}awa=a\). Then \(x(awb)w(awb)=(a^{\tiny\textcircled{\#}}_{w})^{2}aw(bwa)wb=(a^{\tiny\textcircled{\#}}_{w})^{2}aw(awa)wb=a^{\tiny\textcircled{\#}}_{w}(a^{\tiny\textcircled{\#}}_{w}awa)wawb=(a^{\tiny\textcircled{\#}}_{w}awa)wb=awb\). (2) \((awb)wx=awbw(a^{\tiny\textcircled{\#}}_{w})^{2}=awaw(a^{\tiny\textcircled{\#}}_{w})^{2}=awa^{\tiny\textcircled{\#}}_{w}=((awb)wx)^{*}\). (3) \((awb)wx^{2}=(awbwx)x=awa^{\tiny\textcircled{\#}}_{w}(a^{\tiny\textcircled{\#}}_{w})^{2}=aw(a^{\tiny\textcircled{\#}}_{w})^{2}a^{\tiny\textcircled{\#}}_{w}=x\). \(\square\) **ACKNOWLEDGMENTS** The authors are highly grateful to the referees for their valuable comments and suggestions which greatly improved this paper. This research is supported by the National Natural Science Foundation of China (No. 11801124) and China Postdoctoral Science Foundation (No. 2020M671068). **References**
$R$を unital $*$-リングとする。$a,w,b \in R$ に対して、$w$コア逆元を用いて $R$ 上の部分順序の新たなクラスを定義し、これを$w$コア部分順序と呼ぶ。$a$ が$w$コア可逆であるとき、$a$ が($w$コア部分順序のもとで)$b$ の下に位置するとは、$a \overset{\tiny{\textcircled{\#}}}\leq_w b$ と書き、$a_w^{\tiny{\textcircled{\#}}}a = a_w^{\tiny{\textcircled{\#}}} b$ かつ $awa_w^{\tiny{\textcircled{\#}}} = bwa_w^{\tiny{\textcircled{\#}}}$ が成り立つことをいう。ここで $a_w^{\tiny{\textcircled{\#}}}$ は $a$ の$w$コア逆元を表す。
2309.03957
Orbital magnetization of a metal is not a bulk property in the mesoscopic regime
We find that, in the mesoscopic regime, modification of the material's surface can induce an extensive change of the material's magnetic moment. In other words, perturbation of order $N^2$ atoms on the surface of a 3-dimensional solid can change the magnetic moment proportionally to $N^3$. When the solid's surface is perturbed, it triggers two changes in the magnetization. One arises from variations of the electron wavefunction and energy, while the other arises from a modification in the kinetic angular momentum operator. In the macroscopic regime of our model, these two bulk effects cancel each other, resulting in no impact of the surface perturbation on the magnetization - consistent with prior work. In the mesoscopic regime, we find a departure from this behavior, as the cancelation of two terms is not complete.
Kevin Moseni, Sinisa Coh
2023-09-07T18:06:31
http://arxiv.org/abs/2309.03957v3
# Surface sensitivity of magnetization in the mesoscopic regime ###### Abstract Some of the magnetization of a solid originates from orbital currents flowing on its surface. When the solid's surface is perturbed, it triggers two changes in the magnetization. One arises from variations of the electron wavefunction and energy, while the other emerges from a modification in the kinetic angular momentum operator. In the macroscopic regime of our model, these two bulk effects cancel each other, resulting in no impact of the surface perturbation on the magnetization -- consistent with prior work. We find a departure from this behavior in the mesoscopic regime, where the cancelation of two terms is not complete. In this regime, surprisingly, perturbation of the surface of the solid can change the magnetic dipole of the solid in proportion to the size of the entire solid. In a ferromagnet, the magnetic moment primarily arises from the unequal population of electrons with different spin states. A smaller, yet significant contribution, known as orbital magnetization, originates from the spatial motion of electrons. Some of these orbital electron currents flow around individual atoms in the bulk, while other currents traverse the surface of the sample, as demonstrated in Ref. [1] using a localized picture of electronic structure. Although only a fraction of electrons participate in surface currents, their collective effect contributes to the magnetic dipole moment, scaling with the size of the entire sample (area in two dimensions, volume in three dimensions). The question then arises whether the magnetic moment of the ferromagnet could be modified by perturbing these surface currents? For instance, one may wonder if adsorbing different atoms to the surface of a ferromagnet could change the magnitude of surface currents, and consequently the magnetic dipole of the solid, in proportion to the size of the entire solid. Or could one take a non-magnetic system and induce in it a bulk orbital magnetization by modifying its surface? The seminal work from Ref. [1] rigorously demonstrated that these scenarios are not possible for insulating systems. In an insulating system, the surface currents are quite remarkably determined by the material properties deep in the bulk of the material! Intuitively one would expect that such a statement should also extend to metallic cases, but this has not been rigorously demonstrated. Reference [2] gives heuristic reasons for why magnetization in a metal is equally well determined by the properties in the bulk of the material, as in the case of an insulator. (The same was also suggested for topological insulators in Refs. [2; 3; 4].) Additional support is given by the semi-classical formulation of orbital magnetization from Ref. [5] as well as the long-wave perturbation from Ref. [6]. A more recent indication that orbital magnetization in a metal is a bulk property relies on a local measure of the orbital moment from Refs. [7; 8]. In this paper, our focus lies on a distinct range of length and temperature scales, one that complements the scope of previous investigations. When the electron's time of flight across our sample exceeds \(\hbar\)/thermal energy, our findings corroborate the conclusions drawn in Refs. [1; 2; 3; 4; 5; 6; 7; 8]. Specifically, the surface modifications leave the magnetization unaffected. 
Therefore, within the framework of our model, the prospect of altering the magnetization of a sizable solid, at a non-zero temperature, through surface modifications is unlikely. Nevertheless, an intriguing situation emerges when we shift to the opposite (mesoscopic) regime, marked by either small sample sizes or lower temperatures. Our work shows that in this regime the surface can indeed change the overall magnetic moment of the sample, in proportion to the size of the entire sample. Before introducing our numerical model, we first motivate it by considering a continuous one-particle effective Hamiltonian, denoted \(H_{\rm c}^{0}\), for a periodic infinite solid in two dimensions. To simplify our analysis, throughout this work we neglect self-consistency, many-electron effects, and disorder. Our system is assumed to be in thermal equilibrium. We ignore any temperature effects beyond electron occupation smearing. The complete basis of the eigenstates of \(H_{\rm c}^{0}\) can be expressed in the Bloch form, \(\psi_{\mathbf{k}}(\mathbf{r})=e^{i\mathbf{k}\cdot\mathbf{r}}u_{\mathbf{k}}(\mathbf{r})\). However, not every eigenstate of \(H_{\rm c}^{0}\) has the Bloch form. Generally, we can construct arbitrary linear combinations of states that share the same eigenvalue \(E_{\mathbf{k}}=E\), and the resulting function \[\phi_{E}(\mathbf{r})=\int_{0}^{1}e^{if(s)}\psi_{\mathbf{k}(s)}(\mathbf{r})ds \tag{1}\] is a valid eigenstate of \(H_{\rm c}^{0}\). Here \(s\to\mathbf{k}(s)\) is a continuous parameterization of a curve in the Brillouin zone along which \(E_{\mathbf{k}(s)}=E\). For now we limit \(f(s)\) so that \(f(0)=f(1)\). We choose \(f(s)\) so that \(\phi_{E}(\mathbf{r})\) is as localized as possible in the real space. \(\phi_{E}\) is only algebraically localized due to integration over part of the Brillouin zone, unlike exponential localization of a Wannier function. [9] By selecting a fixed \(f(s)\), we can create a family of func tions, \(\phi_{mE}\), for any integer \(m\), defined as follows, \[\phi_{mE}(\mathbf{r})=\int_{0}^{1}e^{i2\pi ms}e^{if(s)}\psi_{\mathbf{k}(s)}(\mathbf{r})ds. \tag{2}\] Note, trivially, that \(\langle\phi_{mE}|\phi_{m^{\prime}E^{\prime}}\rangle=\delta_{mm^{\prime}}\delta_ {EE^{\prime}}\). Therefore, \(\phi_{mE}\) for all \(m\) and \(E\) span the same vector space as the Bloch states. Let us now take \(H_{\rm c}^{0}\) to correspond to the free-electron system. One can easily show that, in this case, \[\langle\phi_{mE}|\,L_{z}\,|\phi_{mE}\rangle=\hbar m. \tag{3}\] Each \(\phi_{mE}\) state therefore carries angular momentum \(\hbar m\), and orbital magnetic moment \(\mu_{\rm B}m\). Let us now confine our system to a circular region with radius \(R\). States with large enough \(m\) (\(\approx R\frac{\sqrt{2m_{e}E}}{\hbar}\)) are localized near the edge of the sample and carry an angular momentum that scales with \(\sim R\). Since there are order \(\sim R\) states in the region near the edge of the sample, one might now ask whether including the potential \(V^{\rm edge}\) on the edge of the sample could rearrange these states near the surface and induce a net orbital moment that scales as \(\sim R^{2}\)? If one could construct a surface potential satisfying \[\langle\phi_{mE}|\,V^{\rm edge}\,|\phi_{m^{\prime}E}\rangle\sim m\delta_{mm^{ \prime}} \tag{4}\] then this would be a good candidate surface perturbation, as it breaks the time-reversal symmetry by differently acting on state with different \(m\). We now attempt to create surface potential satisfying Eq. 
4 in a concrete finite-size model using a numerically convenient tight-binding approach. To construct the tight-binding model, we project our continuous free-electron Hamiltonian \(H_{\rm c}^{0}\) on the basis of a \(N\times N\) square mesh of s-like orbitals, each separated from the others by a distance \(a\). We label the orbital at site \(i\) as \(\ket{i}\). For the position operators \(x\) and \(y\), we assume \(\bra{i}x\ket{j}=x_{i}\delta_{ij}\) and \(\bra{i}y\ket{j}=y_{i}\delta_{ij}\). For convenience, we work with the centered operators \(\tilde{x}=x-\sum_{i}x_{i}/N^{2}\) and \(\tilde{y}=y-\sum_{i}y_{i}/N^{2}\). We also define the following quantity \(\tilde{L}(A)\) for any operator \(A\), \[\tilde{L}(A)=\frac{m_{\rm e}}{\hbar}\left(i\tilde{x}A\tilde{y}-i\tilde{y}A \tilde{x}\right). \tag{5}\] Clearly \(\tilde{L}(H)\) corresponds to the angular momentum operator for a system described by the Hamiltonian \(H\). (Since the angular momentum operator is a cross-product of the position operator and the kinetic linear momentum operator, \(\mathbf{p}=\frac{im_{\rm e}}{\hbar}[H,\mathbf{r}]\).) We start with the simplest case for \(H^{0}\), where \(H_{ij}^{0}=\bra{i}H^{0}\ket{j}\) is a real number \(t<0\) for the nearest-neighbor orbitals \(i\) and \(j\), and \(0\) for any other pair of orbitals. Our goal is now to construct an edge potential with the property given in Eq. 4. At first it is not clear how to satisfy Eq. 4 in our model, as eigenvectors of \(H^{0}\) don't have a well-defined angular momentum (our tight-binding model is projected into a finite square mesh of orbitals). Therefore, before discussing the edge perturbation, we first add a commutator correction term \(H^{\rm comm}\) which ensures that total bulk Hamiltonian, \[H^{\rm bulk}=H^{0}+H^{\rm comm} \tag{6}\] at least approximately commutes with the angular momentum operator, \(\tilde{L}(H^{\rm bulk})\). Ignoring the second-order terms in \(H^{\rm comm}\) and imposing this requirement to our tight-binding model, we arrive at the following system of \(N^{4}\) equations, \[\left[H^{0}+H^{\rm comm},\tilde{L}\left(H^{0}\right)\right]+\left[H^{0}, \tilde{L}\left(H^{\rm comm}\right)\right]\approx 0. \tag{7}\] The unknown matrix elements \(H^{\rm comm}_{ij}\) are further restricted to be zero for distant orbitals \(i\) and \(j\), making \(H^{\rm comm}\) a local operator. To approximately solve the resulting system of \(\sim N^{2}\) equations, we minimize the quadratic norm of the left-hand side of Eq. 7 using the least squares method. This approach produces a purely real \(H^{\rm comm}\) that only includes the first-nearest neighbors. The maximum value of \(|H^{\rm comm}_{ij}|\) is \(0.5|t|\) independent of \(N\). The operator \(H^{\rm comm}_{ij}\) breaks periodicity in the bulk of the sample and resembles the functional form of a parabolic well. The approximate form of \(H^{\rm comm}\) is provided in the supplementary material, obtained by fitting the results of our procedure for low \(N\). The energy spectrum of \(H^{0}\) as a function of \(N\) exhibits some regularity by having spikes in the density of states separated by \(\Delta\sim 1/N\). However, the number of states in between spikes is not strictly zero, and these states don't follow an obvious pattern as a function of increasing \(N\). If we include the \(H^{\rm comm}\) term to \(H^{0}\) we find that it redistributes the spectrum of the system, creating small gaps in the spectrum (scaling as \(\Delta\sim 1/N\)). 
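To make the preceding construction concrete, a minimal sketch follows (our own illustration; it is not the authors' released code, and the parameter choices and natural units \(\hbar=m_{\rm e}=a=1\) are assumptions made here). It builds the square-mesh \(H^{0}\), the centered position operators, and the operator \(\tilde{L}(H^{0})\) of Eq. 5; the correction \(H^{\rm comm}\) of Eqs. 6 and 7 would then be obtained by a least-squares fit on top of this.

```python
import numpy as np

# Minimal sketch of the bulk model (assumptions: hbar = m_e = a = 1, small N).
N, t = 10, -1.0
sites = [(i, j) for i in range(N) for j in range(N)]
idx = {s: n for n, s in enumerate(sites)}

# Nearest-neighbor tight-binding Hamiltonian H0 on the N x N mesh.
H0 = np.zeros((N * N, N * N))
for (i, j) in sites:
    for di, dj in ((1, 0), (0, 1)):
        if (i + di, j + dj) in idx:
            p, q = idx[(i, j)], idx[(i + di, j + dj)]
            H0[p, q] = H0[q, p] = t

# Centered position operators (diagonal in the orbital basis).
xt = np.diag([i - (N - 1) / 2.0 for (i, j) in sites])
yt = np.diag([j - (N - 1) / 2.0 for (i, j) in sites])

def L_tilde(A):
    """Eq. 5 with m_e = hbar = 1: L~(A) = i (x~ A y~ - y~ A x~)."""
    return 1j * (xt @ A @ yt - yt @ A @ xt)

L0 = L_tilde(H0)
print(np.allclose(L0, L0.conj().T))   # L~(H0) is Hermitian
```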
We find that placing a Fermi level \(E_{\rm F}\) within one of these gaps has the additional benefit of stabilizing the finite-size effects in our calculations. Related finite-size effects for Landau diamagnetism have also been reported in Refs. [10, 11, 12, 13, 14]. The orbital magnetic moment is zero for any thermal occupation of the system described by \(H^{\rm bulk}\) since all \(H^{\rm bulk}_{ij}\) are real. We now induce the desired symmetry-breaking behavior, as in Eq. 4, by introducing a perturbation \(V^{\rm edge}\) at the edge of the sample, \[V^{\rm edge}_{ij}=-\frac{eb}{2m_{\rm e}}S_{ij}\tilde{L}_{ij}(H^{0}). \tag{8}\] If we set \(S_{ij}=1\) then \(V^{\rm edge}_{ij}\) would represent an approximate interaction term of the orbital magnetic moment with a spatially uniform external magnetic field \(b\), as in the study of Landau diamagnetism. Trivially, the matrix element of such a perturbation is proportional to \(m\), as in Eq. 4. However, our objective was to keep this potential non-zero only on the edge of the sample. We achieve this by setting \(S_{ij}\) to zero in the interior of the sample and to a constant function proportional to \(1/N\) at the edge. This choice ensures that the complex phase acquired by an electron traversing a closed loop around the edge plaquette (flux) is nearly independent of \(N\) and its location along the edge. Our choice of \(S_{ij}\) also ensures that the total flux through the entire sample is zero. Instead, as detailed in the supplementary material, \(V^{\rm edge}\) applies an effective flux of alternating sign to the first and second cells closest to the edge of the sample. After diagonalizing our full Hamiltonian, which includes both bulk and edge contribution, \[\left(H^{\rm bulk}+V^{\rm edge}\right)\left|\psi_{n}\right>=E_{n}\left|\psi_{n}\right> \tag{9}\] we obtain a set of eigenstates \(\left|\psi_{n}\right>\). We use even \(N\), although odd \(N\) yields qualitatively similar results with slightly different chemical potential. The largest \(N\) used is 100, corresponding to a system with 10,000 orbitals. We set the Fermi level \(E_{\rm F}\) to \(-2.55\left|t\right|\), placing it within a small energy gap \(\Delta\) in the spectrum. As discussed above, the gap \(\Delta\) scales as \(1/N\). The magnetic dipole moment is computed as follows, \[m_{\rm dip}=\frac{e}{2m_{\rm e}}\sum_{n}\left<\psi_{n}\right|\tilde{L}(H) \left|\psi_{n}\right>f_{n} \tag{10}\] where \(f_{n}\) is the Fermi-Dirac distribution with effective smearing of electron occupation by \(k_{\rm B}T\). Figure 1 shows the calculated \(m_{\rm dip}\) as a function of \(N\). The computed \(m_{\rm dip}\) scales nearly perfectly as \(N^{2}\) with the size of the system. Since \(m_{\rm dip}=0\) when \(V^{\rm edge}=0\) we conclude that our \(m_{\rm dip}\) results solely from surface modification (\(V^{\rm edge}\)). However, the \(N^{2}\) scaling persists only in the mesoscopic regime, when \(k_{\rm B}T\) is small compared to \(\Delta\). Since \(\Delta\) is proportional to the electron bandwidth and inversely proportional to \(N\), one can interpret \(\Delta\) as a characteristic energy scale that corresponds to the inverse time it takes for an electron to travel from one edge of the sample to the other. With the approximate scaling of the gap at \(E_{\rm F}\) shown in the supplement, we determine that our system is in a mesoscopic regime as long as \(k_{\rm B}T/|t|\lesssim 0.6/N\). 
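A schematic continuation of the sketch above (again our own illustration rather than the authors' code; it reuses `N`, `t`, `sites`, `H0`, and `L_tilde` from the previous block, omits \(H^{\rm comm}\), and uses a crude stand-in for the edge profile \(S_{ij}\), whose precise form is given only in the paper's supplement) evaluates Eqs. 8-10:

```python
import numpy as np

# Edge perturbation (Eq. 8) with a schematic S_ij ~ 1/N on the boundary only,
# followed by the magnetic dipole of Eq. 10.  H_comm is omitted in this sketch.
e_charge, m_e, b_field = 1.0, 1.0, 0.2
E_F, kT = -2.55 * abs(t), 1e-3

def on_edge(n):
    i, j = sites[n]
    return i in (0, N - 1) or j in (0, N - 1)

S = np.zeros((N * N, N * N))
for p in range(N * N):
    for q in range(N * N):
        if on_edge(p) and on_edge(q):
            S[p, q] = 1.0 / N

V_edge = -(e_charge * b_field) / (2 * m_e) * S * L_tilde(H0)   # element-wise S_ij
H = H0 + V_edge

E, psi = np.linalg.eigh(H)
f = 0.5 * (1.0 - np.tanh((E - E_F) / (2 * kT)))   # Fermi-Dirac, overflow-safe form
LH = L_tilde(H)
m_dip = (e_charge / (2 * m_e)) * np.real(
    np.einsum('in,ij,jn,n->', psi.conj(), LH, psi, f))
print(m_dip)
```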
Figure 1: Magnetic dipole \(m_{\rm dip}\) induced by \(V^{\rm edge}\) is proportional to \(N^{2}\). \(k_{\rm B}T\) in the Fermi-Dirac distribution is set to \(0\). \(b\) is chosen so that \(a^{2}b=0.2h/e\). Fermi level \(E_{\rm F}\) is set to \(-2.55\left|t\right|\) so that electron density is \(\approx 0.12/a^{2}\). Parameters \(t\) and \(a\) are set so that effective mass at low doping is the same as the free-electron mass. Inset shows that the second derivative of \(m_{\rm dip}\) with respect to \(N\) (scaled by \(10^{2}\)) is constant.

This observation motivated us to fit the results for \(m_{\rm dip}\) as a function of \(N\) and \(k_{\rm B}T\) to the following function, \[m_{\rm dip}\sim\frac{N^{2}}{1+\exp\left[3.8\frac{k_{\rm B}T}{|t|}\left(N-0.6\frac{|t|}{k_{\rm B}T}\right)\right]}. \tag{11}\] Clearly, \(m_{\rm dip}\sim N^{2}\) as long as \(N\) and \(k_{\rm B}T\) are in a mesoscopic regime. Since \(\lim_{N\to\infty}\lim_{T\to 0^{+}}\frac{m_{\rm dip}}{N^{2}}\neq 0\) we find that the \(N^{2}\) scaling of the magnetic moment continues for all \(N\), as long as the temperature is small enough. On the other hand, for any finite, positive, temperature \(T\), the limit \(\lim_{N\to\infty}\frac{m_{\rm dip}}{N^{2}}\) is zero. Therefore, for any small positive \(T\) there is an \(N\) beyond which the magnetic dipole no longer scales as \(N^{2}\). In the supplementary material, we provide explicit numerical values of Hamiltonian matrix elements \(H_{ij}\) for different values of \(N\), as well as a computer code that diagonalizes Eq. 9, computes Eq. 10, and performs a range of consistency checks on \(H_{ij}\). Our finding that \(m_{\rm dip}\) in a metal is surface sensitive is perhaps not that surprising considering that a similar surface dependence can be found for the electric dipole \(d_{\rm dip}\) of a metal. [15] However, importantly, the electric dipole \(d_{\rm dip}\) is surface sensitive in a metal even in a macroscopic regime. Therefore, we can naturally ask why, in the macroscopic regime, \(m_{\rm dip}\) from our model behaves differently from \(d_{\rm dip}\)? To establish a parallel between the electric and magnetic dipole it is instructive to construct a surface potential \(V^{\rm edge}\) that changes the bulk _electric_ dipole, in analogy to how \(V^{\rm edge}\) changed the bulk magnetic dipole. By analogy to Eq. 8 we can now use the position operator, instead of the angular momentum operator, to arrive at \(V^{\rm edge}_{ij}\sim S_{i}\tilde{x}_{i}\delta_{ij}\). Here \(S_{i}\) is a constant term \(\sim 1/N\) that is non-zero only on the surface. If we now compute the expectation value of the _electric_ dipole moment, \(d_{\rm dip}\sim\sum_{n}\left<\psi_{n}\right|\tilde{x}\left|\psi_{n}\right>f_{n}\), we find that \(d_{\rm dip}\) induced by \(V^{\rm edge}\) scales as \(\sim N^{2}\) even in the macroscopic regime. We assign a different behavior of an electrical dipole to that of a magnetic dipole due to the fact that for the electric dipole the same operator (\(\tilde{x}\)) appears in the perturbation (\(V^{\rm edge}\)) as in the induced response (\(d_{\rm dip}\)). This is not the case for the orbital magnetization, as our \(V^{\rm edge}\) is constructed from \(\tilde{L}(H^{0})\), while \(m_{\rm dip}\) is computed from \(\tilde{L}(H)=\tilde{L}(H^{\rm bulk})+\tilde{L}(V^{\rm edge})\) which clearly includes the perturbation \(V^{\rm edge}\) itself. (This is analogous to how
external magnetic field described by vector potential \(\mathbf{A}\) changes the kinetic linear momentum operator from \(\mathbf{p}\) to \(\mathbf{p}-\frac{e}{c}\mathbf{A}\), but there is no change in the position operator due to external _electric_ field.) Therefore, including surface perturbation has two effects on the induced magnetic dipole, and we can trivially write \(m_{\rm dip}\) as a sum of \[m_{\rm dip}^{\rm st} =\frac{e}{2m_{\rm e}}\sum_{n}\left\langle\psi_{n}\right|\tilde{L} (H^{\rm bulk})\left|\psi_{n}\right\rangle f_{n}\quad\text{and} \tag{12}\] \[m_{\rm dip}^{\rm op} =\frac{e}{2m_{\rm e}}\sum_{n}\left\langle\psi_{n}\right|\tilde{L} (V^{\rm edge})\left|\psi_{n}\right\rangle f_{n}. \tag{13}\] The first term (\(m_{\rm dip}^{\rm st}\)) arises from changes to the electron state (wavefunction and energy) due to surface perturbation. The second term (\(m_{\rm dip}^{\rm op}\)) originates from the change in the angular momentum operator itself, and in the lowest order of perturbation theory, it can be computed from the unperturbed electron wavefunction and energy. While each of these terms is finite in the macroscopic regime, as \[\lim_{N\to\infty}\frac{m_{\rm dip}^{\rm st}}{N^{2}}\neq 0\quad\text{and} \quad\lim_{N\to\infty}\frac{m_{\rm dip}^{\rm op}}{N^{2}}\neq 0 \tag{14}\] for any fixed non-zero temperature \(T\), they exactly cancel each other in the same limit, so that \[\lim_{N\to\infty}\frac{m_{\rm dip}^{\rm st}+m_{\rm dip}^{\rm op}}{N^{2}}=0. \tag{15}\] In contrast, in the case of the electric dipole, there is only one contribution (the one coming from changes in the electron's state), so there is no cancellation, and the electric dipole is surface sensitive in the macroscopic regime. We now briefly comment on the spatial distribution of the orbital currents that cause the \(m_{\rm dip}\sim N^{2}\) scaling in our model. In the supplement, we compute \(m_{\rm dip}\) by projecting into the edge region of the sample, varying the thickness. We find that approximately half of \(m_{\rm dip}\) is recovered from the edge of the sample with thickness \(0.3(Na/2)\). Consequently, the active area of the sample that contributes to \(m_{\rm dip}\) scales as \(N^{2}\). In our work, we focus on the simplest choice of \(H^{0}\), which corresponds to a square lattice with first-neighbor hoppings. However, the procedure presented in this paper can be done for any \(H^{0}\). An interesting case that is the Haldane model in a topologically non-trivial insulator phase with a non-zero Chern number.[16] Here, even when the Fermi level is within the bulk gap and crosses the topologically protected surface states, we find \(m_{\rm dip}\sim N^{2}\). This is numerically robust even without including commutator correction term \(H^{\rm comm}\). Furthermore, for any given \(H^{0}\), we note that \(V^{\rm edge}\) is not the only surface perturbation that can change the magnetization. Generally, we find that surface modification can induce \(m_{\rm dip}\sim N^{2}\) whenever the perturbation allows, by symmetry, circulation of surface currents with a consistent handedness on each edge of the sample. Specifically, for a two-dimensional model, this implies that the surface perturbation must break mirror symmetry along the edge and time-reversal symmetry. However, the product of these two operations can still remain a symmetry. This work was supported by the NSF DMR-1848074 grant. We acknowledge discussions with R. Wilson and L. Vuong on inverse Faraday effect as these discussions have motivated our work.
メゾスコピック領域では、物質表面の変更が材料の磁気モーメントに示量的(extensive)な変化を引き起こし得ることを見いだした。言い換えれば、3次元固体の表面にある $N^2$ 個程度の原子への擾乱が、磁気モーメントを $N^3$ に比例して変え得る。表面が擾乱されると、磁化には2つの変化が生じる。一つは電子の波動関数とエネルギーの変化によるもので、もう一つは運動学的角運動量演算子の変化によるものである。本モデルのマクロ領域では、これら2つのバルク効果が互いに打ち消し合うため、表面の擾乱は磁化に影響を与えない。これは先行研究と整合する。メゾスコピック領域では、2つの項の打ち消しが完全ではないため、この振る舞いからのずれが見られる。
2309.12682
The comparison of two Zagreb-Fermat eccentricity indices
In this paper, we focus on comparing the first and second Zagreb-Fermat eccentricity indices of graphs. We show that $$\frac{\sum_{uv\in E\left( G \right)}{\varepsilon_3\left( u \right) \varepsilon_3\left( v \right)}}{m\left( G \right)} \leq \frac{\sum_{u\in V\left( G \right)}{\varepsilon_{3}^{2}\left( u \right)}}{n\left( G \right)} $$ holds for all acyclic and unicyclic graphs. Besides, we verify that the inequality may not be applied to graphs with at least two cycles.
Xiangrui Pan, Cheng Zeng, Longyu Li, Gengji Li
2023-09-22T07:38:51
http://arxiv.org/abs/2309.12682v3
# The comparison of two Zagreb-Fermat eccentricity indices ###### Abstract In this paper, we focus on comparing the first and second Zagreb-Fermat eccentricity indices of graphs. We show that \[\frac{\sum_{uv\in E(G)}\varepsilon_{3}\left(u\right)\varepsilon_{3}\left(v \right)}{m\left(G\right)}\leq\frac{\sum_{u\in V(G)}\varepsilon_{3}^{2}\left(u \right)}{n\left(G\right)}\] holds for all acyclic and unicyclic graphs. Besides, we verify that the inequality may not be applied to graphs with at least two cycles. **Key words:** Fermat eccentricity; Zagreb-Fermat eccentricity indices; acyclic graphs; unicyclic graphs; multi-cyclic graphs ## 1 Introduction In the fields of mathematical chemistry and graph theory, various graph invariants have been developed to characterize the structural properties of chemical compounds and complex networks, see Refs. [1, 2, 3, 4]. Suppose \(G=\left(V(G),E(G)\right)\) is a connected simple graph with node set \(V(G)\) and edge set \(E(G)\). Gutman and Trinajstic proposed two degree-based topological indices, namely, the first and second Zagreb indices [5], which are defined by \[Z_{1}(G)=\sum_{u\in V(G)}\deg_{G}^{2}(u)\] and \[Z_{2}(G)=\sum_{uv\in E(G)}\deg_{G}(u)\deg_{G}(v),\] respectively. Here \(\deg_{G}(u)\) is the degree of vertex \(u\) in \(G\). Then, Gutman, Ruscic, Trinajstic and Wilcox [6] elaborated Zagreb indices. In Ref. [7], a survey of the most significant estimates of Zagreb indices has been introduced. Todeschini and Consonni [1, 8] pointed out that the Zagreb indices and their variants have a wide range of applications in QSPR and QSAR models. We denote by \(n=n(G)\) the number of vertices of \(G\) and by \(m=m(G)\) the number of its edges. Recently, a conjecture about the averages of two Zagreb indices has been studied, that is, \[\frac{Z_{2}(G)}{m(G)}\geq\frac{Z_{1}(G)}{n(G)}.\] Although this conjecture has been disproved, it has been shown to hold for all graphs with the maximum degree at most four [9], for acyclic graphs [10] and for unicyclic graphs [11]. To better describe the structural characteristics of graphs and chemical compounds, Zagreb index variants are continuously extended, such as improved Zagreb index, regularized Zagreb index, etc. [12, 13, 14]. Another important invariant is eccentricity \(\varepsilon_{2}(u;G)\) of a vertex \(u\) in \(G\), namely the maximum distance from \(u\) to other vertices in \(G\), which is defined by \(\varepsilon_{2}(u;G):=\max\{d_{G}(u,v):v\in V(G)\}\). In analogy with the first and the second Zagreb indices, the first, \(E_{1}(G)\) and second, \(E_{2}(G)\), Zagreb eccentricity indices [15, 16], an important class of graph indices, were proposed by replacing degrees by eccentricities of the vertices. Here the first and second Zagreb eccentricity indices are defined by \[E_{1}(G)=\sum_{u\in V(G)}\varepsilon_{2}^{2}\left(u\right),\;E_{2}(G)=\sum_{ uv\in E(G)}\varepsilon_{2}\left(u\right)\varepsilon_{2}\left(v\right).\] respectively. The two Zagreb eccentricity indices attract a large amount of interest. Some general mathematical properties of \(E_{1}(G)\) and \(E_{2}(G)\) have been investigated in [17, 18]. Very recently, Zagreb eccentricity indices have been applied to fractals [19] to show the structural properties of fractals. In [15], Vukicevic and Graovac compared \(E_{1}(G)/n(G)\) with \(E_{2}(G)/m(G)\). They showed that the inequality \[\frac{E_{2}(G)}{m(G)}\leq\frac{E_{1}(G)}{n(G)}\] holds for all acyclic and unicyclic graphs and is not always valid for general graphs. 
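As a small illustration of these definitions (a sketch of ours using the networkx library; neither [15] nor the other cited works prescribe any implementation), one can compute \(E_{1}(G)\) and \(E_{2}(G)\) and compare the two averages on a tree, where the inequality of [15] is known to hold:

```python
import networkx as nx

# Ordinary eccentricities and the two Zagreb eccentricity indices on a small tree.
G = nx.balanced_tree(2, 3)          # a binary tree on 15 vertices (acyclic)
ecc = nx.eccentricity(G)            # epsilon_2(u) for every vertex u

E1 = sum(ecc[u] ** 2 for u in G)
E2 = sum(ecc[u] * ecc[v] for u, v in G.edges())
n, m = G.number_of_nodes(), G.number_of_edges()
print(E2 / m, E1 / n, E2 / m <= E1 / n)   # the last value is True for trees
```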
Based on the conclusions in [15], Qi and Du [20] determined the minimum and maximum Zagreb eccentricity indices of trees, while Qi, Zhou and Li [21] presented the minimum and maximum Zagreb eccentricity of unicycle graphs. Given a vertex triplet \(S=\{u,v,w\}\) of \(V(G)\), the distance of \(S\) is the minimum of the total distance from a vertex \(\sigma\in V(G)\) to the three vertices in \(S\), and we may write \[d_{G}(S)=\mathcal{F}_{G}(u,v,w)=\min_{\sigma\in V(G)}\{d(\sigma,u)+d(\sigma,v) +d(\sigma,w)\}.\] This concept in Euclidean space was first raised by Fermat [25] and then extended in graphs. So the distance \(\mathcal{F}_{G}(u,v,w)\) is also called Fermat distance. Similarly, we call \(\varepsilon_{3}(u;G)=\max_{v,w\in V(G)}\mathcal{F}_{G}(u,v,w)\) the Fermat eccentricity of \(u\) in \(G\). For notational convenience, we sometimes use \(\varepsilon_{3}\left(u\right)\) instead of \(\varepsilon_{3}\left(u;G\right)\). Very recently, Fermat eccentricity and its average were first investigated by Li, Yu and Klavzar et al. [22, 23]. They also extended this concept to Steiner \(k\)-eccentricity, see [24]. Fermat distance, Steiner distance and their related variants have been widely applied to many fields, including operations research, VLSI design, optical and wireless communication networks [26]. We now introduce two modified Zagreb eccentricity indices based on Fermat eccentricity, that is, the first Zagreb-Fermat eccentricity index \[F_{1}(G)=\sum_{u\in V(G)}\varepsilon_{3}^{2}\left(u\right),\] and the second Zagreb-Fermat eccentricity index \[F_{2}(G)=\sum_{uv\in E(G)}\varepsilon_{3}\left(u\right)\varepsilon_{3}\left( v\right).\] In this paper, we focus on comparison of the first and second Zagreb-Fermat eccentricity indices and show that inequality \[\frac{F_{2}(G)}{m\left(G\right)}\leq\frac{F_{1}(G)}{n\left(G\right)} \tag{1}\] holds for all acyclic and unicyclic graphs. Based on two counterexamples given in Section 5, we observe that the inequality is not always valid for multicyclic graphs. Notations and Preliminaries We first recall some notations and definitions. The degree of a vertex \(u\in V(G)\), denoted by \(\deg_{G}(u)\), is the number of adjacent vertices of \(u\) in graph \(G\). We call a vertex a leaf if its degree is \(1\). If a vertex has degree at least \(2\), then it is an internal vertex. Let \(P_{n}\), \(C_{n}\), \(K_{1,n-1}\) be the \(n\)-vertex path, cycle and star. Define the radius and diameter of \(G\) by \(\operatorname{rad}(G):=\min_{u\in V(G)}\varepsilon_{2}(u)\) and \(\operatorname{diam}(G):=\max_{u\in V(G)}\varepsilon_{2}(u)\), respectively. A vertex that realizes \(\operatorname{rad}(G)\) is called the central vertex. It is well known that tree \(T\) contains one or two central vertices. A path in \(G\) of length \(\operatorname{diam}(G)\) is called the diametrical path. Note that every graph contains at least one diametrical path. A path joining a vertex and its most distance vertex is called a eccentric path. We now focus on Fermat eccentricity. The vertex \(\sigma\) that minimize the sum \(d(\sigma,u)+d(\sigma,v)+d(\sigma,w)\) is called Fermat vertex of vertex set \(\left\{u,v,w\right\}\). Meanwhile, the minimal spanning tree consisting of \(\left\{u,v,w,\sigma\right\}\) is a Fermat tree of \(\left\{u,v,w\right\}\). Note that the Fermat vertex and Fermat tree of a given vertex triplet in a graph may not be unique. 
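The quantities just introduced are easy to evaluate by brute force on small graphs. The sketch below (our own illustration, again assuming networkx; it is not taken from the cited works) computes the Fermat distance, the Fermat eccentricity \(\varepsilon_{3}\), the indices \(F_{1}\) and \(F_{2}\), and checks inequality (1) on a small tree:

```python
import itertools
import networkx as nx

def fermat_distance(G, dist, u, v, w):
    # F_G(u, v, w) = min over sigma of d(sigma, u) + d(sigma, v) + d(sigma, w)
    return min(dist[s][u] + dist[s][v] + dist[s][w] for s in G)

def fermat_eccentricities(G):
    dist = dict(nx.all_pairs_shortest_path_length(G))
    return {u: max(fermat_distance(G, dist, u, v, w)
                   for v, w in itertools.combinations(G, 2)) for u in G}

G = nx.balanced_tree(2, 3)               # a small acyclic graph, so (1) should hold
eps3 = fermat_eccentricities(G)
F1 = sum(eps3[u] ** 2 for u in G)
F2 = sum(eps3[u] * eps3[v] for u, v in G.edges())
n, m = G.number_of_nodes(), G.number_of_edges()
print(F2 / m, F1 / n, F2 / m <= F1 / n)  # the last value is True
```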
We call the vertices that realize \(\varepsilon_{3}(u;G)\) the Fermat eccentric vertices of \(u\) and the corresponding Fermat tree the Fermat eccentric \(u\)-tree. The following lemma shows that the difference of Fermat eccentricity of two adjacent vertices can not exceed \(1\). **Lemma 1**.: _For all \(uv\in E\left(G\right)\), we have_ \[\left|\varepsilon_{3}\left(u\right)-\varepsilon_{3}\left(v\right)\right|\leq 1.\] Proof.: Note that \(u\), \(v\) are adjacent. Hence for any vertices \(x\), \(y\) in \(G\), \(\left|\mathcal{F}(u,x,y)-\mathcal{F}(v,x,y)\right|\leq 1\) holds by the minimum definition of the Fermat distance. Let \(\omega_{1}\), \(\omega_{2}\) be two vertices that realize \(\varepsilon_{3}(u)\). It is obvious that \[\left|\varepsilon_{3}(u)-\mathcal{F}(v,\omega_{1},\omega_{2})\right|\leq 1.\] Similarly, suppose that \(\varepsilon_{3}(v)=\mathcal{F}(v,\omega_{3},\omega_{4})\), we thus have \[\left|\varepsilon_{3}(v)-\mathcal{F}(u,\omega_{3},\omega_{4})\right|\leq 1.\] Then we can know that \[\varepsilon_{3}\left(u\right)-1\leq\mathcal{F}\left(v,\omega_{1},\omega_{2} \right)\leq\varepsilon_{3}\left(v\right)\] and \[\varepsilon_{3}\left(v\right)-1\leq\mathcal{F}\left(u,\omega_{3},\omega_{4} \right)\leq\varepsilon_{3}\left(u\right).\] We thus get the desired inequality \(\left|\varepsilon_{3}\left(v\right)-\varepsilon_{3}\left(u\right)\right|\leq 1\). The following lemma is the corollary of Lemma 2.6 in [22] and will be implicitly applied in our paper. **Lemma 2**.: _Suppose \(T\) is a tree and \(u\in V(T)\). Then the Fermat eccentric \(u\)-tree contains the longest path starting from \(u\). In other words, we can select the farthest vertex from \(u\) as a Fermat eccentric vertex of \(u\)._ In the rest of our paper, much operations on the diametrical path will be added. So for a given path \(P_{d+1}=v_{0}v_{1}\cdots v_{d}\), we set path segment of \(P_{d+1}\) by \(P_{\left[p:q\right]}=v_{p}\cdots v_{q}\), where \(0\leq p<q\leq d\). ## 3 Acyclic Graphs Two basic properties about eccentric path are listed below: 1. If tree has one central vertex, then each eccentric path passes through the central vertex; 2. If tree have two central vertices, then each eccentric path passes through these two vertices. The above properties ensure that for a tree \(T\), all central vertices must be on a diametrical path. Let \(P_{d+1}=v_{0}v_{1}\cdots v_{d}\) be a diametrical path of length \(d\) which contains all central vertices of a given tree \(T\). Now the tree \(T\) can be depicted by Fig. 1 when \(T\) has only one central vertex \(c\), and Fig. 2 when \(T\) has two central vertices \(c_{1}\), \(c_{2}\). Moreover, \(P_{d+1}\) has one central vertex \(c\) when \(d\) is odd, and two central vertices \(c_{1}\) and \(c_{2}\) when \(d\) is even. Let \(T_{v_{i}}\) be the subtree with root vertex \(v_{i}\), as shown in Figs. 1 and 2, where \(i=1,\ldots,d-1\). We denote \(\ell_{i}\) as the maximal distance from \(v_{i}\) to all leaves in \(T_{v_{i}}\), where \(i=1,\ldots,d-1\), that is, \(\ell_{i}=\varepsilon_{2}(v_{i};T_{v_{i}})\). We set \(\ell=\max\{\ell_{i}:i=1,\ldots,d-1\}\) and denote one subtree \(T_{v_{i}}\) with \(\ell_{i}=\ell\) by \(T_{\ell}\). The following lemma is easy to prove since the diametrical path \(P_{d+1}\) is the longest path in tree \(T\). 
**Lemma 3**.: _For all \(i=1,\ldots,d-1\), inequality \(\ell_{i}\leq\min\{i,d-i\}\) holds._ Immediately from Lemma 3, we find the symmetry property of diametrical path, as established below: **Lemma 4**.: _For all \(i=1,\ldots,d-1\), \(\varepsilon_{3}\left(v_{i}\right)=\varepsilon_{3}\left(v_{d-i}\right)\)._ Proof.: Symmetrically, we only need to check that the equation holds for \(i=0,\cdots,\lfloor\frac{d}{2}\rfloor\). We obtain that \[\varepsilon_{3}\left(v_{i}\right) =d\left(v_{i},v_{d}\right)+\max_{k=i+1,\cdots,d-i-1}\left\{\ell_ {k},i\right\}\] \[=d-i+\max_{k=i+1,\cdots,d-i-1}\left\{\ell_{k},d-i\right\}\] \[=d\left(v_{d-i},v_{0}\right)+\max_{k=i+1,\cdots,d-i-1}\left\{\ell_ {k},d-i\right\}\] \[=\varepsilon_{3}\left(v_{d-i}\right).\] From Lemma 1, we give more precise properties on the diametrical path \(P_{d+1}\) of trees. Figure 1: \(G\) expanding at \(P_{d+1}\) (One central vertex case). Figure 2: \(G\) expanding at \(P_{d+1}\) (Two central vertices case). **Lemma 5**.: _For \(i\in\left\{0,\ldots,\left\lfloor\frac{d}{2}\right\rfloor\right\}\), Fermat eccentricity satisfies_ \[\varepsilon_{3}\left(v_{i+1}\right)\leq\varepsilon_{3}\left(v_{i}\right)\leq \varepsilon_{3}\left(v_{i+1}\right)+1.\] _For \(i\in\left\{\left\lfloor\frac{d}{2}\right\rfloor+1,\ldots,d\right\}\), Fermat eccentricity satisfies_ \[\varepsilon_{3}\left(v_{i}\right)\leq\varepsilon_{3}\left(v_{i+1}\right)\leq \varepsilon_{3}\left(v_{i}\right)+1.\] _In other words, for vertices in \(P_{d+1}\), the central vertices reach the minimum Fermat eccentricity._ Proof.: For a given \(v_{i}\), \(i=0,\ldots,\left\lfloor\frac{d}{2}\right\rfloor\), we obtain that \[\varepsilon_{3}\left(v_{i+1}\right) =d\left(v_{i+1},v_{d}\right)+\max_{k=i+2,\cdots,d-i-2}\left\{ \ell_{k},i+1\right\}\] \[=d\left(v_{i},v_{d}\right)+\max_{k=i+2,\cdots,d-i-2}\left\{\ell_ {k}-1,i\right\}\] \[\leq d\left(v_{i},v_{d}\right)+\max_{k=i+1,\cdots,d-i-1}\left\{ \ell_{k},i\right\}=\varepsilon_{3}\left(v_{i}\right).\] By Lemma 3, for \(i=0,\ldots,\left\lfloor\frac{d}{2}\right\rfloor-1\), we deduce that \[\ell_{i+1},\ell_{d-i-1}\leq i+1,\] and hence, \[\varepsilon_{3}\left(v_{i+1}\right)+1 =d\left(v_{i},v_{d}\right)+\max_{k=i+2,\cdots,d-i-2}\left\{\ell_ {k},i+1\right\}\] \[=d\left(v_{i},v_{d}\right)+\max_{k=i+1,\cdots,d-i-1}\left\{\ell_ {k},i+1\right\}\] \[\geq d\left(v_{i},v_{d}\right)+\max_{k=i+1,\cdots,d-i-1}\left\{ \ell_{k},i\right\}=\varepsilon_{3}\left(v_{i}\right).\] By an analogous discussion, it is easy to prove the second assertion of Lemma 5. The formulas in Lemma 5 explain that the Fermat eccentricity of vertex \(v_{i}\) for \(i\in\left\{0,\ldots,d\right\}\) is diminishing from two endpoints to the center of the diametrical path \(P_{d+1}\), so central vertices reach the minimum Fermat eccentricity. **Lemma 6**.: _(1) For all \(uv\in E\left(P_{[c:d-\ell]}\right)\), it satisfies \(\varepsilon_{3}(u)=\varepsilon_{3}(v)\);_ _(2) For all \(uv\in E\left(P_{[0:\ell]}\cup P_{[d-\ell:d]}\right)\), it satisfies \(\left|\varepsilon_{3}(u)-\varepsilon_{3}(v)\right|=1\)._ Proof.: Suppose that \(u,v\in\left\{v_{0},\ldots,v_{\left\lfloor\frac{d}{2}\right\rfloor}\right\}\) and \(u\) is more distant from the center. Let us proceed with the discussion by considering the following two cases: 1. \(d\left(v_{0},u\right)<\ell\). We have \[\varepsilon_{3}\left(u\right)=d\left(u,v_{d}\right)+\ell,\] \[\varepsilon_{3}\left(v\right)=d\left(v,v_{d}\right)+\ell= \varepsilon_{3}\left(u\right)-1.\] 2. \(d\left(v_{0},u\right)\geq\ell\). 
We have \[\varepsilon_{3}\left(u\right)=\varepsilon_{3}\left(v\right)=d.\] For \(u,v\in\left\{v_{\left\lfloor\frac{d}{2}\right\rfloor},\ldots,v_{d}\right\}\), we have the similar conclusion. Above all, when \(uv\in E\left(P_{[c:d-\ell]}\right)\), we obtain \(\varepsilon_{3}(u)=\varepsilon_{3}(v)\), and when \(uv\in E\left(P_{[0:\ell]}\cup P_{[d-\ell:d]}\right)\), we obtain \(\left|\varepsilon_{3}(u)-\varepsilon_{3}(v)\right|=1\). Lemma 6 divides edges of \(P_{d+1}\) into two parts and is the key lemma that will be used to prove Theorem 1. The following lemma determines the behaviors of subtree \(T_{v_{i}}\), \(i=1,2,\ldots,d-1\). **Lemma 7**.: _For any vertex \(u\in T_{v_{i}}\), \(i=1,\ldots,d-1\), it holds that_ \[\varepsilon_{3}\left(u\right)=d\left(u,v_{i}\right)+\varepsilon_{3}\left(v_{i} \right). \tag{2}\] Proof.: By Lemma 3, for any vertex \(u\in T_{v_{i}}\), \(i=1,\ldots,\lfloor\frac{d}{2}\rfloor\), it is obvious that one eccentric vertex of \(u\) is \(v_{d}\), and another one can be chosen as \(v_{0}\) when \(\ell_{i}=\ell\) and in subtree \(T_{\ell}\) when \(\ell>\ell_{i}\). In other words, we have \[\varepsilon_{3}\left(u\right)=d\left(u,v_{d}\right)+\max_{t=i+1,\cdots,d-i-1} \left\{\ell_{t},i\right\}=d\left(u,v_{i}\right)+\varepsilon_{3}\left(v_{i} \right).\] Similarly, Eq. (2) holds for \(i=\lceil\frac{d}{2}\rceil,\ldots,d-1\). **Theorem 1**.: _For any acyclic graph \(T\), we have_ \[\frac{\sum_{uv\in E\left(T\right)}\varepsilon_{3}\left(u\right)\varepsilon_{ 3}\left(v\right)}{m}\leq\frac{\sum_{u\in V\left(T\right)}\varepsilon_{3}^{2} \left(u\right)}{n}, \tag{3}\] _where the equation holds if and only if \(T\cong P_{n}\)._ Proof.: In this proof, we always assume that \(u\) is further from the center than \(v\) for any given edge \(uv\in E(T)\). Whether the selected edge \(uv\) is on the diametrical path \(P_{d+1}\) leads to two cases as follows. 1. Edge \(uv\in E\left(T_{v_{k}}\setminus\{v_{k}\}\right)\), \(k=1,2,\ldots,d-1\). By Lemma 7, we have \(\varepsilon_{3}\left(u\right)-\varepsilon_{3}\left(v\right)=1\) and thus \[\frac{\sum_{uv\in E\left(T_{v_{k}}\right)}\varepsilon_{3}\left(u \right)\varepsilon_{3}\left(v\right)}{m}-\frac{\sum_{u\in V\left(T_{v_{k}} \setminus\{v_{k}\}\right)}\varepsilon_{3}^{2}\left(u\right)}{n}\] \[=\frac{n\sum_{uv\in E\left(T_{v_{k}}\right)}\varepsilon_{3}\left( u\right)\varepsilon_{3}\left(v\right)-\left(n-1\right)\sum_{u\in V\left(T_{v_{k}} \setminus\{v_{k}\}\right)}\varepsilon_{3}^{2}\left(u\right)}{n\left(n-1\right)}\] \[=\frac{n\sum_{u\in V\left(T_{v_{k}}\setminus\{v_{k}\}\right)} \varepsilon_{3}\left(u\right)\varepsilon_{3}\left(u\right)-1-\left(n-1\right) \sum_{u\in V\left(T_{v_{k}}\setminus\{v_{k}\}\right)}\varepsilon_{3}^{2} \left(u\right)}{n\left(n-1\right)}\] \[=\frac{\sum_{u\in V\left(T_{v_{k}}\setminus\{v_{k}\}\right)} \varepsilon_{3}\left(u\right)\left(\varepsilon_{3}\left(u\right)-n\right)}{n \left(n-1\right)}<0.\] Hence, \[\frac{\sum_{uv\in E\left(T_{v_{k}}\right)}\varepsilon_{3}\left(u\right) \varepsilon_{3}\left(v\right)}{m}\leq\frac{\sum_{u\in V\left(T_{v_{k}}\setminus \{v_{k}\}\right)}\varepsilon_{3}^{2}\left(u\right)}{n}\] (4) 2. Edge \(uv\in E\left(P_{d+1}\right)\). 
By Lemma 6, the difference can be rewritten by \[L =\frac{\sum_{uv\in E\left(P_{d+1}\right)}\varepsilon_{3}\left(u \right)\varepsilon_{3}\left(v\right)}{m}-\frac{\sum_{u\in V\left(P_{d+1} \right)}\varepsilon_{3}^{2}\left(u\right)}{n}\] \[=\frac{n\sum_{uv\in E\left(P_{d+1}\right)}\varepsilon_{3}\left( u\right)\varepsilon_{3}\left(v\right)-\left(n-1\right)\sum_{u\in V\left(P_{d+1} \right)}\varepsilon_{3}^{2}\left(u\right)}{n\left(n-1\right)}\] \[=\frac{n\sum_{uv\in E\left(P_{[d:d-\ell]}\right)}\varepsilon_{3} \left(u\right)\varepsilon_{3}\left(v\right)-\left(n-1\right)\sum_{u\in V\left( P_{[d:d-\ell]}\right)}\varepsilon_{3}^{2}\left(u\right)}{n\left(n-1\right)}\] \[\quad+\frac{n\sum_{uv\in E\left(P_{[0:d\cup P_{[d-\ell d]}]}\right)}\varepsilon_{3}\left(u\right)\varepsilon_{3}\left(v\right)- \left(n-1\right)\sum_{u\in V\left(P_{[0:d\cup P_{[d-\ell d]}\setminus\{v_{ \ell},v_{d-\ell}\}}\right)}\varepsilon_{3}^{2}\left(u\right)}{n\left(n-1 \right)}.\] We consider the following two subcases. Notice that \(\varepsilon_{3}(u)\leq n-1\) and \(d+1\leq n\) always hold. 1. Acyclic graph \(G\) has only one central vertex \(c\). It derives that \[L =\frac{n\sum_{u\in V\left(P_{[c:d-\ell]}\right)\setminus\left\{c \right\}}\varepsilon_{3}^{2}\left(u\right)-\left(n-1\right)\sum_{u\in V\left(P_ {[c:d-\ell]}\right)}\varepsilon_{3}^{2}\left(u\right)}{n\left(n-1\right)}\] (5) \[\quad+\frac{-\left(n-1\right)\sum_{u\in V\left(P_{[0:d]}\cup P_{[ d-\ell:d]}\setminus\left\{v_{\ell},v_{d-\ell}\right\}\right)}\varepsilon_{3} \left(u\right)\left(\varepsilon_{3}\left(u\right)-1\right)}{n\left(n-1\right)}\] \[\quad+\frac{-\left(n-1\right)\sum_{u\in V\left(P_{[0:d]}\cup P_{[ d-\ell:d]}\setminus\left\{v_{\ell},v_{d-\ell}\right\}\right)}\varepsilon_{3}^{2} \left(u\right)}{n\left(n-1\right)}\varepsilon_{3}^{2}\left(u\right)}\] \[\quad=\frac{-n\sum_{u\in V\left(P_{[0:d]}\cup P_{[d-\ell:d]} \setminus\left\{v_{\ell},v_{d-\ell}\right\}\right)}\varepsilon_{3}\left(u \right)}{n\left(n-1\right)}\] \[\quad=\frac{(d-2l+1)d^{2}-nd^{2}+\sum_{u\in V\left(P_{[0:d]}\cup P _{[d-\ell:d]}\setminus\left\{v_{\ell},v_{d-\ell}\right\}\right)}\varepsilon_{3 }\left(u\right)\left(\varepsilon_{3}\left(u\right)-n\right)}{n\left(n-1\right)}\] \[\leq 0.\] Hence, \[\frac{\sum_{uv\in E\left(P_{d+1}\right)}\varepsilon_{3}\left(u\right) \varepsilon_{3}\left(v\right)}{m}\leq\frac{\sum_{u\in V\left(P_{d+1}\right)} \varepsilon_{3}^{2}\left(u\right)}{n}.\] 2. Acyclic graph \(G\) has two central vertices \(c_{1}\), \(c_{2}\). 
We see that \[L =\frac{n\sum_{u\in V\left(P_{[c:d-\ell]}\right)\setminus\left\{c_ {1}\right\}}\varepsilon_{3}^{2}\left(u\right)-\left(n-1\right)\sum_{u\in V \left(P_{[c:d-\ell]}\right)}\varepsilon_{3}^{2}\left(u\right)}{n\left(n-1\right)}\] (6) \[\quad+\frac{n\sum_{u\in V\left(P_{[0:d]}\cup P_{[d-\ell:d]} \setminus\left\{v_{\ell},v_{d-\ell}\right\}\right)}\varepsilon_{3}\left(u \right)\left(\varepsilon_{3}\left(u\right)-1\right)}{-\left(n-1\right)\sum_{u \in V\left(P_{[0:d]}\cup P_{[d-\ell:d]}\setminus\left\{v_{\ell},v_{d-\ell} \right\}\right)}\varepsilon_{3}^{2}\left(u\right)}\] \[\quad\quad+\frac{\sum_{u\in V\left(P_{[c:d-\ell]}\right)} \varepsilon_{3}^{2}\left(u\right)-n\varepsilon_{3}^{2}\left(c_{1}\right)+\sum_{u \in V\left(P_{[0:d]}\cup P_{[d-\ell:d]}\setminus\left\{v_{\ell},v_{d-\ell} \right\}\right)}\varepsilon_{3}^{2}\left(u\right)}{n\left(n-1\right)}\] \[=\frac{-n\sum_{u\in V\left(P_{[0:d]}\cup P_{[d-\ell:d]} \setminus\left\{v_{\ell},v_{d-\ell}\right\}\right)}\varepsilon_{3}\left(u \right)}{n\left(n-1\right)}\] \[=\frac{(d-2l+1)d^{2}-nd^{2}+\sum_{u\in V\left(P_{[0:d]}\cup P_{[ d-\ell:d]}\setminus\left\{v_{\ell},v_{d-\ell}\right\}\right)}\varepsilon_{3}\left(u \right)\left(\varepsilon_{3}\left(u\right)-n\right)}{n\left(n-1\right)}\] \[\leq 0.\] Hence, \[\frac{\sum_{uv\in E\left(P_{d+1}\right)}\varepsilon_{3}\left(u\right) \varepsilon_{3}\left(v\right)}{m}\leq\frac{\sum_{u\in V\left(P_{d+1}\right)} \varepsilon_{3}^{2}\left(u\right)}{n}.\] Combining above two cases together yields \[\frac{\sum_{uv\in E\left(T\right)}\varepsilon_{3}\left(u\right) \varepsilon_{3}\left(v\right)}{m}\leq\frac{\sum_{u\in V\left(T\right)} \varepsilon_{3}^{2}\left(u\right)}{n}.\] Suppose that \(T\cong P_{n}\). We thus have \(T_{v_{k}}=\left\{v_{k}\right\}\), \(k=1,2,\ldots,d-1\). Consequently, \[\frac{\sum_{uv\in E\left(T\right)}\varepsilon_{3}\left(u\right) \varepsilon_{3}\left(v\right)}{n-1}-\frac{\sum_{u\in V\left(T\right)} \varepsilon_{3}^{2}\left(u\right)}{n}=\frac{\left(n-1\right)d^{2}}{n-1}-\frac{ nd^{2}}{n}=0.\] On the other hand, if (3) is an equation, then (5) and (6) must be equations. This means \((d-2\ell+1)d^{2}-nd^{2}=0\), and we thus conclude \(\ell=0\), which implies that \(T\) is a path. Theorem 1 partly reflects the fact that \(F_{1}(T)\) and \(F_{2}(T)\) may share some extremal bounds. We thus give the following theorem. **Theorem 2**.: _Suppose \(T\) is a \(n\)-vertex tree, where \(n\geq 3\). Then we have_ \[F_{1}(K_{1,n-1})\leq F_{1}(T)\leq F_{1}(P_{n}) \tag{7}\] _and_ \[F_{2}(K_{1,n-1})\leq F_{2}(T)\leq F_{2}(P_{n}). \tag{8}\] Proof.: We first check the upper bounds of (7) and (8). If \(T\cong P_{n}\), then \(\varepsilon_{3}(u)\) reach its maximum value \(n-1\) for all \(u\in V(T)\) and thus the right sides of (7) and (8) hold. For the lower bound, we claim that the left sides of (7) and (8) hold if the diameter of \(T\) is \(2\), equivalently, the maximal degree of \(T\) is \(n-2\). Suppose that the diameter of \(T\) is \(d\) and \(P_{d+1}=v_{0}v_{1}\cdots v_{d}\) is the diametrical path. Note that \(v_{0}\) and \(v_{d}\) are leaves. Then the vertices with maximal degree must be internal vertices. We choose one maximal degree vertex \(v_{i}\) of \(P_{d}\) and transform \(T\) into another tree \(T^{\prime}\) by deleting edge \(v_{d-1}v_{d}\) and attach \(v_{d}\) to\(v_{i}\) by an edge. From the definition of diameter and Lemma 7, we obtain \(\varepsilon_{3}(u;T)\geq\varepsilon_{3}(u;T^{\prime})\) for all vertices if we consider vertices and their transformations in pairs. 
We thus get \[F_{1}(T^{\prime})-F_{1}(T)=\sum_{u\in V(T^{\prime})}\varepsilon_{3}^{2}(u)-\sum_{u\in V(T)}\varepsilon_{3}^{2}(u)\leq 0\] and \[F_{2}(T^{\prime})-F_{2}(T)=\sum_{uv\in E(T^{\prime})}\varepsilon_{3}(u)\varepsilon_{3}(v)-\sum_{uv\in E(T)}\varepsilon_{3}(u)\varepsilon_{3}(v)\leq 0.\] We can repeat this transformation a sufficient number of times until we arrive at a tree with maximal degree \(n-2\), that is, we arrive at the star \(K_{1,n-1}\), which is the only tree with the smallest diameter \(2\). Recall that the transformation does not increase the two Fermat-Zagreb indices. Therefore, we have \(F_{i}(K_{1,n-1})\leq F_{i}(T)\), \(i=1,2\). ## 4 Unicyclic Graphs As shown in Fig. 3, a unicyclic graph \(G\) consists of the unique cycle \(C_{g}\) with \(g\) vertices and the maximal subtrees \(T_{x_{i}}\) with root vertices \(x_{i}\) on \(C_{g}\), where \(i=1,\ldots,g\). The following key lemma is from [15]. Figure 3: Unicyclic graph \(G\). **Lemma 8**.: _Let \(n\geq 2\) and \(x_{1},\ldots,x_{n}\) be positive integers such that \(\left|x_{i}-x_{i+1}\right|\leq 1\) for each \(i=1,\ldots,n\), where \(x_{1}=x_{n+1}\). Then \(\sum_{i=1}^{n}x_{i}^{2}\geq\sum_{i=1}^{n}x_{i}x_{i+1}\)._ We are now in a position to prove the inequality for unicyclic graphs. **Theorem 3**.: _When \(G\) is a unicyclic graph, we have_ \[\frac{\sum_{uv\in E(G)}\varepsilon_{3}\left(u\right)\varepsilon_{3}\left(v\right)}{m}\leq\frac{\sum_{u\in V(G)}\varepsilon_{3}^{2}\left(u\right)}{n}.\] Proof.: We prove it by examining the difference of the two sides. Note that \(n=m\) for unicyclic graphs. We thus have \[\frac{\sum_{uv\in E(G)}\varepsilon_{3}\left(u\right)\varepsilon_{3}\left(v\right)}{m}-\frac{\sum_{u\in V(G)}\varepsilon_{3}^{2}\left(u\right)}{n}=\frac{\sum_{uv\in E(G)}\varepsilon_{3}\left(u\right)\varepsilon_{3}\left(v\right)-\sum_{u\in V(G)}\varepsilon_{3}^{2}\left(u\right)}{n}.\] It is equivalent to prove that \[\sum_{uv\in E(G)}\varepsilon_{3}\left(u\right)\varepsilon_{3}\left(v\right)-\sum_{u\in V(G)}\varepsilon_{3}^{2}\left(u\right)\leq 0.\] Clearly, this difference splits over the cycle and the subtrees as \[\sum_{uv\in E(G)}\varepsilon_{3}\left(u\right)\varepsilon_{3}\left(v\right)-\sum_{u\in V(G)}\varepsilon_{3}^{2}\left(u\right)=\left(\sum_{uv\in E(C_{g})}\varepsilon_{3}\left(u\right)\varepsilon_{3}\left(v\right)-\sum_{u\in V(C_{g})}\varepsilon_{3}^{2}\left(u\right)\right)+\sum_{i=1}^{g}\left(\sum_{uv\in E\left(T_{x_{i}}\right)}\varepsilon_{3}\left(u\right)\varepsilon_{3}\left(v\right)-\sum_{u\in V\left(T_{x_{i}}\setminus\{x_{i}\}\right)}\varepsilon_{3}^{2}\left(u\right)\right).\] We can get that \[\sum_{uv\in E(C_{g})}\varepsilon_{3}\left(u\right)\varepsilon_{3}\left(v\right)-\sum_{u\in V(C_{g})}\varepsilon_{3}^{2}\left(u\right)\leq 0\] by Lemmas 1 and 8. We now just need to prove \[\sum_{uv\in E\left(T_{x_{i}}\right)}\varepsilon_{3}\left(u\right)\varepsilon_{3}\left(v\right)-\sum_{u\in V\left(T_{x_{i}}\setminus\{x_{i}\}\right)}\varepsilon_{3}^{2}\left(u\right)\leq 0\] for any fixed \(T_{x_{i}}\), \(i=1,\ldots,g\). For a given \(x_{i}\), \(i=1,\ldots,g\), we denote \(x=x_{i}\) for convenience. In the remaining part of this proof, suppose that \(u\) is more distant from \(x\) than \(v\) for all \(uv\in E(G)\). Let us now investigate \(T_{x}\) in two cases. **Case I:**\(x\) is on one of the diametrical paths of \(T_{x}\), see Fig. 4. Suppose that \(P_{d+1}=v_{0}v_{1}\cdots v_{d}\) is one diametrical path of \(T_{x}\) which contains all central vertices of \(T_{x}\). Set \(c=v_{t}=v_{\lfloor\frac{d}{2}\rfloor}\) when \(T_{x}\) has only one central vertex \(c\), and \(c_{1}=v_{t}=v_{\lfloor\frac{d}{2}\rfloor}\) when \(T_{x}\) has two central vertices \(c_{1},c_{2}\). Without loss of generality, let \(x=v_{p}\) be on the left side of \(c\), which means that \(p\leq t\). We denote by \(v_{q}\) the vertex symmetric to \(v_{p}\) about the center, i.e., \(q=d-p\). 
Let \(T_{v_{j}}\) be the subtree of \(T_{x}\) with root vertex \(v_{j}\). We also remark \(\ell_{j}=\varepsilon_{2}(v_{j};T_{v_{j}})\) and \(\ell=\max\{\ell_{j}:j=1,\ldots,d-1\}\). Let \(\omega^{\prime}\), \(\omega^{\prime\prime}\) be the vertices that realizes \(\varepsilon_{3}(x;G\setminus T_{x})\) and \(T_{\ell}\) be one subtree \(T_{v_{j}}\) with \(\ell_{j}=\ell\). Denote one vertex in \(T_{\ell}\) which is farthest from the root vertex of \(T_{\ell}\) by \(\bar{\omega}\). 1. \(uv\in E\left(T_{v_{j}}\right)\), \(j=1,\ldots,d-1\). Recall that \(\ell_{j}\leq\min\{j,d-j\}\). For all \(w\in V\left(T_{v_{j}}\right)\), we can select the Fermat eccentric vertices of \(\varepsilon_{3}(w)\) belonging to set \(\{v_{0},v_{d},\omega^{\prime},\omega^{\prime\prime},\bar{\omega}\}\) when \(\ell_{j}<\ell\), and \(\{v_{0},v_{d},\omega^{\prime},\omega^{\prime\prime}\}\) when \(\ell_{j}=\ell\). In other words, \(\varepsilon_{3}(w)=d(w,v_{j})+\varepsilon_{3}(v_{j})\) and hence \(\varepsilon_{3}\left(u\right)-\varepsilon_{3}\left(v\right)=1\). So, for any \(uv\in E\left(T_{v_{j}}\right)\), \(j=1\ldots,d-1\), we observe that \[\sum_{uv\in E\left(T_{v_{j}}\right)}\varepsilon_{3}\left(u\right) \varepsilon_{3}\left(v\right)-\sum_{u\in V\left(T_{v_{j}}\backslash\left\{v_{j} \right\}\right)}\varepsilon_{3}^{2}\left(u\right)\] \[=-\sum_{u\in V\left(T_{v_{j}}\backslash\left\{v_{j}\right\} \right)}\varepsilon_{3}\left(u\right)\leq 0.\] 2. \(uv\in E\left(P_{\left[0:p\right]}\cup P_{\left[q:d\right]}\right)\). For \(w\in V\left(P_{\left[0:p\right]}\right)\), the farthest vertex from \(w\) (also one Fermat eccentric vertex of \(w\)) belongs to \(\left\{v_{d},\omega^{\prime},\omega^{\prime\prime}\right\}\). Similarly, for \(w\in V\left(P_{\left[q:d\right]}\right)\), the farthest vertex from \(w\) (also one Fermat eccentric vertex of \(w\)) belongs to \(\left\{v_{0},\omega^{\prime},\omega^{\prime\prime}\right\}\). The two properties ensure that \(0\leq\varepsilon_{3}\left(u\right)-\varepsilon_{3}\left(v\right)\leq 1\). Hence, \[\sum_{uv\in E\left(P_{\left[0:p\right]}\cup P_{\left[q:d\right]} \right)}\varepsilon_{3}\left(u\right)\varepsilon_{3}\left(v\right)-\sum_{u\in V \left(P_{\left[0:p\right]}\cup P_{\left[q:d\right]}\backslash\left\{v_{p},v_{ q}\right\}\right)}\varepsilon_{3}^{2}\left(u\right)\] \[\leq\sum_{u\in V\left(P_{\left[0:p\right]}\cup P_{\left[q:d \right]}\backslash\left\{v_{p},v_{q}\right\}\right)}\varepsilon_{3}^{2}\left(u \right)-\sum_{u\in V\left(P_{\left[0:p\right]}\cup P_{\left[q:d\right]} \backslash\left\{v_{p},v_{q}\right\}\right)}\varepsilon_{3}^{2}\left(u\right)=0.\] 3. \(uv\in E\left(P_{\left[p:q\right]}\right)\). 1. For any \(uv\in E\left(P_{\left[t:q\right]}\right)\), the farthest Fermat eccentric vertex of \(u\in V\left(P_{\left[p:q\right]}\right)\) is in set \(\left\{v_{0},\omega^{\prime},\omega^{\prime\prime}\right\}\). This property leads to \(0\leq\varepsilon_{3}\left(u\right)-\varepsilon_{3}\left(v\right)\leq 1\). Hence, \[\sum_{uv\in E\left(P_{\left[t:q\right]}\right)}\varepsilon_{3}\left(u\right) \varepsilon_{3}\left(v\right)-\sum_{u\in V\left(P_{\left[t:q\right]} \backslash\left\{v_{t}\right\}\right)}\varepsilon_{3}^{2}\left(u\right)\leq 0.\] 2. For any \(uv\in E\left(P_{\left[p:t\right]}\right)\), we have \(-1\leq\varepsilon_{3}\left(u\right)-\varepsilon_{3}\left(v\right)\leq 1\) by Lemma 1. We need to check what occurs when \(\varepsilon_{3}\left(u\right)-\varepsilon_{3}\left(v\right)=-1\). Recall that \(u\) is more distant from \(x\) than \(v\). 
Suppose that there exists an edge \(v_{k}v_{k+1}\in E\left(P_{\left[p:t\right]}\right)\) such that \(\varepsilon_{3}\left(v_{k}\right)-\varepsilon_{3}\left(v_{k+1}\right)=1\). It is obvious that \(\varepsilon_{3}\left(v_{k}\right)=\varepsilon_{3}\left(v_{k};T_{x}\right)\) and \(\varepsilon_{3}\left(v_{k+1}\right)=\varepsilon_{3}\left(v_{k+1};T_{x}\right)\). Moreover, the Fermat eccentric vertices of \(\varepsilon_{3}\left(v_{k}\right)\) and \(\varepsilon_{3}\left(v_{k+1}\right)\) are both \(v_{d}\) and \(\bar{\omega}\). We thus have \(\ell>\max\{k-p+\ell_{p},k\}\) and then find that \[\varepsilon_{3}\left(v_{d-k}\right)-\varepsilon_{3}\left(v_{d-k-1}\right)=1.\] Now we can take \(v_{k}v_{k+1}\) and \(v_{d-k-1}v_{d-k}\) into consideration pairwisely. It is clear that \[\varepsilon_{3}(v_{k})\varepsilon_{3}(v_{k+1})-\varepsilon_{3}^{2}(v_{k+1})+ \varepsilon_{3}(v_{d-k-1})\varepsilon_{3}(v_{d-k})-\varepsilon_{3}^{2}(v_{d-k})\] \[=\varepsilon_{3}(v_{k+1})-\varepsilon_{3}(v_{d-k})=-1<0.\] Figure 4: \(T_{x}\) expanding at \(P_{d+1}\). So we get the inequation \[\sum_{uv\in E\left(P_{[p:q]}\right)}\varepsilon_{3}\left(u\right) \varepsilon_{3}\left(v\right)-\sum_{u\in V\left(P_{[p:q]}\backslash\left\{x \right\}\right)}\varepsilon_{3}^{2}\left(u\right)\leq 0.\] Combining all three subcases in Case I together yields \[\sum_{uv\in E\left(T_{x}\right)}\varepsilon_{3}\left(u\right) \varepsilon_{3}\left(v\right)-\sum_{u\in V\left(T_{x}\backslash\left\{x \right\}\right)}\varepsilon_{3}^{2}\left(u\right)\leq 0.\] **Case II:**\(x\) is not on any diametrical paths of the \(T_{x}\). Then we can display \(T_{x}\) in Fig. 5. Set \(P^{\prime}=v_{p}v_{d+1}v_{d+2}\cdots v_{s+1}\), and then denote \(T_{v_{p}}=\bigcup_{j=d+1}^{s+1}T_{v_{j}}\cup P^{\prime}\). Some notations utilized in Case I are no longer reiterated. 1. \(uv\in E\left(T_{x}\setminus T_{v_{p}}\right)\). We claim that \[\sum_{uv\in E\left(T_{x}\backslash T_{v_{p}}\right)}\varepsilon_{3}\left(u \right)\varepsilon_{3}\left(v\right)-\sum_{u\in V\left(T_{x}\backslash T_{v_{ p}}\right)}\varepsilon_{3}^{2}\left(u\right)\leq 0.\] Similar to the discussion in Case I, we have the following: 1. For \(uv\in T_{v_{j}}\), \(j=1,2,\ldots,p-1,p+1,\ldots d-1\), we have \(\varepsilon_{3}(u)-\varepsilon_{3}(v)=1\). 2. For \(uv\in E\left(P_{[0:p]}\cup P_{[q:d]}\right)\), we have \(0\leq\varepsilon_{3}(u)-\varepsilon_{3}(v)\leq 1\). 3. For \(uv\in E\left(P_{[pq:q]}\right)\), we focus on the edge \(uv\) such that \(\varepsilon_{3}(u)-\varepsilon_{3}(v)=-1\). Suppose that there exists a edge \(v_{k}v_{k+1}\) such that \(\varepsilon_{3}\left(v_{k}\right)-\varepsilon_{3}\left(v_{k+1}\right)=1\), \(k\in\left\{p,\ldots,t\right\}\). We can still consider edge \(v_{d-k-1}v_{d-k}\) with \(\varepsilon_{3}\left(v_{d-k}\right)-\varepsilon_{3}\left(v_{d-k-1}\right)=1\), and then get \[\sum_{uv\in E\left(P_{[p:q]}\right)}\varepsilon_{3}\left(u\right) \varepsilon_{3}\left(v\right)-\sum_{u\in V\left(P_{[pq:q]}\backslash\left\{x \right\}\right)}\varepsilon_{3}^{2}\left(u\right)\leq 0.\] In fact, the validity of the above three statements is obvious for the following reasons: In the discussion of Case I, our focus solely lies on determining the precise location of Fermat eccentric vertices, without relying on the unicycle property of \(G\backslash T_{x}\). 
This approach ensures the feasibility of our analysis in Case II when substituting \(G\backslash T_{x}\) and \(T_{v_{p}}\) in Figure 4 with \(T_{v_{p}}\cup G\backslash T_{x}\) and single vertex \(\left\{v_{p}\right\}\) in Figure 5, respectively. So our claim holds. Figure 5: \(T_{x}\) expanding at \(P_{d+1}\). Here let \(x=v_{s+1}\) for notational convenience. 2. We now investigate edges in \(E\left(T_{v_{j}}\right)\), \(j=d+1,\ldots,s+1\). Recall that \(x\) is not on any diametrical path. We have \(\ell_{p}=\varepsilon_{2}(v_{p};T_{v_{p}})<p\). Furthermore, for any \(uv\in E\left(T_{v_{j}}\right)\), \(j=d+1,\ldots,s+1\), the Fermat eccentric vertices of \(u\) and \(v\) do not belong to \(V\left(T_{v_{p}}\right)\). Note that the distant between \(u\) and \(x\) is greater than that between \(v\) and \(x\). Hence we have \[\varepsilon_{3}\left(u\right)=\varepsilon_{3}\left(v_{j}\right)+d\left(u,v_{j}\right)\] and \[\varepsilon_{3}\left(v\right)=\varepsilon_{3}\left(v_{j}\right)+d\left(v,v_{ j}\right)=\varepsilon_{3}\left(u\right)-1.\] We derive that \[\sum_{uv\in E\left(T_{v_{j}}\right)}\varepsilon_{3}\left(u\right) \varepsilon_{3}\left(v\right)-\sum_{u\in V\left(T_{v_{j}}\backslash\left\{v_{ j}\right\}\right)}\varepsilon_{3}^{2}\left(u\right)\] \[=-\sum_{u\in V\left(T_{v_{j}}\backslash\left\{v_{j}\right\} \right)}\varepsilon_{3}\left(u\right)\leq 0\] for \(j=d+1,\ldots,s+1\). 3. For \(uv\in E\left(P^{\prime}\right)\), note that \(-1\leq\varepsilon_{3}(u)-\varepsilon_{3}(v)\leq 1\) from Lemma 1. Let us deal with the following two subcases: 1. For all \(uv\in E\left(P^{\prime}\right)\), \(0\leq\varepsilon_{3}\left(u\right)-\varepsilon_{3}\left(v\right)\leq 1\). It is obvious that \[\sum_{uv\in E\left(P^{\prime}\right)}\varepsilon_{3}\left(u\right)\varepsilon_{ 3}\left(v\right)-\sum_{u\in V\left(P^{\prime}\backslash\left\{x\right\} \right)}\varepsilon_{3}^{2}\left(u\right)\leq 0.\] 2. There exists an edge \(v_{k}v_{k+1}\in E\left(P^{\prime}\right)\) such that \(\varepsilon_{3}\left(v_{k}\right)-\varepsilon_{3}\left(v_{k+1}\right)=-1\) and \(k\) is maximum. Recall that \(v_{k+1}\) is more distant from \(x\) than \(v_{k}\) and \(\ell_{p}<\ell\). So the Fermat eccentric vertices of \(v_{k}\) and \(v_{k+1}\) must be \(\omega^{\prime}\) and \(\omega^{\prime\prime}\). This statement is also true for all \(uv\in v_{p}v_{d+1}\cdots v_{k+1}\), that is to say, \(\varepsilon_{3}\left(u\right)-\varepsilon_{3}\left(v\right)=-1\). For any vertex \(u\) in \(P_{[0:p]}\), it now becomes evident that the Fermat eccentric vertices of \(u\) is also \(\omega^{\prime}\) and \(\omega^{\prime\prime}\). We thus have \(\varepsilon_{3}\left(u\right)-\varepsilon_{3}\left(v\right)=1\) for all \(uv\in P_{[0:p]}\). Taking all the edges \(uv\in E\left(P_{[0:k-d]}\right)\) and \(P^{\prime}\) into consideration, we have \[\sum_{uv\in E\left(P^{\prime}\cup P_{[0:k-d]}\right)}\varepsilon_{3} \left(u\right)\varepsilon_{3}\left(v\right)-\sum_{u\in V\left(\left(P_{[0:k-d] }\backslash\left\{v_{s-d}\right\}\right)\cup\left(P^{\prime}\backslash\left\{v _{p}\right\}\right)\right)}\varepsilon_{3}^{2}\left(u\right)\] \[=\sum_{u\in V\left(P^{\prime}\backslash\left\{v_{p}\right\} \right)}\varepsilon_{3}\left(u\right)-\sum_{u\in V\left(\left(P_{[0:k-d]} \backslash\left\{v_{k-d}\right\}\right)\right)}\varepsilon_{3}\left(u\right) \leq 0,\] where the last inequality holds due to \(\varepsilon_{3}(v_{j})\geq\varepsilon_{3}(v_{k+1-j})\) for all \(j=0,1,2,\ldots,k-d\). 
Note that although \(\varepsilon_{3}(u)-\varepsilon_{3}(v)=-1\) may occur in both paths \(P_{[p:t]}\) and \(P^{\prime}\), the vertex sets \(V\left(P_{[p:q]}\right)\), \(V\left(P^{\prime}\right)\) and \(V\left(P_{[0:k-d]}\right)\) discussed above are pairwise disjoint. Therefore, we conclude that \[\sum_{uv\in E\left(T_{x}\right)}\varepsilon_{3}\left(u\right)\varepsilon_{3}\left(v\right)-\sum_{u\in V\left(T_{x}\backslash\left\{x\right\}\right)}\varepsilon_{3}^{2}\left(u\right)\leq 0.\] In summary, all cases have been verified, and inequality (1) holds for unicyclic graphs. ## 5 Multicyclic Graphs In this section, we give two counterexamples which show that inequality (1) does not always apply to general graphs. We first give a bicyclic graph with \(3x+6\) vertices, shown in Fig. 6. One can easily compute that \[\frac{\sum\limits_{u\in V(\tilde{G}_{2,x})}\varepsilon_{3}^{2}(u)}{n(\tilde{G}_{2,x})}-\frac{\sum\limits_{uv\in E(\tilde{G}_{2,x})}\varepsilon_{3}(u)\varepsilon_{3}(v)}{m(\tilde{G}_{2,x})}=\frac{-\frac{1}{2}x^{3}+31x^{2}+173x+55}{(3x+6)(3x+7)}.\] We check that \[\frac{\sum\limits_{u\in V(\tilde{G}_{2,64})}\varepsilon_{3}^{2}(u)}{n(\tilde{G}_{2,64})}-\frac{\sum\limits_{uv\in E(\tilde{G}_{2,64})}\varepsilon_{3}(u)\varepsilon_{3}(v)}{m(\tilde{G}_{2,64})}>0\] and \[\frac{\sum\limits_{u\in V(\tilde{G}_{2,65})}\varepsilon_{3}^{2}(u)}{n(\tilde{G}_{2,65})}-\frac{\sum\limits_{uv\in E(\tilde{G}_{2,65})}\varepsilon_{3}(u)\varepsilon_{3}(v)}{m(\tilde{G}_{2,65})}<0.\] We next consider a graph with multiple cycles, as shown in Fig. 7. Without difficulty, the difference \[\frac{\sum\limits_{u\in V(G_{k,x})}\varepsilon_{3}^{2}(u)}{n(G_{k,x})}-\frac{\sum\limits_{uv\in E(G_{k,x})}\varepsilon_{3}(u)\varepsilon_{3}(v)}{m(G_{k,x})}\] can be computed in closed form. Figure 6: Graph \(\tilde{G}_{2,x}\). Figure 7: Graph \(G_{k,x}\) (\(k\geq 3\)). It is clear that \[\frac{\sum\limits_{u\in V(G_{k,x})}\varepsilon_{3}^{2}(u)}{n(G_{k,x})}-\frac{\sum\limits_{uv\in E(G_{k,x})}\varepsilon_{3}(u)\varepsilon_{3}(v)}{m(G_{k,x})}>0\] when \(x=0\), while \[\frac{\sum\limits_{u\in V(G_{k,x})}\varepsilon_{3}^{2}(u)}{n(G_{k,x})}-\frac{\sum\limits_{uv\in E(G_{k,x})}\varepsilon_{3}(u)\varepsilon_{3}(v)}{m(G_{k,x})}<0\] for \(x\) large enough. The above two counterexamples demonstrate that inequality (1) and its opposite inequality are not always valid for graphs with at least two cycles. ## 6 Conclusion In this paper, we have investigated the inequality \[\frac{\sum_{u\in V(G)}\varepsilon_{3}^{2}\left(u\right)}{n(G)}\geq\frac{\sum_{uv\in E(G)}\varepsilon_{3}\left(u\right)\varepsilon_{3}\left(v\right)}{m(G)}\] and proved that it holds for all acyclic and unicyclic graphs. We have also shown that neither this inequality nor its reverse holds in general for graphs with at least two cycles. The properties and applications of Zagreb-Fermat indices, or more generally of Zagreb Steiner \(k\)-indices, deserve further study; Zagreb-type indices play an important role in graph theory, and much work remains to be done. ## Acknowledgments The research is partially supported by the Natural Science Foundation of China (No. 12301107) and the Natural Science Foundation of Shandong Province, China (No. ZR202209010046).
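As a quick numerical illustration, the following minimal Python sketch brute-forces the Fermat eccentricities and checks inequality (1) on a few random trees and a small unicyclic graph. It assumes networkx is available and that the Fermat distance of three vertices coincides with their Steiner distance, so that \(\mathcal{F}(a,b,c)=\min_{v}\left[d(v,a)+d(v,b)+d(v,c)\right]\); the particular graphs and sizes are arbitrary illustrative choices.

```python
import itertools
import random
import networkx as nx

def random_tree(n, seed=0):
    """A labeled random tree built by attaching each new vertex to a random earlier one."""
    rng = random.Random(seed)
    T = nx.Graph()
    T.add_node(0)
    for i in range(1, n):
        T.add_edge(i, rng.randrange(i))
    return T

def fermat_distance(dist, nodes, a, b, c):
    """Steiner/Fermat distance of {a, b, c}: min over v of d(v,a)+d(v,b)+d(v,c)."""
    return min(dist[v][a] + dist[v][b] + dist[v][c] for v in nodes)

def fermat_eccentricity(dist, nodes, u):
    """epsilon_3(u): largest Fermat distance F(u, x, y) over all pairs {x, y}."""
    return max(fermat_distance(dist, nodes, u, x, y)
               for x, y in itertools.combinations(nodes, 2))

def zagreb_fermat_gap(G):
    """Right-hand side minus left-hand side of inequality (1)."""
    dist = dict(nx.all_pairs_shortest_path_length(G))
    nodes = list(G.nodes)
    eps = {u: fermat_eccentricity(dist, nodes, u) for u in nodes}
    F1 = sum(eps[u] ** 2 for u in nodes)              # sum of squared Fermat eccentricities
    F2 = sum(eps[u] * eps[v] for u, v in G.edges)     # sum of products over the edges
    return F1 / G.number_of_nodes() - F2 / G.number_of_edges()

# Inequality (1) should be non-negative on trees (Theorem 1) and unicyclic graphs (Theorem 3).
for seed in range(5):
    assert zagreb_fermat_gap(random_tree(12, seed)) >= -1e-9
U = nx.cycle_graph(6)                                 # toy unicyclic graph:
U.add_edges_from([(0, 6), (6, 7), (3, 8)])            # a 6-cycle with two pendant trees
assert zagreb_fermat_gap(U) >= -1e-9
print("inequality (1) verified on the sampled examples")
```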
2303.17979
Robust Detection for Mills Cross Sonar
Multi-array systems are widely used in sonar and radar applications. They can improve communication speeds, target discrimination, and imaging. In the case of a multibeam sonar system that can operate two receiving arrays, we derive new adaptive detectors to improve detection capabilities compared to traditional sonar detection approaches. To do so, we more specifically consider correlated arrays, whose covariance matrices are estimated up to scale factors, and an impulsive clutter. In a partially homogeneous environment, the 2-step Generalized Likelihood Ratio Test (GLRT) and Rao approaches lead to a generalization of the Adaptive Normalized Matched Filter (ANMF) test and an equivalent, numerically simpler detector with a well-established texture Constant False Alarm Rate (CFAR) behavior. Performance is discussed and illustrated with theoretical examples, numerous simulations, and insights into experimental data. Results show that these detectors outperform their competitors and are more robust to environmental unknowns.
Olivier Lerda, Ammar Mian, Guillaume Ginolhac, Jean-Philippe Ovarlez, Didier Charlot
2023-03-31T11:31:18
http://arxiv.org/abs/2303.17979v2
# Robust Detection for Mills Cross Sonar ###### Abstract Multi-array systems are widely used in sonar and radar applications. They can improve communication speeds, target discrimination, and imaging. In the case of a multibeam sonar system that can operate two receiving arrays, we derive new adaptive to improve detection capabilities compared to traditional sonar detection approaches. To do so, we more specifically consider correlated arrays, whose covariance matrices are estimated up to scale factors, and an impulsive clutter. In a partially homogeneous environment, the 2-step Generalized Likelihood ratio Test (GLRT) and Rao approach lead to a generalization of the Adaptive Normalized Matched Filter (ANMF) test and an equivalent numerically simpler detector with a well-established texture Constant False Alarm Rate (CFAR) behavior. Performances are discussed and illustrated with theoretical examples, numerous simulations, and insights into experimental data. Results show that these detectors outperform their competitors and have stronger robustness to environmental unknowns. Sonar target detection, Adaptive Normalized Matched Filter, Multiple-Input Multiple-Output, Complex Elliptically Symmetric distributions, Tyler's M-estimator, robustness. ## I Introduction ### _Background and motivations_ Forward-Looking sonars are solutions for perceiving the underwater environment. In the context of a growing need for decision-making autonomy and navigation safety, they have become fundamental tools for understanding, anticipating obstacles and potential dangers, analyzing and identifying threats. They offer efficient results, allowing detection, tracking, and classification of surface [1], water column [2], or bottom targets [3], in civil [4] or military applications such as mine warfare [5]. At the detection level, monovariate statistical tests under the Gaussian or non Gaussian interference assumption, defined a priori, remain the prevalent approaches [6]-[8]. Nevertheless, many works on multivariate statistics have shown their great interest compared to algorithms developed from monovariate statistics in a large number of application fields. Indeed, multivariate statistics allow advanced modeling of propagation environments. By following these precepts, [9] gets a central detector with the total unknown of the noise parameters, [10] first derives a detector under the assumption of a known covariance then substitutes it, through a two-step procedure, by an appropriate estimator, finally, [11] and [12] have shown the relevance of subspace data models in consideration of mismatched signals for which, as an example, the target echo would not come precisely from the center of the main beam. These seminal works are now references in remote sensing, and ground or air surveillance but mainly for radar systems. Moreover, in the radar field, phenomenal progress has also been made in recent decades, guided by increasingly complex systems which required profound changes in concepts and processing. This is especially the case of Space-Time Adaptive Processing (STAP) for airborne radars [13], which bring considerable improvements in the ability to discriminate moving targets at very low speeds, or Multiple-Input Multiple-Output (MIMO) systems that advance detection performance [14], [15] and resolution [16], [17] by exploiting spatial, frequency or waveform diversities. Although some preliminary work has emerged in recent years [18]-[24], these methods still seem to be underused in sonar systems. 
This paper focuses on the adaptive detection of a point target by a correlated orthogonal arrays sonar system. Inspired by these previous works, we will first show that multibeam systems are perfectly adapted to multivariate formalism. We will then propose two new detectors following the GLRT, and Rao two-step approaches [25]-[27], assuming heterogeneous or partially homogeneous clutter [28], [29]. The performance in a Gaussian environment will first be evaluated. We will show that considering a sonar system with Mills [30] cross arrays leads to a better detectability of targets by reducing the clutter ridge. Nevertheless, complex multi-normality can sometimes be a poor approximation of physics. This is the case for fitting high-resolution clutter, impulsive noise, outliers, and interference. The Complex Elliptic Symmetric (CES) distributions [31] including the well-known compound Gaussian subclass are then natural extensions allowing the modeling of distributions with heavy or light tails in radar [32]-[35] as in sonar [36]. Mixtures of Scaled Gaussian (MSG) distributions [37] are derived and easily tractable approaches. In this context, particular covariance matrix estimators are recommended for adaptive processing such as the Tyler estimator [38] or Huber's M-estimator [39]. Their uses lead to very substantial performance gains in [40], [41], and [35]. In our application, these considerations will allow us to design a new covariance matrix estimator. The performance of the detectors in a non-Gaussian impulsive environment can then be studied. On this occasion, we will show on experimental data, this estimator's interest in the robustness to corruption of training data. ### _Paper organization_ This paper is organized as follows: Section II presents a dual array sonar system and the experimental acquisition conditions on which this work is based. In Section III, the signal model and detection problem are formalized. According to the two-step GLRT and Rao test design, coherent adaptive detectors are derived in Section IV. The performances are evaluated, compared, and analyzed in Sections V and VI. Conclusions are given in Section VII. Proofs and complementary results are provided in the Appendices. _Notations_: Matrices are in bold and capital, vectors in bold. Re(.) and Im(.) stand respectively for real and imaginary part operators. For any matrix \(\mathbf{A}\) or vector, \(\mathbf{A}^{T}\) is the transpose of \(\mathbf{A}\) and \(\mathbf{A}^{H}\) is the Hermitian transpose of \(\mathbf{A}\). \(\mathbf{I}\) is the identity matrix and \(\mathcal{CN}(\boldsymbol{\mu},\boldsymbol{\Gamma})\) is the circular complex Normal distribution of mean \(\boldsymbol{\mu}\) and covariance matrix \(\boldsymbol{\Gamma}\). \(\otimes\) denotes the Kronecker product. ## II Seapix system ### _Generalities_ The SEAPIX system is a three-dimensional multibeam echosounder developed by the sonar systems division of Exail (formerly iXblue) [42]. It is traditionally used by fishing professionals as a tool to assist in the selection of catches and the respect of quotas [43], by hydro-acousticians for the monitoring of stocks and morphological studies of fish shoals [44], by hydrographers for the establishment of bathymetric and sedimentary marine charts [45, 46]. Two uniform linear arrays of 64 elements, arranged in Mills cross, are entirely symmetric, reversible in transmission/reception, and electronically steerable. They generate transverse (i.e. 
across-track) or longitudinal (along-track) acoustic swaths of 120\({}^{\circ}\) by 1.8\({}^{\circ}\), tilttable on +/-60\({}^{\circ}\), providing a volumic coverage of the water column. ### _FLS experiment_ The SEAPIX system is being experimented with a Forward-Looking Sonar configuration for predictive target detection and identification. In this context of use, the active face is oriented in the "forward" direction rather than toward the seabed. In our study, the sensor is installed on the DriX Uncrewed Surface Vehicle (USV), and inclined by 20\({}^{\circ}\) according to the pitch angle to the sea surface (Figure 2 left). In transmission, the vertical antenna (formerly longitudinal) generates an enlarged beam of 9\({}^{\circ}\) in elevation by 120\({}^{\circ}\) in azimuth. A motion stabilized, and electronically tilted firing angle allows the upper limit of the -3 dB transmit beamwidth to graze the sea surface and the lower limit to reach a 50 m depth bottom at about 300 m range. In receive, the horizontal antenna (formerly transverse) generates beams of 2\({}^{\circ}\) in azimuth by 120\({}^{\circ}\) in elevation and the vertical antenna (which is used again) of 2\({}^{\circ}\) in elevation by 120\({}^{\circ}\) in azimuth. A rigid sphere of 71 cm diameter (Figure 2 right) is also immersed at 25 m depth in the middle of the water column. So after each transmission of 20 ms Linear Frequency Modulation pulses centered at 150 KHz with a 10 KHz bandwidth and a Pulse Repetition Interval of 0.5 s, the sensor signals from the two antennas are simultaneously recorded, allowing an azimuth and elevation representation of the acoustic environment (Figure 3). ### _Pre-processing and data format_ The signals from the 128 sensors provided by the sonar's embedded software are demodulated in baseband (InPhase and Quadrature components) and decimated at 43 KHz. During reception pre-processing, the digitized samples are compensated for the time-varying gain, filtered to the bandwidth of the waveform, pulse compressed, then decimated again to bandwidth. Finally, a ping dataset is a matrix of about 6000 range bins from 15 m to 400 m, by 64 sensors, by two arrays. Fig. 1: Multiswath capabilities seen from a schematic representation (left): transverse swath footprint is blue, a 60\({}^{\circ}\) steered transverse swath is displayed in red, the longitudinal swath is green, and a 45\({}^{\circ}\) steered longitudinal swath is orange. Illustration from the operator software (right): An across-track, an along-track, and a tilted along-track swath are observed, as well as an aggregation of fishes and an already constructed bathymetric map. Fig. 2: Exail’s DriX USV (left): The cross-shaped linear arrays are visible in the gondola. The real target in open water (right): a metallic sphere of target strength \(TS=-15\) dB. ## III Detection Schemes ### _Data model for a single array_ At each time, we, have acquired two digitalized synchronized complex data vectors of \(m=64\) elements, called "snapshots", which can be written as (by omitting the temporal parameterization): \[\mathbf{x}_{i}=\begin{bmatrix}x_{i,1}&x_{i,2}&\cdots&x_{i,m}\end{bmatrix}^{T} \tag{1}\] where \(i=1,2\) is the array identifier (respectively horizontal and vertical). 
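To fix ideas on this data layout, the short numpy sketch below stores a ping as a range-bin \(\times\) sensor \(\times\) array cube and extracts the two snapshots of (1) for one range bin. The array names, the random filling, and the chosen range bin are illustrative assumptions; in practice the cube would hold the baseband, pulse-compressed samples described above.

```python
import numpy as np

# Illustrative ping cube: ~6000 range bins, m = 64 sensors, 2 arrays (horizontal, vertical).
n_bins, m = 6000, 64
rng = np.random.default_rng(0)
ping = rng.standard_normal((n_bins, m, 2)) + 1j * rng.standard_normal((n_bins, m, 2))

def snapshots(ping, range_bin):
    """Return the two m-element snapshots x_1, x_2 of Eq. (1) for one range bin."""
    x1 = ping[range_bin, :, 0]   # horizontal array
    x2 = ping[range_bin, :, 1]   # vertical array
    return x1, x2

x1, x2 = snapshots(ping, range_bin=1500)
print(x1.shape, x2.shape)        # (64,) (64,)
```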
After the pre-processing steps, a point-like target observed on the antenna \(i\) is simply: \[\mathbf{x}_{i}=\alpha_{i}\,\mathbf{p}_{i}+\mathbf{z}_{i} \tag{2}\] where \(\mathbf{x}_{i}\in\mathbb{C}^{m}\) is the received signal, \(\alpha_{i}\) is an unknown complex target amplitude, \(\mathbf{p}_{i}\in\mathbb{C}^{m}\) stands for the known deterministic angular steering vector, \(\mathbf{z}_{i}\in\mathbb{C}^{m}\) is a mixture of scaled Gaussian (MSG) random vector admitting the stochastic representation: \[\mathbf{z}_{i}\stackrel{{ d}}{{=}}\sqrt{\tau_{i}}\,\mathbf{c}_{i}. \tag{3}\] The _texture_\(\tau_{i}\) is an unknown positive deterministic scalar parameter, presumably different for each range bin (i.e. for all time samples). The _speckle_\(\mathbf{c}_{i}\sim\mathcal{CN}(\mathbf{0},\sigma_{i}^{2}\mathbf{M}_{ii})\in \mathbb{C}^{m}\) is a complex circular Gaussian random vector whose covariance matrix \(\mathbf{M}_{ii}\) is known up to a scaling factor \(\sigma_{i}^{2}\). The term _speckle_ should be understood in the sense of CES statistics rather than as the result of a sum of contributions from reflections in a resolution cell. This model is strongly related to the class of compound Gaussian distributions [31], which assumes a speckle-independent random texture with a given density \(p_{\tau}\). The MSG distribution is more robust than the Gaussian one because the relative scaling between samples allows flexibility in the presence of heterogeneities, such as impulsive noise, outliers, and inconsistent data. This model explicitly allows considering the power fluctuation across range bins, especially for heavy-tailed clutter distributions. The detection problem is written as a binary hypothesis test: \[\left\{\begin{array}{lclcl}H_{0}:\mathbf{x}_{i}=\mathbf{z}_{i}&;&\mathbf{x }_{i,k}=\mathbf{z}_{i,k}&k=1\ldots K\\ H_{1}:\mathbf{x}_{i}=\alpha_{i}\,\mathbf{p}_{i}+\mathbf{z}_{i}&;&\mathbf{x}_{i, k}=\mathbf{z}_{i,k}&k=1\ldots K.\end{array}\right. \tag{4}\] In (4) it is assumed that \(K\geq m\) independent and identically distributed (i.i.d.) signal-free secondary data \(\mathbf{x}_{i,k}\in\mathbb{C}^{m}\) are available under both hypotheses for background parameters estimation. We recall that \(\mathbf{z}_{i,k}\stackrel{{ d}}{{=}}\sqrt{\tau_{i,k}}\,\mathbf{c}_ {i,k}\), \(\mathbf{c}_{i,k}\sim\mathcal{CN}(\mathbf{0},\mathbf{M}_{ii})\). Conditionally to the unknown deterministic texture, the densities of \(\mathbf{x}_{i}\) under \(H_{0}\) and \(H_{1}\) are Gaussian: \[p_{\mathbf{x}_{i}}(\mathbf{x}_{i};H_{0})=\frac{1}{\pi^{m}\hat{ \sigma}_{i}^{2m}|\mathbf{M}_{ii}|}\mathrm{exp}\left(-\frac{\mathbf{x}_{i}^{H} \mathbf{M}_{ii}^{-1}\,\mathbf{x}_{i}}{\hat{\sigma}_{i}^{2}}\right)\,, \tag{5}\] \[p_{\mathbf{x}_{i}}(\mathbf{x}_{i};H_{1})=\] \[\frac{1}{\pi^{m}\hat{\sigma}_{i}^{2m}|\mathbf{M}_{ii}|}\mathrm{ exp}\left(-\frac{(\mathbf{x}_{i}-\alpha_{i}\mathbf{p}_{i})^{H}\mathbf{M}_{ii}^{-1} \,(\mathbf{x}_{i}-\alpha_{i}\mathbf{p}_{i})}{\hat{\sigma}_{i}^{2}}\right)\,,\] where \(\hat{\sigma}_{i}=\sigma_{i}\,\sqrt{\tau_{i}}\). ### _Data model for the two arrays_ If we consider the two antennas, this model can be written more appropriately: \[\left\{\begin{array}{lcl}H_{0}:\mathbf{x}=\mathbf{z}&;&\mathbf{x}_{k}= \mathbf{z}_{k}&k=1\ldots K\\ H_{1}:\mathbf{x}=\mathbf{P}\,\boldsymbol{\alpha}+\mathbf{z}&;&\mathbf{x}_{k}= \mathbf{z}_{k}&k=1\ldots K\end{array}\right. 
\tag{6}\] where \(\mathbf{x}=\begin{bmatrix}\mathbf{x}_{1}\\ \mathbf{x}_{2}\end{bmatrix}\in\mathbb{C}^{2m}\) is the concatenation of the two received signals, \(\boldsymbol{\alpha}=\begin{bmatrix}\alpha_{1}\\ \alpha_{2}\end{bmatrix}\in\mathbb{C}^{2}\) is the vector of the target amplitudes and \(\mathbf{z}=\begin{bmatrix}\mathbf{z}_{1}\\ \mathbf{z}_{2}\end{bmatrix}\in\mathbb{C}^{2m}\) is the additive clutter. The matrix \(\mathbf{P}=\begin{bmatrix}\mathbf{p}_{1}&\mathbf{0}\\ \mathbf{0}&\mathbf{p}_{2}\end{bmatrix}\in\mathbb{C}^{2m\times 2}\) contains the steering vectors. \(\left\{\mathbf{x}_{k}=\begin{bmatrix}\mathbf{x}_{1,k}\\ \mathbf{x}_{2,k}\end{bmatrix}\right\}_{k\in\mathbb{I},K}\in\mathbb{C}^{2m}\) for \(K\geq 2\,m\) are i.i.d. signal-free secondary data. This formulation allows considering the correlation between sensors of arrays. The covariance is \(\mathbf{M}=\begin{bmatrix}\mathbf{M}_{11}&\mathbf{M}_{12}\\ \mathbf{M}_{21}&\mathbf{M}_{22}\end{bmatrix}\), with \(\mathbf{M}_{ii}\) the block-covariance matrix of array \(i\), and \(\mathbf{M}_{ij}=\mathbf{M}_{ji}^{H}\) the cross-correlation block of array \(i\) and \(j\). We further assume that the covariance is known, or estimated, up to two scalars, possibly different on each array. These scalars are conditioning the \(\mathbf{M}_{ii}\) block-covariance, but also all the cross-correlations blocks associated with the array \(i\). We can therefore write: \[\widetilde{\mathbf{C}}=\widetilde{\mathbf{\Sigma}}\,\mathbf{M}\,\widetilde{ \mathbf{\Sigma}}\,, \tag{7}\] with \(\widetilde{\mathbf{\Sigma}}=\begin{bmatrix}\tilde{\sigma}_{1}\,\mathbf{I}_{m}& \mathbf{0}\\ \mathbf{0}&\tilde{\sigma}_{2}\,\mathbf{I}_{m}\end{bmatrix}=\begin{bmatrix} \tilde{\sigma}_{1}&0\\ 0&\tilde{\sigma}_{2}\end{bmatrix}\otimes\mathbf{I}_{m}\) the unknown diagonal matrix of scalars \(\sigma_{i}\) and \(\tau_{i}\). We remind that the Fig. 3: Azimuth (top) and elevation (bottom) view from a single ping, in 50 m depth in La Ciotat bay. \(\sigma_{i}\) parameter drives the partial homogeneity of the data (i.e. the difference in scale factor between the covariance matrices of the primary and secondary data) and \(\tau_{i}\) drives the non-Gaussianity of the data (i.e. the power variation of the observations over time). In this model, each array, although correlated, has a possibly distinct texture and an unknown scaling factor on the covariance matrix which may also be dissimilar. It would therefore become entirely possible to model, as an example, a first array whose observations would be Gaussian (\(\tau_{1}=1\)) and homogeneous (\(\sigma_{1}=1\)), and a second array whose observations would be \(K\)-distributed (for which \(\tau_{2}\) is a realization of a Gamma distribution) and non-homogeneous (\(\sigma_{2}\neq 1\)). The PDFs under each hypothesis can be rewritten as: \[p_{\mathbf{x}}(\mathbf{x};H_{0}) =\frac{1}{\pi^{2m}|\widehat{\mathbf{C}}|}\exp\left(-\mathbf{x}^{ H}\widetilde{\mathbf{C}}^{-1}\mathbf{x}\right)\,,\] \[p_{\mathbf{x}}(\mathbf{x};H_{1}) =\frac{1}{\pi^{2m}|\widetilde{\mathbf{C}}|}\exp\left(-(\mathbf{x }-\mathbf{P}\boldsymbol{\alpha})^{H}\widetilde{\mathbf{C}}^{-1}\left( \mathbf{x}-\mathbf{P}\boldsymbol{\alpha}\right)\right). \tag{8}\] ## IV Robust Detectors We discuss the derivation of detectors using the GLRT and Rao procedures. Following a two-step approach, the covariance matrix \(\mathbf{M}\) will first be assumed known, and then replaced by an appropriate estimator. 
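Before turning to the detectors, the following minimal numpy sketch makes the stacked model (6)-(8) concrete: it draws one mixture-of-scaled-Gaussian observation \(\mathbf{x}=\mathbf{P}\boldsymbol{\alpha}+\mathbf{z}\) with per-array textures and scale factors. The half-wavelength ULA steering vectors, the Toeplitz speckle blocks, the weak cross-correlation block and all numerical values are illustrative assumptions, not quantities taken from the experiment.

```python
import numpy as np

rng = np.random.default_rng(1)
m = 64
k = np.arange(m)

# Assumed half-wavelength ULA steering vectors (azimuth for array 1, elevation for array 2).
p1 = np.exp(1j * np.pi * k * np.sin(np.deg2rad(10.0)))
p2 = np.exp(1j * np.pi * k * np.sin(np.deg2rad(-5.0)))
P = np.zeros((2 * m, 2), dtype=complex)
P[:m, 0], P[m:, 1] = p1, p2                    # block steering matrix of (6)

# Illustrative speckle covariance M: Toeplitz blocks plus a weak cross-correlation block.
lag = np.abs(k[:, None] - k[None, :])
M11, M22 = 0.4 ** lag, 0.9 ** lag
M12 = 0.1 * np.eye(m)
M = np.block([[M11, M12], [M12.T, M22]])

def msg_observation(alpha, sigma=(1.0, 1.8), nu=0.5):
    """One draw of x = P alpha + z, with z = Sigma_tilde c and c ~ CN(0, M), cf. (6)-(7)."""
    L = np.linalg.cholesky(M)
    c = L @ (rng.standard_normal(2 * m) + 1j * rng.standard_normal(2 * m)) / np.sqrt(2)
    tau = rng.gamma(shape=nu, scale=1.0 / nu, size=2)          # per-array heavy-tailed textures
    scale = np.kron(np.sqrt(tau) * np.asarray(sigma), np.ones(m))  # diag(sigma_i sqrt(tau_i)) (x) I_m
    return P @ np.asarray(alpha, dtype=complex) + scale * c

x_h0 = msg_observation(alpha=[0.0, 0.0])       # clutter only
x_h1 = msg_observation(alpha=[0.5, 0.5])       # target present on both arrays
```

With a constant texture (\(\tau_{i}=1\)) and \(\sigma_{1}=\sigma_{2}=1\), the same sketch reduces to the homogeneous Gaussian case.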
### _Detectors' derivation with \(\mathbf{M}\) known (step-1)_ #### Iv-A1 Generalized Likelihood Ratio Test The Generalized Likelihood Ratio Test (GLRT) design methodology proposes to solve the detection problem from the ratio of the probability densities function under \(H_{1}\) and \(H_{0}\), substituting the unknown parameters with their maximum likelihood estimates: \[L_{G}(\mathbf{x})=\frac{\max\limits_{\boldsymbol{\alpha}}\max\limits_{ \widetilde{\mathbf{\Sigma}}}p_{\mathbf{x}}\left(\mathbf{x};\boldsymbol{\alpha },\widetilde{\mathbf{\Sigma}},H_{1}\right)}{\max\limits_{\widetilde{\mathbf{ \Sigma}}}p_{\mathbf{x}}\left(\mathbf{x};\widetilde{\mathbf{\Sigma}},H_{0} \right)}. \tag{9}\] **Proposition IV.1**.: _The GLRT for the hypothesis test (6) is given by:_ \[L_{G}(\mathbf{x})=\frac{\hat{\sigma}_{1_{0}}\,\hat{\sigma}_{2_{0}}}{\hat{ \sigma}_{1_{1}}\,\hat{\sigma}_{2_{1}}} \tag{10}\] _where_ \[\left(\hat{\sigma}_{1_{0}}\,\hat{\sigma}_{2_{0}}\right)^{2}=\frac{ \operatorname{Re}\left(\mathbf{x}_{1}^{H}\mathbf{M}_{12}^{-1}\mathbf{x}_{2} \right)}{m}\\ +\sqrt{\frac{\mathbf{x}_{1}^{H}\mathbf{M}_{11}^{-1}\mathbf{x}_{1 }}{m}}\,\frac{\mathbf{x}_{2}^{H}\mathbf{M}_{22}^{-1}\mathbf{x}_{2}}{m},\] _and_ \[\left(\hat{\sigma}_{1_{1}}\,\hat{\sigma}_{2_{1}}\right)^{2}=\frac {\operatorname{Re}\left[\mathbf{x}_{1}^{H}\left(\mathbf{M}_{12}^{-1}-\mathbf{ D}_{12}^{-1}\right)\mathbf{x}_{2}\right]}{m}\\ +\sqrt{\frac{\mathbf{x}_{1}^{H}\left(\mathbf{M}_{11}^{-1}-\mathbf{ D}_{11}^{-1}\right)\mathbf{x}_{1}}{m}}\,\frac{\mathbf{x}_{2}^{H}\left( \mathbf{M}_{22}^{-1}-\mathbf{D}_{22}^{-1}\right)\mathbf{x}_{2}}{m}}\] _with \(\mathbf{D}^{-1}=\mathbf{M}^{-1}\mathbf{P}\left(\mathbf{P}^{H}\mathbf{M}^{-1} \mathbf{P}\right)^{-1}\mathbf{P}^{H}\mathbf{M}^{-1}\)._ Proof.: See Appendix A for a step-by-step derivation and Appendix B for some interesting equivalences. As \(\mathbf{x}_{i}=\sqrt{\tau_{i}}\,\mathbf{c}_{i}\) under \(H_{0}\), it is easily shown that this detector is texture independent (i.e. it has the _texture-CFAR_ property). This detection test will be called _M-NMF-G_ in the following. #### Iv-A2 Rao test The Rao test is obtained by exploiting the asymptotic efficiency of the ML estimate and expanding the likelihood ratio in the neighborhood of the estimated parameters [47]. A traditional approach for complex unknown parameters is to form a corresponding real-valued parameter vector and then use the real Rao test [48], [49]. Specifically, rewriting the detection problem (6) as: \[\left\{\begin{array}{l}H_{0}:\boldsymbol{\xi}_{R}=\mathbf{0},\boldsymbol{\xi }_{S}\\ H_{1}:\boldsymbol{\xi}_{R}\neq\mathbf{0},\boldsymbol{\xi}_{S}\,,\end{array}\right. 
\tag{11}\] where \(\boldsymbol{\xi}_{R}=\left[\operatorname{Re}\left(\boldsymbol{\alpha}\right)^{T }\,\operatorname{Im}\left(\boldsymbol{\alpha}\right)^{T}\right]^{T}\) is a \(4\times 1\) parameter vector and \(\boldsymbol{\xi}_{S}=[\tilde{\sigma}_{1}\,\tilde{\sigma}_{2}]^{T}\) is a \(2\times 1\) vector of nuisance parameters, the Rao test for the problem (11) is: \[L_{R}(\mathbf{x})=\left.\frac{\partial\ln p_{\mathbf{x}}(\mathbf{x}; \boldsymbol{\xi}_{R},\boldsymbol{\xi}_{S})}{\partial\boldsymbol{\xi}_{R}} \right|_{\boldsymbol{\xi}_{R}}^{T} =\hat{\boldsymbol{\xi}}_{R_{0}}\\ \boldsymbol{\xi}_{S}=\hat{\boldsymbol{\xi}}_{S_{0}}\] \[\left[\mathbf{I}^{-1}(\hat{\boldsymbol{\xi}}_{R_{0}},\hat{ \boldsymbol{\xi}}_{S_{0}})\right]_{\boldsymbol{\xi}_{R}\boldsymbol{\xi}_{R}} \boldsymbol{\xi}_{R}\] \[\left.\frac{\partial\ln p_{\mathbf{x}}(\mathbf{x};\boldsymbol{\xi} _{R},\boldsymbol{\xi}_{S})}{\partial\boldsymbol{\xi}_{R}}\right|_{\boldsymbol{ \xi}_{R}} \boldsymbol{\xi}_{R} =\hat{\boldsymbol{\xi}}_{R_{0}}\,. \tag{12}\] The PDF \(p_{\mathbf{x}}(\mathbf{x};\boldsymbol{\xi}_{R},\boldsymbol{\xi}_{S})\) is given in (8) and parametrized by \(\boldsymbol{\xi}_{R}\) and \(\boldsymbol{\xi}_{S}\): \(\hat{\boldsymbol{\xi}}_{R_{0}}\) and \(\hat{\boldsymbol{\xi}}_{S_{0}}\) are the ML estimates of \(\boldsymbol{\xi}_{R}\) and \(\boldsymbol{\xi}_{S}\) under \(H_{0}\). \(\mathbf{I}(\boldsymbol{\xi}_{R},\boldsymbol{\xi}_{S})\) is the Fisher Information Matrix that can be partitioned as: \[\mathbf{I}(\boldsymbol{\xi}_{R},\boldsymbol{\xi}_{S})=\begin{bmatrix} \mathbf{I}_{\boldsymbol{\xi}_{R}\boldsymbol{\xi}_{R}}(\boldsymbol{\xi}_{R}, \boldsymbol{\xi}_{S})&\mathbf{I}_{\boldsymbol{\xi}_{R}\boldsymbol{\xi}_{S}}( \boldsymbol{\xi}_{R},\boldsymbol{\xi}_{S})\\ \mathbf{I}_{\boldsymbol{\xi}_{S}\boldsymbol{\xi}_{R}}(\boldsymbol{\xi}_{R}, \boldsymbol{\xi}_{S})&\mathbf{I}_{\boldsymbol{\xi}_{S}\boldsymbol{\xi}_{S}}( \boldsymbol{\xi}_{R},\boldsymbol{\xi}_{S})\end{bmatrix}\,,\] and we have: \[\left[\mathbf{I}^{-1}(\boldsymbol{\xi}_{R},\boldsymbol{\xi}_{S}) \right]_{\boldsymbol{\xi}_{R}\boldsymbol{\xi}_{R}}=\left(\mathbf{I}_{ \boldsymbol{\xi}_{R}\boldsymbol{\xi}_{R}}(\boldsymbol{\xi}_{R},\boldsymbol{ \xi}_{S})-\right.\\ \mathbf{I}_{\boldsymbol{\xi}_{R}\boldsymbol{\xi}_{S}}(\boldsymbol{\xi}_{R}, \boldsymbol{\xi}_{S})\,\mathbf{I}_{\boldsymbol{\xi}_{S}\boldsymbol{\xi}_{S}}( \boldsymbol{\xi}_{R},\boldsymbol{\xi}_{S})\,\mathbf{I}_{\boldsymbol{\xi}_{S} \boldsymbol{\xi}_{R}}(\boldsymbol{\xi}_{R},\boldsymbol{\xi}_{S})\Big{)}^{-1}\,. \tag{13}\] The following proposition can be finally stated: **Proposition IV.2**.: _The Rao test is given by:_ \[L_{R}(\mathbf{x})=2\,\mathbf{x}^{H}\widehat{\mathbf{C}}_{0}^{-1}\,\mathbf{P} \left(\mathbf{P}^{H}\widehat{\mathbf{C}}_{0}^{-1}\,\mathbf{P}\right)^{-1} \mathbf{P}^{H}\widehat{\mathbf{C}}_{0}^{-1}\,\mathbf{x} \tag{14}\] _where \(\hat{\mathbf{C}}_{0}=\hat{\mathbf{\Sigma}}_{0}\,\mathbf{M}\,\hat{\mathbf{\Sigma}}_{0}\) and \(\hat{\mathbf{\Sigma}}_{0}\) is defined in Appendix A._ Proof.: See Appendix C. As for the GLRT, (14) is _texture-CFAR_. This detector will be referred to as _M-NMF-R_ in the sequel. ### _Covariance estimation and adaptive detectors (step-2)_ In practice, the noise covariance matrix is unknown and estimated using the \(K\) available i.i.d. signal-free secondary data. 
In Gaussian environment, in which PDFs are given by (8) with \(\sigma_{i}=1\) and \(\tau_{i}=1\), the MLE of \(\mathbf{M}\) is the well-known Sample Covariance Matrix (SCM): \[\widehat{\mathbf{M}}_{SCM}=\frac{1}{K}\sum_{k=1}^{K}\mathbf{x}_{k}\mathbf{x}_{ k}^{H} \tag{15}\] which is an unbiased and minimum variance estimator. In the presence of outliers or a heavy-tailed distribution (as modeled by a mixture of scaled Gaussian), this estimator is no longer optimal or robust. This leads to a strong performance degradation. **Proposition IV.3**.: _In MSG environment the MLE of \(\mathbf{M}\) is given by:_ \[\widehat{\mathbf{M}}_{\text{2TYL}}=\frac{1}{K}\sum_{k=1}^{K}\widehat{\mathbf{ T}}_{k}^{-1}\mathbf{x}_{k}\mathbf{x}_{k}^{H}\widehat{\mathbf{T}}_{k}^{-1}\,, \tag{16}\] _where \(\widehat{\mathbf{T}}_{k}=\begin{bmatrix}\sqrt{\hat{\tau}_{1_{k}}}&0\\ 0&\sqrt{\hat{\tau}_{2_{k}}}\end{bmatrix}\otimes\mathbf{I}_{m}\), \(\hat{\tau}_{1_{k}}=t_{1}+\sqrt{\frac{t_{1}}{t_{2}}}t_{12}\), \(\hat{\tau}_{2_{k}}=t_{2}+\sqrt{\frac{t_{2}}{t_{1}}}t_{12}\) and \(t_{1}=\frac{\mathbf{x}_{1,k}^{H}\widehat{\mathbf{M}}_{11}^{-1}\mathbf{x}_{1,k }}{m}\), \(t_{2}=\frac{\mathbf{x}_{2,k}^{H}\widehat{\mathbf{M}}_{22}^{-1}\mathbf{x}_{2,k }}{m}\), \(t_{12}=\frac{\text{Re}\left(\mathbf{x}_{1,k}^{H}\widehat{\mathbf{M}}_{12}^{-1} \mathbf{x}_{2,k}\right)}{m}\)._ Proof.: The demonstration is provided in Appendix D. A key point is that this estimator is independent of the textures, i.e. the power variations, on each array. It can be thought of as a _multi-texture_ generalization of [38]. From a practical point of view \(\widehat{\mathbf{M}}_{\text{2TYL}}\) is the solution of the recursive algorithm: \[\widehat{\mathbf{T}}_{k}^{(n)} =\begin{bmatrix}\sqrt{\hat{\tau}_{1_{k}}^{(n)}}&0\\ 0&\sqrt{\hat{\tau}_{2_{k}}^{(n)}}\end{bmatrix}\otimes\mathbf{I}_{m}\,, \tag{17}\] \[\widehat{\mathbf{M}}_{\text{2TYL}}^{(n)} =\frac{1}{K}\sum_{k=1}^{K}\left(\widehat{\mathbf{T}}_{k}^{(n-1)} \right)^{-1}\mathbf{x}_{k}\mathbf{x}_{k}^{H}\left(\widehat{\mathbf{T}}_{k}^{ (n-1)}\right)^{-1}\,, \tag{18}\] where \(n\in\mathbb{N}\) is the iteration number, whatever the initialization \(\widehat{\mathbf{M}}_{\text{2TYL}}^{(0)}\). The convergence of recursive equations (17) and (18) in the estimation of \(\widehat{\mathbf{M}}_{\text{2TYL}}\) is illustrated in Figure 4 for 500 iterations and \(\widehat{\mathbf{M}}_{\text{2TYL}}^{(0)}=\mathbf{I}_{2m}\). The relative difference between estimates decreases with the number of iterations. From iteration 60, the accuracy becomes limited by the simulation environment. In practice, it is not necessary to go to this limit, and we notice that from iteration 20 the relative deviation becomes lower than \(10^{-6}\). At last, the adaptive versions of the tests (10) and (14) will be simply obtained by replacing the covariance matrix \(\mathbf{M}\) by an appropriate estimate: (15) or (16) according to the environment. Those detectors will be referred as _M-ANMF-G\({}_{SCM}\)_, _M-ANMF-R\({}_{SCM}\)_, _M-ANMF-G\({}_{TYL}\)_, and _M-ANMF-R\({}_{TYL}\)_. ## V Numerical results on simulated data ### _Performance assessment_ Two correlation coefficients \(\rho_{1}\) and \(\rho_{2}\) (\(0<\rho_{1},\rho_{2}<1\)), are used in the construction of a _speckle_ covariance matrix model defined as: \[\left[\mathbf{M}\right]_{jl}=\beta\,\rho_{1}^{[j_{1}-l_{1}]}\times\rho_{2}^{ [j_{2}-l_{2}]}\,,\] where \(j,l\in[1,\,2m]\) are sensor numbers of coordinates \((j_{1},j_{2})\) and \((l_{1},l_{2})\) respectively and \(\beta\) is a scale factor. 
Thus, denoting \(\mathbf{M}=\begin{bmatrix}\mathbf{M}_{11}&\mathbf{M}_{12}\\ \mathbf{M}_{21}&\mathbf{M}_{22}\end{bmatrix}\), \(\mathbf{M}_{11}\) and \(\mathbf{M}_{22}\) are the covariance matrices of array 1 and 2 and \(\mathbf{M}_{12}=\mathbf{M}_{21}^{H}\) is a cross-correlation block. In the FLS context, the block \(\mathbf{M}_{11}\) is weakly correlated and Toeplitz structured (close to the identity matrix). \(\mathbf{M}_{22}\) is also Toeplitz but more strongly correlated (due to the wider transmission beam). The cross-correlation blocks could be considered null under the uncorrelated arrays assumption. In our case, these will be different from zero because the arrays cross each other in their central part. This results in the general structure displayed in Figure 5, where we visually show the adequacy of this model with the SCM covariance estimator established on real data. We choose \(\sigma_{i}=1\). \(\tau_{i}=1\) for Gaussian clutter and \(\tau_{i}\sim\text{Gam}(\nu,1/\nu)\) with \(\nu=0.5\) for impulsive non-Gaussian _K_-distributed data (that is, the texture variables follow a gamma distribution with shape parameter 0.5 and scale parameter 2). The PFA-threshold curves are established on the basis of 1000 realizations of random vectors. The detection probabilities are statistically estimated by adding a synthetic narrow band far field point target with identical amplitudes on the arrays (\(\alpha_{1}=\alpha_{2}\)). 10000 Monte Carlo iterations are performed and 150 target amplitudes are evaluated. ### _Benchmark tests_ The GLRT for a single array in a partially homogeneous Gaussian environment when \(\mathbf{M}_{ii}\) is known is the _Normalized Matched Filter_[11]: \[\mathrm{NMF}i(\mathbf{x}_{i})=\frac{|\mathbf{p}_{i}^{H}\mathbf{M}_{ii}^{-1} \mathbf{x}_{i}|^{2}}{\left(\mathbf{p}_{i}^{H}\mathbf{M}_{ii}^{-1}\mathbf{p}_{i }\right)\left(\mathbf{x}_{i}^{H}\mathbf{M}_{ii}^{-1}\mathbf{x}_{i}\right)}. \tag{19}\] Adaptive versions are obtained by substituting the covariance matrix by a suitable estimate [35] and will be referred to as \(\mathrm{ANMF}_{SCM}i\) in Gaussian case, or \(\mathrm{ANMF}_{TYL}i\) for the non-Gaussian case. When the two arrays are considered, in the very favorable case of a Gaussian homogeneous environment where the covariance matrix \(\mathbf{C}\) is perfectly known, the GLRT is: \[\mathrm{MIMO-MF}(\mathbf{x})=\mathbf{x}^{H}\mathbf{C}^{-1}\mathbf{P}\left( \mathbf{P}^{H}\mathbf{C}^{-1}\mathbf{P}\right)^{-1}\mathbf{P}^{H}\mathbf{C}^{ -1}\mathbf{x}_{\mathbf{x}}\,. \tag{20}\] This is the _MIMO Optimum Gaussian Detector_ (R-MIMO OGD) in [50], which is a multi-array generalization of the _Matched Filter_ test. One can note the very strong similarity with (14). Its adaptive version is MIMO-\(\mathrm{AMF}_{SCM}\). It seems useful to specify that this detector is relevant only in a Gaussian and perfectly homogeneous environment. Especially, exactly as in the single-array case [31], the covariance estimator (16) is defined up to a constant scalar. Detectors (10) and (14) are invariant when the covariance is changed by a scale factor. This is a direct result of the partial homogeneity assumption. This is not the case for (20), thus the adaptive MIMO-\(\mathrm{AMF}_{TYL}\) version is not relevant and will not be used in the following. 
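To make the comparison concrete, the sketch below gives a minimal numpy implementation of the single-array NMF (19), of the proposed GLRT statistic (10), and of the fixed-point recursion (17)-(18) used to build its adaptive versions. Two reading assumptions are made: \(\mathbf{M}_{ij}^{-1}\) is interpreted here as the \((i,j)\) block of \(\mathbf{M}^{-1}\), and the secondary data are passed as arrays of stacked snapshots. This is an illustrative sketch, not the reference implementation of the detectors.

```python
import numpy as np

def inv_blocks(M, m):
    """Blocks of M^{-1} (M_{ij}^{-1} is read as the (i, j) block of M^{-1})."""
    Mi = np.linalg.inv(M)
    return Mi[:m, :m], Mi[:m, m:], Mi[m:, m:]

def nmf(x, p, Mii):
    """Single-array Normalized Matched Filter, Eq. (19)."""
    Mi = np.linalg.inv(Mii)
    num = np.abs(np.conj(p) @ Mi @ x) ** 2
    return num / ((np.conj(p) @ Mi @ p).real * (np.conj(x) @ Mi @ x).real)

def m_nmf_g(x, p1, p2, M):
    """Two-array GLRT statistic of Eq. (10), with M assumed known."""
    m = p1.size
    x1, x2 = x[:m], x[m:]
    P = np.zeros((2 * m, 2), dtype=complex)
    P[:m, 0], P[m:, 1] = p1, p2
    Mi = np.linalg.inv(M)
    Di = Mi @ P @ np.linalg.inv(np.conj(P.T) @ Mi @ P) @ np.conj(P.T) @ Mi
    def prod_sigma(A):            # sigma_1 * sigma_2 under H0 (A = Mi) or H1 (A = Mi - Di)
        A11, A12, A22 = A[:m, :m], A[:m, m:], A[m:, m:]
        t1 = (np.conj(x1) @ A11 @ x1).real / m
        t2 = (np.conj(x2) @ A22 @ x2).real / m
        t12 = (np.conj(x1) @ A12 @ x2).real / m
        return np.sqrt(t12 + np.sqrt(t1 * t2))
    return prod_sigma(Mi) / prod_sigma(Mi - Di)

def tyler2(X1, X2, n_iter=20):
    """Two-texture fixed-point estimator of Eqs. (16)-(18); X1, X2 are (K, m) secondary data."""
    K, m = X1.shape
    M = np.eye(2 * m, dtype=complex)
    for _ in range(n_iter):
        A11, A12, A22 = inv_blocks(M, m)
        acc = np.zeros_like(M)
        for x1, x2 in zip(X1, X2):
            t1 = (np.conj(x1) @ A11 @ x1).real / m
            t2 = (np.conj(x2) @ A22 @ x2).real / m
            t12 = (np.conj(x1) @ A12 @ x2).real / m
            tau1 = t1 + np.sqrt(t1 / t2) * t12
            tau2 = t2 + np.sqrt(t2 / t1) * t12
            xs = np.concatenate((x1 / np.sqrt(tau1), x2 / np.sqrt(tau2)))
            acc += np.outer(xs, np.conj(xs))
        M = acc / K
        M = 2 * m * M / np.trace(M).real   # optional: the estimate is defined up to a scale factor
    return M
```

The Rao statistic (14) follows the same pattern once the scale estimate \(\hat{\mathbf{\Sigma}}_{0}\) of Appendix A is available, and in the second step the output of `tyler2` (or the SCM) simply replaces \(\mathbf{M}\) in `m_nmf_g` to obtain the adaptive M-ANMF variants.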
These tests will be considered as performance bounds for the proposed detectors: M-NMF-G and M-NMF-R when \(\mathbf{M}\) is known, M-\(\mathrm{ANMF-G}_{SCM}\), M-\(\mathrm{ANMF-R}_{SCM}\), M-\(\mathrm{ANMF-G}_{TYL}\) and M-\(\mathrm{ANMF-R}_{TYL}\) otherwise. ### _Performance in Gaussian clutter_ We have showed that all developed detectors are texture-CFAR. Unfortunately, the matrix CFAR property (distribution of the test keeps identical whatever the true covariance matrix) is much more difficult to show. Therefore, we propose to perform a study on simulated data to check this matrix CFAR property. Figure 6 experimentally demonstrates the CFAR behavior of the detectors with respect to the covariance matrix in a Gaussian environment. On the left side we represent the false alarm probability of Rao's detector (for known and estimated \(\mathbf{M}\)) as a function of the detection threshold. On the right, these curves are plotted for the GLRT detector. The deviation of the adaptive detectors comes from the covariance estimation process: the red curve converges to the blue one when the number of secondary data used for the estimation of \(\mathbf{M}\) increases. The markers represent curves for different covariances. In solid line \(\beta=3\times 10^{-4}\), \(\rho_{1}=0.4\), \(\rho_{2}=0.9\). The cross markers are established from \(\beta=1\), \(\rho_{1}=0.95\), \(\rho_{2}=0.95\). Circular markers are for \(\beta=100\), \(\rho_{1}=0.1\), \(\rho_{2}=0.1\) and null anti-diagonal blocks. For very distinct covariances, the superposition of curves and markers underlines the _matrix-CFAR_ property of the detectors. Figure 7 illustrates the value of merging arrays with a cross-shaped geometry. The detection probabilities of NMF 1, NMF 2, and M-NMF-R are plotted as a function of azimuth (\(\theta_{1}\)) and elevation (\(\theta_{2}\)) angles at fixed SNR. In the first angular dimension, which depends on the array considered, each ULA has a zone of least detectability close to \(0^{\circ}\). This is due to the correlated nature of clutter. The decrease in detection probabilities from +/-\(60^{\circ}\) to \(0^{\circ}\) is a function of the correlation level (\(\rho_{1}\) or \(\rho_{2}\)). In the second dimension, the detection performances are invariant. Thus, the poor detection performance near \(0^{\circ}\) propagates in a whole direction of space: "vertically" for NMF 1 and "horizontally" for NMF 2. As an illustration, the probability of detection of NMF 1 in \(\theta_{1}=0^{\circ}\) whatever \(\theta_{2}\) is \(PD=0.17\), and the probability of detection of NMF 2 in \(\theta_{2}=0^{\circ}\) for all \(\theta_{1}\) is 0.03. The M-NMF-R detector spatially minimizes this area of least detectability. In this case, the probability of detection at \(\theta_{1}=0^{\circ}\) becomes greater than 0.8 for \(|\theta_{2}|>10.5^{\circ}\) and Fig. 5: Sonar arrays (left): two uniform linear arrays intersect in their centers. Empirical covariance matrix on real sonar data shown in dB (center): The SCM is estimated on a homogeneous area between 265 m and 328 m. Cross-correlation blocks are non-zero. The covariance matrix model used for the data simulation is shown in dB (right): \(\beta=3\times 10^{-4}\), \(\rho_{1}=0.4\), \(\rho_{2}=0.9\). Fig. 6: Pfa-threshold curves of the M-(A)NMF-R\({}_{SCM}\) and M-(A)NMF-G\({}_{SCM}\) detectors in Gaussian environment. 
Left: the Rao detector for known \(\mathbf{M}\) (blue), for \(\mathbf{M}\) estimated from \(2\times 2m\) secondary data (red), the OGD detector or _Matched Filter_ (black). Right: The GLRT detector for known \(\mathbf{M}\) (blue), and estimated based on \(2\times 2m\) secondary data (red). greater than 0.5 for \(\theta_{2}=0^{\circ}\) and \(|\theta_{1}|>24^{\circ}\). The _Rigde clutter_ is minimized. The probability of detection (PD) is plotted as a function of the signal-to-noise ratio (SNR) in Figure 8 when \(\mathbf{M}\) is known. The MIMO-MF detector (shown in black), which requires a perfect knowledge of the covariance, is logically the most efficient. The single array NMF detectors (NMF 1 and 2, in purple and green) have comparable and the lowest performance. The M-NMF-I detector (in blue) between these curves assumes antenna independence. The proposed M-NMF-G (red) and M-NMF-R (yellow) detectors are both equivalent and superior to the M-NMF-I (0.2 dB at \(PD=0.8\)) and much more efficient than NMF 1 (2.5 dB) and NMF 2 tests (2 dB). The difference with MIMO-MF is slight, around 0.2 dB at \(PD=0.8\). Detection performances by the Rao approach are slightly superior to those obtained by the GLRT. Figure 9 compares the performance of the tests in their adaptive versions. The MIMO-MF curve is shown in black dotted lines for illustrative purposes only (as it assumes the covariance known). It can be seen that when the SCM is evaluated on \(2\times 2m=256\) secondary data, the covariance estimation leads to a loss of 3 dB between the performances of the MIMO-MF test and its adaptive version MIMO-AMF, as expected from the Reed-Mallett-Brennan's rule [51]. In their adaptive versions, the proposed detectors offer equivalent performances to the MIMO-AMF detectors while offering additional robustness and flexibility conditions on a possible lack of knowledge of the covariance (estimated to be within two scale factors). The gain compared to single array detectors, although reduced compared to the case where \(\mathbf{M}\) is known, remains favorable and of the order of 0.5 to 1 dB at \(PD=0.8\). In other words, for an SNR of -11 dB, \(PD=0.75\) for ANMF 1 and 0.85 for M-ANMF-G\({}_{SCM}\) (or M-ANMF-R\({}_{SCM}\)). ### _Performance in impulsive non-Gaussian clutter_ PFA-threshold curves in _K_-distributed environment are displayed in Figure 10 in order to characterize the matrix-CFAR behavior. The detectors based on the new covariance matrix estimator systematically have lower detection thresholds to those obtained with the SCM. While optimal for a Gaussian clutter, the MIMO-MF detector is no longer suitable. The marker overlays highlight the matrix-CFAR behavior of the Rao detector in a non- Fig. 8: Probability of detection in Gaussian environment for known \(\mathbf{M}\) (\(\beta=3\times 10^{-4}\), \(\rho_{1}=0.4\), \(\rho_{2}=0.9\) and \(PFA=10^{-2}\)). Fig. 7: Probability of detection in Gaussian environment for known covariance matrix \(\mathbf{M}\) as a function of \(\theta_{1}\) and \(\theta_{2}\) (\(\beta=3\times 10^{-4}\), \(\rho_{1}=0.4\), \(\rho_{2}=0.9\), \(SNR=-12\) dB and \(PFA=10^{-2}\)). Top left: NMF 1. Top right: NMF 2. Bottom left: M-NMF-R. Fig. 9: Probability of detection in Gaussian environment for unknown \(\mathbf{M}\) (\(PFA=10^{-2}\), SCMs are estimated based on \(2\times 2m\) secondary data). case of the GLRT detector is much less obvious: at this stage, it seems complicated to consider that the false alarm curves are strictly independent of the covariance matrix. 
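The PFA–threshold curves discussed above are obtained empirically; the following sketch illustrates that Monte Carlo procedure for a generic statistic, using the single-array NMF as a stand-in. The covariance model, steering vector, and number of trials are illustrative and do not reproduce the settings of the study.

```python
import numpy as np

def empirical_threshold(stat_fn, n_trials, pfa, sample_h0, rng):
    """Monte Carlo PFA -> threshold: draw clutter-only data, take the (1 - PFA) quantile."""
    stats = np.array([stat_fn(sample_h0(rng)) for _ in range(n_trials)])
    return np.quantile(stats, 1.0 - pfa)

rng = np.random.default_rng(1)
m = 8
rho = 0.6
M = rho ** np.abs(np.subtract.outer(np.arange(m), np.arange(m)))  # toy Toeplitz speckle covariance
L = np.linalg.cholesky(M)
M_inv = np.linalg.inv(M)
p = np.ones(m, dtype=complex)                                      # hypothetical steering vector

def sample_h0(rng):
    w = (rng.standard_normal(m) + 1j * rng.standard_normal(m)) / np.sqrt(2)
    return L @ w                                                   # clutter-only snapshot ~ CN(0, M)

def nmf_stat(x):
    num = np.abs(p.conj() @ M_inv @ x) ** 2
    return num / ((p.conj() @ M_inv @ p).real * (x.conj() @ M_inv @ x).real)

for pfa in (1e-1, 1e-2):
    thr = empirical_threshold(nmf_stat, n_trials=20000, pfa=pfa, sample_h0=sample_h0, rng=rng)
    print(f"PFA = {pfa:.0e}  ->  threshold ~ {thr:.3f}")
```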
The detection performances are compared in Figure 11. For the Rao and GLRT detectors, using the estimator (16) in an impulsive non-Gaussian environment induces a gain in detection performance of the order of 1.2 dB at \(PD=0.8\) with the SCM. Compared to the best competitors the improvement is 3 dB at \(PD=0.8\). ## VI Experimental data with real target The actual sonar measurements were collected during an acquisition campaign in May 2022 in La Ciotat bay, Mediterranean Sea. The experimental conditions are described in II-B, and the dataset of interest is shown in Figure 3. Echoes from the true target (right in Figure 2 ) are observed at the 2040th range cell. ### _Illustrations of detection test outputs_ Figure 12 provides examples of real data outputs from conventional ANMF detectors based on a single array. The real target is observed in range bin 2040 (143 m away from the sonar), at angular bin \(\theta_{1}=26\) (\(-12.4^{\circ}\) azimuth), and angular bin \(\theta_{2}=37\) (\(8.6^{\circ}\) elevation). Figure 13 shows the outputs of the Rao and GLRT detectors applied simultaneously to both arrays at the specific range bin 2040. These subfigures are not directly comparable with each other or with Figure 12. The target is perfectly located in azimuth and elevation. ### _Robustness to training data corruption_ Previously, we assumed the availability of a set of \(K\geq 2m\) i.i.d. secondary data, supposed free of signal components and sharing statistic properties with the noise in the cell under test. In practice, such data can be obtained by processing samples in spatial proximity to the range bin being evaluated, and the absence of signals is not always verified or checkable. In particular, this assumption is no longer valid if another target is present in these secondary data. Fig. 11: Probability of detection in \(K\)-distributed environment for unknown \(\mathbf{M}\) (\(\nu=0.5\), \(PFA=10^{-2}\), \(2\times 2m\) secondary data). In solid lines, the detectors are built with the SCM estimate. In dotted lines, the detectors are built with Tyler’s (single array) or the new (dual arrays) estimator. Fig. 12: ANMF\({}_{SCM}\) detector outputs on real sonar data. A target is observed on array 1 (left) at coordinates (26, 2040) and on array 2 (right) at coordinates (37, 2040). The SCM is built from 256 secondary data. Fig. 10: Pfa-threshold curves in non-Gaussian \(K\)-distributed environment (\(\nu=0.5\)). Left: the Rao detector for known \(\mathbf{M}\) (blue), for \(\mathbf{M}\) estimated with the SCM (red), \(\mathbf{M}\) estimated with the new estimator (yellow), the OGD detector or _Matched Filter_ (black). Right: The GLRT detector for known \(\mathbf{M}\) (blue), \(\mathbf{M}\) estimated based on the SCM (red), based on Tyler’s extension (yellow). Fig. 13: Outputs of the M-ANMF-R\({}_{SCM}\) (left) and M-ANMF-G\({}_{SCM}\) (right) detectors. The real target is at coordinates (37, 26). The SCM is built from 256 secondary data. Figure 14 illustrates the robustness of the covariance matrix estimators to data corruption. A synthetic target is added 100 samples away from the real target and contaminates the secondary dataset. On the left side, the output of M-ANMF-R\({}_{SCM}\) is strongly degraded. The SCM is not robust to the presence of outliers. The target is hardly discernible, and the maximum is wrongly located. Under the same conditions, the response of the M-ANMF-R\({}_{TYL}\) detector is visualized in the right part. 
Although degraded compared to Figure 13, the behavior is still largely usable. The target presence is easily identified, and the new estimator (16) is much more resistant to data contamination. ## VII Conclusions In this paper, we considered the problem of adaptive point target detection by a correlated multi-arrays Mills Cross sonar system. Using a 2-step approach, we first derived two new detectors that are robust to differences in target amplitudes and to unknown scaling factors on the covariances. Subsequently, we have introduced an innovative estimator of the covariance matrix suitable to any non-Gaussian MSG environment. By these very general assumptions, the framework of the study can therefore concern systems with co-located or remote arrays. Experimental results show that the detection performance is up to 3 dB better than conventional approaches. The detectors cover a larger detection area and are particularly robust to spikes, impulsive noise, and data contamination. Future work will focus on establishing a theoretical demonstration of the matrix-CFAR behavior of these detectors, and on generalizing solutions for different numbers and geometries of arrays. ## Acknowledgments This work has been done thanks to the facilities offered by the Univ. Savoie Mont Blanc - CNRS/IN2P3 MUST computing center. ## Appendix A GLRT's derivation For the following, and for ease of reading, the punctuation mark "tilde" will be omitted. Thus \(\Sigma\) and \(\tilde{\sigma}_{i}\) will simply be denoted as \(\Sigma\) and \(\sigma_{i}\). The same is true for their respective estimates. ### _Maximum Likelihood Estimator of \(\mathbf{\Sigma}\) under \(H_{0}\)_ The MLE \(\mathbf{\hat{\Sigma}}_{0}\) is derived from the log-likelihood function: \[\begin{array}{l}\ln p_{\mathbf{x}}(\mathbf{x};\mathbf{\Sigma},H_{0})=\\ -mL\ln\pi+2\ln|\mathbf{\Sigma}^{-1}|-\ln|\mathbf{M}|-\left(\mathbf{x}^{H} \mathbf{\Sigma}^{-1}\mathbf{M}^{-1}\mathbf{\Sigma}^{-1}\mathbf{x}\right)\,, \end{array}\] whose derivative relative to \(\mathbf{\Sigma}^{-1}\) is: \[\frac{\partial\ln p_{\mathbf{x}}(\mathbf{x};\mathbf{\Sigma},H_{0})}{\partial \mathbf{\Sigma}^{-1}}=2\frac{\partial\ln|\mathbf{\Sigma}^{-1}|}{\partial \mathbf{\Sigma}^{-1}}-\frac{\partial\mathbf{x}^{H}\mathbf{\Sigma}^{-1} \mathbf{M}^{-1}\mathbf{\Sigma}^{-1}\mathbf{x}}{\partial\mathbf{\Sigma}^{-1}}.\] Knowing that ([52] (57)) \(\frac{\partial\ln|\mathbf{\Sigma}^{-1}|}{\partial\mathbf{\Sigma}^{-1}}= \mathbf{\Sigma}\), and ([52] (82)) \[\frac{\partial\mathbf{x}^{H}\mathbf{\Sigma}^{-1}\mathbf{M}^{-1}\mathbf{ \Sigma}^{-1}\mathbf{x}}{\partial\mathbf{\Sigma}^{-1}}=2\mathrm{Re}\left( \mathbf{M}^{-1}\mathbf{\Sigma}^{-1}\mathbf{x}\,\mathbf{x}^{H}\right)\text{ we have:}\] which leads to: \[\mathbf{\hat{\Sigma}}_{0}=\mathrm{Re}\left(\mathbf{M}^{-1}\mathbf{\hat{\Sigma }}_{0}^{-1}\mathbf{x}\,\mathbf{x}^{H}\right)\,. 
\tag{21}\] Expanding this matrix product with \(\mathbf{\hat{\Sigma}}_{0}=\begin{bmatrix}\hat{\sigma}_{1_{0}}\mathbf{I}_{m}& \mathbf{0}\\ \mathbf{0}&\hat{\sigma}_{2_{0}}\mathbf{I}_{m}\end{bmatrix}\), we have: \[\hat{\sigma}_{1_{0}}\,\mathbf{I}_{m}=\mathrm{Re}\left(\mathbf{M}_{11}^{-1} \frac{\mathbf{x}_{1}\,\mathbf{x}_{1}^{H}}{\hat{\sigma}_{1_{0}}}+\mathbf{M}_{1 2}^{-1}\frac{\mathbf{x}_{2}\,\mathbf{x}_{1}^{H}}{\hat{\sigma}_{2_{0}}}\right)\,,\] and using the trace operator: \[\begin{array}{l}m\,\hat{\sigma}_{1_{0}}=\mathrm{tr}\left[\mathrm{Re}\left( \mathbf{M}_{11}^{-1}\frac{\mathbf{x}_{1}\,\mathbf{x}_{1}^{H}}{\hat{\sigma}_{1 _{0}}}+\mathbf{M}_{12}^{-1}\frac{\mathbf{x}_{2}\,\mathbf{x}_{1}^{H}}{\hat{ \sigma}_{2_{0}}}\right)\right]\\ =\frac{\mathbf{x}_{1}^{H}\mathbf{M}_{11}^{-1}\mathbf{x}_{1}}{\hat{\sigma}_{1 _{0}}}+\mathrm{Re}\left(\frac{\mathbf{x}_{1}^{H}\mathbf{M}_{12}^{-1}\mathbf{ x}_{2}}{\hat{\sigma}_{2_{0}}}\right)\,.\end{array}\] We then have: \[\hat{\sigma}_{1_{0}}=\frac{1}{\hat{\sigma}_{1_{0}}}\frac{\mathbf{x}_{1}^{H} \mathbf{M}_{11}^{-1}\mathbf{x}_{1}}{m}+\frac{1}{\hat{\sigma}_{2_{0}}}\frac{ \mathrm{Re}\left(\mathbf{x}_{1}^{H}\mathbf{M}_{12}^{-1}\mathbf{x}_{2}\right) }{m}, \tag{22}\] and \[\begin{array}{l}\hat{\sigma}_{2_{0}}=\frac{1}{\hat{\sigma}_{1_{0}}}\frac{ \mathrm{Re}\left(\mathbf{x}_{2}^{H}\mathbf{M}_{21}^{-1}\mathbf{x}_{1}\right)} {m}+\frac{1}{\hat{\sigma}_{2_{0}}}\frac{\mathbf{x}_{2}^{H}\mathbf{M}_{22}^{-1 }\mathbf{x}_{2}}{m}.\end{array} \tag{23}\] Denoting \(a_{1}=\frac{\mathbf{x}_{1}^{H}\mathbf{M}_{11}^{-1}\mathbf{x}_{1}}{m}\), \(a_{12}=\frac{\mathrm{Re}\left(\mathbf{x}_{1}^{H}\mathbf{M}_{12}^{-1}\mathbf{ x}_{2}\right)}{m}\), \(a_{2}=\frac{\mathbf{x}_{2}^{H}\mathbf{M}_{22}^{-1}\mathbf{x}_{2}}{m}\) and using (22) and (23): \[\left\{\begin{array}{l}\hat{\sigma}_{1_{0}}^{2}=a_{1}+\frac{\hat{\sigma}_{1 _{0}}}{\hat{\sigma}_{2_{0}}}a_{12}\\ \hat{\sigma}_{2_{0}}^{2}=\frac{\hat{\sigma}_{2_{0}}}{\hat{\sigma}_{1_{0}}}a_{12} +a_{2}\,,\end{array}\right. \tag{24}\] or: \[\left\{\begin{array}{l}\hat{\sigma}_{1_{0}}^{2}=a_{1}+\frac{\hat{\sigma}_{1 _{0}}}{\hat{\sigma}_{2_{0}}}a_{12}\\ \hat{\sigma}_{1_{0}}^{2}=\frac{\hat{\sigma}_{1_{0}}}{\hat{\sigma}_{2_{0}}}a_{12} +\frac{\hat{\sigma}_{1_{0}}^{2}}{\hat{\sigma}_{2_{0}}^{2}}a_{2}\,.\end{array}\right.\] Fig. 14: Rao detector outputs based on the SCM (left) or Tyler (right) with secondary data corruption. The covariance matrix estimators are based on 256 secondary data. By equalization of right-hand terms: \[a_{1}=\frac{\hat{\sigma}_{1_{0}}^{2}}{\hat{\sigma}_{2_{0}}^{2}}a_{2}\,,\] and keeping the positive solution: \(\frac{\hat{\sigma}_{1_{0}}}{\hat{\sigma}_{2_{0}}}=\sqrt{\frac{a_{1}}{a_{2}}}\), we obtain from (24): \[\left\{\begin{array}{l}\hat{\sigma}_{1_{0}}^{2}=a_{1}+\sqrt{\frac{a_{1}}{a_{2 }}}a_{12}\\ \hat{\sigma}_{2_{0}}^{2}=a_{2}+\sqrt{\frac{a_{2}}{a_{1}}}a_{12}\,,\end{array}\right. \tag{25}\] and \[\hat{\sigma}_{1_{0}}^{2}\,\hat{\sigma}_{2_{0}}^{2}=\left(\sqrt{a_{1}\,a_{2}}+a _{12}\right)^{2}. 
\tag{26}\] ### _Maximum Likelihood Estimator of \(\boldsymbol{\alpha}\) under \(H_{1}\)_ The ML estimates \(\hat{\boldsymbol{\alpha}}=\left[\hat{\alpha}_{1}\,\hat{\alpha}_{2}\right]^{T}\) is found minimizing \(\left(\mathbf{x}-\mathbf{P}\boldsymbol{\alpha}\right)^{H}\mathbf{C}^{-1} \left(\mathbf{x}-\mathbf{P}\boldsymbol{\alpha}\right)\) with respect to \(\boldsymbol{\alpha}\) as ([53] (15.50)): \[\begin{split}\hat{\boldsymbol{\alpha}}&=\left( \mathbf{P}^{H}\mathbf{C}^{-1}\mathbf{P}\right)^{-1}\mathbf{P}^{H}\mathbf{C}^{- 1}\mathbf{x}\,,\\ &=\left(\mathbf{P}^{H}\boldsymbol{\Sigma}^{-1}\mathbf{M}^{-1} \boldsymbol{\Sigma}^{-1}\mathbf{P}\right)^{-1}\mathbf{P}^{H}\boldsymbol{ \Sigma}^{-1}\mathbf{M}^{-1}\boldsymbol{\Sigma}^{-1}\mathbf{x}\,.\end{split} \tag{27}\] ### _Maximum Likelihood Estimator of \(\boldsymbol{\Sigma}\) under \(H_{1}\)_ The derivative of the log-likelihood function under \(H_{1}\) with respect to \(\boldsymbol{\Sigma}^{-1}\) is: \[\frac{\partial\ln p_{\mathbf{x}}(\mathbf{x};\hat{\boldsymbol{ \alpha}},\boldsymbol{\Sigma},H_{1})}{\partial\boldsymbol{\Sigma}^{-1}}=2\, \frac{\partial\ln|\boldsymbol{\Sigma}^{-1}|}{\partial\boldsymbol{\Sigma}^{-1}} \\ -\frac{\partial\left(\mathbf{x}-\mathbf{P}\hat{\boldsymbol{ \alpha}}\right)^{H}\boldsymbol{\Sigma}^{-1}\mathbf{M}^{-1}\boldsymbol{\Sigma}^ {-1}\left(\mathbf{x}-\mathbf{P}\hat{\boldsymbol{\alpha}}\right)}{\partial \boldsymbol{\Sigma}^{-1}}.\] Furthermore: \[\left(\mathbf{x}-\mathbf{P}\hat{\boldsymbol{\alpha}}\right)^{H} \boldsymbol{\Sigma}^{-1}\mathbf{M}^{-1}\boldsymbol{\Sigma}^{-1}\left(\mathbf{x }-\mathbf{P}\hat{\boldsymbol{\alpha}}\right)\] \[=\mathbf{x}^{H}\boldsymbol{\Sigma}^{-1}\mathbf{M}^{-1} \boldsymbol{\Sigma}^{-1}\mathbf{x}-\hat{\boldsymbol{\alpha}}^{H}\mathbf{P}^{H }\boldsymbol{\Sigma}^{-1}\mathbf{M}^{-1}\boldsymbol{\Sigma}^{-1}\mathbf{x}\] \[=\mathbf{x}^{H}\boldsymbol{\Sigma}^{-1}\mathbf{M}^{-1} \boldsymbol{\Sigma}^{-1}\mathbf{x}-\mathbf{x}^{H}\boldsymbol{\Sigma}^{-1} \mathbf{M}^{-1}\boldsymbol{\Sigma}^{-1}\mathbf{P}\] \[\left(\mathbf{P}^{H}\boldsymbol{\Sigma}^{-1}\mathbf{M}^{-1} \boldsymbol{\Sigma}^{-1}\mathbf{P}\right)^{-1}\mathbf{P}^{H}\boldsymbol{\Sigma }^{-1}\mathbf{M}^{-1}\boldsymbol{\Sigma}^{-1}\mathbf{x}\,.\] If \(\mathbf{Z}=\begin{bmatrix}\sigma_{1}&0\\ 0&\sigma_{2}\end{bmatrix}\), we then notice that \(\boldsymbol{\Sigma}^{-1}\mathbf{P}=\mathbf{P}\mathbf{Z}^{-1}\) and \(\mathbf{P}^{H}\boldsymbol{\Sigma}^{-1}=\mathbf{Z}^{-1}\mathbf{P}^{H}\). 
Thus: \[\mathbf{x}^{H}\boldsymbol{\Sigma}^{-1}\mathbf{M}^{-1}\boldsymbol {\Sigma}^{-1}\mathbf{P}\left(\mathbf{P}^{H}\boldsymbol{\Sigma}^{-1}\mathbf{M}^ {-1}\boldsymbol{\Sigma}^{-1}\mathbf{P}\right)^{-1}\\ \mathbf{P}^{H}\boldsymbol{\Sigma}^{-1}\mathbf{M}^{-1}\boldsymbol{ \Sigma}^{-1}\mathbf{x}\] \[=\mathbf{x}^{H}\boldsymbol{\Sigma}^{-1}\mathbf{M}^{-1} \mathbf{P}\mathbf{Z}^{-1}\left(\mathbf{Z}^{-1}\mathbf{P}^{H}\mathbf{M}^{-1} \mathbf{P}\mathbf{Z}^{-1}\right)^{-1}\] \[\mathbf{Z}^{-1}\mathbf{P}^{H}\mathbf{M}^{-1}\boldsymbol{\Sigma}^{- 1}\mathbf{x}\] \[=\mathbf{x}^{H}\boldsymbol{\Sigma}^{-1}\mathbf{M}^{-1} \mathbf{P}\left(\mathbf{P}^{H}\mathbf{M}^{-1}\mathbf{P}\right)^{-1}\mathbf{P}^{ H}\mathbf{M}^{-1}\boldsymbol{\Sigma}^{-1}\mathbf{x}\,.\] By denoting \(\mathbf{D}^{-1}=\mathbf{M}^{-1}\mathbf{P}\left(\mathbf{P}^{H}\mathbf{M}^{-1} \mathbf{P}\right)^{-1}\mathbf{P}^{H}\mathbf{M}^{-1}\): \[\left(\mathbf{x}-\mathbf{P}\hat{\boldsymbol{\alpha}}\right)^{H} \boldsymbol{\Sigma}^{-1}\mathbf{M}^{-1}\boldsymbol{\Sigma}^{-1}\left(\mathbf{x }-\mathbf{P}\hat{\boldsymbol{\alpha}}\right)\,,\] \[=\mathbf{x}^{H}\boldsymbol{\Sigma}^{-1}\mathbf{M}^{-1} \boldsymbol{\Sigma}^{-1}\mathbf{x}-\mathbf{x}^{H}\boldsymbol{\Sigma}^{-1} \mathbf{D}^{-1}\boldsymbol{\Sigma}^{-1}\mathbf{x}\,.\] So: \[\frac{\partial\left(\mathbf{x}-\mathbf{P}\hat{\boldsymbol{\alpha} }\right)^{H}\boldsymbol{\Sigma}^{-1}\mathbf{M}^{-1}\boldsymbol{\Sigma}^{-1} \left(\mathbf{x}-\mathbf{P}\hat{\boldsymbol{\alpha}}\right)}{\partial \boldsymbol{\Sigma}^{-1}}\] \[=2\mathrm{Re}\left[\left(\mathbf{M}^{-1}-\mathbf{D}^{-1} \right)\boldsymbol{\Sigma}^{-1}\mathbf{x}\,\mathbf{x}^{H}\right]\,.\] Finally: \[\frac{\partial\ln p_{\mathbf{x}}(\mathbf{x};\hat{\boldsymbol{ \alpha}},\boldsymbol{\Sigma},H_{1})}{\partial\boldsymbol{\Sigma}^{-1}}=2 \boldsymbol{\Sigma}-2\mathrm{Re}\left[\left(\mathbf{M}^{-1}-\mathbf{D}^{-1} \right)\boldsymbol{\Sigma}^{-1}\mathbf{x}\,\mathbf{x}^{H}\right]\,.\] The minimum is given by: \[\hat{\boldsymbol{\Sigma}}_{1}=\mathrm{Re}\left[\left(\mathbf{M}^{-1}-\mathbf{D} ^{-1}\right)\hat{\boldsymbol{\Sigma}}_{1}^{-1}\mathbf{x}\,\mathbf{x}^{H}\right]\,. \tag{28}\] The rest is identical to paragraph A-A. ### _Expression of the GLRT_ As \(\mathbf{x}^{H}\hat{\boldsymbol{\Sigma}}_{0}^{-1}\mathbf{M}^{-1}\hat{\boldsymbol{ \Sigma}}_{0}^{-1}\mathbf{x}\) is a real positive scalar, we have: \[\mathbf{x}^{H}\hat{\boldsymbol{\Sigma}}_{0}^{-1}\mathbf{M}^{-1} \hat{\boldsymbol{\Sigma}}_{0}^{-1}\mathbf{x} =\mathrm{Re}\left[\mathrm{tr}\left(\hat{\boldsymbol{\Sigma}}_{0}^{-1} \mathbf{M}^{-1}\hat{\boldsymbol{\Sigma}}_{0}^{-1}\mathbf{x}\,\mathbf{x}^{H} \right)\right]\,,\] \[=\mathrm{tr}\left[\hat{\boldsymbol{\Sigma}}_{0}^{-1}\mathrm{ Re}\left(\mathbf{M}^{-1}\hat{\boldsymbol{\Sigma}}_{0}^{-1}\mathbf{x}\,\mathbf{x}^{H} \right)\right]\,,\] \[=mL\,.\] So the two PDF take the form: \[p_{\mathbf{x}}(\mathbf{x};\hat{\boldsymbol{\Sigma}}_{0},H_{0}) = \frac{1}{\pi^{mL}|\hat{\boldsymbol{\Sigma}}_{0}|^{2}|\mathbf{M}|} \mathrm{exp}\left(-mL\right)\,,\] \[p_{\mathbf{x}}(\mathbf{x};\hat{\boldsymbol{\alpha}},\hat{ \boldsymbol{\Sigma}}_{1},H_{1}) = \frac{1}{\pi^{mL}|\hat{\boldsymbol{\Sigma}}_{1}|^{2}|\mathbf{M}|} \mathrm{exp}\left(-mL\right)\,.\] The GLRT is expressed as: \[L_{G}(\mathbf{x})=\frac{\frac{1}{|\hat{\boldsymbol{\Sigma}}_{1}|^{2}}}{|\hat{ \boldsymbol{\Sigma}}_{0}|^{2}}=\frac{|\hat{\boldsymbol{\Sigma}}_{0}|^{2}}{|\hat{ \boldsymbol{\Sigma}}_{1}|^{2}} \tag{29}\] which can be thought of as a generalized variance ratio. 
As \(\ Replacing into the likelihood \(L_{G}(\mathbf{x})=\dfrac{|\hat{\mathbf{\Sigma}}_{0}|^{2}}{|\hat{\mathbf{\Sigma}}_{1}|^{2}}= \dfrac{\hat{\sigma}_{1_{0}}^{2m}}{\hat{\sigma}_{1_{1}}^{2m}}\) leads to: \[L_{G}(\mathbf{x})^{1/m} =\dfrac{\hat{\sigma}_{1_{0}}^{2}}{\hat{\sigma}_{1_{1}}^{2}}=\dfrac{ \dfrac{\mathbf{x}_{1}^{H}\mathbf{M}_{11}^{-1}\mathbf{x}_{1}}{m}}{\dfrac{1}{m} \Bigg{(}\mathbf{x}_{1}^{H}\mathbf{M}_{11}^{-1}\mathbf{x}_{1}-\dfrac{|\mathbf{ p}_{1}^{H}\mathbf{M}_{11}^{-1}\mathbf{x}_{1}|^{2}}{\mathbf{p}_{1}^{H}\mathbf{M}_{11}^{- 1}\mathbf{p}_{1}}\Bigg{)}}\] \[=\Bigg{(}1-\dfrac{|\mathbf{p}_{1}^{H}\mathbf{M}_{11}^{-1}\mathbf{ x}_{1}|^{2}}{\left(\mathbf{p}_{1}^{H}\mathbf{M}_{11}^{-1}\mathbf{p}_{1}\right) \left(\mathbf{x}_{1}^{H}\mathbf{M}_{11}^{-1}\mathbf{x}_{1}\right)}\Bigg{)}^{- 1}\,.\] Defining \(l_{G}(\mathbf{x})=\dfrac{L_{G}(\mathbf{x})^{1/m}-1}{L_{G}(\mathbf{x})^{1/m}}= 1-L_{G}(\mathbf{x})^{-1/m}\), we obtain the well-known NMF detector [11]: \[l_{G}(\mathbf{x})=\dfrac{|\mathbf{p}_{1}^{H}\mathbf{M}_{11}^{-1}\mathbf{x}_{ 1}|^{2}}{\left(\mathbf{p}_{1}^{H}\mathbf{M}_{11}^{-1}\mathbf{p}_{1}\right) \left(\mathbf{x}_{1}^{H}\mathbf{M}_{11}^{-1}\mathbf{x}_{1}\right)}. \tag{30}\] ### _Uncorrelated arrays case_ When the two arrays are fully uncorrelated, we have \(\hat{\sigma}_{i_{0}}^{2}=\dfrac{\mathbf{x}_{i}^{H}\mathbf{M}_{ii}^{-1} \mathbf{x}_{i}}{m}\) and \(\hat{\sigma}_{i_{1}}^{2}=\dfrac{1}{m}\Bigg{(}\mathbf{x}_{i}^{H}\mathbf{M}_{ii }^{-1}\mathbf{x}_{i}-\dfrac{|\mathbf{p}_{i}^{H}\mathbf{M}_{ii}^{-1}\mathbf{x} _{i}|^{2}}{\mathbf{p}_{i}^{H}\mathbf{M}_{ii}^{-1}\mathbf{p}_{i}}\Bigg{)}\). We obtain: \[L_{G}(\mathbf{x}) =\dfrac{|\hat{\mathbf{\Sigma}}_{0}|^{2}}{|\hat{\mathbf{\Sigma}}_{1}|^{2} }=\dfrac{\prod_{i=1}^{2}\hat{\sigma}_{i_{0}}^{2m}}{\prod_{i=1}^{ 2}\hat{\sigma}_{i_{1}}^{2m}}=\prod_{i=1}^{2}\left[\hat{\sigma}_{i_{0}}^{2} \right]^{m}\,, \tag{31}\] \[=\prod_{i=1}^{2}\left[1-\dfrac{|\mathbf{p}_{i}^{H}\mathbf{M}_{ii }^{-1}\mathbf{x}_{i}|^{2}}{\left(\mathbf{p}_{i}^{H}\mathbf{M}_{ii}^{-1}\mathbf{ p}_{i}\right)\left(\mathbf{x}_{i}^{H}\mathbf{M}_{ii}^{-1}\mathbf{x}_{i}\right)} \right]^{-m}\,. \tag{32}\] This corresponds to the MIMO ANMF detector on independent arrays presented in [40]. ### \(\mathbf{\Sigma}=\sigma\,\mathbf{I}_{2m}\) _case_ When \(\mathbf{\Sigma}\) is the identity matrix up to a scalar factor, \(\sigma_{1}=\sigma_{2}\), whose estimators are renamed \(\hat{\sigma}_{0}\) under \(H_{0}\) and \(\hat{\sigma}_{1}\) under \(H_{1}\). 
\[L_{G}(\mathbf{x}) =\dfrac{|\hat{\mathbf{\Sigma}}_{0}|^{2}}{|\hat{\mathbf{\Sigma}}_{1}|^{2}}, \text{with }\hat{\mathbf{\Sigma}}_{0}=\hat{\sigma}_{0}\mathbf{I}_{mL}\text{ and }\hat{\mathbf{\Sigma}}_{1}=\hat{\sigma}_{1}\mathbf{I}_{mL}\] \[=\dfrac{\hat{\sigma}_{0}^{2mL}}{\hat{\sigma}_{1}^{2mL}}\,.\] From \(\mathbf{\hat{\Sigma}}_{0}=\operatorname{Re}\left(\mathbf{M}^{-1}\mathbf{\hat{\Sigma}} _{0}^{-1}\mathbf{x}\,\mathbf{x}^{H}\right)\), we have the following relations: \[\operatorname{tr}\left(\mathbf{\hat{\Sigma}}_{0}\right) =\operatorname{tr}\left[\operatorname{Re}\left(\mathbf{M}^{-1} \mathbf{\hat{\Sigma}}_{0}^{-1}\mathbf{x}\,\mathbf{x}^{H}\right)\right]\,,\] \[\hat{\sigma}_{0}\operatorname{tr}\left(\mathbf{I}_{mL}\right) =\dfrac{1}{\hat{\sigma}_{0}}\!\operatorname{Re}\left[\operatorname{ tr}\left(\mathbf{M}^{-1}\mathbf{x}\,\mathbf{x}^{H}\right)\right]\,,\] \[\hat{\sigma}_{0}^{2} =\dfrac{\mathbf{x}^{H}\mathbf{M}^{-1}\mathbf{x}}{mL},\text{as }\mathbf{M}^{-1}\text{ is positive definite}\,.\] Identically, we have: \[\hat{\sigma}_{1}^{2} =\dfrac{\left(\mathbf{x}-\mathbf{P}\mathbf{\alpha}\right)^{H}\mathbf{ M}^{-1}\left(\mathbf{x}-\mathbf{P}\mathbf{\alpha}\right)}{mL},\] \[=\dfrac{\mathbf{x}^{H}\mathbf{M}^{-1}\mathbf{x}-\mathbf{x}^{H} \mathbf{M}^{-1}\mathbf{P}\left(\mathbf{P}^{H}\mathbf{M}^{-1}\mathbf{P}\right)^ {-1}\mathbf{P}^{H}\mathbf{M}^{-1}\mathbf{x}}{mL}.\] \[L_{G}(\mathbf{x})^{1/mL}=\dfrac{\hat{\sigma}_{0}^{2}}{\hat{ \sigma}_{1}^{2}},\] \[=\dfrac{\mathbf{x}^{H}\mathbf{M}^{-1}\mathbf{x}}{\mathbf{x}^{H} \mathbf{M}^{-1}\mathbf{x}-\mathbf{x}^{H}\mathbf{M}^{-1}\mathbf{P}\left( \mathbf{P}^{H}\mathbf{M}^{-1}\mathbf{P}\right)^{-1}\mathbf{P}^{H}\mathbf{M}^{-1} \mathbf{x}}\] \[=\left(1-\dfrac{\mathbf{x}^{H}\mathbf{M}^{-1}\mathbf{P}\left( \mathbf{P}^{H}\mathbf{M}^{-1}\mathbf{P}\right)^{-1}\mathbf{P}^{H}\mathbf{M}^{-1} \mathbf{x}}{\mathbf{x}^{H}\mathbf{M}^{-1}\mathbf{x}}\right)^{-1}\,.\] By defining \(l_{G}(\mathbf{x})=\dfrac{L_{G}(\mathbf{x})^{1/mL}-1}{L_{G}(\mathbf{x})^{1/mL}}\) or \(L_{G}(\mathbf{x})^{1/mL}=[1-l_{G}(\mathbf{x})]^{-1}\), we obtain an equivalent test: \[l_{G}(\mathbf{x})=\dfrac{\mathbf{x}^{H}\mathbf{M}^{-1}\mathbf{P}\left( \mathbf{P}^{H}\mathbf{M}^{-1}\mathbf{P}\right)^{-1}\mathbf{P}^{H}\mathbf{M}^{-1} \mathbf{x}}{\mathbf{x}^{H}\mathbf{M}^{-1}\mathbf{x}}, \tag{33}\] which corresponds to the subspace version of the ACE test presented in [54]. 
## Appendix C Rao's detector derivation The partial derivative of the log-likelihood function is defined as: \[\dfrac{\partial\ln p_{\mathbf{x}}(\mathbf{x};\mathbf{\xi}_{R},\mathbf{\xi}_{S})}{ \partial\mathbf{\xi}_{R}}=\begin{bmatrix}\dfrac{\partial\ln p_{\mathbf{x}}( \mathbf{x};\mathbf{\xi}_{R},\mathbf{\xi}_{S})}{\partial\mathrm{Re}\left(\mathbf{\alpha} \right)}\\ \dfrac{\partial\ln p_{\mathbf{x}}(\mathbf{x};\mathbf{\xi}_{R},\mathbf{\xi}_{S})}{ \partial\mathrm{Im}\left(\mathbf{\alpha}\right)}\end{bmatrix}.\] From [53] (15.60), we obtain: \[\dfrac{\partial\ln p_{\mathbf{x}}(\mathbf{x};\mathbf{\xi}_{R},\mathbf{\xi}_{S})}{ \partial\mathbf{\xi}_{R}}=\begin{bmatrix}2\operatorname{Re}\left[\mathbf{P}^{H} \mathbf{C}^{-1}(\mathbf{\xi}_{S})\left(\mathbf{x}-\mathbf{P}\mathbf{\alpha}\right) \right]\\ 2\operatorname{Im}\left[\mathbf{P}^{H}\mathbf{C}^{-1}(\mathbf{\xi}_{S})\left( \mathbf{x}-\mathbf{P}\mathbf{\alpha}\right)\right]\end{bmatrix}\,.\] Thus: \[\dfrac{\partial\ln p_{\mathbf{x}}(\mathbf{x};\mathbf{\xi}_{R},\mathbf{\xi}_{S})}{ \partial\mathbf{\xi}_{R}}\Bigg{|}\mathbf{\xi}_{R}=\mathbf{0}=\begin{bmatrix}2\ Finally, by replacing (34) and (35) into (12) leads to: \[L_{R}(\mathbf{x})=\\ \left[2\operatorname{Re}\left(\mathbf{x}^{H}\mathbf{C}^{-1}(\hat{ \boldsymbol{\xi}}_{S_{0}})\mathbf{P}\right)-2\operatorname{Im}\left(\mathbf{x}^{ H}\mathbf{C}^{-1}(\hat{\boldsymbol{\xi}}_{S_{0}})\mathbf{P}\right)\right]\\ \left[\frac{\left(\mathbf{P}^{H}\mathbf{C}^{-1}(\hat{\boldsymbol {\xi}}_{S_{0}})\mathbf{P}\right)^{-1}}{2}\mathbf{0}\right]\\ \mathbf{0}\] \[\left[\begin{matrix}2\operatorname{Re}\left(\mathbf{P}^{H} \mathbf{C}^{-1}(\hat{\boldsymbol{\xi}}_{S_{0}})\mathbf{x}\right)\\ 2\operatorname{Im}\left(\mathbf{P}^{H}\mathbf{C}^{-1}(\hat{\boldsymbol{\xi}}_ {S_{0}})\mathbf{x}\right)\end{matrix}\right],\] which simplifies to: \[L_{R}(\mathbf{x})=2\,\left[\operatorname{Re}\left(\mathbf{x}^{ H}\mathbf{C}^{-1}(\hat{\boldsymbol{\xi}}_{S_{0}})\mathbf{P}\right)\left( \mathbf{P}^{H}\mathbf{C}^{-1}(\hat{\boldsymbol{\xi}}_{S_{0}})\mathbf{P}\right) ^{-1}\right.\\ \left.\operatorname{Re}\left(\mathbf{P}^{H}\mathbf{C}^{-1}( \hat{\boldsymbol{\xi}}_{S_{0}})\mathbf{x}\right)-\operatorname{Im}\left( \mathbf{x}^{H}\mathbf{C}^{-1}(\hat{\boldsymbol{\xi}}_{S_{0}})\mathbf{P}\right) \right.\\ \left.\left(\mathbf{P}^{H}\mathbf{C}^{-1}(\hat{\boldsymbol{\xi}}_ {S_{0}})\mathbf{P}\right)^{-1}\operatorname{Im}\left(\mathbf{P}^{H}\mathbf{C}^ {-1}(\hat{\boldsymbol{\xi}}_{S_{0}})\mathbf{x}\right)\right].\] Knowing that \(\mathbf{P}^{H}\mathbf{C}^{-1}(\hat{\boldsymbol{\xi}}_{S_{0}})\mathbf{P}\) is real and positive definite, it can be factorized and incorporated into the real and imaginary parts. After some algebraic manipulation, we obtain (14). 
## Appendix D Maximum likelihood estimator of the covariance matrix in Compound-Gaussian clutter The likelihood of \(\left\{\mathbf{x}_{k}\right\}_{k\in[1,K]}\) under \(H_{0}\) can be rewritten as: \[p_{\mathbf{x}}\left(\left\{\mathbf{x}_{k}\right\}_{k\in[1,K]}; \mathbf{M},\mathbf{T}_{k},H_{0}\right)\\ =\prod_{k=1}^{K}\frac{1}{\pi^{2m}|\widetilde{\mathbf{C}}|} \exp\left(-\mathbf{x}_{k}^{H}\widetilde{\mathbf{C}}^{-1}\mathbf{x}_{k}\right) \,,\\ =\frac{1}{\pi^{2m\,K}|\mathbf{M}|^{K}}\prod_{k=1}^{K}\frac{1}{| \mathbf{T}_{k}|^{2}}\exp\left(-\mathbf{x}_{k}^{H}\mathbf{T}_{k}^{-1}\mathbf{M} ^{-1}\mathbf{T}_{k}^{-1}\mathbf{x}_{k}\right)\,,\] where \(\widetilde{\mathbf{C}}=\mathbf{T}_{k}\mathbf{M}\mathbf{T}_{k}\), and \(\mathbf{T}_{k}=\begin{bmatrix}\sqrt{\tau_{1_{k}}}&0\\ 0&\sqrt{\tau_{2_{k}}}\end{bmatrix}\otimes\mathbf{I}_{m}\). The log-likelihood can be written as: \[\ln p_{\mathbf{x}}\left(\left\{\mathbf{x}_{k}\right\}_{k\in[1,K] };\mathbf{M},\mathbf{T}_{k},H_{0}\right)=-2m\,K\ln\pi-K\,\ln|\mathbf{M}|\\ +2\sum_{k=1}^{K}\ln|\mathbf{T}_{k}^{-1}|-\sum_{k=1}^{K}\mathbf{x}_{ k}^{H}\mathbf{T}_{k}^{-1}\mathbf{M}^{-1}\mathbf{T}_{k}^{-1}\mathbf{x}_{k}\,. \tag{36}\] According to [52] (82), the derivative with respect to \(\mathbf{T}_{k}^{-1}\) is: \[\frac{\partial\ln p_{\mathbf{x}}(\left\{\mathbf{x}_{k}\right\}_ {k\in[1,K]};\mathbf{M},\mathbf{T}_{k},H_{0})}{\partial\mathbf{T}_{k}^{-1}}=2\, \mathbf{T}_{k}\\ -2\operatorname{Re}\left(\mathbf{M}^{-1}\mathbf{T}_{k}^{-1} \mathbf{x}_{k}\,\mathbf{x}_{k}^{H}\right)\,. \tag{37}\] Following the same approach as in Appendix A, we obtain the minimum for \(\mathbf{T}_{k}\) for a fixed \(\mathbf{M}\): \[\widehat{\mathbf{T}}_{k}=\operatorname{Re}\left(\mathbf{M}^{-1}\widehat{ \mathbf{T}}_{k}^{-1}\mathbf{x}_{k}\,\mathbf{x}_{k}^{H}\right)\,, \tag{38}\] where \[\widehat{\mathbf{T}}_{k}=\begin{bmatrix}\sqrt{\hat{\tau}_{1_{k}}}&0\\ 0&\sqrt{\hat{\tau}_{2_{k}}}\end{bmatrix}\otimes\mathbf{I}_{m}\,, \tag{39}\] and \[\hat{\tau}_{1_{k}} =\,t_{1}+\sqrt{\frac{t_{1}}{t_{2}}}t_{12}\,, \tag{40}\] \[\hat{\tau}_{2_{k}} =\,t_{2}+\sqrt{\frac{t_{2}}{t_{1}}}t_{12}\,, \tag{41}\] with \[t_{1} =\,\frac{1}{m}\mathbf{x}_{1,k}^{H}\mathbf{M}_{11}^{-1}\mathbf{x} _{1,k}\,, \tag{42}\] \[t_{2} =\,\frac{1}{m}\mathbf{x}_{2,k}^{H}\mathbf{M}_{22}^{-1}\mathbf{x}_ {2,k}\,,\] (43) \[t_{12} =\,\frac{1}{m}\operatorname{Re}\left(\mathbf{x}_{1,k}^{H}\mathbf{ M}_{12}^{-1}\mathbf{x}_{2,k}\right)\,. \tag{44}\] Replacing \(\mathbf{T}_{k}\) by \(\widehat{\mathbf{T}}_{k}\) in (36) and deriving with respect to \(\mathbf{M}^{-1}\) lead to: \[\frac{\partial\ln p_{\mathbf{x}}(\left\{\mathbf{x}_{k}\right\}_ {k\in[1,K]};\mathbf{M},\widehat{\mathbf{T}}_{k},H_{0})}{\partial\mathbf{M}^{-1}} =K\,\mathbf{M}\\ -\sum_{k=1}^{K}\left(\widehat{\mathbf{T}}_{k}^{-1}\mathbf{x}_{k} \right)\left(\widehat{\mathbf{T}}_{k}^{-1}\mathbf{x}_{k}\right)^{H}\,, \tag{45}\] and the minimum in \(\mathbf{M}\) is given by: \[\widehat{\mathbf{M}} =\frac{1}{K}\!\sum_{k=1}^{K}\left(\widehat{\mathbf{T}}_{k}^{-1} \mathbf{x}_{k}\right)\left(\widehat{\mathbf{T}}_{k}^{-1}\mathbf{x}_{k}\right)^{H}\,, \tag{46}\] \[=\frac{1}{K}\!\sum_{k=1}^{K}\widehat{\mathbf{T}}_{k}^{-1}\mathbf{x }_{k}\mathbf{x}_{k}^{H}\widehat{\mathbf{T}}_{k}^{-1}\,. \tag{47}\] The estimator (47) is independent of the textures. This could be shown by substituting \(\mathbf{x}_{k}=\begin{bmatrix}\mathbf{x}_{1,k}\\ \mathbf{x}_{2,k}\end{bmatrix}\) by \(\begin{bmatrix}\sqrt{\tau_{1_{k}}}\,\mathbf{c}_{1,k}\\ \sqrt{\tau_{2_{k}}}\,\mathbf{c}_{2,k}\end{bmatrix}\) in (39) and (47).
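As a numerical illustration of (47) and of the texture-independence just noted, the following minimal sketch (not the authors' implementation) runs the recursion (17)–(18) on synthetic MSG snapshots and checks that rescaling the per-array power of every snapshot leaves the estimate essentially unchanged. Here \(\mathbf{M}_{ij}^{-1}\) is read as the \((i,j)\) block of \(\widehat{\mathbf{M}}^{-1}\), consistent with the block expansions used in this appendix; the covariance and texture parameters are toy values.

```python
import numpy as np

def tau_hat(x, M_inv, m):
    """Per-snapshot texture estimates (40)-(44); M_inv holds the blocks of M_hat^{-1}."""
    x1, x2 = x[:m], x[m:]
    t1 = (x1.conj() @ M_inv[:m, :m] @ x1).real / m
    t2 = (x2.conj() @ M_inv[m:, m:] @ x2).real / m
    t12 = (x1.conj() @ M_inv[:m, m:] @ x2).real / m
    return t1 + np.sqrt(t1 / t2) * t12, t2 + np.sqrt(t2 / t1) * t12

def two_tyl(X, m, n_iter=100, tol=1e-9):
    """Fixed-point iteration (17)-(18) for the estimator (16)/(47); X is (K, 2m)."""
    K = X.shape[0]
    M_hat = np.eye(2 * m, dtype=complex)
    for _ in range(n_iter):
        M_inv = np.linalg.inv(M_hat)
        M_new = np.zeros_like(M_hat)
        for x in X:
            tau1, tau2 = tau_hat(x, M_inv, m)
            y = x / np.concatenate([np.full(m, np.sqrt(tau1)), np.full(m, np.sqrt(tau2))])
            M_new += np.outer(y, y.conj()) / K        # T_k^{-1} x_k x_k^H T_k^{-1} averaged
        rel = np.linalg.norm(M_new - M_hat) / np.linalg.norm(M_hat)
        M_hat = M_new
        if rel < tol:
            break
    return M_hat

# toy MSG data: x_k = T_k c_k with c_k ~ CN(0, M) and tau ~ Gamma(shape=0.5, scale=2)
rng = np.random.default_rng(2)
m, K = 6, 200
rho = 0.5
M = rho ** np.abs(np.subtract.outer(np.arange(2 * m), np.arange(2 * m))) + 0j  # toy covariance
Lc = np.linalg.cholesky(M)
C = (rng.standard_normal((K, 2 * m)) + 1j * rng.standard_normal((K, 2 * m))) / np.sqrt(2) @ Lc.T
tau = rng.gamma(shape=0.5, scale=2.0, size=(K, 2))
scale = np.concatenate([np.sqrt(tau[:, :1]).repeat(m, 1), np.sqrt(tau[:, 1:]).repeat(m, 1)], axis=1)
X = C * scale

M1 = two_tyl(X, m)
# rescale the per-array power of every snapshot: the estimate should not change
gamma = rng.uniform(0.1, 10.0, size=(K, 2))
g = np.concatenate([np.sqrt(gamma[:, :1]).repeat(m, 1), np.sqrt(gamma[:, 1:]).repeat(m, 1)], axis=1)
M2 = two_tyl(X * g, m)
print("texture-invariance:", np.allclose(M1, M2))
```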
Multi-array systems are widely used in sonar and radar applications. They can improve communication rates, target discrimination, and imaging. For a multibeam sonar system operating with two receiving arrays, we develop new adaptive detectors that improve detection capability over conventional sonar detection methods. To do so, we account more specifically for the correlation between the arrays, estimate the covariance matrix only up to scale factors, and focus on non-Gaussian noise. In a partially homogeneous environment, the two-step Generalized Likelihood Ratio Test (GLRT) and Rao approaches lead to a generalization of the Adaptive Normalized Matched Filter (ANMF) test and to numerically simple detectors ...
2309.10095
A Semi-Supervised Approach for Power System Event Identification
Event identification is increasingly recognized as crucial for enhancing the reliability, security, and stability of the electric power system. With the growing deployment of Phasor Measurement Units (PMUs) and advancements in data science, there are promising opportunities to explore data-driven event identification via machine learning classification techniques. However, obtaining accurately-labeled eventful PMU data samples remains challenging due to its labor-intensive nature and uncertainty about the event type (class) in real-time. Thus, it is natural to use semi-supervised learning techniques, which make use of both labeled and unlabeled samples. We evaluate three categories of classical semi-supervised approaches: (i) self-training, (ii) transductive support vector machines (TSVM), and (iii) graph-based label spreading (LS) method. Our approach characterizes events using physically interpretable features extracted from modal analysis of synthetic eventful PMU data. In particular, we focus on the identification of four event classes whose identification is crucial for grid operations. We have developed and publicly shared a comprehensive Event Identification package which consists of three aspects: data generation, feature extraction, and event identification with limited labels using semi-supervised methodologies. Using this package, we generate and evaluate eventful PMU data for the South Carolina synthetic network. Our evaluation consistently demonstrates that graph-based LS outperforms the other two semi-supervised methods that we consider, and can noticeably improve event identification performance relative to the setting with only a small number of labeled samples.
Nima Taghipourbazargani, Lalitha Sankar, Oliver Kosut
2023-09-18T19:07:41
http://arxiv.org/abs/2309.10095v2
# A Semi-Supervised Approach for Power System Event Identification ###### Abstract Event identification is increasingly recognized as crucial for enhancing the reliability, security, and stability of the electric power system. With the growing deployment of Phasor Measurement Units (PMUs) and advancements in data science, there are promising opportunities to explore data-driven event identification via machine learning classification techniques. However, obtaining accurately-labeled eventful PMU data samples remains challenging due to its labor-intensive nature and uncertainty about the event type (class) in real-time. Thus, it is natural to use semi-supervised learning techniques, which make use of both labeled and unlabeled samples. We evaluate three categories of classical semi-supervised approaches: (i) self-training, (ii) transductive support vector machines (TSVM), and (iii) graph-based label spreading (LS) method. Our approach characterizes events using physically interpretable features extracted from modal analysis of synthetic eventful PMU data. In particular, we focus on the identification of four event classes (load loss, generation loss, line trip, and bus fault) whose identification is crucial for grid operations. We have developed and publicly shared a comprehensive Event Identification package which consists of three aspects: data generation, feature extraction, and event identification with limited labels using semi-supervised methodologies. Using this package, we generate and evaluate eventful PMU data for the South Carolina 500-Bus synthetic network. Our evaluation consistently demonstrates that graph-based LS outperforms the other two semi-supervised methods that we consider, and can noticeably improve event identification performance relative to the setting with only a small number of labeled samples. Event identification, machine learning, Semi-supervised learning, phasor measurement units, mode decomposition. ## I Introduction Power systems are inherently complex dynamical systems, primarily due to the involvement of diverse components such as generators, buses, lines, and loads with varying sizes, all exhibiting non-linear behavior and intricate interactions. Given their extensive geographical coverage and scale, power systems are susceptible to various classes of events (for example, generation loss, load loss, line trips). Event identification methods can be used in real-time operations to guide control actions, as well as for off-line analysis of past events to make the system more reliable, secure, and stable in the future. Numerous studies have explored event detection in power systems, focusing on determining whether an event has occurred [1, 2, 3]. However, event identification, which involves discerning the specific class of event that occurred, presents even greater challenges since it requires learning the unique signatures of different events. Our primary focus here is on addressing the event identification problem in power systems. To this end, our analysis in the sequel assumes that an event has been detected with knowledge of its precise time. The increasing deployment of Phasor Measurement Units (PMUs) throughout the grid, coupled with advancements in machine learning techniques, presents invaluable opportunities for exploring advanced data-driven event identification methods. 
These methods offer the distinct advantage of differentiating between various classes of power system events based on high-dimensional spatio-temporally correlated time-synchronized phasor measurements with high resolution, without heavily relying on dynamic modeling of power system components. The majority of the recent literature in the field of data-driven event identification (e.g., [4, 5, 6, 7, 8, 9, 10, 11, 12]) employs machine learning and pattern recognition techniques for making statistical inferences or decisions using system measurements. A significant portion of these studies predominantly falls within the supervised learning paradigm, which requires accurate labeled data. However, acquiring expert knowledge for labeling various classes of events can be expensive and laborious. Consequently, the manual labeling of events by operators constitutes only about 2% of the events [13]. Such limitations motivate researchers to leverage simulation-based synthetic eventful PMU data for investigating and evaluating the performance of their proposed event identification methods (e.g., [8, 14, 15]). Despite the availability of several resources providing access to synthetic test cases with transmission and/or distribution network models for dynamic simulations [16, 17], conducting a fair comparison of diverse event identification methods might pose a significant challenge. This challenge primarily stems from the numerous parameters associated with dynamic models of system components and simulation settings, coupled with the diverse characteristics of simulated events, such as class, duration, and location, among others. While certain recent publications may have access to significant real and/or synthetic PMU data (e.g., [18, 19]), the lack of publicly available properly labeled eventful PMU data continues to be a persistent concern for numerous researchers. Unsupervised and semi-supervised learning are common practices in machine learning when dealing with no or limited labeled data. Unsupervised learning aims to infer the under lying structure within the unlabeled data. Although they can distinguish between clusters of events [20, 21, 22, 23, 24], they do not possess the ground truth to associate each cluster with its real-world meaning. Furthermore, when there is access to even a small amount of labeled data, supervised learning has been shown to perform better than unsupervised learning methods [2, 24]. Semi-supervised learning approaches, on the other hand, aim to label unlabeled data points using knowledge learned from a small number of labeled data points which can significantly enhance the performance of a classification task [25]. Reference [18] presents a framework for event detection, localization, and classification in power grids based on semi-supervised learning. A pseudo labeling (PL) technique is adopted to classify events using the convolutional neural network (CNN) backbone with cross-entropy loss. A semi-supervised event identification framework is proposed in [26] which utilizes a hybrid machine learning-based method to reduce biases of different classifiers. In [27], the authors explore the application of deep learning techniques and PMU data to develop real-time event identification models for transmission networks. This is achieved by leveraging information from a large pool of unlabeled events, while also taking into account the class distribution mismatch problem. 
In [28, 29], the authors proposed hidden structure semi-supervised machine (HS3M), a novel data-driven event identification method that combines unlabeled and partially labeled data to address limitations in supervised, semi-supervised, and hidden structure learning. The approach introduces a parametric dual optimization procedure to enhance the learning objective and improve event identification accuracy. The learning problem involves optimizing a non-smooth function that may be convex or concave. The existing literature on neural network-based event identification methods is marked by certain limitations and challenges. These encompass the scarcity of suitable historical labeled event data, restricted interpretability in feature extraction, elevated computational intricacy, and the necessity for meticulous parameter calibration. Additionally, pseudo labeling approaches such as [28] confront uncertainty regarding the attainability of global optimality. Moreover, it is worth noting that, to the best of the authors' knowledge, a thorough investigation into the ramifications arising from the initial distribution of labeled and unlabeled samples has not been undertaken. Building upon the promising results of our previous work on event identification in the supervised setting [30], this paper introduces a semi-supervised event identification framework. The aim of this study is to explore the potential benefits of incorporating unlabeled samples in enhancing the performance of the event identification task. To this end, we thoroughly investigate and compare the performance of various semi-supervised algorithms, including: (i) self-training with different base classifiers (i.e., support vector machine with linear kernel (SVML) as well as with radial basis function kernel (SVMR), gradient boosting (GB), decision trees (DT), and k-Nearest Neighbors (kNN)), (ii) transductive support vector machines (TSVM), and (iii) graph-based label spreading (LS) to explore their effectiveness. We chose these classical semi-supervised models for two primary reasons: firstly, the wide array of proposed semi-supervised classification algorithms in the past two decades (see, [31], and references therein) necessitates a comprehensive understanding of which models are most suitable and efficient for event identification; and secondly, they provide a clearer illustration and intuition of the impact of incorporating unlabeled samples compared to more advanced methods. Although there may not be a one-size-fits-all solution, each method has its own advantages and disadvantages, and it is important to evaluate their suitability. Notably, our experiments consistently illustrate the superior performance of the graph-based LS method compared to other approaches. Even in worst-case scenarios where the initial distribution of labeled and unlabeled samples does not necessarily reflect the true distribution of event classes, the graph-based LS method stands out in robustly and significantly enhancing event identification performance. Our key contributions are as follows: * Introduction of a semi-supervised event identification framework that leverages physically interpretable features derived from modal analysis of PMU data. * Thorough exploration of the influence of the initial distribution of labeled and unlabeled samples, along with the quantity of unlabeled samples, on the efficacy of diverse semi-supervised event identification techniques. 
* Development of an all-inclusive Event Identification package comprising of an event generation module based on the power system simulator for engineering (PSS\({}^{\#}\)E) Python application programming interface (API), a feature extraction module utilizing methodologies from our previous research [30], and a semi-supervised classification module. The remainder of the paper is organized as follows. Section. II describes the simulation process to generate the synthetic eventful PMU data. We explain the proposed semi-supervised event identification framework in Section. III. In Section. IV, we further elaborate on the pseudo labeling process of the unlabeled samples, and the classification models. We discuss the simulation results in Section. V. Finally, Section. VI concludes the paper. ## II Generation of the Synthetic Eventful Time-series PMU Data Consider an electric grid composed of set of loads, generators, lines, and buses. We investigate four distinct event classes denoted as \(\mathcal{E}\in\{\text{LL},\,\text{GL},\,\text{LT},\,\text{BF}\}\), representing load loss, generation loss, line trip, and bus fault events, respectively. Each PMU provides multiple measurement channels relative to its installation bus. In this study, we focus on voltage magnitude (\(V_{m}\)), corresponding angle (\(V_{a}\)), and frequency (\(F\)) channels for clarity, with potential inclusion of other channels. For any channel \(c\in\mathcal{C}=\{V_{m},V_{a},F\}\), let \(y_{i}^{c}(n)\in\mathbb{R}\) represent the \(n^{\text{th}}\) measurement, \(n=0,\ldots,N-1\), where the total number of samples is \(N\), from the \(i\)th PMU. Assuming PMU sampling period of \(T_{s}\), we thus collect eventful data for \(t_{s}=NT_{s}\) seconds. These measurements, for the \(c^{\text{th}}\) channel, are collated from \(m\) PMUs to form a matrix \(\,\mathbf{y}^{c}=[\cdots,\mathbf{y}_{i}^{c},\cdots]^{T}\in\mathbb{R}^{m\times N}\) where \(\mathbf{y}_{i}^{c}\) is a \(N\)-length (column) vector for the \(i^{\text{th}}\) PMU with entries \(y_{i}^{e}(n)\), for all \(n\). We use superscript \(T\) to denote the tranpose operator. Finally, for each event, we define \(\mathcal{M}=[[\mathcal{Y}^{V_{n}}]^{T},[\mathcal{Y}^{V_{2}}]^{T},[\mathcal{Y}^{ F}]^{T}]^{T}\in\mathbb{R}^{|\mathcal{C}|m\times N}\) by aggregating all the phasor measurements from \(m\) PMUs, \(3\) channels, and for \(N\) samples. Within this setting, we develop a publicly available Python code which leverages PSS@E software Python Application Program Interface (API) to generate synthetic eventful PMU data. To ensure realistic and diverse dataset, we consider the following two steps: Firstly, we linearly adjust all loads within a range of 95% to 105% of their normal loading conditions. Secondly, we add zero-mean random fluctuations, ranging from \(\pm 2\%\) of the adjusted loads, to simulate unpredictable variations observed in real-world power systems. 1 To generate eventful data, for each system component and loading condition considered, we employ the following systematic approach: (i) We begin by applying a new initial loading condition to each load in the system; a power flow analysis for this setting then gives us the initial state conditions for the next step. (ii) We use this initial condition to initiate a \(t_{f}\)-second flat run dynamic simulation. (iii) At the \(t_{f}\) second, we introduce a disturbance (i.e., LL, GL, and LT) to a selected component. For BF events, we clear the disturbance after \(t_{\text{clr}}\) seconds. 
(iv) Finally, we model the event simulation for additional \(t_{s}\) seconds which then allows us create the data matrix \(\mathcal{M}\), representing the PMU measurements associated with the simulated event. We repeat this procedure to generate a desired number of events for each event type. Footnote 1: The load change intervals specified in this paper can be adjusted depending on the stability of the system under study, ensuring that the system can return to an acceptable state of equilibrium following a disturbance. ### _Generating Event Features Using Modal Analysis_ The first step in identifying a system event is to extract a set of delineating features that are likely to contain information regarding the event class. Using the fact that temporal effects in a power system are driven by the interacting dynamics of system components, we use mode decomposition to extract features. More specifically, we assume that each PMU data stream after an event consists of a superposition of a small number of dominant dynamic modes. The resulting features then include frequency and damping ratio of these modes, as well as the residual coefficients indicating the quantity of each mode present. We briefly summarize the mathematical model and refer readers to our recent work [30] for additional details. We assume that \(y_{i}^{e}(n)\) after an event consists of a superposition of \(p\) common damped sinusoidal modes as \[y_{i}^{e}(n)=\sum_{k=1}^{p}R_{k,l}^{e}\times(Z_{k}^{e})^{n}+e_{i}^{e}(n),\quad i \in\{1,\cdots,m\},\quad c\in\mathcal{C} \tag{1}\] where for any given channel \(c\in\mathcal{C}\), \(e_{i}^{e}(n)\) represents the noise in the \(i^{\text{th}}\) PMU measurement and \(Z_{k}^{e}\) is the \(k^{\text{th}}\) mode associated with the event. We represent each mode as \(Z_{k}^{e}=\exp(\lambda_{k}^{e}T_{s})\) where \(\lambda_{k}^{e}=\sigma_{k}^{e}\pm j\sigma_{k}^{e}\) and \(\sigma_{k}^{e}\) and \(\omega_{k}^{e}\) are the damping factor and angular frequency of the \(k^{\text{th}}\) mode, respectively. The residue \(R_{k,l}^{e}\) of the \(k^{\text{th}}\) mode for the \(i^{\text{th}}\) PMU is defined by its magnitude \(|R_{k,l}^{e}|\) and angle \(\theta_{k,l}^{e}\). For any given channel \(c\), typically a small subset of the PMUs (\(m^{\prime}<m\)) capture the dynamic response of the system after an event. Thus, we only keep the residues of a set of \(m^{\prime}\) PMUs with the largest magnitudes. Note that the \(m^{\prime}\) PMUs are not necessarily the same PMUs for different events (see, [30] for further details). Using the above procedure, for each channel \(c\), we define a row vector of features, \(\mathcal{F}^{c}\), of length \(2p(m^{\prime}+1)\) as: \[\mathcal{F}^{c}=\left[\{\omega_{k}^{e}\}_{k=1}^{p},\{\sigma_{k}^{e}\}_{k=1}^{ p},\{|R_{k,l}^{e}|\}_{k=1}^{p},\{\theta_{k,l}^{e}\}_{k=1}^{p}\right]_{i\in \{1,\cdots,m^{\prime}\}} \tag{2}\] which consists of \(p\) angular frequencies, \(p\) damping factors and the corresponding magnitude and angle of the residues for each of the \(m^{\prime}\) PMUs (with the largest residue magnitudes) and the \(p\) modes. ### _Generating the overall dataset_ Let \(n_{D}\) be the total number of events generated over all event classes. Following modal analysis on the PMU measurements as described above, we can represent the \(i^{\text{th}}\) event, \(i\in\mathbb{\mathbb{\mathbb{missing}}}_{D}=\{1,...,n_{D}\}\), as a \(d=2p|\mathcal{C}|(m^{\prime}+1)\)-length vector \(x_{l}^{T}=[\mathcal{F}^{V_{n}},\mathcal{F}^{V_{a}},\mathcal{F}^{F}]\). 
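The mode-extraction procedure itself is taken from [30]; as a stand-in, the following minimal Prony-style least-squares sketch fits \(p\) modes to a single channel of a single PMU and assembles the corresponding block of the feature vector in (2). The reporting rate, mode parameters, and noise level are illustrative assumptions, not values from the paper.

```python
import numpy as np

def prony_modes(y, p, Ts):
    """Fit y[n] ~ sum_k R_k Z_k^n (eq. (1)) with a least-squares Prony step."""
    N = len(y)
    # 1) linear prediction: y[n] = a_1 y[n-1] + ... + a_p y[n-p]
    A = np.column_stack([y[p - i - 1:N - i - 1] for i in range(p)])
    a = np.linalg.lstsq(A, y[p:], rcond=None)[0]
    # 2) discrete modes Z_k are the roots of z^p - a_1 z^{p-1} - ... - a_p
    Z = np.roots(np.concatenate(([1.0], -a)))
    lam = np.log(Z) / Ts                                  # continuous poles sigma_k + j omega_k
    # 3) residues by least squares on the Vandermonde system V[n, k] = Z_k^n
    V = Z[None, :] ** np.arange(N)[:, None]
    R = np.linalg.lstsq(V, y, rcond=None)[0]
    return lam.imag, lam.real, np.abs(R), np.angle(R)     # omega, sigma, |R|, theta

# illustrative signal: two damped sinusoids (p = 4 conjugate modes) plus noise
rng = np.random.default_rng(0)
Ts, N = 1 / 30, 150                                       # assumed 30 samples/s PMU stream
n = np.arange(N)
y = (0.8 * np.exp(-0.3 * n * Ts) * np.cos(2 * np.pi * 1.2 * n * Ts)
     + 0.4 * np.exp(-0.8 * n * Ts) * np.cos(2 * np.pi * 0.5 * n * Ts + 0.7)
     + 0.01 * rng.standard_normal(N))
omega, sigma, Rmag, Rang = prony_modes(y, p=4, Ts=Ts)
feature_block = np.concatenate([omega, sigma, Rmag, Rang])  # one channel, one PMU, as in (2)
print(np.sort(np.abs(omega)) / (2 * np.pi))                 # recovered mode frequencies (Hz)
```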
Each event is associated with a positive integer label \(y_{i}\in\{1,\cdots,|\mathcal{E}|\}\) where \(|\mathcal{E}|\) is the total number of event classes. Collating the events and labels from all event classes, we obtain a large data matrix \(\mathbf{D}=\{\mathbf{X}_{D},\mathbf{Y}_{D}\}\) where \(\mathbf{X}_{D}=[x_{1},...,x_{n_{D}}]^{T}\in\mathbb{R}^{n_{D}\times d}\) and \(\mathbf{Y}_{D}=[y_{1},...,y_{n_{D}}]^{T}\in\mathbb{R}^{n_{D}}\). Finally, to highlight the possible choices for labeled and unlabeled events from \(\mathbf{D}\), we henceforth write \(\mathbf{D}=\{(x_{i},y_{i})\}_{i\in\mathbb{\mathbb{\mathbb{missing}}}_{D}}\). ## III Proposed Framework to Investigate the Impact of Unlabeled Data The efficacy of numerous semi-supervised algorithms is significantly impacted by the initial distribution of labeled and unlabeled samples. Consequently, a thorough investigation of the robustness of diverse semi-supervised learning techniques in the face of various initial labeled and unlabeled sample distributions becomes imperative. Furthermore, the effectiveness of semi-supervised learning is not universally guaranteed to enhance supervised models; its success relies on specific foundational assumptions. Among these assumptions are the smoothness and cluster principles [31], which posit that densely populated regions tend to have similar labels, and that samples within the same cluster often belong to the same class. To investigate the impact of incorporating unlabeled samples on event identification performance, and to ensure a fair comparison among various inductive (i.e., self-training) and transductive semi-supervised approaches (i.e., TSVM, LS), we utilize the k-fold cross-validation technique. First, we shuffle \(n_{\text{D}}\) samples in \(\mathbf{D}\) and partition the data into \(n_{K}\) equally sized folds. We use \(n_{K}-1\) folds as a training set, denoted as \(\mathbf{D}_{T}^{(k)}=\{(x_{i},y_{i})\}_{i\in\mathbb{\mathbb{\mathbb{missing}}}_{T}^{(k)}}\) with \(n_{T}=[(n_{K}-1)n_{D}/n_{K}]\) samples, and reserve the remaining fold as a validation set, denoted as \(\mathbf{D}_{V}^{(k)}=\{(x_{i},y_{i})\}_{i\in\mathbb{\mathbb{missing}}_{T}^{(k)}}\) with \(n_{V}=n_{D}-n_{T}\) samples, and \(k=1,...,n_{K}\). Here, \(\mathcal{I}_{T}^{(k)}\), and \(\mathcal{I}_{V}^{(k)}\) represents a subset of samples in the training set, and the validation set of the \(k^{\text{th}}\) fold, respectively, and \(\mathcal{I}_{T}^{(k)}\cup\mathbf{I}_{V}^{(k)}=\mathbb{\mathbb{missing}}_{D}\). We repeat this process \(K\) times, with each fold serving as the validation set once. To further investigate how the distribution of labeled and unlabeled samples affects the performance of various semi-supervised algorithms, we shuffle the samples in the training set for \(n_{Q}\) times and split it into a subset of \(n_{L}\) labeled samples, denoted as \(\mathbf{D}_{L}^{(k,q)}=\{(x_{i},y_{i})\}_{i\in I_{L}^{(k,q)}}\) and a subset of \(n_{U}\) unlabeled samples by ignoring their ground truth labels, denoted as \(\mathbf{D}_{U}^{(k,q)}=\{(x_{i},\cdot)\}_{i\in I_{U}^{(k,q)}}\) where \(\mathcal{I}_{L}^{(k,q)}\cup\mathcal{I}_{U}^{(k,q)}=\mathcal{I}_{T}^{(k)}\), and \(q=1,\cdots,n_{Q}\). To ensure the inclusion of samples from every class within the labeled subset, we verify the condition \(B_{\min}\leq\frac{n_{L}^{c}}{n_{L}}\leq B_{\max}\) where \(n_{L}^{c}\) is the number of samples corresponding to class \(c\), and \(B_{\min},B_{\max}\) are the specified balance range. 
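A minimal sketch of this labeled/unlabeled split with the balance check is shown below; the fold size, \(n_L\), and balance bounds are illustrative placeholders.

```python
import numpy as np

def balanced_label_split(Y_T, n_L, B_min, B_max, rng, max_tries=1000):
    """Split one training fold into n_L labeled samples and the rest unlabeled,
    resampling until every class fraction in the labeled subset lies in [B_min, B_max]."""
    classes = np.unique(Y_T)
    for _ in range(max_tries):
        perm = rng.permutation(len(Y_T))
        idx_L, idx_U = perm[:n_L], perm[n_L:]
        frac = np.array([(Y_T[idx_L] == c).mean() for c in classes])
        if frac.min() >= B_min and frac.max() <= B_max:
            return idx_L, idx_U
    raise RuntimeError("no split satisfied the balance constraint")

# toy usage: 4 event classes, 40 labeled samples per split
rng = np.random.default_rng(0)
Y_T = rng.integers(1, 5, size=400)      # stand-in labels for one training fold
idx_L, idx_U = balanced_label_split(Y_T, n_L=40, B_min=0.15, B_max=0.35, rng=rng)
print(len(idx_L), len(idx_U))
```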
To illustrate the impact of increasing the number of unlabeled samples, we propose the following procedure. Given the number of samples that we want to add at each step, denoted as \(\Delta_{U}\), we randomly select \(n_{U}^{(s)}=s\Delta_{U}\) from the pool of \(n_{U}\) samples where \(s=0,\cdots,n_{S}\), and \(n_{S}=[n_{U}/\Delta_{U}]+1\) represents the number of steps. To further investigate the impact of the initial distribution of the labeled samples along with the unlabeled samples, the random selection of the \(n_{U}^{(s)}\) samples at each step \(1\leq s\leq n_{S}-1\), is performed \(n_{R}\) times. Concatenating the labeled training samples, \(\mathbf{D}_{L}^{(k,q)}\), in the \(k\)-th fold and \(q\)-th split, with a subset of \(n_{U}^{(s)}\) unlabeled samples in the \(s\)-th step and \(r\)-th random selection (\(r\leq n_{R}\)), denoted as \(\mathbf{D}_{U}^{(k,q,s,r)}=\{(x_{i},\cdot)\}_{i\in I_{U}^{(k,q,s,r)}}\), where \(\mathbf{I}_{U}^{(k,q,s,r)}\subseteq\mathbf{I}_{U}^{(k,q)}\), we obtain a training dataset with mixed labeled and unlabeled samples, denoted as \(\mathbf{D}_{M}^{(k,q,s,r)}=\{(x_{i},y_{i})\}_{i\in I_{L}^{(k,q)}}\cup\{(x_{i}, \cdot)\}_{i\in I_{U}^{(k,q,s,r)}}\). To account for the semi-supervised learning assumptions, we sort the \(n_{U}^{(s)}\) unlabeled samples in the \(T_{U}^{(k,q,s,r)}\) based on their proximity to the nearest labeled sample. To improve clarity, for the given \(k\), \(q\), and \(r\), we will modify the superscripts of the training (labeled and unlabeled) and validation samples throughout the remainder of this paper, i.e., \(\mathbf{D}_{L}\), \(\mathbf{D}_{U}^{(s)}\), \(\mathbf{D}_{M}^{(s)}\), and \(\mathbf{D}_{V}\) represent the subsets of \(n_{L}\) labeled, \(n_{U}^{(s)}\) unlabeled, \(n_{M}^{(s)}=n_{L}+n_{U}^{(s)}\) mixed, and \(n_{V}\) validation samples, respectively. A visual representation of the outlined approach is depicted in Fig. 1. We can alternatively represent the labeled and unlabeled training samples in matrix format as described below. We define the matrix of event features with labeled samples as \(\mathbf{X}_{L}=[\ldots,x_{i},...]^{T}\) and the corresponding matrix of labels as \(\mathbf{Y}_{L}=[\ldots,y_{i},...]^{T}\) where \(i\in I_{L}^{(k,q)}\). Similarly, for the subset of unlabeled samples, we define \(\mathbf{X}_{U}=[\ldots,x_{i},...]^{T}\), \(i\in I_{U}^{(k,q,s,r)}\). For the sake of notation coherency as well as implementation considerations (e.g., learning the classification models), we assign value \(-1\) to the unlabeled samples, i.e., \(\mathbf{Y}_{U}=[-1,...,-1]^{T}\in\mathbb{R}^{n_{U}^{(0)}}\). Hence, the mixed labeled and unlabeled training set can be expressed as \[\mathbf{D}_{M}=\{\mathbf{X}_{M},\mathbf{Y}_{M}\} \tag{3}\] where \[\mathbf{X}_{M} =[\mathbf{X}_{L}{}^{T},\mathbf{X}_{U}{}^{T}]^{T}, \tag{4}\] \[\mathbf{Y}_{M} =[\mathbf{Y}_{L}{}^{T},\mathbf{Y}_{U}{}^{T}]^{T}.\] Similarly, the validation \(\mathbf{D}_{V}\) in the \(k^{\text{th}}\) fold can be represented in the matrix format as \(\mathbf{D}_{V}=\{\mathbf{X}_{V},\mathbf{Y}_{V}\}\) where \(\mathbf{X}_{V}=[\ldots,x_{i},...]^{T}\) and \(\mathbf{Y}_{V}=[\ldots,y_{i},...]^{T}\), and \(i\in I_{V}^{(k)}\). 
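The construction of \(\mathbf{D}_{M}^{(s)}\) described above can be sketched as follows; unlabeled rows are tagged with \(-1\) as in (4) and ordered by their distance to the nearest labeled sample. The pool sizes, \(\Delta_U\), and feature dimension are arbitrary stand-ins.

```python
import numpy as np

def build_mixed_set(X_L, Y_L, X_pool, s, delta_U, rng):
    """Form X_M, Y_M for step s: pick n_U^(s) = s * delta_U unlabeled samples at random,
    order them by distance to the closest labeled sample, and mark them with label -1."""
    n_Us = min(s * delta_U, len(X_pool))
    pick = rng.choice(len(X_pool), size=n_Us, replace=False)
    X_U = X_pool[pick]
    # distance of every picked unlabeled sample to its nearest labeled sample
    d = np.linalg.norm(X_U[:, None, :] - X_L[None, :, :], axis=2).min(axis=1)
    X_U = X_U[np.argsort(d)]                           # closest-to-labeled first
    X_M = np.vstack([X_L, X_U])
    Y_M = np.concatenate([Y_L, -np.ones(n_Us, dtype=int)])
    return X_M, Y_M

# toy usage, continuing the split of the previous sketch (all sizes are illustrative)
rng = np.random.default_rng(1)
X_L, Y_L = rng.standard_normal((40, 10)), rng.integers(1, 5, size=40)
X_pool = rng.standard_normal((360, 10))                # unlabeled pool for this fold/split
X_M, Y_M = build_mixed_set(X_L, Y_L, X_pool, s=3, delta_U=50, rng=rng)
print(X_M.shape, (Y_M == -1).sum())
```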
## IV Semi-supervised Event Identification: Model Learning and Validation

Our procedure to test semi-supervised methods consists of three steps: (i) pseudo-labeling the unlabeled samples in the training set with mixed labeled and unlabeled samples, \(\mathbf{D}_{M}^{(s)}\), (ii) training a classifier using the combined labeled and pseudo-labeled samples, and (iii) evaluating the classifier's performance on previously unseen data in the validation set, \(\mathbf{D}_{V}\). The overview of the proposed approach is shown in Fig. 1 (Fig. 1: Overview of the proposed semi-supervised pipeline).

Given a semi-supervised model \(\mathcal{F}_{1}\) and a classifier \(\mathcal{F}_{2}\), we start with the labeled samples within the \(k^{\text{th}}\) fold and the \(q^{\text{th}}\) split of the training set. Using these labeled samples, we perform a grid search [32] to obtain hyper-parameters for the models \(\mathcal{F}_{1}\) and \(\mathcal{F}_{2}\), denoted as \(\theta_{1}^{*}\) and \(\theta_{2}^{*}\). (Note that these hyper-parameters will differ based on \(k\) and \(q\).) Subsequently, we use the matrix of event features and the corresponding matrix of labels in \(\mathbf{D}_{M}^{(s)}\) to assign pseudo labels to the unlabeled samples using \(\mathcal{F}_{1}\). Utilizing the obtained labeled and pseudo-labeled samples, \(\widehat{\mathbf{D}}_{M}^{(s)}\), we then use model \(\mathcal{F}_{2}\) to assign labels to the events in the validation dataset \(\mathbf{D}_{V}\). In the subsequent subsections, we describe which models we use as \(\mathcal{F}_{1}\) and \(\mathcal{F}_{2}\) in this procedure.

### _Semi-Supervised Models for Pseudo Labeling (\(\mathcal{F}_{1}\))_

#### IV-A1 Self-training

Self-training, which dates back to the 1990s [33], has proven to be effective in leveraging unlabeled data to improve supervised classifiers [34, 35, 36, 37, 38]. Self-training works by assigning pseudo labels to unlabeled samples based on the model's predictions and then training the model iteratively with these pseudo-labeled samples. More specifically, for any given base classifier, we learn a model \(F_{1}\in\{\text{SVMR, SVML, GB, DT, kNN}\}\) from the labeled samples in \(\mathbf{D}_{M}^{(s)}\). Then, using the learned model, we predict the labels for each of the \(n_{U}^{(s)}\) unlabeled samples to obtain the augmented labeled and pseudo-labeled set, denoted as \(\widehat{\mathbf{D}}_{M}^{(s)}\). Algorithm 1 outlines the steps involved in this procedure. Note that the parameter \(\delta_{U}\) in this algorithm specifies the number of unlabeled samples (among the \(n_{U}^{(s)}\) samples) that are assigned pseudo-labels in each iteration.
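For readers who prefer code, the batch self-training loop that Algorithm 1 formalizes below can be sketched in Python as follows; the choice of a decision-tree base classifier and the helper name are illustrative assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def self_train(X_L, Y_L, X_U, delta_U, base=DecisionTreeClassifier):
    """Iteratively pseudo-label batches of delta_U unlabeled samples and
    absorb them into the labeled pool (cf. Algorithm 1)."""
    X_lab, Y_lab = X_L.copy(), Y_L.copy()
    for start in range(0, len(X_U), delta_U):
        batch = X_U[start:start + delta_U]
        model = base().fit(X_lab, Y_lab)      # learn on the current pool
        pseudo = model.predict(batch)         # pseudo-label the batch
        X_lab = np.vstack([X_lab, batch])     # augment the pool
        Y_lab = np.concatenate([Y_lab, pseudo])
    return X_lab, Y_lab
```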
``` 1:Input:\(\mathbf{D}_{M}^{(s)}\) 2:Output:\(\widehat{\mathbf{D}}_{M}^{(s)}\) 3:Initialize:\(\{\text{f : }t\}=[1:\delta_{U}]\)\(\triangleright\) from sample f to sample t 4:\(\hat{\mathbf{X}}_{L}\leftarrow\mathbf{X}_{L},\hat{\mathbf{Y}}_{L}\leftarrow \mathbf{Y}_{L},\hat{\mathbf{X}}_{U}\leftarrow\mathbf{X}_{U}[\text{f : t}]\) 5:while\(\text{t }\leq n_{U}^{(s)}\)do 6:\(F_{1}:\hat{\mathbf{Y}}_{L}\leftarrow\hat{\mathbf{X}}_{L}\)\(\triangleright\) Learning the model 7:\(\hat{\mathbf{Y}}_{U}=F_{1}(\hat{\mathbf{X}}_{U})\)\(\triangleright\) Pseudo labeling 8:\(\hat{\mathbf{X}}_{L}\leftarrow[\hat{\mathbf{X}}_{L}^{T},\hat{\mathbf{X}}_{U}^{ T}]^{T}\), \(\hat{\mathbf{Y}}_{L}\leftarrow[\hat{\mathbf{Y}}_{L}^{T},\hat{\mathbf{Y}}_{U}^{T}]^{T}\)\(\triangleright\) Augmentation 9:\(f\gets f+\delta_{U},\quad t\gets t+\delta_{U}\) 10:if\(\text{t }>n_{U}^{(s)}\): 11:\(\text{t }=n_{U}^{(s)}\) 12:\(\hat{\mathbf{X}}_{U}\leftarrow\mathbf{X}_{U}[\text{f : t}]\) 13:endwhile 14:\(\hat{\mathbf{Y}}_{M}\leftarrow\hat{\mathbf{Y}}_{L}\) 15:Return:\(\widehat{\mathbf{D}}_{M}^{(s)}=\{\mathbf{X}_{M},\hat{\mathbf{Y}}_{M}\}\) ``` **Algorithm 1** Self-Training (for a given \(k,q,s\), and \(r\)).

#### IV-A2 Transductive Support Vector Machine (TSVM)

The TSVM approach is a modification of the SVM formulation that addresses the challenge of limited labeled data in classification tasks [39, 31, 40]. The TSVM optimization problem is given by \[\min_{\mathbf{w},b,\boldsymbol{\eta},\boldsymbol{\zeta},\mathbf{z}}\quad C\ \left[\sum_{i\in I_{L}}\eta_{i}+\sum_{j\in I_{U}}\min(\zeta_{j},z_{j}) \right]+\|\mathbf{w}\|\] (5a) subject to: \[y_{i}(\mathbf{w}^{T}x_{i}-b)+\eta_{i}\geq 1,\quad\eta_{i}\geq 0, \quad i\in I_{L} \tag{5b}\] \[\mathbf{w}^{T}x_{j}-b+\zeta_{j}\geq 1,\quad\zeta_{j}\geq 0, \quad j\in I_{U}\] (5c) \[-(\mathbf{w}^{T}x_{j}-b)+z_{j}\geq 1,\quad z_{j}\geq 0, \quad j\in I_{U} \tag{5d}\] where \(\mathbf{w}\in\mathbb{R}^{d}\) and \(b\in\mathbb{R}\) represent the direction of the decision boundary and the bias (or intercept) term, respectively. The formulation introduces two constraints (i.e., (5c) and (5d)) for each unlabeled sample in the training dataset, calculating the misclassification error as if the sample belonged to one class or the other. The objective function aims to find \(\mathbf{w}\) and \(b\) that, while maximizing the margin and reducing the misclassification error of labeled samples (i.e., \(\boldsymbol{\eta}\)), minimize the minimum of these misclassification errors (i.e., \(\boldsymbol{\zeta}\) and \(\mathbf{z}\)). This enables the TSVM to utilize both labeled and unlabeled samples for constructing a precise classification model. Subsequently, it assigns pseudo labels to the unlabeled samples. For brevity, we refer readers to [40] for more comprehensive details.

#### IV-A3 Label Spreading (LS)

In the realm of semi-supervised learning, label spreading (LS) falls within the category of graph-based semi-supervised (GSSL) models [41]. It involves constructing a graph and inferring labels for unlabeled samples, where nodes represent samples and weighted edges reflect similarities. Consider a graph \(G_{M}=(\mathcal{V}_{M},\mathcal{W}_{M})\), which is constructed over the mixed labeled and unlabeled training set. Each sample \(x_{i},\forall i\in I_{L}\cup I_{U}\), in \(\mathbf{X}_{M}\) can be represented as a graph node \(v_{i}\), i.e., \(v_{i}\in\mathcal{V}_{M}\leftarrow x_{i}\).
Furthermore, the edge weight matrix is defined as \(\mathcal{W}_{M}\in\mathbb{R}^{n_{M}^{(s)}\times n_{M}^{(s)}}\). Considering the squared Euclidean distance \(\mathfrak{D}_{ij}=||x_{i}-x_{j}||^{2}\), the entry in the \(i^{\text{th}}\) row and \(j^{\text{th}}\) column of \(\mathcal{W}_{M}\), denoted as \(w_{ij}\), is obtained as \(w_{ij}=\exp(-\mathfrak{D}_{ij}/2\sigma^{2})\) if \(i\neq j\), and \(w_{ii}=0\). As a result, the closer two samples are, the larger their edge weight. The intuition is that similar samples (i.e., samples at closer distance) have similar labels, and labels propagate from the labeled samples to unlabeled ones through weighted edges, where the weights carry the notion of similarity. A pseudo code for the LS algorithm based on [42] is shown in Algorithm 2. Note that, in updating the labels (line 7 in Algorithm 2), samples receive information from their neighbors (first term) while preserving their initial information (second term). The parameter \(\alpha\) determines the weighting between neighbor-derived information and the sample's original label information.

``` 1:Input:\(G=(\mathcal{V},\mathcal{W})\leftarrow\mathbf{D}_{M}^{(s)}=\{\mathbf{X}_{M},\mathbf{Y}_{M}\}\) 2:Output:\(\widehat{\mathbf{D}}_{M}^{(s)}\) 3:Compute:\(\mathcal{D}_{ii}=\sum_{j}w_{ij},\quad\forall i\in I_{L}\cup I_{U}\) 4:Compute:\(\mathbf{Z}=\mathcal{D}^{-1/2}\mathcal{W}_{M}\mathcal{D}^{-1/2}\) 5:Initialize:\(\begin{bmatrix}\mathbf{Y}_{L}|_{t=0}\\ \mathbf{Y}_{U}|_{t=0}\end{bmatrix}\leftarrow\begin{bmatrix}\mathbf{Y}_{L}\\ \mathbf{Y}_{U}\end{bmatrix}\) 6:while\(\begin{bmatrix}\mathbf{Y}_{L}|_{t}\\ \mathbf{Y}_{U}|_{t}\end{bmatrix}\) has not converged do\(\triangleright\) Based on some threshold 7:\(\begin{bmatrix}\mathbf{Y}_{L}|_{t+1}\\ \mathbf{Y}_{U}|_{t+1}\end{bmatrix}\leftarrow\alpha\mathbf{Z}\begin{bmatrix}\mathbf{Y}_{L}|_{t}\\ \mathbf{Y}_{U}|_{t}\end{bmatrix}+(1-\alpha)\begin{bmatrix}\mathbf{Y}_{L}\\ \mathbf{Y}_{U}\end{bmatrix}\) 8:\(t\gets t+1\) 9:endwhile 10:\(\widehat{\mathbf{Y}}_{M}\leftarrow\begin{bmatrix}\mathbf{Y}_{L}|_{t}\\ \mathbf{Y}_{U}|_{t}\end{bmatrix}\) 11:Return:\(\widehat{\mathbf{D}}_{M}^{(s)}=\{\mathbf{X}_{M},\widehat{\mathbf{Y}}_{M}\}\) ``` **Algorithm 2** Label spreading (for a given \(k,q,s\), and \(r\)).

### _Pseudo Labeling Evaluation_

Using any of the abovementioned semi-supervised models, we obtain the augmented labeled and pseudo-labeled samples, i.e., \(\widehat{\mathbf{D}}_{M}^{(s)}\), and learn a new classifier \(F_{2}\in\{\text{SVMR, SVML, GB, DT, kNN}\}\) to assess the model's performance on the previously unseen validation fold, \(\mathbf{D}_{V}\). Within this setting, to ensure a fair comparison among various inductive and transductive semi-supervised approaches, we consider two distinct approaches:

* **Approach 1 (Inductive semi-supervised setting):** \(\mathcal{F}_{1}\in\{\text{SVMR, SVML, GB, DT, kNN}\}\) represents the base classifier utilized in self-training for pseudo labeling, and the same type of classifier is used as \(\mathcal{F}_{2}\).
* **Approach 2 (Transductive semi-supervised setting):** \(\mathcal{F}_{1}\in\{\text{TSVM, LS}\}\) represents a semi-supervised method used for pseudo labeling, and \(\mathcal{F}_{2}\in\{\text{SVMR, SVML, GB, DT, kNN}\}\).

## V Simulation Results

In order to investigate the performance of various semi-supervised learning algorithms, we first generate eventful synthetic PMU data, following the procedure described in Section II.
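As a concrete point of reference for the experiments that follow, the transductive pipeline of Approach 2 (pseudo labeling with LS, then a separate classifier \(\mathcal{F}_{2}\)) can be sketched with off-the-shelf tools. In this sketch, scikit-learn's `LabelSpreading` stands in for Algorithm 2 and a kNN classifier stands in for \(\mathcal{F}_{2}\); both choices and the function name are illustrative assumptions.

```python
import numpy as np
from sklearn.semi_supervised import LabelSpreading
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import roc_auc_score

def approach2_auc(X_M, Y_M, X_V, Y_V):
    """Pseudo-label with label spreading (F1), then train kNN (F2) on the
    augmented set and score it on the validation fold. Y_M uses -1 for
    unlabeled samples, as in (3)-(4)."""
    ls = LabelSpreading(kernel="rbf", alpha=0.2).fit(X_M, Y_M)
    Y_hat = ls.transduction_                 # labels plus assigned pseudo labels
    clf = KNeighborsClassifier().fit(X_M, Y_hat)
    scores = clf.predict_proba(X_V)
    return roc_auc_score(Y_V, scores, multi_class="ovr", labels=clf.classes_)
```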
Our simulations were carried out on the South Carolina 500-bus system [16, 17]. We allow the system to operate normally for \(t_{f}=1\) second and then immediately apply a disturbance. We then run the simulation for an additional \(t_{s}=10\) seconds, and record the resulting eventful measurements at the PMU sampling rate of 30 samples/sec. We assume that 95 buses (chosen randomly) of the 500-bus system are equipped with PMU devices and extract features for each such bus from the \(V_{m}\), \(V_{a}\), and \(F\) channels. We thus collect \(N=300\) samples after the start of an event for each channel. We extract features using the modal analysis methodology outlined in our recent prior work [30]. In total, we simulated 1827 events, including 500 LL, 500 GL, 500 LT, and 327 BF events. Figure 2 illustrates the measurements (i.e., \(V_{m}\), \(V_{a}\), and \(F\)) recorded from a single PMU after applying LL, GL, LT, and BF events.

To quantitatively evaluate and compare the performance of different semi-supervised learning algorithms across various scenarios, we employ the area under the curve (AUC) of the receiver operating characteristic (ROC). This metric characterizes the classification accuracy across different discrimination thresholds [43]. The ROC-AUC value, which ranges from 0 to 1, provides an estimate of the classifier's ability to classify events; a value closer to 1 indicates better classification performance. For a specified set of parameters \(k\), \(q\), \(s\), and \(r\), we evaluate the performance of a given classifier \(\mathcal{F}_{2}\) by assessing its ROC-AUC score in predicting event classes within the hold-out fold. This evaluation is based on the model learned from the augmented labeled and pseudo-labeled samples, which are obtained using the pseudo-labeling model \(\mathcal{F}_{1}\).

Given that the aim of this study is to provide insight into the robustness of various semi-supervised models, we compare them by evaluating the average, \(5^{\text{th}}\) percentile, and \(95^{\text{th}}\) percentile of the AUC scores, and we assess how incorporating the assigned pseudo labels affects the accuracy of a generalizable model in predicting the labels of validation samples. We use the \(5^{\text{th}}\) percentile of the AUC scores as our primary target performance metric for robustness, as it provides a (nearly) worst-case metric across different selections of the initial labeled and unlabeled samples. That is, if a method yields a high \(5^{\text{th}}\) percentile performance, then it is likely to lead to accurate results even if the initial set of labeled and unlabeled samples is unfavorable.

As discussed in Section IV-B, we investigate and compare the performance of various semi-supervised algorithms, including self-training with different base classifiers (SVML, SVMR, GB, DT, and kNN), TSVM, and LS, to assess their effectiveness. In our evaluation process, we take into account \(n_{K}=10\) folds and \(n_{Q}=30\) random splits of the training samples into labeled and unlabeled subsets. Other simulation parameters are provided in Table I. As depicted in Fig. 3, the comparative performance of the classifiers (namely, SVML, SVMR, kNN, DT, and GB) is presented across the distinct semi-supervised models (self-training, TSVM, and LS).
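The robustness statistics compared in Fig. 3 (average, \(5^{\text{th}}\), and \(95^{\text{th}}\) percentiles of the AUC scores over the repeated splits) can be aggregated as in this small sketch; the helper name and input format are assumptions, with the per-repetition AUC scores assumed to have been computed already.

```python
import numpy as np

def robustness_summary(auc_runs):
    """auc_runs: ROC-AUC scores over the n_Q x n_R repetitions for one step s."""
    a = np.asarray(auc_runs)
    return {"mean": a.mean(),
            "p05": np.percentile(a, 5),
            "p95": np.percentile(a, 95)}
```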
\begin{table} \begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline **Parameter** & **Description** & **Value** \\ \hline \(n_{D}\) & Total No. of samples & 1827 \\ \hline \(n_{K}\) & No. of folds & 10 \\ \hline \(n_{T}\) & No. of training samples & 1644 \\ \hline \(n_{V}\) & No. of validation samples & 183 \\ \hline \(n_{Q}\) & No. of random splits of training samples into labeled and unlabeled samples & 20 \\ \hline \((B_{\min},B_{\max})\) & Class balance range in the labeled samples & (0.2, 0.8) \\ \hline \(n_{L}\) & No. of labeled samples & 24 \\ \hline \(n_{U}\) & No. of unlabeled samples & 1620 \\ \hline \(\Delta_{U}\) & No. of unlabeled samples added in each step & 100 \\ \hline \(n_{S}\) & Total No. of steps & 18 \\ \hline \(n_{R}\) & No. of random selections of \(n_{U}^{(s)}\) samples at each step & 20 \\ \hline \end{tabular} \end{table} TABLE I: Parameters used in the simulations for semi-supervised event identification

Fig. 2: PMU measurements

The outcomes of this analysis highlight that the integration of additional unlabeled samples together with the utilization of LS for pseudo labeling surpasses the outcomes achieved by the self-training and TSVM approaches. Moreover, the LS algorithm consistently enhances the performance of all classifiers more robustly. The following subsections provide further insight into the performance of each semi-supervised model.

### _Approach 1 - Inductive semi-supervised setting_

The simulation results for the \(5^{\text{th}}\) percentile of the AUC scores of the SVML, SVMR, kNN, DT, and GB classifiers in predicting the labels of validation samples are shown in Fig. 3a. It is clear that, using a limited number of labeled samples, the self-training method performs poorly when utilizing SVMR, SVML, and kNN base classifiers. Moreover, the utilization of GB and DT as base classifiers does not necessarily lead to an improvement in event identification accuracy. This primarily arises from the disparity between the pseudo labels and the initial subset of labeled samples. Training with biased and unreliable pseudo labels can result in the accumulation of errors. In essence, this pseudo-label bias is exacerbated for classes that exhibit poorer behavior, such as when the distribution of labeled samples does not accurately represent the overall distribution of both labeled and unlabeled samples, and is further amplified as self-training continues.

Another noteworthy observation is that self-training employing SVML or SVMR as the base classifier exhibits high sensitivity to the distribution of both labeled and unlabeled samples. Due to the constraint of having a limited number of labeled samples, these techniques struggle to generate dependable pseudo-label assignments. On the other hand, although self-training with kNN as the base classifier performs better than the SVML and SVMR cases, its performance deteriorates as we increase the number of unlabeled samples. For self-training with DT and GB base classifiers, although they exhibit more robust performance compared to the other base classifiers, increasing the number of unlabeled samples does not enhance their performance.

### _Approach 2 - Transductive semi-supervised setting_

The simulation results for the second approach, in which TSVM is employed as the semi-supervised method for pseudo-labeling, are illustrated in Fig. 3(b).
Fig. 3: Comparison between various classifiers based on the pseudo labels obtained from (a) the self-training method with various base classifiers, (b) TSVM, (c) LS. (d) Comparison between (GB, GB) and (LS, kNN).

The weak performance of TSVM could be attributed to the specific characteristics of the dataset and the method's sensitivity to the distribution of labeled and unlabeled samples. If the distribution of these samples is unbalanced or exhibits complex patterns, the TSVM might struggle to accurately capture this distribution. As a result, it could assign inaccurate pseudo labels. Furthermore, although the integration of pseudo labels acquired through the TSVM algorithm yields an overall performance advantage for SVML and SVMR compared to the same models utilizing pseudo labels from self-training with SVMR and SVML, it still exhibits substantial sensitivity. This sensitivity is particularly apparent in the \(5^{\text{th}}\) percentile AUC scores, highlighting that the accuracy of the assigned pseudo labels remains highly contingent on the initial distribution of labeled and unlabeled samples. This phenomenon is also observable in the diminishing performance of the kNN, GB, and DT classifiers, which, surprisingly, deteriorates to a level worse than their utilization as base classifiers within the self-training framework.

On the contrary, as shown in Fig. 3(c), the results demonstrate that utilizing the augmented labeled and pseudo-labeled samples obtained from LS can significantly enhance the performance of event identification compared to the self-training and TSVM approaches. Furthermore, the performance of the event identification task improves with a higher number of unlabeled samples, which is particularly significant since labeled eventful PMU data is often scarce in practice. The principal advantage of the LS method, when compared to self-training and TSVM, primarily arises from its ability to leverage information from both labeled and unlabeled samples, as well as their inherent similarities, during the assignment of pseudo labels. For some classifiers (specifically GB and DT), we find that LS improves the \(5^{\text{th}}\) percentile line with more unlabeled samples, even though the average performance stays roughly unchanged. On the other hand, for the kNN classifier (as shown in Fig. 3(d)), the average, \(5^{\text{th}}\), and \(95^{\text{th}}\) percentile lines all improve with more unlabeled samples. Indeed, LS with kNN appears to be the best overall combination.

## VI Conclusion

Given the practical scenario where a relatively small number of events are labeled in comparison to the total event count, we have introduced a semi-supervised event identification framework to explore the potential benefits of incorporating unlabeled samples in enhancing event identification performance. This framework comprises three core steps: (i) assigning pseudo-labels to unlabeled samples within the training set, which encompasses a mixture of labeled and unlabeled samples, (ii) training a classifier using the augmented set of labeled and pseudo-labeled samples, and (iii) evaluating the classifier's efficacy on the hold-out fold. This proposed pipeline is deployed to scrutinize the effectiveness of three classical semi-supervised methods: self-training, TSVM, and LS. Our simulation results suggest that, with a limited number of labeled samples, the self-training and TSVM methods perform poorly and do not necessarily improve the accuracy of event identification.
The study underscores the robust performance of the GB and DT classifiers, though adding unlabeled samples does not enhance their performance. Conversely, using the augmented labeled and pseudo-labeled samples obtained from LS consistently outperforms the self-training and TSVM approaches and can significantly improve event identification performance. The performance also improves with a higher number of unlabeled samples, which is important given the scarcity of labeled eventful PMU data.

## Acknowledgments

This work was supported by the National Science Foundation under Grants OAC-1934766, CCF-2048223, and CCF-2029044, in part by the Power System Engineering Research Center (PSERC) under Project S-87, and in part by the U.S.-Israel Energy Center managed by the Israel-U.S. Binational Industrial Research and Development (BIRD) Foundation.
Event identification is playing an increasingly important role in improving the reliability, security, and stability of the power grid. With the growing deployment of Phasor Measurement Units (PMUs) and advances in data science, data-driven event identification using machine learning classification techniques becomes feasible. However, collecting accurately labeled eventful data samples is difficult, owing to the real-time uncertainty about the event type (class) and the time- and labor-intensive nature of labeling. It is therefore natural to use semi-supervised learning techniques, which can also exploit unlabeled data. This paper proposes a novel framework for strengthening existing event identification methods with semi-supervised learning techniques, using both labeled and unlabeled data. We evaluate three classical semi-supervised learning methods: self-training, transductive support vector machines (TSVM), and label spreading (LS).
2309.08802
Geometric Projectors: Geometric Constraints based Optimization for Robot Behaviors
Generating motion for robots that interact with objects of various shapes is a complex challenge, further complicated when the robot's own geometry and multiple desired behaviors are considered. To address this issue, we introduce a new framework based on Geometric Projectors (GeoPro) for constrained optimization. This novel framework allows for the generation of task-agnostic behaviors that are compliant with geometric constraints. GeoPro streamlines the design of behaviors in both task and configuration spaces, offering diverse functionalities such as collision avoidance and goal-reaching, while maintaining high computational efficiency. We validate the efficacy of our work through simulations and Franka Emika robotic experiments, comparing its performance against state-of-the-art methodologies. This comprehensive evaluation highlights GeoPro's versatility in accommodating robots with varying dynamics and precise geometric shapes. For additional materials, please visit: https://www.xueminchi.com/publications/geopro
Xuemin Chi, Tobias Löw, Yiming Li, Zhitao Liu, Sylvain Calinon
2023-09-15T22:52:27
http://arxiv.org/abs/2309.08802v1
# Geometric Projectors: Geometric Constraints based Optimization for Robot Behaviors

###### Abstract

Generating motion for robots that interact with objects of various shapes is a complex challenge, further complicated when the robot's own geometry and multiple desired behaviors are considered. To address this issue, we introduce a new framework based on Geometric Projectors (GeoPro) for constrained optimization. This novel framework allows for the generation of task-agnostic behaviors that are compliant with geometric constraints. GeoPro streamlines the design of behaviors in both task and configuration spaces, offering diverse functionalities such as collision avoidance and goal-reaching, while maintaining high computational efficiency. We validate the efficacy of our work through simulations and Franka Emika robotic experiments, comparing its performance against state-of-the-art methodologies. This comprehensive evaluation highlights GeoPro's versatility in accommodating robots with varying dynamics and precise geometric shapes. For additional materials, please visit: [https://www.xueminchi.com/publications/geopro](https://www.xueminchi.com/publications/geopro)

## I Introduction

In this paper, we focus on the generation of composable, non-conservative, and smooth behaviors for robots, particularly addressing three categories of behaviors: collision avoidance, goal-reaching, and self-limitation. Collision avoidance is a ubiquitous requirement in robotic applications and demands precise geometric modeling of both the robots and their surrounding environments for effective, non-conservative solutions. The use of Signed Distance Functions (SDFs) is prevalent in this context but introduces challenges in the problem formulation. Traditional approaches like TrajOpt [1] have ignored non-differentiable points in configuration space, justifying this omission empirically. However, recent advancements in dual approaches [2] have successfully transformed non-differentiable SDFs into twice-differentiable constraints. While this offers a theoretical solution, the computational complexity of this method limits its applicability to high-dimensional systems or intricate environmental scenarios.

The objective of reaching behavior is commonly represented as a point target, but this can be generalized to encompass a geometrically defined region or even an object. Traditional box constraints applied to system state and control variables also inherently involve geometric shapes, albeit in configuration space. While Euclidean projection [3] serves as an efficient technique for handling constraints in configuration space, its importance is often underestimated in the broader context of robotic behaviors. Indeed, the generation of these behaviors is closely and fundamentally associated with a geometric problem [4]: it is predicated on both distance and its derivatives, which are intrinsically linked to the geometric shapes, sizes, and positions of robots and nearby objects. Projection techniques naturally align with this geometric framework, reinforcing their relevance.

Motivated by the challenges outlined, we introduce the concept of the geometric projector to capture geometric constraints and reformulate the optimization problem with the augmented Lagrangian method. The key contributions are as follows: 1. An efficient and user-friendly approach for designing geometric constraints, leveraging the geometric projectors. 2.
A unified, composable optimization framework that is applicable to a broad range of robots and behaviors. 3. Our method is validated through numerical simulations and real-world experiments, with benchmarks and analyses that highlight its effectiveness.

## II Related work

Considering the exact geometric shapes of robots and surrounding objects enables safer behavior without being overly conservative. Instead of approximating the shapes by circles or ellipses, one way to deal with polytopic shapes is to formulate a mixed-integer problem [5]. However, the complexity and inefficiency limit its application. Another popular branch is dual approaches [2, 6], which handle it at the cost of additional dual variables.

Fig. 1: Experiment setup. The task is to insert an object with a polytope geometric shape into a hole while maintaining a non-conservative approach to collision avoidance with the obstacles that form the hole.

Another branch of techniques for collision avoidance is control barrier functions [7], which have also been applied to mobile robots [8] and robot manipulators [9] to generate safety-aware behaviors. In our work, we show that this feature is easy to integrate and that we do not require smooth safety constraints. In the context of shaping behaviors and composable frameworks, geometric fabrics [10] shape behaviors by bending geometry generators [11]. Differentiable maps based on lower-dimensional task variables can be designed separately and then pulled back to configuration space together and solved in a weighted-average fashion. Other energy-based methods [12] share similar ideas. However, these methods typically do not consider horizons, while our work can be both reactive and predictive. Projection [3] has been widely used in the collection of work mentioned above, as well as in other robot applications and computer graphics [13]. In [14], M. Giftthaler _et al._ applied projection to handle equality constraints. Based on the recent advance in the augmented Lagrangian method for problems with geometric constraints [15], H. Girgin _et al._ applied it to point-robot applications in [16]. In contrast to their work, we focus on a more general framework based on the concept of geometric projectors, which considers more general projections between geometries with different shapes, sizes, positions, and velocities, capturing geometric constraints in robot applications.

## III Problem formulation

In this section, we establish the notation we will use in the GeoPro framework. We use \(x\) to denote scalars, \(\mathbf{x}\) for vectors, \(\mathbf{X}\) for matrices, and \(\mathbf{\mathcal{X}}\) for tensors. Subsequently, we introduce the concept of GeoPro and formalize it as a general nonlinear constrained optimization problem.

### _Geometric Constrained Optimization_

We consider a general constrained nonlinear optimization problem of the form \[\min_{\mathbf{x},\mathbf{u}} c(\mathbf{x},\mathbf{u})=\sum_{k=0}^{N-1}l\left(\mathbf{x}_{k},\mathbf{u}_{k}\right)\] s.t. \[\mathbf{x}_{k+1}=f(\mathbf{x}_{k},\mathbf{u}_{k}), \tag{1a}\] \[g_{i}(\mathbf{x}_{k})\in\mathcal{C}_{i},\quad\forall i=1,\ldots,N_{ p},\] (1b) \[\mathbf{x}\in\mathcal{D}_{x},\quad\mathbf{u}\in\mathcal{D}_{u} \tag{1c}\] where \(N\) is the time horizon and \(N_{p}\) is the number of constraints. The cost function \(l:\mathbb{R}^{n_{x}}\times\mathbb{R}^{n_{u}}\rightarrow\mathbb{R}\) depends on the robot tasks.
The system dynamics \(f:\mathbb{R}^{n_{x}}\times\mathbb{R}^{n_{u}}\rightarrow\mathbb{R}^{n_{x}}\) can be obtained by either the Euler method or the 4th-order Runge-Kutta method. \(g_{i}:\mathbb{R}^{n_{x}}\rightarrow\mathbb{R}^{n_{i}}\) is the \(i\)-th continuously differentiable1 function. The set \(\mathcal{C}_{i}\subseteq\mathbb{R}^{n_{i}}\) is nonempty, closed, and normally convex. The sets \(\mathcal{D}_{x}\subseteq\mathbb{R}^{n_{x}}\) and \(\mathcal{D}_{u}\subseteq\mathbb{R}^{n_{u}}\) are only assumed to be nonempty and closed. The geometric nature of these constraints is captured by the sets defined in equations (1b)-(1c). The inequality and equality constraints can be described as \(\mathcal{C}_{i}:=\mathbb{R}_{-}^{n_{i}}\times\{0\}\), which encodes \(g_{i}(\mathbf{x})\leq 0\) and \(g_{i}(\mathbf{x})=0\). The Euclidean projection \(P\) mapping a point onto a set is given by \(P_{\mathcal{C}_{i}}(\mathbf{x}):=\operatorname*{argmin}_{\mathbf{z}\in\mathcal{C}_{i}}\|\mathbf{z}-\mathbf{x}\|\). The distance between a point \(\mathbf{x}\) and a set \(\mathcal{C}_{i}\) is denoted by \(d_{\mathcal{C}_{i}}(\mathbf{x}):=||P_{\mathcal{C}_{i}}(\mathbf{x})-\mathbf{x}||\). Further, let \(\text{sd}_{\mathcal{C}_{i}}:\mathbb{R}^{n}\rightarrow\mathbb{R}\) be the signed distance function to \(\mathcal{C}_{i}\): Footnote 1: It is assumed that only first-order derivative information is required for \(g_{i}(\cdot)\) \[\text{sd}_{\mathcal{C}_{i}}(\mathbf{x}):=\begin{cases}d_{\mathcal{C}_{i}}(\mathbf{x}),&x\in\mathbb{R}^{n}\backslash\mathcal{C}_{i}\\ -d_{\mathbb{R}^{n}\backslash\mathcal{C}_{i}}\left(\mathbf{x}\right),&x\in\mathcal{C}_{i}\end{cases} \tag{2}\] Based on the aforementioned definitions, we introduce a generalized projection operator, denoted as \(\mathcal{P}\) and referred to as the Geometric Projector (GeoPro), to map \(g_{i}(\mathbf{x})\) onto \(\mathcal{C}_{i}\) and encapsulate the geometric constraints.

**Remark 1**: _In [15], the set \(\mathcal{C}_{i}\) needs to be strictly convex. We found that for some non-convex sets, if we design the projection of \(g_{i}\) onto \(\mathcal{C}_{i}\) to be unique, this condition can be relaxed to non-convex sets like the boundary of a circle._

### _Geometric Projector_

Let \(\Omega\subseteq\mathbb{R}^{n}\) denote the task space. Each geometry within this task space is defined by two elements: its state vector and its geometric shape. The state vector at time \(k\) is denoted as \(\mathbf{x}_{k}\) for robots, \(\mathbf{y}_{k}\) for obstacles, and \(\mathbf{p}_{k}\) for targets. Correspondingly, the geometric shape of each of these entities at time \(k\) occupies a specific subset of \(\Omega\), denoted by \(\mathcal{G}(\cdot)\).2 Our focus lies on three primary classes of objects: i) robots, represented by \(\mathcal{G}_{R}\); ii) obstacles, represented by \(\mathcal{G}_{O}\); iii) targets, represented by \(\mathcal{G}_{T}\). The composite geometries of \(N_{o}\) obstacles and \(N_{t}\) targets at time \(k\) can be expressed as Footnote 2: The notation \(\mathcal{G}\) is used generically to refer to both obstacles and targets unless specified otherwise. \[\mathcal{G}_{O}\left(\mathbf{y}_{k}\right):=\bigcup_{i=1}^{N_{o}}\mathcal{G}_{O}^{i}\subset\Omega,\quad\mathcal{G}_{T}\left(\mathbf{p}_{k}\right):=\bigcup_{i=1}^{N_{t}}\mathcal{G}_{T}^{i}\subset\Omega, \tag{3}\] where \(\mathcal{G}_{O}^{i}\) and \(\mathcal{G}_{T}^{i}\) represent the individual geometries of the \(i^{th}\) obstacle and target, respectively.
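To make the preceding definitions concrete, here is a minimal numerical sketch of the Euclidean projection and the signed distance for one simple set (a disk); the set choice and function names are illustrative assumptions, not part of the paper.

```python
import numpy as np

def project_disk(x, center, radius):
    """Euclidean projection P_C(x) onto the disk C = {z : ||z - center|| <= radius}."""
    v = x - center
    n = np.linalg.norm(v)
    return x if n <= radius else center + radius * v / n

def signed_distance_disk(x, center, radius):
    """sd_C(x): positive outside the disk, negative inside (cf. Eq. (2))."""
    return np.linalg.norm(x - center) - radius
```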
Since we design behaviors for robots, a robot-centered GeoPro, denoted as \(\mathcal{P}_{\mathcal{G}}^{\mathcal{G}_{R}(\mathbf{x}_{k})}\), is formally defined as \[\mathcal{P}:\mathcal{G}_{R}\left(\mathbf{x}_{k}\right)\times\mathcal{G}\to\mathcal{G}_{R}\left(\tilde{\mathbf{x}}_{k}\right), \tag{4}\] where \(\tilde{\mathbf{x}}_{k}\in\mathbb{R}^{n}\) is the projected state vector of the robot at time \(k\), and \(\mathcal{G}\) is allowed to be \(\varnothing\). Through the design of GeoPro, various robot behaviors can be modeled. Specifically, we identify three main classes of behaviors:

* Safety behaviors, which aim to prevent collisions,
* Goal-reaching behaviors, which guide the robot toward a specific destination,
* Self-limiting behaviors, which restrict the robot's actions and states based on its capabilities.

### _Robot Behaviors based on GeoPro_

Collision avoidance is a critical aspect of robotic behavior, ensuring the safety of both the robot and surrounding objects. For safety behaviors, denoted as \(\mathcal{B}_{\text{safe}}\), we formally define the following GeoPro \(\mathcal{P}_{\mathcal{G}_{O}\left(\mathbf{y}_{k}\right)}^{\mathcal{G}_{R}\left(\mathbf{x}_{k}\right)}\): \[\mathcal{P}:\mathcal{G}_{R}\left(\mathbf{x}_{k}\right)\times\mathcal{G}_{O}\left(\mathbf{y}_{k}\right)\rightarrow\mathcal{G}_{R}\left(\tilde{\mathbf{x}}_{k}\right), \tag{5}\] \[\mathcal{G}_{R}(\tilde{\mathbf{x}}_{k})\cap\mathcal{G}_{O}(\mathbf{y}_{k})=\varnothing, \tag{6}\] which ensures that the geometries of the robot and the obstacles do not intersect.

In the context of goal-reaching behaviors, denoted by \(\mathcal{B}_{\text{reach}}\), we employ the following GeoPro \(\mathcal{P}_{\mathcal{G}_{T}\left(\mathbf{p}_{k}\right)}^{\mathcal{G}_{R}\left(\mathbf{x}_{k}\right)}\): \[\mathcal{P}:\mathcal{G}_{R}\left(\mathbf{x}_{k}\right)\times\mathcal{G}_{T}\left(\mathbf{p}_{k}\right)\rightarrow\mathcal{G}_{R}\left(\tilde{\mathbf{x}}_{k}\right), \tag{7}\] \[\mathcal{G}_{R}(\tilde{\mathbf{x}}_{k})\cap\mathcal{G}_{T}(\mathbf{p}_{k})=\mathcal{G}_{\text{reach}}, \tag{8}\] where \(\mathcal{G}_{\text{reach}}\) can vary from a simple geometric point to elaborate geometries, such as residing within a manifold dictated by \(\mathcal{G}_{T}(\mathbf{p}_{k})\).

To control robot self-behaviors, denoted by \(\mathcal{B}_{\text{self}}\), we suggest using a Euclidean projection: \[P_{\mathcal{D}_{x}}(\mathbf{x}_{k}):\mathbf{x}_{k}\times\mathcal{D}_{x}\rightarrow\tilde{\mathbf{x}}_{k}\in\mathcal{D}_{x}, \tag{9}\] \[P_{\mathcal{D}_{u}}(\mathbf{u}_{k}):\mathbf{u}_{k}\times\mathcal{D}_{u}\rightarrow\tilde{\mathbf{u}}_{k}\in\mathcal{D}_{u}. \tag{10}\] Such projectors can be used to enforce constraints like speed or joint-angle limits in robotics.

In summary, we can systematize and generalize a diverse array of robot behaviors, denoted collectively as \(\mathcal{B}=\left\{\mathcal{B}_{\text{safe}},\mathcal{B}_{\text{reach}},\mathcal{B}_{\text{self}}\right\}\). These behaviors can further be customized by composing GeoPros, e.g., \(\mathcal{B}:=\mathcal{B}_{\text{safe}}\wedge\mathcal{B}_{\text{reach}}\).

## IV Method

In this section, we derive the details of GeoPro for general behaviors. Next, we introduce how to integrate GeoPro into an optimization problem via the augmented Lagrangian method.

### _GeoPro for Point Robot Geometry Behaviors_

For simplicity, we drop the time index \(k\).
Let \(\mathcal{G}_{R}(\mathbf{x})\) be a point geometry; we consider two representations of obstacles and targets: hyperplanes, denoted by \(\mathcal{H}\), with two half-spaces, denoted by \(\mathcal{H}^{+}\) and \(\mathcal{H}^{-}\), and the signed distance field, denoted by \(\mathcal{S}\). The space in \(\Omega\) occupied by the \(i\)-th obstacle can be represented by \[\mathcal{G}_{O}^{i}:=\{\mathbf{y}\in\mathbb{R}^{n}:\mathbf{A}\mathbf{y}\leq\mathbf{b}\}, \tag{11}\] where \(\mathbf{A}=\left[\mathbf{a}_{1},\ldots,\mathbf{a}_{N^{c}_{O}}\right]^{\top}\in\mathbb{R}^{N^{c}_{O}\times n}\), \(N^{c}_{O}\) is the number of edges, and \(\mathbf{b}\in\mathbb{R}^{N^{c}_{O}}\). Each pair \((\mathbf{a}_{j},b_{j})\) constructs an outward half-space \(\mathcal{H}^{+}(\mathbf{a}_{j},b_{j})\), aligned with the normal direction, and an inward half-space \(\mathcal{H}^{-}(\mathbf{a}_{j},b_{j})\), where \(\mathcal{H}^{-}(\mathbf{a}_{j},b_{j}):=\{\mathbf{y}\in\mathbb{R}^{n}:\mathbf{a}_{j}^{\top}\mathbf{y}\leq b_{j}\}\). The obstacle \(\mathcal{G}_{O}^{i}\) can also be described as \(\bigcap_{j}^{N^{c}_{O}}\mathcal{H}^{-}\left(\mathbf{a}_{j},b_{j}\right)\).

For safe behaviors \(\mathcal{B}_{\text{safe}}\), we introduce a condition \(C_{1}\): \[C_{1}:=\bigwedge_{j=1}^{N^{c}_{O}}\left(\mathbf{a}_{j}^{\top}\mathbf{x}\leq b_{j}\right), \tag{12}\] if \(C_{1}\) evaluates to _true_, this indicates a potential for collision and necessitates corrective action. In this case, we project \(\mathbf{x}\) onto the closest \(\mathcal{H}(\mathbf{a}_{j},b_{j})\) to ensure safety, as described by \[\mathcal{P}_{\mathcal{G}_{O}\left(\mathbf{y}\right)}^{\mathcal{G}_{R}\left(\mathbf{x}\right)}\rightarrow\mathcal{G}_{R}(\tilde{\mathbf{x}}):=\mathcal{G}_{R}\left(\mathbf{x}-\frac{\mathbf{a}_{j}\left(\mathbf{a}_{j}^{\top}\mathbf{x}-b_{j}\right)}{\|\mathbf{a}_{j}\|_{2}^{2}}\right), \tag{13}\] so that if \(C_{1}\) is _false_, we retain the original state \(\mathbf{x}=\tilde{\mathbf{x}}\). Since we are considering a point-like robot geometry \(\mathcal{G}_{R}\), the GeoPro operation is equivalent to \(P_{\mathcal{G}_{O}}(\mathbf{x})\) in this case.

For reaching behaviors \(\mathcal{B}_{\text{reach}}\), even though the goal is often specified as a point, we consider a more general target geometry \[\mathcal{G}_{T}^{i}:=\{\mathbf{p}\in\mathbb{R}^{n}:\mathbf{Z}\mathbf{p}\leq\mathbf{r}\}, \tag{14}\] where \(\mathbf{Z}=\left[\mathbf{z}_{1},\ldots,\mathbf{z}_{N^{c}_{T}}\right]^{\top}\in\mathbb{R}^{N^{c}_{T}\times n}\), \(N^{c}_{T}\) is the number of edges, and \(\mathbf{r}\in\mathbb{R}^{N^{c}_{T}}\). For contact and reach-inside behaviors, it is insufficient to use the hyperplanes \(\mathcal{H}(\mathbf{z}_{j},r_{j})\), as they extend infinitely. Let \(\mathcal{E}=\left[e_{1},\ldots,e_{N^{c}_{T}}\right]\) be the edge set, and \((\mathbf{e}_{j}^{a},\mathbf{e}_{j}^{b})\) be the vertices of the edge \(e_{j}\). The auxiliary scalar is \[h=\frac{\left(\mathbf{x}-\mathbf{e}_{j}^{a}\right)^{\top}\left(\mathbf{e}_{j}^{b}-\mathbf{e}_{j}^{a}\right)}{\left\|\mathbf{e}_{j}^{b}-\mathbf{e}_{j}^{a}\right\|^{2}}, \tag{15}\] which is then clipped to keep the projected point on the edge, i.e., \(h\leftarrow\max(0,\min(1,h))\). The final GeoPro is \[\mathcal{P}_{\mathcal{G}_{T}\left(\mathbf{p}\right)}^{\mathcal{G}_{R}\left(\mathbf{x}\right)}\rightarrow\mathcal{G}_{R}(\tilde{\mathbf{x}}):=\mathcal{G}_{R}((1-h)\mathbf{e}_{j}^{a}+h\mathbf{e}_{j}^{b}), \tag{16}\] where \(\mathcal{G}_{T}^{i}(\mathbf{p}):=\bigcap_{j}^{N^{c}_{T}}\mathcal{E}\left(\mathbf{e}_{j}^{a},\mathbf{e}_{j}^{b}\right)\).
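The two point-geometry projections above, onto the nearest facet hyperplane of a polytopic obstacle (13) and onto a polygon edge (15)-(16), can be sketched numerically as follows; this is an illustrative NumPy rendering of the stated equations, not the authors' implementation, and it assumes unit-norm facet normals for the closest-facet selection.

```python
import numpy as np

def project_to_hyperplane(x, a, b):
    """Project point x onto the hyperplane {z : a^T z = b}, cf. Eq. (13)."""
    return x - a * (a @ x - b) / (a @ a)

def project_to_edge(x, e_a, e_b):
    """Project point x onto the segment [e_a, e_b], cf. Eqs. (15)-(16)."""
    d = e_b - e_a
    h = np.clip((x - e_a) @ d / (d @ d), 0.0, 1.0)
    return (1.0 - h) * e_a + h * e_b

def safe_projection(x, A, b):
    """If x lies inside the polytope {z : A z <= b} (condition C1), push it
    back onto the closest facet hyperplane; otherwise keep it unchanged."""
    viol = A @ x - b
    if np.all(viol <= 0):            # C1 true: x is inside the obstacle
        j = np.argmax(viol)          # closest facet (assumes unit-norm rows of A)
        return project_to_hyperplane(x, A[j], b[j])
    return x
```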
To facilitate the discussion on reaching behaviors, we introduce two conditions \(C_{2}\) and \(C_{3}\): \[C_{2}:=\bigvee_{j=1}^{N^{c}_{T}}\left(\mathbf{z}_{j}^{\top}\mathbf{x}>r_{j}\right),\quad C_{3}:=\bigwedge_{j=1}^{N^{c}_{T}}\left(\mathbf{z}_{j}^{\top}\mathbf{x}\leq r_{j}\right), \tag{17}\] where \(C_{2}\) pertains to contact behaviors: if \(C_{2}\) evaluates to _true_, we apply projection (16). \(C_{3}\) is associated with reach-inside behaviors: if \(C_{3}\) evaluates to _false_, we apply GeoPro (16).

GeoPro \(\mathcal{P}\) also supports implicit SDF representations \(\mathcal{S}\), either implicit functions or neural networks.3 For instance, we can represent an SDF using splines. The advantage of this approach is the availability of analytical and smooth gradient information. Assume the order of the polynomials is \(r\) and we have \(c\) control points; the matrix form of the SDF is \(\mathcal{S}:=\mathbf{t}^{\top}M\Phi\), where \(\mathbf{t}\in\mathbb{R}^{r}\) is the time vector, \(M\in\mathbb{R}^{r\times c}\) is the characteristic matrix, and \(\Phi\in\mathbb{R}^{c}\). The gradient \(\nabla\mathcal{S}\) can be computed efficiently. Footnote 3: Interested readers can refer to various SDF examples in link, which can directly be incorporated into our framework. For SDF tutorials, please refer to RCFS. More applications can be found in link.

For behaviors \(\mathcal{B}_{\text{safe}}\), since \(\mathcal{G}_{R}(\mathbf{x})\) is a point geometry, the condition is the sign of \(\text{sd}_{\mathcal{S}}(\mathbf{x})\); if it is negative, the GeoPro is \[\mathcal{P}_{\mathcal{G}_{O}\left(\mathbf{y}\right)}^{\mathcal{G}_{R}\left(\mathbf{x}\right)}\rightarrow\mathcal{G}_{R}(\tilde{\mathbf{x}}):=\mathcal{G}_{R}\left(\mathbf{x}-\nabla\mathcal{S}(\mathbf{x})\,\mathcal{S}(\mathbf{x})\right). \tag{18}\]

### _GeoPro for General Robot Geometry Behaviors_

Consider a robot whose geometric shape is described by hyperplanes instead of a point, defined as \(\mathcal{G}_{R}(\mathbf{x}):=\{\mathbf{x}\in\mathbb{R}^{n}:\mathbf{V}\mathbf{x}\leq\mathbf{d}\}\), where \(\mathbf{V}=\big[\mathbf{v}_{1},\dots,\mathbf{v}_{N_{R}^{e}}\big]^{\top}\in\mathbb{R}^{N_{R}^{e}\times n}\), \(N_{R}^{e}\) is the number of edges of the robot geometry, and \(\mathbf{d}\in\mathbb{R}^{N_{R}^{e}}\). GeoPro turns the projection between two geometries into a Euclidean projection by exploiting the Minkowski sum operation [17]. Assume we have two geometries \(\mathcal{G}_{A}\) and \(\mathcal{G}_{B}\); the Minkowski sum is defined as: \[\mathcal{M}_{\mathcal{G}_{B}}^{\mathcal{G}_{A}}:=\mathcal{G}_{A}\oplus\mathcal{G}_{B}:=\{\mathbf{x}=\mathbf{x}_{A}-\mathbf{x}_{B}\mid\mathbf{x}_{A}\in\mathcal{G}_{A},\mathbf{x}_{B}\in\mathcal{G}_{B}\}, \tag{19}\] where \(\mathcal{M}_{\mathcal{G}_{B}}^{\mathcal{G}_{A}}\subset\Omega_{c}=\mathbb{R}^{n}\). There is one important property of the Minkowski sum: if \(\mathcal{G}_{A}\) and \(\mathcal{G}_{B}\) are intersecting, the origin of \(\Omega_{c}\), denoted as \(\mathbf{0}_{\Omega_{c}}\), lies inside the Minkowski sum \(\mathcal{M}_{\mathcal{G}_{B}}^{\mathcal{G}_{A}}\); if they are not intersecting, the Euclidean projection \(P_{\mathcal{M}_{\mathcal{G}_{B}}^{\mathcal{G}_{A}}}(\mathbf{0}_{\Omega_{c}})\) will bring these two geometries into contact.
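A small numerical illustration of this property is sketched below for convex polygons given by their vertex lists; the use of SciPy's convex-hull utilities and the function names are assumptions made for the sketch, not part of the paper.

```python
import numpy as np
from scipy.spatial import ConvexHull, Delaunay

def minkowski_difference(verts_A, verts_B):
    """Vertices of the set {x_A - x_B} for convex polygons, cf. Eq. (19):
    the convex hull of all pairwise vertex differences."""
    dim = verts_A.shape[1]
    diffs = (verts_A[:, None, :] - verts_B[None, :, :]).reshape(-1, dim)
    return diffs[ConvexHull(diffs).vertices]

def geometries_intersect(verts_A, verts_B):
    """The two geometries intersect iff the origin lies inside the Minkowski difference."""
    m = minkowski_difference(verts_A, verts_B)
    origin = np.zeros((1, m.shape[1]))
    return Delaunay(m).find_simplex(origin)[0] >= 0
```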
For behaviors \(\mathcal{B}_{\text{safe}}\) between \(\mathcal{G}_{R}(\mathbf{x})\) and \(\mathcal{G}_{O}(\mathbf{y})\), the condition is: \[C_{5}:=\mathbf{0}_{\Omega_{c}}\in\mathcal{M}_{\mathcal{G}_{O}(\mathbf{y})}^{\mathcal{G}_{R}(\mathbf{x})}, \tag{20}\] if \(C_{5}\) holds _true_, the GeoPro \(\mathcal{P}_{\mathcal{G}_{O}(\mathbf{y})}^{\mathcal{G}_{R}(\mathbf{x})}\) is defined as: \[\mathcal{P}_{\mathcal{G}_{O}(\mathbf{y})}^{\mathcal{G}_{R}(\mathbf{x})}\to\mathcal{G}_{R}(\tilde{\mathbf{x}}):=\mathcal{G}_{R}(P_{\mathcal{M}_{\mathcal{G}_{O}}^{\mathcal{G}_{R}}}(\mathbf{0}_{\Omega_{c}})), \tag{21}\] where \(P_{\mathcal{M}_{\mathcal{G}_{O}}^{\mathcal{G}_{R}}}(\mathbf{0}_{\Omega_{c}})\) can be computed by (13). For the behavior \(\mathcal{B}_{\text{reach}}\), the condition is modified as: \[C_{6}:=\mathbf{0}_{\Omega_{c}}\notin\mathcal{M}_{\mathcal{G}_{T}(\mathbf{p})}^{\mathcal{G}_{R}(\mathbf{x})}, \tag{22}\] if \(C_{6}\) is _true_, the GeoPro for reach behaviors is defined as: \[\mathcal{P}_{\mathcal{G}_{T}(\mathbf{p})}^{\mathcal{G}_{R}(\mathbf{x})}\to\mathcal{G}_{R}(\tilde{\mathbf{x}}):=\mathcal{G}_{R}(P_{\mathcal{M}_{\mathcal{G}_{T}}^{\mathcal{G}_{R}}}(\mathbf{0}_{\Omega_{c}})). \tag{23}\]

In this section, we introduce the GeoPro as a foundational component for reformulating the constrained optimization problem (1). To address projection-based constraints, we employ the augmented Lagrangian method, as detailed in [15]. To solve the resulting constrained subproblem, we utilize spectral projected gradient descent. This integrated algorithm is referred to as GeoPro-based ALSPG. Comprehensive details, including mathematical derivations and algorithmic steps, are provided in the Appendix.

## V Experiments

We evaluate the effectiveness and performance of the geometric projector framework in simulation, as well as in experiments with a 7-axis Franka Emika robot arm.

### _Shape Robot Behaviors by GeoPro_

We begin by showing that the robot behaviors can be shaped by composing behaviors \(\mathcal{B}:=\mathcal{B}_{\text{safe}}\wedge\mathcal{B}_{\text{reach}}\wedge\mathcal{B}_{\text{limit}}\). The system is a 2D point mass: \[\dot{c}_{x}=v_{x},\ \dot{c}_{y}=v_{y},\ \dot{v}_{x}=a_{x},\ \dot{v}_{y}=a_{y}, \tag{24}\] where the system states are \(\mathbf{x}=(c_{x},c_{y},v_{x},v_{y})\) with \(\mathbf{u}=(a_{x},a_{y})\) as inputs.

In Fig. 2(a), a set of 20 agents starts from the initial position \((c_{x},c_{y})=(0,0)\) with speed \(\|v\|=1\) pointing outward, with heading angles evenly spaced from \(0\) to \(2\pi\). Though the obstacle shapes are the same, we can design different GeoPros to obtain different safe behaviors: the right upper obstacle has no buffer, while the safety buffers for the left-corner and upper obstacles are 0.1 and 0.05, respectively. In Fig. 2(b), 5 agents start on each side with the initial conditions \(\mathbf{x}_{\text{init}}=[-0.2,\pm 0.025,0,0]\), and the goal for the bottom 5 agents is \(\mathbf{p}_{\text{reach}}=[1,0,0,0]\). It is shown that when the goal-reaching behavior is considered, the bottom 5 agents converge to the goal while the upper 5 agents move without goals. In Fig. 2(c), a set of 20 agents has subgoals distributed on the circle. The goal-reaching behavior is defined implicitly by asking the agent to touch the goal instead of specifying a time sequence explicitly. The heart-shaped obstacle shows that our framework is able to handle a variety of object geometries. In Fig. 2(d), a swarm of 80 agents is depicted, all of which share the same information about obstacles.
Fig. 2: Shape the robot behaviors by geometric projectors. (a) Demonstrates varying safety buffers in GeoPro to achieve conservative safe maneuvers. (b) Illustrates agents circumventing polytopic obstacles to consistently reach their goals. Polytopic obstacles are very common in robotics. (c) Introduces subgoals and operational constraints for the robots; the robots are required to reach designated subgoals along the circle while adhering to either task-space or joint-space box limits to regulate velocities and accelerations. (d) Explores the capability of robots to form various goal shapes, useful in platoon planning and control. GeoPro further offers barrier-function-like safety mechanisms: as robots approach obstacles, their behavior becomes more conservative, while they proceed directly towards the goal when obstacles are distant. GeoPro's layered computational architecture allows for the integration of multiple objectives and constraints, thereby enabling diverse robot behaviors. For more details of shaping behaviors, please refer to [https://www.xueminchi.com/publications/geopro](https://www.xueminchi.com/publications/geopro).

For agents located in the right-upper corner, the safety measures differ depending on their direction of approach to the obstacles. When converging from the bottom side, the agents are permitted to approach the obstacles more closely. In contrast, when converging from the upper side, the agents employ behaviors consistent with control barrier functions to enhance safety. To incorporate such safety behaviors, we extend GeoPro with a safety function, denoted as \(\psi(d(\mathbf{x},\mathbf{u}))\). The modified GeoPro is defined as \(\mathcal{P}_{\mathcal{G}_{O}}^{\mathcal{G}_{R}}\rightarrow\mathcal{G}_{R}(\tilde{\mathbf{x}}+\psi(d(\mathbf{x},\mathbf{u})))\), where \(\psi(d(\mathbf{x},\mathbf{u}))\) is specified as \[\psi(d)=\begin{cases}0&\dot{d}(\mathbf{x},\mathbf{u})\geq 0\\ \gamma(d(\mathbf{x},\mathbf{u}))&\dot{d}(\mathbf{x},\mathbf{u})<0\end{cases} \tag{25}\] In this formulation, \(d\) represents the distance metric, and \(\dot{d}(\mathbf{x},\mathbf{u})\) signifies its time derivative. When the robot is approaching an obstacle (i.e., \(\dot{d}(\mathbf{x},\mathbf{u})<0\)), the function \(\gamma(d(\mathbf{x},\mathbf{u}))\) is invoked to modify the state \(\mathbf{x}\), effectively enhancing the safety behavior of the robot. The \(\gamma\) function can be a scalar or an extended class \(\mathcal{K}_{\infty}\) function, offering flexibility in the choice of safety margins. For the left-corner goal-reaching behavior, 20 agents are required to safely reach the circle-shaped goal.

### _Non-holonomic Mobile Robots_

We demonstrate our framework on non-holonomic mobile robots with different geometric shapes, without resorting to approximations. For non-holonomic robots, we employ the following model: \[\dot{c}_{x}=v\cos(\theta),\quad\dot{c}_{y}=v\sin(\theta),\quad\dot{\theta}=w, \tag{26}\] where the system states are \(\mathbf{x}=(c_{x},c_{y},\theta)\), with the velocity \(v\) and turning rate \(w\) as inputs \(\mathbf{u}=(v,w)\). The system behaviors are \(\mathcal{B}:=\mathcal{B}_{\text{safe}}\wedge\mathcal{B}_{\text{reach}}\). One key advantage of the GeoPro framework lies in its ability to handle collision avoidance without needing to formulate it as a twice-differentiable constraint4. Additionally, our approach does not require decomposing non-convex shapes like the L-shape into convex components, making the framework both flexible and versatile.
However, it should be noted that since the problem is inherently non-convex and we do not utilize any initial guesses, the optimality of the resulting trajectory cannot be guaranteed. Footnote 4: This is typically considered a requirement for smoothness in many problem formulations.

### _Planar Arm_

GeoPro can also be extended to rigid robots with kinematic chains to obtain specified behaviors, as illustrated in Fig. 4. To demonstrate this capability, we focus on a 3-DOF planar arm. The system model is \(\mathbf{x}=f(\mathbf{q})\)5, with the state vector \(\mathbf{x}=[q_{1},q_{2},q_{3},\dot{q}_{1},\dot{q}_{2},\dot{q}_{3},c_{x},c_{y},\theta]\) and the control input vector \(\mathbf{u}=[\ddot{q}_{1},\ddot{q}_{2},\ddot{q}_{3}]\). In Fig. 4(a), the \(\mathcal{B}_{\text{limit}}\) in joint space is \(\mathbf{\dot{q}}_{\text{min}}\leq\mathbf{\dot{q}}\leq\mathbf{\dot{q}}_{\text{max}}\). Geometrically, the feasible region in joint space resembles a box centered at the origin \(\mathbf{0}\). GeoPro enforces joint limits by projecting onto this feasible region. The \(\mathcal{B}_{\text{reach}}\) behavior aims to reach a point with a specified orientation, \([c_{x},c_{y},\theta]\), by projection. The cost function for these behaviors only considers minimizing the control effort \(\mathbf{u}\), with zero input as the initial guess. Footnote 5: For more details and codes, refer to the Robotics Codes from Scratch toolbox.

In Fig. 4(b), an additional GeoPro requires the end-effector to reach the goal while following a specified line. Fig. 4(c) shows a task where the end-effector maintains a specific distance from an object while also maintaining a certain pose upon reaching a designated point. Similarly, Fig. 4(d) shows the end-effector being permitted to move within one manifold and outside another, while maintaining a specific orientation upon reaching a point. Insertion tasks [18] are common in practice. Our GeoPro-based framework can directly accommodate the geometric considerations inherent in peg-in-hole tasks, as shown in Fig. 4(e). This behavior combines \(\mathcal{B}_{\text{safe}}\wedge\mathcal{B}_{\text{reach}}\wedge\mathcal{B}_{\text{limit}}\), where \(\mathcal{B}_{\text{safe}}\) ensures that the peg attached to the end-effector avoids obstacles. Object-centered tasks, which involve intricate geometric relationships between the robot and the object, can also be tackled. As illustrated in Fig. 4(f), we define a behavior that allows the robot to move around a target while ensuring that the end-effector remains oriented towards that target.

### _Autonomous Driving Benchmark_

To evaluate the effectiveness of our GeoPro algorithm, we conducted benchmarks comparing it with the state-of-the-art Optimization-Based Collision Avoidance (OBCA) method [2]. We focused on two commonly encountered scenarios: parallel and vertical parking, as depicted in Fig. 5.

Fig. 3: Non-holonomic mobile robots with different geometric shapes.

The vehicle model is given by \[\dot{c}_{x}=v\cos\theta,\quad\dot{c}_{y}=v\sin\theta,\quad\dot{\theta}=\frac{v}{L}\tan\delta,\quad\dot{v}=a, \tag{27}\] where \(L=2.7\) m is the wheelbase length. The system states are \(\mathbf{x}=[c_{x},c_{y},\theta,v]\) with control inputs \(\mathbf{u}=[\delta,a]\). Given the nonlinear and non-convex nature of the problem, a hybrid A* algorithm is employed to generate initial guesses for the optimization. All common parameters, including constraints and the ego-vehicle dimensions, are set to be the same for fairness. The results are summarized in Tab. I.
In OBCA, dual variables are utilized to reformulate the signed distance function, and second-order information of the cost function and constraints is required during the iterative process. Although OBCA employs the IPOPT solver, which is implemented in C++, the GeoPro-based approach demonstrates greater computational efficiency. This increased efficiency is attributable to GeoPro's ability to handle non-smooth safety constraints, as well as its requirement for only first-order information.

### _Franka Emika Experiments_

We finally demonstrate GeoPro on a Franka Emika robot for an insertion task. The task is to insert a polytope into a hole while avoiding four polytopic obstacles, as illustrated in Fig. 1. The behavior is composed of a non-conservative safe behavior among the four rectangular obstacles and a goal-reaching behavior that requires alignment of the polytope peg with the hole and mandates that the robot adhere to zero-velocity constraints upon completion of the insertion.

## VI Conclusion

We present a geometric-constraint-based optimization framework that is flexible, efficient, and versatile for designing robot behaviors from low-dimensional to high-dimensional robot dynamics. We successfully demonstrate this in extensive simulations and an insertion task on a Franka Emika robot arm. We wish to extend and improve our work along the following directions. First, we intend to extend \(\mathcal{C}_{i}\) to arbitrary nonlinear and non-convex sets; off-the-shelf proximal methods will be employed to solve the subproblems. Second, safety-aware projection will be fully studied and compared against the popular CBF-based approaches. Third, the GeoPro-based constrained optimization depends on the performance of GeoPro; we would like to introduce Geometric Algebra to design more efficient and versatile GeoPros.

\begin{table} \begin{tabular}{|l|l|c|c|c|c|} \hline Scenarios & Algorithm & mean & std & min & max \\ \hline \multirow{3}{*}{Vertical Parking} & OBCA (IPOPT) & 655 ms & 167 ms & 380 ms & 1027 ms \\ \cline{2-6} & GeoPro & **222 ms** & **130 ms** & **52 ms** & **885 ms** \\ \cline{2-6} & GeoPro (SLP) & 837 ms & 221 ms & 204 ms & 1319 ms \\ \hline \multirow{3}{*}{Parallel Parking} & OBCA (IPOPT) & 708 ms & 199 ms & 368 ms & 1346 ms \\ \cline{2-6} & GeoPro & **470 ms** & **136 ms** & **199 ms** & **909 ms** \\ \cline{2-6} & GeoPro (SLP) & 942 ms & 249 ms & 227 ms & 1894 ms \\ \hline \end{tabular} \end{table} TABLE I: The performance comparison of our approach GeoPro against OBCA in 100 random tests.

Fig. 4: GeoPro based planar arm behaviors. The rectangle geometry is fixed to the end-effector.

Fig. 5: Benchmark scenarios. Initial states are randomly generated within a blue region defined by \([c_{x\text{min}},c_{x\text{max}},c_{y\text{min}},c_{y\text{max}}]=[-10,10,6.5,9.5]\).
Generating motion for robots that interact with objects of various shapes is a complex challenge, further complicated when the robot's own geometry and multiple desired behaviors are taken into account. To address this issue, we introduce Geometric Projectors (GeoPro), a new framework for constrained optimization. This novel framework enables the generation of task-agnostic behaviors that comply with geometric constraints. GeoPro streamlines the design of behaviors in both task and configuration spaces, offering diverse functionalities such as collision avoidance and goal-reaching while maintaining high computational efficiency. We validate the effectiveness of our work through simulations and Franka Emika robotic experiments, comparing its performance against state-of-the-art methodologies. This comprehensive evaluation highlights GeoPro's versatility in accommodating robots with varying dynamics and precise geometric shapes.
2309.16382
RLLTE: Long-Term Evolution Project of Reinforcement Learning
We present RLLTE: a long-term evolution, extremely modular, and open-source framework for reinforcement learning (RL) research and application. Beyond delivering top-notch algorithm implementations, RLLTE also serves as a toolkit for developing algorithms. More specifically, RLLTE decouples the RL algorithms completely from the exploitation-exploration perspective, providing a large number of components to accelerate algorithm development and evolution. In particular, RLLTE is the first RL framework to build a complete and luxuriant ecosystem, which includes model training, evaluation, deployment, benchmark hub, and large language model (LLM)-empowered copilot. RLLTE is expected to set standards for RL engineering practice and be highly stimulative for industry and academia.
Mingqi Yuan, Zequn Zhang, Yang Xu, Shihao Luo, Bo Li, Xin Jin, Wenjun Zeng
2023-09-28T12:30:37
http://arxiv.org/abs/2309.16382v1
# RLLTE: Long-Term Evolution Project of Reinforcement Learning ###### Abstract We present RLLTE: a long-term evolution, extremely modular, and open-source framework for reinforcement learning (RL) research and application. Beyond delivering top-notch algorithm implementations, RLLTE also serves as a toolkit for developing algorithms. More specifically, RLLTE decouples the RL algorithms completely from the exploitation-exploration perspective, providing a large number of components to accelerate algorithm development and evolution. In particular, RLLTE is the first RL framework to build a complete and luxuriant ecosystem, which includes model training, evaluation, deployment, benchmark hub, and large language model (LLM)-empowered copilot. RLLTE is expected to set standards for RL engineering practice and be highly stimulative for industry and academia. ## 1 Introduction Reinforcement learning (RL) has emerged as a highly significant research topic, garnering considerable attention due to its remarkable achievements in diverse fields, including smart manufacturing and autonomous driving (Mnih et al., 2015; Duan et al., 2016; Schulman et al., 2017; Haarnoja et al., 2018; Yarats et al., 2021). However, the efficient and reliable engineering implementation of RL algorithms remains a long-standing challenge. These algorithms often possess sophisticated structures, where minor code variations can substantially influence their practical performance. Academia requires a stable baseline for algorithm comparison, while industry seeks convenient interfaces for swift application development (Raffin et al., 2021). However, the design and maintenance of an RL library prove costly, involving substantial computing resources, making it prohibitive for most research teams. To tackle this problem, several open-source projects were proposed to offer reference implementations of popular RL algorithms (Liang et al., 2018; D'Eramo et al., 2021; Fujita et al., 2021; Raffin et al., 2021; Huang et al., 2022). For instance, Raffin et al. (2021) developed the stable-baselines3 (SB3) framework, which encompasses seven model-free deep RL algorithms, including proximal policy optimization (PPO) (Schulman et al., 2017) and advantage actor-critic (A2C) (Mnih et al., 2016). SB3 prioritizes stability and reliability, and rigorous code testing has been conducted to minimize implementation errors and ensure the reproducibility of results. Weng et al. (2022) introduced Tianshou, a highly modularized library emphasizing flexibility and training process standardization. Tianshou also provides a unified interface for various algorithms, such as offline and imitation learning. In contrast, Huang et al. (2022) introduced CleanRL, which focuses on single-file implementations to facilitate algorithm comprehension, new feature prototyping, experiment analysis, and scalability. Despite their achievements, most of the existing benchmarks have not established a long-term evolution plan and have proven to be short-lived. Firstly, the inherent complexity of RL algorithms naturally results in distinct coding styles, posing significant obstacles to open-source collaborations. Complete algorithm decoupling and modularization have yet to be well achieved, making maintenance challenging and limiting extensibility. Secondly, these projects are deficient in establishing a comprehensive application ecosystem. They primarily concentrate on model training, disregarding vital aspects like model evaluation and deployment.
Furthermore, they frequently lack exhaustive benchmark testing data, including essential elements like learning curves and trained models. This deficiency makes replicating algorithms demanding in terms of computational resources. Inspired by the discussions above, we propose **RLLTE**, a long-term evolution, extremely modular, and open-source framework of RL. We summarize the highlighted features of RLLTE as follows: * **Module-oriented**. RLLTE decouples RL algorithms from the _exploitation-exploration_ perspective and breaks them down into minimal primitives, such as _encoder_ for feature extraction and _storage_ for archiving and sampling experiences. RLLTE offers a rich selection of modules for each primitive, enabling developers to utilize them as building blocks for constructing algorithms. As a result, the focus of RLLTE shifts from specific algorithms to providing more handy modules like PyTorch. In particular, each module in RLLTE is customizable and plug-and-play, empowering users to develop their own modules. This decoupling process also contributes to advancements in interpretability research, allowing for a more in-depth exploration of RL algorithms. * **Long-term evolution**. RLLTE is a long-term evolution project, continually involving advanced algorithms and tools in RL. RLLTE will be updated based on the following tenet: (i) generality; (ii) improvements in generalization ability and sample efficiency; (iii) excellent performance on recognized benchmarks; (iv) promising tools for RL. Therefore, this project can uphold the right volume and high quality resources, thereby inspiring more subsequent projects. * **Data augmentation.** Recent approaches have introduced data augmentation techniques at the _observation_ and _reward_ levels to improve the sample efficiency and generalization ability of RL agents, which are cost-effective and highly efficient. In line with this trend, RLLTE incorporates built-in support for data augmentation operations and offers a wide range of observation augmentation modules and intrinsic reward modules. * **Abundant ecosystem**. RLLTE considers the needs of both academia and industry and develops an abundant project ecosystem. For instance, RLLTE designed an evaluation toolkit to provide statistical and reliable metrics for assessing RL algorithms. Additionally, the deployment toolkit enables the seamless execution of models on various inference devices. Figure 1: Overview of the architecture of RLLTE. In particular, RLLTE attempts to introduce the large language model (LLM) to build an intelligent copilot for RL research and applications. * **Comprehensive benchmark data.** Existing RL projects typically conduct testing on a limited number of benchmarks and often lack comprehensive training data, including learning curves and test scores. While this limitation is understandable, given the resource-intensive nature of RL training, it hampers the advancement of subsequent research. To address this issue, RLLTE has established a data hub utilizing the Hugging Face platform. This data hub provides extensive testing data for the included algorithms on widely recognized benchmarks. By offering complete and accessible testing data, RLLTE will facilitate and accelerate future research endeavors in RL. 
* **Multi-hardware support.** RLLTE has been thoughtfully designed to accommodate diverse computing hardware configurations, including graphic processing units (GPUs) and neural network processing units (NPUs), in response to the escalating global demand for computing power. This flexibility enables RLLTE to support various computing resources, ensuring optimal trade-off of performance and scalability for RL applications. ## 2 Architecture Figure 1 illustrates the overall architecture of RLLTE, which contains the core layer, application layer, and tool layer. The following sections will detail the design concepts and usage of the three layers. ### Core Layer In the core layer, we decouple an RL algorithm from the _exploitation-exploration_ perspective and break them down into minimal primitives. Figure 2 illustrates a typical forward and update workflow of RL training. At each time step, an encoder first processes the observation to extract features. Then, the features are passed to a policy module to generate actions. Finally, the transition will be inserted into the storage, and the agent will sample from the storage to perform the policy update. In particular, we can use data augmentation techniques such as observation augmentation and intrinsic reward shaping to improve the sample efficiency and generalization ability. We categorize these fundamental components into two parts: xploit and xplore, and Table 1 illustrates their architectures. The modules within the xploit component primarily focus on exploiting the current collected experiences. For instance, the storage module defines the methods for storing and sampling experiences, while the policy module is updated based on the sampled data. In contrast, modules in xplore focus on exploring unknown domains. When policy is stochastic, distribution specifies the methods for sampling actions from the action space. In the case of Figure 2: Forward and update workflow of an RL algorithm. **Aug.**: Augmentation. **Dist.**: Distribution for sampling actions. **Int.**: Intrinsic. **Obs.**: Observation. a deterministic policy, the distribution module introduces noise to the current action to enhance the exploration of the action space. The augmentation and reward modules contribute to exploring the state and action space by augmenting observations and providing additional intrinsic rewards, respectively. Each submodule in Table 1 is accompanied by many pre-defined components, which are listed in Appendix A. ### Application Layer Equipped with modules of the core layer, we can efficiently develop RL algorithms and applications with simple steps, and Table 2 illustrates the architecture of the application layer. See all the corresponding code examples in Appendix C. #### 2.2.1 Fast Algorithm Construction Developers only need three steps to implement an RL algorithm with RLLTE: (i) select an algorithm prototype; (ii) select desired modules; (iii) define an update function. Currently, RLLTE provides three algorithm prototypes: OnPolicyAgent, OffPolicyAgent, and DistributedAgent. Figure 3 demonstrates how to write an A2C agent for discrete control tasks with RLLTE: \begin{table} \begin{tabular}{l l l} \hline \hline **Module** & **Submodule** & **Remark** \\ \hline \multirow{3}{*}{rllte.xploit} & policy & Policies for interaction and learning. \\ & encoder & Encoders for feature extraction. \\ & storage & Storages for collected experiences. \\ \hline \multirow{3}{*}{rllte.xplore} & distribution & Distributions for sampling actions. 
\\ & augmentation & Observation augmentation modules. \\ & reward & Intrinsic reward modules. \\ \hline \hline \end{tabular} \end{table} Table 1: Six primitives in RLLTE. Note that the action noise is implemented via a distribution manner to keep unification in RLLTE. \begin{table} \begin{tabular}{l l} \hline \hline **Module** & **Remark** \\ \hline \multirow{3}{*}{rllte.agent} & Top-notch implementations of highly-recognized RL algorithms, in which convenient interfaces are designed to realize fast application construction. In particular, the module-oriented design allows developers to replace settled modules of implemented algorithms to make performance comparisons and algorithm improvements. \\ \hline \multirow{3}{*}{Pre-training} & Since RLLTE is designed to support intrinsic reward modules natively, developers can conveniently realize pre-training. The pre-trained weights will be saved automatically after training, and it suffices to perform fine-tuning by loading the weights in the.train() function. \\ \hline \multirow{3}{*}{Deployment} & A toolkit that helps developers run their RL models on inference devices, which consistently have lower computational power. RLLTE currently supports two inference frameworks: NVIDIA TensorRT and HUAWEI CANN. RLLTE provides a fast API for model transformation and inference, and developers can invoke it directly with their models. \\ \hline \multirow{3}{*}{Copilot} & A promising attempt to introduce the LLM into an RL framework. The copilot can help users reduce the time required for learning frameworks and assist in the design and development of RL applications. We are developing more advanced features to it, including RL-oriented code completion and training control. \\ \hline \hline \end{tabular} \end{table} Table 2: Architecture of the application layer in RLLTE. As shown in this example, developers can effortlessly choose the desired modules and create an.update() function to implement a new algorithm. At present, the framework includes a collection of 13 algorithms, such as data-regularized actor-critic (DrAC) (Raileanu et al., 2021) and data regularized Q-v2 (DrQ-v2), and the detailed introduction can be found in Appendix B. #### 2.2.2 Module Replacement For an implemented algorithm, developers can replace its settled modules using the.set() method to realize performance comparisons and algorithm improvements. Moreover, developers can utilize custom modules as long as they inherit from the base class, as demonstrated in the code example in Appendix C.2. By decoupling these elements, RLLTE also empowers developers to construct prototypes and perform quantitative analysis of algorithm performance swiftly. #### 2.2.3 Copilot Copilot is the first attempt to integrate an LLM into an RL framework, which aims to help developers reduce the learning cost and facilitate application construction. We follow the design of (Toro, 2023) that interacts privately with documents using the power of GPT, and Figure 4 illustrates its architecture. The source documents are first ingested by an instructor embedding tool to create a local vector database. After that, a local LLM is used to understand questions and create answers based on the database. In practice, we utilize Vicuna-7B (Chiang et al., 2023) as the base model and build the database using various corpora, including API documentation, tutorials, and RL references. 
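To make the ingest-and-retrieve step of this workflow concrete, the following toy Python sketch builds an in-memory vector store and returns the passages most similar to a question using cosine similarity. It is only illustrative: the actual copilot uses an instructor embedding model and Vicuna-7B rather than the hashing embedder below, and the document snippets are invented placeholders.

```python
import numpy as np

def embed(text, dim=256):
    """Toy bag-of-words hashing embedding; a stand-in for the instructor embedding model."""
    v = np.zeros(dim)
    for token in text.lower().split():
        v[hash(token) % dim] += 1.0
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

# "Ingest" a few document snippets into an in-memory vector database (placeholders).
docs = [
    "OnPolicyAgent is the prototype for on-policy algorithms such as PPO and A2C.",
    "The storage module defines how experiences are archived and sampled.",
    "Intrinsic reward modules such as RND improve exploration.",
]
db = np.stack([embed(d) for d in docs])

def retrieve(question, k=2):
    """Return the k snippets most similar to the question (cosine similarity)."""
    scores = db @ embed(question)
    return [docs[i] for i in np.argsort(scores)[::-1][:k]]

# The retrieved context would then be passed to the local LLM (e.g., Vicuna-7B).
print(retrieve("which module stores experiences?"))
```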
The powerful understanding ability of the LLM model enables the copilot to accurately answer questions about the use of the framework and any other questions of RL. Moreover, no additional training is required, and users are free to replace the base model according to their computing power. In future work, we will further enrich the corpus and add the code completion function to build a more intelligent copilot for RL. Figure 4: **Left**: The workflow of the copilot. **Right**: A conversation example of training an PPO agent using RLLTE. Figure 3: **Left**: Implement A2C algorithm with dozens of lines of code, and the complete code example can be found in Appendix C.1. **Right**: Simple interface to invoke implemented RL algorithms. ### Tool Layer The tool layer provides practical toolkits for task design, model evaluation, and benchmark data. rllte.env allows users to design task environments following the natural Gymnasium pattern without additional effort. All the environments in RLLTE are set to be vectorized to guarantee sample efficiency, and many different observation and action spaces (e.g., box, discrete, multi-binary, etc.) are supported. In particular, users can also use EnvPool (Weng et al., 2022) to realize ultra-fast operational acceleration. See code example in Appendix D.1. Beyond providing efficient task design and training interfaces, RLLTE further investigates the model evaluation problem in RL and develops a simple evaluation toolkit. RLLTE reconstructs and improves the code of (Agarwal et al., 2021) to realize a more convenient and efficient interface. Figure 5 illustrates several metrics computed and visualized by the toolkit. Finally, rllte.hub can accelerate academic research by providing practically available benchmark data, including training data and trained models. This toolkit will save much time and computational resources for researchers, and the code example can be found in Appendix D.3. RLLTE is the first open-source RL project that aims to build a complete ecosystem. Developers can perform task design, model training, model evaluation, and model deployment within one framework. As a result, RLLTE is highly stimulative for both industry and academia. ## 3 Project Evolution As a long-term evolution project, RLLTE is expected to consistently provide high-quality and timely engineering standards and development components for RL. To that end, RLLTE sets the following tenet for updating new features: * Generality is the most important; * Improvements in sample efficiency or generalization ability; * Excellent performance on recognized benchmarks; * Promising tools for RL. Firstly, RLLTE only accepts general algorithms that can be applied in many distinct scenarios and tasks. For example, PPO is a general RL algorithm that can solve tasks with arbitrary action spaces, and random network distillation (RND) (Burda et al., 2019) is a general intrinsic reward module that can be combined with arbitrary RL agents. This rule can effectively control the volume of the \begin{table} \begin{tabular}{l l} \hline \hline **Toolkit** & **Remark** \\ \hline \multirow{6}{*}{rllte.env} & Provides a large number of packaged environments (e.g., Atari games) \\ & for fast invocation. RLLTE is designed to natively support Gymnasium (Towers et al., 2023), which is a maintained fork of the Gym library of OpenAI (Brockman et al., 2016). Moreover, developers are allowed to use their custom environments with built-in wrappers in RLLTE. 
\\ \hline \multirow{6}{*}{rllte.evaluation} & Provides reasonable and reliable metrics for algorithm evaluation following (Agarwal et al., 2021). Performance module for evaluating a single algorithm. Comparison module for comparing multiple algorithms. Visualization for visualizing computed metrics. \\ \hline \multirow{6}{*}{rllte.hub} & Provides a large number of reusable datasets (.datasets) and trained \\ & models (.models) of supported RL algorithms. Developers can also \\ \cline{1-1} & reproduce the training process via the pre-defined RL applications (. applications). \\ \hline \hline \end{tabular} \end{table} Table 3: Architecture of the tool layer in RLLTE. Code example for each toolkit can be found in Appendix D. project while ensuring its adaptability to a wide range of requirements. Moreover, generality exemplifies the potential for future enhancements (e.g., the various variants of PPO), which can also reduce the difficulty of open-source collaboration and maintain community vitality. Furthermore, the algorithm is expected to improve sample efficiency or generalization ability (e.g., better intrinsic reward shaping approaches), two long-standing and critical problems in RL. Accordingly, the algorithm must be evaluated on multiple recognized benchmarks like Atari (Bellemare et al., 2013) and Progen games (Cobbe et al., 2020) to guarantee practical performance across tasks. In particular, RLLTE also accepts various promising tools (e.g., operational efficiency optimization, model evaluation, and deployment) to maintain a comprehensive ecosystem. In summary, RLLTE will keep evolving to adapt to changing needs and produce a positive impact on the RL community. \begin{table} \begin{tabular}{c|c c c c c} \hline \hline **Framework** & **Modularized** & **Parallel** & **Decoupling** & **Backend** & **License** \\ \hline \hline Baselines & ✓ & ✗ & - & TensorFlow & MIT \\ SB3 & ✓ & ✗ & - & PyTorch & MIT \\ CleanRL & - & ✗ & ✗ & PyTorch & MIT \\ Ray/rllib & ✓ & ✓ & - & TF/PyTorch & Apache-2.0 \\ rlpyt & ✓ & ✓ & ✗ & PyTorch & MIT \\ Tianshou & ✓ & ✓ & - & PyTorch & MIT \\ ElegantRL & ✓ & ✓ & - & PyTorch & Apache-2.0 \\ SpinningUp & ✗ & ✗ & ✗ & PyTorch & MIT \\ ACME & ✗ & ✓ & ✗ & TF/JAX & Apache-2.0 \\ RLLTE & ✓ & ✓ & ✓ & PyTorch & MIT \\ \hline \hline \end{tabular} \end{table} Table 4: Architecture comparison with existing projects. **Modularized**: The project adopts a modular design with reusable components. **Parallel**: The project supports parallel learning. **Decoupling**: The project supports algorithm decoupling and module replacement. **Backend**: Which machine learning framework to use? **License**: Which open-source protocol to use? Note that the short line represents partial support. Figure 5: Performance metrics computed and visualized by rllte.evaluation, and the code example can be found in Appendix D.2. ## 4 Related Work We compare RLLTE with eleven representative open-source RL projects, namely Baselines (Dhariswal et al., 2017), SB3 (Raffin et al., 2021), CleanRL (Huang et al., 2022), Ray/rlib (Liang et al., 2018), and rlpyt (Stooke and Abbeel, 2019), Tianshou (Weng et al., 2022), ElegantRL (Liu et al., 2021), SpinningUp (Achiam, 2018), and ACME (Hoffman et al., 2020), respectively. The following comparison is conducted from three aspects: architecture, functionality, and engineering quality. This project references some other open-source projects and adheres to their open-source protocols. 
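As a brief illustration of the statistics reported by rllte.evaluation (Table 3 and Figure 5), the following numpy sketch computes the interquartile mean (IQM) recommended by Agarwal et al. (2021) together with a simplified (non-stratified) bootstrap confidence interval. It is not the actual rllte.evaluation implementation, and the scores are random placeholders.

```python
import numpy as np

def iqm(scores):
    """Interquartile mean: mean of the middle 50% of the flattened scores."""
    s = np.sort(np.asarray(scores).ravel())
    n = len(s)
    return s[n // 4 : n - n // 4].mean()

def iqm_with_ci(scores, n_boot=2000, alpha=0.05, seed=0):
    """Point estimate plus a simplified (non-stratified) bootstrap confidence interval."""
    rng = np.random.default_rng(seed)
    flat = np.asarray(scores).ravel()
    boot = [iqm(rng.choice(flat, size=flat.size, replace=True)) for _ in range(n_boot)]
    lo, hi = np.quantile(boot, [alpha / 2, 1 - alpha / 2])
    return iqm(flat), (lo, hi)

# Example: 10 runs x 5 tasks of normalized scores (random placeholders).
scores = np.random.default_rng(1).uniform(0.0, 1.5, size=(10, 5))
print(iqm_with_ci(scores))
```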
## 5 Discussion In this paper, we introduced a novel RL framework entitled RLLTE, which is a long-term evolution, extremely modular, and open-source project for advancing RL research and applications. With a rich and comprehensive ecosystem, RLLTE enables developers to accomplish task design, model training, evaluation, and deployment within one framework seamlessly, which is highly stimulative for both academia and industry. Moreover, RLLTE is an ultra-open framework where developers can freely use and try many built-in or custom modules, contributing to the research of decoupling \begin{table} \begin{tabular}{c|c c c c} \hline \hline **Framework** & **Documentation** & **Code Coverage** & **Type Hints** & **Last Update** & **Used by** \\ \hline Baselines & ✗ & ✗ & 01/2020 & 508 \\ SB3 & ✓ & 96\% & ✓ & 09/2023 & 3.3k \\ CleanRL & ✓ & - & ✗ & 09/2023 & 27 \\ Ray/rlib & ✓ & - & ✗ & 09/2023 & - \\ rlpyt & ✓ & 15\% & ✗ & 09/2020 & - \\ Tianshou & ✓ & 91\% & ✓ & 09/2023 & 169 \\ ElegantRL & ✓ & - & ✓ & 07/2023 & 256 \\ SpinningUp & ✓ & ✗ & ✗ & 02/2020 & - \\ ACME & ✓ & - & ✗ & 07/2023 & 149 \\ RLLTE & ✓ & 97\% & ✓ & 09/2023 & 2\(\nearrow\) \\ \hline \hline \end{tabular} \end{table} Table 6: Engineering quality comparison with existing projects. Note that the short line represents unknown. \begin{table} \begin{tabular}{c|c c c c c c c} \hline \hline **Framework** & **Number** & **Custom** & **Custom** & **Data** & **Data** & **Dela** & **Deploy.** & **Eval.** & **Multi-Device** \\ \hline Baselines & 9 & ✓(gym) & - & ✗ & - & ✗ & ✗ & ✗ \\ SB3 & 7 & ✓(gymnasium) & - & - & ✓ & ✗ & ✗ & ✗ \\ CleanRL & 9 & ✗ & ✓ & - & ✓ & ✗ & ✗ & ✗ \\ Ray/rlib & 16 & ✓(gym) & - & - & - & ✗ & ✗ & ✗ \\ rlpyt & 11 & ✗ & - & ✗ & - & ✗ & ✗ & ✗ \\ Tianshou & 20 & ✓(gymnasium) & ✗ & - & - & ✗ & ✗ & ✗ \\ ElegantRL & 9 & ✓(gym) & ✗ & ✗ & - & ✗ & ✗ & ✗ \\ SpinningUp & 6 & ✓(gym) & ✗ & ✗ & - & ✗ & ✗ & ✗ \\ ACME & 14 & ✓(dm\_env) & ✗ & ✗ & - & ✗ & ✗ & ✗ \\ RLLTE & 13\(\nearrow\) & ✓(gymnasium) & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline \hline \end{tabular} \end{table} Table 5: Functionality comparison with existing projects. **Custom Env.**: Support custom environments? Since Gym (Brockman et al., 2016) is no longer maintained, it is critical to make the project adapt to Gymnasium (Towers et al., 2023). **Custom Module**: Support custom modules? **Data Aug.**: Support data augmentation techniques like intrinsic reward shaping and observation augmentation? **Data Hub**: Have a data hub to store benchmark data? **Deploy.**: Support model deployment? **Eval.**: Support model evaluation? **Multi-Device**: Support hardware acceleration of different computing devices (e.g., GPU and NPU)? Note that the short line represents partial support. and interpretability of RL. As a long-term evolution project, RLLTE will keep tracking the latest research progress and provide high-quality implementations to inspire more subsequent research. In particular, there are some remaining issues that we intend to work on in the future. Firstly, RLLTE plans to add more algorithm prototypes to meet the task requirements of different scenarios, including multi-agent RL, inverse RL, imitation learning, and offline RL. Secondly, RLLTE will enhance the functionality of the pre-training module, which includes more prosperous training methods and more efficient training processes, as well as providing downloadable model parameters. 
Thirdly, RLLTE will further explore the combination of RL and LLM, including using LLM to control the construction of RL applications and improving the performance of existing algorithms (e.g., reward function design and data generation). Finally, RLLTE will optimize the operational efficiency of modules at the hardware level to reduce the computational power threshold, promoting the goal of RL for everyone. #### Acknowledgments This work is supported, in part, by NSFC under Grant No. 62102333 and Grant No. 62342246, and HKSAR RGC under Grant No. PolyU 25211321, and ZJNSFC under Grant LQ23F010008, and GDSTC under Grant No. 2023A1515010592. We thank the HPC center of the Eastern Institute for Advanced Study (EIAS) for providing their GPU computing platform for testing and HUAWEI Ascend for their NPU computing platform.
RLLTE is a long-term evolution, extremely modular, and open-source framework for reinforcement learning (RL) research and application. Beyond delivering top-notch algorithm implementations, RLLTE also serves as a toolkit for developing algorithms. In particular, RLLTE completely decouples RL algorithms from the exploitation-exploration perspective, thereby providing a large number of components to accelerate algorithm development and evolution. RLLTE is the first RL framework to build a complete ecosystem, covering model training, evaluation, deployment, a benchmark hub, and a large language model (LLM)-empowered copilot. RLLTE is expected to set standards for RL engineering practice and to be highly stimulative for industry and academia.
2309.06191
Characterisation and fundamental limitations of irreversible stochastic steering distillation
Steering resources, central for quantum advantages in one-sided device-independent quantum information tasks, can be enhanced via local filters. Recently, reversible steering conversion under local filters has been fully characterised. Here, we solve the problem in the irreversible scenario, which leads to a complete understanding of stochastic steering distillation. This result also provides an operational interpretation of the max-relative entropy as the optimal filter success probability. We further show that all steering measures can be used to quantify measurement incompatibility in certain stochastic steering distillation scenarios. Finally, for a broad class of steering robustness measures, we show that their maximally achievable values in stochastic steering distillation are always upper bounded by different types of incompatibility robustness measures. Hence, measurement incompatibility sets the fundamental limitations for stochastic steering distillation.
Chung-Yun Hsieh, Huan-Yu Ku, Costantino Budroni
2023-09-12T12:57:06
http://arxiv.org/abs/2309.06191v1
# Characterisation and fundamental limitations of irreversible stochastic steering distillation ###### Abstract Steering resources, central for quantum advantages in one-sided device-independent quantum information tasks, can be enhanced via local filters. Recently, reversible steering conversion under local filters has been fully characterised. Here, we solve the problem in the _irreversible_ scenario, which leads to a complete understanding of stochastic steering distillation. This result also provides an operational interpretation of the max-relative entropy as the optimal filter success probability. We further show that all steering measures can be used to quantify measurement incompatibility in certain stochastic steering distillation scenarios. Finally, for a broad class of steering robustness measures, we show that their maximally achievable values in stochastic steering distillation are always _upper bounded_ by different types of incompatibility robustness measures. Hence, measurement incompatibility sets the fundamental limitations for stochastic steering distillation. ## I Introduction For each quantum information task, there is a specific quantum resource necessary for its successful implementation [1]. For instance, perfect teleportation [2] and super-dense coding [3] require maximally entangled states. Also, perfect quantum memories and quantum communication protocols need noiseless identity channels [4]. However, physical systems are usually noisy and not perfectly under control. This leads to the question of how these resources can be manipulated and enhanced, i.e., the problem of resource _distillation_. Since the inception of entanglement distillation [5; 6], the idea of distillation has been applied to various physical phenomena, such as quantum communication [7; 8], thermodynamics [9; 10], informativeness of measurements [11], nonlocality [12; 13; 14], and quantum steering [15; 16; 17; 18]. In particular, quantum steering is inextricably linked to one-sided device-independent quantum information processing [19; 20; 21; 22; 23; 24; 25]. However, if compared with other quantum resources, steering distillation has been studied only very recently [15; 16; 17; 18]. Moreover, _stochastic_ steering distillation is using local filters to enhance quantum steering, which is not only experimentally feasible [15], but also has strong implications for incompatible measurements [17; 18]. Still, the question of characterising _irreversible_ steering conversion remains open. Up to now, the most timely characterisation [15; 17; 18] addresses the _reversible_ case, in which one can undo the local filter with non-vanishing success probability. It is then essential to characterise local filters that _cannot_ be undone in order to completely understand stochastic steering distillation. In this work, we prove the first necessary and sufficient characterisation of irreversible steering distillation under local filters. Furthermore, we provide a systematic way to quantify measurement incompatibility [26] by the optimally distillable steering. Finally, by uncovering the relation between different types of robustness measures of steering and incompatibility, we show that incompatibility sets the fundamental limitations for stochastic steering distillation. ## II Preliminary notions We start with specifying notations. For a _non-full-rank_ positive operator \(O\geq 0\), one cannot define its inverse in general. 
While one can effectively do so when we restrict to its _support_\(\text{supp}(O)\), which is the subspace spanned by its eigenstates with strictly positive eigenvalues (it is also called _range_ and denoted by \(\text{ran}(O)\)). Then, \(O\) is effectively full-rank in the subspace \(\text{supp}(O)\), and its inverse can be well-defined as given by \(O|_{\text{supp}(O)}^{-1}\oplus 0_{\text{\tiny{supp}(O)}}\). A conventional abbreviation in the literature is \(O^{-1}\coloneqq O|_{\text{supp}(O)}^{-1}\oplus 0_{\text{\tiny{supp}(O)}}\). ### Quantum Steering As an intermediate quantum resource between nonlocality and entanglement, quantum steering refers to one agent (\(A\)) remotely preparing states for a spatially separated agent (\(B\)) by \(A\)'s local measurement acting on their shared bipartite state \(\rho_{AB}\) with classical communication to \(B\)[27; 28; 29; 30] (see Fig. 1 for a schematic illustration). Formally, \(A\)'s measurements are described by a set of _positive operator-valued measures_ (POVMs) [31]\(\mathbf{E}\coloneqq\{E_{a|x}\}_{a,x}\) such that \(E_{a|x}\geq 0\) for every \(a,x\) and \(\sum_{a}E_{a|x}=\mathbb{I}_{A}\) for every \(x\), where \(\mathbb{I}_{A}\) is the identity operator in the system \(A\). We call \(\mathbf{E}\) a _measurement assemblage_. After \(A\) locally measures \(\rho_{AB}\), \(B\) obtains the post-measurement (un-normalised) state \[\sigma_{a|x}\coloneqq\text{tr}_{A}[(E_{a|x}\otimes\mathbb{I}_{B})\rho_{AB}]. \tag{1}\] The collection of un-normalised states \(\mathbf{\sigma}\coloneqq\{\sigma_{a|x}\}_{a,x}\) is called a _state assemblage_. Note that \(\rho_{\mathbf{\sigma}}\coloneqq\sum_{a}\sigma_{a|x}\) is a state independent of \(x\), which is due to the no-signalling condition. The _classical_ cases correspond to some state assemblage \(\mathbf{\tau}=\{\tau_{a|x}\}_{a,x}\) that can be simulated by a _local-hidden-state_ (LHS) model: \[\tau_{a|x}=\sum_{\lambda}P(\lambda)P(a|x,\lambda)\rho_{\lambda}, \tag{2}\] where \(\{P(\lambda)\}_{\lambda},\{P(a|x,\lambda)\}_{a,x,\lambda}\) are (conditional) probability distributions, and \(\{\rho_{\lambda}\}_{\lambda}\) are pre-existing states. When a state assemblage \(\mathbf{\sigma}\) does not admit a LHS model, denoted as \(\mathbf{\sigma}\notin\mathbf{LHS}\), we say that \(\mathbf{\sigma}\) is _steerable_. ### Measurement Incompatibility Measurement incompatibility is a key ingredient of many quantum phenomena such as uncertainty relations [32, 26], Bell nonlocality [33, 34, 35], contextuality [36], as well as steering [29]. To formally define measurement incompatibility, we say a measurement assemblage \(\mathbf{M}=\{M_{a|x}\}_{a,x}\) is _jointly measurable_, or _compatible_, if there exists a single POVM \(\{G_{\lambda}\}_{\lambda}\) and conditional probability distributions \(\{P(a|x,\lambda)\}_{a,x,\lambda}\) such that \[M_{a|x}=\sum_{\lambda}P(a|x,\lambda)G_{\lambda}. \tag{3}\] Physically, they can be simulated by a single measurement \(\{G_{\lambda}\}_{\lambda}\) with classical post-processing. For a measurement assemblage \(\mathbf{E}\), we write \(\mathbf{E}\in\mathbf{JM}\) whenever it is jointly measurable. Otherwise, it is _incompatible_ and denoted by \(\mathbf{E}\notin\mathbf{JM}\). ## III Main Results ### Steering Distillation Under Irreversible Local Filters We first recall the notion of _steering-equivalent observable_ (SEO) [37, 38], which mathematically connects quantum steering and measurement incompatibility. 
Given a state assemblage \(\mathbf{\sigma}\), its SEO is uniquely defined by \[\mathbf{B}^{(\mathbf{\sigma})}\coloneqq\left\{\sqrt{\rho_{\mathbf{\sigma}}}^{-1}\sigma _{a|x}\sqrt{\rho_{\mathbf{\sigma}}}^{-1}\right\}_{a,x}, \tag{4}\] which are POVMs in the space \(\text{supp}(\rho_{\mathbf{\sigma}})\). As shown in Refs. [37, 38], \(\mathbf{\sigma}\notin\mathbf{LHS}\)_if and only if \(\mathbf{B}^{(\mathbf{\sigma})}\notin\mathbf{JM}\)_. As previously mentioned, only the agent \(B\) is trusted in a steering experiment. According to the one-sided device-independent perspective, only the system of \(B\) is characterised. Hence, it is essential to understand how the local agent \(B\) can manipulate state assemblages with their trusted devices. It turns out that the SEO provides a clean way to _classify_ steering assemblages under filter operations locally implemented by \(B\). More precisely, two state assemblages have the same SEO (up to a unitary degree of freedom) if and only if they can be transformed to each other by local filter operations [17]. In other words, there exist local filters in _both_ directions, and, probabilistically, one is able to _reverse_ the effect of local filters. The question, thus, remains open of how to characterise irreversible local filter transformations. That is, can we compare state assemblages by the actions of local filters that are _irreversible_ even _probabilistically_? To this end, we introduce the _SEO ordering_: **Definition 1**.: _For state assemblages \(\mathbf{\sigma}\) and \(\mathbf{\tau}\), we write_ \[\mathbf{\sigma}>_{\mathrm{SEO}}\mathbf{\tau} \tag{5}\] _if and only if there is a unitary \(U\) with \(\text{supp}(\rho_{\mathbf{\tau}})\subseteq\text{supp}(U\rho_{\mathbf{\sigma}}U^{ \dagger})\) such that_ \[\mathbf{\tau}=\sqrt{\rho_{\mathbf{\tau}}}U\mathbf{B}^{(\mathbf{\sigma})}U^{\dagger}\sqrt{ \rho_{\mathbf{\tau}}}. \tag{6}\] In other words, \(\mathbf{\sigma}>_{\mathrm{SEO}}\mathbf{\tau}\) whenever \(\mathbf{\tau}\) can be mathematically induced by the SEO of \(\mathbf{\sigma}\). One can check that \(>_{\mathrm{SEO}}\) is a _preorder_, i.e., a reflexive and transitive homogeneous relation. Also, the SEO ordering generalises the notion of _SEO equivalence class_, introduced in Ref. [17], to the one-way, asymmetric regime. More precisely, \(\mathbf{\sigma},\mathbf{\tau}\) are _SEO equivalent_, denoted by \(\mathbf{\sigma}\sim_{\mathrm{SEO}}\mathbf{\tau}\), if and only if there exists some unitary \(U\) such that \(U\mathbf{B}^{(\mathbf{\sigma})}U^{\dagger}=\mathbf{B}^{(\mathbf{\tau})}\). It is then straightforward to see that \(\mathbf{\sigma}\sim_{\mathrm{SEO}}\mathbf{\tau}\)_if and only if_\(\mathbf{\sigma}>_{\mathrm{SEO}}\mathbf{\tau}\) and \(\mathbf{\tau}>_{\mathrm{SEO}}\mathbf{\sigma}\). An immediate question now is whether this definition is operationally relevant. As we shall demonstrate later, this mathematical notion is _operationally equivalent to_ the following type of _local filter_ operations: \[\sigma_{a|x}\mapsto\frac{K\sigma_{a|x}K^{\dagger}}{p_{\mathrm{succ}}}, \tag{7}\] where the operator \(K\) with the condition \(K^{\dagger}K\leq\mathbb{I}\) describes the local filter operation \((\cdot)\mapsto K(\cdot)K^{\dagger}\) by a single Kraus operator, and \(p_{\mathrm{succ}}\coloneqq\text{tr}\left(K\rho_{\mathbf{\sigma}}K^{\dagger}\right)\) is the probability for a successful filter with the input \(\mathbf{\sigma}\). We denote the set of all such local filters, i.e., those with a single Kraus operator, as \(\text{LF}_{1}\)[17]. 
Conditioned on the successful filtering outcomes, \(\tau_{a|x}=K\sigma_{a|x}K^{\dagger}/p_{\mathrm{succ}}\) is the state assemblage that one obtains. When there exists at least one such local filter achieving this transformation, with _non-vanishing_ success probability, we write \(\mathbf{\sigma}\xrightarrow{\text{LF}_{1}}\mathbf{\tau}\). Also, let \(p_{\mathrm{succ}}^{\text{max}}(\mathbf{\sigma}\xrightarrow{\text{LF}_{1}}\mathbf{\tau})\) be the highest success probability among all such \(\text{LF}_{1}\) transformations. It is helpful, at this point, to recall the definition of _max-relative entropy_[39]. For two states \(\rho,\eta\), define \[D_{\mathrm{max}}(\eta\,\|\,\rho)\coloneqq\log_{2}\min\{\lambda\geq 0\,|\,\eta \leq\lambda\rho\} \tag{8}\] if \(\text{supp}(\eta)\subseteq\text{supp}(\rho)\); otherwise, define \(D_{\mathrm{max}}(\eta\,\|\,\rho)\coloneqq\infty\). Then we have the following result: Figure 1: **A schematic interpretation of quantum steering.** Quantum steering refers to one party \(A\) remotely preparing states for a spatially separated party \(B\) by some local measurement in \(A\) (acting on the shared state \(\rho_{AB}\)) and classical communication from \(A\) to \(B\). **Theorem 1**.: _Let \(\mathbf{\sigma},\mathbf{\tau}\) be state assemblages. Then_ \[\mathbf{\sigma}>_{\mathrm{SEO}}\mathbf{\tau}\quad\text{if and only if}\quad\mathbf{\sigma} \xrightarrow{\mathrm{LF}_{1}}\mathbf{\tau}. \tag{9}\] _When such a local filter transformation exists, we have_ \[p_{\mathrm{succ}}^{\mathrm{max}}(\mathbf{\sigma}\xrightarrow{\mathrm{LF}_{1}}\mathbf{ \tau})=\sup_{U\in\mathcal{U}(\mathbf{\sigma}>_{\mathrm{SEO}}\mathbf{\tau})}2^{-D_{ \mathrm{max}}(\rho_{\mathbf{\tau}}\,\|\,U\rho_{\mathbf{\sigma}}U^{\dagger})}, \tag{10}\] _where \(\mathcal{U}(\mathbf{\sigma}>_{\mathrm{SEO}}\mathbf{\tau})\) is the set of all unitary operations consistent with a valid decomposition in Eq. (6)._ We detail the proof in Appendix A. Also, see Fig. 2 for a schematic illustration. As shown in Ref. [18], optimal stochastic steering distillation can be achieved by using \(\mathrm{LF}_{1}\) -- since no general local filter can distill more steerability than the optimal \(\mathrm{LF}_{1}\) filter. From this perspective, it is general enough to focus on \(\mathrm{LF}_{1}\) for stochastic steering distillation. Consequently, one can view Theorem 1 as a complete characterisation of stochastic steering distillation -- since it provides a complete and analytical characterisation of irreversible steering conversion under \(\mathrm{LF}_{1}\). See also Observation 4 for further discussions. As a direct corollary, Theorem 1 reproduces the main result in Ref. [17] by considering reversible \(\mathrm{LF}_{1}\); namely, one can check that: \(\mathbf{\sigma}\sim_{\mathrm{SEO}}\mathbf{\tau}\Leftrightarrow\mathbf{\sigma} \xrightarrow{\mathrm{LF}_{1}}\mathbf{\tau}\&\mathbf{\tau}\xrightarrow{\mathrm{LF}_{1}} \mathbf{\sigma}\Leftrightarrow\mathbf{\sigma}>_{\mathrm{SEO}}\mathbf{\tau}\)_and_\(\mathrm{supp}(\rho_{\mathbf{\sigma}}),\mathrm{supp}(\rho_{\mathbf{\tau}})\)_have the same dimension._ In Ref. [17], state assemblages are classified based on mutual convertibility under \(\mathrm{LF}_{1}\); here, Theorem 1 upgrades the steering classification into a preorder among state assemblages based on the most general conversion under \(\mathrm{LF}_{1}\). Rather unexpectedly, Theorem 1 provides a new operational way to interpret max-relative entropy in a steering experiment. 
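To illustrate Eqs. (8) and (10) numerically, the following Python sketch evaluates \(D_{\mathrm{max}}\) for full-rank states via the largest eigenvalue of \(\sqrt{\rho}^{-1}\eta\sqrt{\rho}^{-1}\), together with the corresponding value \(2^{-D_{\mathrm{max}}}\) bounding the filter success probability. The two qubit states are illustrative, and the unitary optimisation of Eq. (10) is omitted for simplicity.

```python
import numpy as np

def inv_sqrt(rho):
    """rho^{-1/2} for a full-rank Hermitian state (simplifying assumption)."""
    vals, vecs = np.linalg.eigh(rho)
    return vecs @ np.diag(vals ** -0.5) @ vecs.conj().T

def d_max(eta, rho):
    """D_max(eta || rho) = log2 of the largest eigenvalue of rho^{-1/2} eta rho^{-1/2}."""
    m = inv_sqrt(rho) @ eta @ inv_sqrt(rho)
    return np.log2(np.linalg.eigvalsh(m).max())

# Example: eta = |0><0|, rho = maximally mixed qubit state.
eta = np.array([[1.0, 0.0], [0.0, 0.0]])
rho = np.eye(2) / 2
dm = d_max(eta, rho)
print(dm, 2 ** -dm)  # D_max = 1, so the success-probability bound 2^{-D_max} is 1/2
```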
Equation (10) implies that max-relative entropy actually carries the information of the success probability for local filter transformations \(\mathbf{\sigma}\xrightarrow{\mathrm{LF}_{1}}\mathbf{\tau}\). This also explains the importance of the support condition from the success probability perspective. In fact, the support condition implies a no-go result: _It is impossible to have \(\mathbf{\sigma}\xrightarrow{\mathrm{LF}_{1}}\mathbf{\tau}\) if the rank of \(\rho_{\mathbf{\tau}}\) is strictly higher than the one of \(\rho_{\mathbf{\sigma}}\)_-- since \(D_{\mathrm{max}}\left(\rho_{\mathbf{\tau}}\,\right\|U\rho_{\mathbf{\sigma}}U^{\dagger} \right)=\infty\) for all unitary \(U\) in this case._ As a final remark, note that the formula of optimal success probability, i.e., Eq. (10), has a _prerequisite_: namely, we first need to know that such an \(\mathrm{LF}_{1}\) filter _exists_. For instance, local filters cannot create steerability when state assemblage is unsteerable. Hence, we do have \(\rho_{\mathbf{\sigma}}\) and \(\rho_{\mathbf{\tau}}\) with finite-valued max-relative entropy that no \(\mathrm{LF}_{1}\) filter can convert \(\mathbf{\sigma}\in\mathbf{LHS}\) to \(\mathbf{\tau}\not\in\mathbf{LHS}\). Consequently, it is _insufficient_ to merely check the value of max-relative entropy if one wants to know whether a local filter transformation exists or not. ### Examples As an illustrative example, consider a qubit-qutrit setting with the initial state \(\rho_{AB}^{(v)}=v|\phi_{+}^{\mathrm{qubit}}\rangle\langle\phi_{+}^{\mathrm{ qubit}}|+(1\text{-}v)\frac{1_{A}}{2}\otimes|2\rangle\langle 2|\), where \(|\phi_{+}^{\mathrm{qubit}}\rangle:=(|00\rangle+|11\rangle)/\sqrt{2}\) is maximally entangled in the 'effective' qubit spanned by \(|0\rangle,|1\rangle\) in the qutrit \(B\). Suppose the initial state assemblage \(\sigma_{a|x}^{(v)}=\mathrm{tr}_{A}\left[\left(E_{a|x}^{\mathrm{Pauli}}\otimes \mathbb{I}_{B}\right)\rho_{AB}^{(v)}\right],\) with \(E_{0|0}^{\mathrm{Pauli}}=|0\rangle\langle 0|=\mathbb{I}_{A}-E_{1|0}^{ \mathrm{Pauli}},E_{0|1}^{\mathrm{Pauli}}=|+\rangle\langle+|=\mathbb{I}_{A}-E_ {1|1}^{\mathrm{Pauli}}\), and \(|+\rangle:=(|0\rangle+|1\rangle)/\sqrt{2}\). Namely, \(E_{a|x}^{\mathrm{Pauli}}\)'s are projective measurements corresponding to Pauli \(X\) and \(Z\) in the qubit system \(A\). By construction, the steerability of \(\mathbf{\sigma}^{(v)}\) can be arbitrarily weak when \(v\to 0\). Now, consider the \(\mathrm{LF}_{1}\) filter \(K=|0\rangle\langle 0|+|1\rangle\langle 1|\). Then direct computation shows that, after this local filter and post-selection, we obtain \(\sigma_{a|x}^{\mathrm{final}}=E_{a|x}^{\mathrm{Pauli}}/2\), which is maximally steerable in a two-qubit setting. Note that this filter operation is not reversible, since \(\mathrm{LF}_{1}\) cannot extend the final state assemblage to the degree of freedom spanned by \(|2\rangle\) in \(B\). This demonstrates how to use a genuinely irreversible \(\mathrm{LF}_{1}\) filter to distil steering. ## IV Applications ### Quantifying Measurement Incompatibility by Steering Distillation As an application, we show that steering distillation under local filters in \(\mathrm{LF}_{1}\) can be used to _quantify_ measurement incompatibility. 
To this end, we introduce the _steering-induced incompatibility measure_: \[I_{S}\left(\mathbf{E}\right)\coloneqq\sup\left\{S\left(\mathbf{\tau}\right)\left| \,\mathbf{\sigma}\xrightarrow{\mathrm{LF}_{1}}\mathbf{\tau},\mathbf{B}^{(\mathbf{\sigma}) }=\mathbf{E}\right.\right\}, \tag{11}\] where the maximisation is taken over every state assemblage \(\mathbf{\sigma}\) whose SEO is identical to \(\mathbf{E}\), and all other state assemblages that can be reached by them via \(\mathrm{LF}_{1}\). When \(S\) is a faithful steering measure (i.e., \(S(\mathbf{\sigma})=0\) if and only if \(\mathbf{\sigma}\in\mathbf{LHS}\)), it is not difficult to check that \(I_{S}\left(\mathbf{E}\right)=0\) if and only if \(\mathbf{E}\in\mathbf{JM}\). Adopting the resource theory settings considered in Refs. [40; 41], we prove the following result: **Theorem 2**.: \(I_{S}\) _is an incompatibility monotone if \(S\) is a steering monotone._ Figure 2: **Schematic interpretation of Theorem 1.** Theorem 1 considers two fundamental quantum information scenarios in a steering setup, which are ordering and one-way convertibility. (a) To characterise and compare different steering resources beyond classification, we need an ordering structure among all state assemblages. This can be captured by the SEO ordering introduced in this work. (b) An one-way convertibility problem aims to determine the possibility of transforming one object into another. In this work, we focus on transformations under local filters in \(\mathrm{LF}_{1}\), which are key ingredients in stochastic steering distillation [17; 18]. Detailed proof as well as a complete statement of this result can be found in Appendix B. Theorem 2_ quantitatively_ bridges quantum steering and incompatibility in the most general setting via their corresponding resource theories [40; 41]. Notably, if there is a steering measure with certain physical/operational meaning, one can obtain a measure for measurement incompatibility with a similar interpretation by using Theorem 2. For instance, if we consider the metrological steering monotone [42], one can then estimate measurement incompatibility via a metrologic task. Theorem 2 provides a full generality for this types of mapping. ### Fundamental Limitations on Steering Distillation It is useful to consider some specific measures to illustrate Theorem 2. To this end, we focus on the _robustness-type measures_, which are widely used in quantum information theory [43; 44; 45; 23; 37]. To illustrate what they are, now, we write \(p\mathbf{E}+(1-p)\mathbf{M}\coloneqq\{pE_{a|x}+(1-p)M_{a|x}\}_{a,x}\), and similar notations are also used for state assemblages. The _generalised incompatibility robustness_ is defined by [48; 49; 37] \[\mathrm{IR}(\mathbf{E})\coloneqq\inf\left\{t\geq 0\left|\frac{\mathbf{E}+t \mathbf{M}}{1+t}\in\mathbf{JM}\right\}, \tag{12}\] which is minimising over all possible measurement assemblages \(\mathbf{M}\). Physically, \(\mathrm{IR}(\mathbf{E})\) measures the smallest amount of _noise_ needed to turn \(\mathbf{E}\) compatible. In other words, \(\mathbf{M}\) takes the role of some noise being added to destroy \(\mathbf{E}\)'s incompatibility. This type of measure can be defined for a wide range of physical phenomena with _different types_ of noise. 
For instance, the _generalised steering robustness_[23; 50] is \[\mathrm{SR}(\mathbf{\sigma})\coloneqq\inf\left\{t\geq 0\left|\frac{\mathbf{\sigma}+t \omega}{1+t}\in\mathbf{LHS}\right|\right\}, \tag{13}\] which minimises over every state assemblage \(\mathbf{\omega}\). As another example, the _consistent generalised steering robustness_ (see, e.g., Refs. [50; 51]) is \[\mathrm{SR}^{(\mathrm{c})}(\mathbf{\sigma})\coloneqq\inf\left\{t\geq 0\left| \frac{\mathbf{\sigma}+t\omega}{1+t}\in\mathbf{LHS},\rho_{\mathbf{\omega}}=\rho_{\mathbf{ \sigma}}\right.\right\}, \tag{14}\] which can be viewed as a steering robustness with the 'noise model' containing all \(\mathbf{\omega}\) with the condition \(\rho_{\mathbf{\omega}}=\rho_{\mathbf{\sigma}}\). In general, one can consider different types of noise, and the associated steering robustness is called _consistent_ if the condition \(\rho_{\mathbf{\omega}}=\rho_{\mathbf{\sigma}}\) is always imposed. It turns out that \(\mathrm{LF}_{1}\) filters provide a general and quantitative way to link _different types_ of robustness measures of incompatibility and steering. To make this link precise, we introduce the following notion to describe different noise models. Formally, we define a _noise model for quantum steering_ as a map \(\mathcal{N}_{S}\) that maps a state assemblage into _a set of_ state assemblages; namely, \(\mathbf{\sigma}\mapsto\mathcal{N}_{S}(\mathbf{\sigma})=\{\mathbf{\omega}\}\). The set \(\mathcal{N}_{S}(\mathbf{\sigma})=\{\mathbf{\omega}\}\) contains all possible \(\mathbf{\omega}\) that can be used as noises to mix with the given \(\mathbf{\sigma}\). The _steering robustness subject to noise_\(\mathcal{N}_{S}\) is defined by \[\mathrm{SR}_{\mathcal{N}_{S}}(\mathbf{\sigma})\coloneqq\inf\left\{t\geq 0\left| \frac{\mathbf{\sigma}+t\mathbf{\omega}}{1+t}\in\mathbf{LHS},\mathbf{\omega}\in\mathcal{N}_ {S}(\mathbf{\sigma})\right\}.\right. \tag{15}\] For instance, if \(\mathcal{N}_{S}(\mathbf{\sigma})\) is the set of all state assemblages, we revisit the generalised steering robustness SR. When \(\mathcal{N}_{S}(\mathbf{\sigma})=\{\mathbf{\omega}\colon\text{state assemblage} \,|\,\rho_{\mathbf{\omega}}=\rho_{\mathbf{\sigma}}\}\), then we get the consistent generalised steering robustness \(\mathrm{SR}^{(\mathrm{c})}\). Now, similarly, a _noise model for measurement incompatibility_ is a map \(\mathcal{N}_{I}\) that maps a measurement assemblage into _a set of_ measurement assemblages; namely, \(\mathbf{E}\mapsto\mathcal{N}_{I}(\mathbf{E})=\{\mathbf{M}\}\). The set \(\mathcal{N}_{I}(\mathbf{E})=\{\mathbf{M}\}\) contains all possible \(\mathbf{M}\) that can be used as noises to mix with \(\mathbf{E}\). The _incompatibility robustness subject to noise_\(\mathcal{N}_{I}\) is defined by \[\mathrm{IR}_{\mathcal{N}_{I}}(\mathbf{E})\coloneqq\inf\left\{t\geq 0\left| \frac{\mathbf{E}+t\mathbf{M}}{1+t}\in\mathbf{JM},\ \mathbf{M}\in\mathcal{N}_{I}(\mathbf{E})\right\}.\right. \tag{16}\] If we set \(\mathcal{N}_{I}(\mathbf{E})=\) the set of all measurement assemblages, we revisit the generalised incompatible robustness IR. It turns out that \(\mathrm{SR}_{\mathcal{N}_{S}}\) and \(\mathrm{IR}_{\mathcal{N}_{I}}\) can be linked by SEO ordering. 
To this end, we say that \(\mathcal{N}_{I}\) is _SEO-included_ by \(\mathcal{N}_{S}\), and we denote it by \(\mathcal{N}_{I}\subset_{\mathrm{SEO}}\mathcal{N}_{S}\), if it holds that \[\sqrt{\eta}U\mathcal{N}_{I}\left(\mathbf{M}\right)U^{\dagger}\sqrt{\eta}\subseteq \mathcal{N}_{S}\left(\sqrt{\eta}U\mathbf{M}U^{\dagger}\sqrt{\eta}\right) \tag{17}\] for every measurement assemblage \(\mathbf{M}\), state \(\eta\), and unitary \(U\) with the condition \(\mathrm{supp}(\eta)\subseteq\mathrm{supp}(U\mathbb{I}_{\mathbf{M}}U^{\dagger})\), where \(\mathbb{I}_{\mathbf{M}}\coloneqq\sum_{a}M_{a|x}\,\forall x\); i.e., it is the identity of the (sub-)space that \(\mathbf{M}\) lives in. Note that the notation \(O\mathcal{N}_{I}(\mathbf{E})O^{\dagger}\) denotes the set \(\left\{O\mathbf{W}O^{\dagger}\,|\,\mathbf{W}\in\mathcal{N}_{I}(\mathbf{E})\right\}\). In other words, every _measurement_ assemblage noise can induce some _state_ assemblage noise via SEO ordering. Notice that \(\subset_{\mathrm{SEO}}\) is _not_ a preorder, since it always connects two different types of noises, so it is not homogeneous. As one can check, the noise models for IR and SR satisfy this inclusion, and the same holds for the noise models for IR and SR\({}^{(\mathrm{c})}\). Now we present the following result, which is proved in Appendix C: **Theorem 3**.: _Suppose \(\mathcal{N}_{I}\subset_{\mathrm{SEO}}\mathcal{N}_{S}\). Then for every state assemblage \(\mathbf{\sigma}\), we have that_ \[\sup\left\{\mathrm{SR}_{\mathcal{N}_{S}}\left(\mathbf{\tau}\right)\left|\,\mathbf{ \sigma}\stackrel{{\mathrm{LF}_{1}}}{{\longrightarrow}}\mathbf{\tau} \right.\right\}\leq\mathrm{IR}_{\mathcal{N}_{I}}\left(\mathbf{B}^{(\mathbf{ \sigma})}\right). \tag{18}\] Interestingly, Theorem 3 provides fundamental limitations on steering distillation under local filters in \(\mathrm{LF}_{1}\) when we measure steerability via \(\mathrm{SR}_{\mathcal{N}_{S}}\) -- namely, its value can _never_ go beyond the incompatibility robustness \(\mathrm{IR}_{\mathcal{N}_{I}}\). In other words, as long as the SEO inclusion holds, the distillable steerability is always controlled and limited by the amount of incompatibility of the corresponding SEO. This finding reveals a foundational hierarchy between steering and incompatibility in the most general setting. ### Quantifying Steering Distillation by Measurement Incompatibility A very natural question raised by Theorem 3 is: _when can the upper bound be saturated?_ It turns out that it is closely related to the consistency condition for the steering robustness. As an explicit example, we consider the relation of SR, SR(c), and IR through steering distillation tasks. Combining our finding and Refs. [17, 51], we have the following observation (in what follows, LF denotes _general_ local filters, i.e., completely-positive trace-non-increasing linear maps; we write \(\mathbf{\sigma}\xrightarrow{\mathrm{LF}}\mathbf{\tau}\) if such a filter exists with non-vanishing success probability): **Observation 4**.: _For every \(\mathbf{\sigma}\), we have that_ \[\mathrm{IR}\left(\mathbf{B}^{(\mathbf{\sigma})}\right) =\sup\left\{\mathrm{SR}(\mathbf{\tau})\left|\mathbf{\sigma}\xrightarrow{ \mathrm{LF}}\mathbf{\tau}\right.\right\}\] \[=\sup\left\{\mathrm{SR}(\mathbf{\tau})\left|\mathbf{\sigma}\xrightarrow{ \mathrm{LF}_{1}}\mathbf{\tau}\right.\right\}=\mathrm{SR}^{(\mathrm{c})}(\mathbf{ \sigma}). 
\tag{19}\] Proof.: A direct computation shows that \[\mathrm{IR}\left(\mathbf{B}^{(\mathbf{\sigma})}\right) \geq\sup\left\{\mathrm{SR}(\mathbf{\tau})\left|\mathbf{\sigma}\xrightarrow{ \mathrm{LF}_{1}}\mathbf{\tau}\right.\right\}\] \[\geq\sup\left\{\mathrm{SR}(\mathbf{\tau})\left|\mathbf{\sigma}\xrightarrow{ \mathrm{LF}_{1}}\mathbf{\tau},\mathbf{\tau}\xrightarrow{\mathrm{LF}_{1}}\mathbf{\sigma} \right.\right\}=\mathrm{IR}\left(\mathbf{B}^{(\mathbf{\sigma})}\right).\] The first inequality is from Theorem 3, the second inequality is due to reducing the maximisation range, and the last equality is the main result of Ref. [17]. Finally, since LF\({}_{1}\) are optimal over all LF filters (see Ref. [18]) and IR \(\left(\mathbf{B}^{(\mathbf{\sigma})}\right)=\mathrm{SR}^{(\mathrm{c})}(\mathbf{ \sigma})\) (see Ref. [51]), the result follows. This result has several implications. First, it provides an operational meaning to SR(c) as the maximal steerability that can be extracted from LF filters, as it was previously noted for IR. In other words, SR(c) is a natural _steering distillation_ measure. As a consequence, SR(c) is a quantity that cannot be stochastically distilled by LF filters on the trusted side (\(B\)), as it provides the same value. Notice that, as a consequence, SR(c) takes the same value on state assemblages with very different values of the generalised robustness SR. In fact, in the same SEO equivalence class, there exist state assemblages with the maximal SR (corresponding to IR) and also state assemblages with SR arbitrarily close to zero [17], whereas SR(c) always provides the same value corresponding to the maximally distillable steerability, as quantified by SR. Finally, this finding shows that LF cannot outperform LF\({}_{1}\) in stochastic steering distillation, as long as steerability is measured by SR. From this perspective, Theorem 1 can be viewed as a complete characterisation of stochastic steering distillation in a general sense. ## V Conclusion In this work, we prove the first complete characterisation of irreversible steering conversion under local filters in LF\({}_{1}\). This finding also fully solves a major open question of Ref. [17]. At the same time, we provide a general way of constructing incompatibility measures from steering measures and show how general incompatibility measures serve as upper bounds on the highest achievable steerability under local filters. As a consequence, fundamental limitations on irreversible stochastic steering distillation follow. Finally, we provide an operational interpretation of the consistent steering robustness as a measure of steering distillability, rather than a direct measure of steerability. Several questions remain open. First, Ref. [18] reports that, rather surprisingly, measurement incompatibility _cannot_ be distilled by local filters, even with non-physical filter operations. Hence, despite their mathematical equivalence, quantum steering and measurement incompatibility behave in opposite ways with respect to stochastic distillation. It is then interesting to explore the difference between these two seemingly equivalent physical phenomena. Furthermore, it is interesting to study possible applications of stochastic steering distillation to thermodynamics and information transmission [55, 56, 57, 58, 59]. 
Finally, from a practical perspective, it is useful to further study how to use stochastic steering distillation to improve steering inequality violation [60, 61, 62, 63] and its activation [64, 65, 66], which can potentially enhance (one-sided) device-independent quantum information protocols. ## Acknowledgements The authors acknowledge fruitful discussions with Paul Skrzypczyk and Roope Uola. C.-Y. H. is supported by the Royal Society through Enhanced Research Expenses (on grant NFQI) and the ERC Advanced Grant (FLQuant). H.-Y. K. and C. B. are supported by the Ministry of Science and Technology, Taiwan, (Grants No. MOST 111-2917-I-564-005), the Austrian Science Fund (FWF) through Projects No. ZK 3 (Zukunftskolleg), and No. F7113 (BeyondC). ## Appendix A Proof of Theorem 1 Proof.: (_direction_ "\(\Rightarrow\)") Suppose \(\mathbf{\sigma}\succ_{\mathrm{SEO}}\mathbf{\tau}\). Then we have \[\tau_{a|x}=\sqrt{\rho_{\mathbf{\tau}}}U\sqrt{\rho_{\mathbf{\sigma}}}^{-1}\sigma_{a|x} \sqrt{\rho_{\mathbf{\sigma}}}^{-1}U^{\dagger}\sqrt{\rho_{\mathbf{\tau}}}\quad\forall a,x \tag{20}\] for some unitary \(U\) achieving \(\mathrm{supp}(\rho_{\mathbf{\tau}})\subseteq\mathrm{supp}(U\rho_{\mathbf{\sigma}}U^{ \dagger})\). Equation (20) already suggests the correct Kraus operator, up to normalisation. Let us define \[\lambda_{\mathrm{opt}}\coloneqq\min\left\{\lambda\geq 0\,|\,\rho_{\mathbf{\tau}} \leq\lambda U\rho_{\mathbf{\sigma}}U^{\dagger}\right\}. \tag{21}\] This implies \[\sqrt{\rho_{\mathbf{\sigma}}}^{-1}U^{\dagger}\rho_{\mathbf{\tau}}U\sqrt{\rho_{\mathbf{ \sigma}}}^{-1}\leq\lambda_{\mathrm{opt}}\mathbb{I}_{\mathbf{\sigma}}, \tag{22}\] where \(\mathbb{I}_{\mathbf{\sigma}}\) is the identity of the subspace \(\mathrm{supp}(\rho_{\mathbf{\sigma}})\). Notice that \(\lambda_{\mathrm{opt}}\) exists and \(1\leq\lambda_{\mathrm{opt}}<\infty\), because of the inclusion of the supports, and consequently \(D_{\mathrm{max}}(\rho_{\mathbf{\tau}}\,\|\,U\rho_{\mathbf{\sigma}}U^{\dagger})=\log_{2 }\lambda_{\mathrm{opt}}\). Now, define the operator \[L\coloneqq\sqrt{\lambda_{\mathrm{opt}}}^{-1}\sqrt{\rho_{\mathbf{\tau}}}U\sqrt{ \rho_{\mathbf{\sigma}}}^{-1}. \tag{23}\] It is then straightforward to check that \(L^{\dagger}L\leq\mathbb{I}_{\mathbf{\sigma}}\), meaning that it is a local filter in LF\({}_{1}\). The success probability for the filter \(L\) reads \[p_{\mathrm{sacc}}=\mathrm{tr}\left(L\rho_{\mathbf{\sigma}}L^{\dagger}\right)= \lambda_{\mathrm{opt}}^{-1}=2^{-D_{\mathrm{max}}\left(\rho_{\mathbf{\tau}}\,\|\,U\rho _{\mathbf{\sigma}}U^{\dagger}\right)}>0, \tag{24}\] which is non-vanishing since \(\lambda_{\rm opt}<\infty\). Hence, it provides the correct state assemblage, since \(L\sigma_{a|x}L^{\dagger}/p_{\rm succ}=\tau_{a|x}\ \forall\,a,x\). This means that \(\mathbf{\sigma}\xrightarrow{1\text{F}_{1}}\mathbf{\tau}\), as desired. (_direction_ "\(\Leftarrow\)") Suppose that \(\mathbf{\sigma}\xrightarrow{1\text{F}_{1}}\mathbf{\tau}\). Then we have \[\tau_{a|x}=\frac{K\sqrt{\rho_{\mathbf{\sigma}}}\mathbf{B}_{a|x}^{( \mathbf{\sigma})}\sqrt{\rho_{\mathbf{\sigma}}}K^{\dagger}}{p_{\rm succ}} \tag{10}\] with some local filter \(K\) satisfying \(K^{\dagger}K\leq\mathbb{I}\) and \(p_{\rm succ}=\text{tr}\left(K\rho_{\mathbf{\sigma}}K^{\dagger}\right)>0\). 
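As an aside, the forward-direction filter of Eq. (23) can be checked numerically. The following is a minimal toy sketch (our own example with full-rank qubit states and \(U=\mathbb{I}\); the numerical values are purely illustrative, not from the paper), which builds \(L\), verifies \(L^{\dagger}L\leq\mathbb{I}\), and confirms that the success probability equals \(\lambda_{\rm opt}^{-1}\) as in Eq. (24):

```python
import numpy as np
from scipy.linalg import sqrtm

# Toy check (our own example, U = identity, full-rank qubit states) of the
# forward construction: build the filter L of Eq. (23) and verify that
# L^dag L <= I and that the success probability equals 1/lambda_opt.
rho_sigma = np.diag([0.7, 0.3])
rho_tau = np.array([[0.6, 0.2], [0.2, 0.4]])

sqrt_sigma_inv = np.linalg.inv(sqrtm(rho_sigma))
sqrt_tau = sqrtm(rho_tau)

# lambda_opt = min{lambda >= 0 : rho_tau <= lambda * rho_sigma}, Eq. (21);
# equivalently the largest eigenvalue appearing in Eq. (22).
lam_opt = np.max(np.linalg.eigvalsh(sqrt_sigma_inv @ rho_tau @ sqrt_sigma_inv))

L = sqrt_tau @ sqrt_sigma_inv / np.sqrt(lam_opt)      # Eq. (23)
assert np.max(np.linalg.eigvalsh(L.conj().T @ L)) <= 1 + 1e-9
p_succ = np.trace(L @ rho_sigma @ L.conj().T).real
print(p_succ, 1 / lam_opt)                            # the two agree, cf. Eq. (24)
```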
By the polar decomposition theorem [31], there exist a unitary \(U\) and positive operator \(P\geq 0\) such that \[\frac{K\sqrt{\rho_{\mathbf{\sigma}}}}{\sqrt{p_{\rm succ}}}=UP, \tag{11}\] where \[P=\sqrt{\left(\frac{K\sqrt{\rho_{\mathbf{\sigma}}}}{\sqrt{p_{\rm succ }}}\right)^{\dagger}\left(\frac{K\sqrt{\rho_{\mathbf{\sigma}}}}{\sqrt{p_{\rm succ }}}\right)}=\frac{\sqrt{\sqrt{\rho_{\mathbf{\sigma}}}K^{\dagger}K\sqrt{\rho_{\mathbf{ \sigma}}}}}{\sqrt{p_{\rm succ}}}. \tag{12}\] Using the relation \(K^{\dagger}K\leq\mathbb{I}\), we can write (with \(p_{\rm succ}>0\)) \[P^{2}\leq\frac{\rho_{\mathbf{\sigma}}}{p_{\rm succ}}, \tag{13}\] meaning that \(\text{supp}(P)=\text{supp}(P^{2})\subseteq\text{supp}(\rho_{\mathbf{\sigma}})\). Together with Eq. (10), we obtain \[\rho_{\mathbf{\tau}}=UP\mathbb{I}_{\mathbf{\sigma}}PU^{\dagger}=UP^{2}U^ {\dagger}. \tag{14}\] Hence, \(UPU^{\dagger}=\sqrt{\rho_{\mathbf{\tau}}}\), and we have \(\tau_{a|x}=\sqrt{\rho_{\mathbf{\tau}}}U\mathbf{B}_{a|x}^{(\mathbf{\sigma})}U^{\dagger} \sqrt{\rho_{\mathbf{\tau}}}\) with \(\text{supp}(\rho_{\mathbf{\tau}})\subseteq\text{supp}(U\rho_{\mathbf{\sigma}}U^{ \dagger})\), meaning that \(\mathbf{\sigma}>_{\rm SEO}\mathbf{\tau}\). This concludes the proof of the claim \(\mathbf{\sigma}>_{\rm SEO}\mathbf{\tau}\Leftrightarrow\mathbf{\sigma}\xrightarrow{1\text{F }_{1}}\mathbf{\tau}\). (_computation of maximal success probability_) Suppose \(\mathbf{\sigma}\xrightarrow{1\text{F}_{1}}\mathbf{\tau}\). For _every_ LF\({}_{1}\) filter mapping \(\mathbf{\sigma}\mapsto\mathbf{\tau}\) with a success probability \(p_{\rm succ}\), Eqs. (13) and (14) imply that \(\rho_{\mathbf{\tau}}\leq U\rho_{\mathbf{\sigma}}U^{\dagger}/p_{\rm succ}\) for some unitary \(U\) consistent with the decomposition in Eq. (12). Using Eq. (13), we obtain \(p_{\rm succ}\leq\lambda_{\rm opt}^{-1}=2^{-D_{\rm max}(\rho_{\mathbf{\tau}}\, \|\,U\rho_{\mathbf{\sigma}}U^{\dagger})}\). Since this holds for _every possible_ local filter in LF\({}_{1}\) mapping \(\mathbf{\sigma}\mapsto\mathbf{\tau}\), we conclude that \[p_{\rm succ}^{\rm max}(\mathbf{\sigma}\xrightarrow{1\text{F}_{1}} \mathbf{\tau})\leq\sup_{U\in\mathcal{U}(\mathbf{\sigma}>_{\rm SEO}\mathbf{\tau})}2^{-D_{ \rm max}(\rho_{\mathbf{\tau}}\,\|\,U\rho_{\mathbf{\sigma}}U^{\dagger})}. \tag{15}\] Finally, this upper bound can be attained, since every such \(U\) can induce an LF\({}_{1}\) filter with success probability \(2^{-D_{\rm max}(\rho_{\mathbf{\tau}}\,\|\,U\rho_{\mathbf{\sigma}}U^{\dagger})}\) by following the proof above Eq. (10). The proof is thus completed. ## Appendix B Proof of Theorem 2 Before stating the formal result, we briefly state the allowed operations considered here. In this work, we consider free operations of measurement incompatibility introduced in Ref. [41], which are mappings \(\mathbf{E}\mapsto\mathcal{L}_{\rm inc}(\mathbf{E})\) given by \[\mathcal{L}_{\rm inc}(\mathbf{E})_{a^{\prime}|x^{\prime}}\coloneqq \sum_{a,x,\omega}p\left(\omega\right)p(x|x^{\prime},\omega)p(a^{\prime}|a,x^{ \prime},\omega)E_{a|x}. \tag{16}\] This operation can be seen as the classical post-processing with the pre-existing randomness \(\omega\). 
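As a concrete illustration of such free operations, the following minimal sketch (our own hypothetical example, not from the paper) applies a setting relabelling and an outcome flip to a pair of qubit measurements and checks that the result is again a valid measurement assemblage:

```python
import numpy as np

# Hypothetical example: apply a classical pre/post-processing of the form
# defined above to a pair of qubit POVMs. E[x][a] is the effect for outcome a
# of setting x.
ket0, ket1 = np.array([[1.], [0.]]), np.array([[0.], [1.]])
ketp = (ket0 + ket1) / np.sqrt(2)
ketm = (ket0 - ket1) / np.sqrt(2)
E = {0: [ket0 @ ket0.T, ket1 @ ket1.T],      # Z measurement
     1: [ketp @ ketp.T, ketm @ ketm.T]}      # X measurement

# Classical processing: swap the two settings and flip the outcome with
# probability q (a special case with a single value of omega).
q = 0.2
p_x_given_xp = {0: 1, 1: 0}
def p_ap_given_a(ap, a):
    return 1 - q if ap == a else q

E_new = {xp: [np.zeros((2, 2)) for _ in range(2)] for xp in (0, 1)}
for xp in (0, 1):
    x = p_x_given_xp[xp]
    for a in (0, 1):
        for ap in (0, 1):
            E_new[xp][ap] += p_ap_given_a(ap, a) * E[x][a]

# Sanity checks: each setting still sums to the identity and effects are PSD.
for xp in (0, 1):
    assert np.allclose(sum(E_new[xp]), np.eye(2))
    assert all(np.min(np.linalg.eigvalsh(M)) > -1e-12 for M in E_new[xp])
print("post-processed POVMs are valid")
```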
Let us also briefly recall the free operations in the resource theory of quantum steering [40], which are mappings \(\mathbf{\sigma}\mapsto\mathcal{E}_{\rm steer}(\mathbf{\sigma})\) given by \[\mathcal{E}_{\rm steer}(\mathbf{\sigma})_{a^{\prime}|x^{\prime}} \coloneqq\sum_{a,x,\omega}p(x|x^{\prime},\omega)p(a^{\prime}|a,x,x^{\prime}, \omega)\mathcal{E}_{\omega}(\sigma_{a|x}), \tag{17}\] where \(\{\mathcal{E}_{\omega}\}_{\omega}\) form a _quantum instrument_, i.e., it is a set of completely-positive trace-non-increasing maps with the property that \(\sum_{\omega}\mathcal{E}_{\omega}\) is trace-preserving. Now we provide a complete version of Theorem 2 as the following theorem. Recall that a steering measure \(S\) is _faithful_ if \(S(\mathbf{\sigma})=0\) if and only if \(\mathbf{\sigma}\in\mathbf{LHS}\). **Theorem B.1**.: _For an arbitrary faithful steering measure \(S\), we have \(I_{S}(\mathbf{E})=0\) if and only if \(\mathbf{E}\in\mathbf{JM}\). Moreover, \(I_{S}\left[\mathcal{L}_{\rm inc}(\mathbf{E})\right]\leq I_{S}(\mathbf{E})\) for every \(\mathbf{E}\) and free operation \(\mathcal{L}_{\rm inc}\)._ Proof.: It suffices to show the non-increasing property. First, by using Theorem 1, we can rewrite Eq. (11) as \[I_{S}(\mathbf{E})=\sup\left\{S(\mathbf{\tau})\ \Big{|}\ \mathbf{\sigma}>_{\rm SEO}\mathbf{\tau}, \mathbf{B}^{(\mathbf{\sigma})}=\mathbf{E}\right\}\] \[=\sup\left\{S\left(\sqrt{\eta}U\mathbf{E}U^{\dagger}\sqrt{\eta} \right)\Big{|}\ \text{supp}(\eta)\subseteq\text{supp}\left(U\mathbb{I}_{\mathbf{E}}U^{ \dagger}\right)\right\}, \tag{18}\] where the maximisation is taken over every state \(\eta\) and unitary \(U\) satisfying \(\text{supp}(\eta)\subseteq\text{supp}\left(U\mathbb{I}_{\mathbf{E}}U^{\dagger}\right)\). Now, for a free operation of incompatibility as given in Eq. (16), we have \(\sqrt{\eta}U\mathcal{L}_{\rm inc}(\mathbf{E})U^{\dagger}\sqrt{\eta}=\mathcal{E}_{ \rm steer}\left(\sqrt{\eta}U\mathbf{E}U^{\dagger}\sqrt{\eta}\right)\) for some free operation of steering. This is because Eq. (16) can be viewed as special cases of Eq. (17) with \(\mathcal{E}_{\omega}=p(\omega)\mathcal{I}\), where \(\mathcal{I}\) is the identity map. Hence, we have \[I_{S}\left[\mathcal{L}_{\rm inc}(\mathbf{E})\right]\] \[=\sup\left\{S\left[\mathcal{E}_{\rm steer}\left(\sqrt{\eta}U \mathbf{E}U^{\dagger}\sqrt{\eta}\right)\Big{|}\ \text{supp}(\eta)\subseteq\text{supp}\left(U\mathbb{I}_{\mathbf{E}}U^{\dagger} \right)\right\}\right]\] \[\leq\sup\left\{S\left(\sqrt{\eta}U\mathbf{E}U^{\dagger}\sqrt{\eta} \right)\Big{|}\ \text{supp}(\eta)\subseteq\text{supp}\left(U\mathbb{I}_{\mathbf{E}}U^{\dagger} \right)\right\}\] \[=I_{S}(\mathbf{E}), \tag{19}\] where we have used the non-increasing property of \(S(\cdot)\) under \(\mathcal{E}_{\rm steer}\) and Eq. (18). The proof is thus completed. When the given steering monotone \(S\) is convex, the steering-induced incompatibility measure is also convex. More precisely, \(I_{S}(\sum_{i}p_{i}\mathbf{E}_{i})\leq\sum_{i}p_{i}I_{S}(\mathbf{E}_{i})\) for every probability distribution \((p_{i})_{i}\), as one can directly check by using Eq. (18). Also, we comment that, in the current setting, the free operations of incompatibility can be viewed as a subset of free operations of steering. Finally, note that Eq. 
(37) implies that one can simply write \[I_{S}\left(\mathbf{E}\right)\coloneqq\sup\left\{S\left(\boldsymbol{\tau}\right) \left|\boldsymbol{\sigma}\stackrel{{\mathrm{LF}_{1}}}{{\longrightarrow}} \boldsymbol{\tau}\right.\right\} \tag{38}\] for _any_ state assemblage \(\boldsymbol{\sigma}\) satisfying \(\mathbf{B}^{(\boldsymbol{\sigma})}=\mathbf{E}\). ## Appendix C Proof of Theorem 3 Proof.: For every given unitary \(U\) and state \(\eta\) with the condition \(\operatorname{supp}(\eta)\subseteq\operatorname{supp}(U_{\boldsymbol{\sigma }}U^{\dagger})\), we can write \(\operatorname{SR}_{N_{S}}\left(\sqrt{\eta}U/\mathbf{B}^{(\boldsymbol{\sigma })}U^{\dagger}\sqrt{\eta}\right)\) as the following optimisation: \[\min t\] (39) s.t. \[t\geq 0,\frac{\sqrt{\eta}U\mathbf{B}^{(\boldsymbol{\sigma})}U^{ \dagger}\sqrt{\eta}+t\boldsymbol{\omega}}{1+t}\in\mathbf{LHS},\] \[\boldsymbol{\omega}\in\mathcal{N}_{S}\left(\sqrt{\eta}U\mathbf{B} ^{(\boldsymbol{\sigma})}U^{\dagger}\sqrt{\eta}\right).\] Using Eq. (17) to make the minimisation range smaller, we obtain the following upper bound: \[\min t\] (40) s.t. \[t\geq 0,\frac{\sqrt{\eta}U\mathbf{B}^{(\boldsymbol{\sigma})}U^{ \dagger}\sqrt{\eta}+t\boldsymbol{\omega}}{1+t}\in\mathbf{LHS},\] \[\boldsymbol{\omega}\in\sqrt{\eta}UN_{I}\left(\mathbf{B}^{( \boldsymbol{\sigma})}\right)U^{\dagger}\sqrt{\eta},\] This can be rewritten as \[\min t\] (41) s.t. \[t\geq 0,\sqrt{\eta}U\left(\frac{\mathbf{B}^{(\boldsymbol{\sigma}) }+t\mathbf{W}}{1+t}\right)U^{\dagger}\sqrt{\eta}\in\mathbf{LHS},\] \[\mathbf{W}\in\mathcal{N}_{I}\left(\mathbf{B}^{(\boldsymbol{\sigma })}\right).\] Whenever \(\operatorname{supp}(\eta)\subseteq\operatorname{supp}(U_{\mathbf{N}}U^{ \dagger})\), \(\mathbf{M}\in\mathbf{JM}\) implies that \(\sqrt{\eta}U/\mathbf{M}U^{\dagger}\sqrt{\eta}\in\mathbf{LHS}\). Hence, Eq. (41) is upper bounded by \[\min t\] (42) s.t. \[t\geq 0,\frac{\mathbf{B}^{(\boldsymbol{\sigma})}+t\mathbf{W}}{1+t} \in\mathbf{JM},\mathbf{W}\in\mathcal{N}_{I}\left(\mathbf{B}^{(\boldsymbol{ \sigma})}\right),\] which is exactly \(\operatorname{IR}_{\mathcal{N}_{I}}\left(\mathbf{B}^{(\boldsymbol{\sigma})}\right)\). This means that \(\operatorname{SR}_{N_{S}}\left(\sqrt{\eta}U/\mathbf{B}^{(\boldsymbol{\sigma })}U^{\dagger}\sqrt{\eta}\right)\leq\operatorname{IR}_{\mathcal{N}_{I}}\left( \mathbf{B}^{(\boldsymbol{\sigma})}\right)\) for every \(\eta\) and unitary \(U\) with \(\operatorname{supp}(\eta)\subseteq\operatorname{supp}(U_{\boldsymbol{\sigma }}U^{\dagger})\). Maximising over all such \(\eta,U\) and using Theorem 1, we obtain \[\sup\left\{\operatorname{SR}_{N_{S}}\left(\boldsymbol{\tau}\right)\left| \boldsymbol{\sigma}\stackrel{{\mathrm{LF}_{1}}}{{\longrightarrow}} \boldsymbol{\tau}\right\}\leq\operatorname{IR}_{N_{I}}\left(\mathbf{B}^{( \boldsymbol{\sigma})}\right). \tag{43}\] This concludes the proof.
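The robustness quantities appearing in this appendix are semidefinite programs. As a rough numerical illustration, the following minimal sketch (our own example, assuming the standard SDP form of the generalised steering robustness with deterministic response functions, written with cvxpy) evaluates \(\mathrm{SR}\) for the assemblage obtained by measuring \(\sigma_{z}\) and \(\sigma_{x}\) on one half of a two-qubit maximally entangled state:

```python
import numpy as np
import cvxpy as cp

# Generalised steering robustness as the standard SDP
#   SR(sigma) = min  sum_l tr(s_l) - 1
#   s.t.  sum_l D(a|x,l) s_l >> sigma_{a|x}  for all a, x,   s_l >> 0.
ket0, ket1 = np.array([[1.], [0.]]), np.array([[0.], [1.]])
ketp, ketm = (ket0 + ket1) / np.sqrt(2), (ket0 - ket1) / np.sqrt(2)
sigma = {(0, 0): ket0 @ ket0.T / 2, (1, 0): ket1 @ ket1.T / 2,   # x = 0: Z
         (0, 1): ketp @ ketp.T / 2, (1, 1): ketm @ ketm.T / 2}   # x = 1: X

# Deterministic strategies lambda: one outcome assignment per setting.
strategies = [(a0, a1) for a0 in (0, 1) for a1 in (0, 1)]
s = {lam: cp.Variable((2, 2), hermitian=True) for lam in strategies}

constraints = [s[lam] >> 0 for lam in strategies]
for (a, x) in sigma:
    lhs = sum(s[lam] for lam in strategies if lam[x] == a)
    constraints.append(lhs >> sigma[(a, x)])

objective = cp.Minimize(cp.real(sum(cp.trace(s[lam]) for lam in strategies)) - 1)
prob = cp.Problem(objective, constraints)
prob.solve()
print("SR  =", prob.value)              # ~ 3 - 2*sqrt(2) for this assemblage
print("ref =", 3 - 2 * np.sqrt(2))
```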
steers resources, central for quantum advantages in one-sided device-independent quantum information tasks, via local filters can be enhanced. Recently, reversible steering conversion under local filters has been fully characterized. Here, we solve the problem in the irreversible scenario, which leads to a complete understanding of stochastic steering distillation. This result also provides an operational interpretation of the max-relative entropy as the optimal filter success probability. We further show that all steering measures can be used to quantify measurement incompatibility in certain stochastic steering distillation scenarios. Finally, for a broad class of steering robustness measures, we show that their maximally achievable values in stochastic steering distillation are always upper bounded by different types of incompatibility robustness measures. Hence, measurement incompatibility sets the fundamental limitations for stochastic steering distillation.
2309.14595
Neural Informed RRT*: Learning-based Path Planning with Point Cloud State Representations under Admissible Ellipsoidal Constraints
Sampling-based planning algorithms like Rapidly-exploring Random Tree (RRT) are versatile in solving path planning problems. RRT* offers asymptotic optimality but requires growing the tree uniformly over the free space, which leaves room for efficiency improvement. To accelerate convergence, rule-based informed approaches sample states in an admissible ellipsoidal subset of the space determined by the current path cost. Learning-based alternatives model the topology of the free space and infer the states close to the optimal path to guide planning. We propose Neural Informed RRT* to combine the strengths from both sides. We define point cloud representations of free states. We perform Neural Focus, which constrains the point cloud within the admissible ellipsoidal subset from Informed RRT*, and feeds into PointNet++ for refined guidance state inference. In addition, we introduce Neural Connect to build connectivity of the guidance state set and further boost performance in challenging planning problems. Our method surpasses previous works in path planning benchmarks while preserving probabilistic completeness and asymptotic optimality. We deploy our method on a mobile robot and demonstrate real world navigation around static obstacles and dynamic humans. Code is available at https://github.com/tedhuang96/nirrt_star.
Zhe Huang, Hongyu Chen, John Pohovey, Katherine Driggs-Campbell
2023-09-26T01:00:02
http://arxiv.org/abs/2309.14595v2
# Neural Informed RRT* with Point-based Network Guidance ###### Abstract Sampling-based planning algorithms like Rapidly-exploring Random Tree (RRT) are versatile in solving path planning problems. RRT* offers asymptotical optimality but requires growing the tree uniformly over the free space, which leaves room for efficiency improvement. To accelerate convergence, informed approaches sample states in an ellipsoidal subset of the search space determined by current path cost during iteration. Learning-based alternatives model the topology of the search space and infer the states close to the optimal path to guide planning. We combine the strengths from both sides and propose Neural Informed RRT* with Point-based Network Guidance. We introduce Point-based Network to infer the guidance states, and integrate the network into Informed RRT* for guidance state refinement. We use Neural Connect to build connectivity of the guidance state set and further boost performance in challenging planning problems. Our method surpasses previous works in path planning benchmarks while preserving probabilistic completeness and asymptotical optimality. We demonstrate the deployment of our method on mobile robot navigation in the real world. ## I Introduction Path planning is one of the most fundamental problems in robotics, which is to find a path for a robot to traverse from a start to a goal safely and efficiently [1, 2, 3]. An effective path planning algorithm should be (1) complete and optimal: a solution is guaranteed to be found if one exists, and the optimal solution is guaranteed to be achieved with sufficient run time; (2) efficient in optimal convergence: the solution should be improved towards near optimal as fast as possible; and (3) versatile and scalable: the implementation should be modified with minimum effort to generalize across different problems, environments, and robots. Multiple branches of planning algorithms have been developed to meet these requirements, including grid-based search methods, artificial potential field, and sampling-based algorithms [7, 8, 9, 10, 11]. The sampling-based algorithms are increasingly popular due to their versatility, scalability, and formal properties of probabilistic completeness and asymptotical optimality [4]. To accelerate convergence to the optimal path, various sampling strategies are introduced to replace the default uniform sampling strategy [4]. The rule-based informed strategy enforces sampling in an ellipsoidal subset of states which are more promising to improve the current path solution [5, 12]. Learning-based methods harness grid-based neural networks to make inference of states close to the optimal path, and bias sampling towards these states, which we define as guidance states [6, 13, 14, 15, 16, 17, 18, 19]. While these works improve performance, we observe several limitations. First, the learning-based methods encodes the whole state space to generate guidance states without iterative improvement, where the inference speed and accuracy are affected by modeling features of irrelevant region or obstacles. Second, rule-based informed sampling does not favor the topologically important region in the focused subset (e.g., narrow corridors). Finally, the learning-based approaches do not consider connectivity of the guidance state set, which severely affects the convergence rate in complex planning problems. We introduce Neural Informed RRT* with Point-based Network Guidance (NIRRT*-PNG) to address these limitations (Figure 1). 
We use a point cloud of free states as input to a point-based network to classify guidance states. Sampling from the guidance states is mixed, with 50% probability, with the random sampling step of Informed RRT* (IRRT*) [5]. Using point clouds instead of occupancy grids as input allows the focus heuristic of IRRT* to constrain the input to the point-based network and infer critical states in the informed subset of the state space, which we call _Neural Focus_. The quality of the guidance states keeps improving during iteration, because the inference is always made on the informed subset created by an improved path cost. In addition, we build connectivity of the guidance state set by following a _Neural Connect_ scheme similar to RRT-Connect [20], where the point-based network is called to solve a subproblem of a closer pair of start and goal states.

Fig. 1: Solutions of a 2D random world found by RRT* [4], Informed RRT* (IRRT*) [5], Neural RRT* with Grid-based Network Guidance (NRRT*-GNG) [6], and Neural Informed RRT* with Point-based Network Guidance (NIRRT*-PNG). NIRRT*-PNG effectively integrates IRRT* and the point-based network, so IRRT* helps the point-based network focus guidance state inference on the important region for solution improvement, and the point-based network helps IRRT* sample critical states in the focused subset for convergence acceleration.

In short, our contributions are threefold: (1) we use a Point-based Network to directly take free states as point cloud input to generate multiple guidance states in one run; (2) we present Neural Informed RRT*, introducing Neural Focus to integrate the Point-based Network and Informed RRT*; and (3) we propose Neural Connect to address the connectivity issue of the inferred guidance state set.

## II Related Work

Grid-based search methods like A* [7] and D* [8] are guaranteed to find the optimal path in discretized state spaces if a path solution exists, at the price of poor scaling with problem complexity. Sampling-based algorithms like the probabilistic roadmap (PRM) [10] and RRT [11] are guaranteed to find a feasible path solution, if one exists, as the number of iterations approaches infinity. PRM* and RRT* [4] provide asymptotic optimality, which requires exploring the planning domain globally. Informed RRT* and Batch Informed Trees improve the convergence rate by constraining the sampling space to an ellipsoidal subset based on the start state, goal state, and current best path cost [5, 12]. Another line of work accelerates path planning by investigating the search space of the problem, such as Voronoi bias [21, 22], evolutionary algorithms [23], and A* initialization [24]. Neural RRT* represents the square-shaped search space of 2D planning problems by images and uses U-Net [25] to predict a probabilistic heatmap of states used to guide RRT* [6]. MPNet voxelizes the environment point cloud and feeds it into 3D convolutional neural networks to make recursive inferences for path generation [15]. Grid-based neural networks are prevalent in previous works to encode the search space [13, 14, 15, 6], which requires discretization and makes the results dependent on resolution. Previous works use PointNet to encode the point cloud representing the global obstacle space [26, 27], but modeling the obstacle interior is inefficient for finding a path in the free space. Recent works apply graph neural networks to a sampled random geometric graph in configuration space, and select edges from the graph to build a near-optimal path [28, 29].
However, the path feasibility and path quality are highly dependent on the sampled graph, while continuous improvement is not discussed in these works. ## III Method ### _Problem Definition_ We define the optimal path planning problem similar to related works [4, 5, 6]. The state space is denoted as \(X\subseteq\mathbb{R}^{d}\). The obstacle space and the free space are denoted as \(X_{\text{obs}}\) and \(X_{\text{free}}\). A path \(\sigma:[0,1]\to X_{\text{free}}\) is a sequence of states. The set of paths is denoted as \(\Sigma\). The optimal path planning problem is to find a path \(\sigma^{*}\) which minimizes a given cost function \(c:\Sigma\rightarrow\mathbb{R}_{\geq 0}\), connects a given start state \(x_{\text{start}}\in X_{\text{free}}\) and a given goal state \(x_{goal}\in X_{\text{free}}\), and has all states on the path in free space. Formally: \[\begin{split}\sigma^{*}=&\arg\min_{\sigma\in\Sigma} c\left(\sigma\right)\\ \text{s.t.}\ \sigma(0)=x_{\text{start}},\sigma(1)=& x_{\text{ goal}},\forall s\in[0,1],\sigma(s)\in X_{\text{free}}\end{split} \tag{1}\] ### _Neural Informed RRT*_ We present NIRRT*-PNG in Algorithm 1, where the unhighlighted part is from RRT*, the blue part is from IRRT*, and the red part is our contribution. We track the best path solution cost \(c_{\text{best}}^{i}\) through each iteration, which is initialized as infinity (line 3). We initialize update cost \(c_{\text{update}}\) with the value of \(c_{\text{best}}^{0}\) (line 4). We call the neural network to infer an initial guidance state set \(X_{\text{guide}}\) based on the complete free state space (line 5). As better solutions are found, the guidance state set \(X_{\text{guide}}\) may be updated by the neural network calls depending on how much the path cost has been improved, and random samples \(x_{\text{rand}}\) are sampled using both \(X_{\text{guide}}\) and informed sampling (line 8). PointNetGuidedSampling: When the current best path cost \(c_{\text{curr}}\) is less than the path cost improvement ratio \(\alpha\leq 1\) of \(c_{\text{update}}\), the neural network is called to update \(X_{\text{guide}}\), and \(c_{\text{update}}\) is updated by \(c_{\text{curr}}\). The random sample \(x_{\text{rand}}\) is sampled with a mixed strategy: if a random number Rand() \(\in(0,1)\) is smaller than 0.5, we use the sampling strategy of IRRT* to sample \(x_{\text{rand}}\); otherwise, we sample \(x_{\text{rand}}\) uniformly from \(X_{\text{guide}}\). Note the frequency of calling neural networks for guidance state inference is controlled by the path cost improvement ratio \(\alpha\). Similar to [6, 15, 18], our mixed sampling strategy guarantees probabilistic completeness and asymptotic optimality by implementing the sampling procedure same as IRRT* with probability of 0.5. If we do not update \(X_{\text{guide}}\) after initial inference, and remove IRRT* components, NIRRT* is reduced to NIRRT*. While NIRRT* is generic in that any neural network that infers guidance states can fit into the framework, we emphasize the use of point-based network. In the next subsection, we discuss the details of Point-based Network Guidance, and explain the preference of point representations over grid representations. ### _Point-based Network Guidance_ **Point-based Network.** We represent the state space by a point cloud \(X_{\text{input}}=\{x_{1},x_{2},\ldots,x_{N}\}\subset X_{\text{free}}\). The density of point cloud should allow a reasonable amount of neighbors around each point in radius of step size \(\eta\). 
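For concreteness, the PointNetGuidedSampling step described above can be sketched as follows (a minimal sketch; function and variable names are ours, not those of the released implementation):

```python
import numpy as np

# Mixed sampling step: with probability 0.5 fall back to the IRRT* sampler,
# otherwise draw uniformly from the current guidance state set X_guide.
def guided_sample(x_guide, informed_sample, rng=np.random.default_rng()):
    if rng.random() < 0.5 or len(x_guide) == 0:
        return informed_sample()                  # IRRT* (ellipsoidal) sampling
    return x_guide[rng.integers(len(x_guide))]    # uniform over guidance states

def maybe_update_guidance(x_guide, c_curr, c_update, alpha, infer_guidance):
    # Re-run the point-based network only when the best path cost has improved
    # by at least the factor alpha since the last network call.
    if c_curr < alpha * c_update:
        return infer_guidance(), c_curr
    return x_guide, c_update
```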
We oversample points uniformly from \(X_{\text{free}}\), and perform minimum distance downsampling to obtain the point cloud with even distribution. In addition to coordinates, we add to each point a one-hot vector feature, indicating whether the point is within radius \(\eta\) of \(x_{\text{start}}\) and/or \(x_{\text{goal}}\). We normalize coordinates of the point cloud. The processed point cloud \(\bar{X}_{\text{input}}\) is fed into a point-based network \(f\). The network \(f\) maps each point to a probability \(p_{i}\in[0,1]\), where the points with probability greater than 0.5 form the set of guidance states \(X_{\text{guide}}\). Formally, \[\{p_{1},p_{2},\dots,p_{N}\}=f(\bar{X}_{\text{input}}),\ X_{\text{guide}}=\{x_{i}| p_{i}>0.5\}. \tag{2}\] We implement PointNet++ [30] as the model architecture of the point-based network. Since PointNet++ is originally designed for 3D point cloud, we set zero \(z\) coordinates for 2D problems. We collect 4,000 2D random worlds as the training dataset. For each random world, we run A* in pixel space with step size of unit pixel and clearance of 3 pixels to generate the pixel-wise optimal path. We generate a point cloud of number \(N=2048\), and generate guidance state labels by checking whether each point is around any point of the pixel-wise optimal path in radius of \(\eta\), which is set as 10 pixels. We train PointNet++ by Adam optimizer [31] with an an initial learning rate of 0.001 and batch size of 16 for 100 epochs. We use the trained model across all types of 2D planning problems. For 3D random world problems, we follow a similar scheme, but the clearance is set as 2 voxels. **Neural Focus.** Informed RRT* (IRRT*) outperforms RRT* by proposing a heuristic ellipsoidal subset of the planning domain \(X_{\text{focus}}\) in terms of the current best solution cost \(c_{\text{curr}}\), in order to sample \(x_{\text{rand}}\) which is more likely to improve the current solution. The reasoning behind this sampling strategy is that for any state \(x_{\text{rejected}}\) from \(X\backslash X_{\text{focus}}\), the minimum cost of a feasible path from \(x_{\text{start}}\) to \(x_{\text{goal}}\) through \(x_{\text{rejected}}\) has to be greater than or equal to \(c_{\text{curr}}\): \[X_{\text{focus}}=\{x\in X\big{|}\,||x-x_{\text{start}}||_{2}+||x-x_{\text{goal }}||_{2}\leq c_{\text{curr}}\} \tag{3}\] Neural Focus is to constrain the point cloud input to the point-based network inside the \(X_{\text{focus}}\), which is equivalent as changing the domain of oversampling from \(X_{\text{free}}\) to \(X_{\text{focus}}\cap X_{\text{free}}\). Since we normalize point coordinates when processing point cloud inputs, the trained point-based network can handle point clouds sampled from domains at different scales. With the same number of points \(N\), a smaller volume of \(X_{\text{focus}}\) leads to a denser point cloud, which describes important regions with finer details. For example, Figure 3(b) shows that Neural Focus fills the narrow passage with a large number of points, which is captured by the point-based network to produce more effective inference on guidance states compared to Figure 3(a). **Neural Connect.** The points close to \(x_{\text{start}}\) or \(x_{\text{goal}}\) are usually assigned with greater probabilities as guidance states over the points around midway of the path. 
When the distance between \(x_{\text{start}}\) and \(x_{\text{goal}}\) gets longer, the guidance state set \(X_{\text{guide}}\) is more likely to be separated into disconnected "blobs." This phenomenon of probability polarization is reported in NRRT* work [6]. Our experiments show lack of connectivity limits the performance in large and complex planning problems. We address this issue by introducing Neural Connect, which is inspired by RRT-Connect [20]. We initialize \(X_{\text{guide}}\) as an empty set, \(x^{1}_{\text{start}}\) as \(x_{\text{start}}\), and \(x^{1}_{\text{goal}}\) as \(x_{\text{goal}}\). During iteration, we first call the point-based network with \(x^{i}_{\text{start}}\) and \(x^{i}_{\text{goal}}\) as start and goal, and add inferred guidance states to \(X_{\text{guide}}\). Second, We run Breadth First Search (BFS) from \(x_{\text{start}}\) to \(x_{\text{goal}}\) through the guidance states in \(X_{\text{guide}}\). The neighbor radius of BFS is set as \(\eta\), and no collision check is performed. After BFS is finished, connectivity of \(X_{\text{guide}}\) is confirmed if \(x_{\text{goal}}\) is reached. Otherwise, we find the boundary points \(X_{\text{bound}}\) of the states visited by BFS by checking whether any points in \(X_{\text{input}}\backslash X_{\text{guide}}\) are around the visited state of radius \(\eta/2\). We select \(x^{i+1}_{\text{start}}\) from \(X_{\text{bound}}\) which is one of the states heuristically the furthest from \(x_{\text{start}}\) and one of the states to reach \(x_{\text{goal}}\) with minimum total heuristic cost. Third, we perform the same operation as the second step, with the start of BFS as \(x_{\text{goal}}\), and the goal of BFS as \(x_{\text{start}}\). We obtain \(x^{i+1}_{\text{goal}}\) if connectivity is negative. We perform the iteration until connectivity is built or the limit of iteration \(n_{\text{guide}}\) is reached, which we set as 5 in practice. We illustrate Neural Connect in Figure 3(c-h). Note the orange path found by BFS in Figure 3(h) does not go through collision check, so the path is not a feasible solution but a visual demonstration on the connectivity of \(X_{\text{guide}}\). PointNetGuide: We apply both Neural Focus and Neural Connect to point-based network, and obtain the complete module of Point-based Network Guidance, which is presented in Algorithm 3. **Point versus Grid.** We prefer using points over grids to represent state space due to compatibility with geometric constraints and convenience of extension to different problems. To apply Neural Focus to a CNN, grid representations require masking of the complement set of the ellipsoidal subset, where the mask quality depends on grid resolution. CNN also has to process the irrelevant masked region within the rectangular/box grid input. In contrast, point representations naturally confine states within arbitrary geometry by modifying the sampling domain, and the point-based network only needs to model free states. Moreover, while point-based network just needs adjustment of the input format to extend to different dimensions, changing input dimensions usually requires redesign of CNN architecture. ## IV Experiments ### _Simulation Experiments_ **Planning Problems.** We conduct simulation experiments on 2D center block, 2D narrow passage, 2D random world, Fig. 3: (a-b) Neural Focus in a 2D narrow passage and (c-h) Neural Connect in a 2D random world. Green dots denote states visited by Breadth First Search (BFS). 
Circles of larger size around green dots denote boundary points. The colors of circles denote the heuristic score. The boundary point which is \(x^{i+1}_{\text{start}}\) for \(x^{i+1}_{\text{goal}}\) has a blue marker on the circle. The orange line denotes the path found by BFS which represents the connectivity of \(X_{\text{guide}}\). Fig. 2: Guidance state inference by point-based network. Red is start, yellow is goal, blue is free states, and orange is guidance states. and 3D random world problems. The center block and the narrow passage problems are defined similar to IRRT* work [5] (Figure 4). The center block problem examines the efficiency of planners to sample states relevant to the problem in a wide free space. The narrow passage problem studies the capability of planners to focus sampling in topologically critical area. The random world problems evaluate versatility and scalability of planners. In the center block problems, we specify 5 different map sizes with respect to a fixed start-goal distance, and set the block width randomly for 100 independent runs. In the narrow passage problems, we specify 5 different gap heights, and set random positions of the passage for 100 independent runs. We generate 500 random worlds for each 2D and 3D cases for evaluation. Note we use clearance of 3 pixels for 2D random world, zero clearance for 2D center block and 2D narrow passage, and 2 voxels for 3D random world. The default size of 2D planning problems is \(224\times 224\), and the default size of 3D planning problems is \(50\times 50\times 50\). **Metrics.** For the center block problems, we check the number of iterations to reach within a path cost threshold, which is some percentage above the optimal cost. For the narrow passage problems, we check the number of iterations to find a path through the passage. For the random world problems, we examine the iterations each planner spends on finding the initial solution, and path cost improvement after certain numbers of iterations. **Baselines.** We compare NIRRT* to RRT*, IRRT*, NRRT*-GNG, and variants of our method. NIRRT*-PNG(FC) is our complete algorithm, where F is Neural Focus and C is Neural Connect. NIRRT*-PNG(F) removes Neural Connect from the complete version. NRRT*-PNG is Neural RRT* with the point-based network. NRRT*-PNG(C) uses Neural Connect in addition to NRRT*-PNG. We train a U-Net [25] with pretrained ResNet50 weights [32] for NRRT*-GNG for 2D problems. **Experiment Results.** The center block experiments show in Figure 5(b) that NIRRT*-PNG(FC) outperforms IRRT* in terms of the speed to find near-optimal paths across different problem sizes. The point-based network is able to infer guidance states from the informed subset which are the most promising to converge the path solution to optimum. Both Figure 5(a)(c) show that NIRRT*-PNG(FC) and NIRRT*-PNG(F) have similar performance. The reason is that the informed subset effectively constrains the region of the point cloud to be around the center block, and significantly simplifies the task of guidance state inference, so the point-based network performs well even without Neural Connect. Similar to the claim by [6] that initial path solution cost of NRRT* is better than RRT*, we see in Figure 5(c) that NRRT*-PNG and NRRT*-PNG(C) are faster than RRT* in terms of reaching within a more relaxed threshold above optimal cost such as 7-10%. However, the convergence speeds of NRRT* variants tend to be slow and are often worse than RRT* when approaching a tighter bound such as 2-4%. 
In contrast, NIRRT* variants work consistently better than IRRT* across thresholds of optimal cost, since the informed subset allows the point-based network to provide finer distribution of the guidance states to continuously refine the path towards the optimal solution. In the narrow passage experiments, NIRRT*-PNG(FC) finds a difficult path through the passage faster than IRRT* in more evaluation cases as represented in Figure 5(f). The convergence speed of NIRRT*-PNG(FC) outperforms all baselines as shown in Figure 5(e). NIRRT*-PNG(F) performance is similar to IRRT* because the guidance state set usually ends up separated on left and right sides of the gap without Neural Connect, whereas NIRRT*-PNG(FC) is able to connect the guidance state set together through the gap, which helps sampling critical states inside the gap. Both NRRT*-GNG and NRRT*-PNG are worse than RRT*, but NRRT*-PNG(C) works consistently better than RRT*, which indicates the effectiveness of Neural Connect in planning problems with critical states. Note we collect the training dataset for point-based network with optimal paths which requires clearance of 3 pixels, which is equivalent to 7 pixels of the gap height. Figure 5(e) demonstrates that our point-based network generalizes well to planning problems with Fig. 4: Center block and narrow passage experiments. clearances tighter than training distribution by Neural Focus and Neural Connect, while the CNN model is sensitive to clearance [6]. For each random world problem, we record the cost of path solution at certain number of iterations after the initial solution is found, and plot these costs relative to the cost of the initial path solution from RRT*. The 2D and 3D results are presented in Figure 5(g) and (d) respectively, and planning in 3D random worlds are visualized in Figure 6. We observe NRRT*-GNG has the best initial solution in 2D since the grid representations are denser than point representations in terms of the whole state space. However, NIRRT* variants converge faster due to continuous improvement of the guidance states. We find that both Neural Connect and Neural Focus contribute to improvement of convergence speed in both 2D and 3D cases. Figure 5(h) shows that NIRRT*-PNG(FC) is faster than IRRT* in terms of finding initial path solution. ### _Real World Deployment_ We deploy our method and the model trained in 2D random world to a TurtleBot 2i. The demonstration of real world navigation can be found in the video submission. ## V Conclusions We present Neural Informed RRT* approach to accelerate optimal path planning by incorporating a point-based network into Informed RRT* for guidance state inference. We introduce Neural Focus to naturally fit the point-based network into informed sampling strategy. We propose Neural Connect to improve quality of the inferred guidance state set by enforcing connectivity. Our simulation experiments show that Neural Informed RRT* outperforms RRT*, Informed RRT*, and Neural RRT* in terms of convergence rate towards optimal solutions in planning problems with varying sizes, critical states, and randomized complicated patterns. In future, we want to study how to further improve our algorithm when the planning problem sizes are significantly different from the training distribution. We would like to explore the effectiveness of our work in higher-dimensional problems. 
It is also interesting to study if we can denoise the guidance state set inferred by the point-based network to offer an end-to-end option for generating feasible and near-optimal paths [33, 34]. Fig. 5: Experiment results. **Center block**: (a) The average number of iterations to find a path within 2% of the optimal cost for different map widths; (b) Comparison of the number of iterations IRRT* and NIRRT*-PNG(FC) take to find a path within 2% of the optimal cost for each center block problem; (c) The average number of iterations to find a path within the specified tolerance of the optimal cost. **3D Random world**: (d) The average path cost relative to the initial solution of RRT* at different numbers of iterations after finding an initial solution. **Narrow passage**: (e) The average number of iterations to find a path better than flanking the obstacle for different gap heights; (f) Comparison of the number of iterations IRRT* and NIRRT*-PNG(FC) take to find a path better than flanking the obstacle for each narrow passage problem. **2D Random world**: (g) The average path cost relative to the initial solution of RRT* at different numbers of iterations after finding an initial solution; (h) Comparison of the number of iterations IRRT* and NIRRT*-PNG(FC) to find an initial solution for each random world problem. Error bars denote 95% confidence interval. The error bars are not plotted for random worlds for clarity of figures. NRRT*-GNG is not implemented for center block and random world 3D due to incompatible grid sizes and incompatible number of dimensionality. Fig. 6: Visualization on planning in 3D random world at 500 iterations. Left: NRTRT*-PNG. Right: NIRRT*-PNG(FC).
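As a final illustration, the connectivity check at the heart of Neural Connect amounts to a breadth-first search over the guidance states with neighbor radius \(\eta\) and no collision checking; a minimal sketch (our own simplification, not the released code) is given below:

```python
import numpy as np
from collections import deque

# Breadth-first connectivity check over the guidance set: returns True if the
# goal can be reached from the start by hops of length at most eta through
# guidance states (no collision checks are performed).
def guidance_connected(x_start, x_goal, x_guide, eta):
    pts = np.vstack([x_start, x_guide, x_goal])   # node 0 = start, last = goal
    n = len(pts)
    visited = np.zeros(n, dtype=bool)
    queue = deque([0])
    visited[0] = True
    while queue:
        i = queue.popleft()
        if i == n - 1:
            return True                            # goal reached through X_guide
        dists = np.linalg.norm(pts - pts[i], axis=1)
        for j in np.where((dists <= eta) & ~visited)[0]:
            visited[j] = True
            queue.append(j)
    return False

# Example usage with toy 2D points (purely illustrative values):
# guidance_connected(np.array([0., 0.]), np.array([30., 0.]),
#                    np.array([[10., 0.], [20., 0.]]), eta=10.0)
```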
Sampling-based planning algorithms such as the Rapidly-exploring Random Tree (RRT) are versatile in solving path planning problems. RRT* offers asymptotic optimality, but it requires growing the tree uniformly over the free space, which leaves room for efficiency improvement. To accelerate convergence, rule-based informed approaches sample states in an admissible ellipsoidal subset of the space determined by the current path cost. Learning-based alternatives model the topology of the free space and infer states close to the optimal path to guide planning. We propose Neural Informed RRT* to combine the strengths of both. We define point cloud representations of free states. We constrain the point cloud within the admissible ellipsoidal subset from Informed RRT* ...
2309.15853
Models of f(Q) gravity with electromagnetic field
Since there are so many ideas that potentially explain the dark energy phenomenon, current research is focusing on a more in-depth analysis of the potential effects of modified gravity on both local and cosmic scales. In this paper we have investigated some cosmic reconstructions in $f(Q)$ cosmology, where $Q$ is the non-metricity corresponding to the evolution background in the Friedmann-Lemaitre-Robertson-Walker $(FLRW)$ universe. This allows us to determine how any $FLRW$ cosmology can emerge from a particular $f(Q)$ theory. We employ the reconstruction technique to generate explicit formulations of the $f(Q)$ Lagrangian for several types of matter sources such as perfect fluid, dust-like fluid, stiff fluid, and the binary mixture of two fluids. Furthermore, we computed the field equations and equation of state (EoS) parameter $\omega$ for two different reconstructed $f(Q)$ models with the variation of the involved constants, which gives the scenario of an accelerating universe, the quintessence region, and the cosmological constant. We also observed that the time dependence of $\omega$ admits cosmic acceleration. These new $f(Q)$ gravity inspired models may have an impact on gravitational phenomena at other cosmological scales.
S. H. Shekh, Hira Sohail, Irfan Mahmood, Allah Ditta, Anil Kumar Yadav
2023-08-31T16:16:45
http://arxiv.org/abs/2309.15853v1
# Models of \(f(Q)\) gravity with electromagnetic field ###### Abstract **Abstract:** There are so many ideas that potentially explain the dark energy phenomenon, current research is focusing on a more in-depth analysis of the potential effects of modified gravity on both local and cosmic scales. In this paper we have investigated some cosmic reconstructions in \(f(Q)\) cosmology where \(Q\) is the non-metricity corresponding to the evolution background in the Friedmann-Lamatre-Robertson-Walker (\(FLRW\)) universe. This allows us to determine how any \(FLRW\) cosmology can emerge from a particular \(f(Q)\) theory. We employ the reconstruction technique to generate explicit formulations of the \(f(Q)\) Lagrangian for several types of matter sources like perfect fluid, dust like fluid, stiff fluid and the binary mixture of two fluids. Furthermore, we computed the field equations and equation of state (EoS) parameter \(\omega\) for two different reconstructed \(f(Q)\) models with the variation of the involved constants, which gives the scenario of accelerating universe, quintessence region and cosmological constant. We also observed that the time dependence of \(\omega\) admits cosmic acceleration. These new \(f(Q)\) gravity inspired models may have an impact on gravitational phenomena at other cosmological scales. ## I Introduction One notable troubling issue during the previous two decades has been the universe presents acceleration due to the Dark Energy (DE). Recent advancements in observational cosmology confirm this cosmological occurrence; for example, type Ia supernovae [1; 2; 3], Cosmic Microwave Background Radiation[4], observations of large-scale structure [5; 6; 7], the power spectrum of the Lyman-forest from the Sloan Digital Sky Survey[8], and the exploration of high-energy DE models using weak lensing data [9]. On the other hand, GR cannot explain observations of huge pulsars [10; 11] and white dwarfs [12; 13; 14] with masses greater than the traditional maximum limit (i.e. the Chandrasekhar mass limit, \(1.44M_{\odot}\)). Furthermore, the strong gravitational field and new findings contradict \(GR\)[15; 16; 17]. As a result, scientists look for suitable changes to the General Theory of Relativity (GR) e.g, \(f(R)\) gravity [18; 19; 20; 21; 22], \(f(R,G)\) gravity [23], \(f(T)\) gravity [24; 25; 26], \(f(G)\) gravity [27; 28; 29], Brans-Dicke (BD) gravity [30; 31], and so on, where \(R,G\), and \(T\) are the Ricci, Gauss- Bonnet, and torsion scalars respectively. The theory of higher-order curvature, particularly \(f(R)\) gravity, is the most successful modification of GR, explaining the presence of dark matter and using evidence to contradict the theory of gravity[32]. In the recent past, Jim'enez et al. [33] have suggested a symmetric teleparallel theory of gravity i. e. \(f(Q)\) gravity, which is a well-motivated gravity theory in which non-metricity drives the gravitational interaction in space-time, based on Lagrangian density yielding a general function of the non-metricity scalar \(Q\). At the background level, the modified theory of \(f(Q)\) gravity leads to fascinating the cosmic phenomenology [34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53]. 
Furthermore, it has successfully been confronted with various background and perturbation observational data, such as the Cosmic Microwave Background (CMB), Supernovae type Ia (SNIa), Baryonic Acoustic Oscillations (BAO), Redshift Space Distortion (RSD), growth data, and so on [54; 55; 56; 57; 58; 59; 60], revealing that \(f(Q)\) gravity may challenge the standard \(\Lambda CDM\) scenario. Finally, \(f(Q)\) gravity easily overcomes the Big Bang Nucleosynthesis (BBN) limitations[61].Choosing \(T\) (torsion) or \(Q\) (nonmetricity) produces similar but distinct theories of gravity, named \((TEGR)\) teleparallel equivalent of general relativity [62; 63] and (STGR) symmetric teleparallel general relativity[64; 65; 66]. The notion of gravity in STGR is based on nonmetricity rather than curvature and torsion. Inspired by the interesting qualities of f \((Q)\) gravity, we hope to create a family of \(f(Q)\) functions that can imitate the properties of the \(\Lambda\)CDM model. In this research, we will investigate several specific \(f(Q)\) gravity theories and rebuild the related cosmological theory for many cosmological solutions using a technique introduced in [67; 68]. Section I is introductory in nature. In section II, we have write the basics of \(f(Q)\) gravity and charged field equations for anisotropic matter using the action in,we also have analytically obtained the modified Friedmann equations using the charged field equations for FLRW metric. In the section III, the cosmological models is being developed starting form the \(\Lambda CDM\) model for various kind of fluid.In the section IV, various cosmological models are developed for various values of EoS parameter \(\omega\) using the modified Friedman equations.In the section V, the values of energy density(\(\rho\)) and pressures (\(p_{t}\ \ \&\ \ p_{r}\)) for known \(f(Q)\) models are calculated.In the final section VI a cosmological reconstruction for modified f(Q) gravity is built in terms of e-folding and EoS parameter \(\omega\). ## II Basics of \(f(Q)\) gravity and its field equations Let us start with the following action of \(f(Q)\) gravity \[S=\int\sqrt{-g}d^{4}x\left[\frac{1}{2}f(Q)+\lambda_{\alpha}^{\beta\mu\nu}R_{ \beta\mu\nu}^{\alpha}+\lambda_{\alpha}^{\mu\nu}T_{\mu\nu}^{\alpha}+L_{m}+L_{e}\right] \tag{1}\] where \(f(Q)\) is the function of nonmetricity \(Q\), \(\lambda_{\alpha}^{\beta\mu\nu}\) are the Lagrangian multiplier,\(g\) indicates the determinant of metric element,\(L_{m}\) and \(L_{e}\) denote the matter Lagrangian density and electric field. In the context of affine-connection, non-metricity is defined as \[Q_{\alpha\mu\nu}=\Delta_{\alpha}g_{\mu\nu}=\partial_{\alpha}g_{\mu\nu}-\Gamma_ {\alpha\mu}^{\varrho}\mathcal{S}_{\mu\sigma}-\Gamma_{\alpha\nu}^{\varrho} \mathcal{S}_{\mu\varrho} \tag{2}\] The commonly used form of the affine-connection can be divided into three components, which are listed below. 
\[\Gamma_{\mu\nu}^{\varrho}=\left\{\begin{array}{c}\varrho\\ \mu\nu\end{array}\right\}+K_{\mu\nu}^{\varrho}+L_{\mu\nu}^{\varrho} \tag{3}\] where the Levi-Civita \(\left\{\begin{array}{c}\varrho\\ \mu\nu\end{array}\right\}\) can be defined in terms of metric \(g_{\mu\nu}\) as \[\left\{\begin{array}{c}\varrho\\ \mu\nu\end{array}\right\}=\frac{1}{2}g^{\varrho}\big{(}\partial_{\mu}g_{\beta \nu}+\partial_{\nu}g_{\beta\mu}-\partial_{\beta}g_{\mu\nu}\big{)} \tag{4}\] where \(K_{\mu\nu}^{\varrho}\) denotes the contortion specified by the following relation: \[K_{\mu\nu}^{\varrho}=\frac{1}{2}T_{\mu\nu}^{\varrho}+T_{\mu\ \nu}^{\ \varrho} \tag{5}\] Torsion tensor \(T_{\mu\nu}^{\varrho}\) is realized as the antisymmetric component of the affine connection,\(T_{\mu\nu}^{\varrho}=2\Gamma_{[\mu\nu]}^{\varrho}\) in the previous equation, and the disformation \(L_{\mu\nu}^{\varrho}\) is written as \[L_{\mu\nu}^{\varrho}=\frac{1}{2}Q_{\mu\nu}^{\varrho}-Q_{\mu\ \nu}^{\ \varrho} \tag{6}\] The non-metricity conjugate is calculated as \[P_{\mu\nu}^{\alpha}=-\frac{1}{4}Q_{\mu\nu}^{\alpha}+\frac{1}{2}Q_{(\mu^{\alpha }\nu)}^{\alpha}+\frac{1}{4}\left(Q^{\alpha}-\bar{Q}^{\alpha}\right)g_{\mu\nu} -\frac{1}{4}\delta_{\ \ (\mu}^{\alpha}Q_{\nu)} \tag{7}\] with two independent traces \[Q_{\alpha}=Q_{\alpha}^{\mu}{}_{\mu},\hskip 28.452756pt\hat{Q}_{\alpha}=Q_{\alpha\mu}^ {\mu} \tag{8}\] The non-metricity scalar is calculated is as follows \[Q=-Q_{\alpha\mu\nu}P^{\alpha\mu\nu} \tag{9}\] The standard form of the energy-momentum tensor and electromagnetic field tensor is given by \[T_{\mu\nu}=-\frac{2}{\sqrt{-g}}\frac{\delta(\sqrt{-g}L_{m})}{\delta g^{\mu\nu}} \tag{10}\] \[E_{\mu\nu}=-\frac{2}{\sqrt{-g}}\frac{\delta(\sqrt{-g}L_{e})}{\delta g^{\mu\nu}} \tag{11}\] Varying the action in Eq.(1) with respect to the metric \(g_{\mu\nu}\), we get the following field equations[69] \[-T_{\mu\nu}+E_{\mu\nu}=\frac{2}{\sqrt{-g}}\Delta_{\alpha}\bigg{(}\sqrt{-g}f_{ Q}P_{\mu\nu}^{\alpha}\bigg{)}+\frac{1}{2}g_{\mu\nu}f+F_{Q}\bigg{(}P_{\mu\alpha \beta}Q_{\nu}^{\alpha\beta}-2Q_{\alpha\beta\mu}P^{\alpha\beta}{}_{\nu}\bigg{)} \tag{12}\] We assume a flat Friedmann-Lemaitre-Robertson-Walker (FLRW) metric with \(N(t)=1\) \[ds^{2}=-N(t)^{2}dt^{2}+a(t)^{2}\sum_{i=1}^{3}(dx^{i})^{2} \tag{13}\] where \(N(t)\) is a Lapse function and \(a(t)\) is a cosmic scale factor.We cannot select a certain Lapse function because the coincident gauge is fixed during the diffeomorphism. The particular case of \(Q\) theories, on the other hand, allows for this since \(Q\) retains a residual time re-parameterization invariant. As a result, one can use symmetry to set \(N(t)=1\)[72]. The covariant derivatives are reduced to ordinary derivatives by using the coincident gauge.If we adopt a co\(-\)moving reference system,with \(\zeta^{\mu}=\{0,0,0,1\}\),then \(\frac{d}{d\tau}=\frac{d}{dt^{2}}\tau\) is a proper time, giving \(H=\frac{a(t)}{a(t)}\). So, for the metric (13), the Eq.(9)yields the non-metricity scalar \(Q\) as \[Q=6H^{2} \tag{14}\] The energy-momentum tensor containing charge for anisotropic matter source[70] is defined as \[T_{\mu\nu}=\rho\zeta_{\mu}\zeta_{\nu}+p_{t}\zeta_{\mu}\xi_{\nu}+p_{t}(\zeta_{ \mu}\xi_{\nu}-\xi_{\mu}\xi_{\nu}-g_{ij}) \tag{15}\] where,\(p_{t}\), and \(p_{t}\) denote the energy density and pressure components (radial and tangential). 
The expression \(\zeta_{\mu}\) defines the radial four-vector and \(\xi_{\mu}\) elaborates the four-velocity vector, both of which meet the following condition, \[\zeta^{\beta}\zeta_{\beta}=1,\hskip 14.226378pt\tilde{\zeta}^{\beta}\xi_{ \beta}=-1 \tag{16}\] In contrast, the Electromagnetic tensor which is given by Lichnerowicz [71] is \[E_{i}^{j}=4\pi\Big{[}|h|^{2}\Big{(}\zeta_{i}\zeta^{j}+\frac{1}{2}g_{j}^{i} \Big{)}-h_{i}h^{j}\Big{]} \tag{17}\] where \(h_{i}\) is the magnetic flux defined by \[h_{i}=\frac{\sqrt{-g}}{8\pi}\epsilon_{ijkl}F^{kl}\zeta^{j} \tag{18}\] If we suppose that the current is flowing in the x-axis direction, the magnetic field generated will be in the \(yz\)-plane. As a consequence, \(h_{1}\neq 0\), \(h_{2}=h_{3}=h_{4}=0\). Using Eq. (17), it is clear that \(F_{23}\) will be the only component of \(F_{ij}\) which is non-vanishing and all other components\((F_{12},F_{13},F_{14},F_{24},F_{34})\) vanish save \(F_{23}\), which is non-vanishing. Because the magnetic field vector indicates a preferred spatial direction, any cosmological model that includes a global magnetic field must be anisotropic. As a result, the Maxwell equations \[F_{ij,k}+F_{jk,i}+F_{ki,j}=0 \tag{19}\] gives \[F_{23}=constant=M \tag{20}\] We get the components of the stress tensor and electromagnetic tensor for the line element (13) as \[T_{11}=a(t)^{2}p_{r},\ \ \ \ T_{22}=T_{33}=a(t)^{2}p_{t},\ \ \ \ T_{44}=\rho \tag{21}\] \[E_{11}=-\frac{M^{2}}{8\pi a^{2}},\ \ \ \ E_{22}=E_{33}=\frac{M^{2}}{8\pi a^{2}},\ \ \ \ E_{44}=\frac{M^{2}}{8\pi a^{4}} \tag{22}\] We find the field equations given below by using the equations (21), (22) and (14) as \[\rho=\frac{1}{2}f-6H^{2}f_{Q}-\mathbb{E}^{2} \tag{23}\] \[p_{t}=2\dot{f_{Q}}H+6f_{Q}H^{2}+2f_{Q}\dot{H}-\frac{1}{2}f-\mathbb{E}^{2} \tag{24}\] \[p_{r}=2\dot{f_{Q}}H+6f_{Q}H^{2}+2f_{Q}\dot{H}-\frac{1}{2}f+\mathbb{E}^{2} \tag{25}\] where \(\mathbb{E}^{2}=\frac{-M^{2}}{8\pi a^{2}}\). The equation of continuity reads as \[\dot{\rho}+3H(\rho+p_{r})=0 \tag{26}\] Thus the modified Friedmann equations can be rewritten as \[\rho=\frac{1}{2}f-Qf_{Q}-\mathbb{E}^{2} \tag{27}\] \[p_{t}=2f_{Q}H+f_{Q}Q+2f_{Q}\dot{H}-\frac{1}{2}f-\mathbb{E}^{2} \tag{28}\] \[p_{r}=2f_{Q}H+f_{Q}Q+2f_{Q}\dot{H}-\frac{1}{2}f+\mathbb{E}^{2} \tag{29}\] The parameter of state in this case is given by \[\omega=\frac{p_{r}}{\rho} \tag{30}\] By using Eq.(27) and (29) the state parameter can also be written as \[\omega=-1+\frac{2H(12\dot{f_{Q}}Q+f_{Q})}{-(Qf_{Q}-\frac{1}{2}f+\mathbb{E}^{2})} \tag{31}\] In the de Sitter universe, where \(\dot{H}=0\), Eq. (31) produces \(\omega=-1\), which is consistent with the \(\Lambda CDM\) model. So, we can construct the \(f(Q)\) theory for \(\Lambda CDM\) model. 
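As a quick consistency check of Eqs. (27)-(31), in the de Sitter limit \(\dot{H}=0\) the \(\dot{f_{Q}}H\) and \(f_{Q}\dot{H}\) terms drop out and \(\omega=-1\) for an arbitrary \(f(Q)\); a minimal symbolic sketch (assuming sympy) makes this explicit:

```python
import sympy as sp

# Symbolic check of Eqs. (27)-(31) in the de Sitter limit: with Hdot = 0 the
# fQ_dot*H and fQ*Hdot terms vanish, and omega = -1 for an arbitrary f(Q).
Q, E2 = sp.symbols('Q E2', positive=True)
f = sp.Function('f')(Q)
fQ = sp.diff(f, Q)

rho = sp.Rational(1, 2) * f - Q * fQ - E2        # Eq. (27)
p_r = Q * fQ - sp.Rational(1, 2) * f + E2        # Eq. (29) with Hdot = fQ_dot = 0
omega = sp.simplify(p_r / rho)                    # Eq. (30)
print(omega)                                      # -> -1
```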
## III Reconstruction of \(f(Q)\) theory: \(\Lambda\)CDM model Starting with the \(\Lambda\)CDM model, let us reconstruct the \(f(Q)\) theories, as is well known, in this instance \[H^{2}=H_{0}^{2}+\frac{\kappa^{2}\rho_{0}}{3a^{3}}=\frac{1}{6}Q \tag{32}\] with \(\kappa^{2}=1\),as assumed in this case.The equation (32) can also be written as \[Q=6H_{0}^{2}+\frac{2\rho_{0}}{a^{3}},\hskip 14.226378pta^{-1}=\left[\frac{3}{ \rho_{0}}(H^{2}-H_{0}^{2})\right]^{\frac{1}{3}}=\left[\frac{1}{2\rho_{0}}(Q-6 H_{0}^{2})\right]^{\frac{1}{3}} \tag{33}\] The deceleration parameter \((q)\) is defined as \[q=-\frac{a\bar{a}}{\bar{a}^{2}} \tag{34}\] The value of \(q\) for this model is calculated below, where \(c_{1}=constant\) \[q=-1+\frac{3}{1+\cosh\left(3H_{0}\left(t-\sqrt{3}c_{1}\right)\right)} \tag{35}\] The solution to equation (27)'s homogeneous case (that is, \(\rho=0\)) is given by \[f=2\mathbb{E}^{2}+C\sqrt{Q},\hskip 14.226378ptC=const. \tag{36}\] The second equation (29) naturally satisfies and yields \(p_{r}=0\), and the equation (28) for \(f(Q)\) yields \(p_{t}=-2\mathbb{E}^{2}\). Thus, giving the null dust solutions (\(\rho=0,p_{r}=0\)).The phenomenons which can be modeled by the solutions include a very high-frequency electromagnetic wave, and a beam of incoherent electromagnetic radiation. The model in (36) is a real-valued function with a positive non-metricity scalar, as shown by the observation. It means that, in addition to GR, there are classes of real-valued functions \(f(Q)\) that can simulate a dust-like expansion history. The equation of state parameter in terms of energy density can also be derived form the equation of continuity as \[\rho=\rho_{c}a^{-3(1+\omega)} \tag{37}\] Let us now construct the \(f(Q)\) gravity theories for matter field ### Reconstruction for dust-like matter First, consider the situation in which the universe is filled with dust-like things for which the value of equation of state is \(\omega=0\). By imposing the dust like case, we obtained the energy density from the equation (37) in terms of \(a\) as \[\rho=\rho_{c}a^{-3}=\frac{\rho_{c}}{2\rho_{0}}(Q-6H_{0}^{2}) \tag{38}\] Substituting the above equation in (27), we get the following equation with unknown function \(f(Q)\) \[\frac{1}{2}f(Q)-Qf_{Q}-\mathbb{E}^{2}=\frac{\rho_{c}}{2\rho_{0}}\Bigg{(}Q-6H_{ 0}^{2}\Bigg{)} \tag{39}\] The particular solution of the above equation is \[f(Q)=2\mathbb{E}^{2}-\frac{\rho_{c}}{\rho_{0}}\Bigg{(}Q+6H_{0}^{2}\Bigg{)} \tag{40}\] The value of deceleration parameter for this can be calculated using Eq. (34) as \[q=-1 \tag{41}\] ### Reconstruction for Dark Matter fluid Now we consider the perfect fluid with the equation of state parameter value as \(\omega=-\frac{1}{3}\). The cosmos is accelerating in this scenario, and the EoS value \(\omega=-\frac{1}{3}\) is physically interesting since it is close to the limit of the set of matter fields that satisfy the strong energy condition.Also,it is shown in [73] that the negative strong energy condition shows cosmic acceleration due to anti gravity. 
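For reference, the dust-like reconstruction in Eq. (40) can be verified symbolically against the constraint (39); a minimal sketch (assuming sympy):

```python
import sympy as sp

# Symbolic verification that the dust-like Lagrangian of Eq. (40) solves
# Eq. (39): f/2 - Q*fQ - E^2 = (rho_c/(2*rho_0)) * (Q - 6*H0^2).
Q, E2, rho_c, rho_0, H0 = sp.symbols('Q E2 rho_c rho_0 H0', positive=True)

f = 2 * E2 - (rho_c / rho_0) * (Q + 6 * H0**2)        # Eq. (40)
lhs = sp.Rational(1, 2) * f - Q * sp.diff(f, Q) - E2   # left side of Eq. (39)
rhs = (rho_c / (2 * rho_0)) * (Q - 6 * H0**2)
print(sp.simplify(lhs - rhs))                           # -> 0
```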
From the equation (37), we found the energy density in terms of \(a\) as, \[\rho=\rho_{c}a^{-2}=\rho_{c}\Bigg{[}\frac{1}{2\rho_{0}}(Q-6H_{0}^{2})\Bigg{]}^{ \frac{2}{3}} \tag{42}\] The modified Friedmann equation (27) takes the form \[\frac{1}{2}f(Q)-Qf_{Q}-\mathbb{E}^{2}=\rho_{c}\Bigg{[}\frac{1}{2\rho_{0}} \Bigg{(}Q-6H_{0}^{2}\Bigg{)}\Bigg{]}^{\frac{2}{3}}=\alpha\Bigg{(}Q-\beta\Bigg{)} ^{\frac{2}{3}} \tag{43}\] The solution of the above equation is given by \[f(Q)=2\Bigg{[}\mathbb{E}^{2}+\frac{\alpha(Q-\beta)^{2/3}Hypergeometric2F1\big{[} -\frac{2}{3},-\frac{1}{2},\frac{1}{2},\frac{Q}{\beta}\big{]}}{\big{(}1-\frac{ Q}{\beta}\big{)}^{2/3}}\Bigg{]} \tag{44}\] The obtained explicit form of the \(f(Q)\),in the model(44) is useful for investigating the universe's acceleration scenario, along with the positive non-metricty \(Q\). The value of Deceleration parameter in this case is \[q=-1 \tag{45}\] ### Reconstruction for stiff fluid Now we consider the stiff fluid with the equation of state parameter value as \(\omega=1\). The string theory predicts that a stiff cosmic fluid with pressure equal to the energy density can be characterised by a mass-less scalar field. The stiff fluid, on the other hand, is a crucial element since it dominates the remaining parts of the model early on and can be used to explain the shear-dominated phase of a potential initial anomalous scenario[74]. Also, A stiff matter era is present in the cosmological model of Zeldovich (1972) where the primordial universe is assumed to be made of a cold gas of baryons[75]. Thus, equation (37) takes the form \[\rho=\rho_{s}a^{-6}=\rho_{s}\Bigg{[}\frac{1}{2\rho_{0}}\Bigg{(}Q-6H_{0}^{2} \Bigg{)}\Bigg{]}^{2},\ \ \ \ \rho_{s}=const. \tag{46}\] In this case the modified equation(27) becomes \[\frac{1}{2}f(Q)-Qf_{Q}-\mathbb{E}^{2}=\rho_{s}\Bigg{[}\frac{\rho_{0}}{2\rho_{0 }}\Bigg{(}Q-6H_{0}^{2}\Bigg{)}\Bigg{]}^{2}=\alpha\Bigg{(}Q-\beta\Bigg{)}^{2} \tag{47}\] with the solution as \[f(Q)=2\mathbb{E}^{2}+\frac{2\alpha}{3}\Bigg{(}6Q\beta-Q^{2}+3\beta^{2}\Bigg{)} \tag{48}\] ### Reconstruction for two fluids The interactions of various "matter" components, which largely interact through gravity and electromagnetic radiation, are the foundation of our current understanding of the Universe. The idea of coupled ideal fluids is frequently used to describe the nature of the various components and potential interactions. We begin with two fluid species as opposed to one. With current densities of \(\rho_{c}\) and \(\rho_{s}\), respectively, let's assume that the cosmos contains both perfect fluid and stiff fluid. 
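Before combining fluids, the stiff-fluid solution can be verified in the same way. The short sympy sketch below (illustrative only, with \(\alpha\), \(\beta\) and E2 as in Eqs. (46)-(48)) confirms that Eq. (48) turns the left-hand side of Eq. (27) into the source \(\alpha(Q-\beta)^{2}\) of Eq. (47).

```python
import sympy as sp

Q, E2, alpha, beta = sp.symbols('Q E2 alpha beta', positive=True)

f_stiff = 2*E2 + sp.Rational(2, 3)*alpha*(6*Q*beta - Q**2 + 3*beta**2)   # Eq. (48)
rho = sp.Rational(1, 2)*f_stiff - Q*sp.diff(f_stiff, Q) - E2             # Eq. (27)

print(sp.simplify(rho - alpha*(Q - beta)**2))                            # -> 0, i.e. Eq. (47) holds
```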
The equation (37) is used in this instance to obtain the total matter density as \[\rho=\rho_{c}a^{-2}+\rho_{s}a^{-6} \tag{49}\] #### iii.2.1 Dark Matter fluid and stiff fluid Now let us consider the more complicated case with two fluids, perfect (\(\omega=\frac{-1}{3}\)) and stiff(\(\omega=1\)), for which the equation (37) becomes \[\rho=\rho_{c}a^{-2}+\rho_{s}a^{-6}=\rho_{c}\left[\frac{1}{2\rho_{0}}(Q-6H_{0}^{ 2})\right]^{\frac{2}{3}}+\rho_{s}\left[\frac{1}{2\rho_{0}}\left(Q-6H_{0}^{2} \right)\right]^{2} \tag{50}\] the equation (27) takes the form \[\frac{1}{2}f(Q)-Qf_{Q}-\mathbb{E}^{2}=\rho_{c}\left[\frac{1}{2\rho_{0}}(Q-6H_{ 0}^{2})\right]^{\frac{2}{3}}+\rho_{s}\left[\frac{1}{2\rho_{0}}\left(Q-6H_{0}^{ 2}\right)\right]^{2}=\alpha\left(Q-\beta\right)^{\frac{2}{3}}+\delta\left(Q- \beta\right)^{2} \tag{51}\] which has the solution \[f(Q)=\frac{2}{3}\left[3\mathbb{E}^{2}-\alpha Q^{2}+6\alpha\beta Q+3\alpha \beta^{2}+\frac{3\delta(Q-\beta)^{2/3}Hypergeometric2F1\big{[}-\frac{2}{3},- \frac{1}{2},\frac{1}{2},\frac{Q}{\beta}\big{]}}{\big{(}1-\frac{Q}{\beta}\big{)} ^{2/3}}\right] \tag{52}\] where \(\alpha=\frac{\rho_{c}}{2\rho_{0}},\ \ \beta=6H_{0}^{2},\ \ \delta=\frac{\rho_{s}}{2\rho_{0}}\) #### iii.2.2 Dust like fluid and stiff fluid Now we consider the following two fluid model: dust like fluid (\(\omega=0\)) and stiff fluid(\(\omega=1\)), which gives the following form for equation(37) \[\rho=\rho_{c}a^{-3}+\rho_{s}a^{-6}=\rho_{c}\left[\frac{1}{2\rho_{0}}(Q-6H_{0} ^{2})\right]+\rho_{s}\left[\frac{1}{2\rho_{0}}\left(Q-6H_{0}^{2}\right) \right]^{2}=\alpha\left(Q-\beta\right)+\delta\left(Q-\beta\right)^{2} \tag{53}\] the equation (27) takes the form \[\frac{1}{2}f(Q)-Qf_{Q}-\mathbb{E}^{2}=\alpha\left(Q-\beta\right)+\delta\left( Q-\beta\right)^{2} \tag{54}\] with the solution as \[f(Q)=\frac{2}{3}\left[3\mathbb{E}^{2}-\alpha Q^{2}+6\alpha\beta Q+3\alpha \beta^{2}-3\delta Q-3\delta\beta\right] \tag{55}\] Radiation fluid and stiff fluid At last, we consider the following two fluid model:dust like fluid (\(\omega=0\)) and Radiation fluid(\(\omega=\frac{1}{3}\)), which gives the following form for equation(37) \[\rho=\rho_{c}a^{-3}+\rho_{s}a^{-4}=\rho_{c}\left[\frac{1}{2\rho_{0}}(Q-6H_{0}^ {2})\right]+\rho_{s}\left[\frac{1}{2\rho_{0}}\left(Q-6H_{0}^{2}\right)\right]^ {\frac{4}{3}}=\alpha\left(Q-\beta\right)+\delta\left(Q-\beta\right)^{\frac{4}{3}} \tag{56}\] the modified Friedmann equation (27) takes the form \[\frac{1}{2}f(Q)-Qf_{Q}-\mathbf{E}^{2}=\rho_{c}\left[\frac{1}{2\rho_{0}}(Q-6H_{0 }^{2})\right]+\rho_{s}\left[\frac{1}{2\rho_{0}}\left(Q-6H_{0}^{2}\right) \right]^{\frac{4}{3}}=\alpha\left(Q-\beta\right)+\delta\left(Q-\beta\right)^{ \frac{4}{3}} \tag{57}\] with solution \[f(Q)=-\frac{2}{Z}\left[Z\big{(}-\mathbf{E}^{2}+\delta(Q+\beta)\big{)}-\alpha \beta^{2}ZY-\alpha\beta QZW\right] \tag{58}\] where \[Y = Hypergeometric2F1\big{[}-\frac{1}{2},-\frac{1}{3},\frac{1}{2}, \frac{Q}{\beta}\big{]}\qquad\qquad Z=\left(1-\frac{Q}{\beta}\right)^{\frac{1}{ 3}}\] \[W = Hypergeometric2F1\big{[}-\frac{1}{3},\frac{1}{2},\frac{3}{2}, \frac{Q}{\beta}\big{]}\] ## IV Reconstruction using \(\omega\) Though different theoretical approaches are there to explain the phenomenon of cosmic acceleration, till now none of them is definitely known as the appropriate one. The present trend of modelling of late-time cosmic acceleration is called reconstruction, where the model is built up by taking the observational data directly into account. 
This is actually the reverse way of finding the suitable cosmological model.By using the same concept we will reconstruct \(f(Q)\) for the particular values of the state parameter \(\omega\), which can be the possible candidate for accelerating universe with the positive non-metricty \(Q\).The following equation can easily be obtained from equation (31) as \[2\dot{f}_{Q}H+\big{[}(\omega+1)Q+2\dot{H}\big{]}f_{Q}+(\omega+1)\big{(} \mathbf{E}^{2}-\frac{1}{2}f\big{)}=0 \tag{59}\] ### With \(\omega=-1\) for \(\Lambda\)CDM model In this case equation (59) becomes \[\dot{H}\bigg{[}12f_{\text{QQ}}H^{2}+f_{Q}\bigg{]}=0 \tag{60}\] There are two solutions to this equation. i) \(\dot{H}=0\). As a result, \(H=H_{0}=const\). It's called the de Sitter spacetime. ii) The second solution is consistent with the following equation \[12f_{\text{QQ}}H^{2}+f_{Q}=0 \tag{61}\] which has the solution \[f(Q)=-12e^{-1/2}2QC_{1}+C_{2},\ \ \ \ \ \ C_{j}=\text{\it{consets}}. \tag{62}\] ### \(\omega=-1/3\) In this case the equation (59) becomes \[2f_{QQ}H^{2}+\left[\frac{2}{3}Q+2\dot{H}\right]f_{Q}+\frac{2}{3}\left[\mathbb{E} ^{2}-\frac{1}{2}f\right]=0 \tag{63}\] which has the solution \[f(Q)=2\mathbb{E}^{2}+C_{1}e^{-Y}HermiteH\left[-\frac{3}{2},Z\right]+C_{2}e^{-Y }Hypergeometric1F1\left[\frac{3}{4},\frac{1}{2},Z^{2}\right] \tag{64}\] where \[Y=\frac{Q^{2}+6Q\dot{H}}{Q},\hskip 28.452756ptZ=\frac{Q}{\sqrt{6}H}+\frac{ \sqrt{\frac{3}{2}}\dot{H}}{H} \tag{65}\] ### \(\omega=0\) For this case pressure is zero and equation (59) takes the form with solution as given below \[2f_{QQ}H^{2}+\left[Q+2\dot{H}\right]f_{Q}+\mathbb{E}^{2}-\frac{1}{2}f=0 \tag{66}\] \[f(Q)=2\mathbb{E}^{2}+C_{1}e^{-Y}HermiteH\left[-\frac{3}{2},Z\right]+C_{2}e^{-Y }Hypergeometric1F1\left[\frac{3}{4},\frac{1}{2},Z^{2}\right] \tag{67}\] where \[Y=\frac{3Q^{2}+12Q\dot{H}}{2Q},\hskip 28.452756ptZ=\frac{Q}{2H}+\frac{\dot{H}}{H} \tag{68}\] ## V \(f(Q)\)models Now, we consider the following assumed \(f(Q)\) models and find the resulting expressions for \(\rho\) and \(p_{r}\) ### \(f(Q)=Q-\alpha\gamma\left(1-\frac{1}{\exp\frac{Q}{\gamma}}\right)\) In the first case, we consider the following model[76] \[f(Q)=Q-\alpha\gamma\left(1-\frac{1}{\exp\frac{Q}{\gamma}}\right) \tag{69}\] Where \(\alpha\in(0,1)\) and \(\gamma=\Omega H_{0}^{2}\), \(\Omega\) is a dimensionless parameter and \(H_{0}^{2}\) is Hubble parameter The modified Friedmann equations(27) and (29) with the above model becomes \[\rho=-\frac{Q}{2}-\frac{\alpha\gamma}{2}+\alpha Qe^{-Q/\gamma}+\frac{\alpha \gamma}{2}e^{-Q/\gamma}-E^{2} \tag{70}\] \[p_{r}=\frac{Q}{2}+\frac{\alpha\gamma}{2}+2\dot{H}+\frac{4\alpha}{\gamma}Q\dot {H}e^{-Q/\gamma}-\alpha Qe^{-Q/\gamma}-2\alpha\dot{H}e^{-Q/\gamma}-\frac{ \alpha\gamma}{2}e^{-Q/\gamma}+E^{2} \tag{71}\] The EoS paramter \(\omega\) form equation (30) in this case takes the form \[\omega=-1-\frac{\frac{4\alpha}{\gamma}Q\dot{H}e^{-Q/\gamma}-2\alpha\dot{H}e^{-Q/ \gamma}+2\dot{H}}{\frac{Q}{2}+\frac{\alpha\gamma}{2}-\alpha Qe^{-Q/\gamma}- \frac{4\gamma}{2}e^{-Q/\gamma}+E^{2}}=-1-\frac{A}{B} \tag{72}\] This results in three separate examples of \(\omega\) representing different stages of the universe's evolution, as follows: * if \(\frac{A}{B}<0\), then \(\omega<-1\) which corresponds to the phantom accelerating universe * if \(0<\frac{A}{B}<1\),then EoS parameter \(\omega\) will be slightly greater than \(-1\) which means that the universe stays in the quintessence region. 
* if \(\frac{A}{B}=0\),we have a cosmos whose dynamics is determined by the cosmological constant \(\omega=-1\). ### \(f(Q)=Q+kQ^{n}\) For this case we consider the below model[77] with \(k\) and \(n\) as constants \[f(Q)=Q+kQ^{n} \tag{73}\] For this model the corresponding equations become \[\rho=-\frac{Q}{2}+\frac{kQ^{n}}{2}-nkQ^{n}-E^{2} \tag{74}\] \[p_{t}=\frac{Q}{2}+nkQ^{n}-\frac{kQ^{n}}{2}+2\dot{H}+2n(2n-1)k\dot{H}Q^{n-1}-E^ {2} \tag{75}\] \[p_{r}=\frac{Q}{2}+nkQ^{n}-\frac{kQ^{n}}{2}+2\dot{H}+2n(2n-1)k\dot{H}Q^{n-1}+E^ {2} \tag{76}\] The EoS paramter for the current mode is \[\omega=-1-\frac{2\dot{H}+2n(2n-1)k\dot{H}Q^{n-1}}{\frac{Q}{2}-\frac{kQ^{n}}{2} +nkQ^{n}+E^{2}}=-1-\frac{A}{B} \tag{77}\] For any positive real number \(n\)\(\And k\), we can discuss as follows * if \(\frac{A}{B}<0\), then \(\omega<-1\) which corresponds to the phantom accelerating universe * if \(0<\frac{A}{B}<1\),then EoS parameter \(\omega\) will be slightly greater than \(-1\) which means that the universe stays in the quintessence region. * if \(\frac{A}{B}=0\),we have a cosmos whose dynamics is determined by the cosmological constant \(\omega=-1\). Now, we study the current model for different values of constants \(k\) and \(n\) #### iv.2.1 k=0 The equations (74,75,76)and (77) with \(k=0\) becomes \[\rho=-\frac{Q}{2}-E^{2} \tag{78}\] \[p_{t}=\frac{Q}{2}+2\dot{H}-E^{2} \tag{79}\] \[p_{r}=\frac{Q}{2}+2\dot{H}+E^{2} \tag{80}\] \[\omega=-1+\frac{2\dot{H}}{-(\frac{Q}{2}+E^{2})} \tag{81}\] \(k\neq 0\) In this section the Eqs. (74), (75), (76) and (77) for different value of \(n\) with \(k\neq 0\) is being discussed i)For \(n=1/2\), the Eqs. (74), (75), (76) and (77) becomes \[\rho=-\frac{Q}{2}-E^{2} \tag{82}\] \[p_{t}=\frac{Q}{2}+2\dot{H}-E^{2} \tag{83}\] \[p_{r}=\frac{Q}{2}+2\dot{H}+E^{2} \tag{84}\] \[\omega=-1+\frac{2\dot{H}}{-(\frac{Q}{2})+E^{2}} \tag{85}\] ii)For \(n=2\), then the equations(74,75,76,77) takes the form \[\rho=-\frac{Q}{2}-\frac{3}{2}kQ^{2}-E^{2} \tag{86}\] \[p_{t}=\frac{Q}{2}+\frac{3}{2}kQ^{2}+2\dot{H}+12k\dot{H}Q-E^{2} \tag{87}\] \[p_{r}=\frac{Q}{2}+\frac{3}{2}kQ^{2}+2\dot{H}+12k\dot{H}Q+E^{2} \tag{88}\] \[\omega=-1+\frac{2\dot{H}+12k\dot{H}Q}{-(\frac{Q}{2}+\frac{3}{2}kQ^{2}+E^{2})} \tag{89}\] ## VI Cosmic acceleration In this section, we discuss the evolution of scale factor in terms of EoS parameter for which we consider the following model[68] \[p_{r}=\frac{A_{-1}(Q)}{\rho}+A_{0}(Q)+A_{1}(Q)\rho \tag{90}\] It is worthwhile to note that for \(A_{-1}=constant,A_{0}=A_{1}=0\), Eq. (90) converts to the Chaplygin gas equation. Moreover, we obtain Eq. (90), for \(f(Q)=Q+kQ^{n}\). Therefore, the expression for energy density is read as \[\rho=-\frac{Q}{2}+\frac{kQ^{n}}{2}-nkQ^{n}-\mathbb{E}^{2} \tag{91}\] \[p_{r}=\frac{Q}{2}+nkQ^{n}-\frac{kQ^{n}}{2}+2\dot{H}+2n(2n-1)kHQ^{n-1}+\mathbb{ E}^{2} \tag{92}\] with the assumption that \(n=2,k=1\) \(\&\) \(A_{k}=cosnts\). Eq. 
(90) takes the form \[\dot{H}=-\frac{3\left(A_{1}+1\right)Q^{2}-2A_{0}+\frac{4A_{-1}}{2\mathbb{E}^{ 2}+3Q^{2}+Q}+2\mathbb{E}^{2}+Q}{4(6Q+1)} \tag{93}\] \[Q_{N}=-\frac{3\left(3\left(A_{1}+1\right)Q^{2}-2A_{0}+\frac{4A_{-1}}{2\mathbb{E}^{ 2}+3Q^{2}+Q}+2\mathbb{E}^{2}+Q\right)}{6Q+1} \tag{94}\] where \(N=\log\frac{d}{a_{0}}\) As a result, for the scenario \(A_{0}=A_{1}=0\), we get the following solution, where \(b=4A_{-1}\) \[Q_{N}=-\frac{3\left(\frac{4A_{-1}}{2\mathbb{E}^{2}+3Q^{2}+Q}+2\mathbb{E}^{2}+3 Q^{2}+Q\right)}{6Q+1}=-\frac{3\left(\frac{b}{2\mathbb{E}^{2}+3Q^{2}+Q}+2 \mathbb{E}^{2}+3Q^{2}+Q\right)}{6Q+1} \tag{95}\] \[Q=\frac{1}{6}\left(-\sqrt{-12\sqrt{e^{-6(N-N_{0})}-b}-24\mathbb{E}^{2}+1}-1\right) \tag{96}\] or \[H=\frac{1}{6}\sqrt{-\sqrt{-12\sqrt{e^{-6(N-N_{0})}-b}-24\mathbb{E}^{2}+1}-1} \tag{97}\] For the models (96-97), the EoS parameter(\(\omega\)) takes the form \[\omega=-\frac{4(6Q+1)H^{\prime}(t)}{2\mathbb{E}^{2}+3Q^{2}+Q}-1=-\frac{(6Q+1) Q_{N}}{3\left(2\mathbb{E}^{2}+3Q^{2}+Q\right)}-1=-1+\frac{1}{1-be^{6(N-N_{0})}} \tag{98}\] It is evident from Eq. (98) that the EOS parameter evolves in the range \(-1\leq\omega\leq 0\) or \(\omega\leq-1\). So, we can conclude that the some models presented in this paper, admit cosmic acceleration. ## VII Conclusion In this paper, we have investigated several forms of DE cosmology in \(f(Q)\) gravity and examined its nature with electromagnetic field. We considered the well know relation between nonmetricty \(Q\) and Hubble parameter in the form \(Q=6H^{2}\) and reconstructed \(f(Q)\) theory for dust like matter, perfect fluid, stiff fluid and binary mixture of two fluids. Furthermore we also reconstructed \(f(Q)\) theory by using various form of EOS parameter \(\omega\). It is worthwhile to note that \(\omega=-1\) generates a physically viable form of \(f(Q)\) that describes the acceleration of the universe at present epoch. The various modified form of \(f(Q)\) models are investigated which give a extremely powerful theory without need of DE components to describes the late time acceleration of the universe. Furthermore, we observe that the reconstruction of \(f(Q)\) theories define the deceleration parameter \(q\) as a function of \(t\) and its explicit expression is obtained. From Eq. (35), it is clear that \(q\) varies with time \(t\) and it approaches to -1 for dust like matter and Dark matter fluid; incidently this value of \(q\) leads to \(\frac{dH}{dt}=0\), which implies the greatest value of Hubble's parameter and the fastest rate of expansion of the universe. Therefore, the derived models under the reconstruction of \(f(Q)\) theories can be utilized to describe the late time evolution of the actual universe. Finally, we conclude that no DE is required to replicate ordinary \(\Lambda CDM\) cosmology and thus models of \(f(Q)\) gravity with electromagnetic field are free from the problems associated with cosmological constant/DE as discussed in Refs. [78; 79; 80; 81; 82].
Although many ideas exist to explain the phenomenon of dark energy, current research concentrates on a deeper analysis of the potential effects of modified gravity on local and cosmological scales. In this paper, we investigated several cosmological reconstructions in $f(Q)$ cosmology, where $Q$ is the non-metricity corresponding to the evolving background of a Friedmann-Lemaitre-Robertson-Walker $(FLRW)$ universe; this technique makes it possible to determine which $FLRW$ cosmological structures can be derived from a given $f(Q)$ theory. To construct an explicit form of the $f(Q)$ Lagrangian, we examined various matter sources such as a perfect fluid, a dust-like fluid, a stiff fluid, and a binary mixture of two fluids. Furthermore, for these $f(Q)$ models
2309.04786
Stability maps for the 5/3 mean motion resonance between Ariel and Umbriel with inclination
The evolution of the five largest satellites of Uranus during the crossing of the 5/3 mean motion resonance between Ariel and Umbriel is strongly affected by chaotic motion. Studies with numerical integrations of the equations of motion and analysis of Poincar\'e surface sections provided helpful insights to the role of chaos on the system. However, they lack of a quantification of this chaos in the phase-space. Here, we construct stability maps using the frequency analysis method. We determine that for low energies (small eccentricity and/or inclinations), the phase-space is mainly stable. As the energy increases, the chaotic regions replace the stable motion, until only small, localized libration regions remain stable.
Sérgio R. A. Gomes, Alexandre C. M. Correia
2023-09-09T13:20:07
http://arxiv.org/abs/2309.04786v1
# Stability maps for the 5/3 mean motion resonance between Ariel and Umbriel with inclination ###### Abstract The evolution of the five largest satellites of Uranus during the crossing of the 5/3 MMR between Ariel and Umbriel is strongly affected by chaotic motion. Studies with numerical integrations of the equations of motion and analysis of Poincare surface sections provided helpful insights into the role of chaos in the system. However, they lack a quantification of the chaos in the phase-space. We constructed stability maps using the frequency analysis method. We determined that for lower energies, the phase-space is mainly stable. As the energy increases, the chaotic regions replace the stable motion, until only small, localized libration areas remain stable. ## 1 Introduction The dynamical evolution that led to the current configuration of the regular satellites of Uranus remains an enigma (e.g. Pollack et al., 1991; Szulagyi et al., 2018; Ishizawa et al., 2019; Ida et al., 2020; Rufu and Canup, 2022). The current large eccentricities (\(\sim 10^{-3}\)) cannot be explained by mutual interactions between the satellites (Squyres et al., 1985; Smith et al., 1986; Peale, 1988). Additionally, the surfaces of Miranda, Ariel, and Titania display evidence of large-scale surface melting (Dermott et al., 1988; Peale, 1988; Tittemore, 1990). Tides raised on the planet induce a differential outward motion of the satellites (e.g. Peale, 1988; Tittemore and Wisdom, 1988, 1989, 1990; Pollack et al., 1991; Cuk et al., 2020). This migration likely resulted in multiple encounters with mean motion resonances (MMRs) over their evolution. Although currently we do not observe any MMR in the system, tidal models suggest that the 5/3 MMR between Ariel and Umbriel was the latest to be crossed (e.g. Peale, 1988; Tittemore and Wisdom, 1988; Cuk et al., 2020; Gomes and Correia, 2023), possibly exciting the eccentricities and inclinations of the moons. Tittemore and Wisdom (1988) carried out a detailed study on the passage through the 5/3 Ariel-Umbriel MMR in the planar approximation and for small eccentricities. Their results showed that chaotic motion has a significant impact on the dynamical evolution during the resonance crossing. In fact, this chaos can drive the eccentricities of Ariel and Umbriel to much higher values than the initial ones and provides a mechanism to break the resonance. Using an \(N\)-body integrator, Cuk et al. (2020) studied the passage through the 5/3 Ariel-Umbriel MMR. The authors accounted for the five regular Uranian moons and assessed the role of the eccentricity, the inclination, and the spin for the outcome of the resonance crossing. Their results show that, during the resonant passage, the eccentricities and the inclinations of the five Uranian satellites are excited by chaotic motion, even if they are not in resonance. Gomes and Correia (2023) also revisited the intricate 5/3 MMR, in order to understand the role of inclination in the outcome of the passage. They performed an analysis similar to the one by Tittemore and Wisdom (1988, 1989), with a two-body secular model in the circular approximation with low inclinations, and a more robust tidal model. Resorting to the Poincare surface section method, their analysis confirms the results obtained by Tittemore and Wisdom (1988) and Cuk et al. (2020), as well as the theoretical predictions from Dermott et al. (1988), that chaotic motion rules the dynamics of the MMR between Ariel and Umbriel. 
In this work, we re-evaluate the stability of the 5/3 MMR, but now applying the frequency analysis method (Laskar, 1990, 1993) to access the stability of the system. Model To easily analyse the stability of the 5/3 MMR, we need to resort to a simplified model, with a reduced number of degrees of freedom. For that, we adopt the secular resonant two-satellite circular model with low inclinations developed in Gomes and Correia (2023). The model considers an oblate central body of mass \(m_{0}\) (Uranus) surrounded by two point-mass bodies \(m_{1}\), \(m_{2}\ll m_{0}\) (satellites), where the subscript 1 refers to the inner orbit (Ariel) and the subscript 2 refers to the outer orbit (Umbriel) and departs from a Hamiltonian truncated to the first order in the mass ratios, \(m_{k}/m_{0}\), zeroth order in the eccentricities, and second order in the inclinations, \(I_{k}\) (with respect to the equatorial plane of the central body). After the high frequency angles are averaged, the Hamiltonian reads as \[\begin{split}\bar{\mathcal{H}}&=\left(\mathcal{K} _{a}+\mathcal{S}_{a}\right)\left(y_{1}\overline{y}_{1}+y_{2}\overline{y}_{2} \right)+\mathcal{K}_{b}(y_{1}\overline{y}_{1}+y_{2}\overline{y}_{2})^{2}\\ &+\left(\mathcal{O}_{a}+\mathcal{S}_{b}\right)y_{1}\overline{y}_{ 1}+\left(\mathcal{O}_{b}+\mathcal{S}_{c}\right)y_{2}\overline{y}_{2}+ \frac{\mathcal{S}_{d}}{2}(y_{1}\overline{y}_{2}+\overline{y}_{1}y_{2})\\ &+\frac{\mathcal{R}_{a}}{2}(y_{1}^{2}+\overline{y}_{1}^{2})+ \frac{\mathcal{R}_{b}}{2}(y_{2}^{2}+\overline{y}_{2}^{2})+\frac{\mathcal{R}_{ c}}{2}(y_{1}y_{2}+\overline{y}_{1}\overline{y}_{2})\,\end{split} \tag{1}\] where \(K\) stands for the Keplerian coefficients, \(O\) for the oblateness coefficients, \(S\) for secular the coefficients, and \(R\) for the resonant coefficients (see appendix A in Gomes and Correia, 2023). The Hamiltonian is written as a function of a set of canonical complex rectangular coordinates \((y_{1},\mathrm{i}\,\overline{y}_{1},y_{2},\mathrm{i}\,\overline{y}_{2})\), given by \[y_{k}=\sqrt{\beta_{k}\sqrt{\mu_{k}a_{k}}\left(1-\cos I_{k}\right)}\,\mathrm{e }^{\mathrm{i}\varphi_{k}}\,\approx\,I_{k}\sqrt{\frac{\Gamma_{k}}{2}}\mathrm{e }^{\mathrm{i}\varphi_{k}}\, \tag{2}\] where \(\overline{y}_{k}\) is the complex conjugate of \(y_{k}\), \(a_{k}\) is the semi-major axis, \(\beta_{k}=m_{0}m_{k}/(m_{0}+m_{k})\), \(\mu_{k}=G(m_{0}+m_{k})\), \(G\) is the gravitational constant, and \(\varphi_{k}=\frac{5}{2}\lambda_{2}-\frac{3}{2}\lambda_{1}-\Omega_{k}\) are the resonant angles, where \(\lambda_{k}\) is the mean longitude and \(\Omega_{k}\) is the longitude of the ascending node. \[\Gamma_{1}=\frac{3}{2}\Gamma\left(1-\Delta\right)\quad\text{and}\quad\Gamma_{ 2}=-\frac{3}{2}\Gamma\left(1-\frac{5}{3}\Delta\right) \tag{3}\] are constant parameters, where \(\Delta=(\beta_{1}\sqrt{\mu_{1}a_{1}}\cos I_{1}+\beta_{2}\sqrt{\mu_{2}a_{2}} \cos I_{2})/\Gamma\) is the normalised total orbital angular momentum, with \(\Gamma=2.6684\times 10^{-12}\)\(M_{\odot}\,\mathrm{au}^{2}\,\mathrm{yr}^{-1}\). 
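For concreteness, the change of variables in Eqs. (2) and (3) can be evaluated numerically as in the short sketch below; this is only an illustration, where the value of \(\Delta\), the inclinations, and the resonant angles are arbitrary sample inputs rather than fitted values.

```python
import numpy as np

GAMMA = 2.6684e-12                 # total angular momentum scale, M_sun au^2 / yr (Sect. 2)

def canonical_y(I, phi, Gamma_k):
    """Small-inclination approximation of Eq. (2): y_k ~ I_k sqrt(Gamma_k/2) exp(i phi_k)."""
    return I*np.sqrt(Gamma_k/2.0)*np.exp(1j*phi)

Delta = 0.7681                                   # normalised angular momentum (sample value)
Gamma_1 = 1.5*GAMMA*(1.0 - Delta)                # Eq. (3)
Gamma_2 = -1.5*GAMMA*(1.0 - 5.0/3.0*Delta)       # Eq. (3)

I1, I2 = np.radians(0.02), np.radians(0.08)      # sample inclinations (rad)
phi1, phi2 = 0.3, 1.1                            # sample resonant angles (rad)

y1 = canonical_y(I1, phi1, Gamma_1)
y2 = canonical_y(I2, phi2, Gamma_2)
print(y1, y2)
```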
The conservative equations of motion are simply obtained from the Hamilton equations, yielding \[\dot{y}_{1}=\mathrm{i}\bigg{[}\left(\mathcal{K}_{a}+\mathcal{S}_{a}\right)y_{1 }+2\mathcal{K}_{b}\left(y_{1}\overline{y}_{1}+y_{2}\overline{y}_{2}\right)y_{1 }+\left(\mathcal{O}_{a}+\mathcal{S}_{b}\right)y_{1}+\frac{\mathcal{S}_{d}}{2} y_{2}+\mathcal{R}_{a}\overline{y}_{1}+\frac{\mathcal{R}_{c}}{2}\overline{y}_{2} \bigg{]} \tag{4}\] and \[\dot{y}_{2}=\mathrm{i}\Bigg{[}\left(\mathcal{K}_{a}+\mathcal{S}_{a}\right)y_{ 2}+2\mathcal{K}_{b}\left(y_{1}\overline{y}_{1}+y_{2}\overline{y}_{2}\right)y_{ 2}+\left(\mathcal{O}_{b}+\mathcal{S}_{c}\right)y_{2}+\frac{\mathcal{S}_{d}}{2} y_{1}+\mathcal{R}_{b}\overline{y}_{2}+\frac{\mathcal{R}_{c}}{2}\overline{y}_{1} \Bigg{]}. \tag{5}\] The dynamics of the 5/3 MMR essentially depends on \(\Delta\). We introduce the quantity \[\delta=\frac{\Delta}{\Delta_{r}}-1\quad\text{with}\quad\Delta_{r}=0.7681\, \tag{6}\] which measures the proximity to the nominal resonance. ## 3 Stability maps Gomes and Correia (2023) analysed the behaviour of Ariel and Umbriel at different stages of the crossing of the 5/3 MMR resorting to Poincare surface sections. Here, we revisit the global dynamics of this resonance using stability maps. To this end, we adopt the frequency analysis method (Laskar, 1990, 1993) to map the diffusion of the orbits. For coherence, we fix \(\delta=-2\times 10^{-6}\) (Eq. (6)) as in Gomes and Correia (2023), for which the most diverse dynamics can be found, and adopt the physical properties of the system from Table 1. Since the Hamiltonian (Eq. (1)) is a four-degree function of \(y_{k}\), the intersection of the constant energy manifold by a plane may have up to four roots (families). Each family corresponds to a different dynamical behaviour, and so we must plot one of them at a time. However, the families are symmetric, and actually we only need to show two of them. We chose to represent the families with the positive roots (that we dub 1 and 2). For each energy value, we build a grid of \(200\times 200\) equally distributed initial conditions in the plane \((y_{1,i},y_{1,r})\), where \(y_{k,i}\) and \(y_{k,r}\) correspond to the imaginary and real parts of \(y_{k}\), respectively. We fix \(y_{2,r}=0\) for all initial conditions and compute \(y_{2,i}\) for each family from the total energy (Eq. (1)). We then numerically integrate the equations of motion (4) and (5) for a time \(T\). Finally, we perform a frequency analysis of \(y_{1}\), using the software TRIP (Gastineau and Laskar, 2011) over the time intervals \([0,T/2]\) and \([T/2,T]\), and determine the main frequency in each interval, \(f_{\rm in}\) and \(f_{\rm out}\), respectively. The stability of the orbit is measured by the index \[D\equiv\left|1-\frac{f_{\rm out}}{f_{\rm in}}\right|\, \tag{7}\] which estimates the stability of the orbital long-distance diffusion (Dumas and Laskar, 1993). The larger \(D\), the more orbital diffusion exists. For stable motion, we have \(D\sim 0\), while \(D\ll 1\) if the motion is weakly perturbed, and \(D\sim 1\) when the motion is irregular. It is difficult to determine the precise value of \(D\) for which the motion is stable or unstable, but a threshold of stability \(D_{s}\) can be estimated such that most of the trajectories with \(D<D_{s}\) are stable (for more details see Couetdic et al., 2010). The diffusion index depends on the considered time interval. 
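A simplified illustration of how the index \(D\) of Eq. (7) can be evaluated is sketched below. It is not the analysis of this work: a plain FFT peak replaces the refined frequency analysis implemented in TRIP, and a toy driven pendulum stands in for the resonant degree of freedom, with arbitrary time span and tolerances.

```python
import numpy as np
from scipy.integrate import solve_ivp

def dominant_frequency(signal, dt):
    """Frequency of the largest non-zero FFT peak of a real time series."""
    sig = np.asarray(signal) - np.mean(signal)
    freqs = np.fft.rfftfreq(len(sig), d=dt)
    amps = np.abs(np.fft.rfft(sig))
    return freqs[1:][np.argmax(amps[1:])]        # skip the zero-frequency bin

def diffusion_index(series, dt):
    """D = |1 - f_out/f_in| (Eq. (7)), from the two halves of the series."""
    half = len(series) // 2
    f_in = dominant_frequency(series[:half], dt)
    f_out = dominant_frequency(series[half:], dt)
    return abs(1.0 - f_out / f_in)

# Toy stand-in for the resonant degree of freedom: a weakly driven pendulum.
def pendulum(t, state, eps=1e-3):
    q, p = state
    return [p, -np.sin(q) + eps*np.cos(0.3*t)]

T, dt = 2000.0, 0.05
t_eval = np.arange(0.0, T, dt)
sol = solve_ivp(pendulum, (0.0, T), [0.5, 0.0], t_eval=t_eval, rtol=1e-9, atol=1e-12)

print("D =", diffusion_index(sol.y[0], dt))      # D << 1: regular motion; D ~ 1: chaotic
```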
Here, we integrate the equations of motion for \(T=10^{4}\) yr, because this interval is able to capture the main characteristics of the dynamics regarding the resonant frequency, which lies within the range \(\sim 60\) yr. With this time interval, we estimate that \(D_{s}\sim 10^{-4}\). The diffusion index \(D\) is represented by a logarithmic colour scale calibrated such that blue and green correspond to stable trajectories (\(D\ll D_{s}\)), while orange and red correspond to chaotic motion (\(D\gg D_{s}\)). In Fig. 1, we show the stability maps for Ariel. We rescale \(y_{k}\) by \(\sqrt{\Gamma_{k}/2}\), and so we actually plot the maps in the plane \((I_{1}\sin\varphi_{1},I_{1}\cos\varphi_{1})\) with \(\cos\varphi_{2}=0\) (Eq. (2)). Each panel corresponds to a different energy value \(\tilde{\cal H}/{\cal H}_{0}\), where \({\cal H}_{0}=1.06\times 10^{-19}\)\(M_{\odot}\,{\rm au}^{2}\,{\rm yr}^{-2}\) is the energy of the transition between the circulation and libration regions, i.e., the energy of the separatrix. The lowest energies occur in the circulation regions, \(\tilde{\cal H}<{\cal H}_{0}\), while the largest energies occur in the libration region, \(\tilde{\cal H}>{\cal H}_{0}\). The inner circulation region is delimited by \(0<\tilde{\cal H}<{\cal H}_{0}\), where \(\tilde{\cal H}=0\) corresponds to the energy of the equilibrium point with \(y_{1}=y_{2}=0\). For this energy range, there are four families, while for the remaining energies only two families exist. For \(\tilde{\cal H}\ll 0\) (Fig. 1 a), only family 1 exists, and we observe that the system is always stable, corresponding to trajectories in the outer circulation region. As the energy increases, two islands appear, corresponding to trajectories that are in the libration region (in resonance). Initially, the motion in these new regions is also stable, but the separatrix and some localized concentric regions outside the separatrix are chaotic. As the energy approaches the threshold \(\tilde{\cal H}=0\) (Figs. 1 b,c), the chaotic regions expand for the vicinities of the separatrix. For \(0<\tilde{\cal H}<{\cal H}_{0}\) (family 1), the chaotic regions increase even further, while the resonant islands shrink (Figs. 1 d,e), until they completely disappear for \(\tilde{\cal H}={\cal H}_{0}\) (Fig. 1 f). Note that up to these energies, outside the chaotic regions, the circulation region remains stable. For this specific energy range, we also need to plot family 2. Close to \(\tilde{\cal H}=0\), we observe stable motion in the inner circulation region (Figs. 1 j,k). However, as we approach \(\tilde{\cal H}={\cal H}_{0}\), this area is replaced by a chaotic region (Fig. 1). Finally, for \(\tilde{\cal H}>{\cal H}_{0}\), we observe that the stable region progressively vanishes, and chaotic motion dominates the phase-space, where only small libration regions remain stable (Figs. 1 g,h,i). In this energy range, we only have family 1 and trajectories in the outer circulation region also do not exist. Moreover, there is also a forbidden region at the centre of each panel that grows with the energy value, while the libration areas shrink. The results in Fig. 1 are in perfect agreement with those shown in Fig. 3 from Gomes and Correia (2023) using Poincare surface sections. ## 4 Conclusion As observed by Gomes and Correia (2023) and several previous works (Dermott et al., 1988; Tittemore and Wisdom, 1988; Cuk et al., 2020), chaos has a strong presence on the passage through the 5/3 MMR between Ariel and Umbriel. 
However, the analysis of this resonance with stability maps allows to quantify the chaos for each region. \begin{table} \begin{tabular}{c c c c c c} \hline \hline & \(m\) (M\({}_{\odot}\times 10^{-10}\)) & \(R\) (km) & \(\langle T_{\rm rot}\rangle\) (day) & \(J_{2}(\times 10^{-3})\) & \(C/(m_{0}R^{2})\) \\ \hline Uranus & 436562.8821 & 25559. & 0.7183 & 3.5107 & 0.2296 \\ \hline \hline Satellite & \(m\) (M\({}_{\odot}\times 10^{-10}\)) & \(R\) (km) & \(\langle T_{\rm orb}\rangle\) (day) & \(\langle a\rangle\) (\(R\)) & \(\langle I\rangle\) (\({}^{\circ}\)) \\ \hline Ariel & 6.291561 & 578.9 & 2.479971 & 7.468180 & 0.0167 \\ Umbriel & 6.412118 & 584.7 & 4.133904 & 10.403550 & 0.0796 \\ \hline \end{tabular} \end{table} Table 1: Present physical and orbital properties of the Uranian system (Thomas, 1988; Jacobson, 2014). Figure 1: Stability maps for Ariel in the plane \((I_{1}\sin\varphi_{1},I_{1}\cos\varphi_{1})\) with \(\cos\varphi_{2}=0\) and \(\delta=-2\times 10^{-6}\). The colour scale corresponds to the relative frequency diffusion index in logarithmic scale (Eq. (7)). Each panel was obtained with a different energy value and \(\mathcal{H}_{0}=1.06\times 10^{-19}\;M_{\odot}\,\mathrm{au}^{2}\,\mathrm{yr}^{-2}\). The dynamics of the 5/3 MMR between Ariel and Umbriel is very rich and depends on the energy of the system. In fact, the energy depends on the value of the inclinations (Eq. (1)), given by the variables \(y_{1}\) and \(y_{2}\) (Eq. (2)). Therefore, the value of the inclinations of Ariel and Umbriel when the system encounters the resonance can trigger completely different behaviours. For \(\bar{\mathcal{H}}<0\), the phase-space is dominated by stable orbits. As we approach the separatrix energy, \(\bar{\mathcal{H}}\sim\mathcal{H}_{0}\), the chaotic motion engproces the low inclination regions, while the outer circulation regions remain stable. Finally, for \(\bar{\mathcal{H}}\gg\mathcal{H}_{0}\), only small libration regions remain stable, surrounded by large chaotic areas. This is a new result, since surface sections appeared to have exclusive quasi-periodic motion for energies near the equilibrium resonance points. The dynamical analysis with stability maps can be extended to several combinations of the variables. Indeed, we do not need to choose a specific projection plane, and so the choice of the phase-space plane is much less restricted than for the Poincare surface sections (see also Alves-Carmo et al., 2023). Allied to the quantification of chaos, the stability maps method thus provides a more exhaustive analysis of the dynamics of the resonance passage. ## Acknowledgements This work was supported by grant SFRH/BD/143371/2019, and by projects CFisUC (UIDB/04564/2020 and UIDP/04564/2020), GRAVITY (PTDC/FIS-AST/7002/2020), PHOBOS (POCI-01-0145-FEDER-029932), and ENGAGE SKA (POCI-01-0145-FEDER-022217), funded by COMPETE 2020 and FCT, Portugal. We acknowledge the Laboratory for Advanced Computing at University of Coimbra ([https://www.uc.pt/lca](https://www.uc.pt/lca)) for providing the resources to perform the stability maps with high resolution.
The evolution of the five largest satellites of Uranus during the crossing of the 5/3 mean motion resonance between Ariel and Umbriel is strongly affected by chaotic motion. Numerical integrations of the equations of motion and the analysis of Poincaré surface sections have provided helpful insights into the role of chaos in the system. However, this chaos has not been quantified in the phase-space. Here, we construct stability maps of the phase-space. For low energies (small eccentricities and inclinations), the phase-space is mainly stable. As the energy increases, chaotic regions replace the stable motion, until finally only small, localized libration regions remain stable.
2309.05498
On Talagrand's functional and generic chaining
In the study of the supremum of stochastic processes, Talagrand's chaining functionals and his generic chaining method are heavily related to the distribution of stochastic processes. In the present paper, we construct Talagrand's type functionals in the general distribution case and obtain the upper bound for the suprema of all $p$-th moments of the stochastic process using the generic chaining method. As applications, we obtained the Johnson-Lindenstrauss lemma, the upper bound for the supremum of all $p$-th moment of order 2 Gaussian chaos, and convex signal recovery in our setting.
Yiming Chen, Pengtao Li, Dali Liu, Hanchao Wang
2023-09-11T14:39:07
http://arxiv.org/abs/2309.05498v1
# On Talagrand's functional and generic chaining ###### Abstract. In the study of the supremum of stochastic processes, Talagrand's chaining functionals and his generic chaining method are heavily related to the distribution of stochastic processes. In the present paper, we construct Talagrand's type functionals in the general distribution case and obtain the upper bound for the suprema of all \(p\)-th moments of the stochastic process using the generic chaining method. As applications, we obtained the Johnson-Lindenstrauss lemma, the upper bound for the supremum of all \(p\)-th moment of order \(2\) Gaussian chaos, and convex signal recovery in our setting. Key words and phrases:\(\varphi\)-sub-Gaussian distribution; Generic chaining; Tail probability inequality; Compressed sensing; Johnson-Lindenstrauss lemma; Order 2 Gaussian chaos 2 ## 1. Introduction In this paper we study the _sub-Gaussian distribution_ of a random variable \(\xi\) with a Gaussian distribution \(\mu\). The distribution of \(\xi\) is a sub-Gaussian random variable \(\xi\) with a Gaussian distribution \(\mu\). The distribution of \(\xi\) is a sub-Gaussian random variable. by Chernoff's approach. The convexity of \(\frac{x^{2}}{2}\) plays an essential role in this approach. Kozachenko and Ostrovsky extended \(\frac{x^{2}}{2}\) to more general cases in [9] and introduced the \(\varphi\)-sub-Gaussian distribution. \(\varphi\)-sub-Gaussian distribution is widely used in many application scenarios. Antonini and Hu [1] present an application to random Fourier series for \(\varphi\)-sub-Gaussian random variables. In the sampling theorem, Kozachenko and Olenko [10] investigated \(L_{p}([0,T])\) and uniform approximations of \(\varphi\)-sub-Gaussian random processes. Moreover, they also present the results on the Whittaker-Kotelnikov-Shannon (WKS) approximation of \(\varphi\)-sub-Gaussian signals in \(L_{p}([0,T])\) with a given accuracy and reliability [11]. This section introduces the definition and notations of \(\varphi\)-sub-Gaussian distribution. Some useful lemmas and properties are also collected in this section. More details can be found in Buldygin and Kozachenko [4]. **Definition 2.1**.: A function \(\varphi(x),x\in\mathbb{R}\), is called an Orlicz \(N\)-function if it is continuous, even, convex with \(\varphi(0)=0\) and monotonically increasing in the set \(x>0\), satisfying * \(\varphi(x)/x\to 0\), when \(x\to 0\); * \(\varphi(x)/x\to\infty\), when \(x\to\infty\). **Definition 2.2**.: An Orlicz \(N\)-function \(\varphi\) satisfies \(Q-\)condition, if there exist constant \(c>0\), such that \[\liminf_{x\to 0}\frac{\varphi(x)}{x^{2}}=c.\] _Remark 2.3_.: The following are some useful examples of Orlicz \(N\)-functions. 1) \(\varphi(x)=a|x|^{\alpha},x\in\mathbb{R};a>0,\alpha>1\); 2) \(\varphi(x)=c\left(\exp\left\{|x|^{\alpha}\right\}-1\right),x\in\mathbb{R};c>0, \alpha>1\); 3) \(\varphi(x)=\exp\{|x|\}-|x|-1,x\in\mathbb{R}\). Now, we present two useful lemmas on Orlicz \(N\)-functions, which can be found in Buldygin and Kozachenko [4]. 
**Lemma 2.4**.: _For any Orlicz \(N\)-function \(\varphi\), the following results hold:_ * _for_ \(\beta>1\)_,_ \(\varphi(\beta x)\geq\beta\varphi(x)\) _for_ \(x\in\mathbb{R}\)_;_ * _there exists a constant_ \(c=c(\varphi)>0\)_, such that_ \(\varphi(x)>cx\) _for_ \(x>1\)_;_ * _the function_ \(g(x)=\frac{\varphi(x)}{x}\) _is monotone nondecreasing in_ \(x\) _for all_ \(x>0\)_;_ * \(\varphi(x)+\varphi(y)\leq\varphi(|x|+|y|)\) _for_ \(x,y\in\mathbb{R}\)_._ **Lemma 2.5**.: _Denoting the inverse function of \(\varphi(\cdot)\) by \(\varphi^{(-1)}(\cdot)\), then for an Orlicz \(N-\)function \(\varphi(\cdot)\), the following holds true:_ * \(\varphi^{(-1)}(\beta x)\leq\beta\varphi^{(-1)}(x),\quad x>0,\) _for_ \(\beta>1\)_;_ * \(\varphi^{(-1)}(\beta x)\geq\beta\varphi^{(-1)}(x),\quad x>0,\) _for_ \(\beta\leq 1\)_._ We need _Young-Fenchel transform_ for a given function \(f\) to present our results. **Definition 2.6**.: Let \(f=(f(x),x\in\mathbb{R})\) be a real-valued function. The function \(f^{*}=(f^{*}(x),x\in\mathbb{R})\) defined by the formula \[f^{*}(x)=\sup_{y\in\mathbb{R}}(xy-f(y)),\] is called the convex conjugate of \(f\), also known as the Young-Fenchel transform of the function \(f\). _Remark 2.7_.: It is remarkable that the convex conjugate function of an Orlicz \(N\)-function is also an Orlicz \(N\)-function. _Remark 2.8_.: Here are some examples of the Young-Fenchel transform for some functions. * Given \(a>1,c>0\), let \(f(x)=c|x|^{a},x\in\mathbb{R}\). Then \[f^{*}(x)=c_{1}|x|^{b},\quad x\in\mathbb{R},\] where \(c_{1}=(ca)^{b/a}/b\) and where \(b\) is the conjugate exponent of \(a\), namely \(1/b+1/a=1\). * In particular, if \(f(x)=\frac{1}{2}x^{2},x\in\mathbb{R}\), then \(f^{*}(x)=\frac{1}{2}x^{2}\). * Let \(f(x)=e^{|x|}-|x|-1,x\in\mathbb{R}\), then \(f^{*}(x)=(|x|+1)\ln(|x|+1)-|x|,\quad x\in\mathbb{R}\). Now, we present the definition of \(\varphi\)-sub-Gaussian distribution. **Definition 2.9**.: For an Orlicz \(N\)-function \(\varphi\) satisfying \(Q-\)condition, a zero-mean random variable \(\xi\) obeys \(\varphi\)-sub-Gaussian distribution, if there exists a constant \(a\geq 0\) such that the inequality \[\mathsf{E}\exp(\lambda\xi)\leq\exp\left(\varphi\left(\lambda a\right)\right), \tag{2.1}\] holds for all \(\lambda\in\mathbb{R}\). _Remark 2.10_.: The assumptions that \(\xi\) is zero-mean and that \(\varphi\) satisfies \(Q-\)Condition are actually necessary. As \(\lambda\to 0\), it is known that \[\mathsf{E}\exp\left(\lambda\xi\right)=1+\lambda\mathsf{E}\xi+ \frac{\lambda^{2}}{2}\mathsf{E}\xi^{2}+o(\lambda^{2}),\] \[\exp\left(\varphi(a\lambda)\right)=1+\varphi(a\lambda)+o(\varphi (a\lambda)).\] The inequality (2.1) implies these two assumptions. A random variable that obeys \(\varphi\)-sub-Gaussian distribution is also called a \(\varphi\)-sub-Gaussian random variable. Let \(Sub_{\varphi}(\Omega)\) denotes the space of \(\varphi\)-sub-Gaussian random variables. For a metric space \(T\), a stochastic process, \((X_{t})_{t\in T}\), is called \(\varphi\)-sub-Gaussian process if \(X_{t}\in Sub_{\varphi}(\Omega)\) for all \(t\in T\). 
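As a small illustration of Definition 2.6, the sympy sketch below (the maximisation is carried out analytically for \(x,y\geq 0\), which suffices because the functions involved are even) confirms the self-conjugacy of \(x^{2}/2\) and the exponential example of Remark 2.8.

```python
import sympy as sp

x = sp.symbols('x', nonnegative=True)
y = sp.symbols('y', nonnegative=True)

def conjugate(f_expr):
    """Young-Fenchel transform f*(x) = sup_y (x*y - f(y)), computed for y >= 0."""
    objective = x*y - f_expr
    y_star = sp.solve(sp.Eq(sp.diff(objective, y), 0), y)[0]   # stationary point (the maximiser here)
    return sp.simplify(objective.subs(y, y_star))

# f(y) = y^2/2 is its own conjugate.
print(sp.simplify(conjugate(y**2/2) - x**2/2))                                   # -> 0

# f(y) = e^y - y - 1 has conjugate (x+1)*log(x+1) - x  (Remark 2.8).
print(sp.simplify(conjugate(sp.exp(y) - y - 1) - ((x + 1)*sp.log(x + 1) - x)))   # -> 0
```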
**Lemma 2.11**.: _([4] Lemma 4.2) For any random variable \(\xi\in\mathrm{Sub}_{\varphi}(\Omega)\),_ \[\mathsf{E}\exp\{\lambda\xi\}\leq\exp\left\{\varphi\left(\lambda\tau_{\varphi} (\xi)\right)\right\},\quad\lambda\in\mathbb{R} \tag{2.2}\] _holds, where_ \[\tau_{\varphi}(\xi):=\sup_{\lambda\neq 0}\frac{\varphi^{(-1)}(\log\mathsf{E} \exp\{\lambda\xi\})}{|\lambda|},\] _and \(\varphi^{(-1)}(\cdot)\) denotes the inverse function of \(\varphi(\cdot)\)._ **Lemma 2.12**.: _([4] Lemma 4.3) \(Sub_{\varphi}(\Omega)\) is a Banach space with respect to the norm_ \[\tau_{\varphi}(X)=\inf\{a\geq 0:\mathsf{E}\exp\{\lambda X\}\leq\exp\{\varphi(a \lambda)\},\text{ for all }\lambda\in\mathbb{R}\}. \tag{2.3}\] _With the norm above, a \(\varphi\)-sub-Gaussian process \((X_{t})_{t\in T}\) satisfies the following incremental inequality:_ \[\mathsf{P}\left(|X_{t}-X_{s}|\geqslant u\tau_{\varphi}(X_{t}-X_{s})\right) \leqslant 2\exp\left\{-\varphi^{*}(u)\right\}, \tag{2.4}\] _where \(t,s\in T\)._ _Remark 2.13_.: Centered Gaussian process \((X_{t})_{t\in T}\) is an important example of \(\varphi\)-sub-Gaussian process, where \(\varphi(x)=x^{2}/2\) and \(\tau_{\varphi}(X_{t})=\left(\mathsf{E}|X_{t}|^{2}\right)^{1/2}\). _Remark 2.14_.: Another example of \(\varphi\)-sub-Gaussian distribution lies in the Weibull distribution. When the probability density function of \(\xi\) is of the form \[f(x;\kappa,q)=\begin{cases}\frac{q}{\kappa}\left(\frac{x}{\kappa}\right)^{q-1 }e^{-(x/\kappa)^{q}},&x\geq 0\\ 0,&x<0\end{cases},\] where \(q>0\) is the shape parameter and \(\kappa>1\) is the scale parameter of the distribution, \(\xi\) obeys \(\varphi\)-sub-Gaussian distribution. ## 3. Talagrand's chaining functionals Talagrand developed a comprehensive framework on the generic chaining method to capture the relevant geometric structure and the boundness of stochastic processes. Let us begin with some definitions related to generic chaining. \(T\) is the index set of \(\varphi\)-sub-Gaussian process \((X_{t})_{t\in T}\). \(d\) is a metric on \(T\), and defined by \(d(s,t)=\tau_{\varphi}(X_{t}-X_{s})\). **Definition 3.1**.: An increasing sequence \(\left(\mathcal{A}_{n}\right)_{n\geq 0}\) of partitions of \(T\) is called admissible if \(\operatorname{card}(\mathcal{A}_{0})=1\) and \(\operatorname{card}(\mathcal{A}_{n})\leq 2^{2^{n}}\) for all \(n\geq 1\), \(\operatorname{card}(T)\) denotes the cardinality of a set \(T\). For \(t\in T\), \(A_{n}(t)\) denotes the unique element of \(\mathcal{A}_{n}\) which contains \(t\). If the stochastic process \((X_{t})_{t\in T}\) satisfies \[\mathsf{P}\left(|X_{t}-X_{s}|\geq u\right)\leq 2\exp\left(-\frac{u^{2}}{2d^{2} (s,t)}\right), \tag{3.1}\] where \(u>0\), Talagrand introduced chaining functional, \(\gamma_{2}(T,d)\), to replace Dudely's bound as the upper bound of the sub-Gaussian process, where \[\gamma_{2}(T,d)=\inf\sup_{t\in T}\sum_{n\geq 0}^{\infty}2^{n/2}\Delta\left(A_{n} (t)\right), \tag{3.2}\] with the infimum taken over all admissible sequences. Here, as always, \(\Delta\left(A_{n}(t)\right)\) denotes the diameter of a set \(A_{n}(t)\). Inspired by Talagrand [19] and Dirksen [5], we introduce the following functionals to capture the boundness of \(\varphi\)-sub-Gaussian process. For any \(1\leq p<\infty\), \(k_{p}:=\left\lfloor\frac{\log(p)}{\log(2)}\right\rfloor\), where \(\left\lfloor\cdot\right\rfloor\) denotes the integer part. 
**Definition 3.2**.: For an Orlicz \(N\)-function \(\varphi\) satisfying \(Q-\)condition, distribution-dependent Talagrand-type \(\gamma\)-functional is defined by \[\gamma_{\varphi,p}(T,d)=\inf\sup_{t\in T}\sum_{n\geq k_{p}}^{\infty}\varphi^{ *(-1)}(2^{n})\Delta\left(A_{n}(t)\right), \tag{3.3}\] where the infimum is taken over all admissible sequences. When \(\varphi(x)=\frac{x^{2}}{2}\), we denote \(\gamma_{\varphi,p}(T,d)\) by \(\gamma_{2,p}(T,d)\). We also introduce a modified version of the functional in the following. **Definition 3.3**.: For an Orlicz \(N\)-function \(\varphi\) satisfying \(Q-\)condition, distribution-dependent Talagrand-type modified \(\gamma\)-functional is defined by \[\tilde{\gamma}_{\varphi,p}(T,d)=\inf_{T_{n}\subset T,\operatorname{card}(T_{n}) \leq 2^{2^{n}}}\sup_{t\in T}\sum_{n\geq k_{p}}^{\infty}\varphi^{*(-1)}(2^{n} )d\left(t,T_{n}\right). \tag{3.4}\] Similarly, when \(\varphi(x)=\frac{x^{2}}{2}\), we denote \(\tilde{\gamma}_{\varphi,p}(T,d)\) by \(\tilde{\gamma}_{2,p}(T,d)\). To discuss the properties of (3.3) and (3.4), we need to employ the entropy number of a subset of \(T\). For \(A\subset T\), the entropy number is defined as \[e_{n}(A)=\inf_{S\subset T,\operatorname{card}(S)\leq 2^{2^{n}}}\sup_{t\in A}d(t,S)=\inf\{\varepsilon:N(A,d,\varepsilon)\leq 2^{2^{n}}\},\] \(N(A,d,\varepsilon)\) is the smallest integer \(N\) such that \(A\) can be covered by \(N\) balls of radius \(\varepsilon\). Dudley's bound plays an essential role in the history of studying the boundness of stochastic processes. It is natural to consider Dudley's bound in our setting. The following theorem shows that our chaining functional is smaller than Dudley's bound. Before presenting our result, we need a condition for \(\varphi\). **Definition 3.4**.: For any \(b\geq 1\), if there exists some \(M_{b}\in(0,1)\), such that \[\frac{\varphi^{*(-1)}(bx)}{\varphi^{*(-1)}(x)}\geq\frac{1}{1-M_{b}} \tag{3.5}\] holds for any \(x\geq 2\), we say that \(\varphi\) satisfies \(\Delta_{2}-\)condition. _Remark 3.5_.: One important example satisfying the \(\Delta_{2}-\)condition is \[\varphi(x)=c|x|^{a},\quad a>1,\,c>0.\] _Remark 3.6_.: The definition of \(\Delta_{2}-\)condition is from (8.63) in Talagrand [19]. When the functions, \(-\log\mathsf{P}(\xi_{i}\geq x)\), are convex, Latala [14] studied the the suprema of canonical processes \(\sum_{i\geq 1}t_{i}\xi_{i}\) under \(\Delta_{2}-\)condition, where \(t_{i}\in\mathbb{R}\). \(\Delta_{2}-\)condition is essential for obtaining the majorizing measure theorem of \(\sum_{i\geq 1}t_{i}\xi_{i}\). **Theorem 3.7**.: _It is assumed that the function \(\varphi(\cdot)\) satisfies \(\Delta_{2}-\)condition. Then,_ \[\tilde{\gamma}_{\varphi,p}(T,d)\leq\int_{0}^{\varepsilon_{k_{p}}(T)}\varphi^{ *(-1)}\left(N(T,d,\varepsilon)\right)d\varepsilon\leq\int_{0}^{\infty}\varphi^ {*(-1)}\left(N(T,d,\varepsilon)\right)d\varepsilon. \tag{3.6}\] Proof.: It can be observed from the definition of \(e_{n}(T)\) that \(N(T,d,\varepsilon)\geq 1+2^{2^{n}}\) as \(\varepsilon\leq e_{n}(T)\). Hence, \[\varphi^{*(-1)}(1+2^{2^{n}})\left(e_{n}(T)-e_{n+1}(T)\right)\leq\int_{e_{n}(T )}^{e_{n+1}(T)}\varphi^{*(-1)}(\log N(T,d,\varepsilon))d\varepsilon.\] Since \(\log(1+2^{2^{n}})\geq 2^{n}\log 2\), summation over \(n\geq k_{p}=\lfloor\frac{\log p}{\log 2}\rfloor\) gives \[\log 2\sum_{n\geq k_{p}}\varphi^{*(-1)}(2^{n})(e_{n}(T)-e_{n+1}(T))\leq\int_{0}^ {\varepsilon_{k_{p}}(T)}\varphi^{*(-1)}\left(\log N(T,d,\varepsilon)\right)d\varepsilon. 
\tag{3.7}\] For some constant \(C_{1}\in(0,1)\), we have, \[\sum_{n\geq k_{p}}\varphi^{*(-1)}(2^{n})(e_{n}(T)-e_{n+1}(T))=\sum_{n\geq k_{p }}\varphi^{*(-1)}(2^{n})e_{n}(T)-\sum_{n\geq k_{p}+1}\varphi^{*(-1)}(2^{n-1})e _{n}(T)\] \[\geq\sum_{n\geq k_{p}}\varphi^{*(-1)}(2^{n})\left(1-\frac{\varphi^{*( -1)}(2^{n-1})}{\varphi^{*(-1)}(2^{n})}\right)e_{n}(T)\] \[\geq M_{2}\sum_{n\geq k_{p}}\varphi^{*(-1)}(2^{n})e_{n}(T). \tag{3.8}\] Thus, a combination of (3.7), (3.8) and the definition of \(\gamma_{\varphi,p}\) yields \[\gamma_{\varphi,p}(T,d)\leq\sum_{n\geq k_{p}}\varphi^{*(-1)}(2^{n })e_{n}(T) \leq C_{2}\int_{0}^{e_{k_{p}}(T)}\varphi^{*(-1)}\left(\log N(T,d, \varepsilon)\right)d\varepsilon\] \[\leq\int_{0}^{\infty}\varphi^{*(-1)}\left(\log N(T,d,\varepsilon )\right)d\varepsilon. \tag{3.9}\] This completes the proof. The growth condition of Talagrand's chaining functional and partitioning schemes is the backbone of Talagrand's formulation. Now, we discuss the growth condition and partition schemes in our setting. We first present some definitions, which are from Talagrand [19]. **Definition 3.8**.: A map \(F\) is a functional on subsets of \(T\) if * \(F(H)\geq 0\) for each subset \(H\) of \(T\); * \(F(H^{\prime})\geq F(H)\) for \(H\subset H^{\prime}\subset T\). **Definition 3.9**.: For a metric space \(T\) endowed with distance \(d\), given \(a>0\) and an integer \(r\geq 8\), we say that subsets \(H_{1},\ldots,H_{m}\) of \(T\) are \((a,r)\)-separated if \[H_{\ell}\subset B\left(t_{\ell},2a/r\right)\quad\text{for all}\quad\ell\leq m,\] with \(B\left(t_{\ell},2a/r\right)\) denoting a ball centered at \(t_{\ell}\) with radius \(2a/r\), where the points \(t_{1},t_{2},\ldots,t_{m}\) in \(T\) satisfy \[a\leq d\left(t_{\ell},t_{\ell^{\prime}}\right)\leq 2ar\quad\text{for all}\quad\ell,\ell^{\prime}\leq m,\ell\neq\ell^{\prime}.\] **Definition 3.10**.: We say that the functional \(F\) satisfies the generalized growth condition with parameters \(r\geq 8\) and \(c^{*}>0\) if for any integer \(n\geq 1+\lfloor\log_{2}p\rfloor\) and any \(a>0\) the following holds true, where \(m=2^{2^{n}}\) : For each collection of subsets \(H_{1},\ldots,H_{m}\) of \(T\) that are \((a,r)\)-separated, we have \[F\left(\bigcup_{\ell\leq m}H_{\ell}\right)\geq c^{*}a\varphi^{*(-1)}\left(2^{ n}\right)+\min_{\ell\leq m}F\left(H_{\ell}\right).\] **Proposition 3.11**.: _Assume that \(r\geq 16\), the functional \(\gamma_{\varphi,p}(T,d)\) satisfies the generalized version of the growth condition with parameters \(r\) and \(c^{*}=1/8\). Under the same assumption, the functional \(\tilde{\gamma}_{\varphi,p}(T,d)\) also satisfies the generalized version of the growth condition with parameters \(r\) and \(c^{*}=1/8\)._ The proofs of Proposition 3.11 and Proposition 3.2 are similar. We only give a detailed proof of Proposition 3.11 for simplicity. Proof.: Consider points \(\left(t_{\ell}\right)_{\ell\leq m}\) of \(T\) with \(d\left(t_{\ell},t_{\ell^{\prime}}\right)\geq a\) if \(\ell\neq\ell^{\prime}\). Consider sets \(H_{\ell}\subset B\left(t_{\ell},a/8\right)\) and the set \(H=\bigcup_{\ell\leq m}H_{\ell}\). For an arbitrary admissible sequence of partitions \((\mathcal{A}_{n})\) of \(H\), consider the set \[I_{n}=\left\{\ell\leq m,\text{there exists }A\in\mathcal{A}_{n-1}\text{ satisfying }A\subset H_{\ell}\right\}.\] An injection \(\mathcal{M}:I_{n}\to\mathcal{A}_{n-1}\) is defined by, for \(\ell\in I_{n}\), \[\mathcal{M}(\ell)=A\] for an arbitrary \(A\in\mathcal{A}_{n-1}\) with \(A\subset H_{\ell}\). 
As a consequence, \(\operatorname{card}I_{n}\leq\operatorname{card}\mathcal{A}_{n-1}\leq 2^{2^{n-1}}<m=2^{2^{n}}\). Hence, there exists \(\ell_{0}\notin I_{n}\), and then \(A_{n-1}(t)\not\subset H_{\ell_{0}}\). So that since \(A_{n-1}(t)\subset H\), the set \(A_{n-1}(t)\) must intersect a set \(H_{\ell}\neq H_{\ell_{0}}\). Consequently, \(A_{n-1}(t)\) intersects the ball \(B\left(t_{\ell},a/8\right)\). Since \(t\in H_{\ell_{0}}\), we have \(d\left(t,B\left(t_{\ell},a/8\right)\right)\geq a-2a/r-a/8\geq a/2\). Since \(t\in A_{n-1}(t)\), this implies that \(\Delta\left(A_{n-1}(t)\right)\geq a/2\). Since \(\Delta\left(A_{n-1}(t)\cap H_{\ell_{0}}\right)\leq\Delta\left(H_{\ell_{0}} \right)\leq a/4\), we have proven that for \(t\in H_{\ell_{0}}\) and any integer \(n\geq 1\), \[\Delta\left(A_{n-1}(t)\right)\geq\Delta\left(A_{n-1}(t)\cap H_{\ell_{0}} \right)+\frac{1}{4}a. \tag{3.10}\] Recall that for \(p\geq 1\), \(\lfloor\frac{\log p}{\log 2}\rfloor\) is denoted by \(k_{p}\). Now, since for each \(k\geq 0\) we have \(\Delta\left(A_{k}(t)\right)\geq\Delta\left(A_{k}(t)\cap H_{\ell_{0}}\right)\), we have \[\sum_{k\geq k_{p}}\varphi^{*(-1)}\left(2^{k}\right)\left(\Delta \left(A_{k}(t)\right)-\Delta\left(A_{k}(t)\cap H_{\ell_{0}}\right)\right)\] \[\geq\varphi^{*(-1)}\left(2^{n+k_{p}-1}\right)\left(\Delta\left(A _{n+k_{p}-1}(t)\right)-\Delta\left(A_{n+k_{p}-1}(t)\cap H_{\ell_{0}}\right)\right)\] \[\geq\frac{1}{4}a\varphi^{*(-1)}\left(2^{n-1}\right),\] where we have used (3.10) in the last inequality, and, consequently, \[\sum_{k\geq k_{p}}\varphi^{*(-1)}\left(2^{k}\right)\Delta\left(A_{k}(t)\right) \geq\frac{1}{4}a\varphi^{*(-1)}\left(2^{n-1}\right)+\sum_{k\geq k_{p}}\varphi ^{*(-1)}\left(2^{k}\right)\Delta\left(A_{k}(t)\cap H_{\ell_{0}}\right). \tag{3.11}\] Next, consider the admissible sequence \((\mathcal{A}_{n}^{\prime})\) of \(H_{\ell_{0}}\) given by \(\mathcal{A}_{n}^{\prime}=\{A\cap\,H_{\ell_{0}};A\in\mathcal{A}_{n}\}\). We have, by definition, \[\sup_{t\in H_{\ell_{0}}}\sum_{k\geq k_{p}}\varphi^{*(-1)}\left(2^{k}\right) \Delta\left(A_{k}(t)\cap H_{\ell_{0}}\right)\geq\gamma_{\varphi,p}\left(H_{ \ell_{0}},d\right).\] Hence, taking the supremum over \(t\) in \(H_{\ell_{0}}\) in (3.11), we get \[\sup_{t\in H_{\ell_{0}}}\sum_{k\geq k_{p}}\varphi^{*(-1)}\left(2^ {k}\right)\Delta\left(A_{k}(t)\right) \geq\frac{1}{4}a\varphi^{*(-1)}\left(2^{n-1}\right)+\gamma_{ \varphi,p}\left(H_{\ell_{0}},d\right)\] \[\geq\frac{1}{8}a\varphi^{*(-1)}\left(2^{n}\right)+\min_{\ell\leq m }\gamma_{\varphi,p}\left(H_{\ell},d\right).\] Since the admissible sequence \((\mathcal{A}_{n})\) is arbitrary, we have proven that \[\gamma_{\varphi,p}(H,d)\geq\frac{1}{8}a\varphi^{*(-1)}\left(2^{n}\right)+\min_ {\ell\leq m}\gamma_{\varphi,p}\left(H_{\ell},d\right), \tag{3.12}\] which completes the proof. The following result is on the partitioning schemes. It is essential for obtaining the lower bound or other important results in generic chaining. The equivalence of \(\gamma_{\varphi,p}(T,d)\) and \(\tilde{\gamma}_{\varphi,p}(T,d)\) can be also derived by this result. See Theorem 3.16 for the proof of equivalence. **Theorem 3.12**.: _Assume a functional \(F\) exists, which satisfies the generalized growth condition on \(T\) for the parameters \(r\) and \(c^{*}\). Assume that the function \(\varphi(\cdot)\) satisfies \(\Delta_{2}-\)condition, then we have_ \[\gamma_{\varphi}\left(T,d\right)\leqslant\frac{Lr}{c^{*}}F(T)+Lr\Delta(T). \tag{3.13}\] Some lemmas are needed for the proof of Theorem 3.12. 
The following two elementary lemmas are from Talagrand [19]. **Lemma 3.13**.: _For a given metric space \((T,d)\), assume that any sequence \(\left(t_{\ell}\right)_{\ell\leq m}\) with \(d\left(t_{\ell},t_{\ell^{\prime}}\right)\geq\) a for \(\ell\neq\ell^{\prime}\) satisfies \(m\leq N\). Then, \(T\) can be covered by \(N\) balls of radius \(a\)._ **Lemma 3.14**.: _Consider numbers \(\left(a_{n}\right)_{n\geq 0},a_{n}>0\), and assume \(\sup_{n}a_{n}<\infty\). Consider \(\alpha>1\), and define_ \[I=\left\{n\geq 0;\text{for all }k\geq 0,k\neq n,a_{k}<a_{n}\alpha^{|n-k|} \right\}.\] _Then \(I\neq\emptyset\), and we have_ \[\sum_{k\geq 0}a_{k}\leq\frac{2\alpha}{\alpha-1}\sum_{n\in I}a_{n}.\] The following lemma provides a fundamental principle for the proof of partition schemes. **Lemma 3.15**.: _Under the conditions of Theorem 3.12, consider \(B\subset T\) satisfies that \(\Delta(B)\leq 2r^{-j}\) for a certain \(j\in\mathbb{Z}\). For any \(n\geq 0\), set \(m=2^{2^{n}}\). Then there exists a partition \(\left(A_{k}\right)_{k\leq m}\) of \(B\) into sets which have either of the following properties:_ \[\Delta\left(A_{k}\right)\leq 2r^{-j-1}, \tag{3.14}\] _or else_ \[t\in A_{k}\Rightarrow F\left(B\cap B\left(t,2r^{-j-2}\right)\right)\leq F(B) -c^{*}\varphi^{*(-1)}(2^{n})r^{-j-1}. \tag{3.15}\] Proof.: Consider the set below \[I=\left\{t\in B;F\left(B\cap B\left(t,2r^{-j-2}\right)\right)>F(B)-c^{*} \varphi^{*(-1)}(2^{n})r^{-j-1}\right\}.\] For points \(\left(t_{k}\right)_{k\leq m^{\prime}}\) in \(I\) such that \(d\left(t_{k},t_{k^{\prime}}\right)\geq r^{-j-1}\) when \(k\neq k^{\prime}\), we have \(m^{\prime}<m\). For otherwise, by the generalized growth condition, for \(a=r^{-j-1}\) and for the sets \(H_{k}:=B\cap B\left(t_{k},2r^{-j-2}\right)\), \[F(B)\geq F\left(\bigcup_{k\leq m}H_{k}\right)\geq c^{*}\varphi^{*(-1)}(2^{n}) r^{-j-1}+\min_{k\leq m}F\left(H_{k}\right)>F(B).\] The contradiction means \(m^{\prime}<m\). Consequently, then by the Lemma 3.13 with \(N=m-1\), we may cover \(I\) by \(m^{\prime}<m\) balls \(\left(B_{k}\right)_{k\leq m^{\prime}}\) of radius \(\leq r^{-j-1}\). Finally, we set \(A_{k}=I\cap(B_{k}\setminus\cup_{k^{\prime}<k}B_{k^{\prime}})\) for \(k\leq m^{\prime}\), \(A_{k}=\emptyset\) for \(m^{\prime}<k<m\) and \(A_{m}=B\setminus I\), and the proof of the lemma is complete. Proof of Theorem 3.12.: To drive (3.13), we construct an admissible sequence of partitions \(\mathscr{A}_{n}\), and for \(A\in\mathscr{A}_{n}\), we construct \(j_{n}(A)\in Z\), such that \[\Delta(A)\leq 2r^{-j_{n}(A)}. \tag{3.16}\] We start with \(\mathcal{A}_{0}=\{T\}\) and \(j_{0}(T)=\max\{j_{0}\in\mathbb{Z},\Delta(T)\leq 2r^{-j_{0}}\}\). Having constructed \(\mathcal{A}_{n}\), we construct \(\mathcal{A}_{n+1}\) as follows: for each \(B\in\mathcal{A}_{n}\), we use Lemma 3.15 with \(j=j_{n}(B)\) to split \(B\) into sets \(\left(A_{\ell}\right)_{\ell\leq 2^{2^{n}}}\). If \(A_{\ell}\) satisfies (3.14), we set \(j_{n+1}\left(A_{\ell}\right)=j_{n}(B)+1\), and otherwise we set \(j_{n+1}\left(A_{\ell}\right)=j_{n}(B)\). The sequence thus constructed is admissible, since each set \(B\) in \(\mathcal{A}_{n}\) is split into at most \(2^{2^{n}}\) sets and \((2^{2^{n}})^{2}\leq 2^{2^{n+1}}\). We note also by construction that if \(B\in\mathcal{A}_{n}\) and \(A\subset B,A\in\mathcal{A}_{n+1}\), then * Either \(j_{n+1}(A)=j_{n}(B)+1\), * Or else \(j_{n+1}(A)=j_{n}(B)\) and \[F(B\cap B(t,2r^{-j_{n+1}(A)-2}))\leq F(B)-c^{*}\varphi^{*(-1)}(2^{n})r^{-j_{n+ 1}(A)-1}. 
\tag{3.17}\] Now, it suffices to show that for a fixed \(t\in T\), \[\sum_{n\geqslant k_{p}}\varphi^{*(-1)}(2^{n})\Delta\left(A_{n}(t)\right) \leqslant\frac{Lr}{c^{*}}F(T)+Lr\Delta(T). \tag{3.18}\] Let \(j(n)=j_{n}(A_{n}(t))\), and by the construction of the admissible sequence \(\mathcal{A}_{n}\), we know that \[\Delta(A_{n}(t))\leq 2r^{-j(n)}. \tag{3.19}\] Let \(a(n)=\varphi^{*(-1)}\left(2^{n}\right)r^{-j(n)}\), and \(M_{2}\) be the constant defined in (3.5) and then take \[I=\left\{n\geqslant 0;\text{for all }k\geqslant 0,n\neq k,a(k)<a(n)\left(\frac{1}{ 1-M_{2}}\right)^{|n-k|}\right\}.\] Then by Lemma 3.14, \[\sum_{n\geqslant k_{p}}\varphi^{*(-1)}(2^{n})\Delta\left(A_{n}(t)\right)\leq \sum_{n\geqslant 0}\varphi^{*(-1)}(2^{n})\Delta\left(A_{n}(t)\right)\leq\frac{2}{M_{2}} \sum_{n\in I}\varphi^{*(-1)}(2^{n})\Delta\left(A_{n}(t)\right).\] Observe that by the definition of \(j_{0}\), we know that \(2r^{-j_{0}}\leq r\Delta(T)\). Hence, \(a(0)=\varphi^{*(-1)}(1)r^{-j_{0}}\leq\varphi^{*(-1)}(1)r\Delta(T)/2\), so we only need to show that \[\sum_{n\in I\setminus\{0\}}\varphi^{*(-1)}(2^{n})\Delta\left(A_{n}(t)\right) \leq\frac{1}{2}\sum_{n\in I\setminus\{0\}}a(n)\leqslant\frac{Lr}{c^{*}}F(T). \tag{3.20}\] Note that if \(n\in I\), then \[a(n+1)<\frac{1}{1-M_{2}}a(n),\quad a(n-1)<\frac{1}{1-M_{2}}a(n).\] Also, we know that \[a(n+1)=\frac{\varphi^{*(-1)}\left(2^{n+1}\right)}{\varphi^{*(-1)}\left(2^{n} \right)}r^{j(n)-j(n+1)}a(n).\] Therefore, \[\frac{1}{1-M_{2}}r^{j(n)-j(n+1)}a(n) <\frac{\varphi^{*(-1)}\left(2^{n+1}\right)}{\varphi^{*(-1)}\left( 2^{n}\right)}r^{j(n)-j(n+1)}a(n)\] \[=a(n+1)<\frac{1}{1-M_{2}}a(n),\] \[a(n-1)<\frac{1}{1-M_{2}}a(n)=\frac{1}{1-M_{2}}\frac{\varphi^{*(-1)}\left(2^{n }\right)}{\varphi^{*(-1)}\left(2^{n-1}\right)}r^{j(n-1)-j(n)}a(n-1)\] \[\leq\left(\frac{\varphi^{*(-1)}\left(2^{n}\right)}{\varphi^{*(-1)} \left(2^{n-1}\right)}\right)^{2}r^{j(n-1)-j(n)}a(n-1)\] \[\leq 4r^{j(n-1)-j(n)}a(n-1).\] The observation above implies that \(j(n+1)=j(n)+1\) and \(j(n-1)=j(n)\). Hence, when \(n\geq k_{p}\), then let \(I\backslash\{0\}\) enumerated as as \(n_{1}<n_{2}<\ldots\), so that \[j\left(n_{k}+1\right)=j\left(n_{k}\right)+1;\quad j\left(n_{k}-1\right)=j \left(n_{k}\right). \tag{3.21}\] As a consequence, \[j\left(n_{k+2}\right)\geq j\left(n_{k+1}\right)+1\geq j\left(n_{k}\right)+2. \tag{3.22}\] For simplicity, denote \(F\left(A_{n}(t)\right)\) by \(f(n)\). Then \(f(0)=F(T)\) and the sequence \((f(n))_{n\geq 0}\) is decreasing because \(A_{n}(t)\subset A_{n-1}(t)\). Then, the key to derive (3.20) is to prove that for \(k\geq 1\), \[a\left(n_{k}\right)\leq\frac{Lr}{c^{*}}\left(f\left(n_{k}-1\right)-f\left(n_{ k+2}\right)\right). \tag{3.23}\] For \(k\geq 2\), \(f(n_{k}-1)\leq f(n_{k-1})\), so that (3.23) implies that \[a\left(n_{k}\right)\leq\frac{Lr}{c^{*}}\left(f\left(n_{k-1}\right)-f\left(n_{ k+2}\right)\right).\] Summation overall \(k\geq 2\) gives \[\sum_{k\geq 2}a(n_{k})\leq\frac{Lr}{c^{*}}F(T), \tag{3.24}\] and then a combination of (3.24) and (3.23) for \(k=1\) concludes the proof of the theorem when \(I\) is infinite. The case when \(I\) is finite will be proven at the end of the proof. So now let us roll up our sleeves to handle (3.23). 
Since \(n_{k}\geq 1\), applying (3.17) to \(A=A_{n_{k}}(t)\) and \(B=A_{n_{k}-1}(t)\), thus we get \[\begin{split}& F\left(B\cap B\left(t,2r^{-j_{n_{k}}(A)-2}\right) \right)\\ &\leqslant F(B)-c^{*}\varphi^{*(-1)}\left(2^{n_{k}-1}\right)r^{-j_ {n_{k}}(A)-1}\\ &=F(B)-c^{*}\frac{\varphi^{*(-1)}\left(2^{n_{k}-1}\right)}{\varphi ^{*(-1)}\left(2^{n_{k}}\right)}\varphi^{*(-1)}\left(2^{n_{k}}\right)r^{-j_{n_ {k}}(A)-1}\\ &=F(B)-c^{*}\frac{\varphi^{*(-1)}\left(2^{n_{k}-1}\right)}{ \varphi^{*(-1)}\left(2^{n_{k}}\right)}a(n_{k})\cdot r^{-1}.\end{split} \tag{3.25}\] Combined with Lemma 2.5, we obtain that \[a\left(n_{k}\right) \leq\frac{r}{c^{*}}\frac{\varphi^{*(-1)}\left(2^{n_{k}}\right)} {\varphi^{*(-1)}\left(2^{n_{k}-1}\right)}\left(F(B)-F\left(B\cap B\left(t,2r^ {-j_{n_{k}}(A)-2}\right)\right)\right)\] \[\leq\frac{2r}{c^{*}}\left(F(B)-F\left(B\cap B\left(t,2r^{-j_{n_{k} }(A)-2}\right)\right)\right). \tag{3.26}\] Moreover, recalling that \(j\left(n_{k+2}\right)=j_{n_{k+2}}\left(A_{n_{k+2}}(t)\right)\), hence a combination with (3.22) gives \(\Delta\left(A_{n_{k+2}}(t)\right)\leq 2r^{-j(n_{k+2})}\leq 2r^{-j(n_{k})-2}\). So \(A_{n_{k+2}}(t)\subset B\cap B\left(t,2r^{-j(n_{k})-2}\right)\) and then \[f\left(n_{k+2}\right)\leq F\left(B\cap B\left(t,2r^{-j_{n_{k}}(A)-2}\right) \right). \tag{3.27}\] Recalling that \(F(B)=f(n_{k}-1)\), combined with (3.26) and (3.27), we have \[a\left(n_{k}\right)\leqslant\frac{Lr}{c^{*}}\left(f\left(n_{k}-1\right)-f\left( n_{k+2}\right)\right).\] then we finish the proof of (3.13) when \(I\) is infinite. When \(I\) is finite, we denote the largest element of \(I\) as \(n_{\bar{k}}\). For \(k\leq\bar{k}-2\), \(a(n_{k})\) is controlled by previous argument. For \(k=\bar{k}-1\) and \(k=\bar{k}\), we use the fact that for \(n\geq 0\), \[a(n)\leq\frac{Lr}{c^{*}}F(T)+L\Delta(T). \tag{3.28}\] For \(n\geq 1\) and \(j(n-1)=j(n)\), use (3.17) for \(n-1\) rather than \(n\) yields (3.28). For \(n\geq 1\) and \(j(n-1)=j(n)-1\), \[a(n) =\frac{\varphi^{*(-1)}\left(2^{n}\right)}{\varphi^{*(-1)}\left(2 ^{n-1}\right)}r^{-1}a(n-1)\] \[\leq\frac{2}{r}a(n-1)<a(n-1).\] Iterate this relation until we reach an integer \(n^{*}\) with * either \(j(n^{*})=j(n^{*}-1)\), * or \(n^{*}=0\). Note that \(a(0)\leq L\Delta(T)\), the case when \(I\) is finite is done. We finally prove the theorem. With Theorem 3.12 in hand, we obtain the equivalence of \(\gamma_{\varphi,p}(T,d)\) and \(\tilde{\gamma}_{\varphi,p}(T,d)\) under proper conditions. **Theorem 3.16**.: _For an Orlicz \(N\)-function \(\varphi\) satisfying \(Q-\)condition and \(\Delta_{2}-\)condition,_ \[\gamma_{\varphi,p}(T,d)\asymp\tilde{\gamma}_{\varphi,p}(T,d).\] Proof.: It is easy to see that \[\tilde{\gamma}_{\varphi,p}(T,d)\lesssim\gamma_{\varphi,p}(T,d).\] Hence, it suffices to prove that \[\gamma_{\varphi,p}(T,d)\lesssim\tilde{\gamma}_{\varphi,p}(T,d).\] By Proposition 3.2, we know that \(\tilde{\gamma}_{\varphi,p}(T,d)\) satisfies the generalized growth condition. By Theorem 3.12, we know that \[\gamma_{\varphi}\left(T,d\right) \leq\frac{Lr}{c^{*}}\tilde{\gamma}_{\varphi,p}(T,d)+Lr\Delta(T)\] \[\lesssim\tilde{\gamma}_{\varphi,p}(T,d).\] This finally completes the proof. ## 4. Upper bounds and tails bounds via generic chaining In the following, \(||\cdot||_{p}\) (\(p\geq 1\)) denotes the \(L_{p}\) norm of a random variable. \(T\) is the index set of \(\varphi\)-sub-Gaussian process \((X_{t})_{t\in T}\). \(d\) is a metric on \(T\), and defined by \(d(s,t)=\tau_{\varphi}(X_{t}-X_{s})\). 
\(\Delta(S):=\sup\limits_{s,t\in S}d(s,t)\) denotes the diameter of the a set \(S\in T\) with metric \(d\). For an admissible sequence \(T_{n}\), \(\pi_{n}(t):=\mathop{\rm arg\,min}\limits_{s\in T_{n}}d(s,t)\). \(a\lor b=\max\{a,b\}\) for \(a,b\in\mathbb{R}\). \(\langle\cdot,\cdot\rangle\) represents inner product. \(A^{t}\) denotes the transpose of a matrix \(A\). **Theorem 4.1**.: _For \(\varphi\)-sub-Gaussian process \((X_{t})_{t\in T}\), we have, for any given \(1\leq p<\infty\),_ \[\left(\mathcal{E}\sup\limits_{t\in T}|X_{t}|^{p}\right)^{\frac{1}{p}}\leq C_{ 0}\tilde{\gamma}_{\varphi,p}(T,d)+\inf\limits_{t_{0}\in T}\left\{2\sup\limits _{t\in T}\left(\mathcal{E}|X_{t}-X_{t_{0}}|^{p}\right)^{\frac{1}{p}}+\left( \mathcal{E}|X_{t_{0}}|^{p}\right)^{1/p}\right\}. \tag{4.1}\] _Here, \(C_{0}\) is a universal constant. As a consequence, we have,_ \[\left(\mathcal{E}\sup\limits_{t\in T}|X_{t}|^{p}\right)^{\frac{1}{p}}\leq C_{ 1}\tilde{\gamma}_{\varphi,p}(T,d)+C_{2}\sqrt{p}\Delta(T)+C_{3}p\Delta(T), \tag{4.2}\] _with \(C_{1},C_{2},C_{3}\) some positive universal constants. Therefore, for \(u\geq\sqrt{2}\),_ \[\mathcal{P}\left(\sup\limits_{t\in T}|X_{t}-X_{t_{0}}|\geq C_{4}\Delta(T) \varphi^{*}(u)+C_{5}\Delta(T)\sqrt{\varphi^{*}(u)}+C_{6}\tilde{\gamma}_{ \varphi,p}(T,d)\right)\leq\exp(-\varphi^{*}(u)), \tag{4.3}\] _where \(C_{4},C_{5},C_{6}\) are positive universal constants._ **Corollary 4.2**.: _For \(\varphi\)-sub Gaussian process \((X_{t})_{t\in T}\), we have, for any given \(1\leq p<\infty\),_ \[(\mathcal{E}|X_{t}|^{p})^{1/p}\leq C_{4}\Delta(T)p+C_{4}\Delta(T)p^{1/2}.\] Before starting to prove our results, we need some lemmas. **Lemma 4.3**.: _(Foucart and Rauhut [8]) If a random variable \(\xi\) satisfies_ \[(\mathcal{E}|\xi|^{p})^{\frac{1}{p}}\leq c_{1}p+c_{2}\sqrt{p}+c_{3},\quad\text { for all }\quad p\geq 1,\] _for some \(0\leq c_{1},c_{2},c_{3}<\infty\), then_ \[\mathcal{P}\left(|\xi|\geq e\left(c_{1}u+c_{2}\sqrt{u}+c_{3}\right)\right)\leq \exp(-u)\quad(u\geq 1).\] **Lemma 4.4**.: _(Dirksen [5]) Fix \(1\leq p<\infty\) and \(0<\alpha<\infty\). Let \(\gamma\geq 0\) and suppose that \(\xi\) is a positive random variable such that for some \(c\geq 1\) and \(u_{*}>0\),_ \[\mathcal{P}(\xi>\gamma u)\leq c\exp\left(-pu^{\alpha}/4\right)\quad(u\geq u_{ *})\,.\] _Then, for a constant \(\tilde{c}_{\alpha}>0\) depending only on \(\alpha\),_ \[\left(\mathcal{E}\xi^{p}\right)^{1/p}\leq\gamma\left(\tilde{c}_{\alpha}c+u_{ *}\right).\] **Lemma 4.5**.: _For \(g\sim N(0,1)\), for any \(\alpha\geq 1\),_ \[\mathcal{E}|g|^{\alpha}\leq\sqrt{\frac{e}{e-1}}\alpha^{\frac{\alpha}{2}}\quad.\] Proof.: It is easily checked that for \(x>0\), \[\left(\frac{x}{\sqrt{\alpha}}\right)^{\alpha}\leq\exp\left(\frac{x^{2}}{2e} \right).\] Then \[\frac{\mathsf{E}\left|g\right|^{\alpha}}{\alpha^{\frac{\alpha}{2}}}=\frac{2}{ \sqrt{2\pi}}\int_{0}^{\infty}\left(\frac{x}{\sqrt{\alpha}}\right)^{\alpha}\exp \left(-\frac{x^{2}}{2}\right)dx=\sqrt{\frac{e}{e-1}}.\] This completes the proof. Proof of Theorem 4.1.: By triangle inequality, we have for any \(t_{0}\in T\), \[\left(\mathsf{E}\sup_{t\in T}|X_{t}|^{p}\right)^{\frac{1}{p}} \leq\left(\mathsf{E}\sup_{t\in T}|X_{t}-X_{t_{0}}|^{p}\right)^{ \frac{1}{p}}+(\mathsf{E}|X_{t_{0}}|^{p})^{\frac{1}{p}}\] \[\leq\left(\mathsf{E}\sup_{t\in T}|X_{t}-X_{\pi_{k_{p}}(t)}|^{p} \right)^{\frac{1}{p}}+\left(\mathsf{E}\sup_{t\in T}|X_{\pi_{k_{p}}(t)}-X_{t_{ 0}}|^{p}\right)^{\frac{1}{p}}+(\mathsf{E}|X_{t_{0}}|^{p})^{\frac{1}{p}}. 
\tag{4.4}\] For the second term in (4.4), \[\left(\mathsf{E}\sup_{t\in T}|X_{\pi_{k_{p}}(t)}-X_{t_{0}}|^{p} \right)^{\frac{1}{p}} \leq\left(\mathsf{E}\sum_{t\in T}|X_{\pi_{k_{p}}(t)}-X_{t_{0}}|^{p }\right)^{\frac{1}{p}}\] \[\leq\left(2^{2^{k_{p}}}\sup_{t\in T}\mathsf{E}|X_{\pi_{k}(t)}-X_{ t_{0}}|^{p}\right)^{\frac{1}{p}}\] \[\leq 2\sup_{t\in T}\left(\mathsf{E}|X_{\pi_{k_{p}}(t)}-X_{t_{0}}| ^{p}\right)^{\frac{1}{p}}\leq 2\sup_{t\in T}\left(\mathsf{E}|X_{t}-X_{t_{0}}|^{p} \right)^{\frac{1}{p}}. \tag{4.5}\] The generic chaining method is needed to bound the first term in (4.4). Recall that the increment condition here is \[\mathsf{P}\left(|X_{t}-X_{s}|\geq ud(t,s)\right)\leq 2\exp(-\varphi^{*}(u)).\] Hence, we have \[\begin{split}\mathsf{P}\left(\left|X_{\pi_{n}(t)}& -X_{\pi_{n-1}(t)}\right|\geq\varphi^{*(-1)}(2^{n}u)d(\pi_{n}(t),d_{n-1}(t)) \right)\\ &\leq 2\exp\left(-\varphi^{*}(\varphi^{*(-1)}(2^{n}u))\right)=2 \exp(-2^{n}u).\end{split} \tag{4.6}\] Here, \(\operatorname{card}\{(\pi_{n}(t),\pi_{n-1}(t));t\in T\}\leq\operatorname{ card}(T_{n})\;\operatorname{card}(T_{n-1})\leq 2^{2^{n}}2^{2^{n-1}}\leq 2^{2^{n+1}}\). \(\Omega_{u,p,n}\) denotes the following event: \[\left|X_{\pi_{n}(t)}-X_{\pi_{n-1}(t)}\right|\leq\varphi^{*(-1)}(2^{n}u)d(\pi_ {n}(t),\pi_{n-1}(t))\quad\text{for all}\quad t\in T.\] By the choice of \(k\), we have \(2^{k_{p}}\leq p<2^{k_{p}+1}\). Then we know that for \(u\geq 2\), \[\mathsf{P}\left((\bigcap_{n>k_{p}}\Omega_{u,p,n})^{c}\right) \leq 2\sum_{n>k_{p}}2^{2^{n}+1}\exp\left(-2^{n}u\right)=2\sum_{n>k _{p}}\exp\left(2(\log 2)2^{n}\right)\exp(-2^{n}u)\] \[\leq 2\sum_{n>k_{p}}\exp\left((\log 2-1)u2^{n}\right)=2\exp\left(- \frac{2^{k_{p}}u}{2}\right)\sum_{n>k_{p}}\exp\left((\log 2-1)u2^{n}+\frac{2^{k_{p}}u}{2 }\right)\] \[\leq 2\exp\left(-\frac{2^{k_{p}}u}{2}\right)\sum_{n\geq k_{p}} \exp\left((\log 2-1)u2^{n}+\frac{2^{n}u}{4}\right)\] \[\leq c\exp\left(-\frac{pu}{4}\right).\] Here, \[c\leq 2\sum_{n\geq k_{p}}\exp\left((\log 2-1)u2^{n}+\frac{2^{n}u}{4}\right) \leq 2\sum_{n\geq 0}\exp\left(2(\log 2-3/4)n\right)<\infty.\] If event \(\bigcap_{n>k_{p}}\Omega_{u,p,n}\) occurs, then for \(u\geq 2\), by Lemma 2.5, \[\left|\sum_{n>k_{p}}X_{\pi_{n}(t)}-X_{\pi_{n-1}(t)}\right| \leq\sum_{n>k}\left|X_{\pi_{n}(t)}-X_{\pi_{n-1}(t)}\right|\] \[\leq\sum_{n>k_{p}}\varphi^{*(-1)}(2^{n}u)d(\pi_{n}(t),\pi_{n-1}(t))\] \[\leq u\sum_{n>k_{p}}\varphi^{*(-1)}(2^{n})d(\pi_{n}(t),t)+u\sum_ {n>k_{p}}\varphi^{*(-1)}(2^{n})d(t,\pi_{n-1}(t))\] \[\leq u\sum_{n>k_{p}}\varphi^{*(-1)}(2^{n})d(\pi_{n}(t),t)+2u\sum_ {n\geq k_{p}}\varphi^{*(-1)}(2^{n})d(t,\pi_{n}(t))\] \[\leq 3u\tilde{\gamma}_{\varphi,p}(T,d).\] Taking the supremum on both sides, then we get for \(u\geq 2\), \[\sup_{t\in T}\left|X_{t}-X_{\pi_{k_{p}}(t)}\right|\leq 3u\tilde{\gamma}_{p}(T,d).\] To conclude, \[\mathsf{P}\left(\sup_{t\in T}\left|X_{t}-X_{\pi_{k_{p}}(t)}\right|\geq 3u \tilde{\gamma}_{\varphi,p}(T,d)\right)\leq c\exp\left(-\frac{pu}{4}\right) \qquad(u\geq 2). 
\tag{4.7}\] Therefore, by Lemma 4.5, \[\mathsf{E}\sup_{t\in T}\left|X_{t}-X_{\pi_{k_{p}}(t)}\right|^{p} =\int_{0}^{\infty}pu^{p-1}\mathsf{P}\left(\sup_{t\in T}\left|X_{t}-X _{\pi_{k_{p}}(t)}\right|>u\right)du\] \[=\int_{0}^{\infty}p(3\tilde{\gamma}_{\varphi,p}(T,d))^{p}v^{p-1} \mathsf{P}\left(\sup_{t\in T}\left|X_{t}-X_{\pi_{k_{p}}(t)}\right|>3v\tilde{ \gamma}_{\varphi,p}(T,d)\right)dv\] \[=(3\tilde{\gamma}_{\varphi,p}(T,d))^{p}\left(\int_{0}^{2}+\int_{ 2}^{\infty}\right)pv^{p-1}\mathsf{P}\left(\sup_{t\in T}\left|X_{t}-X_{\pi_{k_{ p}}(t)}\right|>3v\tilde{\gamma}_{\varphi,p}(T,d)\right)dv\] \[\leq(3\tilde{\gamma}_{\varphi,p}(T,d))^{p}\left(2^{p}+cp\int_{2} ^{\infty}v^{p-1}\exp\left(-\frac{pv}{4}\right)dv\right)\] \[\leq(6\tilde{\gamma}_{\varphi,p}(T,d))^{p}+cp(3\tilde{\gamma}_{ \varphi,p}(T,d))^{p}\int_{0}^{\infty}v^{p-1}\exp\left(-\frac{pv}{4}\right)dv\] \[=(6\tilde{\gamma}_{\varphi,p}(T,d))^{p}+2^{1+p}p^{1-p}c(3\tilde{ \gamma}_{\varphi,p}(T,d))^{p}\int_{0}^{\infty}u^{2p-1}\exp\left(-\frac{u^{2}} {2}\right)du\] \[=(6\tilde{\gamma}_{\varphi,p}(T,d))^{p}+2^{1+p}p^{1-p}c(3\tilde{ \gamma}_{\varphi,p}(T,d))^{p}\frac{\sqrt{2\pi}}{2}\mathsf{E}|g|^{2p-1}\] \[\leq(6\tilde{\gamma}_{\varphi,p}(T,d))^{p}+2^{p}p^{1-p}(3\tilde{ \gamma}_{\varphi,p}(T,d))^{p}\frac{c\sqrt{2\pi e}}{\sqrt{e}-1}(2p-1)^{p-\frac {1}{2}}\] \[\leq(6\tilde{\gamma}_{\varphi,p}(T,d))^{p}+2^{2p}(3\tilde{\gamma} _{\varphi,p}(T,d))^{p}\frac{c\sqrt{\pi e}}{\sqrt{e}-1}p^{\frac{1}{2}}.\] Here, \(g\) is a standard Gaussian random variable. Noting that \(p^{1/2p}\leq e^{1/2e}\), we obtain that \[\begin{split}\left(\mathsf{E}\sup_{t\in T}\left|X_{t}-X_{\pi_{k_ {p}}(t)}\right|^{p}\right)^{\frac{1}{p}}&\leq\left((3\tilde{ \gamma}_{\varphi,p}(T,d))^{p}(2^{p}+2^{2p}\frac{c\sqrt{\pi e}}{\sqrt{e-1}}p^{ \frac{1}{2}})\right)^{\frac{1}{p}}\\ &\leq 3\tilde{\gamma}_{\varphi,p}(T,d)\left(2+4\left(\frac{c\sqrt{ \pi e}}{\sqrt{e-1}}\right)^{\frac{1}{p}}p^{\frac{1}{2p}}\right)\\ &\leq C_{1}\tilde{\gamma}_{\varphi,p}(T,d),\end{split} \tag{4.8}\] for some universal constant \(C_{1}>0\). By a standardization of the Orlicz \(N\)-function \(\varphi\) in a neighborhood of zero (see [4] P.67 for details), we might assume \(\varphi(x)=\frac{x^{2}}{2}\) for \(|x|<\sqrt{2}\). By Lemma 2.4, \(\frac{\varphi^{*}(u)}{u}\) is monotonically nondecreasing for \(u>0\), \[\varphi^{*}(u)\geq\varphi^{*}(1)u=\frac{u}{2}\quad\text{for}\quad u>1.\] By the increment condition, we get \[\begin{split}\mathsf{P}\left(|X_{t}-X_{t_{0}}|\geq u\Delta(T) \right)&\leq\mathsf{P}\left(|X_{t}-X_{s}|\geq ud(t,s)\right)\\ &\leq 2\exp\left(-\varphi^{*}(u)\right)=\begin{cases}2\exp\left(-\frac{ u^{2}}{2}\right)&0\leq u<1\\ 2\exp\left(-\frac{u}{2}\right)&u\geq 1\end{cases}.\end{split} \tag{4.9}\] Then we have \[\mathsf{E}\left|X_{t}-X_{t_{0}}\right|^{p}= \int_{0}^{\infty}pu^{p-1}\mathsf{P}\left(\left|X_{t}-X_{t_{0}} \right|>u\right)du\] \[=\Delta^{p}(T)p\left(\int_{0}^{1}+\int_{1}^{\infty}\right)u^{p-1} \mathsf{P}\left(\left|X_{t}-X_{t_{0}}\right|>u\Delta(T)\right)du\] \[\leq\Delta^{p}(T)p\left(2\int_{0}^{1}u^{p-1}\exp\left(-\frac{u^{2 }}{2}\right)du+2\int_{1}^{\infty}u^{p-1}\exp\left(-\frac{u}{2}\right)du\right)\] \[\leq\Delta^{p}(T)p\left(2^{\frac{p}{2}}\int_{0}^{1}v^{\frac{p}{2} -1}\exp(-v)dv+2^{p+1}\int_{1/2}^{\infty}v^{p-1}\exp(-v)dv\right)\] \[\leq\Delta^{p}(T)\left(2^{\frac{p}{2}}p\Gamma\left(\frac{p}{2} \right)+2^{p+1}p\Gamma(p)\right),\] where \(\Gamma\left(\cdot\right)=\int_{0}^{\infty}v^{-1}\exp(-v)dv\) is the gamma function. 
By Stirling's formula, we know for some \(0\leq\theta_{1}(p),\theta_{2}(p)\leq 1\), \[2^{\frac{p}{2}}p\Gamma\left(\frac{p}{2}\right)=2\sqrt{\pi}e^{\frac{\theta_{1} \left(p/2\right)}{\theta_{p}}}p^{\frac{p+1}{2}}\exp\left(-\frac{p}{2}\right),\] \[2^{p+1}p\Gamma\left(p\right)=\sqrt{\pi}e^{\frac{\theta_{2}(p)}{12p}}2^{p+ \frac{3}{2}}p^{p+\frac{1}{2}}\exp\left(-p\right).\] So \[\left(\mathsf{E}\left|X_{t}-X_{t_{0}}\right|^{p}\right)^{\frac{1} {p}} \leq\Delta(T)\left(2\sqrt{\pi}e^{\frac{\theta_{1}\left(p/2\right) }{\theta_{p}}}p^{\frac{p+1}{2}}\exp\left(-\frac{p}{2}\right)+\sqrt{\pi}e^{ \frac{\theta_{2}\left(p\right)}{12p}}2^{p+\frac{3}{2}}p^{p+\frac{1}{2}}\exp \left(-p\right)\right)^{1/p}\] \[\leq\Delta(T)\left(\left(2\sqrt{\pi}e^{\frac{1}{\theta_{p}}} \right)^{\frac{1}{p}}e^{\frac{1-\varepsilon}{2\varepsilon}}p^{\frac{1}{2}}+2 \left(\sqrt{8\pi}e^{\frac{1}{12p}}\right)^{\frac{1}{p}}e^{\frac{1-2\varepsilon }{2\varepsilon}}p\right). \tag{4.10}\] The same procedure from (4.9) to (4.10) also applied to \((\mathsf{E}|X_{t_{0}}|^{p})^{\frac{1}{p}}\), so we can also get \[\left(\mathsf{E}|X_{t_{0}}|^{p}\right)^{\frac{1}{p}} \leq\Delta(T)\left(2\sqrt{\pi}e^{\frac{\theta_{1}\left(p/2\right) }{\theta_{p}}}p^{\frac{p+1}{2}}\exp\left(-\frac{p}{2}\right)+\sqrt{\pi}e^{ \frac{\theta_{2}\left(p\right)}{12p}}2^{p+\frac{3}{2}}p^{p+\frac{1}{2}}\exp \left(-p\right)\right)^{1/p}\] \[\leq\Delta(T)\left(\left(2\sqrt{\pi}e^{\frac{1}{\theta_{p}}} \right)^{\frac{1}{p}}e^{\frac{1-\varepsilon}{2\varepsilon}}p^{\frac{1}{2}}+2 \left(\sqrt{8\pi}e^{\frac{1}{12p}}\right)^{\frac{1}{p}}e^{\frac{1-2\varepsilon }{2\varepsilon}}p\right). \tag{4.11}\] Combing (4.4), (4.5), (4.8) and (4.11), we obtain the moment bound \[\left(\mathsf{E}\sup_{t\in T}\left|X_{t}-X_{t_{0}}\right|^{p}\right)^{\frac{1} {p}}\leq C_{1}\tilde{\gamma}_{\varphi,p}(T,d)+C_{2}\sqrt{p}\Delta(T)+C_{3}p \Delta(T). \tag{4.12}\] for some universal constants \(C_{1},C_{2},C_{3}>0\) Now using Lemma 4.3, \[\mathsf{P}\left(\sup_{t\in T}\left|X_{t}\right|\geq e\left(c_{1}\varphi^{*}(u)+ c_{2}\sqrt{\varphi^{*}(u)}+c_{3}\right)\right)\leq\exp\left(-\varphi^{*}(u) \right)\quad(u\geq\sqrt{2}), \tag{4.13}\] where \[c_{1}=12e^{\frac{1-2\varepsilon}{2\varepsilon}}\Delta(T)\left(\sqrt{8\pi}e^{ \frac{1}{12p}}\right)^{\frac{1}{p}},\] \[c_{2}=6e^{\frac{1-\varepsilon}{2\varepsilon}}\Delta(T)\left(\sqrt{2\pi}e^{ \frac{1}{\theta_{p}}}\right)^{\frac{1}{p}},\] \[c_{3}=3\left(2+4e^{\frac{1}{2e}}\left(c\sqrt{\frac{2\pi e}{e-1}}\right)^{\frac{1} {p}}\right)\tilde{\gamma}_{\varphi,p}(T,d).\] To put (4.13) in another way, for some positive universal constant \(C_{1},C_{2},C_{3}\), \[\mathsf{P}\left(\sup_{t\in T}|X_{t}|\geq C_{1}\Delta(T)\varphi^{*}(u)+C_{2} \Delta(T)\sqrt{\varphi^{*}(u)}+C_{3}\tilde{\gamma}_{\varphi,p}(T,d)\right)\leq \exp(-\varphi^{*}(u))\quad(u\geq\sqrt{2}). \tag{4.14}\] This completes the proof of Theorem 4.1. Proof of Corollary 4.2.: See (4.11) in the proof of Theorem 4.1. ## 5. Examples and Applications This section illustrates a range of applications based on our preceding results. ### Order 2 Gaussian chaos Consider independent standard Gaussian sequences \((g_{i})_{i\geq 1}\), \((g^{\prime}_{j})_{j\geq 1}\) and a given double sequence sequence \(t=(t_{i,j})_{i,j\geq 1}\). \[X_{t}=\sum_{i,j\geq 1}t_{i,j}g_{i}g_{j} \tag{5.1}\] is defined as a (non-decoupled) order 2 Gaussian chaos. 
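For readers who want a computational handle on this object, the short simulation below draws samples of a finite-dimensional instance of the chaos (5.1); the \(5\times 5\) coefficient matrix and the sample size are arbitrary illustrative choices of ours and play no role in the theory.

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary illustrative coefficient matrix t = (t_{i,j}) with i, j <= 5.
t = rng.standard_normal((5, 5))

def chaos_samples(t, rng, n_samples=100_000):
    """Draw samples of the (non-decoupled) order-2 Gaussian chaos
    X_t = sum_{i,j} t_{i,j} g_i g_j, where g is a standard Gaussian vector."""
    g = rng.standard_normal((n_samples, t.shape[0]))
    return np.einsum("ni,ij,nj->n", g, t, g)  # per-sample value of g^T t g

X = chaos_samples(t, rng)
print("empirical mean of X_t:", X.mean())   # close to trace(t) = E[X_t]
print("trace(t):             ", np.trace(t))
```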
To lighten the notation, we denote by \(tg\) the sequence \(\left(\sum_{j\geq 1}t_{i,j}g_{j}\right)_{i\geq 1}\), by \(\langle\cdot,\cdot\rangle\) the dot product in \(\ell^{2}\), and by \(||\cdot||_{2}\) the corresponding norm. For \(t=(t_{i,j})_{i,j\geq 1}\), \(||t||_{HS}:=(\sum_{i,j\geq 1}t_{i,j}^{2})^{1/2}\) denotes the Hilbert-Schmidt norm, and we write \[||t||=\sup_{\sum_{j\geq 1}\alpha_{j}^{2}\leq 1}\Big(\sum_{i\geq 1}\big(\sum_{j\geq 1}\alpha_{j}t_{i,j}\big)^{2}\Big)^{1/2}=\sup\Big\{\sum_{i\geq 1,j\geq 1}\alpha_{j}\beta_{i}t_{i,j}:\sum_{j\geq 1}\alpha_{j}^{2}\leq 1,\ \sum_{i\geq 1}\beta_{i}^{2}\leq 1\Big\}.\] We define \(d_{\infty}(s,t)=||t-s||\), and \[Y_{t}^{*}:=\sum_{i\geq 1}\left(\sum_{j\geq 1}t_{i,j}g_{j}\right)^{2}=||tg||_{2}^{2}=\langle tg,tg\rangle=\sum_{i\geq 1}\sum_{j,k\geq 1}t_{i,j}t_{i,k}g_{j}g_{k}, \tag{5.2}\] \[Y_{t}:=Y_{t}^{*}-\mathds{E}Y_{t}^{*}=\sum_{i\geq 1}\sum_{j\neq k}t_{i,j}t_{i,k}g_{j}g_{k}+\sum_{i\geq 1}\sum_{j\geq 1}t_{i,j}^{2}\left(g_{j}^{2}-1\right), \tag{5.3}\] which is a Gaussian chaos of order 2. **Theorem 5.1**.: _For any set \(T\) with \(0\in T\), we have, for \(p\geq 1\),_ \[\left(\mathcal{E}\sup_{t\in T}|Y_{t}|^{p}\right)^{1/p}\leq L\gamma_{2,p}\left(T,d_{\infty}\right)\left(\gamma_{2,p}\left(T,d_{\infty}\right)+\sup_{t\in T}||t||_{HS}\right). \tag{5.4}\] _Remark 5.2_.: Gaussian random variables are \(\varphi\)-sub-Gaussian with \(\varphi(x)=x^{2}/2\). Throughout this section, we write \(\gamma_{2,p}(T,d)\) for \(\gamma_{\varphi,p}\) when \(\varphi(x)=x^{2}/2\). _Remark 5.3_.: When \(p=1\), our result coincides with that of Krahmer, Mendelson and Rauhut [13]. The primary step of the proof of Theorem 5.1 consists of the following propositions. **Proposition 5.4**.: _Consider independent standard Gaussian sequences \((g_{i})_{i\geq 1}\), \((g^{\prime}_{j})_{j\geq 1}\) and a given collection \(\mathcal{B}\) of double sequences \(b=(b_{i,j})_{i,j\geq 1}\). We have, for \(p\geq 1\),_ \[\mathsf{E}\sup_{b\in\mathcal{B}}|\sum_{i\neq j}b_{i,j}g_{i}g_{j}+\sum_{i\geq 1}b_{i,i}(g_{i}^{2}-1)|^{p}\leq 2^{p}\,\mathsf{E}\sup_{b\in\mathcal{B}}|\sum_{i,j\geq 1}b_{i,j}g_{i}g^{\prime}_{j}|^{p}. \tag{5.5}\] Proof.: For \(b=(b_{i,j})_{i,j\geq 1}\in\mathcal{B}\) and \(p\geq 1\), we have \[\mathsf{E}\sup_{b\in\mathcal{B}}|\sum_{i\neq j}b_{i,j}g_{i}g_{j}+\sum_{i\geq 1}b_{i,i}(g_{i}^{2}-1)|^{p}=\mathsf{E}_{g}\sup_{b\in\mathcal{B}}|\mathsf{E}_{g^{\prime}}\sum_{i,j}b_{i,j}(g_{i}+g^{\prime}_{i})(g_{j}-g^{\prime}_{j})|^{p}\] \[\leq\mathsf{E}_{g}\mathsf{E}_{g^{\prime}}\sup_{b\in\mathcal{B}}|\sum_{i,j}b_{i,j}(g_{i}+g^{\prime}_{i})(g_{j}-g^{\prime}_{j})|^{p}\] \[=\mathsf{E}\sup_{b\in\mathcal{B}}|\sum_{i,j}b_{i,j}(g_{i}+g^{\prime}_{i})(g_{j}-g^{\prime}_{j})|^{p}\] \[=2^{p}\,\mathsf{E}\sup_{b\in\mathcal{B}}\left|\sum_{i,j}b_{i,j}\frac{g_{i}+g^{\prime}_{i}}{\sqrt{2}}\frac{g_{j}-g^{\prime}_{j}}{\sqrt{2}}\right|^{p}\] \[=2^{p}\,\mathsf{E}\sup_{b\in\mathcal{B}}|\sum_{i,j\geq 1}b_{i,j}g_{i}g^{\prime}_{j}|^{p}.\] Here, \(\mathsf{E}_{g}\) and \(\mathsf{E}_{g^{\prime}}\) denote expectation taken with respect to the random variables \(g_{i}\) and \(g^{\prime}_{i}\), respectively. Jensen's inequality guarantees the inequality above. The last equality holds since the families \((g_{i}+g^{\prime}_{i})/\sqrt{2}\) and \((g_{i}-g^{\prime}_{i})/\sqrt{2}\) are independent sequences of standard Gaussian random variables, independent of each other.
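As a quick numerical sanity check of the decoupling inequality (5.5), one can compare both sides by Monte Carlo for a small, arbitrarily chosen collection \(\mathcal{B}\) and \(p=2\). The dimension, the three random coefficient matrices, and the sample size below are illustrative choices of ours; the simulation only exhibits the inequality on one example and is not part of the proof.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative collection B of three 5x5 coefficient matrices (arbitrary choice).
B = [rng.standard_normal((5, 5)) for _ in range(3)]
p, n_samples = 2, 200_000

g = rng.standard_normal((n_samples, 5))
g_prime = rng.standard_normal((n_samples, 5))

def chaos_centered(b, g):
    # sum_{i!=j} b_ij g_i g_j + sum_i b_ii (g_i^2 - 1)  =  g^T b g - trace(b)
    return np.einsum("ni,ij,nj->n", g, b, g) - np.trace(b)

def chaos_decoupled(b, g, g_prime):
    # sum_{i,j} b_ij g_i g'_j
    return np.einsum("ni,ij,nj->n", g, b, g_prime)

lhs = np.mean(np.max([np.abs(chaos_centered(b, g)) for b in B], axis=0) ** p)
rhs = 2 ** p * np.mean(np.max([np.abs(chaos_decoupled(b, g, g_prime)) for b in B], axis=0) ** p)
print(f"E sup_B |chaos|^p ~ {lhs:.2f}  <=  2^p E sup_B |decoupled|^p ~ {rhs:.2f}")
```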
Define \[Z_{t}=\sum_{i,j,k\geq 1}t_{i,j}t_{i,k}g_{j}g^{\prime}_{k}=\left\langle tg,tg^{ \prime}\right\rangle.\] **Proposition 5.5**.: _For \(r\geq 2\), let \(U_{r}:=\operatorname*{\mathsf{E}}\sup_{t\in T}||tg||_{2}^{r}\). Then, for \(p\geq 1\),_ \[\left(\operatorname*{\mathsf{E}}_{t\in T}|Z_{t}|^{p}\right)^{1/p}\leq LU_{2p} ^{1/2p}\gamma_{2,p}\left(T,d_{\infty}\right). \tag{5.6}\] Proof.: Without loss of generality, it might be assumed that \(T\) is finite. Consider an admissible sequence \((\mathcal{A}_{n})\) with \[\sup_{t\in T}\sum_{n\geq\lfloor\log p/\log 2\rfloor}2^{n/2}\Delta\left(A_{n}(t )\right)\leq 2\gamma_{2,p}\left(T,d_{\infty}\right),\] where the diameter \(\Delta\) is for the distance \(d_{\infty}\). For \(A\in\mathcal{A}_{n}\), consider an element \(t_{A,n}\in A\), and define as usual a chaining by \(\pi_{n}(t)=t_{A_{n}(t),n}\). Since \(0\in T\), without loss of generality, we may assume that \(\pi_{0}(t)=0\). We observe that \[Z_{\pi_{n}(t)}-Z_{\pi_{n-1}(t)}=\left\langle(\pi_{n}(t)\right. \left.-\pi_{n-1}(t))\,g,\pi_{n}(t)g^{\prime}\right\rangle\] \[\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad+ \left\langle\pi_{n-1}(t)g,(\pi_{n}(t)-\pi_{n-1}(t))\,g^{\prime}\right\rangle. \tag{5.7}\] Recalling that we consider each \(t\) as an operator on \(\ell^{2}\), let us denote by \(t^{*}\) its adjoint. Thus, \[\left\langle\left(\pi_{n}(t)-\pi_{n-1}(t)\right)g,\pi_{n}(t)g^{\prime}\right\rangle =\left\langle g,\left(\pi_{n}(t)-\pi_{n-1}(t)\right)^{*}\pi_{n}(t)g^{\prime} \right\rangle. \tag{5.8}\] Here, \(\left(\pi_{n}(t)-\pi_{n-1}(t)\right)^{*}\pi_{n}(t)g^{\prime}\) is the element of \(\ell^{2}\) obtained by applying the operator \(\left(\pi_{n}(t)-\pi_{n-1}(t)\right)^{*}\) to the vector \(\pi_{n}(t)g^{\prime}\). Let us now consider the r.v.s \(W=\sup_{t\in T}||tg||_{2}\) and \(W^{\prime}=\sup_{t\in T}||tg^{\prime}||_{2}\). Then \[\left|\left|\left(\pi_{n}(t)-\pi_{n-1}(t)\right)^{*}\pi_{n}(t)g^{ \prime}\right|\right|_{2}\leq\left|\left|\left(\pi_{n}(t)-\pi_{n-1}(t)\right)^ {*}\right|\right|\left|\left|\pi_{n}(t)g^{\prime}\right|\right|_{2}\] \[\leq\Delta\left(A_{n-1}(t)\right)W^{\prime}.\] It then follows from (5.8) that, conditionally on \(g^{\prime}\), the quantity \(\left\langle\left(\pi_{n}(t)-\pi_{n-1}(t)\right)g,\pi_{n}(t)g^{\prime}\right\rangle\) is a Gaussian r.v. \(G\) with \(\left(\mathsf{E}G^{2}\right)^{1/2}\leq\Delta\left(A_{n-1}(t)\right)W^{\prime}\). Thus, we obtain that for \(u\geq 1\), \[\mathsf{P}\left(\left|\left\langle\left(\pi_{n}(t)-\pi_{n-1}(t)\right)g,\pi_{ n}(t)g^{\prime}\right\rangle\right|\geq 2^{n/2}u\Delta\left(A_{n-1}(t)\right)W^{ \prime}\right)\leq\exp\left(-u^{2}2^{n}/2\right).\] Proceeding similarly for the second term in (5.7), we get \[\mathsf{P}\left(\left|Z_{\pi_{n}(t)}-Z_{\pi_{n-1}(t)}\right|\geq 2^{n/2}u \Delta\left(A_{n-1}(t)\right)(W+W^{\prime})\right)\leq 2\exp\left(-u^{2}2^{n}/2 \right).\] Using that \(Z_{\pi_{0}(t)}=0\), and proceeding just as in the proof of the generic chaining bound, we obtain that for \(u\geq L\), \[\mathsf{P}\left(\sup_{t\in T}\left|Z_{t}\right|\geq Lu\gamma_{2}\left(T,d_{ \infty}\right)(W+W^{\prime})\right)\leq L\exp\left(-u^{2}\right). \tag{5.9}\] In particular, \(R=\sup_{t\in T}\left|Z_{t}\right|/\left(W+W^{\prime}\right)\) satisfies \[\mathsf{P}\left(R\geq Lu\gamma_{2}\left(T,d_{\infty}\right)\right)\leq L\exp \left(-u^{2}\right). \tag{5.10}\] As a consequence, according to Lemma 4.4, we have, for any \(q\geq 1\), \[\left(\mathsf{E}R^{q}\right)^{1/q}\leq L\gamma_{2,p}(T,d_{\infty}). 
\tag{5.11}\] Notice that \(\sup_{t\in T}\left|Z_{t}\right|=R\left(W+W^{\prime}\right)\) and \(\mathsf{E}W^{2p}=EW^{\prime 2p}=U_{2p}\), we know that, by Cauchy-Schwarz inequality and (5.11), \[\left(\mathsf{E}\sup_{t\in T}\left|Z_{t}\right|^{p}\right)^{1/p} =\left(\mathsf{E}R^{p}(W+W^{\prime})^{p}\right)^{1/p}\] \[\leq\left(\mathsf{E}R^{2p}\right)^{1/2p}\left(\mathsf{E}(W+W^{ \prime})^{2p}\right)^{1/2p}\] \[\leq L\gamma_{2,p}(T,d_{\infty})(2^{2p-1}(\mathsf{E}W^{2p}+ \mathsf{E}W^{\prime 2p}))^{1/2p}\] \[\leq L\gamma_{2,p}(T,d_{\infty})(\mathsf{E}W^{2p})^{1/2p}=L\gamma_ {2,p}(T,d_{\infty})(U_{2p})^{1/2p}.\] (5.12) This finally yields Proposition 5.5. **Proposition 5.6**.: _We set \(V=\sup_{t\in T}||t||_{HS}\). Then for \(p\geq 1\),_ \[U_{2p}^{1/2p}\leq L\left(\gamma_{2,p}(T,d_{\infty})+V\right). \tag{5.13}\] Proof.: We have \(V^{2}=\sup_{t\in T}||t||_{HS}^{2}=\sup_{t\in T}\sum_{i,j\geq 1}t_{i,j}^{2}= \sup_{t\in T}\mathsf{E}Y_{t}^{*}\). For \(t\in T\), we have, for \(p\geq 1\), \[\left|\left|tg\right|\right|_{2}^{2p}=(Y_{t}^{*})^{p}=(Y_{t}+EY_{t}^{*})^{p} \leq\left(\left|Y_{t}\right|+V^{2}\right)^{p}\leq\left(\sup_{t\in T}\left|Y_{t }\right|+V^{2}\right)^{p}.\] Thus, \[U_{2p}\leq\mathsf{E}\left(\sup_{t\in T}\left|Y_{t}\right|+V^{2}\right)^{p} \leq 2^{p-1}\left(\mathsf{E}\sup_{t\in T}\left|Y_{t}\right|^{p}+V^{2p} \right). \tag{5.14}\] By Proposition 5.4 and Proposition 5.5, we have \[\mathsf{E}\sup_{t\in T}|Y_{t}|^{p}\leq 2^{p}\mathsf{E}\sup_{t\in T}|Z_{t}|^{p} \leq L^{p}(\gamma_{2,p}(T,d_{\infty}))^{p}U_{2p}^{1/2}. \tag{5.15}\] Plug (5.15) into (5.14), we know that \[U_{2p}\leq L^{p}((\gamma_{2,p}(T,d_{\infty}))^{p}U_{2p}^{1/2}+L^{p}V^{2p}.\] Hence, \[U_{2p}^{1/2}\leq L^{p}(\gamma_{2,p}(T,d_{\infty}))^{p}+L^{p}V^{p},\] namely, \[U_{2p}^{1/2p}\leq L\left(\gamma_{2,p}(T,d_{\infty})+V\right).\] This proves Proposition 5.6. Now, we are well prepared for the proof of Theorem 5.1. Proof of Theorem 5.1.: A combination of Proposition 5.4, Proposition 5.5, and Proposition 5.6 gives Theorem 5.1. ### Johnson-Lindenstrauss Lemma This section deals with the Johnson-Lindenstrauss lemma in \(\varphi\)-sub-Gaussian setting. **Definition 5.7**.: A random vector \(X\) is called a \(\varphi\)-sub-Gaussian random vector if all one-dimensional marginals of \(X\), i.e., the random variables \(\langle X,x\rangle\) for any \(x\in\mathbb{R}^{n}\), are \(\varphi\)-sub-Gaussian. The \(\varphi\)-sub-Gaussian norm of \(X\) is defined as \[\tau_{\varphi}(X):=\sup_{x\in\mathrm{S}^{n-1}}\tau_{\varphi}\left(\langle X,x \rangle\right),\] where \(S^{n-1}\) denotes the Euclidean unit sphere in \(\mathbb{R}^{n}\). **Lemma 5.8**.: _(Kozachenko and Troshki [12]) For \(\xi\in Sub_{\varphi^{*}}(\Omega)\) with mean zero, \(\eta=\xi^{2}\in Sub_{\psi^{*}}(\Omega)\) with \(\psi(x)=\varphi(\sqrt{x})\)._ **Lemma 5.9**.: _Assume that \(\Delta_{2}-\)condition holds for \(\varphi(\cdot)\). When \(T\) is finite, for any \(p\geq 1\), \(\gamma_{\varphi,p}(T,d)\leq K_{\varphi}\Delta(T)\varphi^{*(-1)}(\log\operatorname {card}T)\), where \(K_{\varphi}\) denotes a constant with respect to \(\varphi\)._ Proof of Lemma 5.9.: Since \(\Delta(A_{n}(t))=0\) when \(2^{2^{n}}\geq\operatorname{card}(T)\), we know that \(2^{n^{*}}\leq C_{1}\log(\operatorname{card}(T))\) holds for some \(n^{*}\) and \(C_{1}>1\), and \(\Delta(A_{n^{*}+1}(t))=0\). 
For that, we have for some \(M_{2}\in(0,1)\), \[\gamma_{\varphi,p}(T,d) \leq\sup_{t\in T}\sum_{n=0}^{n^{*}}\varphi^{*(-1)}(2^{n})\Delta(A_ {n}(t))\] \[\leq\Delta(T)\sum_{n=0}^{n^{*}}\varphi^{*(-1)}(2^{n})\] \[\leq\Delta(T)\sum_{n=0}^{n^{*}}(1-M_{2})^{(n^{*}-n)}\varphi^{*(-1 )}(2^{n^{*}})\] \[\leq C_{1}\Delta(T)\varphi^{*(-1)}\left(\operatorname{card}(T) \right)\sum_{n=0}^{n^{*}}(1-M_{2})^{n}\] \[=C_{1}\frac{1-(1-M_{2})^{n^{*}+1}}{M_{2}}\Delta(T)\varphi^{*(-1)} \left(\operatorname{card}(T)\right)\] \[\leq\frac{C_{1}}{M_{2}}\Delta(T)\varphi^{*(-1)}\left(\operatorname{card}(T) \right).\] **Theorem 5.10**.: _Let \(X\) be a set of \(N\) points in \(\mathbb{R}^{n}\) and \(\varepsilon\in(0,1)\). Consider an \(m\times n\) matrix \(A\) whose rows are independent, mean zero, isotropic, and \(\varphi^{*}\)-sub-Gaussian random vectors in \(\mathbb{R}^{n}\). Rescale \(A\) by defining the random projection_ \[\operatorname{P}:=\frac{1}{\sqrt{m}}A.\] _Assume that \(\varphi^{*(-1)}(\cdot)\) satisfies_ \[m\geqslant C_{\varphi}\varepsilon^{-2}\log(N),\] _where \(C_{\varphi}\) is an appropriately large constant with respect to \(\varphi\). Then, with a probability of at least_ \[\exp\left(-\frac{m\varepsilon^{2}}{c\Delta(T)}+C_{\varphi}\log(N)\right),\] _with \(c\) denoting a universal constant, the map \(\operatorname{P}\) preserves the distances between all points in \(X\) with error \(\varepsilon\), that is_ \[(1-\varepsilon)||x-y||_{2}\leqslant||Px-Py||_{2}\leqslant(1+\varepsilon)||x- y||_{2} \tag{5.16}\] _for all \(x,y\in X\)._ Proof of Theorem 5.10.: We first rewrite (5.16) as \[1-\varepsilon\leqslant||Pz||_{2}\leqslant 1+\varepsilon\quad\text{ for all }z\in\operatorname{T}, \tag{5.17}\] where \[\operatorname{T}:=\left\{\frac{x-y}{||x-y||_{2}}:x,y\in X\text{ distinct points }\right\}.\] To prove the (5.17), it is enough to prove \[1-\varepsilon\leqslant||Pz||_{2}^{2}\leqslant 1+\varepsilon\quad\text{ for all }z\in\operatorname{T}. \tag{5.18}\] Observe that \[\operatorname{\mathsf{P}}\left(\sup_{z\in T}\left|\frac{1}{m} \sum_{i=1}^{m}\left\langle X_{i},z\right\rangle^{2}-1\right|\leqslant\varepsilon\right) =\operatorname{\mathsf{P}}\left(\sup_{z\in T}\left|\sum_{i=1}^{m} \left\langle X_{i},z\right\rangle^{2}-m\right|\leqslant m\varepsilon\right)\] \[=\operatorname{\mathsf{P}}\left(\sup_{z\in T}\left|\sum_{i=1}^{m} \left\langle X_{i},z\right\rangle^{2}-m\mathrm{E}\left\langle X_{i},z\right \rangle^{2}\right|\leqslant m\varepsilon\right)\] \[=\operatorname{\mathsf{P}}\left(\sup_{z\in T}\left|\sum_{i=1}^{m} \left(\left\langle X_{i},z\right\rangle^{2}-\mathrm{E}\left\langle X_{i},z \right\rangle^{2}\right)\right|\leqslant m\varepsilon\right), \tag{5.19}\] where the second equality holds because of isotropicity. By assumption, the random variables \(\left\langle X_{i},z\right\rangle^{2}-1\) are independent. For \(\left\langle X_{i},z\right\rangle\in Sub_{\varphi^{*}}(\Omega)\), by Lemma 5.8, we know that \(\left\langle X_{i},z\right\rangle^{2}-\mathrm{E}\left\langle X_{i},z\right\rangle ^{2}\in\mathrm{Sub}_{\psi^{*}}(\Omega)\), where \(\psi(x)=\varphi(\sqrt{x})\). By Theorem 4.1, for \(u\geq\sqrt{2}\), we have, for \(\psi^{*}\)-sub-Gaussian process \(Z_{t}\), \[\operatorname{\mathsf{P}}\left(\sup_{t\in T}|Z_{t}|\geq c(\Delta_{\rho}(T)u+ \tilde{\gamma}_{\varphi}(T,\rho))\right)\leq\exp\left(-u\right),\] for some universal constant \(c>0\). 
Combining Lemma 5.9 and Theorem 3.16, we know that \[\tilde{\gamma}_{\varphi,p}(T,\rho)\leq K_{\varphi}\Delta_{\rho}(T)\varphi^{*(-1)} (\log\operatorname{card}T).\] Hence, for any \(z\in T\), \[\mathsf{P}\left\{\sup_{z\in\operatorname{T}}\left|\frac{1}{m} \sum_{i=1}^{m}\left\langle X_{i},z\right\rangle^{2}-1\right|>\varepsilon\right\} \leq\exp\left(-\frac{m\varepsilon-c\tilde{\gamma}_{\varphi}(T, \rho)}{c\Delta(T)}\right)\] \[\leq\exp\left(-\frac{m\varepsilon-cK_{\varphi}\Delta(T)\varphi^{* (-1)}(\log\operatorname{card}(T))}{c\Delta(T)}\right)\] \[\leq\exp\left(-\frac{m\varepsilon-c^{\prime}K_{\varphi}\Delta(T) \varphi^{*(-1)}(\log N)}{c\Delta(T)}\right)\] \[\leq\exp\left(-\frac{m\varepsilon^{2}-c^{\prime}K_{\varphi}\Delta (T)\log N}{c\Delta(T)}\right), \tag{5.20}\] where the second-to-last inequality holds because \(\operatorname{card}(T)\leqslant N^{2}\) according to the initial setting of \(T\) and the last inequality is based on the properties of \(\varphi^{-1}(\cdot)\). Therefore, for \(m\geqslant C_{\varphi}\varepsilon^{-2}\log(N)\) with sufficiently large constant \(C_{\varphi}\), The proof is complete. ### Convex signal recovery from \(\varphi\)-sub-Gaussian measurements This section develops some problems in signal recovery. We demonstrate a universal error bound for \(\varphi\)-sub-Gaussian measurements. To achieve the bound, we obtain a lower bound for the minimum conic singular value for a random matrix \(\mathbf{\Phi}\) that satisfies certain conditions, which is of independent interest. Some background material on signal recovery is presented first, and then we give the main theorem. **Definition 5.11**.: A set \(\mathbf{C}\subseteq\mathbb{R}^{n}\) is called a cone if \(\mathbf{C}=\theta\mathbf{C}\) for all \(\theta>0\). For a proper convex function \(f:\mathbb{R}^{n}\to\bar{\mathbb{R}}\), the descent cone \(\mathcal{D}(f,\mathbf{x})\) of the function \(f\) at a point \(\mathbf{x}\in\mathbb{R}^{n}\) is defined as \[\mathcal{D}(f,\mathbf{x})=\bigcup_{\theta>0}\{\mathbf{u}\in\mathbb{R}^{n}, \quad f(\mathbf{x}+\theta\mathbf{u})\leq f(\mathbf{x})\}.\] _Remark 5.12_.: The descent cone of a convex function is always a convex cone, though not necessarily closed. **Definition 5.13**.: For a \(m\times d\) matrix \(\mathbf{\Phi}\) and a cone \(\mathbf{C}\in\mathbb{R}^{n}\) not necessarily convex, the minimum singular value of \(\mathbf{\Phi}\) with respect to the cone \(\mathbf{C}\) is defined as \[\lambda_{min}(\mathbf{\Phi};\mathbf{C})=\inf_{\mathbf{u}\in\mathbf{C}\cap S^{ n-1}}||\mathbf{\Phi}\mathbf{u}||_{2},\] with \(S^{n-1}\) denoting the Euclidean unit sphere in \(\mathbb{R}^{n}\). Throughout this section, \(||\cdot||_{2}\) denotes the Euclidean \(\ell\)-2 norm. We briefly recall the framework for many convex optimization methods for recovering a structured signal from linear measurements. For an unknown but structured signal \(\mathbf{x}^{*}\in\mathbb{R}^{n}\), suppose we have observed a vector \(\mathbf{y}\in\mathbb{R}^{m}\) that consists of \(m\) linear measurements \(\mathbf{y}=\mathbf{\Phi}\mathbf{x}^{*}+\mathbf{e}\). It is assumed that \(\mathbf{\Phi}\) is a known \(m\times n\) sampling matrix of the form \(\mathbf{\Phi}=(\varphi_{1},\cdots,\varphi_{\mathbf{m}})^{t}\) with \(\varphi_{\mathbf{i}}\in\mathbb{R}^{n}\left(i=1,\cdots,m\right)\) independent and identically distributed random vectors, and \(\mathbf{e}\in\mathbb{R}^{m}\) is vector of unknown noises. It is expected to reconstruct the unknown \(\mathbf{x}^{*}\) via convex optimization. 
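Before stating the general program, it may help to see one concrete instance. The sketch below solves the noise-constrained \(\ell_{1}\)-minimization version of the recovery problem (the program (5.21) below, with \(f=||\cdot||_{1}\)) for a synthetic sparse signal. The use of the cvxpy package, the problem dimensions, the Gaussian sampling matrix, and the noise level are illustrative assumptions of ours rather than choices prescribed by the text.

```python
import numpy as np
import cvxpy as cp  # generic convex solver interface; an assumption of this sketch

rng = np.random.default_rng(2)

# Illustrative sizes: a 5-sparse signal in R^200 observed through m = 80 measurements.
n, m, sparsity, eta = 200, 80, 5, 0.1

x_star = np.zeros(n)
x_star[rng.choice(n, size=sparsity, replace=False)] = rng.standard_normal(sparsity)

Phi = rng.standard_normal((m, n)) / np.sqrt(m)          # Gaussian sampling matrix
noise = rng.standard_normal(m)
y = Phi @ x_star + eta * noise / np.linalg.norm(noise)  # noise of norm exactly eta

# minimize f(x) = ||x||_1  subject to  ||Phi x - y||_2 <= eta
x = cp.Variable(n)
problem = cp.Problem(cp.Minimize(cp.norm(x, 1)), [cp.norm(Phi @ x - y, 2) <= eta])
problem.solve()

print("recovery error ||x_eta - x*||_2 =", np.linalg.norm(x.value - x_star))
```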
For a proper convex function \(f:\mathbb{R}^{n}\rightarrow\bar{\mathbb{R}}\), the following convex program is usually studied: \[minimize\quad f(\mathbf{x})\quad subject\ to\quad||\mathbf{\Phi}\mathbf{x}- \mathbf{y}||_{2}\leq\eta \tag{5.21}\] for a specified bound \(\eta\) on the norm of the noise. The following proposition in Tropp [20] provides an error bound for convex recovery. **Proposition 5.14**.: _(Tropp [20]) For any solution \(\mathbf{x}_{\eta}\) to the the convex optimization problem described in (5.21), we have_ \[||\mathbf{x}_{\eta}-\mathbf{x}^{*}||_{2}\leq\frac{2\eta}{\lambda_{min}( \mathbf{\Phi},\mathcal{D}(f,\mathbf{x}^{*}))}. \tag{5.22}\] From the proposition 5.14, we know the task for bounding the error of convex signal recovery is to obtain the lower bound of \(\lambda_{min}(\mathbf{\Phi},\mathcal{D}(f,x^{*}))\). Mendelson [15] obtained a lower bound of the minimum conic singular value as a nonnegative empirical process, providing a powerful tool. Tropp [20] got the following result. **Proposition 5.15**.: _(Tropp [20]) A cone \(\mathbf{C}\subseteq\mathbb{R}^{n}\) is fixed. Suppose \(\varphi_{\mathbf{1}},\cdots,\varphi_{\mathbf{m}}\) are independent copied of random vector \(\varphi_{\mathbf{0}}\in\mathbb{R}^{n}\) and the sampling matrix \(\mathbf{\Phi}\) is of the form \(\mathbf{\Phi}=(\varphi_{\mathbf{1}},\cdots,\varphi_{\mathbf{m}})^{t}\). Then for any \(\xi>0\) and \(t>0\), with probability exceeding \(1-e^{-t^{2}/2}\)_ \[\begin{split}\lambda_{min}(\mathbf{\Phi};\mathbf{C})& =\inf_{u\in\mathbf{C}\cap S^{n-1}}\left(\sum_{i=1}^{m}|\left< \varphi_{\mathbf{i}},\mathbf{u}\right>|^{2}\right)^{1/2}\\ &\geq\xi\sqrt{m}Q_{2\xi}(\mathbf{C}\cap S^{n-1};\varphi_{ \mathbf{0}})-2W_{m}(\mathbf{C}\cap S^{n-1};\varphi_{\mathbf{0}})-\xi t\end{split} \tag{5.23}\] _Here,_ \[Q_{2\xi}(\mathbf{C}\cap S^{n-1};\varphi_{\mathbf{0}})=\inf_{u\in\mathbf{C} \cap S^{n-1}}\mathcal{P}\{|\langle\varphi_{\mathbf{0}},\mathbf{u}\rangle|\geq \xi\}\quad\text{for}\quad\xi>0,\] _and_ \[W_{m}(\mathbf{C}\cap S^{n-1};\varphi_{\mathbf{0}})=\mathcal{E}\sup_{u\in \mathbf{C}\cap S^{n-1}}\langle\frac{1}{\sqrt{m}}\sum_{i=1}^{m}\varepsilon_{i} \varphi_{\mathbf{i}},\mathbf{u}\rangle,\] _with \(\varepsilon_{i}\)\((i=1,\cdots,m)\) independent Rademacher random variables independent from \(\varphi_{\mathbf{i}}\)\((i=1,\cdots,m)\)._ With all these preliminaries, we are finally well prepared for the error bound for signal recovery under \(\varphi\)-sub-Gaussian measurements by Mendelson's Small Ball Method (see Mendelson[15] and Tropp [20] for more details about this technique). In this section, \(T=\{1,\cdots,m\}\) is a metric space endowed with the distance \(d(i,j)=\tau_{\varphi}(\varphi_{\mathbf{i}},\varphi_{\mathbf{j}})\) for \(1\leq i,j\leq m\). Some assumptions are described as follows. It is assumed that \(\varphi_{\mathbf{0}}\) is a random vector in \(\mathbb{R}^{n}\) satisfying: * \(\mathsf{E}\varphi_{\mathbf{0}}=0\); [Centering] * For some constant \(\alpha>0\), \(\mathsf{E}|\langle\varphi_{\mathbf{0}},\mathbf{u}\rangle|\geq\alpha\) for all \(\mathbf{u}\in S^{n-1}\); [Nondegeneracy] * \(\varphi_{\mathbf{0}}\) is a \(\varphi\)-sub-Gaussian random vector; [\(\varphi\)-sub-Gaussian marginals] * The eccentricity \(\mu:=\Delta(T)/\alpha\) of the distribution should be small. [Low eccentricity] Under these assumptions, the following theorem demonstrates a lower bound for the minimum conic singular value of a random matrix \(\mathbf{\Phi}\). 
**Theorem 5.16**.: _Suppose \(\mathbf{\Phi}=(\varphi_{\mathbf{1}}\cdots\varphi_{\mathbf{m}})^{\mathbf{t}}\) is an \(m\times d\) random matrix with \(\varphi_{\mathbf{i}}\)\((i=1,\cdots,m)\) independent and identically distributed copied of \(\varphi_{\mathbf{0}}\). Random vector \(\varphi_{\mathbf{0}}\) satisfies the above-mentioned conditions. For a cone \(\mathbf{C}\subset\mathbb{R}^{n}\) not necessarily convex, with probability exceeding \(1-e^{-t^{2}/2}\),_ \[\lambda_{min}(\mathbf{\Phi};\mathbf{C})\geq C_{1}\sqrt{m}\alpha\mu^{-2}-C_{2} \gamma_{\varphi,p}(T,d)+C_{3}\Delta(T)-\alpha t/3,\] _with \(C_{1},C_{2},C_{3}>0\) some universal constants._ Proof.: For a fixed cone \(\mathbf{C}\in\mathbb{R}^{n}\), Proposition 5.15 indicated that with probability at least \(1-e^{-t^{2}/2}\), for all \(\xi>0\) and \(t>0\), \[\lambda_{min}(\mathbf{\Phi};\mathbf{C})\geq\xi\sqrt{m}Q_{2\xi}(\mathbf{C}\cap S ^{n-1};\varphi_{\mathbf{0}})-2W_{m}(\mathbf{C}\cap S^{n-1};\varphi_{\mathbf{0} })-\xi t \tag{5.24}\] By Paley-Zygmund inequality, we know that for any \(\mathbf{u}\in\mathbf{C}\cap S^{n-1}\), \[\mathsf{P}\{|\langle\varphi_{\mathbf{0}},\mathbf{u}\rangle|\geq 2\xi\}\geq \frac{[(\mathsf{E}|\langle\varphi_{\mathbf{0}},\mathbf{u}\rangle|-2\xi)\lor 0 ]^{2}}{\mathsf{E}|\langle\varphi_{\mathbf{0}},\mathbf{u}\rangle|^{2}}.\] By \(\varphi\)-sub-Gaussian marginal condition, setting \(p=2\) in Corollary 4.2, we know that \[\mathsf{E}|\langle\varphi_{\mathbf{0}},\mathbf{u}\rangle|^{2}\leq C_{4}\Delta ^{2}(T).\] for some universal constant \(C_{4}>0\). With the assumption that \(\mathsf{E}|\langle\varphi_{\mathbf{0}},\mathbf{u}\rangle|\geq\alpha\), we know that for \(\xi<\alpha/2\), \[Q_{2\xi}(\mathbf{C}\cap S^{n-1};\varphi_{\mathbf{0}})=\inf_{u\in\mathbf{C} \cap S^{n-1}}\mathsf{P}\{|\langle\varphi_{\mathbf{0}},\mathbf{u}\rangle|\geq \xi\}\geq C_{5}\frac{(\alpha-2\xi)^{2}}{\Delta^{2}(T)}. \tag{5.25}\] The following calculation implies that for random vector \(\mathbf{h}=\frac{1}{\sqrt{m}}\sum_{i=1}^{m}\varepsilon_{i}\varphi_{\mathbf{i}}\), \(\langle\mathbf{h},\mathbf{u}\rangle\) is a \(\varphi\)-sub-Gaussian random variable for each \(\mathbf{u}\in\mathbb{R}^{n}\). For \(\lambda>0\), \[\mathsf{E}\exp\left(\lambda\langle\mathbf{h},\mathbf{u}\rangle\right) =\mathsf{E}\exp\left(\frac{\lambda}{\sqrt{m}}\sum_{i=1}^{m} \varepsilon_{i}\langle\varphi_{\mathbf{i}},\mathbf{u}\rangle\right)\] \[=\mathsf{E}_{X}\mathsf{E}_{\varepsilon}\exp\left(\frac{\lambda}{ \sqrt{m}}\sum_{i=1}^{m}\varepsilon_{i}\langle\varphi_{\mathbf{i}},\mathbf{u} \rangle\right)\] \[=\frac{1}{2}\mathsf{E}_{X}\exp\left(\frac{\lambda}{\sqrt{m}}\sum_ {i=1}^{m}\langle\varphi_{\mathbf{i}},\mathbf{u}\rangle\right)+\frac{1}{2} \mathsf{E}_{X}\exp\left(-\frac{\lambda}{\sqrt{m}}\sum_{i=1}^{m}\langle\varphi_ {\mathbf{i}},\mathbf{u}\rangle\right)\] \[\leq\frac{1}{2}\exp\left(\varphi\left(\frac{\lambda}{\sqrt{m}} \tau_{\varphi}(\sum_{i=1}^{m}\langle\varphi_{\mathbf{i}},\mathbf{u}\rangle) \right)\right)+\frac{1}{2}\exp\left(\varphi\left(\frac{\lambda}{\sqrt{m}}\tau_ {\varphi}(-\sum_{i=1}^{m}\langle\varphi_{\mathbf{i}},\mathbf{u}\rangle)\right)\right)\] \[=\exp\left(\varphi\left(\frac{\lambda}{\sqrt{m}}\tau_{\varphi}( \sum_{i=1}^{m}\langle\varphi_{\mathbf{i}},\mathbf{u}\rangle)\right)\right) \leq\exp\left(\varphi\left(\frac{\lambda}{\sqrt{m}}\sum_{i=1}^{m}\tau_{\varphi }(\langle\varphi_{\mathbf{i}},\mathbf{u}\rangle)\right)\right),\] with the last two inequalities using the basic property of the norm. 
The subscripts \(\varepsilon\) and \(X\) of expectation are intended to remind us about the sources of randomness used in taking these expectations. We can also know from the calculation above that \[\tau_{\varphi}(\langle\mathbf{h},\mathbf{u}\rangle)=\tau_{\varphi}\left(\frac{ 1}{\sqrt{m}}\sum_{i=1}^{m}\varepsilon_{i}\langle\varphi_{\mathbf{i}},\mathbf{u }\rangle\right)\leq\frac{1}{\sqrt{m}}\sum_{i=1}^{m}\tau_{\varphi}\left( \langle\varphi_{\mathbf{i}},\mathbf{u}\rangle\right).\] Then we get for \(\mathbf{u},\mathbf{v}\in\mathbb{R}^{n}\), \[\tau_{\varphi}(\left\langle\mathbf{h},\mathbf{u}-\mathbf{v}\right\rangle)\leq \frac{1}{\sqrt{m}}\sum_{i=1}^{m}\tau_{\varphi}\left(\left\langle\varphi_{ \mathbf{i}},\mathbf{u}-\mathbf{v}\right\rangle\right).\] According to Theorem 4.1, we know that \[W_{m}(\mathbf{C}\cap S^{n-1};\varphi_{\mathbf{0}})=\mathsf{E}\sup_{u\in \mathbf{C}\cap S^{n-1}}\langle\mathbf{h},\mathbf{u}\rangle\leq C_{6}\gamma_{ \varphi,p}(T,d)+C_{7}\Delta(T), \tag{5.26}\] for some constant \(C_{6},C_{7}>0\). Combing (5.24), (5.25) and (5.26), we obtain that for \(\xi<\alpha/2\), with probability at least \(1-e^{-t^{2}/2}\), \[\lambda_{min}(\mathbf{\Phi};\mathbf{C})\geq C_{8}\xi\sqrt{m}\frac{(a-2\xi)^{2 }}{\Delta^{2}(T)}-C_{9}\gamma_{\varphi,p}(T,d)+C_{10}\Delta(T)-\xi t\] for some for some constant \(C_{8},C_{9},C_{10}>0\). Select \(\xi=\alpha/3\), and then we get,with probability at least \(1-e^{-t^{2}/2}\), \[\lambda_{min}(\mathbf{\Phi};\mathbf{C})\geq C_{11}\sqrt{m}\frac{\alpha^{3}}{ \Delta^{2}(T)}-C_{12}\gamma_{\varphi,p}(T,d)+C_{13}\Delta(T)-\alpha t/3. \tag{5.27}\] Simplify this result by eccentricity \(\mu=\Delta(T)/\alpha\), and the desired theorem is proved. With Theorem 5.16 in hand, we immediately have an error bound for signal recovery from \(\varphi\)-sub-Gaussian measurements. **Theorem 5.17**.: _Suppose \(\mathbf{x}^{*}\) is a signal in \(\mathbb{R}^{n}\), and \(\mathbf{\Phi}=(\varphi_{\mathbf{1}}\cdots\varphi_{\mathbf{m}})^{\mathbf{t}}\) is an \(m\times d\) random matrix with \(\varphi_{\mathbf{i}}\;(i=1,\cdots,m)\) independent and identically distributed copied of \(\varphi_{\mathbf{0}}\). Random vector \(\varphi_{\mathbf{0}}\) satisfies the four conditions, namely centering, nondegeneracy, \(\varphi\)-sub-Gaussian marginals, and low eccentricity. \(\mathbf{y}=\mathbf{\Phi}\mathbf{x}^{*}+\mathbf{e}\) is a vector of measurements in \(\mathbb{R}^{m}\), where \(\mathbf{e}\) is noise. Suppose \(||\mathbf{e}||\leq\eta\) and \(\mathbf{x}_{\eta}\) is any solution to the problem (5.21), then with probability more than \(1-e^{-t^{2}/2}\), we have_ \[||\mathbf{x}_{\eta}-\mathbf{x}^{*}||_{2}\leq\frac{2\eta}{\left(C\sqrt{m} \frac{\alpha^{3}}{\Delta^{2}(T)}-C^{\prime}\gamma_{\varphi,p}(T,d)+C^{\prime \prime}\Delta(T)-\alpha t/3\right)\lor 0},\] _with \(C,C^{\prime},C^{\prime\prime}\) some universal constants._ Proof.: A combination of Proposition 5.14 and Theorem 5.16 completes the proof. ## 6. Discussions This work presents the first generic chaining argument for \(\varphi\)-sub-Gaussian processes as a generalization of Gaussian processes, and it has been demonstrated that the results obtained through the chaining method are superior to the Dudley bound in this space. We also attempt to extend the recent advances building on Talagrand's functional in this paper, for instance, the truncated version of the \(\gamma\)-functional attributed to Krahmer, Mendelson and Rauhut [13]. 
Moreover, we verify the growth condition and explore partitioning schemes for the distribution-dependent Talagrand-type \(\gamma\)-functional. On the other hand, lower bounds for the expectation of the supremum of \(\varphi\)-sub-Gaussian processes remain an interesting direction. Recently, the so-called Bernoulli conjecture was solved by Bednorz and Latala [2], and later Bednorz and Martynek [3] obtained a more general result. These authors established decomposition theorems underlying lower bounds for the Bernoulli process, empirical processes, and related processes. Building on these works, it would be interesting to study a decomposition theorem for the canonical process \(\sum_{i\geq 1}t_{i}\xi_{i}\), where \(\{\xi_{i}\}_{i\geq 1}\) is a sequence of \(\varphi\)-sub-Gaussian random variables.

## Acknowledgments

This work was supported by the National Natural Science Foundation of China (No. 12071257 and No. 11971267); the National Key R&D Program of China (No. 2018YFA0703900 and No. 2022YFA1006104); and the Shandong Provincial Natural Science Foundation (No. ZR2019ZD41).
In the study of the suprema of stochastic processes, Talagrand's chaining functionals and his generic chaining method are closely tied to the distribution of the process. In this paper, we construct a Talagrand-type functional adapted to general distributions and, using the generic chaining method, obtain upper bounds on all $p$-th moments of the supremum of the process. As consequences, we obtain the Johnson-Lindenstrauss lemma, upper bounds on the $p$-th moments of suprema of order-2 Gaussian chaoses, and convex signal recovery in this setting.
2310.00483
Prompting Code Interpreter to Write Better Unit Tests on Quixbugs Functions
Unit testing is a commonly-used approach in software engineering to test the correctness and robustness of written code. Unit tests are tests designed to test small components of a codebase in isolation, such as an individual function or method. Although unit tests have historically been written by human programmers, recent advancements in AI, particularly LLMs, have shown corresponding advances in automatic unit test generation. In this study, we explore the effect of different prompts on the quality of unit tests generated by Code Interpreter, a GPT-4-based LLM, on Python functions provided by the Quixbugs dataset, and we focus on prompting due to the ease with which users can make use of our findings and observations. We find that the quality of the generated unit tests is not sensitive to changes in minor details in the prompts provided. However, we observe that Code Interpreter is often able to effectively identify and correct mistakes in code that it writes, suggesting that providing it runnable code to check the correctness of its outputs would be beneficial, even though we find that it is already often able to generate correctly-formatted unit tests. Our findings suggest that, when prompting models similar to Code Interpreter, it is important to include the basic information necessary to generate unit tests, but minor details are not as important.
Vincent Li, Nick Doiron
2023-09-30T20:36:23
http://arxiv.org/abs/2310.00483v1
# Prompting Code Interpreter to Write Better Unit Tests on Quixbugs Functions ###### Abstract Unit testing is a commonly-used approach in software engineering to test the correctness and robustness of written code. Unit tests are tests designed to test small components of a codebase in isolation, such as an individual function or method. Although unit tests have historically been written by human programmers, recent advancements in AI, particularly LLMs, have shown corresponding advances in automatic unit test generation. In this study, we explore the effect of different prompts on the quality of unit tests generated by Code Interpreter, a GPT-4-based LLM, on Python functions provided by the Quixbugs dataset, and we focus on prompting due to the ease with which users can make use of our findings and observations. We find that the quality of the generated unit tests is not sensitive to changes in minor details in the prompts provided. However, we observe that Code Interpreter is often able to effectively identify and correct mistakes in code that it writes, suggesting that providing it runnable code to check the correctness of its outputs would be beneficial, even though we find that it is already often able to generate correctly-formatted unit tests. Our findings suggest that, when prompting models similar to Code Interpreter, it is important to include the basic information necessary to generate unit tests, but minor details are not as important. _Keywords: Large Language Models, Unit Testing, Code Interpreter, Quixbugs, Prompting_ ## 1 Introduction In software engineering, testing the correctness of written code, especially before deployment, is of utmost importance, since it greatly reduces the possibility of unexpected errors and crashes. A common approach to software testing is _unit testing_, in which code is broken down into smaller components whose correctness can be tested individually. Often, this is done by individually testing a _focal method_ or a _focal class_ in isolation. The advantage of such an approach is that breaking down code into smaller components reduces its complexity, making it easier for human programmers to construct a comprehensive unit test suite that includes a diverse set of edge cases. Furthermore, it allows human programmers to more easily pinpoint the location and cause of errors and discrepancies between the expected and actual output of the code, thus facilitating the debugging process. However, writing unit tests is often a time-consuming process that therefore demands a large portion of a developer's time and energy. In recent years, with the rise of Large Language Models (LLMs) such as ChatGPT [7], there has been an increasing focus on the application of LLMs to the task of writing unit tests, as they have the potential to drastically reduce the time necessary to properly and sufficiently test written code before deployment. Therefore, the study of the unit test generation capabilities of LLMs has the potential to provide a valuable tool for developers and to greatly increase the speed at which developers can produce correct code that handles edge cases well. ### Related Work In recent years, several previous works have attempted to tackle the problem of automatic unit test generation, with many opting to use LLMs for this purpose. For instance, AthenaTest [9] and A3Test [1] both use transformer-based language models to generate unit tests. 
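As a concrete illustration of the artifact that these systems aim to produce, the sketch below shows a small unit test suite for a simple Python function. The `gcd` implementation and the chosen test values are illustrative examples of ours; they are not taken verbatim from the QuixBugs dataset or from the output of any of the cited tools.

```python
import unittest


def gcd(a, b):
    """Illustrative focal function: Euclid's algorithm for the greatest
    common divisor of two non-negative integers."""
    while b != 0:
        a, b = b, a % b
    return a


class TestGcd(unittest.TestCase):
    def test_typical_inputs(self):
        self.assertEqual(gcd(12, 18), 6)
        self.assertEqual(gcd(35, 21), 7)

    def test_edge_cases(self):
        # Zero and equal arguments are the kinds of edge cases a good suite covers.
        self.assertEqual(gcd(0, 5), 5)
        self.assertEqual(gcd(9, 9), 9)


if __name__ == "__main__":
    unittest.main()
```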
Many previous methods, such as TestPilot (based on Codex) [8], ChatUnTest [10], and ChatTester [11] (the latter two based on ChatGPT), make use of an iterative algorithm, in which a code generation model is initially prompted to generate unit testing code, and if there are errors in the generated code, it is repeatedly prompted with the goal of inducing it to fix the errors. Non-iterative methods include differential prompting [3], in which the model is prompted to find failing test cases for a given focal method by generating multiple reference implementations and finding the test cases in which the method under test and the reference implementations produce different results. **Common Failure Modes of Generated Unit Tests.** Although unit tests generated by LLMs have the advantage of being less time-consuming to generate than those written by humans, they have the disadvantage of being more likely to contain errors that prevent successful compilation of the code. Indeed, such syntax and compilation errors are a significant source of ineffective unit testing code [8, 10, 11]. Other sources of ineffective unit tests include the unit testing code running for longer than the enforced timeout or asserting incorrect values, the latter of which may cause correct focal functions to be marked as incorrect and incorrect focal functions to be marked as correct [8]. However, due to the prevalence of compilation errors and the necessity of compilable code, it is imperative that a reliable method of fixing or correcting these errors be found. **Prompting.** Several components of the prompts given to LLMs to generate unit tests have been investigated by previous works. Due to the prevalence of iterative methods in the literature [8, 10, 11], the investigation of these methods sheds much light on the variety of variables, enumerated below, that may be modified to alter the quality of the generated unit tests. Iterative methods work by first generating an initial prompt, and then subsequently reprompting the model to correct errors or otherwise achieve a better result. Even though not all methods are iterative, we will, for the sake of readability and ease of characterization, simply treat them as iterative methods that only generate an initial prompt, without any reprompting. 1. _Amount of Code Context Given in the Initial Prompt._ It is important that the user provide the model with enough information (_i.e._ code context, such as the definition of the focal function or focal method, the signature of the focal function, etc.) to write proper unit tests, and in general, it is better to give more information, while keeping in mind the limited size of the model's context window [8, 9]. In particular, it is common practice to include information about the focal method's body in the initial prompt, whether that information is included directly in the prompt itself [9, 10] or indirectly, such as by using it to write an NL (natural language) description of the focal method's intention, which is then used as the actual prompt [11, 3]. It follows from the evidence in previous studies that this leads to better results. Aside from the information about the focal method's signature and body, other relevant pieces of information include the _focal class_ (the class that the focal method is a part of, if any), the fields and methods of the focal class, and the _dependencies_ of the focal method (the functions called by the focal method).
Here, there is less consensus among previous studies. Some include that information in the initial prompt directly, whether it be with dependencies [10] or without [9]; some include it indirectly (via the generation of an intermediate NL prompt describing the intention) [11]; and others do not include it at all, in the initial prompt (although they may or may not include it in subsequent prompts) [3, 8]. 2. _Code Documentation._ TestPilot [8] also draws upon the documentation of the focal methods used. In particular, it, at times, prompts the model with doc comments (NL comments in the documentation that describe the focal method and its intention) and examples of the focal method's usage. However, the drawback of this approach is that it is only possible when there exists documentation from which to draw this information. For many focal methods, the relevant documentation does not exist. However, we may instead inquire about the effectiveness of including documentation information of the dependencies, especially if the dependencies include functions from well-known and widely-used libraries. Perhaps this may be an effective strategy. Alternatively, if a focal method or one or more of its dependencies does not have human-written documentation available online, it may possibly be beneficial to prompt the model (or a different model) to write documentation-style comments and usage examples for those functions itself, and then prompt the model with the output. Even though past studies have explored the possibility of having the model write an NL comment describing the focal method's purpose [11], the question of how the other prompting methods would affect the quality of the generated unit tests remains, though it is beyond the scope of this study. 3. _Followup Prompting._ In iterative methods, it is important to design the subsequent prompts to the model to maximize their effectiveness. Due to the prevalence of compilation errors in the generated unit test code, the primary objective of reprompting is to lead the model to repair errors in its code. In the literature, we have found three distinct factors that previous studies have experimented with: (1) the amount of information, or additional information, about the code context to include, (2) whether error messages are directly fed back into the model, or whether they are converted to feedback given in NL, and (3) the model involved in reprompting, if any. With respect to information included in subsequent prompts, previous models have either included only information related to the error message [11, 10], even for general coding applications outside of unit testing [5], or opted to include additional code context information due to finding it to be a more effective strategy [8]. Of the studies whose proposed reprompting algorithms include only the information about the error messages, some directly include the error message as part of the prompt [10, 11]. Other studies experiment with using LLMs or human feedback to turn the error message into NL before reprompting the code-generating model, albeit on general coding tasks [5]. If a subsequent prompt is constructed by taking the error message, as is, and taking that as part of the subsequent prompt, then it does not invoke an LLM to generate that prompt. However, evidence suggests that using more advanced models, such as GPT-4, to turn the error message into an NL explanation of the error may be more effective [5]. 
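The iterative prompting pattern surveyed above can be sketched as a simple loop. The sketch below is only an illustrative skeleton of that pattern, not the algorithm of TestPilot, ChatUnTest, or ChatTester; `query_llm` and `run_test_suite` are hypothetical stand-ins for a model API call and a test harness, supplied by the caller.

```python
def generate_unit_tests(focal_source, query_llm, run_test_suite, max_rounds=3):
    """Sketch of an iterative unit-test generation loop.

    query_llm(prompt) is assumed to return generated test code as a string;
    run_test_suite(code) is assumed to return a pair (ok, error_message).
    Both are hypothetical callables supplied by the caller.
    """
    prompt = (
        "Write a Python unittest suite for the following focal function. "
        "Cover typical inputs and edge cases.\n\n" + focal_source
    )
    test_code = query_llm(prompt)
    for _ in range(max_rounds):
        ok, error_message = run_test_suite(test_code)
        if ok:
            break
        # Follow-up prompt: feed the raw error message (and, optionally, extra
        # code context) back to the model, as in the iterative methods above.
        prompt = (
            "The test suite below fails with the following error.\n"
            "Error:\n" + error_message + "\n\nTest suite:\n" + test_code +
            "\n\nPlease return a corrected test suite."
        )
        test_code = query_llm(prompt)
    return test_code
```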
**Prompt Factors.** Through a combination of reviewing the literature and brainstorming independently, we compile a list of factors that may cause prompts to result in different outputs, which we present in Appendix A. It is our hope that future researchers may find this list useful during literature review; however, we include it in the appendix for the sake of readability. **The Present Study.** Although there exist a variety of different factors, such as model size, training data quality, and code quality, that affect the quality of generated unit tests, the present study focuses on the effect of prompting on the quality of the generated tests. There are several reasons for this. Firstly, the prompt is one of the elements of the model over which the user is able to exercise a great deal of control. Therefore, our findings may be applied by any user, rather than only those with access to the internals of the model and with the ability to change them. Secondly, the ease with which users can change prompts means that they are able to tailor the prompts to suit their own unique needs and preferences. Thirdly, a study of the kinds of prompts that are effective in eliciting high-quality unit tests may provide useful insight into the nature of the model, since it may elucidate helpful general principles for eliciting high-quality responses. Such studies are also relatively uncommon in the literature. In the present study, we investigate the ability of Code Interpreter [6]1, a GPT-4 model with the ability to generate code, to generate unit tests for functions in the Quixbugs dataset [4]. In particular, we focus on how to engineer the prompt to achieve optimal results. We find evidence to suggest that, when prompt-engineering with Code Interpreter, the quality of the model's outputs, according to multiple metrics, is not sensitive to changes in minor, less relevant details in the prompt. Therefore, our findings suggest that users of Code Interpreter need not worry about minor details, so long as the basic information necessary for the model to perform its task is contained in the prompt. Footnote 1: Note that the usage is capped as with the default version of GPT-4 (the message displayed is that the usage of GPT-4 has reached the limit), and that the user interface ([https://chat.openai.com/?model=gpt-4-code-interpreter](https://chat.openai.com/?model=gpt-4-code-interpreter), as of August 2, 2023) classifies it under GPT-4, despite the blog post calling the underlying model a ChatGPT model, so we assume it to be a GPT-4 model. ## 2 Methodology The purpose of this study is to investigate how using different prompts affects the quality of generated unit tests. Therefore, we first generate the contents of the prompt that is fed into Code Interpreter, and then evaluate the output. Rather than giving subsequent prompts after the initial prompt, we only give the model an initial prompt and evaluate the output from that prompt. This non-iterative workflow of giving one prompt and receiving one reply per conversation simplifies the evaluation process and allows us greater freedom in experimenting with having the model regenerate its response to the same prompt multiple times, thus giving a more balanced and comprehensive view of its response to any given prompt. ### Prompt Generation When generating the content of the prompts, we consider the following dimensions along which the prompts may vary (for more details, see Appendices B and C): 1. 
_Format of Code Context._ We experiment by giving it the code context in either one of two formats: NL format, as a description of the function, or as the code of the focal function itself. The NL description is generated by Code Interpreter itself from the code of the incorrect implementation of the function body and simply copied and pasted into the prompt for generating the unit tests. In order to prevent the model from using that code when generating unit tests, we ask it to generate unit tests in a separate conversation window. The code itself comes from the Quixbugs dataset [4], which gives both a correct and incorrect implementation of each focal function. Because unit tests are conducted for the purpose of finding implementation errors, we give the incorrect implementation of the function when we give context in code format. All code is given in Python. Regardless of the format of the code context, we always include the function signature, which includes the function name and the names of the inputs. 2. _Number of Example Unit Tests Provided._ We experiment with giving the model formatting examples of zero, one, or two separate unit test cases. Regardless of the number of unit test case examples provided, we always provide adequate instruction for the expected format of the unit test cases, and in the prompt, we encourage the model to generate unit tests that are comprehensive and cover edge cases. 3. _Different Focal Functions._ Although it would be ideal to prompt with all of the focal functions in the dataset, we removed some for various reasons, such as: to ensure that the remaining functions were amenable to testing solely their inputs and outputs (_i.e._ they did not require testing of the intermediate processes, as was the case in functions such as depth- and breadth-first search, where the order of search is relevant), to filter out those for which a given input could have multiple correct outputs, to exclude those for which we suspected that the given correct implementation may not have been correct, to avoid functions whose inputs or outputs contained formatting not compatible with the testing framework producing accurate results (such as difficult-to-parse tuple or string expressions), etc. The subset of functions that remains can be found in Appendix D. 4. _Miscellaneous NL Comments._ We test whether the model produces better-written unit tests and catches the mistake in the incorrect implementation more often if the prompt contains comments such as "You are an expert programmer", which thus creates two distinct possibilities for the kinds of NL comments in the prompt. To see the code that was used to generate the prompts, and to view the prompt-generation process in more detail, please refer to Appendices B and C. We probe the model using every combination of the above 4 dimensions to generate prompts. For each prompt, we sample the model's output a total of 5 times. In cases when the model's output length exceeds the length of the preset token limit for a single response, we simply allow it to continue generating until it finishes its response. We collect data from August 1-16, 2023, inclusive. ### Output Evaluation When evaluating the output produced by the model, we check several components of the output: 1. _Correctness of Format._ We check whether the format, whether given as a downloadable file or an embedded code block, conforms to the format specified in the prompt. 
If it does not, then we do not use those test cases, as they are incompatible with the testing framework. Therefore, all data about the provided test cases comes solely from those generated with the correct format. 2. _Whether the Mistake is Corrected._ We observe that the model will sometimes catch the mistake in the incorrect implementation. We consider the model to have corrected the mistake if and only if it either correctly re-implements the function or it points out the error and the exact changes necessary to fix it. 3. _Correctness of Test Cases._ We check whether the expected output that the model gives in the test cases matches the output of the correct implementation of the function. If so, then we consider the test case to be correct. 4. _Coverage of Test Cases._ We examine whether the given test cases are _failing_, which we define as a test case in which the correct and incorrect implementations give different outputs. Therefore, failing test cases are those that are capable of uncovering errors in the incorrect implementation, if the correct output is known. Note that whether a test case is failing depends only on its input, not on whether the expected output given in the test case is correct. 5. _Correct Coverage._ Because it is optimal for failing test cases to also be correct, we also combine the above two points by checking which test cases are both correct and failing. _Error Handling._ When the function throws an error as a result of the input from a test case, we handle it as follows: if the correct implementation throws an error, regardless of the behavior of the incorrect implementation, then we assume that the input does not satisfy the function's precondition, which means the function's behavior is undefined. Therefore, we consider the test case to be correct, since any output is correct, and not failing, because causing undefined behavior does not reveal the incorrectness of the incorrect implementation. If the correct implementation does not throw an error, but the incorrect implementation throws, then we consider the test case to be a failing test case. ## 3 Results We present data about all of the test cases, as a whole, in Table 1. Based on the data, in order to have at least 10 expected correct failing test cases, it is advisable to resample at least 4 times from the same prompt. Upon performing t-tests on the resultant data, we find that, with a few exceptions, between any two types of prompt, the variability from resampling and from the diversity of focal functions was substantially greater than the variability arising from differences in prompting style. In other words, for most prompts, there was no significant evidence of a difference in any of the measured quantities arising from changes to the prompting style2. Footnote 2: The exceptions are that prompts that included the code, did not include miscellaneous NL comments, and included 1 or 2 output examples showed significantly more correctly formatted outputs than prompts that included an NL function description, miscellaneous NL comments, and no output examples (p = 0.000039 if the former prompt has 1 example; p = 0.000042 if the former prompt has 2 examples, which is significant at the alpha = 0.05 level). 
However, despite the low p-values of these differences, we do not focus much attention on their significance because the mean difference is small, and because we do not believe that this will bring much of a change to the end user, due to it being a difference in only a single metric. However, despite this, in general, we find that prompts that give code context directly (_i.e._ as code), do not include miscellaneous NL comments, and include 2 output examples are more generally associated with better performance metrics, while prompts that include the code context as an NL description, include miscellaneous NL comments, and do not have output examples have the opposite effect, though we note that the difference is not large. Thus, we advise that future Code Interpreter users employ the former prompting style when creating prompts. For the complete data collected, please see Appendix E. \begin{table} \begin{tabular}{l||c c c c} \hline \hline  & Mean No. of TCs & St. Dev. in No. of & Mean Frac. & St. Dev. in \\  & per Response & TCs per Response & of TCs & Frac. of TCs \\ \hline CFORM & 7.78 & 3.95 & 1.00 & 0.00 \\ CORR & 6.13 & 4.18 & 0.79 & 0.54 \\ FAIL & 4.43 & 3.72 & 0.57 & 0.48 \\ CORR and FAIL & 3.11 & 3.59 & 0.40 & 0.46 \\ \hline \hline \end{tabular} \end{table} Table 1: Data collected about the test cases together, as a whole. The term fraction refers to the number of test cases with the given characteristic, per correctly-formatted test case. For example, the mean fraction of test cases per response for correct test cases refers to the proportion of test cases that were correct (on average, considering only correctly-formatted responses), which is computed by first aggregating the data and then averaging, rather than in the opposite order. Abbreviations used: CORR: correct test cases; FAIL: failing test cases; CFORM: correctly formatted; No.: number; frac.: fraction; st. dev.: standard deviation; TC: test case. ### Observations In addition to the data above, we also make several qualitative observations about the responses produced by Code Interpreter, especially in regards to the format of the responses generated. Firstly, we observe that, when the model generates test cases that are not in the format that we specified, it is often in a similar, but more standard, format. In fact, the model response will sometimes point out that the format we specify is nonstandard JSON. Therefore, it appears that the model has an inclination towards working with more standard formats, suggesting that it would be beneficial to ask for standard, rather than nonstandard, formats, unless there is strong reason to use a nonstandard format, such as testing the model's ability to generalize to nonstandard formats that do not appear often in the training data. Secondly, the model is often not able to format test cases with multiple or nested sets of brackets correctly. Often, this happens in the test cases for sorting functions, since their input is a list of numbers (_i.e._ [1, 2, 3]), rather than multiple numbers (_i.e._ 1, 2, 3). More generally, the model often produces test cases that are marked as incorrect by the testing framework, but would have been correct if they were correctly formatted. Thus, it sometimes misunderstands the expected test case format, despite the formatting instructions that we provide. 
Thirdly, and most interestingly, when the model itself writes Python code containing an error, it will often be able to correctly explain the error message and make the corresponding correction because it has access to its own internal Python environment. Therefore, to increase the probability of the model producing correctly-formatted test cases, we conjecture that it would be helpful to provide Python code designed to check whether the test cases are correctly formatted, allowing the model to check and correct the format of the cases that it generates. Even though providing explicit and specific formatting instructions helps the model produce correctly-formatted test cases, we conjecture that providing the format-checking code will be more effective, especially considering that the model does not always follow the formatting instructions. Alternatively, asking the model to provide the unit test as Python code may also be effective, since it would be able to run that code in its integrated Python environment. However, when the focal function causes an assertion error in the unit test code (which could happen when the actual output does not match the expected output) it would be important for the model to be able to differentiate that kind of error from an error in the unit test code itself. ## 4 Discussion Although previous works with previous LLMs have found that the quality of those models' outputs are noticeably influenced by the modification of details in the prompts, our findings are to the contrary, suggesting that the present model under test is more robust to such changes. Even though we do not test for all of the factors suggested in previous studies, we posit that Code Interpreter will not be as sensitive to such details as previous models are. However, it is important to keep in mind that there exist several basic elements that have always been present in all of our prompts. For example, we give clear instructions for how we expect the output to be formatted, a description of the focal function derived from its code (if not the code of the focal function itself), and resampled multiple outputs from the same initial prompt. We believe these elements to be necessary for the model to produce an output that conforms to the user's requirements. We speculate that this change from previous models is due to a difference in the training process that GPT-4 underwent, as compared to previous models, which thus allowed it to become robust to less relevant details and only be affected by more important details, much like a human programmer would. Because unit test generation is a well-defined and relatively common task for programmers, the abundance of available training data would only have served to accelerate this effect. However, without knowing more about the training process, it is difficult to make a definitive statement. As AI nears or surpasses human levels of intelligence in increasingly many domains, it is ever more important to pay close attention to the risks inherent in AI. ### Limitations As with any study, there exist limitations to ours, which we describe and address below. First, we focus primarily on resampling outputs from the same initial prompt, rather than using followup prompts to correct or enhance the existing output. However, we note that our approach does not require that we tailor a followup prompt specifically suited to induce the model to correct the existing output. 
Furthermore, the need to generate a followup response in order to repair the original response is also fulfilled by regenerating more responses from the initial prompt. This is because each additional response generated from the same prompt increases the chance that at least one is in the correct format, so with multiple responses it is exceedingly likely that at least one can be used. Second, our testing framework is limited by the kinds of unit tests that it can run. Therefore, we picked a subset of the functions that were compatible with the testing framework. This subset was characterized by functions that were amenable to simple input/output testing. However, a study on how the results might change if a more diverse set of functions (such as those requiring an in-depth examination of the intermediate stages, like breadth- and depth-first search) and test cases (such as those in which one of multiple possible answers could be correct) were included would be quite informative. Furthermore, an improvement in the way that our testing framework handled errors thrown by the focal function (either the correct or incorrect implementation) would also have provided a more accurate view of the behavior of both implementations under the generated test cases. ### Future Work An interesting future research direction to explore would be to investigate how followup prompts affect the quality of generated unit tests, and how such prompts can be designed to most efficiently do so. Furthermore, it would be worth investigating how models fare against more complex codebases containing multiple, interdependent functions or functions that do not commonly appear in the training data, especially if their doc comments are sparse or nonexistent. Additionally, a study on the degree to which the test cases were memorized from the training data, or on how the prompt could be improved if the model were asked to improve it, would be informative. ### Alignment and Risks With the rise of AI capabilities, AI developers and researchers must take care to prevent the misuse and misalignment of increasingly powerful AI systems, especially as they near the level of artificial general intelligence (AGI). If AGI is realized, then misaligned agents have the potential to cause great, possibly even existential, harm to society, especially considering the increasing reliance on AI systems. Our work seeks to address this by shedding light on best practices for prompting AI models to construct more rigorous and comprehensive unit tests. Although there is still much room for improvement before certifiably safe and aligned AGI systems can be made, it is our hope that our work can make and inspire progress towards reaching that goal by laying the foundations for the automatic generation of safety checks through disseminating widely-applicable knowledge of unit test prompting best practices. However, it is also important to be mindful of unexpected capabilities that emerge from large models and to be prepared for the unforeseen challenges that may arise when attempting to make AI systems aligned. Dealing with these challenges as they arise will be crucial for successfully working towards the goal of creating safe and aligned AGI systems. ## 5 Conclusion In this study, we investigate the effect of prompting on the quality of unit tests generated by Code Interpreter. 
In particular, we vary the format in which the code context is provided, the number of formatting examples given for the expected output, and whether we include miscellaneous NL comments in the prompt telling the model that it is an expert programmer. Although these factors do not have a significant effect on the quality of the generated unit tests, we make several observations about the outputs of Code Interpreter, the most interesting of which is that its ability to run and correct its own Python code is quite effective and suggests that including code to check for the correctness of the output format would be useful. Future work could explore the effect of followup prompts on the quality of generated unit tests. However, as AI continues to advance, it is important that researchers increasingly focus their attention on preventing harms from AI-related issues, such as misalignment. ## 6 Acknowledgements We would like to express gratitude to Zachary Rudolph and the University of Chicago Existential Risk Laboratory (XLab) for providing the funding and facilities necessary to conduct this research. In particular, Zack's mentorship, feedback, and support proved invaluable for this project, and VL feels that the XLab Summer Research Fellowship has imparted to him a substantial amount of knowledge about existential risk, which is an important concern in today's times, even if he also thinks that Zack calls on him too often.
2309.05473
Machine learning the dimension of a Fano variety
Fano varieties are basic building blocks in geometry - they are `atomic pieces' of mathematical shapes. Recent progress in the classification of Fano varieties involves analysing an invariant called the quantum period. This is a sequence of integers which gives a numerical fingerprint for a Fano variety. It is conjectured that a Fano variety is uniquely determined by its quantum period. If this is true, one should be able to recover geometric properties of a Fano variety directly from its quantum period. We apply machine learning to the question: does the quantum period of X know the dimension of X? Note that there is as yet no theoretical understanding of this. We show that a simple feed-forward neural network can determine the dimension of X with 98% accuracy. Building on this, we establish rigorous asymptotics for the quantum periods of a class of Fano varieties. These asymptotics determine the dimension of X from its quantum period. Our results demonstrate that machine learning can pick out structure from complex mathematical data in situations where we lack theoretical understanding. They also give positive evidence for the conjecture that the quantum period of a Fano variety determines that variety.
Tom Coates, Alexander M. Kasprzyk, Sara Veneziale
2023-09-11T14:13:30
http://arxiv.org/abs/2309.05473v1
# Machine learning the dimension of a Fano variety ###### Abstract. Fano varieties are basic building blocks in geometry - they are 'atomic pieces' of mathematical shapes. Recent progress in the classification of Fano varieties involves analysing an invariant called the quantum period. This is a sequence of integers which gives a numerical fingerprint for a Fano variety. It is conjectured that a Fano variety is uniquely determined by its quantum period. If this is true, one should be able to recover geometric properties of a Fano variety directly from its quantum period. We apply machine learning to the question: does the quantum period of \(X\) know the dimension of \(X\)? Note that there is as yet no theoretical understanding of this. We show that a simple feed-forward neural network can determine the dimension of \(X\) with \(98\%\) accuracy. Building on this, we establish rigorous asymptotics for the quantum periods of a class of Fano varieties. These asymptotics determine the dimension of \(X\) from its quantum period. Our results demonstrate that machine learning can pick out structure from complex mathematical data in situations where we lack theoretical understanding. They also give positive evidence for the conjecture that the quantum period of a Fano variety determines that variety. Key words and phrases: Fano varieties, quantum periods, mirror symmetry, machine learning 2020 Mathematics Subject Classification: 14J45 (Primary); 68T07 (Secondary) ## 1. Introduction Algebraic geometry describes shapes as the solution sets of systems of polynomial equations, and manipulates or analyses a shape \(X\) by manipulating or analysing the equations that define \(X\). This interplay between algebra and geometry has applications across mathematics and science; see e.g. [3, 57, 22, 53]. Shapes defined by polynomial equations are called _algebraic varieties_. Fano varieties are a key class of algebraic varieties. They are, in a precise sense, atomic pieces of mathematical shapes [45, 46]. Fano varieties also play an essential role in string theory. They provide, through their 'anticanonical sections', the main construction of the Calabi-Yau manifolds which give geometric models of spacetime [6, 55, 30]. The classification of Fano varieties is a long-standing open problem. The only one-dimensional example is a line; this is classical. The ten smooth two-dimensional Fano varieties were found by del Pezzo in the 1880s [19]. The classification of smooth Fano varieties in dimension three was a triumph of 20th century mathematics: it combines work by Fano in the 1930s, Iskovskikh in the 1970s, and Mori-Mukai in the 1980s [51, 52, 38, 24, 39, 40]. Beyond this, little is known, particularly for the important case of Fano varieties that are not smooth. A new approach to Fano classification centres around a set of ideas from string theory called Mirror Symmetry [31, 35, 7, 15]. From this perspective, the key invariant of a Fano variety is its _regularized quantum period_ [8] \[\widehat{G}_{X}(t)=\sum_{d=0}^{\infty}c_{d}t^{d} \tag{1}\] This is a power series with coefficients \(c_{0}=1\), \(c_{1}=0\), and \(c_{d}=r_{d}d!\), where \(r_{d}\) is a certain Gromov-Witten invariant of \(X\). Intuitively speaking, \(r_{d}\) is the number of rational curves in \(X\) of degree \(d\) that pass through a fixed generic point and have a certain constraint on their complex structure. 
In general \(r_{d}\) can be a rational number, because curves with a symmetry group of order \(k\) are counted with weight \(1/k\), but in all known cases the coefficients \(c_{d}\) in (1) are integers. It is expected that the regularized quantum period \(\widehat{G}_{X}\) uniquely determines \(X\). This is true (and proven) for smooth Fano varieties in low dimensions, but is unknown in dimensions four and higher, and for Fano varieties that are not smooth. In this paper we will treat the regularized quantum period as a numerical signature for the Fano variety \(X\), given by the sequence of integers \((c_{0},c_{1},\ldots)\). _A priori_ this looks like an infinite amount of data, but in fact there is a differential operator \(L\) such that \(L\widehat{G}_{X}\equiv 0\); see e.g. [8, Theorem 4.3]. This gives a recurrence relation that determines all of the coefficients \(c_{d}\) from the first few terms, so the regularized quantum period \(\widehat{G}_{X}\) contains only a finite amount of information. Encoding a Fano variety \(X\) by a vector in \(\mathbb{Z}^{m+1}\) given by finitely many coefficients \((c_{0},c_{1},\ldots,c_{m})\) of the regularized quantum period allows us to investigate questions about Fano varieties using machine learning. In this paper we ask whether the regularized quantum period of a Fano variety \(X\) knows the dimension of \(X\). There is currently no viable theoretical approach to this question. Instead we use machine learning methods applied to a large dataset to argue that the answer is probably yes, and then prove that the answer is yes for toric Fano varieties of low Picard rank. The use of machine learning was essential to the formulation of our rigorous results (Theorems 5 and 6 below). This work is therefore proof-of-concept for a larger program, demonstrating that machine learning can uncover previously unknown structure in complex mathematical datasets. Thus the Data Revolution, which has had such impact across the rest of science, also brings important new insights to pure mathematics [18, 21, 34, 49, 58, 59]. This is particularly true for large-scale classification questions, e.g. [1, 10, 14, 17, 47], where these methods can potentially reveal both the classification itself and structural relationships within it. ## 2. Results ### Algebraic varieties can be smooth or have singularities Depending on their equations, algebraic varieties can be smooth (as in Figure 1(a)) or have singularities (as in Figure 1(b)). In this paper we consider algebraic varieties over the complex numbers. The equations in Figures 1(a) and 1(b) therefore define complex surfaces; however, for ease of visualisation, we have plotted only the points on these surfaces with co-ordinates that are real numbers. Most of the algebraic varieties that we consider below will be singular, but they all have a class of singularities called _terminal quotient singularities_. This is the most natural class of singularities to allow from the point of view of Fano classification [46]. Terminal quotient singularities are very mild; indeed, in dimensions one and two, an algebraic variety has terminal quotient singularities if and only if it is smooth. ### The Fano varieties that we consider The fundamental example of a Fano variety is projective space \(\mathbb{P}^{N-1}\). 
This is a quotient of \(\mathbb{C}^{N}\setminus\{0\}\) by the group \(\mathbb{C}^{\times}\), where the action of \(\lambda\in\mathbb{C}^{\times}\) identifies the points \((x_{1},x_{2},\ldots,x_{N})\) and \((\lambda x_{1},\lambda x_{2},\ldots,\lambda x_{N})\). The resulting algebraic variety is smooth and has dimension \(N-1\). We will consider generalisations of projective spaces called _weighted projective spaces_ and _toric varieties of Picard rank two_. A detailed introduction to these spaces is given in SSA. To define a weighted projective space, choose positive integers \(a_{1},a_{2},\ldots,a_{N}\) such that any subset of size \(N-1\) has no common factor, and consider \[\mathbb{P}(a_{1},a_{2},\ldots,a_{N})=(\mathbb{C}^{N}\setminus\{0\})/\mathbb{ C}^{\times}\] Figure 1. Algebraic varieties and their equations: (a) a smooth example; (b) an example with a singular point. where the action of \(\lambda\in\mathbb{C}^{\times}\) identifies the points \[(x_{1},x_{2},\ldots,x_{N})\quad\text{and}\quad(\lambda^{a_{1}}x_{1},\lambda^{a_{ 2}}x_{2},\ldots,\lambda^{a_{N}}x_{N})\] in \(\mathbb{C}^{N}\setminus\{0\}\). The quotient \(\mathbb{P}(a_{1},a_{2},\ldots,a_{N})\) is an algebraic variety of dimension \(N-1\). A general point of \(\mathbb{P}(a_{1},a_{2},\ldots,a_{N})\) is smooth, but there can be singular points. Indeed, a weighted projective space \(\mathbb{P}(a_{1},a_{2},\ldots,a_{N})\) is smooth if and only if \(a_{i}=1\) for all \(i\), that is, if and only if it is a projective space. To define a toric variety of Picard rank two, choose a matrix \[\begin{pmatrix}a_{1}&a_{2}&\cdots&a_{N}\\ b_{1}&b_{2}&\cdots&b_{N}\end{pmatrix} \tag{2}\] with non-negative integer entries and no zero columns. This defines an action of \(\mathbb{C}^{\times}\times\mathbb{C}^{\times}\) on \(\mathbb{C}^{N}\), where \((\lambda,\mu)\in\mathbb{C}^{\times}\times\mathbb{C}^{\times}\) identifies the points \[(x_{1},x_{2},\ldots,x_{N})\quad\text{and}\quad(\lambda^{a_{1}}\mu^{b_{1}}x_{1 },\lambda^{a_{2}}\mu^{b_{2}}x_{2},\ldots,\lambda^{a_{N}}\mu^{b_{N}}x_{N})\] in \(\mathbb{C}^{N}\). Set \(a=a_{1}+a_{2}+\cdots+a_{N}\) and \(b=b_{1}+b_{2}+\cdots+b_{N}\), and suppose that \((a,b)\) is not a scalar multiple of \((a_{i},b_{i})\) for any \(i\). This determines linear subspaces \[S_{+}=\{(x_{1},x_{2},\ldots,x_{N})\mid x_{i}=0\text{ if }b_{i}/a_{i}<b/a\}\] \[S_{-}=\{(x_{1},x_{2},\ldots,x_{N})\mid x_{i}=0\text{ if }b_{i}/a_{i}>b/a\}\] of \(\mathbb{C}^{N}\), and we consider the quotient \[X=(\mathbb{C}^{N}\setminus S)/(\mathbb{C}^{\times}\times\mathbb{C}^{\times}) \tag{3}\] where \(S=S_{+}\cup S_{-}\). The quotient \(X\) is an algebraic variety of dimension \(N-2\) and second Betti number \(b_{2}(X)\leq 2\). If, as we assume henceforth, the subspaces \(S_{+}\) and \(S_{-}\) both have dimension two or more then \(b_{2}(X)=2\), and thus \(X\) has Picard rank two. In general \(X\) will have singular points, the precise form of which is determined by the weights in (2). There are closed formulas for the regularized quantum period of weighted projective spaces and toric varieties [9]. 
We have \[\widehat{G}_{\mathbb{P}}(t)=\sum_{k=0}^{\infty}\frac{(ak)!}{(a_{1}k)!(a_{2}k)!\cdots(a_{N}k)!}t^{ak} \tag{4}\] where \(\mathbb{P}=\mathbb{P}(a_{1},\ldots,a_{N})\) and \(a=a_{1}+a_{2}+\cdots+a_{N}\), and \[\widehat{G}_{X}(t)=\sum_{(k,l)\in\mathbb{Z}^{2}\cap C}\frac{(ak+bl)!}{(a_{1}k+b_{1}l)!\cdots(a_{N}k+b_{N}l)!}t^{ak+bl} \tag{5}\] where the weights for \(X\) are as in (2), and \(C\) is the cone in \(\mathbb{R}^{2}\) defined by the equations \(a_{i}x+b_{i}y\geq 0\), \(i\in\{1,2,\ldots,N\}\). Formula (4) implies that, for weighted projective spaces, the coefficient \(c_{d}\) from (1) is zero unless \(d\) is divisible by \(a\). Formula (5) implies that, for toric varieties of Picard rank two, \(c_{d}=0\) unless \(d\) is divisible by \(\gcd\{a,b\}\). _Data generation: weighted projective spaces._ The following result characterises weighted projective spaces with terminal quotient singularities; this is [43, Proposition 2.3]. **Proposition 1**.: _Let \(X=\mathbb{P}(a_{1},a_{2},\ldots,a_{N})\) be a weighted projective space of dimension at least three. Then \(X\) has terminal quotient singularities if and only if_ \[\sum_{i=1}^{N}\{ka_{i}/a\}\in\{2,\ldots,N-2\}\] _for each \(k\in\{2,\ldots,a-2\}\). Here \(a=a_{1}+a_{2}+\cdots+a_{N}\) and \(\{q\}\) denotes the fractional part \(q-\lfloor q\rfloor\) of \(q\in\mathbb{Q}\)._ A simpler necessary condition is given by [42, Theorem 3.5]: **Proposition 2**.: _Let \(X=\mathbb{P}(a_{1},a_{2},\ldots,a_{N})\) be a weighted projective space of dimension at least two, with weights ordered \(a_{1}\leq a_{2}\leq\ldots\leq a_{N}\). If \(X\) has terminal quotient singularities then \(a_{i}/a<1/(N-i+2)\) for each \(i\in\{3,\ldots,N\}\)._ Weighted projective spaces with terminal quotient singularities have been classified in dimensions up to four [41, 43]. Classifications in higher dimensions are hindered by the lack of an effective upper bound on \(a\). We randomly generated \(150\,000\) distinct weighted projective spaces with terminal quotient singularities, and with dimension up to \(10\), as follows. We generated random sequences of weights \(a_{1}\leq a_{2}\leq\ldots\leq a_{N}\) with \(a_{N}\leq 10N\) and discarded them if they failed to satisfy any one of the following: 1. for each \(i\in\{1,\ldots,N\}\), \(\gcd\{a_{1},\ldots,\widehat{a}_{i},\ldots,a_{N}\}=1\), where \(\widehat{a}_{i}\) indicates that \(a_{i}\) is omitted; 2. \(a_{i}/a<1/(N-i+2)\) for each \(i\in\{3,\ldots,N\}\); 3. \(\sum_{i=1}^{N}\{ka_{i}/a\}\in\{2,\ldots,N-2\}\) for each \(k\in\{2,\ldots,a-2\}\). Condition (i) here was part of our definition of weighted projective spaces above; it ensures that the set of singular points in \(\mathbb{P}(a_{1},a_{2},\ldots,a_{N})\) has dimension at most \(N-2\), and also that weighted projective spaces are isomorphic as algebraic varieties if and only if they have the same weights. Condition (ii) is from Proposition 2; it efficiently rules out many non-terminal examples. Condition (iii) is the necessary and sufficient condition from Proposition 1. We then deduplicated the sequences. The resulting sample sizes are summarised in Table 1. _Data generation: toric varieties._ Deduplicating randomly-generated toric varieties of Picard rank two is harder than deduplicating randomly generated weighted projective spaces, because different weight matrices in (2) can give rise to the same toric variety. Toric varieties are uniquely determined, up to isomorphism, by a combinatorial object called a _fan_ [25]. 
A fan is a collection of cones, and one can determine the singularities of a toric variety \(X\) from the geometry of the cones in the corresponding fan. We randomly generated \(200\,000\) distinct toric varieties of Picard rank two with terminal quotient singularities, and with dimension up to \(10\), as follows. We randomly generated weight matrices, as in (2), such that \(0\leq a_{i},b_{j}\leq 5\). We then discarded the weight matrix if any column was zero, and otherwise formed the corresponding fan \(F\). We discarded the weight matrix unless: 1. \(F\) had \(N\) rays; 2. each cone in \(F\) was simplicial (i.e. has number of rays equal to its dimension); 3. the convex hull of the primitive generators of the rays of \(F\) contained no lattice points other than the rays and the origin. Conditions (i) and (ii) together guarantee that \(X\) has Picard rank two, and are equivalent to the conditions on the weight matrix in (2) given in our definition. Conditions (ii) and (iii) guarantee that \(X\) has terminal quotient singularities. We then deduplicated the weight matrices according to the isomorphism type of \(F\), by putting \(F\) in normal form [48, 32]. See Table 1 for a summary of the dataset. \begin{table} \begin{tabular}{c r r r r r} \hline \hline \multicolumn{3}{c}{Weighted projective spaces} & \multicolumn{3}{c}{Rank-two toric varieties} \\ \hline Dimension & Sample size & Percentage & Dimension & Sample size & Percentage \\ \hline 1 & 1 & 0.001 & & & \\ 2 & 1 & 0.001 & 2 & 2 & 0.001 \\ 3 & 7 & 0.005 & 3 & 17 & 0.009 \\ 4 & 8 936 & 5.957 & 4 & 758 & 0.379 \\ 5 & 23 584 & 15.723 & 5 & 6 050 & 3.025 \\ 6 & 23 640 & 15.760 & 6 & 19 690 & 9.845 \\ 7 & 23 700 & 15.800 & 7 & 35 395 & 17.698 \\ 8 & 23 469 & 15.646 & 8 & 42 866 & 21.433 \\ 9 & 23 225 & 15.483 & 9 & 47 206 & 23.603 \\ 10 & 23 437 & 15.625 & 10 & 48 016 & 24.008 \\ \hline Total & 150 000 & & Total & 200 000 & \\ \hline \hline \end{tabular} \end{table} Table 1. The distribution by dimension in our datasets. _Data analysis: weighted projective spaces._ We computed an initial segment \((c_{0},c_{1},\dots,c_{m})\) of the regularized quantum period for all the examples in the sample of \(150\,000\) terminal weighted projective spaces, with \(m\approx 100\,000\). The non-zero coefficients \(c_{d}\) appeared to grow exponentially with \(d\), and so we considered \(\{\log c_{d}\}_{d\in S}\) where \(S=\{d\in\mathbb{Z}_{\geq 0}\mid c_{d}\neq 0\}\). To reduce dimension we fitted a linear model to the set \(\{(d,\log c_{d})\mid d\in S\}\) and used the slope and intercept of this model as features; see Figure 2(a) for a typical example. Plotting the slope against the \(y\)-intercept and colouring datapoints according to the dimension we obtain Figure 3(a): note the clear separation by dimension. A Support Vector Machine (SVM) trained on \(10\%\) of the slope and \(y\)-intercept data predicted the dimension of the weighted projective space with an accuracy of \(99.99\%\). Full details are given in SSSB-C. _Data analysis: toric varieties._ As before, the non-zero coefficients \(c_{d}\) appeared to grow exponentially with \(d\), so we fitted a linear model to the set \(\{(d,\log c_{d})\mid d\in S\}\) where \(S=\{d\in\mathbb{Z}_{\geq 0}\mid c_{d}\neq 0\}\). We used the slope and intercept of this linear model as features. 
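As a concrete illustration of this feature extraction, the sketch below computes the non-zero period coefficients of a weighted projective space from the closed formula (4) (working with log-Gamma so that the large factorials do not overflow), fits a linear model to the pairs \((d,\log c_{d})\), and returns the slope and \(y\)-intercept. It is a minimal reconstruction under our own choices of function names and of the number of coefficients, not the authors' pipeline; feature vectors like these, one per variety, are what a classifier such as an SVM would be trained on.

```python
import math
import numpy as np

def log_period_coefficients(weights, num_terms=200):
    """Logs of the non-zero coefficients c_{ak} of P(weights), from formula (4),
    computed via log-Gamma to avoid overflowing large factorials."""
    a = sum(weights)
    ds, logs = [], []
    for k in range(1, num_terms + 1):
        log_c = math.lgamma(a * k + 1) - sum(math.lgamma(w * k + 1) for w in weights)
        ds.append(a * k)
        logs.append(log_c)
    return np.array(ds), np.array(logs)

def slope_intercept_features(weights, num_terms=200):
    """Fit log c_d ~ slope * d + intercept and return the two features."""
    d, log_c = log_period_coefficients(weights, num_terms)
    slope, intercept = np.polyfit(d, log_c, deg=1)
    return slope, intercept

if __name__ == "__main__":
    # Projective space P^3 = P(1, 1, 1, 1), a three-dimensional example.
    print(slope_intercept_features((1, 1, 1, 1)))
```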
**Example 3**.: In Figure 2(b) we plot a typical example: the logarithm of the regularized quantum period sequence for the nine-dimensional toric variety with weight matrix \[\left(\begin{array}{cccccccccc}1&2&5&3&3&3&0&0&0&0&0\\ 0&0&0&3&4&4&1&2&2&3&4\end{array}\right)\] Figure 3. The slopes and \(y\)-intercepts from the linear models: (a) for weighted projective spaces with terminal quotient singularities. The colour records the dimension of the weighted projective space and the circled points indicate projective spaces. (b) for toric varieties of Picard rank two with terminal quotient singularities. The colour records the dimension of the toric variety. Figure 2. The logarithm of the non-zero period coefficients \(c_{d}\): (a) for a typical weighted projective space; (b) for the toric variety of Picard rank two from Example 3. along with the linear approximation. We see a periodic deviation from the linear approximation; the magnitude of this deviation decreases as \(d\) increases (not shown). To reduce computational costs, we computed pairs \((d,\log c_{d})\) for \(1000\leq d\leq 20\,000\) by sampling every \(100\)th term. We discarded the beginning of the period sequence because of the noise it introduces to the linear regression. In cases where the sampled coefficient \(c_{d}\) is zero, we considered instead the next non-zero coefficient. The resulting plot of slope against \(y\)-intercept, with datapoints coloured according to dimension, is shown in Figure 3(b). We analysed the standard errors for the slope and \(y\)-intercept of the linear model. The standard errors for the slope are small compared to the range of slopes, but in many cases the standard error \(s_{\text{int}}\) for the \(y\)-intercept is relatively large. As Figure 4 illustrates, discarding data points where the standard error \(s_{\text{int}}\) for the \(y\)-intercept exceeds some threshold reduces apparent noise. This suggests that the underlying structure is being obscured by inaccuracies in the linear regression caused by oscillatory behaviour in the initial terms of the quantum period sequence; these inaccuracies are concentrated in the \(y\)-intercept of the linear model. Note that restricting attention to those data points where \(s_{\text{int}}\) is small also greatly decreases the range of \(y\)-intercepts that occur. As Example 4 and Figure 5 suggest, this reflects both transient oscillatory behaviour and also the presence of a subleading term in the asymptotics of \(\log c_{d}\) which is missing from our feature set. We discuss this further below. **Example 4**.: Consider the toric variety with Picard rank two and weight matrix \[\begin{pmatrix}1&10&5&13&8&12&0\\ 0&0&3&8&5&14&1\end{pmatrix}\] This is one of the outliers in Figure 3(b). The toric variety is five-dimensional, and has slope \(1.637\) and \(y\)-intercept \(-62.64\). The standard errors are \(4.246\times 10^{-4}\) for the slope and \(5.021\) for the \(y\)-intercept. We computed the first \(40\,000\) coefficients \(c_{d}\) in (1). As Figure 5 shows, as \(d\) increases the \(y\)-intercept of the linear model increases to \(-28.96\) and \(s_{\text{int}}\) decreases to \(0.7877\). At the same time, the slope of the linear model remains more or less unchanged, decreasing to \(1.635\). This supports the idea that computing (many) more coefficients \(c_{d}\) would significantly reduce noise in Figure 3(b). In this example, even \(40\,000\) coefficients may not be enough. 
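The standard-error analysis just described can be sketched with ordinary least squares. The following is a minimal numpy illustration, with our own function names; it fits the linear model to sampled pairs \((d,\log c_{d})\) and computes the standard error of the \(y\)-intercept, so that a variety is retained only when that standard error is below the threshold of \(0.3\) used in the text.

```python
import numpy as np

def fit_with_stderr(d, log_c):
    """Least-squares fit log_c ~ slope * d + intercept, together with the
    standard errors of the slope and of the intercept."""
    d = np.asarray(d, dtype=float)
    log_c = np.asarray(log_c, dtype=float)
    n = len(d)
    slope, intercept = np.polyfit(d, log_c, deg=1)
    residuals = log_c - (slope * d + intercept)
    s2 = np.sum(residuals ** 2) / (n - 2)        # residual variance
    sxx = np.sum((d - d.mean()) ** 2)
    slope_stderr = np.sqrt(s2 / sxx)
    intercept_stderr = np.sqrt(s2 * (1.0 / n + d.mean() ** 2 / sxx))
    return slope, intercept, slope_stderr, intercept_stderr

def keep_datapoint(d, log_c, threshold=0.3):
    """Retain a variety only if the standard error of the fitted y-intercept
    is below the threshold, mimicking the filtering step described above."""
    *_, intercept_stderr = fit_with_stderr(d, log_c)
    return intercept_stderr < threshold
```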
Computing many more coefficients \(c_{d}\) across the whole dataset would require impractical amounts of computation time. In the example above, which is typical in this regard, increasing the number of coefficients computed from \(20\,000\) to \(40\,000\) increased the computation time by a factor of more than \(10\). Instead we restrict to those toric varieties of Picard rank two such that the \(y\)-intercept standard error \(s_{\text{int}}\) is less than \(0.3\); this retains \(67\,443\) of the \(200\,000\) datapoints. We used \(70\%\) of the slope and \(y\)-intercept data in the restricted dataset for model training, and the rest for validation. An SVM model predicted the dimension of the toric variety with an accuracy of \(87.7\%\), and a Random Forest Classifier (RFC) predicted the dimension with an accuracy of \(88.6\%\). Figure 4. The slopes and \(y\)-intercepts from the linear model. This is as in Figure 3(b), but plotting only data points for which the standard error \(s_{\text{int}}\) for the \(y\)-intercept satisfies \(s_{\text{int}}<0.3\). The colour records the dimension of the toric variety. _Neural networks._ Neural networks do not handle unbalanced datasets well. We therefore removed the toric varieties of dimensions \(3\), \(4\), and \(5\) from our data, leaving \(61\,164\) toric varieties of Picard rank two with terminal quotient singularities and \(s_{\mathrm{int}}<0.3\). This dataset is approximately balanced by dimension. A Multilayer Perceptron (MLP) with three hidden layers of sizes \((10,30,10)\) using the slope and intercept as features predicted the dimension with \(89.0\%\) accuracy. Since the slope and intercept give good control over \(\log c_{d}\) for \(d\gg 0\), but not for small \(d\), it is likely that the coefficients \(c_{d}\) with \(d\) small contain extra information that the slope and intercept do not see. Supplementing the feature set by including the first \(100\) coefficients \(c_{d}\) as well as the slope and intercept increased the accuracy of the prediction to \(97.7\%\). Full details can be found in SSSB-C. _From machine learning to rigorous analysis._ Elementary "out of the box" models (SVM, RFC, and MLP) trained on the slope and intercept data alone already gave a highly accurate prediction for the dimension. Furthermore even for the many-feature MLP, which was the most accurate, sensitivity analysis using SHAP values [50] showed that the slope and intercept were substantially more important to the prediction than any of the coefficients \(c_{d}\): see Figure 6. This suggested that the dimension of \(X\) might be visible from a rigorous estimate of the growth rate of \(\log c_{d}\). In SS3 we establish asymptotic results for the regularized quantum period of toric varieties with low Picard rank, as follows. These results apply to any weighted projective space or toric variety of Picard rank two: they do not require a terminality hypothesis. Note, in each case, the presence of a subleading logarithmic term in the asymptotics for \(\log c_{d}\). **Theorem 5**.: _Let \(X\) denote the weighted projective space \(\mathbb{P}(a_{1},\ldots,a_{N})\), so that the dimension of \(X\) is \(N-1\). Let \(c_{d}\) denote the coefficient of \(t^{d}\) in the regularized quantum period \(\widehat{G}_{X}(t)\) given in (4). Let \(a=a_{1}+\cdots+a_{N}\) and \(p_{i}=a_{i}/a\). 
Then \(c_{d}=0\) unless \(d\) is divisible by \(a\), and non-zero coefficients \(c_{d}\) satisfy_ \[\log c_{d}\sim Ad-\frac{\dim X}{2}\log d+B\] _as \(d\to\infty\), where_ \[A =-\sum_{i=1}^{N}p_{i}\log p_{i}\] \[B =-\frac{\dim X}{2}\log(2\pi)-\frac{1}{2}\sum_{i=1}^{N}\log p_{i}\] Note, although it plays no role in what follows, that \(A\) is the Shannon entropy of the discrete random variable \(Z\) with distribution \((p_{1},p_{2},\ldots,p_{N})\), and that \(B\) is a constant plus half the total self-information of \(Z\). Figure 5. Variation as we move deeper into the period sequence. The \(y\)-intercept and its standard error \(s_{\mathrm{int}}\) for the toric variety from Example 4, as computed from pairs \((k,\log c_{k})\) such that \(d-20\,000\leq k\leq d\) by sampling every \(100\)th term. We also show LOWESS-smoothed trend lines. **Theorem 6**.: _Let \(X\) denote the toric variety of Picard rank two with weight matrix_ \[\begin{pmatrix}a_{1}&a_{2}&a_{3}&\cdots&a_{N}\\ b_{1}&b_{2}&b_{3}&\cdots&b_{N}\end{pmatrix}\] _so that the dimension of \(X\) is \(N-2\). Let \(a=a_{1}+\cdots+a_{N}\), \(b=b_{1}+\cdots+b_{N}\), and \(\ell=\gcd\{a,b\}\). Let \([\mu:\nu]\in\mathbb{P}^{1}\) be the unique root of the homogeneous polynomial_ \[\prod_{i=1}^{N}(a_{i}\mu+b_{i}\nu)^{a_{i}b}-\prod_{i=1}^{N}(a_{i}\mu+b_{i}\nu )^{b_{i}a}\] _such that \(a_{i}\mu+b_{i}\nu\geq 0\) for all \(i\in\{1,2,\ldots,N\}\), and set_ \[p_{i}=\frac{\mu a_{i}+\nu b_{i}}{\mu a+\nu b}\] _Let \(c_{d}\) denote the coefficient of \(t^{d}\) in the regularized quantum period \(\widehat{G}_{X}(t)\) given in (5). Then non-zero coefficients \(c_{d}\) satisfy_ \[\log c_{d}\sim Ad-\frac{\dim X}{2}\log d+B\] _as \(d\to\infty\), where_ \[A =-\sum_{i=1}^{N}p_{i}\log p_{i}\] \[B =-\frac{\dim X}{2}\log(2\pi)-\frac{1}{2}\sum_{i=1}^{N}\log p_{i}- \frac{1}{2}\log\left(\sum_{i=1}^{N}\frac{(a_{i}b-b_{i}a)^{2}}{\ell^{2}p_{i}}\right)\] Theorem 5 is a straightforward application of Stirling's formula. Theorem 6 is more involved, and relies on a Central Limit-type theorem that generalises the De Moivre-Laplace theorem. _Theoretical analysis._ The asymptotics in Theorems 5 and 6 imply that, for \(X\) a weighted projective space or toric variety of Picard rank two, the quantum period determines the dimension of \(X\). Let us revisit the clustering analysis from this perspective. Recall the asymptotic expression \(\log c_{d}\sim Ad-\frac{\dim X}{2}\log d+B\) and the formulae for \(A\) and \(B\) from Theorem 5. Figure 7(a) shows the values of Figure 6. Model sensitivity analysis using SHAP values. The model is an MLP with three hidden layers of sizes (10,30,10) applied to toric varieties of Picard rank two with terminal quotient singularities. It is trained on the slope, \(y\)-intercept, and the first 100 coefficients \(c_{d}\) as features, and predicts the dimension with 97.7% accuracy. and \(B\) for a sample of weighted projective spaces, coloured by dimension. Note the clusters, which overlap. Broadly speaking, the values of \(B\) increase as the dimension of the weighted projective space increases, whereas in Figure 3(a) the \(y\)-intercepts decrease as the dimension increases. This reflects the fact that we fitted a linear model to \(\log c_{d}\), omitting the subleading \(\log d\) term in the asymptotics. As Figure 8 shows, the linear model assigns the omitted term to the \(y\)-intercept rather than the slope. Figure 8. For weighted projective spaces, the asymptotic coefficients \(A\) and \(B\) are closely related to the slope and \(y\)-intercept. 
(a) Comparison between \(A\) and the slope from the linear model, for weighted projective spaces that occur in both Figure 3(a) and Figure 7(a), coloured by dimension. The line slope \(=A\) is indicated. (b) Comparison between \(B\) and the \(y\)-intercept from the linear model, for weighted projective spaces that occur in both Figure 3(a) and Figure 7(a), coloured by dimension. In each case the line \(y\)-intercept \(=B-\frac{9}{2}\dim X\) is shown. Figure 7. The values of the asymptotic coefficients \(A\) and \(B\): (a) for all weighted projective spaces \(\mathbb{P}(a_{1},\ldots,a_{N})\) with terminal quotient singularities and \(a_{i}\leq 25\) for all \(i\). The colour records the dimension of the weighted projective space. (b) for toric varieties of Picard rank two in our dataset. The colour records the dimension of the toric variety. The slope of the linear model is approximately equal to \(A\). The \(y\)-intercept, however, differs from \(B\) by a dimension-dependent factor. The omitted \(\log\) term does not vary too much over the range of degrees (\(d<100\,000\)) that we considered, and has the effect of reducing the observed \(y\)-intercept from \(B\) to approximately \(B-\frac{9}{2}\dim X\), distorting the clusters slightly and translating them downwards by a dimension-dependent factor. This separates the clusters. We expect that the same mechanism applies in Picard rank two as well: see Figure 7(b). We can show that each cluster in Figure 7(a) is linearly bounded using constrained optimisation techniques. Consider for example the cluster for weighted projective spaces of dimension five, as in Figure 9. **Proposition 7**.: _Let \(X\) be the five-dimensional weighted projective space \(\mathbb{P}(a_{1},\ldots,a_{6})\), and let \(A\), \(B\) be as in Theorem 5. Then \(B+\frac{5}{2}A\geq\frac{41}{8}\). If in addition \(a_{i}\leq 25\) for all \(i\) then \(B+5A\leq\frac{41}{40}\)._ Fix a suitable \(\theta\geq 0\) and consider \[B+\theta A=-\frac{\dim X}{2}\log(2\pi)-\frac{1}{2}\sum_{i=1}^{N}\log p_{i}- \theta\sum_{i=1}^{N}p_{i}\log p_{i}\] with \(\dim X=N-1=5\). Solving \[\min(B+\theta A)\quad\text{subject to}\quad p_{1}+\cdots+p_{6}=1\] \[p_{1},\ldots,p_{6}\geq 0\] on the five-simplex gives a linear lower bound for the cluster. This bound does not use terminality: it applies to any weighted projective space of dimension five. The expression \(B+\theta A\) is unbounded above on the five-simplex (because \(B\) is) so we cannot obtain an upper bound this way. Instead, consider \[\max(B+\theta A)\quad\text{subject to}\quad p_{1}+\cdots+p_{6}=1\] \[e\leq p_{1}\leq p_{2}\leq\cdots\leq p_{6}\] for an appropriate small positive \(e\), which we can take to be \(1/a\) where \(a\) is the maximum sum of the weights. For Figure 9, for example, we can take \(a=124\), and in general such an \(a\) exists because there are only finitely many terminal weighted projective spaces. This gives a linear upper bound for the cluster. The same methods yield linear bounds on each of the clusters in Figure 7(a). As the Figure shows however, the clusters are not linearly separable. Figure 9. Linear bounds for the cluster of five-dimensional weighted projective spaces in Figure 7(a). The bounds are given by Proposition 7. **Discussion.** We developed machine learning models that predict, with high accuracy, the dimension of a Fano variety from its regularized quantum period. These models apply to weighted projective spaces and toric varieties of Picard rank two with terminal quotient singularities. 
We then established rigorous asymptotics for the regularized quantum period of these Fano varieties. The form of the asymptotics implies that, in these cases, the regularized quantum period of a Fano variety \(X\) determines the dimension of \(X\). The asymptotics also give a theoretical underpinning for the success of the machine learning models. Perversely, because the series involved converge extremely slowly, reading the dimension of a Fano variety directly from the asymptotics of the regularized quantum period is not practical. For the same reason, enhancing the feature set of our machine learning models by including a \(\log d\) term in the linear regression results in less accurate predictions. So although the asymptotics in Theorems 5 and 6 determine the dimension in theory, in practice the most effective way to determine the dimension of an unknown Fano variety from its quantum period is to apply a machine learning model. The insights gained from machine learning were the key to our formulation of the rigorous results in Theorems 5 and 6. Indeed, it might be hard to discover these results without a machine learning approach. It is notable that the techniques in the proof of Theorem 6 - the identification of generating functions for Gromov-Witten invariants of toric varieties with certain hypergeometric functions - have been known since the late 1990s and have been studied by many experts in hypergeometric functions since then. For us, the essential step in the discovery of the results was the feature extraction that we performed as part of our ML pipeline. This work demonstrates that machine learning can uncover previously unknown structure in complex mathematical data, and is a powerful tool for developing rigorous mathematical results; cf. [18]. It also provides evidence for a fundamental conjecture in the Fano classification program [8]: that the regularized quantum period of a Fano variety determines that variety. ## 3. Methods In this section we prove Theorem 5 and Theorem 6. The following result implies Theorem 5. **Theorem 8**.: _Let \(X\) denote the weighted projective space \(\mathbb{P}(a_{1},\ldots,a_{N})\), so that the dimension of \(X\) is \(N-1\). Let \(c_{d}\) denote the coefficient of \(t^{d}\) in the regularized quantum period \(\widetilde{G}_{X}(t)\) given in (4). Let \(a=a_{1}+\ldots+a_{N}\). Then \(c_{d}=0\) unless \(d\) is divisible by \(a\), and_ \[\log c_{ka}\sim ka\left[\log a-\frac{1}{a}\sum_{i=1}^{N}a_{i}\log a_{i}\right] -\frac{\dim X}{2}\log(ka)+\frac{1+\dim X}{2}\log a-\frac{\dim X}{2}\log(2\pi) -\frac{1}{2}\sum_{i=1}^{N}\log a_{i}\] _That is, non-zero coefficients \(c_{d}\) satisfy_ \[\log c_{d}\sim Ad-\frac{\dim X}{2}\log d+B\] _as \(d\to\infty\), where_ \[A=-\sum_{i=1}^{N}p_{i}\log p_{i}\quad B=-\frac{\dim X}{2}\log(2\pi)-\frac{1}{ 2}\sum_{i=1}^{N}\log p_{i}\] _and \(p_{i}=a_{i}/a\)._ Proof.: Combine Stirling's formula \[n!\sim\sqrt{2\pi n}\left(\frac{n}{e}\right)^{n}\] with the closed formula (4) for \(c_{ka}\). _Toric varieties of Picard rank 2._ Consider a toric variety \(X\) of Picard rank two and dimension \(N-2\) with weight matrix \[\begin{pmatrix}a_{1}&a_{2}&a_{3}&\cdots&a_{N}\\ b_{1}&b_{2}&b_{3}&\cdots&b_{N}\end{pmatrix}\] as in (2). Let us move to more invariant notation, writing \(\alpha_{i}\) for the linear form on \(\mathbb{R}^{2}\) defined by the transpose of the \(i\)th column of the weight matrix, and \(\alpha=\alpha_{1}+\cdots+\alpha_{N}\). 
Equation 5 becomes \[\widehat{G}_{X}(t)=\sum_{k\in\mathbb{Z}^{2}\cap C}\frac{(\alpha\cdot k)!}{\prod_ {i=1}^{N}(\alpha_{i}\cdot k)!}t^{\alpha\cdot k}\] where \(C\) is the cone \(C=\{x\in\mathbb{R}^{2}\mid\alpha_{i}\cdot x\geq 0\text{ for }i=1,2,\ldots,N\}\). As we will see, for \(d\gg 0\) the coefficients \[\frac{(\alpha\cdot k)!}{\prod_{i=1}^{N}(\alpha_{i}\cdot k)!}\quad\text{where }k \in\mathbb{Z}^{2}\cap C\text{ and }\alpha\cdot k=d\] are approximated by a rescaled Gaussian. We begin by finding the mean of that Gaussian, that is, by minimising \[\prod_{i=1}^{N}(\alpha_{i}\cdot k)!\quad\text{where }k\in\mathbb{Z}^{2}\cap C \text{ and }\alpha\cdot k=d.\] For \(k\) in the strict interior of \(C\) with \(\alpha\cdot k=d\), we have that \[(\alpha_{i}\cdot k)!\sim\left(\frac{\alpha_{i}\cdot k}{e}\right)^{\alpha_{i} \cdot k}\] as \(d\to\infty\). **Proposition 9**.: _The constrained optimisation problem_ \[\min\prod_{i=1}^{N}(\alpha_{i}\cdot x)^{\alpha_{i}\cdot x}\quad\text{subject to }\begin{cases}x\in C\\ \alpha\cdot x=d\end{cases}\] _has a unique solution \(x=x^{*}\). Furthermore, setting \(p_{i}=(\alpha_{i}\cdot x^{*})/(\alpha\cdot x^{*})\) we have that the monomial_ \[\prod_{i=1}^{N}p_{i}^{\alpha_{i}\cdot k}\] _depends on \(k\in\mathbb{Z}^{2}\) only via \(\alpha\cdot k\)._ Proof.: Taking logarithms gives the equivalent problem \[\min\sum_{i=1}^{N}(\alpha_{i}\cdot x)\log(\alpha_{i}\cdot x) \text{subject to }\begin{cases}x\in C\\ \alpha\cdot x=d\end{cases} \tag{6}\] The objective function \(\sum_{i=1}^{N}(\alpha_{i}\cdot x)\log(\alpha_{i}\cdot x)\) here is the pullback to \(\mathbb{R}^{2}\) of the function \[f(x_{1},\ldots,x_{N})=\sum_{i=1}^{N}x_{i}\log x_{i}\] along the linear embedding \(\varphi:\mathbb{R}^{2}\to\mathbb{R}^{N}\) given by \((\alpha_{1},\ldots,\alpha_{N})\). Note that \(C\) is the preimage under \(\varphi\) of the positive orthant \(\mathbb{R}^{N}_{+}\), so we need to minimise \(f\) on the intersection of the simplex \(x_{1}+\cdots+x_{N}=d\), \((x_{1},\ldots,x_{N})\in\mathbb{R}^{N}_{+}\) with the image of \(\varphi\). The function \(f\) is convex and decreases as we move away from the boundary of the simplex, so the minimisation problem in (6) has a unique solution \(x^{*}\) and this lies in the strict interior of \(C\). We can therefore find the minimum \(x^{*}\) using the method of Lagrange multipliers, by solving \[\sum_{i=1}^{N}\alpha_{i}\log(\alpha_{i}\cdot x)+\alpha=\lambda\alpha \tag{7}\] for \(\lambda\in\mathbb{R}\) and \(x\) in the interior of \(C\) with \(\alpha\cdot x=d\). Thus \[\sum_{i=1}^{N}\alpha_{i}\log(\alpha_{i}\cdot x^{*})=(\lambda-1)\alpha\] and, evaluating on \(k\in\mathbb{Z}^{2}\) and exponentiating, we see that \[\prod_{i=1}^{N}(\alpha_{i}\cdot x^{*})^{\alpha_{i}\cdot k}\] depends only on \(\alpha\cdot k\). The result follows. Given a solution \(x^{*}\) to (7), any positive scalar multiple of \(x^{*}\) also satisfies (7), with a different value of \(\lambda\) and a different value of \(d\). Thus the solutions \(x^{*}\), as \(d\) varies, lie on a half-line through the origin. The direction vector \([\mu:\nu]\in\mathbb{P}^{1}\) of this half-line is the unique solution to the system \[\prod_{i=1}^{N}(a_{i}\mu+b_{i}\nu)^{a,b} =\prod_{i=1}^{N}(a_{i}\mu+b_{i}\nu)^{b_{i}a} \tag{8}\] \[\begin{pmatrix}\mu\\ \nu\end{pmatrix} \in C\] Note that the first equation here is homogeneous in \(\mu\) and \(\nu\); it is equivalent to (7), by exponentiating and then eliminating \(\lambda\). 
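For a concrete weight matrix, the system (8) is straightforward to solve numerically. The following minimal Python sketch is not part of the paper's accompanying code and uses a hypothetical rank-two weight matrix purely for illustration: it parameterises \([\mu:\nu]\) by an angle, finds a root of the logarithm of the first equation in (8) by bisection, and reports the probabilities \(p_{i}\) from Proposition 9.

```python
# Minimal sketch (not the paper's code): solve system (8) numerically for a
# hypothetical rank-two weight matrix and recover the probabilities p_i.
import numpy as np
from scipy.optimize import brentq

a = np.array([1.0, 1.0, 1.0, 0.0])   # first row of a hypothetical weight matrix
b = np.array([0.0, 0.0, 1.0, 1.0])   # second row
a_sum, b_sum = a.sum(), b.sum()

def g(t):
    # Logarithm of the first equation in (8): sum_i (a_i*b - b_i*a) * log(a_i*mu + b_i*nu),
    # where a and b in the coefficients denote the weight sums.
    mu, nu = np.cos(t), np.sin(t)
    return np.sum((a * b_sum - b * a_sum) * np.log(a * mu + b * nu))

# For this example the solution direction lies in the open first quadrant.
t_star = brentq(g, 1e-6, np.pi / 2 - 1e-6)
mu, nu = np.cos(t_star), np.sin(t_star)
p = (mu * a + nu * b) / (mu * a_sum + nu * b_sum)
print("[mu : nu] =", (mu, nu))
print("p_i =", p, " (sums to", p.sum(), ")")
```

For weight matrices whose solution direction lies outside the first quadrant, the search interval would need to be adjusted to the arc of directions contained in the cone \(C\).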
Any two solutions \(x^{*}\), for different values of \(d\), differ by rescaling, and the quantities \(p_{i}\) in Proposition 9 are invariant under this rescaling. They also satisfy \(p_{1}+\cdots+p_{N}=1\). We use the following result, known in the literature as the "Local Theorem" [29], to approximate multinomial coefficients. **Local Theorem**.: _For \(p_{1},\ldots,p_{n}\in[0,1]\) such that \(p_{1}+\cdots+p_{n}=1\), the ratio_ \[d^{\frac{n+1}{2}}\begin{pmatrix}d\\ k_{1}\cdots k_{n}\end{pmatrix}\prod_{i=1}^{n}p_{i}^{k_{i}}:\frac{\exp(-\frac{1}{ 2}\sum_{i=1}^{n}q_{i}x_{i}^{2})}{(2\pi)^{\frac{n+1}{2}}\sqrt{p_{1}\cdots p_{n} }}\to 1\] _as \(d\to\infty\), uniformly in all \(k_{i}\)'s, where_ \[q_{i} =1-p_{i} x_{i} =\frac{k_{i}-dp_{i}}{\sqrt{dp_{i}q_{i}}}\] _and the \(x_{i}\) lie in bounded intervals._ Let \(B_{r}\) denote the ball of radius \(r\) about \(x^{*}\in\mathbb{R}^{2}\). Fix \(R>0\). We apply the Local Theorem with \(k_{i}=\alpha_{i}\cdot k\) and \(p_{i}=(\alpha_{i}\cdot x^{*})/(\alpha\cdot x^{*})\), where \(k\in\mathbb{Z}^{2}\cap C\) satisfies \(\alpha\cdot k=d\) and \(k\in B_{R\sqrt{d}}\). Since \[x_{i}=\frac{\alpha_{i}\cdot(k-x^{*})}{\sqrt{dp_{i}q_{i}}}\] the assumption that \(k\in B_{R\sqrt{d}}\) ensures that the \(x_{i}\) remain bounded as \(d\to\infty\). Note that, by Proposition 9, the monomial \(\prod_{i=1}^{N}p_{i}^{k_{i}}\) depends on \(k\) only via \(\alpha\cdot k\), and hence here is independent of \(k\): \[\prod_{i=1}^{N}p_{i}^{k_{i}}=\prod_{i=1}^{N}p_{i}^{a_{i}\cdot x^{*}}=\prod_{i= 1}^{N}p_{i}^{dp_{i}}\] Furthermore \[\sum_{i=1}^{N}q_{i}x_{i}^{2}=\frac{(k-x^{*})^{T}A\left(k-x^{*}\right)}{d}\] where \(A\) is the positive-definite \(2\times 2\) matrix given by \[A=\sum_{i=1}^{N}\frac{1}{p_{i}}\alpha_{i}^{T}\alpha_{i}\] Thus as \(d\to\infty\), the ratio \[\frac{(\alpha\cdot k)!}{\prod_{i=1}^{N}(\alpha_{i}\cdot k)!}:\frac{\exp\left( -\frac{1}{2d}(k-x^{*})^{T}A\left(k-x^{*}\right)\right)}{(2\pi d)^{\frac{N-1}{2 }}\prod_{i=1}^{N}p_{i}^{dp_{i}+\frac{1}{2}}}\to 1 \tag{9}\] for all \(k\in\mathbb{Z}^{2}\cap C\cap B_{R\sqrt{d}}\) such that \(\alpha\cdot k=d\). **Theorem 6**.: _Let \(X\) be a toric variety of Picard rank two and dimension \(N-2\) with weight matrix_ \[\begin{pmatrix}a_{1}&a_{2}&a_{3}&\cdots&a_{N}\\ b_{1}&b_{2}&b_{3}&\cdots&b_{N}\end{pmatrix}\] _Let \(a=a_{1}+\cdots+a_{N}\) and \(b=b_{1}+\cdots+b_{N}\), let \(\ell=\gcd(a,b)\), and let \([\underline{\mu}:v]\in\mathbb{P}^{1}\) be the unique solution to (8). Let \(c_{d}\) denote the coefficient of \(t^{d}\) in the regularized quantum period \(\widetilde{G}_{X}(t)\). Then non-zero coefficients \(c_{d}\) satisfy_ \[\log c_{d}\sim Ad-\frac{\dim X}{2}\log d+B\] _as \(d\to\infty\), where_ \[A =-\sum_{i=1}^{N}p_{i}\log p_{i}\] \[B =-\frac{\dim X}{2}\log(2\pi)-\frac{1}{2}\sum_{i=1}^{N}\log p_{i}- \frac{1}{2}\log\left(\sum_{i=1}^{N}\frac{(a_{i}b-b_{i}a)^{2}}{\ell^{2}p_{i}}\right)\] _and \(p_{i}=\frac{\mu a_{i}+vb_{i}}{\mu a+vb}\)._ Proof.: We need to estimate \[c_{d}=\sum_{\begin{subarray}{c}k\in\mathbb{Z}^{2}\cap C\\ \text{with }\alpha\cdot k=d\end{subarray}}\frac{(\alpha\cdot k)!}{\prod_{i=1}^{N}( \alpha_{i}\cdot k)!}\] Consider first the summands with \(k\in\mathbb{Z}^{2}\cap C\) such that \(\alpha\cdot k=d\) and \(k\notin B_{R\sqrt{d}}\). For \(d\) sufficiently large, each such summand is bounded by \(cd^{-\frac{1+\dim X}{2}}\) for some constant \(c\) - see (9). Since the number of such summands grows linearly with \(d\), in the limit \(d\to\infty\) the contribution to \(c_{d}\) from \(k\notin B_{R\sqrt{d}}\) vanishes. 
As \(d\to\infty\), therefore \[c_{d}\sim\frac{1}{(2\pi d)^{\frac{N-1}{2}}\prod_{i=1}^{N}p_{i}^{dp_{i}+\frac{ 1}{2}}}\sum_{\begin{subarray}{c}k\in\mathbb{Z}^{2}\cap C\cap B_{R\sqrt{d}}\\ \text{with }\alpha\cdot k=d\end{subarray}}\exp\left(-\frac{(k-x^{*})^{T}A \left(k-x^{*}\right)}{2d}\right)\] Writing \(y_{k}=(k-x^{*})/\sqrt{d}\), considering the sum here as a Riemann sum, and letting \(R\to\infty\), we see that \[c_{d}\sim\frac{1}{(2\pi d)^{\frac{N-1}{2}}\prod_{i=1}^{N}p_{i}^{dp_{i}+\frac{ 1}{2}}}\sqrt{d}\int_{L_{\alpha}}\exp\left(-\tfrac{1}{2}y^{T}Ay\right)dy\] where \(L_{\alpha}\) is the line through the origin given by \(\ker\alpha\) and \(dy\) is the measure on \(L_{\alpha}\) given by the integer lattice \(\mathbb{Z}^{2}\cap L_{\alpha}\subset L_{\alpha}\). To evaluate the integral, let \[\alpha^{\perp}=\frac{1}{\ell}\begin{pmatrix}b\\ -a\end{pmatrix}\quad\text{where }\ell=\gcd\{a,b\}\] and observe that the pullback of \(dy\) along the map \(\mathbb{R}\to L_{\alpha}\) given by \(t\mapsto t\alpha^{\perp}\) is the standard measure on \(\mathbb{R}\). Thus \[\int_{L_{\alpha}}\exp\left(-\tfrac{1}{2}y^{T}Ay\right)dy=\int_{-\infty}^{\infty }\exp\left(-\tfrac{1}{2}\theta x^{2}\right)dx=\sqrt{\frac{2\pi}{\theta}}\] where \(\theta=\sum_{i=1}^{N}\frac{1}{\ell p_{i}}(\alpha_{i}\cdot\alpha^{\perp})^{2}\), and \[c_{d}\sim\frac{1}{(2\pi d)^{\frac{\dim X}{2}}\prod_{i=1}^{N}p_{i}^{dp_{i}+\frac {1}{2}}\sqrt{\theta}}\] Taking logarithms gives the result. ## Appendix A Supplementary Notes We begin with an introduction to weighted projective spaces and toric varieties, aimed at non-specialists. ### Projective spaces and weighted projective spaces The fundamental example of a Fano variety is two-dimensional projective space \(\mathbb{P}^{2}\). This is a quotient of \(\mathbb{C}^{3}\setminus\{0\}\) by the group \(\mathbb{C}^{\times}\), where the action of \(\lambda\in\mathbb{C}^{\times}\) identifies the points \((x,y,z)\) and \((\lambda x,\lambda y,\lambda z)\) in \(\mathbb{C}^{3}\setminus\{0\}\). The variety \(\mathbb{P}^{2}\) is smooth: we can see this by covering it with three open sets \(U_{x}\), \(U_{y}\), \(U_{z}\) that are each isomorphic to the plane \(\mathbb{C}^{2}\): \[U_{x} =\{(1,Y,Z)\}\quad\text{given by rescaling $x$ to $1$}\] \[U_{y} =\{(X,1,Z)\}\quad\text{given by rescaling $y$ to $1$}\] \[U_{z} =\{(X,Y,1)\}\quad\text{given by rescaling $z$ to $1$}\] Here, for example, in the case \(U_{x}\) we take \(x\neq 0\) and set \(Y=y/x\), \(Z=z/x\). Although the projective space \(\mathbb{P}^{2}\) is smooth, there are closely related Fano varieties called weighted projective spaces [20, 36] that have singularities. For example, consider the weighted projective plane \(\mathbb{P}(1,2,3)\): this is the quotient of \(\mathbb{C}^{3}\setminus\{0\}\) by \(\mathbb{C}^{\times}\), where the action of \(\lambda\in\mathbb{C}^{\times}\) identifies the points \((x,y,z)\) and \((\lambda x,\lambda^{2}y,\lambda^{3}z)\). Let us write \[\mu_{n}=\{e^{2\pi k1/n}\mid k\in\mathbb{Z}\}\] for the group of \(n\)th roots of unity. The variety \(\mathbb{P}(1,2,3)\) is once again covered by open sets \[U_{x} =\{(1,Y,Z)\}\quad\text{given by rescaling $x$ to $1$}\] \[U_{y} =\{(X,1,Z)\}\quad\text{given by rescaling $y$ to $1$}\] \[U_{z} =\{(X,Y,1)\}\quad\text{given by rescaling $z$ to $1$}\] but this time we have \(U_{x}\cong\mathbb{C}^{2}\), \(U_{y}\cong\mathbb{C}^{2}/\mu_{2}\), and \(U_{z}=\mathbb{C}^{2}/\mu_{3}\). 
This is because, for example, when we choose \(\lambda\in\mathbb{C}^{\times}\) to rescale \((x,y,z)\) with \(z\neq 0\) to \((X,Y,1)\), there are three possible choices for \(\lambda\) and they differ by the action of \(\mu_{3}\). In particular this lets us see that \(\mathbb{P}(1,2,3)\) is singular. For example, functions on the chart \(U_{y}\cong\mathbb{C}^{2}/\mu_{2}\) are polynomials in \(X\) and \(Z\) that are invariant under \(X\mapsto-X\), \(Z\mapsto-Z\), or in other words \[U_{y} =\operatorname{Spec}\mathbb{C}[X^{2},XZ,Z^{2}]\] \[=\operatorname{Spec}\mathbb{C}[a,b,c]/(ac-b^{2})\] Thus the chart \(U_{y}\) is the solution set for the equation \(ac-b^{2}=0\), as pictured in Figure 10(a). Similarly, the chart \(U_{z}\cong\mathbb{C}^{2}/\mu_{3}\) can be written as \[U_{z} =\operatorname{Spec}\mathbb{C}[X^{3},XY,Y^{3}]\] \[=\operatorname{Spec}\mathbb{C}[a,b,c]/(ac-b^{3})\] and is the solution set to the equation \(ac-b^{3}=0\), as pictured in Figure 10(b). The variety \(\mathbb{P}(1,2,3)\) has singular points at \((0,1,0)\in U_{y}\) and \((0,0,1)\in U_{z}\), and away from these points it is smooth. Figure 10. Singular charts on the weighted projective space \(\mathbb{P}(1,2,3)\): (a) the real-valued points in the chart \(U_{y}\). (b) the real-valued points in the chart \(U_{z}\). There are weighted projective spaces of any dimension. Let \(a_{1},a_{2},\ldots,a_{N}\) be positive integers such that any subset of size \(N-1\) has no common factor, and consider \[\mathbb{P}(a_{1},a_{2},\ldots,a_{N})=(\mathbb{C}^{N}\setminus\{0\})/\mathbb{C}^ {\times}\] where the action of \(\lambda\in\mathbb{C}^{\times}\) identifies the points \[(x_{1},x_{2},\ldots,x_{N})\qquad\text{and}\qquad(\lambda^{a_{1}}x_{1},\lambda^ {a_{2}}x_{2},\ldots,\lambda^{a_{N}}x_{N})\] in \(\mathbb{C}^{N}\setminus\{0\}\). The quotient \(\mathbb{P}(a_{1},a_{2},\ldots,a_{N})\) is an algebraic variety of dimension \(N-1\). A general point of \(\mathbb{P}(a_{1},a_{2},\ldots,a_{N})\) is smooth, but there can be singular points. Indeed, \(\mathbb{P}(a_{1},a_{2},\ldots,a_{N})\) is covered by \(N\) open sets \[U_{i}=\{(X_{1},\ldots,X_{i-1},1,X_{i+1},\ldots,X_{N})\}\qquad i\in\{1,2,\ldots,N\}\] given by rescaling \(x_{i}\) to \(1\); here we take \(x_{i}\neq 0\) and set \(X_{j}=x_{j}/x_{i}\). The chart \(U_{i}\) is isomorphic to \(\mathbb{C}^{N-1}/\mu_{a_{i}}\), where \(\mu_{a_{i}}\) acts on \(\mathbb{C}^{N-1}\) with weights \(a_{j}\), \(j\neq i\). In Reid's notation, this is the cyclic quotient singularity \(\frac{1}{a_{i}}(a_{1},\ldots,\widehat{a}_{i},\ldots,a_{N})\); it is smooth if and only if \(a_{i}=1\). The topology of weighted projective space is very simple, with \[H^{k}(\mathbb{P}(a_{1},a_{2},\ldots,a_{N});\mathbb{Q})=\begin{cases}\mathbb{Q} &\text{if $0\leq k\leq 2N-2$ and $k$ is even;}\\ 0&\text{otherwise.}\end{cases}\] Hence every weighted projective space has second Betti number \(b_{2}=1\). There is a closed formula [9, Proposition D.9] for the regularized quantum period of \(X=\mathbb{P}(a_{1},a_{2},\ldots,a_{N})\): \[\widehat{G}_{X}(t)=\sum_{k=0}^{\infty}\frac{(ak)!}{(a_{1}k)!(a_{2}k)!\cdots(a_ {N}k)!}t^{ak} \tag{10}\] where \(a=a_{1}+a_{2}+\cdots+a_{N}\). #### Toric varieties of Picard rank 2 As well as weighted projective spaces, which are quotients of \(\mathbb{C}^{N}\setminus\{0\}\) by an action of \(\mathbb{C}^{\times}\), we will consider varieties that arise as quotients of \(\mathbb{C}^{N}\setminus S\) by \(\mathbb{C}^{\times}\times\mathbb{C}^{\times}\), where \(S\) is a union of linear subspaces. 
These are examples of _toric varieties_[16, 25]. Specifically, consider a matrix \[\begin{pmatrix}a_{1}&a_{2}&\cdots&a_{N}\\ b_{1}&b_{2}&\cdots&b_{N}\end{pmatrix} \tag{11}\] with non-negative integer entries and no zero columns. This defines an action of \(\mathbb{C}^{\times}\times\mathbb{C}^{\times}\) on \(\mathbb{C}^{N}\), where \((\lambda,\mu)\in\mathbb{C}^{\times}\times\mathbb{C}^{\times}\) identifies the points \[(x_{1},x_{2},\ldots,x_{N})\quad\text{and}\quad(\lambda^{a_{1}}\mu^{b_{1}}x_{1},\lambda^{a_{2}}\mu^{b_{2}}x_{2},\ldots,\lambda^{a_{N}}\mu^{b_{N}}x_{N})\] in \(\mathbb{C}^{N}\). Set \(a=a_{1}+a_{2}+\ldots+a_{N}\) and \(b=b_{1}+b_{2}+\cdots+b_{N}\), and suppose that \((a,b)\) is not a scalar multiple of \((a_{i},b_{i})\) for any \(i\). This determines linear subspaces \[S_{+}=\{(x_{1},x_{2},\ldots,x_{N})\mid x_{i}=0\text{ if $b_{i}/a_{i}<b /a$}\}\] \[S_{-}=\{(x_{1},x_{2},\ldots,x_{N})\mid x_{i}=0\text{ if $b_{i}/a_{i}>b /a$}\}\] of \(\mathbb{C}^{N}\), and we consider the quotient \[X=(\mathbb{C}^{N}\setminus S)/(\mathbb{C}^{\times}\times\mathbb{C}^{\times}) \tag{12}\] where \(S=S_{+}\cup S_{-}\). See e.g. [5, SSA.5]. These quotients behave in many ways like weighted projective spaces. Indeed, if we take the weight matrix (11) to be \[\begin{pmatrix}a_{1}&a_{2}&\cdots&a_{N}&0\\ 0&0&\cdots&0&1\end{pmatrix}\] then \(X\) coincides with \(\mathbb{P}(a_{1},a_{2},\ldots,a_{N})\). We will consider only weight matrices such that the subspaces \(S_{+}\) and \(S_{-}\) both have dimension two or more; this implies that the second Betti number \(b_{2}(X)=2\), and hence \(X\) is not a weighted projective space. We will refer to such quotients (12) as _toric varieties of Picard rank two_, because general theory implies that the Picard lattice of \(X\) has rank two. The dimension of \(X\) is \(N-2\). As for weighted projective spaces, toric varieties of Picard rank two can have singular points, the precise form of which is determined by the weights (11). There is also a closed formula [9, Proposition C.2] for the regularized quantum period. Let \(C\) denote the cone in \(\mathbb{R}^{2}\) defined by the equations \(a_{i}x+b_{i}y\geq 0\), \(i\in\{1,2,\ldots,N\}\). Then \[\widehat{G}_{X}(t)=\sum_{(k,l)\in\mathbb{Z}^{2}\cap C}\frac{(ak+b )!}{(a_{1}k+b_{1}l)!(a_{2}k+b_{2}l)!\cdots(a_{N}k+b_{N}l)!}t^{ak+bI} \tag{13}\] _Classification results._ Weighted projective spaces with terminal quotient singularities have been classified in dimensions up to four; see Table 2 for a summary. There are 35 three-dimensional Fano toric varieties with terminal quotient singularities and Picard rank two [41]. There is no known classification of Fano toric varieties with terminal quotient singularities in higher dimension, even when the Picard rank is two. ## Appendix B Supplementary Methods 1 _Data analysis: weighted projective spaces._ We computed an initial segment \((c_{0},c_{1},\ldots,c_{m})\) of the regularized quantum period, with \(m\approx 100\,000\), for all the examples in the sample of \(150\,000\) weighted projective spaces with terminal quotient singularities. We then considered \(\{\log c_{d}\}_{d\in S}\) where \(S=\{d\in\mathbb{Z}_{\geq 0}\mid c_{d}\neq 0\}\). To reduce dimension we fitted a linear model to the set \(\{(d,\log c_{d})\mid d\in S\}\) and used the slope and intercept of this model as features. The linear fit produces a close approximation of the data. 
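As an illustration of this feature-extraction step, the following minimal Python sketch (not the paper's code; the weighted projective space \(\mathbb{P}(1,1,2,3)\) is chosen purely as an example) computes \(\log c_{d}\) from the closed formula (10) using log-gamma functions to avoid overflow, and fits the slope and \(y\)-intercept of a linear model to the non-zero terms.

```python
# Minimal sketch of the feature extraction: compute log c_d from formula (10)
# for an example weighted projective space and fit a linear model to (d, log c_d).
import numpy as np
from math import lgamma

weights = [1, 1, 2, 3]     # example: the weighted projective space P(1,1,2,3)
a = sum(weights)           # c_d is non-zero only when d is divisible by a

def log_coefficient(d):
    # log c_d = log((a*k)!) - sum_i log((a_i*k)!)  where d = a*k
    k = d // a
    return lgamma(a * k + 1) - sum(lgamma(w * k + 1) for w in weights)

# Skip the early terms (the paper excludes the first ~1000) and keep non-zero ones.
degrees = np.array([d for d in range(1000, 100_000) if d % a == 0], dtype=float)
log_cd = np.array([log_coefficient(int(d)) for d in degrees])

slope, intercept = np.polyfit(degrees, log_cd, 1)
print("slope =", slope, "  y-intercept =", intercept)
```

For this example the fitted slope should be close to the asymptotic coefficient \(A=\log a-\frac{1}{a}\sum_{i}a_{i}\log a_{i}\) from Theorem 8.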
Figure 11 shows the distribution of the standard errors for the slope and the \(y\)-intercept: the errors for the slope are between \(3.9\times 10^{-8}\) and \(1.4\times 10^{-5}\), and the errors for the \(y\)-intercept are between \(0.0022\) and \(0.82\). As we will see below, the standard error for the \(y\)-intercept is a good proxy for the accuracy of the linear model. This accuracy decreases as the dimension grows - see Figure 11(c) - but we will see below that this does not affect the accuracy of the machine learning classification. _Data analysis: toric varieties of Picard rank 2._ We fitted a linear model to the set \(\{(d,\log c_{d})\mid d\in S\}\) where \(S=\{d\in\mathbb{Z}_{\geq 0}\mid c_{d}\neq 0\}\), and used the slope and intercept of this linear model as features. The distribution of standard errors for the slope and \(y\)-intercept of the linear model are shown in Figure 12. The standard errors for the slope are small compared to the range of slopes, but in many cases the standard error for the \(y\)-intercept is relatively large. As Figure 13 illustrates, discarding data points where the standard error \(s_{\text{int}}\) for the \(y\)-intercept exceeds some threshold reduces apparent noise. As discussed above, we believe that this reflects inaccuracies in the linear regression caused by oscillatory behaviour in the initial terms of the quantum period sequence. _Weighted projective spaces._ We excluded dimensions one and two from the analysis, since there is only one weighted projective space in each case (namely \(\mathbb{P}^{1}\) and \(\mathbb{P}^{2}\)). Therefore we have a dataset of \(149\,998\) slope-intercept pairs, labelled by the dimension which varies between three and ten. We standardised the features, by translating the means to zero and scaling to unit variance, and applied a Support Vector Machine (SVM) with linear kernel and regularisation parameter \(C=10\). By looking at different train-test splits we obtained the learning curves shown in Figure 15. The figure displays the mean accuracies for both training and validation data obtained by performing five random test-train splits each time: the shaded areas around the lines correspond to the \(1\sigma\) region, where \(\sigma\) denotes the Figure 11. Standard errors for the slope and \(y\)-intercept. The distribution of standard errors for the slope and \(y\)-intercept from the linear model applied to weighted projective spaces \(X\) with terminal quotient singularities: (a) standard error for the slope. (b) standard error for the \(y\)-intercept. (c) standard error for the \(y\)-intercept by dimension. Figure 12. Standard errors for the slope and \(y\)-intercept. The distribution of standard errors for the slope and \(y\)-intercept from the linear model applied to toric varieties of Picard rank two with terminal quotient singularities: (a) standard error for the slope. (b) standard error for the \(y\)-intercept. Figure 14. The logarithm of the non-zero coefficients \(c_{d}\) for Example 3: (a) the first 250 terms. (b) terms between \(d=1000\) and \(d=1250\). In each case, the linear approximation is also shown. Figure 13. The slopes and \(y\)-intercepts from the linear model applied to toric varieties of Picard rank two with terminal quotient singularities. Data points are selected according to the standard error \(s_{\text{int}}\) for the \(y\)-intercept. The colour records the dimension of the toric variety. (a) All data points. (b) Points with \(s_{\text{int}}<1\): \(101\,183/200000\) points. 
(c) Points with \(s_{\text{int}}<0.3\): \(67\,445/200000\) points. standard deviation. Using 10% (or more) of the data for training we obtained an accuracy of 99.99%. In Figure 16 we plot the decision boundaries computed by the SVM between neighbouring dimension classes. Toric varieties of Picard rank 2In light of the discussion above, we restricted attention to toric varieties with Picard rank two such that the \(y\)-intercept standard error \(s_{\text{int}}\) is less than 0.3. We also excluded dimension two from the analysis, since in this case there are only two varieties (namely, \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) and the Hirzebruch surface \(\mathbb{F}_{1}\)). The resulting dataset contains 67 443 slope-intercept pairs, labelled by dimension; the dimension varies between three and ten, as shown in Table 3. Support Vector MachineWe used a linear SVM with regularisation parameter \(C=50\). By considering different train-test splits we obtained the learning curves shown in Figure 17, where the means and the standard deviations were obtained by performing five random samples for each split. Note that the model did not overfit. We obtained a validation accuracy of 88.2% using 70% of the data for training. Figure 18 shows the decision boundaries computed by the SVM between neighbouring dimension classes. Figure 19 shows the confusion matrices for the same train-test split. Random Forest ClassifierWe used a Random Forest Classifier (RFC) with 1500 estimators and the same features (slope and \(y\)-intercept for the linear model). By considering different train-test splits we obtained the learning curves shown in Figure 20; note again that the model did not overfit. Using 70% of the data for training, the RFC gave a validation accuracy of 89.4%. Figure 21 on page 22 shows confusion matrices for the same train-test split. Figure 16. Decision boundaries computed from a Support Vector Machine with linear kernel trained on 70% of the dataset of weighted projective spaces. Note that the data has been standardised. Figure 15. Learning curves for a Support Vector Machine with linear kernel applied to the dataset of weighted projective spaces. The plot shows the means of the training and validation accuracies for five different random train–test splits. The shaded regions show the \(1\sigma\) interval, where \(\sigma\) is the standard deviation. Figure 19. Confusion matrices for a Support Vector Machine with linear kernel trained on 70% of the dataset of toric varieties of Picard rank two. Figure 17. Learning curves for a Support Vector Machine with linear kernel applied to the dataset of toric varieties of Picard rank two. The plot shows the means of the training and validation accuracies for five different random train-test splits. The shaded regions show the \(1\sigma\) interval, where \(\sigma\) is the standard deviation. Figure 18. Decision boundaries computed from a Support Vector Machine with linear kernel trained on 70% of the dataset of toric varieties of Picard rank two. Note that the data has been standardised. Figure 21. Confusion matrices for a Random Forest Classifier trained on 70% of the dataset of toric varieties of Picard rank two. Figure 20. Learning curves for a Random Forest Classifier applied to the dataset of toric varieties of Picard rank two. The plot shows the means of the training and validation accuracies for five different random train-test splits. The shaded regions show the \(1\sigma\) interval, where \(\sigma\) is the standard deviation. 
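As a minimal sketch of the two classifiers just described (this is not the paper's released code; it assumes the slope-intercept features and dimension labels have already been assembled as arrays, and uses the regularisation parameter \(C=50\), 1500 estimators, and a 70% training split mentioned above), one could write:

```python
# Minimal sketch (not the paper's released code) of the SVM and Random Forest
# classifiers described above, taking precomputed features as input.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def evaluate_classifiers(X, y, seed=0):
    """X: array of (slope, y-intercept) features; y: dimension labels."""
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, train_size=0.7, random_state=seed)

    # Linear-kernel SVM with C = 50, applied to standardised features.
    svm = make_pipeline(StandardScaler(), SVC(kernel="linear", C=50))
    svm.fit(X_train, y_train)

    # Random Forest Classifier with 1500 estimators on the same features.
    rfc = RandomForestClassifier(n_estimators=1500, random_state=seed, n_jobs=-1)
    rfc.fit(X_train, y_train)

    return {"SVM": svm.score(X_test, y_test), "RFC": rfc.score(X_test, y_test)}
```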
\begin{table} \begin{tabular}{c r r} \hline \hline \multicolumn{3}{c}{Rank-two toric varieties with \(s_{\text{int}}<0.3\)} \\ \hline Dimension & \multicolumn{1}{c}{Sample size} & \multicolumn{1}{c}{Percentage} \\ \hline 3 & 17 & 0.025 \\ 4 & 758 & 1.124 \\ 5 & 5 504 & 8.161 \\ 6 & 12 497 & 18.530 \\ 7 & 16 084 & 23.848 \\ 8 & 13 701 & 20.315 \\ 9 & 10 638 & 15.773 \\ 10 & 8 244 & 12.224 \\ \hline Total & 67 443 & \\ \hline \hline \end{tabular} \end{table} Table 3. The distribution by dimension among toric varieties of Picard rank two in our dataset with \(s_{\text{int}}<0.3\). Feed-forward neural networkAs discussed above, neural networks do not handle unbalanced datasets well, and therefore we removed the toric varieties with dimensions three, four, and five from our dataset: see Table 3. We trained a Multilayer Perceptron (MLP) classifier on the same features, using an MLP with three hidden layers \((10,30,10)\), Adam optimiser [44], and rectified linear activation function [2]. Different train-test splits produced the learning curve in Figure 22; again the model did not overfit. Using 70% of the data for training, the MLP gave a validation accuracy of 88.7%. One could further balance the dataset, by randomly undersampling so that there are the same number of representatives in each dimension (8244 representatives: see Table 3). This resulted in a slight decrease in accuracy: the better balance was outweighed by loss of data caused by undersampling. Feed-forward neural network with many featuresWe trained an MLP with the same architecture, but supplemented the features by including \(\log c_{d}\) for \(1\leq d\leq 100\) (unless \(c_{d}\) was zero in which case we set that feature to zero), as well as the slope and \(y\)-intercept as before. We refer to the previous neural network as MLP\({}_{2}\), because it uses 2 features, and refer to this neural network as MLP\({}_{102}\), because it uses 102 features. Figure 23 shows the learning curves obtained for different train-test splits. Using 70% of the data for training, the MLP\({}_{102}\) model gave a validation accuracy of 97.7%. We do not understand the reason for the performance improvement between MLP\({}_{102}\) and MLP\({}_{2}\). But one possible explanation is the following. Recall that the first 1000 terms of the period sequence were excluded when calculating the slope and intercept, because they exhibit irregular oscillations Figure 23. Learning curves for a Multilayer Perceptron classifier MLP\({}_{102}\) applied to the dataset of toric varieties of Picard rank two and dimension at least six, using as features the regression data as well as \(\log c_{d}\) for \(1\leq d\leq 100\). The plot shows the means of the training and validation accuracies for five different random train–test splits. The shaded regions show the \(1\sigma\) interval, where \(\sigma\) is the standard deviation. Figure 22. Learning curves for a Multilayer Perceptron classifier MLP\({}_{2}\) applied to the dataset of toric varieties of Picard rank two and dimension at least six, using just the regression data as features. The plot shows the means of the training and validation accuracies for five different random train–test splits. The shaded regions show the \(1\sigma\) interval, where \(\sigma\) is the standard deviation. that decay as \(d\) grows. These oscillations reduce the accuracy of the linear regression. 
The oscillations may, however, carry information about the toric variety, and so including the first few values of \(\log(c_{d})\) potentially makes more information available to the model. For example, examining the pattern of zeroes at the beginning of the sequence (\(c_{d}\)) sometimes allows one to recover the values of \(a\) and \(b\) - see (13) for the notation. This information is relevant to estimating the dimension because, as a very crude approximation, larger \(a\) and \(b\) go along with larger dimension. Omitting the slope and intercept, however, and training on the coefficients \(\log c_{d}\) for \(1\leq d\leq 100\) with the same architecture gave an accuracy of only 62%. Comparison of models.The validation accuracies of the SVM, RFC, and the neural networks \(\mathrm{MLP}_{2}\) and \(\mathrm{MLP}_{102}\), on the same data set (\(s_{\mathrm{int}}<0.3\), dimension between six and ten), are compared in Table 4. Their confusion matrices are shown in Table 5. All models trained on only the regression data performed well, with the RFC slightly more accurate than the SVM and the neural network \(\mathrm{MLP}_{2}\) slightly more accurate still. Misclassified examples are generally in higher dimension, which is consistent with the idea that misclassification is due to convergence-related noise. The neural network trained on the supplemented feature set, \(\mathrm{MLP}_{102}\), outperforms all other models. However, as discussed above, feature importance analysis using SHAP values showed that the slope and the intercept were the most influential features in the prediction. ## Appendix D Supplementary Discussion Comparison with Principal Component Analysis.An alternative approach to dimensionality reduction, rather than fitting a linear model to \(\log c_{d}\), would be to perform Principal Component Analysis (PCA) on this sequence and retain only the first few principal components. Since the vectors (\(c_{d}\)) have different patterns of zeroes - \(c_{d}\) is non-zero only if \(d\) is divisible by the Fano index \(r\) of \(X\) - we need to perform PCA for Fano varieties of each index \(r\) separately. We analysed this in the weighted projective space case, finding that for each \(r\) the first two components of PCA are related to the growth coefficients (\(A,B\)) from Theorem 5 by an invertible affine-linear transformation. That is, our analysis suggests that the coefficients (\(A,B\)) contain exactly the same information as the first two components of PCA. Note, however, that the affine-linear transformation that relates PCA to (\(A,B\)) varies with the Fano index \(r\). Using \(A\) and \(B\) as features therefore allows for meaningful comparison between Fano varieties of different index. Furthermore, unlike PCA-derived values, the coefficients (\(A,B\)) can be computed for a single Fano variety, rather than requiring a sufficiently large collection of Fano varieties of the same index. Towards more general Fano varieties.Weighted projective spaces and toric varieties of Picard rank two are very special among Fano varieties. It is hard to quantify this, because so little is known about Fano classification in the higher-dimensional and non-smooth cases, but for example this class includes only 18% of the \(\mathbb{Q}\)-factorial terminal Fano toric varieties in three dimensions. On the other hand, one can regard weighted projective spaces and toric varieties of Picard rank two as representative of a much broader class of algebraic varieties called toric complete intersections. 
Toric complete intersections share the key properties that we used to prove Theorems 5 and 6 - geometry that is tightly controlled by combinatorics, including explicit expressions for genus-zero Gromov-Witten invariants in terms of hypergeometric functions - and we believe that the rigorous results of this paper will generalise to the toric complete intersection case. All smooth two-dimensional Fano varieties and 92 of the 105 smooth three-dimensional Fano varieties are toric complete intersections [9]. Many theorems in algebraic geometry were first proved for toric varieties and later extended to toric complete intersections and more general algebraic varieties; cf. [26, 27, 33] and [28, 56]. The machine learning paradigm presented here, however, applies much more broadly. Since our models take only the regularized quantum period sequence as input, we expect that whenever we can calculate \(\widehat{G}_{X}\) - which is the case for almost all known Fano varieties - we should be able to apply a machine learning pipeline to extract geometric information about \(X\). 
\begin{table} \begin{tabular}{c c c c} \hline \hline \multicolumn{4}{c}{ML models} \\ SVM & RFC & \(\mathrm{MLP}_{2}\) & \(\mathrm{MLP}_{102}\) \\ \hline 87.7\% & 88.6\% & 88.7\% & 97.7\% \\ \hline \hline \end{tabular} \end{table} Table 4. Comparison of model accuracies. Accuracies for various models applied to the dataset of toric varieties of Picard rank two and dimension at least six: a Support Vector Machine with linear kernel, a Random Forest Classifier, and the neural networks \(\mathrm{MLP}_{2}\) and \(\mathrm{MLP}_{102}\). 
Table 5. True and predicted confusion matrices for each model (SVM, RFC, \(\mathrm{MLP}_{2}\), and \(\mathrm{MLP}_{102}\)). 
**Data availability.** Our datasets [11, 12] and the code for the Magma computer algebra system [4] that was used to generate them are available from Zenodo [23] under a CC0 license. The data was collected using Magma V2.25-4. **Code availability.** All code required to replicate the results in this paper is available from Bitbucket under an MIT license [13]. **Acknowledgements.** TC is funded by ERC Consolidator Grant 682603 and EPSRC Programme Grant EP/N03189X/1. AK is funded by EPSRC Fellowship EP/N022513/1. SV is funded by the EPSRC Centre for Doctoral Training in Geometry and Number Theory at the Interface, grant number EP/L015234/1. We thank Giuseppe Pitton for conversations and experiments that began this project, and thank John Aston and Louis Christie for insightful conversations and feedback. We also thank the anonymous referees for their careful reading of the text and their insightful comments, which substantially improved both the content and the presentation of the paper.
Fano varieties are basic building blocks in geometry - they are the "atoms" of mathematical shapes. Recent progress in the classification of Fano varieties involves analysing an invariant called the quantum period. This is a sequence of integers that gives a numerical fingerprint for a Fano variety. It is conjectured that a Fano variety is uniquely determined by its quantum period. If this is true, it should be possible to recover geometric properties of a Fano variety directly from its quantum period. We apply machine learning to this question: does the quantum period know the dimension of X? There is currently no theoretical understanding of this. We show that a simple feed-forward neural network can recover the dimension of X with 98% accuracy. Building on this, we establish rigorous asymptotics for the quantum periods of a class of Fano varieties. These asymptotics determine the dimension of X from its quantum period.
2310.00468
Encouraging Inferable Behavior for Autonomy: Repeated Bimatrix Stackelberg Games with Observations
When interacting with other non-competitive decision-making agents, it is critical for an autonomous agent to have inferable behavior: Their actions must convey their intention and strategy. For example, an autonomous car's strategy must be inferable by the pedestrians interacting with the car. We model the inferability problem using a repeated bimatrix Stackelberg game with observations where a leader and a follower repeatedly interact. During the interactions, the leader uses a fixed, potentially mixed strategy. The follower, on the other hand, does not know the leader's strategy and dynamically reacts based on observations that are the leader's previous actions. In the setting with observations, the leader may suffer from an inferability loss, i.e., the performance compared to the setting where the follower has perfect information of the leader's strategy. We show that the inferability loss is upper-bounded by a function of the number of interactions and the stochasticity level of the leader's strategy, encouraging the use of inferable strategies with lower stochasticity levels. As a converse result, we also provide a game where the required number of interactions is lower bounded by a function of the desired inferability loss.
Mustafa O. Karabag, Sophia Smith, David Fridovich-Keil, Ufuk Topcu
2023-09-30T19:08:05
http://arxiv.org/abs/2310.00468v1
# Encouraging Inferable Behavior for Autonomy: ###### Abstract When interacting with other non-competitive decision-making agents, it is critical for an autonomous agent to have inferable behavior: Their actions must convey their intention and strategy. For example, an autonomous car's strategy must be inferable by the pedestrians interacting with the car. We model the inferability problem using a repeated bimatrix Stackelberg game with observations where a leader and a follower repeatedly interact. During the interactions, the leader uses a fixed, potentially mixed strategy. The follower, on the other hand, does not know the leader's strategy and dynamically reacts based on observations that are the leader's previous actions. In the setting with observations, the leader may suffer from an inferability loss, i.e., the performance compared to the setting where the follower has perfect information of the leader's strategy. We show that the inferability loss is upper-bounded by a function of the number of interactions and the stochasticity level of the leader's strategy, encouraging the use of inferable strategies with lower stochasticity levels. As a converse result, we also provide a game where the required number of interactions is lower bounded by a function of the desired inferability loss. ## I Introduction Autonomous agents repeatedly interact with other agents, e.g., humans and other autonomous systems, in their environments during their operations. Often, the intentions and strategies of these autonomous agents are not perfectly known by the other agents, and the other agents rely on inference from the past interactions when they react to the actions of the autonomous agent. For example, an autonomous car interacts with pedestrians who intend to cross the road, and pedestrians do not have a perfect knowledge of the car's strategy. Consequently, acting in an inferable way is consequential for autonomous agents. We model the interaction between the autonomous agent and the other agent with a bimatrix Stackelberg game. In this game, the autonomous agent is the _leader_ that commits to a strategy, and the other agent is the _follower_ that does not know the leader's action and reacts to the leader's strategy where the leader's actions are drawn from. The game is repeated between the agents. While the leader follows the same strategy at every interaction, the follower's strategy can change between interactions. For the autonomous car example, the fixed strategy over actions stopping and proceeding represents a version of the car's software. We consider that the follower does not have perfect information of the leader's strategy and relies on the observations from the previous rounds. In detail, at every interaction, the follower plays optimally against the empirical action distribution from the previous interactions. For example, in the car-pedestrian scenario, the pedestrian would act based on the frequency that the car stopped in the previous interactions. For a traditional bimatrix Stackelberg game, the leader's optimal strategy may be mixed [1]. However, in the inference setting that we consider, this strategy may not be optimal since the follower reacts to the empirically observed strategy of the leader, not the actual strategy. As a result, the leader might be better off using a less stochastic strategies since such strategies would be more inferable. The leader's expected return in the inference setting might be lower than its expected return in the perfect information setting. 
We call the return gap between these settings the leader's _inferability loss_. We show that when the follower has bounded rationality (modeled by the maximum entropy model), the leader's cumulative inferability loss is bounded above. The upper bound is a function of both the stochasticity level (trace of the covariance matrix) of the leader's strategy and the number of interactions. As the stochasticity level of the leader's strategy decreases, the inferability loss vanishes. In the extreme case where the leader's strategy is deterministic, the leader does not suffer from any inferability loss; the expected return in the inference setting is the same as the expected return in the perfect information setting. The inferability loss at interaction \(k\) is at most \(\mathcal{O}(\nicefrac{{1}}{{\sqrt{k}}})\), implying that \(\mathcal{O}(\nicefrac{{1}}{{\epsilon^{2}}})\) interactions are sufficient to achieve a maximum of \(\epsilon\) inferability loss. Motivated by the bound, we use the stochasticity level as a regularization term in the leader's objective function to find optimal strategies for the inference setting. Numerical experiments show that the leader indeed suffers from an inferability loss in the inference setting, and the strategies generated by the regularized objective function lead to improved transient returns compared to the strategies that are optimal for the perfect information case. Additionally, as a converse result, we provide an example bimatrix Stackelberg game where the inferability loss at interaction \(k\) is at least \(\epsilon\) if \(k\) is not at the order of \(\mathcal{O}(\nicefrac{{1}}{{\epsilon^{2}}})\) under the full rationality assumption for the follower. _Related work:_ Bimatrix Stackelberg games with a commitment to mixed strategies have been extensively studied in the literature under the assumption that the follower has perfect knowledge of the leader's strategy [1, 2, 3]. For these games, an optimal strategy for the leader can be computed in polynomial time via linear programming (assuming that the follower breaks ties in favor of the leader) [2]. The paper [4] considers Stackelberg games with partial observability where the follower observes the leader's strategy with some probability and does not otherwise. We consider a different observability setting where the follower gets observations from the leader's strategy. Papers [5, 6] also consider this observation setting. To account for the follower's partial information, [5, 6, 7] consider a robust set that represents the possible realizations of the leader's strategy and maximize the leader's worst-case return by solving a robust optimization problem. We follow a different approach and try to maximize the leader's expected return under observations by relating it to the return under the perfect information setting. We provide a lower bound on the leader's return that involves the stochasticity level (inferability) of the leader's strategy. To our knowledge, a bound in this spirit does not exist for Stackelberg games with observations. Works [8, 9, 10] increase the stochasticity level of the control policy (the leader's strategy in our context) to improve the non-inferability in different contexts. We consider a stochasticity metric that coincides with the Fisher information metric considered in [8, 9, 10]. However, unlike these works, which focus on minimizing information and providing unachievability results, we provide an achievability result. 
Human-robot interactions are more efficient if the human knows the robot's intent. Conveying intent information via movement is explored to create legible behavior [11, 12]. These works are often concerned with creating trajectories that are distant from the trajectories under other intents. The leader's optimization problem in our setting is a bilevel optimization problem under data uncertainty [13]. Works [14, 15, 16] consider stochastic bilevel optimization problems where first the leader commits to a strategy before the data uncertainty is resolved, then the data uncertainty is resolved, and finally, the follower makes its decision with known data. In our problem, the distribution of data depends on the leader's decision1, whereas [14, 15, 16] consider a fixed distribution of data. Footnote 1: For single-level stochastic optimization problems, this setting is referred to as non-oblivious stochastic optimization [17]. We represent the boundedly rational follower using the maximum entropy model (also known as Boltzmann rationality model or quantal response) [18, 19]. Alternatively, [5, 20] consider boundedly rational followers using the anchoring theory [21] or \(\epsilon\)-optimal follower models. ## II Preliminaries ### _Notation_ We use upper-case letters for matrices and bold-face letters for random variables. \(\|\cdot\|\) denotes the L2 norm. \(\Delta^{N}\) denotes the \(N\)-dimensional probability simplex. For \(z\in\Delta^{N}\), the entropy of \(z\), is \[H(z)=\sum_{i=1}^{N}z_{i}\log(1/z_{i})\] where \(z_{i}\) is the \(i\)-th element of \(z\). The softmax function \(\sigma_{\lambda}:\mathbb{R}^{N}\rightarrow\Delta^{N}\) is defined as \[\sigma_{\lambda}(z)_{i}:=\frac{\exp(\lambda z_{i})}{\sum_{j=1}^{N}\exp(\lambda z _{j})}\] where \(\sigma_{\lambda}(z)_{i}\) is the \(i\)-th element of \(\sigma_{\lambda}(z)\). The softmax function \(\sigma_{\lambda}\) is \(\lambda\)-Lipschitz continuous, i.e., it satisfies \(\|\sigma(z)-\sigma(q)\|\leq\lambda\|z-q\|\)for all \(z,q\in\mathbb{R}^{N}\)[22]. We define the _stochasticity level_ of a probability distribution \(z\in\Delta^{N}\) as \[v(z):=\sqrt{\sum_{i=1}^{N}z_{i}(1-z_{i})}\] that is the square root of the trace of the covariance matrix. ### _Bimatrix Stackelberg Games with Mixed Strategies_ A _bimatrix Stackelberg game_ is a two-player game between a _leader_ and a _follower_. The leader has \(m\) (enumerated) actions, and the follower has \(n\) (enumerated) actions. We call matrix \(A\in\mathbb{R}^{m\times n}\) the leader's _utility matrix_ and \(B\in\mathbb{R}^{m\times n}\) the follower's utility matrix. When the leader takes action \(i\) and the follower takes action \(j\), the leader and follower returns are \(A_{ij}\) and \(B_{ij}\) respectively. In bimatrix Stackelberg games with _mixed strategies_, the leader has a mixed strategy \(x\in\Delta^{m}\), and the follower has mixed strategy \(y\in\Delta^{n}\). Let \(\mathbf{i}\) and \(\mathbf{j}\) denote the random versions of \(i\) and \(j\), respectively. The leader's _expected utility_ is \(x^{\top}Ay=\mathbb{E}_{\mathbf{i}\sim x,\mathbf{j}\sim y}\left[A_{\mathbf{i} \mathbf{j}}\right],\) and the follower's expected utility is \(x^{\top}By=\mathbb{E}_{\mathbf{i}\sim x,\mathbf{j}\sim y}\left[B_{\mathbf{i} \mathbf{j}}\right].\) When deciding on strategy \(y\), the follower knows the leader's strategy \(x\). This means the follower knows the probability distribution of the leader's action but does not know the leader's realized action. 
The follower's goal is to maximize its expected return given \(x\), the leader's strategy, by solving: \[\max_{y\in\Delta^{n}}x^{\top}By.\] An optimal solution exists for the follower's problem. The leader's goal is to maximize its (conservative2) expected return, i.e., solve the bilevel optimization problem: Footnote 2: Here conservative refers to the fact that if there are multiple optimal follower strategies, the follower chooses the worst strategy for the leader. \[\sup_{x\in\Delta^{m}} \min_{y^{*}}x^{\top}Ay^{*}\] s.t. \[y^{*}\in\arg\max_{y\in\Delta^{n}}x^{\top}By.\] Note that an optimal solution may not exist for this problem. We define \[SR(x):= \min_{y^{*}}x^{\top}Ay^{*}\] s.t. \[y^{*}\in\arg\max_{y\in\Delta^{n}}x^{\top}By.\] We refer to \(SR\) as the Stackelberg return under perfect information and full follower rationality. ### _Boundedly Rational Follower_ Bounded rationality models represent the decision-making process of an agent with limited information or information processing capabilities and are often used to model the decision-making process of humans [23]. We consider the maximum entropy model (Boltzmann rationality) to represent boundedly rational followers [18]. Given the leader's strategy \(x\), a boundedly rational follower solves the following optimization problem \[\max_{y\in\Delta^{n}}x^{\top}By+\frac{1}{\lambda}H(y)\] where \(\lambda\) denotes the follower's rationality level. Note that for \(\lambda\in(0,\infty)\), the optimal solution for the above problem is unique since the objective function is strictly concave and is given by \(\sigma_{\lambda}(B^{\top}x)\)[22]. In words, the action probabilities are weighted exponentially according to their expected returns. As \(\lambda\to 0\), the follower does not take its expected utility \(x^{\top}By\) into account and takes all available actions uniformly randomly. As \(\lambda\to\infty\), the follower becomes fully rational. Given that the follower is boundedly rational with level \(\lambda\in(0,\infty)\), the leader's goal is to maximize its expected utility, i.e., solve \[\max_{x\in\Delta^{m}}x^{\top}Ay^{*}\] such that \(y^{*}=\sigma_{\lambda}(B^{\top}x)\). We drop the inner optimization problem since the optimal solution to the follower's optimization problem is unique due to strict convexity. We define \[SR_{\lambda}(x):=x^{\top}Ay^{*}\text{ where }y^{*}=\sigma_{\lambda}(B^{\top}x).\] We refer to \(SR_{\lambda}\) as the Stackelberg return under perfect information and bounded follower rationality. ## III Problem Formulation ### _Repeated Bimatrix Stackelberg Games with Inference_ Consider a bimatrix Stackelberg game with mixed strategies that is repeated \(K\) times. However, assume the follower does not know the leader's fixed mixed strategy \(x\). Instead, the follower infers the leader's strategy from observations of the previous interactions. At interaction \(k\), let \(\hat{x}_{k}\) be the _sample mean estimation_ of the leader's strategy. Specifically, if the leader takes actions \(i_{1},\ldots,i_{k-1}\) at the previous \(k-1\) timesteps, \[\left(\hat{x}_{k}\right)_{l}=\frac{\#_{t=1}^{k-1}\left(i_{t}=l\right)}{k-1},\] where \(\left(\hat{x}_{k}\right)_{l}\) is the \(l^{\text{th}}\) element of vector \(\hat{x}_{k}\) and \(\#(\cdot)\) counts the number of times the input is true. Under these assumptions, the follower's strategy \(y_{k}\) at time \(k\) depends on the leader's actions in the previous \(k-1\) timesteps. For this reason, \(y_{k}\) changes over time. 
For example, for a fully rational follower, \(y_{k}^{*}\in\arg\max_{y\in\Delta^{n}}\hat{x}_{k}^{\top}By\). Next, we consider the following formulations of the leader's problem for different levels of follower rationality. **Fully rational follower:** The leader's decision-making problem is to a priori select a strategy \(x^{*}\) that maximizes its expected cumulative return under inference, i.e., assuming that the follower rationally responds to the plug-in sample mean estimator of \(x^{*}\) at each time \(k\). Let \(\mathbf{i}_{k}\), \(\hat{\mathbf{x}}_{k}\), and \(\mathbf{y}_{k}\) be random variables denoting the unrealized versions of \(i_{k}\), \(\hat{x}_{k}\) and \(y_{k}\), respectively. The leader's optimal strategy is \[x^{*}= \arg\max_{x\in\Delta^{m}}\mathbb{E}\left[\sum_{k=1}^{K}\min_{ \mathbf{y}_{k}^{*}}x^{\top}A\mathbf{y}_{k}^{*}\right]\] \[\text{s.t. }\mathbf{y}_{k}^{*}\in\arg\max_{y\in\Delta^{n}}\hat{ \mathbf{x}}_{k}^{\top}By.\] Here, the expectation is over the randomness in the leader's actions \(\mathbf{i}_{1},\ldots,\mathbf{i}_{K}\). The leader solves this decision problem prior to taking any action, meaning their future actions \(\mathbf{i}_{1},\ldots,\mathbf{i}_{K}\) are random variables. Since the follower's estimation \(\hat{\mathbf{x}}_{k}\) is a function of these future actions and the follower's strategy \(\mathbf{y}_{k}^{*}\) is a function of \(\hat{\mathbf{x}}_{k}\), they are both random variables as well and therefore bolded. **Boundedly rational follower:** The leader's decision problem is to find a strategy \(x^{*}\) such that \[x^{*}= \arg\max_{x\in\Delta^{m}}\mathbb{E}\left[\sum_{k=1}^{K}x^{\top}A \mathbf{y}_{k}\right]\] \[\text{s.t. }\mathbf{y}_{k}=\sigma_{\lambda}(B^{\top}\hat{ \mathbf{x}}_{k}).\] Once again, the expectation is over the randomness in the leader's (random) actions \(\mathbf{i}_{1},\ldots,\mathbf{i}_{K}\). In the Stackelberg game with inference, the leader's strategy affects the follower's optimal strategy in two ways, regardless of the level of follower rationality. First, as in the original Stackelberg game formulation, the leader's strategy determines the expected return for different follower actions, i.e., \(x^{\top}B\). This affects which strategies are optimal for the follower. Second, unlike in the perfect information Stackelberg game, the leader's strategy \(x\) modifies the distribution of its empirical action distribution \(\hat{\mathbf{x}}_{k}\) and, consequently, the follower's strategy \(\mathbf{y}_{k}\). A strategy with a high Stackelberg return under perfect information may be highly suboptimal in a Stackelberg game with inference. Different realizations of \(\hat{\mathbf{x}}_{k}\) lead to different solutions for \(\mathbf{y}_{k}\). If \(x\) is poorly inferred by the follower, the follower's strategy \(\mathbf{y}_{k}\) may yield poor returns when simultaneously played with \(x\). In the inference setting, an optimal strategy \(x^{*}\) will strike a balance between having a high Stackelberg return under perfect information and efficiently conveying information about itself to the follower. **Remark 1**.: _Inferability is important in semi-cooperative games where the objectives of the players are weakly positively correlated. Due to the positive correlation between the utility matrices \(A\) and \(B\), it is useful for the leader to be correctly inferred by the follower. 
On the other hand, since there is only a weak correlation between \(A\) and \(B\), i.e., \(A\neq B\), the leader's optimal strategy may still be mixed._ ### _Motivating Example: Autonomous Car and Pedestrian Interaction_ This section describes a motivating example of the interaction between an autonomous car and a pedestrian. Similar scenarios have been considered in [24, 25]. Consider an autonomous car moving in its lane. A pedestrian is dangerously close to the road and aims to cross. The car has the right of way and wants to proceed, as an unnecessary stop is inefficient. However, if the pedestrian decides to cross, the car may need to make a dangerous emergency stop. In the event the pedestrian crosses, they may get fined for jaywalking. The pedestrian and the car must make simultaneous decisions that will determine the outcome. Since the autonomous car's software is fixed prior to deployment, the car's decision is drawn from a fixed strategy that does not change over time. For the pedestrian, crossing has a value \(r_{\mathrm{cr}}^{f}=2\), and potentially getting fined for jaywalking has a value \(r_{\mathrm{jw}}^{f}=-1\). For the car, proceeding without a stop has a value \(r_{\mathrm{pr}}^{l}=2\), and making an emergency stop has a value \(r_{\mathrm{em}}^{l}=-8\). **Scenario 1:** The car stops and the pedestrian waits for the car. The pedestrian's return is \(r_{\mathrm{cr}}^{f}=2\) since the car's stop allows them to cross. The car's return is \(0\) since it does not proceed and stops unnecessarily. **Scenario 2:** The car stops, but the pedestrian crosses before the car yields the right of way. In this case, the pedestrian's return is \(r_{\mathrm{cr}}^{f}+r_{\mathrm{jw}}^{f}=1\) since they cross the road but risk being fined for jaywalking. Once again, the car receives a return of \(0\) since it does not proceed. **Scenario 3:** The car proceeds and the pedestrian waits. The car's return is \(r_{\mathrm{pr}}^{l}=2\) since it makes no unnecessary stops. The pedestrian gets a return of \(0\) since it cannot cross. **Scenario 4:** The car proceeds and the pedestrian crosses. The pedestrian's return is \(r_{\mathrm{cr}}^{f}+r_{\mathrm{jw}}^{f}=1\). Although the car tries to proceed, it must make an emergency stop due to the crossing pedestrian, resulting in a return of \(r_{\mathrm{em}}^{l}=-8\). Assume that the car stops with probability \(p\) and proceeds with probability \(1-p\). If the pedestrian knows the probability \(p\), the pedestrian would wait if \(2p+0(1-p)>1p+1(1-p)\), i.e., \(p>0.5\). Knowing that the pedestrian would wait when \(p>0.5\), the car gets a return of \(0p+2(1-p)\). Knowing that the pedestrian would cross when \(p<0.5\), the car gets a return of \(0p-8(1-p)\). Hence, it is optimal for the car to choose a \(p\) such that \(p>0.5\) and \(p\approx 0.5\). While such a strategy is optimal and has a return of \(\approx 1\) for the car, it relies on the fact that the pedestrian has perfect information about the car's strategy. Such a strategy may not be optimal if the pedestrian does not know \(p\) and relies on observations. Consider a scenario where the pedestrian and car will interact a certain number of times. The pedestrian estimates the car's fixed strategy using observations from previous interactions. If the car stopped in most of the previous interactions, the pedestrian would expect the car to stop in the next interaction. Knowing that the pedestrian relies on observations, the car should pick an easily inferable strategy.
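The scenario payoffs above fully determine the car-pedestrian bimatrix game (rows: stop/proceed; columns: wait/cross). A small sketch, reusing `stackelberg_return` from the earlier snippet, reproduces the perfect-information analysis; the variable names are illustrative.

```python
import numpy as np

# Car (leader) and pedestrian (follower) returns from Scenarios 1-4.
A_car = np.array([[0.0,  0.0],     # stop:    (wait, cross)
                  [2.0, -8.0]])    # proceed: (wait, cross)
B_ped = np.array([[2.0,  1.0],
                  [0.0,  1.0]])

for p in [0.0, 0.25, 0.49, 0.51, 0.75, 1.0]:
    x = np.array([p, 1.0 - p])     # [Pr(stop), Pr(proceed)]
    ret = stackelberg_return(x, A_car, B_ped)
    print(f"p = {p:.2f}: perfect-information car return = {ret:.2f}")
# p just above 0.5 gives a return close to 1, as discussed above.
```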
If the pedestrian has a good estimate \(\hat{p}\) of the car's strategy, the pedestrian will act optimally with respect to the car's actual strategy. In turn, the car will get a return that is close to the perfect information case. For example, consider that \(p=1\). In this case, the pedestrian has the correct estimate \(\hat{p}=1\) after a single interaction, and the car will get a return of \(0\) in the subsequent interactions. If the car's strategy is not easily inferable, then the car may suffer from an _inferability loss_. For example, if \(p\) is such that \(p>0.5\) and \(p\approx 0.5\), the car has an expected return of \(\approx-1.5\) in the second interaction. This is significantly lower compared to the expected return of \(1\) in the perfect information case. This is because the pedestrian's estimate will be \(\hat{p}=0\) with probability \(1-p\approx 0.5\). In those events, the pedestrian will cross, and the car will get a return of \(-8\) if it proceeds. Overall, a strategy that maximizes the car's expected return over a finite number of interactions should take the pedestrian's estimation errors into account. ## IV Performance Bounds Under Inference In this section, we compare the leader's expected utility under repeated interactions with inference against its expected utility under repeated interactions with perfect information. We define \[IR_{k}(x):=\mathbb{E}\left[\min_{\mathbf{y}_{k}^{*}}x^{\top}A\mathbf{y}_{k}^{*}\right]\quad\text{s.t. }\mathbf{y}_{k}^{*}\in\arg\max_{y\in\Delta^{n}}\hat{\mathbf{x}}_{k}^{\top}By\] that is the leader's expected (conservative) return under inference against a fully rational follower at interaction \(k\). Similarly, we define \[IR_{k,\lambda}(x):=\mathbb{E}\left[x^{\top}A\mathbf{y}_{k}\right]\quad\text{s.t. }\mathbf{y}_{k}=\sigma_{\lambda}(B^{\top}\hat{\mathbf{x}}_{k})\] that is the leader's expected return under inference against a boundedly rational follower at the \(k^{\text{th}}\) interaction. The leader's expected return at the first interaction is arbitrary since the follower does not have any action samples. Hence, we are interested in analyzing the expected cumulative return for interactions \(k=2,\ldots,K\). Due to the linearity of expectation, the expected cumulative return can be represented as a sum of expected returns of every interaction. In the fully rational follower setting \[\sum_{k=2}^{K}IR_{k}(x)=\mathbb{E}\left[\sum_{k=2}^{K}\min_{\mathbf{y}_{k}^{*}}x^{\top}A\mathbf{y}_{k}^{*}\right]\] where \(\mathbf{y}_{k}^{*}\in\arg\max_{y\in\Delta^{n}}\hat{\mathbf{x}}_{k}^{\top}By\), and in the boundedly rational follower setting \[\sum_{k=2}^{K}IR_{k,\lambda}(x)=\mathbb{E}\left[\sum_{k=2}^{K}x^{\top}A\mathbf{y}_{k}\right]\] where \(\mathbf{y}_{k}=\sigma_{\lambda}(B^{\top}\hat{\mathbf{x}}_{k})\). ### _Achievability Bound for a Boundedly Rational Follower_ The follower uses the sample mean estimator \(\hat{x}_{k}\) to infer the leader's strategy \(x\). As \(k\to\infty\), the estimate converges to the true distribution \(x\). Given that \(\sigma_{\lambda}\) is a continuous mapping, the leader's expected utility under inference, i.e., the observation return, converges to the expected utility in the perfect information setting, i.e., the Stackelberg return. However, with a finite number of interactions these returns are not necessarily the same, and the leader may suffer from an _inferability loss_.
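A Monte Carlo sketch of \(IR_{k,\lambda}(x)\): the leader's actions are sampled from \(x\), the follower forms the sample-mean estimate and plays the Boltzmann response. It reuses the `softmax` helper from the notation sketch above; the function name and default parameters are illustrative assumptions.

```python
import numpy as np

def inference_returns(x, A, B, lam, K, runs=2000, seed=None):
    """Monte Carlo estimate of IR_{k,lambda}(x) for k = 2, ..., K."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    m = len(x)
    totals = np.zeros(K + 1)
    for _ in range(runs):
        actions = rng.choice(m, size=K, p=x)     # leader's realized actions i_1, ..., i_K
        counts = np.zeros(m)
        for k in range(2, K + 1):
            counts[actions[k - 2]] += 1          # follower has observed i_1, ..., i_{k-1}
            x_hat = counts / (k - 1)             # sample-mean estimate of x
            y_k = softmax(B.T @ x_hat, lam)      # Boltzmann response to the estimate
            totals[k] += x @ A @ y_k             # leader's expected return at interaction k
    return totals[2:] / runs                     # [IR_{2,lam}(x), ..., IR_{K,lam}(x)]
```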
The following result shows that the inferability loss is upper bounded by a function of the trace of the covariance matrix of the leader's strategy. **Theorem 1**.: _Define \(d^{f}=\max_{i,j}B_{i,j}-\min_{i,j}B_{i,j}\) and \(d^{l}=\max_{i,j}A_{i,j}-\min_{i,j}A_{i,j}\). We have_ \[(K-1)SR_{\lambda}(x)-\sum_{k=2}^{K}IR_{k,\lambda}(x)\leq\sum_{k=2}^{K}\frac{d^{l}d^{f}\lambda n^{\nicefrac{3}{2}}\sqrt{m}\nu(x)}{4\sqrt{(k-1)}}.\] **Remark 2**.: _There are \(\frac{(k+m-2)!}{(k-1)!(m-1)!}\approx(k-1)^{m-1}\) (assuming \(k\gg m\)) different values of \(\hat{x}_{k}\). Computing the exact value of \(IR\) may require evaluating the expected return under all possible realizations of \(\hat{x}_{k}\)._ The cumulative inferability loss grows sublinearly, i.e., \[\sum_{k=2}^{K}\frac{d^{l}d^{f}\lambda n^{\nicefrac{3}{2}}\sqrt{m}\nu(x)}{4\sqrt{(k-1)}}=\mathcal{O}\left(\sqrt{K}\lambda\nu(x)\right).\] As the leader's strategy becomes deterministic, i.e., \(\nu(x)\to 0\), the inferability loss vanishes to \(0\). In the extreme case where the leader's strategy is deterministic, \(\nu(x)=0\), the leader does not suffer from an inferability loss. As the follower becomes irrational, i.e., \(\lambda\to 0\), the inferability loss vanishes to \(0\), and when the follower is fully irrational, \(\lambda=0\), the leader does not suffer from an inferability loss since the follower's strategy is uniformly random and does not depend on observations. The leader's optimal strategy under inference depends on various factors. Such a strategy should strike a balance between having a high Stackelberg return under perfect information and having a minimal inferability loss, i.e., efficiently conveying information about itself to the follower. We use a series of lemmas to prove Theorem 1. The proofs for the lemmas are given in the appendix. **Lemma 1**.: \[\|\sigma_{\lambda}(B^{\top}x)-\sigma_{\lambda}(B^{\top}\hat{x}_{k})\|\leq\frac{\lambda d^{f}n\sqrt{m}}{2}\|\hat{x}_{k}-x\|. \tag{1}\] **Lemma 2**.: _Let \(y_{k}=\sigma_{\lambda}(B^{\top}\hat{x}_{k})\) and \(y=\sigma_{\lambda}(B^{\top}x)\)._ \[|x^{\top}Ay-x^{\top}Ay_{k}|\leq\frac{\sqrt{n}d^{l}}{2}\|y-y_{k}\|.\] **Lemma 3**.: \[\mathbb{E}[\|\hat{\mathbf{x}}_{k}-x\|^{2}]=\frac{\nu(x)^{2}}{k-1}.\] Proof of Theorem 1.: Let \(y_{k}=\sigma_{\lambda}(B^{\top}\hat{x}_{k})\) and \(y=\sigma_{\lambda}(B^{\top}x)\). We combine Lemmas 1 and 2 to get \[x^{\top}Ay_{k}\geq x^{\top}Ay-\frac{n^{\nicefrac{3}{2}}\sqrt{m}d^{l}d^{f}\lambda}{4}\|\hat{x}_{k}-x\|. \tag{2}\] Let \(f(t)\) be the p.d.f. of \(\|\hat{\mathbf{x}}_{k}-x\|\). We have \[IR_{k,\lambda}(x)=\mathbb{E}\left[x^{\top}A\mathbf{y}_{k}|\mathbf{y}_{k}=\sigma_{\lambda}(B^{\top}\hat{\mathbf{x}}_{k})\right] \tag{3a}\] \[=\int_{0}^{\infty}\mathbb{E}\left[x^{\top}A\mathbf{y}_{k}|\mathbf{y}_{k}=\sigma_{\lambda}(B^{\top}\hat{\mathbf{x}}_{k}),\|\hat{\mathbf{x}}_{k}-x\|=t\right]f(t)dt \tag{3b}\] \[\geq\int_{0}^{\infty}\left(x^{\top}Ay-\frac{t\lambda d^{l}d^{f}n^{\nicefrac{3}{2}}\sqrt{m}}{4}\right)f(t)dt \tag{3c}\] \[=SR_{\lambda}(x)-\frac{\lambda d^{l}d^{f}n^{\nicefrac{3}{2}}\sqrt{m}}{4}\int_{0}^{\infty}tf(t)dt \tag{3d}\] where (3b) is due to the definition of expectation, (3c) is due to (2), and (3d) is due to the definition of \(SR_{\lambda}\) and \(\int_{0}^{\infty}f(t)dt=1\). Note that \[\mathbb{E}[\|\hat{\mathbf{x}}_{k}-x\|^{2}]=\int_{0}^{\infty}t^{2}f(t)dt\geq\left(\int_{0}^{\infty}tf(t)dt\right)^{2}\] due to Jensen's inequality.
Using this inequality and Lemma 3 in (3d), we get \[IR_{k,\lambda}(x)\geq SR_{\lambda}(x)-\frac{\lambda d^{l}d^{f}n^{\nicefrac{3}{2}}\sqrt{m}}{4}\sqrt{\left(\int_{0}^{\infty}tf(t)dt\right)^{2}}\] \[\geq SR_{\lambda}(x)-\frac{\lambda d^{l}d^{f}n^{\nicefrac{3}{2}}\sqrt{m}}{4}\sqrt{\int_{0}^{\infty}t^{2}f(t)dt}\] \[=SR_{\lambda}(x)-\frac{\lambda d^{l}d^{f}n^{\nicefrac{3}{2}}\sqrt{m}\nu(x)}{4\sqrt{(k-1)}}.\] Summation from \(k=2\) to \(K\) yields the desired result. ### _Converse Bound for a Fully Rational Follower_ Theorem 1 shows that with a boundedly rational follower, the gap between the leader's expected return in the perfect information setting and in the inference setting, \(SR_{\lambda}(x)-IR_{k,\lambda}(x)\), is at most of the order \(\mathcal{O}(\nicefrac{1}{\sqrt{k}})\) at interaction \(k\). In other words, after \(\mathcal{O}(\nicefrac{1}{\epsilon^{2}})\) interactions, we have \(SR_{\lambda}(x)-IR_{k,\lambda}(x)\leq\epsilon\). In this section, we give an example for the fully rational follower setting that matches the upper bound: \(\mathcal{O}(\nicefrac{1}{\epsilon^{2}})\) interactions are required to achieve \(SR(x)-IR_{k}(x)\leq\epsilon\). We consider \[A=\begin{bmatrix}0&0\\ 1&0\end{bmatrix},\quad B=\begin{bmatrix}2&1\\ 0&1\end{bmatrix}. \tag{4}\] For these choices of \(A\) and \(B\), we have \[v^{*}=\sup_{x\in\Delta^{m}}\ \min_{y^{*}}x^{\top}Ay^{*}\ \ \text{s.t.}\ \ y^{*}\in\arg\max_{y\in\Delta^{n}}x^{\top}By,\] and \(v^{*}=\nicefrac{1}{2}\). **Proposition 1**.: _Let \(A\) and \(B\) be as defined in (4). For every \(\epsilon\in(0,1/2)\) and \(x\in\Delta^{2}\) such that \(SR(x)\geq\nicefrac{1}{2}-\epsilon\), if_ \[k\leq\frac{1-20\epsilon+132\epsilon^{2}}{32\epsilon^{2}},\] _then_ \[SR(x)-IR_{k}(x)\geq\epsilon.\] For small enough \(\epsilon\), the term \(\nicefrac{1}{32\epsilon^{2}}\) dominates the other terms. If there are \(o(\nicefrac{1}{\epsilon^{2}})\) interactions, then the leader's expected return under inference is at least \(\epsilon\) worse than its return under perfect information. The strategies with near-optimal Stackelberg returns, i.e., \(SR(x)\geq\nicefrac{1}{2}-\epsilon\), will have poor returns under inference since they are close to the decision boundary where the follower abruptly changes its strategy. The returns for the strategies close to the boundary will be poor since the empirical distribution may be on the other side of the decision boundary. Proof of Proposition 1.: We first define some notation. Let \(E^{1}=\{x\in\Delta^{2}\,|\,x_{1}-x_{2}>0\}\) and \(E^{2}=\Delta^{2}\setminus E^{1}\). Note that \(E^{1}\) is the set of leader distributions for which action \(1\) is optimal for the follower, and \(E^{2}\) is the set of leader distributions for which action \(2\) is optimal for the follower. We consider two strategies for the leader \[x=[\nicefrac{1}{2}+\epsilon,\nicefrac{1}{2}-\epsilon]\text{ and }z=[\nicefrac{1}{2}-\epsilon,\nicefrac{1}{2}+\epsilon].\] We prove the statement in four steps: 1. Show that the expected return of \(x\) under perfect information \(SR(x)\) is lower bounded. 2. Show that the expected return of \(z\) under inference \(IR_{k}(z)\) is upper bounded. 3. Show that the expected returns of \(z\) and \(x\) under inference, \(IR_{k}(z)\) and \(IR_{k}(x)\), are close. 4. Combine the above results. **Step 1:** Note that \(x\in E^{1}\). Consequently, \(SR(x)=\nicefrac{1}{2}-\epsilon\), which trivially implies \[SR(x)\geq\nicefrac{1}{2}-\epsilon. \tag{5}\]
**Step 2:** Next, we upper bound \(IR_{k}(z)\). We have \[IR_{k}(z)=\Pr(\textbf{i}_{k}=1|z)\Pr(\textbf{j}_{k}=1|z)A_{1,1}\] \[+\Pr(\textbf{i}_{k}=1|z)\Pr(\textbf{j}_{k}=2|z)A_{1,2}\] \[+\Pr(\textbf{i}_{k}=2|z)\Pr(\textbf{j}_{k}=1|z)A_{2,1}\] \[+\Pr(\textbf{i}_{k}=2|z)\Pr(\textbf{j}_{k}=2|z)A_{2,2}\] \[=(\nicefrac{1}{2}+\epsilon)\Pr(\textbf{j}_{k}=1|z)\] since \(A_{1,1}=A_{1,2}=A_{2,2}=0\), \(A_{2,1}=1\), and \(\Pr(\textbf{i}_{k}=2|z)=\nicefrac{1}{2}+\epsilon\). We note that \(\textbf{j}_{k}=1\) if and only if \(\hat{\textbf{z}}_{k}\in E^{1}\), i.e., the empirical distribution of the leader's actions belongs to \(E^{1}\). It implies that \(\Pr(\hat{\textbf{z}}_{k}\in E^{1}|z)=\Pr(\textbf{j}_{k}=1|z)\). Note that \(\Pr(\hat{\textbf{z}}_{k}\in E^{1}|z)\leq\nicefrac{1}{2}\) since \(z\) has more bias towards action \(2\). Consequently, we have \[IR_{k}(z)\leq\nicefrac{1}{2}(\nicefrac{1}{2}+\epsilon)=\nicefrac{1}{4}+\nicefrac{\epsilon}{2}. \tag{6}\] **Step 3:** Finally, we upper bound \(|IR_{k}(x)-IR_{k}(z)|\). Note that \[IR_{k}(x)-IR_{k}(z)\] \[=\Pr(\textbf{i}_{k}=2|x)\Pr(\textbf{j}_{k}=1|x)-\Pr(\textbf{i}_{k}=2|z)\Pr(\textbf{j}_{k}=1|z)\] \[=(\nicefrac{1}{2}-\epsilon)\Pr(\textbf{j}_{k}=1|x)-(\nicefrac{1}{2}+\epsilon)\Pr(\textbf{j}_{k}=1|z)\] \[\leq\nicefrac{1}{2}(\Pr(\textbf{j}_{k}=1|x)-\Pr(\textbf{j}_{k}=1|z)). \tag{7}\] Let \(\mathcal{D}_{\hat{\textbf{z}}_{k}}\) and \(\mathcal{D}_{\hat{\textbf{x}}_{k}}\) be the distributions of \(\hat{\textbf{z}}_{k}\) and \(\hat{\textbf{x}}_{k}\), respectively. Also, let \(KL(\mathcal{D}^{1}||\mathcal{D}^{2})\) denote the KL divergence between distributions \(\mathcal{D}^{1}\) and \(\mathcal{D}^{2}\), and \(Be(p)\) denote the Bernoulli random variable with parameter \(p\). We have \[KL(\mathcal{D}_{\hat{\textbf{z}}_{k}}||\mathcal{D}_{\hat{\textbf{x}}_{k}})=(k-1)KL(Be(\nicefrac{1}{2}-\epsilon)||Be(\nicefrac{1}{2}+\epsilon))\] since the leader's actions are drawn from \(z\) (and from \(x\)) at every interaction independently. Using the upper bound given in Theorem 1 of [26], we get \[KL(\mathcal{D}_{\hat{\textbf{z}}_{k}}||\mathcal{D}_{\hat{\textbf{x}}_{k}})\leq\frac{16\epsilon^{2}(k-1)}{1-4\epsilon^{2}}.\] Since \(\textbf{j}_{k}\) is a function of the empirical distribution \(\hat{\textbf{z}}_{k}\) (\(\hat{\textbf{x}}_{k}\)), using the data processing inequality [27], we get \[KL(Be(\Pr(\textbf{j}_{k}=1|z))||Be(\Pr(\textbf{j}_{k}=1|x)))\leq KL(\mathcal{D}_{\hat{\textbf{z}}_{k}}||\mathcal{D}_{\hat{\textbf{x}}_{k}}).\] By Pinsker's inequality and the above inequalities, \[|\Pr(\textbf{j}_{k}=1|z)-\Pr(\textbf{j}_{k}=1|x)|\leq\sqrt{\frac{8\epsilon^{2}(k-1)}{1-4\epsilon^{2}}}.\] Combining this with (7), we get \[IR_{k}(x)-IR_{k}(z)\leq\sqrt{\frac{2\epsilon^{2}(k-1)}{1-4\epsilon^{2}}}. \tag{8}\] **Step 4:** Combining (5), (6), and (8) yields \[SR(x)-IR_{k}(x)\geq\frac{1}{4}-\frac{3\epsilon}{2}-\sqrt{\frac{2\epsilon^{2}(k-1)}{1-4\epsilon^{2}}}.\] The bound is a monotone function of \(k\). Setting the right-hand side to \(\epsilon\) and solving for \(k\) yields the desired result. ## V Numerical Examples In this section, we evaluate the effect of inference on repeated bimatrix Stackelberg games with boundedly rational followers. As an example, we consider the aforementioned car-pedestrian interaction and randomly generated bimatrix games.
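The simulations described next can be read against Theorem 1. A small helper for evaluating the right-hand side of the achievability bound, under the definitions above, might look as follows (an illustrative sketch, not code from the paper):

```python
import numpy as np

def theorem1_bound(x, A, B, lam, K):
    """Right-hand side of Theorem 1: an upper bound on the cumulative inferability loss."""
    x = np.asarray(x, dtype=float)
    m, n = A.shape
    d_l = A.max() - A.min()                      # range of the leader's utilities
    d_f = B.max() - B.min()                      # range of the follower's utilities
    nu = np.sqrt(np.sum(x * (1.0 - x)))          # stochasticity level nu(x)
    ks = np.arange(2, K + 1)
    return float(np.sum(d_l * d_f * lam * n**1.5 * np.sqrt(m) * nu
                        / (4.0 * np.sqrt(ks - 1))))
```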
For clarity of presentation, we plot the average return \(\frac{1}{K}\sum_{k=2}^{K}IR_{k,\lambda}(x)\), which is the expected cumulative return up to interaction \(K\) divided by \(K\). We approximate the expectation with repeated simulations. _Car-Pedestrian Interactions_: We consider the bimatrix game presented in Table I. We simulate the game play under inference for \(100\) interactions with a rationality constant \(\lambda=100\). The car's strategy is determined by \(p\), i.e., the probability that the car stops, and for \(\lambda=100\), the optimal \(p\) is \(0.53\) in the perfect information setting. We repeat the simulation \(10,000\) times and report the leader's average expected return for different values of \(p\). Results are shown in Fig. 1. The optimal strategy for the car in the perfect information setting, i.e., \(p=0.53\), would achieve the highest return in the long run. However, after 100 interactions, this strategy is still underperforming compared to more deterministic strategies. This is because a small error in the pedestrian's estimation \(\hat{\textbf{p}}_{k}\) results in large changes in the pedestrian's strategy, demonstrating the impact inference has on the leader's return. On the other hand, as we expected, the strategies with higher stopping probabilities achieve higher transient returns: more deterministic strategies are easier for the pedestrian to infer at small time horizons \(K\), and any error in \(\hat{\textbf{p}}_{k}\) results in only small changes to the pedestrian's strategy. _Randomly Generated Bimatrix Games_: We evaluate the performance under inference for randomly generated bimatrix games when the follower has bounded rationality. From the achievability bound given in Theorem 1, \[(K-1)SR_{\lambda}(x)-c\,\nu(x)\leq\sum_{k=2}^{K}IR_{k,\lambda}(x)\] for some constant \(c\) depending on \(K\), \(A\), and \(B\). We use \(\nu\) as a regularizer and optimize the bound for fixed values of \(c\): \[x^{*}(c)=\arg\max_{x\in\Delta^{m}}SR(x)-c\,(\nu(x))^{2},\] and compare the performance of leader strategies for different values of \(c\). We replace \(\nu\) with \(\nu^{2}\) in the optimization problem since the gradients of \(\nu^{2}\) are Lipschitz continuous. We note that even when \(c=0\), this is a nonconvex optimization problem. To find a maximum, we use gradient descent with decaying stepsize. We use the leader's optimal strategy from the Stackelberg game with a fully rational follower as the starting point for the gradient descent. In this example, we randomly generate bimatrix games. For each bimatrix game, the entries of the leader's utility matrix \(A\) are uniformly randomly distributed between \(0\) and \(1\). The follower's utility matrix is \(B=\nicefrac{A}{2}+\nicefrac{C}{2}\), where \(C\) is a uniformly randomly distributed matrix between \(0\) and \(1\). This construction makes \(A\) and \(B\) weakly positively correlated, highlighting the importance of mixed strategies and inferability as explained in Remark 1. We randomly generate \(10,000\) \(4\times 4\) bimatrix games. For each random bimatrix game, we find the leader's strategy \(x^{*}(c)\) for \(c=0,1,10\), and \(100\). For each bimatrix game, we simulate play for \(100\) interactions with rationality constant \(\lambda=100\). We repeat the simulations 100 times, and the leader's return is averaged at each interaction over these simulations. Then, the leader's average return until interaction \(k\) for each bimatrix game is averaged at each interaction \(k\) over all bimatrix games.
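A simplified sketch of this regularized search, using projected gradient ascent with a decaying step size and finite-difference gradients. It reuses `stackelberg_return_br` from the earlier snippet; note that the displayed objective uses \(SR(x)\), while this sketch substitutes the smooth surrogate \(SR_{\lambda}(x)\) so that numerical gradients are informative. The projection routine, step-size schedule, and gradient approximation are assumptions of the sketch rather than the authors' exact implementation.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of a vector onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / np.arange(1, len(v) + 1) > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0.0)

def regularized_strategy(A, B, lam, c, x0, iters=500, eps=1e-6):
    """Maximize SR_lambda(x) - c * nu(x)^2 over the simplex (finite differences)."""
    def objective(x):
        return stackelberg_return_br(x, A, B, lam) - c * np.sum(x * (1.0 - x))
    x = np.asarray(x0, dtype=float)
    for t in range(1, iters + 1):
        grad = np.array([(objective(x + eps * e) - objective(x - eps * e)) / (2 * eps)
                         for e in np.eye(len(x))])
        x = project_simplex(x + grad / np.sqrt(t))   # decaying step size 1 / sqrt(t)
    return x
```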
Results are shown in Fig. 2. In these simulations, higher regularization constants correspond to more inferable (less stochastic) strategies, as more weight is given to the stochasticity level of a strategy. The optimal strategies for the perfect information setting \(x^{*}(c=0)\) (the optimal strategy with no stochasticity regularization) achieves a higher average expected return in the long run (after \(45\) interactions) since the follower's estimation accuracy improves with more interactions. However, these strategies still suffer inferability loss after \(100\) interactions. For the first \(45\) interactions, the regularization constant \(c=100\) yields higher average returns, and the average return reaches its final value even after the first interaction since the generated strategies are deterministic and estimated by the follower perfectly even with a single sample. ## VI Conclusions When interacting with other non-competitive agents, an agent should have an inferable behavior to inform others about intentions effectively. We model the inferability problem using a repeated bimatrix Stackelberg game where the follower infers the leader's strategy via observation from previous interactions. We show that in the inference setting, the leader may suffer from an inferability loss compared to the perfect information setting. However, this loss is upper bounded by a function that depends on the stochasticity level of the leader's strategy. The bound and experimental results show that to maximize the transient returns, the leader may be better off using a less stochastic strategy compared to the strategy that is optimal in the perfect information setting. ## Appendix: Proofs ### Proof of Lemma 1 We first define \[q^{l}=\max_{i,j}A_{ij}+\min_{i,j}A_{ij},\quad q^{f}\quad=\max_{i,j}B_{ij}+\min _{i,j}B_{ij}.\] Let \(J\) be a matrix of ones. We note that \[\sigma_{\lambda}(B^{\top}\hat{x}_{k}) =\sigma_{\lambda}\left(\left(B-\nicefrac{{Jq^{f}}}{{2}}\right)^{ \top}\hat{x}_{k}\right)\] \[\sigma_{\lambda}(B^{\top}x) =\sigma_{\lambda}\left(\left(B-\nicefrac{{Jq^{f}}}{{2}}\right)^{ \top}x\right)\] Fig. 1: The car’s average return in the pedestrian-car example. Solid lines represent the average return for different strategies where \(p\) is the probability of the car stopping. Dashed lines represent the average return per interaction under perfect information, i.e., \(x^{\top}A\sigma_{\lambda}(B^{\top}x)\) for \(x=[p,1-p]\). Fig. 2: The leader’s average return for the randomly generated 4x4 bimatrix games. Solid lines represent the average return for the bound’s local maxima for different values of the regularization constant \(c\). Dashed lines represent the average return per interaction under perfect information, i.e., \((x^{*}(c)))^{\top}A\sigma_{\lambda}(B^{\top}x^{*}(c))\). since subtracting the same constant from all elements does not change the result of the softmax function. Let \(z_{i}\) be the \(i^{\text{th}}\) column of \(B-\nicefrac{{Jq^{f}}}{{2}}\). 
We have \[\|\sigma_{\lambda}(B^{\top}x)-\sigma_{\lambda}(B^{\top}\hat{x}_{k})\|\] \[=\left\|\sigma_{\lambda}\left(\left(B-\frac{Jq^{f}}{2}\right)^{\top}x\right)-\sigma_{\lambda}\left(\left(B-\frac{Jq^{f}}{2}\right)^{\top}\hat{x}_{k}\right)\right\|\] \[\leq\lambda\left\|\left(B-\frac{Jq^{f}}{2}\right)^{\top}x-\left(B-\frac{Jq^{f}}{2}\right)^{\top}\hat{x}_{k}\right\| \tag{9a}\] \[\leq\lambda\sum_{i=1}^{n}\|x-\hat{x}_{k}\|\|z_{i}\| \tag{9b}\] \[\leq\|x-\hat{x}_{k}\|\,\frac{\lambda d^{f}n\sqrt{m}}{2} \tag{9c}\] where (9a) is due to the \(\lambda\)-Lipschitzness of \(\sigma_{\lambda}\), (9b) is due to the Cauchy-Schwarz inequality, and (9c) is due to \(\max_{i,j}|z_{i,j}|\leq\nicefrac{d^{f}}{2}\), which implies \(\|z_{i}\|\leq\nicefrac{\sqrt{m}d^{f}}{2}\). Proof of Lemma 2.: Let \(J\) be a matrix of ones, and \(z_{i}\) be the \(i^{\text{th}}\) column of \(A-\nicefrac{Jq^{l}}{2}\). We have \[|x^{\top}Ay_{k}-x^{\top}Ay|\] \[=\left|x^{\top}Ay_{k}-x^{\top}Ay-x^{\top}\frac{q^{l}}{2}(Jy_{k}-Jy)\right| \tag{10a}\] \[\leq\left\|x^{\top}\left(A-\frac{q^{l}}{2}J\right)\right\|\|y_{k}-y\| \tag{10b}\] \[\leq\max_{x^{\prime}\in\Delta^{m}}\left(\sum_{i}^{n}\left((x^{\prime})^{\top}z_{i}\right)^{2}\right)^{1/2}\|y_{k}-y\|\] \[\leq\left(\sum_{i}^{n}\max_{x^{\prime}\in\Delta^{m}}\left((x^{\prime})^{\top}z_{i}\right)^{2}\right)^{1/2}\|y_{k}-y\|\] \[\leq\left(\frac{n(d^{l})^{2}}{4}\right)^{1/2}\|y_{k}-y\|=\frac{\sqrt{n}d^{l}}{2}\|y_{k}-y\| \tag{10c}\] where (10a) is due to \(Jy=Jy_{k}\), (10b) is due to the Cauchy-Schwarz inequality, and (10c) is due to \(\max_{j}|z_{ij}|\leq\nicefrac{d^{l}}{2}\). Proof of Lemma 3.: Due to the linearity of expectation, we have \(\mathbb{E}[\|\hat{\mathbf{x}}_{k}-x\|^{2}]=\sum_{i=1}^{m}\mathbb{E}[((\hat{\mathbf{x}}_{k})_{i}-(x)_{i})^{2}]\). Note that \(\mathbf{i}_{1},\dots,\mathbf{i}_{k-1}\) are sampled independently from the distribution \(x\), and \(\Pr(\mathbf{i}_{t}=i|x)=(x)_{i}\) for \(t=1,\dots,k-1\). Consequently, \((k-1)(\hat{\mathbf{x}}_{k})_{i}\) is a binomial random variable with parameters \(k-1\) and \((x)_{i}\), and satisfies \[\mathbb{E}[((k-1)(\hat{\mathbf{x}}_{k})_{i}-(k-1)(x)_{i})^{2}]=(k-1)(x)_{i}(1-(x)_{i}).\] Due to this fact, we have \[\mathbb{E}[\|\hat{\mathbf{x}}_{k}-x\|^{2}]=\sum_{i=1}^{m}\frac{(x)_{i}(1-(x)_{i})}{k-1}=\frac{\nu(x)^{2}}{k-1}.\]
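A quick numerical sanity check of Lemma 3 (an illustrative, self-contained sketch; the identity is the multinomial covariance trace divided by \(k-1\)):

```python
import numpy as np

def check_lemma3(x, k, runs=20000, seed=0):
    """Compare the empirical E||x_hat_k - x||^2 with nu(x)^2 / (k - 1)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    m = len(x)
    errs = np.empty(runs)
    for r in range(runs):
        acts = rng.choice(m, size=k - 1, p=x)              # i_1, ..., i_{k-1} ~ x
        x_hat = np.bincount(acts, minlength=m) / (k - 1)   # sample-mean estimate
        errs[r] = np.sum((x_hat - x) ** 2)
    return errs.mean(), float(np.sum(x * (1.0 - x)) / (k - 1))
```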
When interacting with other non-competitive decision-making agents, an autonomous agent needs inferable behavior: its actions should convey its intent and strategy. For example, an autonomous car's strategy should be inferable by pedestrians through their interactions with the car. We model this inferability problem as a repeated bimatrix Stackelberg game in which a leader and a follower interact repeatedly. The leader uses a fixed, potentially mixed strategy, while the follower, who does not know the leader's strategy, responds dynamically based on the leader's past actions. In this observation-based setting, the leader may suffer an inferability loss, i.e., a drop in performance compared to the setting in which the follower is given perfect information about the leader's strategy. We upper bound this inferability loss as a function of the number of interactions and the stochasticity level of the leader's strategy.
2309.15660
Enhanced Frequency Containment Reserve Provision from Battery Hybridized Hydropower Plants: Theory and Experimental Validation
This paper presents a solution to address wear and tear of Run-of-River (RoR) Hydropower Plants (HPPs) providing enhanced Frequency Containment Reserve (FCR). In this respect, the study proposes the integration of a Battery Energy Storage System (BESS) with RoR HPPs controlled by a double-layer Model Predictive Control (MPC). The upper layer MPC acts as a state of energy manager for the BESS, employing a forecast of the required regulating energy for the next hour. The lower-layer MPC optimally allocates the power set-point between the turbine and the BESS. Reduced-scale experiments are performed on a one-of-a-kind testing platform to validate the proposed MPC-based control, considering a comparison with different control strategies and different BESS sizes. The results demonstrate superior performance of the proposed framework, compared to simpler techniques like dead-band control or to the standalone RoR scenario, leading to improved FCR provision, reduced servomechanism stress, and extended hydropower asset lifespan.
Francesco Gerini, Elena Vagnoni, Martin Seydoux, Rachid Cherkaoui, Mario Paolone
2023-09-27T13:49:13
http://arxiv.org/abs/2309.15660v2
# Enhanced Frequency Containment Reserve Provision from Battery Hybridized Hydropower Plants: Theory and Experimental Validation ###### Abstract This paper presents a solution to address wear and tear of _Run-of-River_ (RoR) _Hydropower Plants_ (HPPs) providing enhanced _Frequency Containment Reserve_ (FCR). In this respect, the study proposes the integration of a _Battery Energy Storage System_ (BESS) with RoR HPPs controlled by a double-layer _Model Predictive Control_ (MPC). The upper layer MPC acts as a state of energy manager for the BESS, employing a forecast of the required regulating energy for the next hour. The lower-layer MPC optimally allocates the power set-point between the turbine and the BESS. Reduced-scale experiments are performed on a one-of-a-kind testing platform to validate the proposed MPC-based control considering a comparison with different control strategies and different BESS sizes. The results demonstrate superior performance of the proposed framework, compared to simpler techniques like dead-band control or to the standalone RoR scenario, leading to improved FCR provision, reduced servomechanism stress, and extended hydropower asset lifespan. Battery Energy Storage System, Frequency Containment Reserve, Hydropower, Model Predictive Control, Run-of-River Power Plant. ## I Introduction As widely recognized, hydropower plants are renewable energy assets that play a crucial role in providing fundamental ancillary grid services, such as _Frequency Containment Reserve_ (FCR), which have become increasingly important due to the decommissioning of dispatchable thermal power plants. Part of the FCR reserve is provided by _Run-of-River_ (RoR) power plants [1], accounting for 5.94 % of the total generated electricity in the ENTSOE area in 2022 [2]. The need for continuous power regulation impacts the lifetime of hydroelectric assets [3]. RoR _Hydropower Plants_ (HPP) are often equipped with double-regulated Kaplan turbines (i.e., machines able to control both guide vane and blade opening angles), which guarantee high efficiency over a wide range of water discharge and head conditions. In the case of Kaplan turbines, the lifetime of the servomechanism that controls the movement of the blades is significantly impacted by continuous regulation. Continuous movements can stress the servomechanisms, leading to increased wear and tear, potential mechanical failures, and reduced overall turbine life [4]. To address these challenges, this paper focuses on the hybridization of RoR HPP with _Battery Energy Storage Systems_ (BESS) to enhance FCR provision and extend hydropower asset lifetime. This approach has been gathering increasing interest in the literature [5, 6, 7, 8]. However, despite this interest, many of the existing contributions primarily suggest simple control techniques based on dead-band control or fuzzy logic [7, 5]. Others discuss HPP-BESS hybridization for other applications, such as penstock fatigue reduction in medium-head HPPs [8]. Moreover, most of the above-mentioned contributions are only simulation-based or offer only very limited experimental validation [6]. For this reason, the scope of this paper is to propose and experimentally validate an optimal control technique for hybrid RoR HPPs operating under a daily dispatch plan that provide FCR with a fixed droop characteristic. In particular, we present a double-layer _Model Predictive Control_ (MPC) to drive the hybrid system.
The upper layer MPC (slower and farsighted) ensures the continuous operation of the BESS, by acting as _State of Energy_ (SOE) manager, leveraging a forecast of the regulating energy necessary to provide the FCR service in the following hour. The lower layer (faster and short-sighted) is responsible for splitting the requested power set-point between the turbine and the BESS, ensuring the feasible operation of both systems. The framework is validated for different BESS power and energy ratings to study the impact of the BESS sizing on the control problem. Furthermore, the efficacy of the proposed control strategy is examined and validated through reduced-scale experiments conducted on an innovative testing platform [9]. The evaluation covers comparisons with classical control strategies and different BESS sizes, with a specific emphasis on assessing the FCR provision quality and the reduction in servomechanism stress. The paper is organized as follows. Section II proposes the general formulation of the control problem. Section III presents a detailed description of the two-stage control framework. Section IV provides the experimental validation of the proposed framework. Finally, Section V summarizes the original contributions and main results of the article and proposes perspectives for further research activities. ## II Problem Statement As stated in Section I, the control addresses run-of-river HPPs operating under daily dispatch plans \(P^{\text{disp}}\), scheduled hourly, and obligated to provide FCR service with a fixed droop characteristic1\(\sigma_{f}\). The dispatch plan, input to the problem, is the product of an external optimization, taking into account market prices and constraints such as the concession head limit and other variables, and is not the object of this study. As already stated, a BESS is integrated into the system. The primary focus of this study is to propose an optimal set-point splitting policy that effectively achieves the following objectives: Footnote 1: Numerous European Transmission System Operators (TSOs) source their FCR in a shared market with 4-hour adjustments [10]. However, HPP droop settings tend to remain fixed for extended periods (years or even decades) due to contractual agreements, making them constant inputs in our study. 1. Ensuring dispatch tracking and high-quality FCR provision characterized by a rapid response time, in compliance with stringent FCR requirements [11, 12]; 2. Minimizing the number of movements and the mileage of the hydropower servomechanisms; 3. Ensuring the continuous and efficient operation of the BESS by managing its SOE within physically feasible limits; 4. Validating the feasibility of the BESS power set-point to uphold the operational constraints imposed by the power converter capability curve. To achieve these objectives, the study proposes the use of a _Double-Layer Model Predictive Control_ (DLMPC), structured as visible in Fig. 1. The _upper layer_ (UL) acts as a SOE manager for the BESS, employing a forecast of the required regulating energy2 for each hour \(g\). The output of the UL is a constant power offset for the hour that directly modifies the dispatch plan of the HPP. The _Lower Layer_ (LL) optimally allocates the power set point between the turbine and the BESS in real time, that is, every second \(k\) (see the time indices used in Fig. 2). This control layer leverages short-term frequency prediction3, together with real-time measurements of the HPP power and the BESS SOE. In Fig.
1, each quantity \(A\) is indicated with a certain time subscript: hour \(g\) or seconds \(k\). In particular, if the quantity is a vector, including information from time \(k\) to \(k+p\), it is indicated as: Footnote 2: The idea of forecasting the FCR regulating energy was first introduced by [13] and later used from [14] and others. More information about the way this forecast is performed can be found in Appendix A. Footnote 3: Appendix A examines the viability of this forecast by making references to relevant literature. \[\mathbf{A}_{k:k+p}=\left[A_{k},A_{k+1},A_{k+2},\ldots,A_{k+p-1},A_{k+p}\right].\] ## III Control Framework ### _Upper Layer MPC_ Let us consider \(H\) as the power of the hydroelectric unit (active sign notation) and \(B\) as the power of BESS (passive sign notation). This MPC layer, executed at the beginning of every hour \(g\), is responsible for computing the smallest constant hourly power offset: \[H_{g}^{0}=B_{g}^{0}\] that allows to keep the BESS SOE within its physical limits. The problem relies on the last measure of the BESS SOE and on a forecast of the frequency integral over the next hour \(\hat{W}_{g}\), together with its confidence intervals \(\hat{W}_{g}^{\uparrow},\hat{W}_{g}^{\downarrow}\). Information on the forecasting method can be found in Appendix A. The forecast is modified to take into account the charging efficiency \(\eta_{\text{ch}}\) and the discharging efficiency \(\eta_{\text{ch}}\) of the BESS as: \[W_{g}=\left\{\begin{array}{ll}\hat{W}_{g}\cdot\eta_{\text{ch}},&\text{if }\hat{W}_{g}\leq 0\\ \hat{W}_{g}/\eta_{\text{ch}},&\text{if }\hat{W}_{g}\geq 0\end{array}\right. \tag{1}\] The updated value \(W_{g}\) is an input for the upper layer MPC, where it is used together with the frequency droop \(\sigma_{f}\) to forecast the regulating energy for hour \(g\) for the FCR service: \[E_{g}^{\text{FCR}}=\sigma_{f}\cdot W_{g} \tag{2}\] Fig. 1: DLMPC structure, showing inputs and output of the UL (in blue) and LL (in red). Fig. 2: Timeline of the problem, showing the actuation of the UL in blue, i.e., each hour \(g\) and of the LL in red, i.e., each second \(k\) The constant hourly offset \(B_{g}^{0}\) is either positive (BESS is charging) or negative (BESS is discharging): \[B_{g}^{0}=\left\{\begin{array}{cc}B_{g}^{0+},&\text{if }B_{g}^{0}\geq 0,\\ -B_{g}^{0-},&\text{if }B_{g}^{0}\leq 0\end{array}\right. \tag{3}\] where \(B_{g}^{0+}\) and \(B_{g}^{0-}\) are the charging and discharging BESS power in \(\mathrm{kW}\), respectively. The constant hourly offset \(B_{g}^{0}\) is then computed according to the following _Optimisation Problem_ (OP): \[B_{g}^{0}=\operatorname*{arg\,min}_{B_{g}^{0+}\&B_{g}^{0-}= \mathbb{R}}\left(B_{g}^{0+}+B_{g}^{0-}\right)^{2}\] (4a) s.t. \[W_{g}=\left\{\begin{array}{cc}\hat{W}_{g}\cdot\eta_{\text{ch}},& \text{if }\hat{W}_{g}\leq 0\\ \hat{W}_{g}/\eta_{\text{ch}},&\text{if }\hat{W}_{g}\geq 0\end{array}\right. 
\tag{4b}\] \[E_{g}^{\text{FCR}}=\sigma_{f}\cdot W_{g}\] (4c) \[E_{g}^{0}=\eta_{\text{ch}}\cdot B_{g}^{0+}-\frac{1}{\eta_{ \text{ch}}}\cdot B_{g}^{0-}\] (4d) \[SOE_{g}=SOE_{h-1}+\left[E_{g}^{0}+E_{g}^{\text{FCR}}\right] \frac{1}{C_{\text{B}}}\] (4e) \[SOE_{g}^{\uparrow}=SOE_{g}+\left[\sigma_{f}\cdot\hat{W}_{g}^{ \uparrow}\right]\frac{1}{C_{\text{B}}}\] (4f) \[SOE_{g}^{\downarrow}=SOE_{g}-\left[\sigma_{f}\cdot\hat{W}_{g}^{ \downarrow}\right]\frac{1}{C_{\text{B}}}\] (4g) \[SOE_{g}^{\uparrow}\leq SOE_{\max}\] (4h) \[SOE_{g}^{\downarrow}\geq SOE_{\min}\] (4i) \[0\leq B_{g}^{0+}\leq B_{\max}^{+}\] (4j) \[0\leq B_{g}^{0-}\leq B_{\max}^{-}\] (4k) \[B_{g}^{0}=B_{g}^{+}-B_{g}^{-} \tag{4l}\] The energy variation \(E_{g}^{0}\) due to the offset action during hour \(g\) is provided by Eq. (4d) and expressed in \(\mathrm{kW}\,\mathrm{h}\), as the power offset is applied for one hour. The latter is used, together with the forecast of \(W_{g}\) in Eq. (4e) to predict the \(SOE_{g}\) at the end of hour \(g\), taking into consideration the BESS capacity \(C_{\text{B}}\). Eqs. (4f)-(4i) compute the confidence interval of the SOE forecast and ensure the feasible operation in the considered intervals. Separating the charging and discharging components of the power offset allows for taking into consideration charging and discharging efficiency. Relaxation according to [15], visible in Eq. (4a) forces only one of the two decision variables to be different from zero, so that Eq. (4l) is compatible with the definition in Eq. (3). Equations (4j)-(4k) ensure the final power of the battery to be within the operational limits. Finally, Eq. (4l) computes the final power output of the BESS, positive if charging. The offset is directly applied to modify the dispatch plan of the HPP, as visible in Fig. 1. ### _Lower Layer MPC (LLMPC)_ The lower layer is a rolling-horizon MPC responsible for computing in real-time (i.e., each second) the set-point splitting policy between the HPP and the BESS, with the following objectives: 1. Optimal tracking of the FCR provision; 2. minimize the number of movements and the total mileage of the hydropower servomechanisms; 3. ensure the BESS to operate within its physical limits. For every time-step: \[k\in[k,k+p],\] where \(p\) is the length of the MPC horizon, the expected power output \(P_{k}^{\text{set}}\) of the hybrid system is: \[P_{k}^{\text{set}}=P_{k}^{\text{disp}}+\left[50-\hat{f}_{k}\right]\cdot\sigma _{f} \tag{5}\] where the term term \(\hat{f}_{k}\) indicates the expected value of the grid frequency at time \(k\). Information about this short-term frequency forecast is contained in Appendix A. As a consequence, the tracking error \(\text{TE}_{k}\) is the difference between the expected power output and the actual aggregated production of HPP + BESS, i.e. the power flow at the _Point of Common Coupling_ (PCC): \[\text{TE}_{k}=P_{k}^{\text{set}}-\left(H_{k}-B_{k}\right) \tag{6}\] and, over the entire MPC window: \[\text{TE}_{k:k+p}=\|\text{P}_{k:k+p}^{\text{set}}-\left(\text{H}_{k:k+p}- \text{B}_{k:k+p}\right)\|_{2} \tag{7}\] The second objective, i.e. the reduction of the number of movements of the hydropower actuators, can be modeled as the minimization of the cardinality of the array containing the variation of the hydroelectric unit power output with respect to the previous time instant: \[\min\quad\text{card}(\boldsymbol{\Delta}\text{H}_{k:k+p}) \tag{8}\] with \(\Delta H_{k}=H_{k}-H_{k-1}\quad\forall k\in[k,k+p]\). 
As discussed in [16], the cardinality of a quantity can be relaxed with its \(\ell_{1}\)-norm. In other words, \(\|\boldsymbol{\Delta}\text{H}_{k:k+p}\|_{1}\) is the convex envelope of the objective function of Eq. (8). The latter equation can therefore be relaxed, and assumes the following form: \[\min\quad\gamma\|\boldsymbol{\Delta}\text{H}_{k:k+p}\|_{1} \tag{9}\] where \(\gamma\) is a non-negative parameter tuned to achieve the desired sparsity. Equations (7) and (9) constitute the objective of the lower-layer MPC. The constraints of the OP ensure that the BESS and the HPP operate within their physical limits. In particular, the capability curve \(\zeta\) of the BESS converter is considered as a function of the voltage on both the DC and AC side (\(v_{k}^{DC}\) and \(v_{k}^{AC}\), respectively) and of the BESS SOE: \[B_{k}\leq\zeta(v_{k}^{DC},v_{k}^{AC},SOE_{k}) \tag{10}\] Similarly to [17], the estimation of the BESS \(v_{k}^{DC}\) is based on the battery Three-Time-Constant (TTC) model, whose dynamic evolution can be expressed as a linear function of the battery current by applying the transition matrices \(\phi^{v}\), \(\psi^{v}\), \(\psi^{v}_{1}\) (see Eq. (12d)). In the TTC model, the quantity \(x_{k}\) is the state vector of the voltage model, and the DC current is expressed as \(i_{k}^{DC}\). The latter is computed in Eq. (12e). The active power at the DC bus \(B_{k}^{DC}\) is related to the active power set-point on the AC side of the converter \(B_{k}^{\text{set}}\) according to (12c). The magnitude of the direct sequence component \(v_{k}^{AC}\) of the phase-to-phase voltage on the AC side of the converter is assumed to be equal to the last available measurement, as indicated in (12f). Finally, the BESS SOE evolution is computed in Eq. (12g) and ensured within its physical limits by Eq. (12h). For the HPP, a first-order discrete-time dynamical system is used to model the response \(H_{k}\) to a set-point \(H_{k}^{\text{set}}\) considering a time constant \(\tau_{H}\): \[H_{k}=(1-\frac{\Delta k}{\tau_{H}})\cdot H_{k-1}+\frac{\Delta k}{\tau_{H}}\cdot H_{k}^{\text{set}} \tag{11}\] where \(\Delta k\) is the time interval between two consecutive discrete-time samples (i.e., one second). The HPP output power is limited within \(H_{\min}\) and \(H_{\max}\) by Eq. (12j), while Eq. (12k) ensures the power ramping rate to be lower than its maximum allowed value \(\dot{H}_{\max}\), expressed in \(\operatorname{kW}\operatorname{s}^{-1}\). As we assume the dispatch plan to include concession head control and given the short-term horizon of the MPC, no constraints on the concession head are introduced5. The LLMPC, running every second \(k\), is the following OP: Footnote 5: If constraints on the concession head need to be modelled, similarly to what is done in [20], information about the hill chart of the runner is needed to translate power set-points into discharge values.
\[\left[\mathbf{H}_{k:k+p}^{\text{set}};\mathbf{B}_{k:k+p}^{\text{set}}\right]=\] \[=\operatorname*{arg\,min}_{\mathbf{H},\mathbf{B}\in\mathbb{R}^{p} }\quad\mathbf{TE}_{k:k+p}+\gamma\cdot\|\mathbf{\Delta H}_{k:k+p}\|_{1}\] (12a) subject to: \[B_{k} \leq\zeta(v_{k}^{DC},v_{k}^{AC},SOE_{k}) \tag{12b}\] \[B_{k}^{DC} =\left\{\begin{array}{ll}B_{k}^{\text{set}}\cdot\eta_{\text{ kh}},&\forall B_{k}^{\text{set}}\geq 0\\ B_{k}^{\text{set}}/\eta_{\text{kh}},&\forall B_{k}^{\text{set}}<0\end{array}\right.\] (12c) \[v_{k}^{DC} =\phi_{\nu}x_{k-1}+\psi_{i}^{v_{L}DC}+\psi_{1}^{v}\mathbf{1}\] (12d) \[i_{k}^{DC} \approx\frac{B_{k}^{DC}}{v_{k}^{DC}}\] (12e) \[v_{k}^{AC} \approx v_{k-1}^{AC}\] (12f) \[SOE_{k} =SOE_{k-1}+i_{k}^{DC}v_{k}^{DC}\frac{1}{C_{B}}\] (12g) \[SOE_{\min} \leq SOE_{k}\leq SOE_{\max}\] (12h) \[H_{k} =(1-\frac{\Delta k}{\tau_{H}})\cdot H_{k-1}+\frac{\Delta k}{\tau_ {H}}\cdot H_{k}^{\text{set}}\] (12i) \[H_{\min} \leq H_{k}\leq H_{\max}\] (12j) \[-\dot{H}_{\max} \leq\Delta H_{k}\leq\dot{H}_{\max}\] (12k) \[\qquad\qquad\forall k\in[k,\ldots,k+p]\] A way to convexify constraints (12b) - (12f) has been presented in [21] and used in [17]. The optimization problem is solved at each time step \(k\) (with updated information) on a sliding horizon from the index \(k\) to \(k+p\). At each \(k\), the control trajectories for HPP and BESS for the whole residual horizon \(\left[\mathbf{H}_{k:k+p}^{\text{set}},\ \mathbf{B}_{k:k+p}^{\text{set}}\right]\) are available. However, only the first components, denoted by:\(\left[H_{k}^{\text{set}},B_{k}^{\text{set}}\right]\) are considered for actuation. ## IV Experimental Validation ### _Experimental setup_ The hybrid experimental platform used modernizes an existing platform in the _Plateforme Technologique Machines Hydrauliques_ (PTMH), at EPFL. The current platform is a closed-loop test rig that allows performance assessments of hydraulic machines in the four-quadrant characteristic curve with an accuracy of 0.2 %, complying with the IEC60153 standard for the testing of reduced scale physical models. The specific hydraulic energy in the closed-loop test-rig is generated by two centrifugal pumps. They allow for a maximum head of 100 m and a maximum discharge of 1.4 m\(\operatorname{s}^{3}\operatorname{s}^{-1}\). Furthermore, the pressure in the draft tube is set by adjusting the pressure in the downstream reservoir by using a vacuum pump. The hydroelectric unit governing systems is built as a standard speed governor [22] with frequency droop \(\sigma_{f}=$125\,\mathrm{kW}\,\mathrm{Hz}^{-1}$\) and a dead band of \(2\,\mathrm{mHz}\). The choice of droop is dictated by the similarity need with the prototype of the reduced scale model turbine, installed in Vogelgrun (FR) [7], and comprehensively discussed in the Appendix B. The measurement infrastructure employs a distributed sensing system based on _Phasor Measurement Units_. This system allows for real-time acquisition of precise power flow data thanks to the PMUs' high reporting rate of 50 frames per second and remarkable accuracy, with a standard deviation equivalent to \(0.001\) degrees (approximately 18 \(\mu\)rad) and error in the frequency estimation \(\leq$0.4\,\mathrm{mHz}$\)[23]. A 50 kW Kaplan turbine is connected to the grid bus with a synchronous machine, rated 100 kVA. On the same bus, the BESS is connected. For more information, see [9]. The frequency of the grid is regulated by a grid emulator with a nominal power rating of 100 kVA. 
The configuration of the hybrid system is illustrated in Figure 3. As the turbine is connected to the grid through a synchronous machine, a synchro-check mechanism is necessary. The comprehensive list of measurements is detailed in Table I. Fig. 3: Schematics of the PTMH PF3 power grid used to carry out the experiments ### _Experimental Tests_ In this section, we present the experimental campaign to validate the proposed framework. A series of 12-hour-long tests is performed under the same grid conditions. The frequency time-series enforced at the PCC by the grid emulator is illustrated in Fig. 4. This frequency data corresponds to measurements taken on January 8th, 2021, when the ENTSO-E Continental Europe synchronous area experienced a system split. This particular time frame is chosen to ensure the inclusion of typical daily frequency patterns during the first 8 hours of the test, and to subject the system to more challenging conditions in the remaining 4 hours. More information about the system split event can be found in [24]. Under the described grid conditions, the following tests are conducted: 1. Kaplan unit operating alone. 2. Hybrid Kaplan unit with a \(5\,\mathrm{kW/5\,kW\,h}\) BESS, controlled by a _Dead Band Filter_ (DBF). 3. Hybrid Kaplan unit with a \(9\,\mathrm{kW/9\,kW\,h}\) BESS, controlled by DBF. 4. Hybrid Kaplan unit with a \(5\,\mathrm{kW/5\,kW\,h}\) BESS, controlled by DLMPC. 5. Hybrid Kaplan unit with a \(9\,\mathrm{kW/9\,kW\,h}\) BESS, controlled by DLMPC. Here, the DBF control approach consists of applying a dead-band filter to the frequency signal at the input of the governor [7]. Under this scheme, any frequency deviation lower than the one associated with the maximum power capacity of the BESS is allocated to the BESS itself, while any deviation exceeding this threshold is directed to the hydroelectric unit (a minimal sketch of this split is given below). For instance, in the context of a \(5\,\mathrm{kW}\) BESS, the frequency threshold for the DBF control is at \(40\,\mathrm{mHz}\), and for a \(9\,\mathrm{kW}\) BESS, it is set at \(72\,\mathrm{mHz}\). For more details on the sizing of the BESS and the determination of these threshold values, please refer to Appendix B. Even when utilizing the DBF control strategy, an upper layer is implemented to function as a SOE manager of the BESS. All the tests are performed considering a flat dispatch plan of \(27\,\mathrm{kW}\), constant for the 12 hours, to ensure that the wear and tear analysis of the movements is only related to FCR support. Moreover, the head of the system is kept constant at \(10\,\mathrm{m}\). As for the DLMPC approach, the ULMPC is executed hourly based on the energy prediction for regulation, as outlined in Appendix A. On the other hand, the LLMPC runs every second using a sliding window spanning 30 seconds. Additional information about short-term frequency forecasting is also detailed in Appendix A. ### _Results_ This analysis aims to highlight the advantages of BESS hybridization and to conduct a comparative assessment of distinct control techniques across various _Key Performance Indicators_ (KPIs). This comprehensive analysis covers three primary aspects: the quality of the FCR provision, the mitigation of wear and tear on the hydroelectric governing system, and the safe operation of the BESS. #### Iii-C1 FCR provision quality To assess the effectiveness of the FCR provision, we introduce the _Root Mean Squared_ (RMS) of the tracking error TE, as defined in Eq. (6), computed over the full experiment time horizon.
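For reference, a minimal sketch of the DBF set-point split described in the previous subsection. The droop and thresholds follow the text (\(\sigma_{f}=125\,\mathrm{kW\,Hz^{-1}}\), so a 5 kW BESS corresponds to a 40 mHz band); the function name, the injection-positive sign convention, and the exact splitting rule shown (the BESS covers the request up to its rating and the hydro unit covers the excess) are simplifying assumptions of this sketch rather than the implemented controller.

```python
import numpy as np

SIGMA_F = 125.0    # droop, kW per Hz (from the text)
B_MAX = 5.0        # BESS power rating, kW -> 40 mHz dead band = B_MAX / SIGMA_F

def dbf_split(f_hz, p_dispatch_kw):
    """Dead-band split of the FCR request between the BESS and the hydro unit.

    Sign convention here: positive power means injection into the grid
    (the paper uses passive sign notation for the BESS instead).
    """
    p_fcr = (50.0 - f_hz) * SIGMA_F                  # total FCR power request, kW
    p_bess = float(np.clip(p_fcr, -B_MAX, B_MAX))    # within the band: battery only
    p_hydro = p_dispatch_kw + (p_fcr - p_bess)       # hydro covers only the excess
    return p_hydro, p_bess

# A -60 mHz deviation exceeds the 40 mHz band of the 5 kW BESS:
print(dbf_split(49.94, 27.0))   # -> (29.5, 5.0): hydro at 27 + 2.5 kW, BESS at 5 kW
```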
By integrating over a certain amount of time \(\Delta t\) the power error it is possible to estimate a mean energy error \(E_{k}^{k+\Delta t}\), as follows: \[E_{k}^{k+\Delta t}=\frac{1}{\Delta t}\sum_{k}^{k+\Delta t}\text{TE}_{k} \tag{13}\] The resulting power error and energy error values, computed for \(\Delta t=[10,30,60]\) seconds are presented in Table II. While the RMS of TE might be influenced by power measurement noise, the energy errors consistently underline the benefits of hybridization in enhancing FCR provision quality. Although the BESS size contributes to reducing the RMS of TE, the impact is not notably significant. Interestingly, DBF and DLMPC techniques can be considered equivalent in terms of set-point tracking quality. Nonetheless, for data aggregated at 10, 30, and 60-second intervals, the tracking error of any hybrid system demonstrates a reduction of at least 50% in comparison to the error exhibited by the non-hybrid system. #### Iii-C2 Wear and Tear Reduction The assessment of wear reduction benefits is undertaken through the consideration of three specific KPIs: servomotors mileage, _Number of Movements_ (NoM) and torque oscillation on the blades. The reduction in both guide vane and runner blade NoM and mileage are significant indicators of the wear reduction achieved through hybridization [6]. Table III provides a comprehensive overview of this wear reduction, underscoring the substantial advantages offered by the MPC technique. All the percentage values are computed relative to the baseline configuration, represented by 'Only Hydro'. Notably, the DLMPC control method demonstrates superior performance compared to the DBF control. The degree of improvement achieved through more sophisticated control techniques is particularly pronounced for smaller BESS sizes. In essence, when the BESS size is sufficiently large, the necessity for optimal control diminishes. For instance, in the \(5\,\mathrm{kW}\) BESS experiments, the DBF control attains a substantial reduction of 90.7% in runner blade mileage and 91.6% in the number of movements. Similarly, with the same BESS size, the DLMPC algorithm demonstrates even greater reductions, achieving 94.0% and 97.1% reduction in runner blade angle mileage and the number of movements, respectively. Comparable outcomes are observed when examining the corresponding reduction in guide vane servomotors. These findings highlight the benefits of implementing hybridization with advanced control techniques such as DLMPC, particularly standing out for its wear reduction performance for smaller BESS sizes. Finally, for an estimate of the forces that affect the blades and their servomechanisms, the torque oscillations occurring on the runner blades are evaluated. To capture torque data, strain gauges are employed on the runner blades. As outlined in [25], strain gauge testing emerges as the sole reliable approach for acquiring both static and dynamic stress information from the runner. Over recent years, this method has gained widespread acceptance as a standard practice for newly installed runners, as highlighted in [26]. The _Cumulative Density Function_ (CDF) of the blade torque derivative is analyzed in Fig. 5. In line with the trends observed in other wear reduction KPIs, the sole hydro case demonstrated inferior performance, characterized by elevated levels of torque oscillations on the blades. 
In the case of the 5 kW BESS, it is noticeable that the CDF of the blade torque derivative is significantly narrower under the DLMPC control strategy compared to DBF. This narrower distribution implies that there are fewer occurrences of high torque oscillations, indicating improved performance in mitigating such oscillations with DLMPC. A larger BESS size also has a positive impact on torque oscillation reduction. Fig. 4: Frequency Time Series for the test. Measurement from SwissGrid during the system split of the 8th of January 2021 [24] Fig. 5: Comparative analysis of Blade Torque Oscillation for the different scenarios. #### Iv-B3 Safe BESS Operation The BESS SOE evolution over the 12 hours, for each experiment, is illustrated in Fig. 6. It is worth noting that the two DBF experiments operate effectively until the grid split event occurs, around 14h00. Indeed, the regulating energy prediction from Eq. (14) is designed to function correctly in approximately 95% of cases (see Table IV) under normal grid conditions. However, in the event of a grid disruption, the power grid dynamics drastically change, and the new grid configuration differs from the one on which the SARIMA model was originally trained. Consequently, both DBF control strategies are unable to keep the SOE within its limits during the latter half of the experiment. In contrast, the DLMPC strategy continually monitors the BESS SOE by means of the LLMPC. This frequent monitoring guarantees that the BESS operates within its predefined operational limits, even if the FCR energy prediction \(E_{\text{FCR}}\) deviates from the expected values. This feature substantially bolsters the system's reliability, allowing it to adapt to unforeseen grid dynamics. ## V Conclusion In this paper, we present a comprehensive solution to address operational challenges in Run-of-River Hydropower Plants (RoR HPPs) related to continuous power regulation due to Frequency Containment Reserve (FCR) provision. By integrating Battery Energy Storage Systems (BESS) with RoR HPPs using a double-layer Model Predictive Control (MPC) approach, we validate a novel strategy to enhance FCR capabilities and simultaneously reduce wear and tear. The proposed control framework consists of an upper layer MPC responsible for State-of-Energy management of the BESS and a lower layer MPC for the optimal splitting of power set-points between the turbine and the BESS. Rigorous reduced-scale experiments conducted on an innovative testing platform validated the effectiveness of the proposed MPC-based control strategy across diverse grid scenarios and BESS sizes. The experimental results showcased the superiority of the double-layer MPC strategy over simpler techniques such as dead-band control. The hybridized system exhibited enhanced FCR provision quality and a notable reduction in servomechanism stress. The analysis of wear reduction revealed a substantial decrease in both guide vane and runner blade angle mileage, ranging from 93.2% with a 5 kW BESS to as much as 98% for a 9 kW BESS. Similar reductions were observed in the number of movements. Moreover, reduced torque oscillations on the blades further emphasize the benefits of the proposed hybridization approach. Future work on the subject involves an experimental comparison between the hybridization scenario outlined and the implementation of variable speed for the Kaplan turbine used as a propeller, i.e. with fixed blades.
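To make the FCR-provision KPIs concrete, the following minimal sketch (not the authors' code) shows how the RMS of the tracking error and the windowed mean energy error of Eq. (13) could be computed from logged set-point and measured-power time series. The signal names, the 1-second sampling, the sign convention of TE and the RMS aggregation across windows are assumptions made for illustration, since Eq. (6) and Table II are not reproduced here.

```python
import numpy as np

def fcr_kpis(p_setpoint, p_measured, dt=1.0, windows=(10, 30, 60)):
    """KPIs for FCR provision quality from power time series sampled every `dt` seconds.

    p_setpoint / p_measured: dispatched-plus-FCR set-point and measured power at the PCC
    (illustrative names; the sign convention of TE is assumed).
    """
    te = p_measured - p_setpoint                 # tracking error TE_k (assumed convention)
    rms_te = np.sqrt(np.mean(te ** 2))           # RMS of TE over the full horizon

    energy_errors = {}
    for w in windows:                            # window length in seconds
        n = int(w / dt)                          # samples per window
        m = len(te) // n
        # mean tracking error over each non-overlapping window, cf. Eq. (13)
        windowed = te[: m * n].reshape(m, n).mean(axis=1)
        energy_errors[w] = np.sqrt(np.mean(windowed ** 2))  # aggregated as RMS (assumption)
    return rms_te, energy_errors

# toy usage with synthetic data: a flat 27 kW dispatch plus FCR activity, imperfect tracking
rng = np.random.default_rng(0)
p_ref = 27.0 + rng.normal(0.0, 0.5, 12 * 3600)
p_meas = p_ref + rng.normal(0.0, 0.2, p_ref.size)
print(fcr_kpis(p_ref, p_meas))
```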
This paper presents a solution to address the wear and tear of Run-of-River (RoR) Hydropower Plants (HPPs) arising from the provision of enhanced Frequency Containment Reserve (FCR). In this respect, the study proposes the integration of a Battery Energy Storage System (BESS) with RoR HPPs controlled by a double-layer Model Predictive Control (MPC). The upper layer MPC acts as a state of energy manager for the BESS, employing a forecast of the required regulating energy for the next hour. The lower-layer MPC optimally allocates the power set-point between the turbine and the BESS. Reduced-scale experiments are performed on a one-of-a-kind testing platform to validate the proposed MPC-based control considering a comparison with different control strategies and different BESS sizes. The results demonstrate superior performance of the proposed framework compared to simpler techniques like dead-band control or to the standalone RoR scenario, leading to improved FCR provision and reduced servomechanism stress.
2308.04444
Changes in Policy Preferences in German Tweets during the COVID Pandemic
Online social media have become an important forum for exchanging political opinions. In response to COVID measures citizens expressed their policy preferences directly on these platforms. Quantifying political preferences in online social media remains challenging: The vast amount of content requires scalable automated extraction of political preferences -- however fine grained political preference extraction is difficult with current machine learning (ML) technology, due to the lack of data sets. Here we present a novel data set of tweets with fine grained political preference annotations. A text classification model trained on this data is used to extract policy preferences in a German Twitter corpus ranging from 2019 to 2022. Our results indicate that in response to the COVID pandemic, expression of political opinions increased. Using a well established taxonomy of policy preferences we analyse fine grained political views and highlight changes in distinct political categories. These analyses suggest that the increase in policy preference expression is dominated by the categories pro-welfare, pro-education and pro-governmental administration efficiency. All training data and code used in this study are made publicly available to encourage other researchers to further improve automated policy preference extraction methods. We hope that our findings contribute to a better understanding of political statements in online social media and to a better assessment of how COVID measures impact political preferences.
Felix Biessmann
2023-07-31T16:07:28
http://arxiv.org/abs/2308.04444v1
# Changes in Policy Preferences in German ###### Abstract Online social media have become an important forum for exchanging political opinions. In response to COVID measures citizens expressed their policy preferences directly on these platforms. Quantifying political preferences in online social media remains challenging: The vast amount of content requires scalable automated extraction of political preferences - however fine grained political preference extraction is difficult with current machine learning (ML) technology, due to the lack of data sets. Here we present a novel data set of tweets with fine grained political preference annotations. A text classification model trained on this data is used to extract policy preferences in a German Twitter corpus ranging from 2019 to 2022. Our results indicate that in response to the COVID pandemic, expression of political opinions increased. Using a well established taxonomy of policy preferences we analyse fine grained political views and highlight changes in distinct political categories. These analyses suggest that the increase in policy preference expression is dominated by the categories pro-welfare, pro-education and pro-governmental administration efficiency. All training data and code used in this study are made publicly available to encourage other researchers to further improve automated policy preference extraction methods. We hope that our findings contribute to a better understanding of political statements in online social media and to a better assessment of how COVID measures impact political preferences. Keywords:Policy Preference extraction text classification social media ## 1 Introduction The past decades have shown two trends that are becoming increasingly interdependent: Political campaigns take place online in social media. And at the same time online content for individual users is recommended using automated machine learning (ML) systems that are often optimized for user engagement or other proxy metrics for economic profit. These mechanisms can increase visibility of polarizing content and simultaneously enforce a bias towards existing user preferences. During the COVID pandemic, global platforms such as online social media allowed users to directly express their preferences for or against the measures taken by governments, such as lockdowns or vaccination programs. Analysing these policy preferences can yield valuable insights that could help to improve governmental policies. The large amount of content requires methods for automated extraction of policy preferences. Recent trends in machine learning (ML) towards bigger and more powerful language models could help to improve policy preference extraction. However there are few training data sets that contain annotations for fine grained policy preferences [9]. The lack of high quality annotated data sets with political information impedes the development of better models for automated detection of policy preferences. Here we present a data set of online social media content, Twitter posts, with fine grained political annotations as defined in [24]. The data set is used to train a text classification model that predicts policy preferences from social network posts. On a larger corpus of tweets collected from 2019 to 2022 the model is used to predict policy preferences before and during the COVID pandemic. Analyses of automatically extracted policy preferences suggest that the amount of policy preferences expressed on Twitter increased after the first lockdown. 
Leveraging a fine grained political viewpoint taxonomy we can investigate which policy preferences were expressed in those political tweets. To summarize, the main contributions of this study are: * A data set of German tweets with fine grained political preference annotation * A novel text classification model * An analysis of policy preferences before and during the COVID pandemic ## 2 Related Work The general topic of automated information extraction from online social media has been widely studied and different approaches have been proposed, including supervised ML methods, such as text classification [11], and unsupervised methods, such as topic models, or extensions thereof [1, 7, 8]. Many of these methods are dedicated to trending topic extraction. Since not all trending topics are related to the political discourse a large fraction of these methods do not lend themselves easily to the investigation of policy preferences. A number of studies have explored automated extraction of policy preferences, for a comprehensive overview we refer the interested reader to [9]. There have been many studies exploring traditional ML techniques for ideology detection and policy preference extraction [21] as well as approaches based on more recent natural language processing models, such as Recurrent Neural Networks [12] or more recently also Transformers [17]. The authors of [9] highlight that training ML models for automated extraction of fine grained policy preferences expressed in online social media content remains challenging. Primarily this is due to the fact that annotating this data requires expertise that can not as easily be crowdsourced, as the annotation of hate speech for instance. Annotation of policy preferences requires domain expertise and in particular experience with policy preferences as expressed in online media. There are some publicly available data sets that can be used for training ML models that detect policy preferences in text data. One of the largest and best curated data sets is the corpus of the Manifesto Project [23] which contains over 1,500,000 quasi-sentences, extracted from over 1,500 party manifestos, and annotated according to a well established category scheme of 56 policy categories [24]. This data has been used by researchers to investigate policy preferences [15] and there have been efforts to train ML models on this data to make predictions on online social media texts [6, 18, 16]. However the texts of party manifestos are written in a different style than posts in online social media. Hence models trained on the manifesto data usually do not work well on online social media texts. Other data sets focus more on texts in online social media but these often focus on a small set of political policy preferences [4, 13, 2, 10]. ## 3 Training Data Set For annotating training data with fine grained policy preferences we sampled tweets from a corpus of German tweets [14]. The tweets were sampled between August 2019 and March 2022 and filtered using the following criteria: User InteractionWe selected tweets that were interacted with in some form (likes, retweets, quotes) at least once. 
\begin{table} \begin{tabular}{l c c c c} \hline \hline & precision & recall & f1-score & support \\ \hline controlled economy + & 1.00 & 0.67 & 0.80 & 3.0 \\ europe - & 0.80 & 0.75 & 0.77 & 16.0 \\ environmentalism + & 0.76 & 0.70 & 0.73 & 90.0 \\ democracy + & 0.63 & 0.74 & 0.68 & 77.0 \\ anti-imperialism + & 1.00 & 0.50 & 0.67 & 2.0 \\ economic orthodoxy + & 0.57 & 0.67 & 0.62 & 6.0 \\ europe + & 0.56 & 0.64 & 0.60 & 14.0 \\ undefined & 0.58 & 0.55 & 0.57 & 271.0 \\ infrastructure + & 0.43 & 0.80 & 0.56 & 20.0 \\ foreign special + & 0.50 & 0.55 & 0.52 & 11.0 \\ \ldots & & & & \\ \hline accuracy & & & 0.46 & 1214 \\ macro avg & 0.30 & 0.31 & 0.30 & 1214 \\ weighted avg & 0.46 & 0.46 & 0.46 & 1214 \\ \hline \hline \end{tabular} \end{table} Table 1: F1 scores for tweets in the test set for the top 10 (according to F1) political categories. The complete list can be found in the Appendix, Table 3. #### Relevance We used an ML model (see below) trained on the Manifesto Project corpus [23] to estimate the political relevance of each tweet. To increase the usefulness of the annotated data set we tried to cover all labels of the Manifesto Project's category scheme by selecting for each week only the top 5 tweets that were predicted as the most likely for each political category by an ML model trained on German party manifestos [23]. The filtered set of tweets was then annotated by two experts trained by researchers of the Manifesto Project. The annotation was performed in a custom-written web app and later using labelstudio [22]. Annotators were instructed to label a tweet with one of the 56 political categories of the Manifesto Project codebook [24]. Additionally, annotators provided the label undefined for tweets that could not be associated with any of the relevant political categories. If the tweet contained an image, annotators also considered the image content for the annotation. Context beyond the actual tweet content was not taken into account. Exceptions were tweets that replied to or commented on another tweet. In that case the original tweet was also considered. These replied-to tweets are, to keep the data set simpler, not part of the data set but can be retrieved via the URL of the annotated tweet. In the current version of the data set there are 6097 unique tweets and the most frequent political categories annotated are shown in Table 2 (Appendix). Note that the majority of tweets are labeled as undefined, despite the filtering with the ML model. This is an indication that the data set contains useful negative examples for training better models. The data set is released and available for research purposes [5]. ## 4 Evaluation of Policy Preference Predictors To establish a simple baseline for policy preference extraction on the PoliTweet data set we used the TextPredictor module of the autoML package AutoGluon [3, 19]. The model was trained on a V100 NVIDIA GPU with a pretrained BERT model checkpoint (bert-base-german-cased) on the entire German part of the manifesto corpus [23] and 4883 annotated tweets from the training data set in section 3; 1214 annotated tweets were held out for testing the model. In Table 1 we list the results for the top 10 political categories that could be predicted with highest F1 score by the model; the full list of results for all categories is listed in the Appendix, Table 3. Note that while the overall prediction performance is below 0.50 F1 score (macro or class-frequency weighted), these results are still encouraging. 
Fine grained political viewpoint extraction is a challenging task: even when models are trained on the manifesto corpus, with extensive tuning and state-of-the-art ML methods, the classification performance across all categories often stays below an F1 score of 0.5 [20]. ## 5 Policy Preferences after COVID lockdown The model trained in section 4 was then applied to the entire Twitter corpus [14] between 2019 and 2022, filtered using the relevance and activity criteria mentioned in section 3. We applied additional relevance filters to extract only tweets expressing political views. All tweets for which the category undefined was amongst the top 3 predictions of the text classification model were considered irrelevant and filtered out. The histograms of policy preferences in the remaining tweets were then compared before and after the COVID lockdown onset in Germany. In Figure 1 we show histograms of political views expressed in tweets before and after the onset of the first lockdown. Overall our results suggest that the number of political tweets increased after the first lockdown. Investigating the fine grained political categories we find that this increase is driven by an increased number of tweets categorized as _pro education, pro welfare_ and _pro government administration efficiency_. These changes in the policy preferences of tweets could reflect the negative impact that COVID measures such as lockdowns had: many employees lost their jobs, many needed to teach their children at home, and all administrative processes were substantially slowed down due to the poor digitalization of German administration. In Figure 2 timelines are shown for the political categories _pro education, pro welfare_ and _pro government administration efficiency_, which exhibit the largest change after the onset of the COVID lockdown, as shown in Figure 1. The bottom panel in Figure 2 shows the onsets of lockdowns and COVID case numbers. The strongest impact of lockdown measures with respect to political policy preferences on Twitter appears to develop during the second wave of the pandemic. Figure 1: Increases in political tweets after the first COVID lockdown in Germany. Policy preferences were extracted with a text classifier. _Left:_ After the first lockdown the total number of political tweets per day increases. _Middle:_ Strong increases were observed in the broad political category of political system and welfare; note the log scale on the x-axis. _Right:_ Fine grained policy preferences show a strong increase in _pro education, pro welfare_ and _pro government administration efficiency_. ## 6 Conclusion This study presents three main contributions: a) a data set of German tweets with fine grained political preference annotation, b) a novel text classification model trained on that data and c) an analysis of policy preferences before and during the COVID pandemic. Our preliminary analyses of tweets during the COVID pandemic showed a pronounced increase in political tweets overall and in particular also in certain fine grained political categories. These findings are not from a representative sample and have several other limitations, such as the predictive performance of the text classification model for some classes, especially in rare categories. Nonetheless we believe the data set, the model and the experimental results are an important step towards more scalable and more fine grained policy preference extraction in German online social media. 
We hope that our contributions will encourage other researchers to improve current state-of-the-art models for policy preference extraction in research and applications. Figure 2: Number of tweets over time in those political categories that exhibit a strong increase after the first lockdown in Germany. The bottom panel shows an overview of COVID cases reported by the Robert-Koch-Institut; lockdown starts are indicated in blue. While the first lockdown did not result in strong increases of tweets with political preferences, during the second COVID wave political preferences in the categories _pro education, pro welfare_ and _pro government administration efficiency_ were expressed more often than before. ## Acknowledgements We thank Jonas Bauer for conceptualizing, implementing and maintaining the first data annotation setup, Teo Chiaburu for setting up labelstudio, Marvin Muller and Maren Krumbein for annotating tweets, Pola Lehmann for training the annotators and valuable feedback on the analyses, Johannes Hoster for analyses and Philipp Staab for valuable discussions on sociological aspects.
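For readers who want to reproduce the modelling step of section 4, here is a minimal baseline sketch. It is deliberately not the authors' setup (which fine-tunes bert-base-german-cased via AutoGluon's TextPredictor): instead it trains a simple TF-IDF plus logistic-regression classifier. The file name and the `text`/`label` column names are assumptions about how the released annotations might be exported.

```python
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# hypothetical CSV export of the annotated PoliTweet data: one row per tweet,
# with the tweet text and one of the 56 Manifesto categories (or "undefined")
df = pd.read_csv("politweet_annotations.csv")  # assumed columns: text, label

# no stratification, since several of the 56 categories are very rare
train_df, test_df = train_test_split(df, test_size=0.2, random_state=0)

# simple bag-of-words baseline, standing in for the BERT-based model of the paper
model = make_pipeline(
    TfidfVectorizer(min_df=2, ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(train_df["text"], train_df["label"])

pred = model.predict(test_df["text"])
print(classification_report(test_df["label"], pred, zero_division=0))
```

Per-class F1 scores from such a baseline can then be compared against Table 1 to gauge how much the pretrained language model contributes, especially for the rare categories.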
オンラインソーシャルメディアは、政治的意見交換の重要な場となっています。COVID対策に対する市民の意見は、これらのプラットフォームで直接表明されています。オンラインソーシャルメディアにおける政治的偏見の定量化は難しい: 膨大なコンテンツをスケーラブルで自動抽出できる政治的偏見の抽出を必要とする一方、現在の機械学習技術では、データセットの不足によって細かい政治的偏見の抽出が困難です。ここでは、細分化された政治的偏見の注釈付きのツイートの新しいデータセットを提案します。このデータセットに基づいて訓練されたテキスト分類モデルを使用して、2019年から2022年までのドイツのTwitter corpusから政策的偏見を抽出します。私たちの研究結果では、COVIDパンデミックに対する反応として、政治的意見の表明が増加したことが示されています。政策的偏見の細分化された分類体系を用いて、政治的見解を分析し、
2309.07264
Small error algorithms for tropical group testing
We consider a version of the classical group testing problem motivated by PCR testing for COVID-19. In the so-called tropical group testing model, the outcome of a test is the lowest cycle threshold (Ct) level of the individuals pooled within it, rather than a simple binary indicator variable. We introduce the tropical counterparts of three classical non-adaptive algorithms (COMP, DD and SCOMP), and analyse their behaviour through both simulations and bounds on error probabilities. By comparing the results of the tropical and classical algorithms, we gain insight into the extra information provided by learning the outcomes (Ct levels) of the tests. We show that in a limiting regime the tropical COMP algorithm requires as many tests as its classical counterpart, but that for sufficiently dense problems tropical DD can recover more information with fewer tests, and can be viewed as essentially optimal in certain regimes.
Vivekanand Paligadu, Oliver Johnson, Matthew Aldridge
2023-09-13T18:56:38
http://arxiv.org/abs/2309.07264v2
# Small error algorithms for tropical group testing ###### Abstract We consider a version of the classical group testing problem motivated by PCR testing for COVID-19. In the so-called tropical group testing model, the outcome of a test is the lowest cycle threshold (Ct) level of the individuals pooled within it, rather than a simple binary indicator variable. We introduce the tropical counterparts of three classical non-adaptive algorithms (COMP, DD and SCOMP), and analyse their behaviour through both simulations and bounds on error probabilities. By comparing the results of the tropical and classical algorithms, we gain insight into the extra information provided by learning the outcomes (Ct levels) of the tests. We show that in a limiting regime the tropical COMP algorithm requires as many tests as its classical counterpart, but that for sufficiently dense problems tropical DD can recover more information with fewer tests, and can be viewed as essentially optimal in certain regimes. ## 1 Introduction Group testing is the problem of reliably recovering a subset \(\mathcal{K}\) of 'defective' items from a population of \(N\) items, using a relatively small number \(T\) of so-called pooled tests to test multiple items at the same time. In the classical noiseless setting, the outcome of such tests is binary, indicating whether or not there is at least one defective item in the pool. Group testing was initially introduced by Dorfman [10] in the context of testing for syphilis. It has now developed into a combinatorial and algorithmic problem with a rich history [11, 4], where one seeks to understand the trade-offs between \(N\), \(|\mathcal{K}|\) and \(T\), and to understand how \(\mathcal{K}\) can be efficiently recovered using computationally feasible algorithms. Group testing has applications to many fields such as biology, manufacturing, communications and information technology, as described in [4, Section 1.7]. This framework also recently gained considerable attention during the COVID-19 pandemic - see [3] for a review of its use in this context, and [21] for an early proof of concept that pooled testing can detect the presence of COVID-positive individuals. In general, the efficiency of group testing makes it useful when using PCR machines to test large populations for rare conditions. In the PCR test (see for example [20]), the viral genetic material is extracted from a sample and amplified in 'cycles' of a process using polymerase enzymes. In each cycle, the quantity of this material is approximately doubled. The presence of the viral genetic material is detected by fluorescence, indicating a positive result if the quantity present exceeds a certain amount. The _cycle threshold_ (Ct) value is the number of cycles after which fluorescence is observed - this represents the number of doublings required to achieve detection. Hence when using PCR to test for COVID [18], a lower Ct value indicates a higher concentration of viral genetic material in the sample. Since classical group testing only considers a binary outcome of tests, it fails to take advantage of all the available information regarding strength of infection. However, some COVID-inspired pooled testing schemes, such as Tapestry [14, 13] and the two-stage adaptive scheme of Heidarzadeh and Narayanan [15] are designed to take account of quantitative information through numerical values of test outcomes. 
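As a toy numerical illustration of the Ct mechanism described above (with entirely made-up concentrations and detection threshold), the following sketch computes the number of doubling cycles needed before a sample fluoresces; an uninfected sample never reaches the threshold and so is assigned a Ct value of infinity.

```python
import math

def ct_value(initial_amount, detection_threshold=1.0):
    """Simplified Ct value: doubling cycles needed to reach the detection threshold."""
    if initial_amount <= 0:
        return math.inf  # uninfected samples never fluoresce
    return max(0, math.ceil(math.log2(detection_threshold / initial_amount)))

print(ct_value(1e-3))   # strongly infected sample: low Ct (about 10 cycles)
print(ct_value(1e-9))   # weakly infected sample: high Ct (about 30 cycles)
print(ct_value(0.0))    # uninfected sample: Ct = infinity
```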
An alternative perspective comes through the way that Ct values are explicitly combined together in the so-called _tropical group testing_ model of Wang _et al._[19]. The Ct value \(z\) of two pooled samples with individual Ct values \(x\) and \(y\) satisfies \(z\approx\min\{x,y\}\). This is because the combined pool will contain an amount of viral genetic material proportional to \(2^{-x}+2^{-y}\), and require \(-\log_{2}(2^{-x}+2^{-y})\approx\min\{x,y\}\) doublings to fluoresce. Uninfected samples will never fluoresce given any number of doublings, so we can think of those as having Ct value \(\infty\). The tropical group testing model (see Definition 2.1 below) simply takes the outcome of the pooled test to be the minimum Ct value of the infected individuals contained within it. Consequently, strongly infected items with low Ct values tend to dominate the test outcomes, potentially concealing weakly infected individuals with high Ct values. To address this limitation, Wang _et al._ introduce the concept of a tropical code, which involves a 'delay matrix'. With this approach, Wang describes adaptive and non-adaptive constructions. The key contribution of this paper is the development and analysis of non adaptive algorithms in this tropical setting to recover the Ct values of defective items under a small-error criterion, and to demonstrate gains in performance relative to the classical group testing setting. These algorithms are tropical generalisations of the classical COMP, DD and SCOMP algorithms [2, 4]. Our algorithms and results do not require the use of a delay matrix, meaning that all the tests can be run in parallel simultaneously, making the resulting schemes easy to implement in practice on a PCR machine. In particular, we identify a sharp threshold for the performance of the tropical DD algorithm (see Section 3.4) in certain parameter regimes. In Theorem 6.1 we give an achievability result by showing that in a limiting regime where the number of tests \[T\geq(1+\delta)\max\{T_{\infty},T_{d},T_{d-1},\ldots,T_{1}\} \tag{1}\] for some \(\delta\) then the error probability of this algorithm tends to zero. Here the \(T_{r}\) are explicit expressions in terms of the total number of items with particular defectivity levels. Roughly speaking, \(T_{\infty}\) tests are required to find most (but not necessarily all) of the non-defective items, while \(T_{r}\) tests are required to find all the defective items with Ct value \(r\). Further in Remark 6.2, we argue that in a certain 'uniform' case, this result represents an explicit (albeit second-order) improvement over the performance of classical DD. In contrast, in Theorems 7.1 and 7.4 we show that in the regime where \[T\leq(1-\delta)\max\{T_{d},T_{d-1},\ldots,T_{1}\}\] for any \(\delta\) then the error probability of tropical DD and even of an optimal algorithm tends to \(1\). Since apart from the sign of the \(\delta\) term and the absence of \(T_{\infty}\) this is identical to the expression (1), we can conclude that our tropical DD algorithm is asymptotically optimal in parameter regimes where \(T_{\infty}\) does not give the maximum in (1). The structure of the rest of the paper is as follows. In Section 2 we introduce the notation used in the paper, and formalise the tropical group testing model. In Section 3 we describe the three tropical algorithms we will study, and briefly mention some of their basic properties. Section 4 gives simulation results indicating the performance of these algorithms. 
We analyse the theoretical performance of the tropical COMP algorithm in Section 5 and of the tropical DD algorithm in Sections 6 and 7. ## 2 Notation and tropical model We adapt the classical group testing notation and algorithms of [2, 4] to the tropical group testing model of [19]. The tropical model replaces the 'OR' operation of standard group testing with a'min' operation, in a way motivated by the use of PCR testing for COVID-19. In more detail, for a fixed positive integer value \(d\) we define the set \(\mathcal{D}=\{1,2,\ldots,d,\infty\}\) of possible defectivity (or infection) levels. Here level \(\infty\) represents the state of being not defective, and levels \(1,2,\ldots,d\) represent different levels of defectivity. As with Ct values in PCR testing, the lower the numerical value of the defectivity level, the stronger the infection; and the higher the numerical value of the defectivity level, the weaker the infection. The exact values represented in the set do not matter from a mathematical point of view - while it may be closer to medical practice to use Ct values such as \(\{20,21,\ldots,40,\infty\}\), the choice \(\{1,2,\ldots,d,\infty\}\) provides notational convenience. Given \(N\) items, we represent the defectivity level \(U_{i}\in\mathcal{D}\) of each item \(i\) as a vector \(\mathbf{U}=(U_{1},\ldots,U_{N})\in\mathcal{D}^{N}\). We write \(\mathcal{K}_{r}=\{j:U_{j}=r\}\) for the set of items at each level \(r\in\mathcal{D}\), and write \(\mathcal{K}=\bigcup_{r=1}^{d}\mathcal{K}_{r}\) for the total set of defective items, with finite \(U_{i}\). We write \(K_{r}=|\mathcal{K}_{r}|\) for the size of each set, \(K=\sum_{r=1}^{d}K_{r}=|\mathcal{K}|\) for the total number of defective items, and adopt the notation \(\mathbf{K}=(K_{1},\ldots,K_{d})\). For \(1\leq r\leq d\) and \(1\leq s\leq K_{r}\), we will write \(i(r,s)\) for the \(s\)th item in set \(\mathcal{K}_{r}\) (labelled arbitrarily within \(\mathcal{K}_{r}\)). We assume a combinatorial model: that is, we fix set sizes \(K_{1},\ldots,K_{d}\) in advance and assume that the sets \(\mathcal{K}_{r}\) are disjoint and chosen uniformly at random among sets which satisfy these constraints. We will sometimes consider a limiting sequence of problems where \(N\to\infty\) with \(K\simeq N^{\alpha}\) for some fixed \(\alpha\in(0,1)\) and \(K_{i}\simeq\theta_{i}K\) for some \(\theta_{i}\) with \(\sum_{i=1}^{d}\theta_{i}=1\). We use a non-adaptive testing strategy, where we fix the whole test design in advance. We represent the test design in a binary \(T\times N\) test matrix \(\mathbf{x}\), with the standard convention that \(x_{ti}=1\) means that item \(i\) appears in test \(t\) and \(x_{ti}=0\) means that item \(i\) does not appear in test \(t\). Our use of non-adaptive strategies in this context is motivated by the fact that PCR tests can be performed in parallel using plates with large numbers of wells (such as 96 or 384) - see for example [12] - meaning that the test strategy needs to be designed in advance. We now describe the outcome of a so-called tropical group test. **Definition 2.1**.: _Tropical group testing_ is defined by the outcome \(Y_{t}\) of test \(t\) being given by the lowest defectivity level \(U_{i}\) among items \(i\) that appear in the test: \[Y_{t}=\min_{i}\{U_{i}:x_{ti}=1\}. \tag{2}\] For \(d=1\), there are only two defectivity levels possible for an item \(i\), namely \(U_{i}=1\) (defective) and \(U_{i}=\infty\) (non-defective). 
In this case, Definition 2.1 reduces to Dorfman's standard binary group testing model [10], with the outcome of a negative test \(t\) denoted by \(Y_{t}=\infty\) (rather than the usual \(Y_{t}=0\)). We refer to this as 'classical group testing'. For any value of \(d\), if a test contains no defective items (that is, if \(U_{i}=\infty\) for all items \(i\) in the test) then the outcome is \(Y_{t}=\infty\), which we regard as a negative test, just as in classical group testing. However, unlike classical group testing, we also receive information about the defectivity levels of the items through the outcomes of positive tests being a number from \(1\) to \(d\). In order to analyse tropical group testing, we make some definitions that will be useful, and which extend the definitions and terminology of [2]. **Definition 2.2**.: Write \(\mu_{i}\) for the highest outcome of any test that item \(i\) appears in: \[\mu_{i}\coloneqq\max_{t}\{Y_{t}:X_{ti}=1\}. \tag{3}\] If item \(i\) is not tested, so that \(\{Y_{t}:X_{ti}=1\}=\emptyset\), we use the convention \(\mu_{i}\coloneqq 1\). A key deduction is that \(\mu_{i}\) is the lowest possible defectivity level for item \(i\). **Lemma 2.3**.: _For each item \(i\), we have \(U_{i}\geq\mu_{i}\)._ _In particular, if \(\mu_{i}=\infty\) (that is, if the item appears in a negative test) then we can recover with certainty that \(U_{i}=\infty\)._ Proof.: If an item \(i\) is not tested at all, then by Definition 2.2 we know that \(\mu_{i}=1\), and so the result trivially holds. Otherwise, if an item \(i\) is tested, then for each \(t\) such that \(x_{ti}=1\), by Definition 2.1, we know that \(U_{i}\geq Y_{t}\). So \(U_{i}\geq\max_{t}\{Y_{t}:x_{ti}=1\}=\mu_{i}\). **Definition 2.4**.: We define the following: 1. For each \(1\leq r\leq d\), we refer to an item \(i\) that has \(\mu_{i}=r\) as \(\mathrm{PD}(r)\) ('Possibly Defective at levels \(\{r,\ldots,d,\infty\}\)') and an item \(i\) with \(\mu_{i}>r\) as \(\mathrm{DND}(r)\) ('Definitely Not Defective at levels \(\{1,\ldots,r\}\)'). 2. For \(r\in\mathcal{D}\), we say that an item of defectivity level \(r\) is 1. _intruding_ if it never appears in a test of outcome \(r\) (in which case strict inequality \(U_{r}>\mu_{r}\) holds in Lemma 2.3), 2. _masked_ if it never appears in a test without some other item of level \(\leq r\) also present. 3. For \(r\in\mathcal{D}\), write \(H_{r}\) for the number of tested non-defective items in \(\mathrm{PD}(r)\) (those that have \(\mu_{i}=r\)). For convenience, also define \(H_{0}\) to be the number of untested non-defective items, and define \(G_{r}=\sum_{j=0}^{r}H_{j}\). We note that there are \((N-K)-G_{r}=H_{r+1}+\ldots+H_{d}+H_{\infty}\) non-defective items in \(\mathrm{DND}(r)\). The notion of'masked' items in Definition 2.4.2 generalizes the one given in [4, Proof of Theorem 2.2]. If \(d=1\), then, in the notation of [2], the number \(G\) of intruding non-defectives (i.e. non-defectives that don't appear in any negative tests) corresponds here to those items \(i\) with \(\mu_{i}=1\), tested or untested; so \(G\) in [2] corresponds to \(G_{1}=H_{0}+H_{1}\) here. To aid understanding, it can be helpful to sort the rows and columns of the test matrix as illustrated in Figure 1. The algorithms we describe in Section 3 will be effective for a variety of matrix designs. 
However, as in [2], in the theoretical analysis in Sections 5-7 we assume that the matrix \(\boldsymbol{x}\) is sampled according to a Bernoulli design with parameter \(p\); that is, that the elements \(x_{ti}\) are equal to \(1\) independently of one another with a fixed probability \(p\). As in [4, Section 2.1], we consider a probability \(p=\nu/K\) for some fixed \(\nu\). In fact, as justified in [2] and in Section 4, it is often reasonable to take \(\nu=1\). It remains possible that some other choice may be better in some situations, although simulation evidence in Figure 3 shows that the performance of our algorithms is relatively robust to choices of \(\nu\) close to \(1\). This means that while in theory we need to know the number of defective items in the sample to design the matrix, for practical purposes it is enough to have a good estimate of this number. The paper [16] proves that performance is improved in the classical case when using matrix designs with near-constant column weights \(L=\lfloor\nu T/K\rfloor\), and simulation evidence in Figure 6 suggests that the same might well be true in the tropical case. However the analysis involved in [16] is significantly more complicated than that in [2], so here we restrict ourselves to the Bernoulli case for the sake of simplicity of exposition, and leave alternate matrix designs for future work. Figure 1: Schematic illustration of test matrix and outcomes sorted into block form. Here a \(0\) represents a submatrix of all zeroes, a \(+1\) represents a submatrix which has at least one entry equal to \(1\) in each column, and \(?\) represents a submatrix which could be of any form. The defective items are sorted by level to the left of the vertical line. The column labels above the matrix represent the number of elements of each type; the vector represents the outcomes of the test. ## 3 Description of tropical algorithms ### General remarks concerning algorithms In this section, we describe three algorithms which estimate the true vector of defectivity levels \(\mathbf{U}\), given the test design matrix \(\mathbf{x}\) and the vector of test outcomes \(\mathbf{Y}\). These are the tropical COMP, tropical DD and tropical SCOMP algorithms, adapted from the classical algorithms of the same names in [6, 2] (see also [4, Chapter 2] for a more detailed description). We first define what is meant by an algorithm in this setting. **Definition 3.1**.: A decoding (or detection) algorithm is a function \(\widehat{\mathbf{U}}:\{0,1\}^{T\times N}\times\mathcal{D}^{T}\to\mathcal{D}^{N}\) which estimates the defectivity level of each of the items, based only on knowledge of the test design \(\mathbf{x}\) and outcomes \(\mathbf{Y}\). We write \(\mathbb{P}(\mathrm{err})\) for the error probability of an algorithm, and \(\mathbb{P}(\mathrm{suc})=1-\mathbb{P}(\mathrm{err})\) for the success probability. We define \[\mathbb{P}(\mathrm{err})=\mathbb{P}(\widehat{\mathbf{U}}\neq\mathbf{U}) \tag{4}\] to be the probability that the algorithm fails to recover all the defectivity levels exactly, where the randomness comes through the design of the matrix and the value of \(\mathbf{U}\) itself. Sometimes for emphasis we will include the name of the algorithm and the number of tests, for example by writing \(\mathbb{P}(\mathrm{err};\mathrm{DD},T)\). Recovering \(\mathbf{U}\) exactly represents a strong success criterion for this problem. For example, in some cases, we might be happy to simply recover the defective set \(\mathcal{K}=\{i:U_{i}<\infty\}\). 
We later show that recovering \(\mathbf{U}\) and recovering \(\mathcal{K}\) represent equivalent success criteria for tropical DD and tropical SCOMP, but not for tropical COMP. From a clinical point of view, since lower Ct levels are generally associated with higher infectiousness [18], it might be sufficient to recover all the items with defectivity level below a certain threshold \(t\), that is to find \(\bigcup_{r<t}\mathcal{K}_{r}=\{i:U_{i}<t\}\). In this setting, we say that a _false positive error_ is an error of underestimating the defectivity level \(U_{i}\) of an item \(i\) - that is, of setting \(\widehat{U}_{i}<U_{i}\) - and a _false negative error_ is an error of overestimating the defectivity level \(U_{i}\) of an item \(i\) - that is, of setting \(\widehat{U}_{i}>U_{i}\). In the remainder of this section, we define the tropical COMP (Subsection 3.3), tropical DD (Subsection 3.4) and tropical SCOMP (Subsection 3.5) algorithms as tropical equivalents of their established classical counterparts. All of these algorithms are relatively simple: they do not require exhaustive search over possible values of \(\mathbf{U}\) (in contrast to the classical SSS algorithm [2], for example), can be implemented with a small number of passes through the data, and require an amount of storage which is proportional to the number of items and tests. Despite this simplicity, in the classical case, the DD algorithm has performance close to optimal for certain parameter ranges. This can be seen by comparing [9, Eq. (1.1), (1.2)], which show that DD under a constant column weight design achieves an asymptotic performance which matches that achievable by any algorithm and any test design in the case where \(K\simeq N^{\alpha}\) and \(1/2\leq\alpha<1\). Also, note that while simulations show that classical SCOMP outperforms classical DD for a range of finite size problems, Coja-Oghlan _et al._[8] prove that it requires the same number of tests in an asymptotic sense, with SCOMP having the same rate (in the sense of [2]) as classical DD. ### Counting bounds For classical group testing, a lower bound on the number of tests required is given by the so-called'magic number' \(T^{*}_{\text{class}}:=\log_{2}\binom{N}{K}\), which can be justified on information-theoretic grounds. In fact below this number of tests there is exponential decay in performance of any algorithm, adaptive or non-adaptive, and for any test strategy. Specifically, [5, Theorem 3.1] shows that if \(T\) tests are used then in any scenario the success probability for classical group testing satisfies \[\mathbb{P}(\text{suc})\leq 2^{-(T^{*}_{\text{class}}-T)}=\frac{2^{T}}{\binom{N}{K }}, \tag{5}\] sometimes referred to as the counting bound. It may not be _a priori_ obvious how the difficulty of the tropical decoding problem with success criterion (4) compares with the corresponding classical problem. In the tropical setting, we receive more information from each test through the more diverse test outcomes, which suggests the problem could be easier; but we also need to recover more information (to find the levels \(\mathbf{U}\)), which suggests the problem could be harder. Nonetheless, if for given parameters any tropical algorithm can demonstrate performance exceeding the classical counting bound (5) then we can be sure that the corresponding tropical problem is easier than its classical counterpart. By closely mimicking the proof of the classical counting bound (5) given in [5] we can prove its tropical counterpart. 
**Theorem 3.2**.: _Write \(T^{*}_{\mathrm{trop}}:=\log_{d+1}\binom{N}{\mathbf{K}}\), where_ \[\binom{N}{\mathbf{K}}=\binom{N}{K_{1},K_{2},\ldots,K_{d},N-K}=\frac{N!}{K_{1}!K_{2}! \cdots K_{d}!(N-K)!}\] _is the multinomial coefficient. Then_ \[\mathbb{P}(\mathrm{suc})\leq(d+1)^{-(T^{*}_{\mathrm{trop}}-T)}=\frac{(d+1)^{T} }{\binom{N}{\mathbf{K}}}. \tag{6}\] Proof.: See Appendix A. Writing \(\binom{K}{\mathbf{K}}=K!/(K_{1}!K_{2}!\ldots K_{d}!)\) and \(H(\mathbf{\theta})=-\sum_{i=1}^{d}\theta_{i}\log_{2}(\theta_{i})\), we expand \[T^{*}_{\mathrm{trop}}=\log_{d+1}\binom{N}{K}+\log_{d+1}\binom{K}{\mathbf{K}}\simeq \frac{T^{*}_{\mathrm{class}}}{\log_{2}(d+1)}+K\frac{H(\mathbf{\theta})}{\log_{2}(d +1)}. \tag{7}\] Compared with the classical case, the scaling factor \(1/\log_{2}(d+1)<1\) on the first term of (7) represents the fact that we potentially gain more information through each test, while the second additive term represents the extra information we are required to recover. ### Tropical COMP We now describe the tropical COMP algorithm, which extends the classical COMP algorithm described in [6] (see also [7]) - although the idea of the algorithm dates back at least to the work of Kautz and Singleton [17]. We first describe the classical COMP algorithm, which simply declares any item that appears in a negative test as non-defective. All other items are declared defective. In the notation of this paper, classical COMP can be described in the following way. For each item \(i\) with \(\mu_{i}=\infty\), we set \(\widehat{U}_{i}=\infty\); otherwise, \(\mu_{i}=1\) and we set \(\widehat{U}_{i}=1\). In other words, we set \(\widehat{U}_{i}=\mu_{i}\) for each item \(i\). The same rule \(\widehat{U}_{i}=\mu_{i}\) can also be used in tropical group testing. This is what we call the tropical COMP algorithm. ``` Input: Test design matrix \(\mathbf{x}\) and vector of test outcomes \(\mathbf{Y}\) Output: Estimated vector of defectivity levels \(\widehat{\mathbf{U}}\) for each item \(i\)do set \(\widehat{U}_{i}=\mu_{i}\); ``` **Algorithm 1**Tropical COMP algorithm While both classical and tropical COMP mark items appearing in negative tests as non-defective, the tropical COMP algorithm further classifies items into estimated defectivity levels. Note that the two algorithms operate identically when \(d=1\), and have some analogous properties in general. To aid terminology, we first define the notion of unexplained tests in this setting. **Definition 3.3**.: Fix a test matrix \(\mathbf{x}\) and an estimate \(\widehat{\mathbf{U}}\) of \(\mathbf{U}\). Write \[\widehat{Y}_{t}=\min_{i}\{\widehat{U}_{i}:x_{ti}=1\}\] to be the outcome of test \(t\) using matrix \(\mathbf{x}\) if the true defectivity vector were equal to \(\widehat{\mathbf{U}}\). We say that test \(t\) is _unexplained_ by \(\widehat{\mathbf{U}}\) if \(\widehat{Y}_{t}\neq Y_{t}\), where \(Y_{t}\) is the actual test outcome, or _explained_ if \(\widehat{Y}_{t}=Y_{t}\). We call an estimate vector \(\widehat{\mathbf{U}}\) a _satisfying vector_ if it explains all \(T\) tests. The terminology'satisfying vector' here is the tropical group testing equivalent of the classical group testing notion of a satisfying set [2, 4]. For classical COMP, the estimate given is a satisfying set [4, Lemma 2.3]) - indeed, the largest satisfying set. We have a similar result for tropical COMP. 
**Lemma 3.4**.: _The estimate \(\widehat{\mathbf{U}}^{\mathrm{COMP}}\) given by tropical COMP is a satisfying vector._ _Further, \(\widehat{\mathbf{U}}^{\mathrm{COMP}}\) is the least satisfying vector, in that if \(\mathbf{V}\in\mathcal{D}^{N}\) is also a satisfying vector, then \(U_{i}^{\mathrm{COMP}}\leq V_{i}\) for all items \(i\)._ Proof.: For the first part, take an arbitrary test \(t\) with outcome \(Y_{t}\). All items \(i\) included in this test have \(U_{i}\geq\mu_{i}\geq Y_{t}\). Further, there must be an item \(j\) with \(U_{j}=Y_{t}\), otherwise the test outcome would be greater than \(Y_{t}\). For that item, \(\mu_{j}=Y_{t}\). Hence, \[\widehat{Y}_{t}=\min_{i}\{\mu_{i}:x_{ti}=1\}=\mu_{j}=U_{j}=\min_{i}\{U_{i}:x_{ ti}=1\}=Y_{t},\] and the test is explained. Since the test \(t\) was arbitrary, we have \(\widehat{\mathbf{Y}}=\mathbf{Y}\), and hence \(\widehat{\mathbf{U}}^{\mathrm{COMP}}\) explains all the tests. For the second part, note that any satisfying vector \(\mathbf{V}\) must have \(V_{i}\geq\mu_{i}=U_{i}^{\mathrm{COMP}}\) for all \(i\). To see this, consider a vector \(\mathbf{V}\) and item \(j\) with \(V_{j}<\mu_{j}\). Then let \(t\) be a test containing item \(j\) for which \(Y_{t}=\mu_{j}\). There must be at least one such test, by the definition of \(\mu_{j}\), unless \(j\) is never tested. If \(j\) is never tested, then by assumption \(V_{j}\in\mathcal{D}\) has \(V_{j}\geq 1=\mu_{j}\). For this test \(t\), \[\min_{i}\{V_{i}:x_{ti}=1\}\geq\mu_{j}>V_{j},\] so \(\mathbf{V}\) is not satisfying. We know that classical COMP never makes false negative errors. The same is true for tropical COMP - recall that we use this terminology to refer to an error of the form \(\widehat{U}_{i}>U_{i}\). **Lemma 3.5**.: _Tropical COMP never makes false negative errors._ Proof.: This follows directly from Lemma 2.3, which tells us that \(U_{i}\geq\mu_{i}\), where \(\mu_{i}\) is the tropical COMP estimate. For tropical COMP, the success criterion given by (4) to recover the whole vector \(\mathbf{U}\) is not equivalent to the success criterion of merely recovering the defective set \(\mathcal{K}\). It is true that if any algorithm correctly recovers \(\mathbf{U}\), so that \(\widehat{\mathbf{U}}=\mathbf{U}\), then it also recovers \(\mathcal{K}\) as \(\widehat{\mathcal{K}}=\{i:\widehat{U}_{i}<\infty\}=\mathcal{K}\). But the following example shows that for tropical COMP the converse does not hold; that is, just because COMP fails to recover \(\mathbf{U}\), that does not necessarily mean it also fails to recover \(\mathcal{K}\): **Example 3.6**.: Suppose we have two items, with true defectivity levels \(\mathbf{U}=(1,2)\). Suppose further that we run just one test, which contains both items, so \(\mathbf{x}=(1,1)\) and \(\mathbf{Y}=(1).\) Then both items are in just one test with outcome \(Y_{1}=1\), so have \(\mu_{1}=\mu_{2}=1\). Tropical COMP therefore incorrectly estimates \(\widehat{\mathbf{U}}=(1,1)\neq\mathbf{U}\). However, it does succeed in recovering the defective set \(\widehat{\mathcal{K}}=\mathcal{K}=\{1,2\}\). Despite this, we show in Section 5 that tropical COMP asymptotically requires the same number of tests to recover \(\mathbf{U}\) as classical COMP does to recover \(\mathcal{K}\). ### Tropical DD We now describe the tropical DD algorithm. This extends the classical DD algorithm introduced in [2], which works in three steps: 1. Every item appearing in a negative test is non-defective. (All other items are 'possibly defective'.) 2. 
If a positive test contains a single possibly defective item, that item is defective. 3. All remaining items are assumed non-defective. Tropical DD works the same way, except in step 2, it takes account of the different levels of defectivity in the tropical testing. Recalling that a PD(\(r\)) item is one with \(\mu_{i}=r\), the tropical DD algorithm is as follows: ``` Input: Test design matrix \(\mathbf{x}\) and vector of test outcomes \(\mathbf{Y}\) Output: Estimated vector of defectivity levels \(\widehat{\mathbf{U}}\) for each item \(i\) with \(\mu_{i}=\infty\)do set \(\widehat{U}_{i}=\infty\); for each test \(t\) with \(Y_{t}=r<\infty\)do if there exists only one \(\text{PD}(r)\) item \(i\) in test \(t\)then set \(\widehat{U}_{i}=r\); end Declare all remaining unclassified items to have \(\widehat{U}_{i}=\infty\); ``` **Algorithm 2**Tropical DD algorithm To understand why this algorithm works, consider a test \(t\) with outcome \(Y_{t}=r\). Observe that (by Definitions 2.1 and 2.2 respectively): 1. test \(t\) cannot contain any items \(i\) with \(U_{i}<r\), and must contain at least one'special item' \(j\) with \(U_{j}=r\); 2. every item \(i\) appearing in test \(t\) has \(\mu_{i}\geq r\) and so is either \(\text{PD}(r)\) or \(\text{DND}(r)\). Suppose all but one item \(j\) in test \(t\) are \(\text{DND}(r)\) (i.e. have \(\mu_{i}>r\)), so none of those other items are in \(\text{PD}(r)\). Then we know (by Lemma 2.3) that each of those items has \(U_{i}\geq\mu_{i}>r\), and cannot be a special item. Hence the remaining item \(j\) must be the special item that we seek. In other words, the sole \(\text{PD}(r)\) item in the test is marked as definitely defective at level \(r\). This mirrors the classical case where if there is a single PD item in a test, it is marked as definitely defective. We can think of the problem of finding \(\mathcal{K}\) in the classical case as now being split into sub-problems of finding \(\mathcal{K}_{1},\ldots,\mathcal{K}_{d}\) in the tropical case. It is helpful to think of moving upwards through the rows in the block formulation of Figure 1: 1. By examining the tests with outcome \(\infty\), we can identify \(H_{\infty}\) non-defective items and remove them from consideration for tests of outcome \(r<\infty\). 2. In general, for \(r=d,d-1,\ldots,1\), by examining all the tests with outcome \(r\), we hope to find all the defective items \(i(r,1),\ldots,i(r,K_{r})\) and to find the \(H_{r}\) non-defective items that are in \(\text{PD}(r)\) and remove them from consideration for tests of outcome lower than \(r\). We note that the operation of classical DD is the same as the operation of tropical DD when \(d=1\). We know that classical DD never makes false positive errors [4, Lemma 2.2]. The same is true for tropical DD: **Lemma 3.7**.: _Tropical DD never makes false positive errors. Indeed the only errors it can make is wrongly declaring a defective items of some finite level \(U_{i}=r\) to be non-defective \(\widehat{U}_{i}=\infty\)._ Proof.: The first step finds some non-defective items from negative tests, and so is error-free. The second step identifies the sole defective item that can explain the outcome of the test it is in. It is thus also error-free. The final step is the only point at which errors can be made; specifically, false negative errors where a defective item is marked non-defective can occur. For tropical DD, the success criteria of recovering the vector \(\mathbf{U}\) and recovering the defective set \(\mathcal{K}\) are equivalent. 
We know that if an algorithm recovers \(\mathbf{U}\), then it recovers \(\mathcal{K}\). To prove equivalence of the success criteria, it suffices to show that if tropical DD fails to recover \(\mathbf{U}\), then it fails to recover \(\mathcal{K}\). This is done in the following paragraph. Suppose that tropical DD fails to recover \(\mathbf{U}\). Then by Lemma 3.7, the only errors that could have been made are false negative errors where a defective item is wrongly marked non-defective. Hence, tropical DD also fails to recover \(\mathcal{K}\). ### Tropical SCOMP We now describe the tropical SCOMP algorithm, which extends the classical SCOMP algorithm introduced in [2]. Classical SCOMP starts with the estimate given by the DD algorithm. It then greedily adds items to the estimated defective set \(\hat{\mathcal{K}}\) until all tests are explained. Similarly, the tropical SCOMP algorithm starts with the estimate given by tropical DD. It then greedily adds items to the sets \(\hat{\mathcal{K}}_{r}\), for each \(r\) such that there are unexplained tests of outcome \(r\). This is done until all tests are explained. ``` Input: Test design matrix, \(\mathbf{x}\), and vector of test outcomes \(\mathbf{Y}\) Output: Estimated vector of defectivity levels, \(\widehat{\mathbf{U}}\) Initialize \(\widehat{\mathbf{U}}\) as the estimate \(\widehat{\mathbf{U}}_{\text{DD}}\) of \(\mathbf{U}\) produced by the DD algorithm; while unexplained tests exist do Choose a test outcome \(r\) from the unexplained tests; Retrieve all the tests with outcome \(r\); Find the \(\text{PD}(r)\) item \(i\) that occurs the most times in those tests (ties can be broken arbitrarily); Set \(\widehat{U}_{i}=r\) and update the list of unexplained tests; end ``` **Algorithm 3** Tropical SCOMP algorithm Note that tropical SCOMP attempts to solve the sub-problems of finding \(\mathcal{K}_{1},\ldots,\mathcal{K}_{d}\) that are not solved by tropical DD. The action of classical SCOMP is the same as that of tropical SCOMP when \(d=1\). _Remark 3.8_.: If tropical DD succeeds, then so does tropical SCOMP. This is because tropical SCOMP starts with the estimate produced by tropical DD. If tropical DD succeeds, then no tests are unexplained and tropical SCOMP also succeeds (cf. [4, Theorem 2.5]). We show that the success criteria of recovering \(\mathbf{U}\) and recovering \(\mathcal{K}\) are equivalent for tropical SCOMP. Similar to the case of tropical DD, it suffices to show that if tropical SCOMP fails to recover \(\mathbf{U}\), then it fails to recover \(\mathcal{K}\). This is done in the following paragraph. Suppose that tropical SCOMP fails to recover \(\mathbf{U}\). Then necessarily, tropical DD also fails to recover \(\mathbf{U}\), so there exists an item \(i\in\mathcal{K}\) such that there are no tests in which it is the only \(\mathrm{PD}(\mu_{i})\) item. Since tropical SCOMP fails to recover \(\mathbf{U}\), at least one such item \(i\) was not chosen to explain the test outcomes of any of the tests that it is in and is marked as non-defective. Hence, \(\widehat{\mathcal{K}}\neq\mathcal{K}\). ### Comparison of tropical algorithms Table 1 summarises the features of the tropical algorithms, while comparing them to the classical algorithms (cf. [4, Table 2.1]). 
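Before Table 1 and the worked example below, here is a minimal sketch (not the authors' code) of tropical COMP and tropical DD as just described; tropical SCOMP's greedy tie-breaking step is omitted. Applied to the design of Example 3.9 below, it reproduces the estimates derived there by hand.

```python
import numpy as np

INF = np.inf  # defectivity level "infinity": non-defective items / negative tests

def tropical_outcomes(x, u):
    """Tropical test outcomes Y_t = min{U_i : x_ti = 1} (Definition 2.1)."""
    return np.array([np.min(u[row == 1]) if row.any() else INF for row in x])

def tropical_comp(x, y):
    """Tropical COMP: set U_hat_i = mu_i, the largest outcome of any test containing i."""
    N = x.shape[1]
    mu = np.empty(N)
    for i in range(N):
        outcomes = y[x[:, i] == 1]
        mu[i] = np.max(outcomes) if outcomes.size else 1  # untested items get mu_i = 1
    return mu

def tropical_dd(x, y):
    """Tropical DD: if a test of outcome r contains a single PD(r) item, mark it level r."""
    mu = tropical_comp(x, y)
    u_hat = np.full(x.shape[1], INF)  # items with mu_i = infinity stay non-defective
    for t, r in enumerate(y):
        if r == INF:
            continue
        pd_items = [i for i in np.flatnonzero(x[t]) if mu[i] == r]
        if len(pd_items) == 1:        # the sole possible defective must explain the test
            u_hat[pd_items[0]] = r
    return u_hat

# the design and outcomes of Example 3.9 (items and tests indexed from 0 here)
x = np.array([[1, 0, 0, 0, 0, 0, 0],
              [1, 0, 1, 0, 0, 0, 1],
              [0, 1, 0, 1, 1, 0, 0],
              [0, 1, 0, 0, 1, 1, 0],
              [1, 0, 0, 0, 1, 0, 0]])
y = np.array([INF, 37, INF, 29, INF])
print(tropical_comp(x, y))  # [inf inf 37 inf inf 29 37]
print(tropical_dd(x, y))    # [inf inf inf inf inf 29 inf]
```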
\begin{table}
\begin{tabular}{l c c c}
\hline\hline
 & **satisfying** & **no false +** & **no false \(-\)** \\
\hline
COMP & ✓ & ✗ & ✓ \\
DD & ✗ & ✓ & ✗ \\
SCOMP & ✓ & ✗ & ✗ \\
\hline
tropical COMP & ✓ & ✗ & ✓ \\
tropical DD & ✗ & ✓ & ✗ \\
tropical SCOMP & ✓ & ✗ & ✗ \\
\hline\hline
\end{tabular}
\end{table}
Table 1: Summary of features of algorithms in the classical and tropical case: (i) whether the output estimate \(\widehat{\mathbf{U}}\) is guaranteed to explain all test outcomes; (ii)–(iii) guarantees on false positives or false negatives.

We now present a worked example which illustrates the operation of the various tropical algorithms:

**Example 3.9**.: Suppose we use the test design \(\mathbf{x}\) and receive the outcomes \(\mathbf{Y}\) as follows:
\[\mathbf{x}=\begin{bmatrix}1&0&0&0&0&0&0\\ 1&0&1&0&0&0&1\\ 0&1&0&1&1&0&0\\ 0&1&0&0&1&1&0\\ 1&0&0&0&1&0&0\end{bmatrix}\qquad\mathbf{Y}=\begin{bmatrix}\infty\\ 37\\ \infty\\ 29\\ \infty\end{bmatrix}.\]
It is convenient to first calculate \(\mathbf{\mu}\). For example, item 1 occurs in tests \(1,2,5\). We then deduce
\[\mu_{1}=\max_{t\in\{1,2,5\}}Y_{t}=\max\{\infty,37,\infty\}=\infty.\]
Proceeding similarly for the other items, we obtain
\[\mathbf{\mu}=\big(\infty,\infty,37,\infty,\infty,29,37\big).\]

**Tropical COMP:** We set \(\widehat{\mathbf{U}}=\mathbf{\mu}\), obtaining the following:
\[\widehat{\mathbf{U}}^{\rm COMP}=\big(\infty,\infty,37,\infty,\infty,29,37\big).\]

**Tropical DD:** In the first step, we find the items with \(\mu_{i}=\infty\). These are items \(1,2,4\) and \(5\). We declare these to be non-defective, so \(\widehat{U}_{1}^{\rm DD}=\widehat{U}_{2}^{\rm DD}=\widehat{U}_{4}^{\rm DD}=\widehat{U}_{5}^{\rm DD}=\infty\). In the second step, we check each positive test \(t\) and look to see if it contains a single \({\rm PD}(Y_{t})\) item. For test 2, there are two \({\rm PD}(Y_{2})={\rm PD}(37)\) items in the test, items 3 and 7, so DD does nothing. For test 4, items \(2,5\) and \(6\) appear, but only item 6 is a \({\rm PD}(Y_{4})={\rm PD}(29)\) item. Hence, the tropical DD algorithm sets \(\widehat{U}_{6}^{\rm DD}=29\). Finally, in the third step, items 3 and 7, which have not yet been classified, get assigned a defectivity level of \(\widehat{U}_{3}^{\rm DD}=\widehat{U}_{7}^{\rm DD}=\infty\). Hence, the output of the tropical DD algorithm is:
\[\widehat{\mathbf{U}}_{\rm DD}=\big(\infty,\infty,\infty,\infty,\infty,29,\infty\big).\]

**Tropical SCOMP:** The algorithm initializes with the tropical DD estimate \(\widehat{\mathbf{U}}=\widehat{\mathbf{U}}_{\rm DD}\). The corresponding outcome would be (written, via the transpose, as a row vector)
\[\widehat{\mathbf{Y}}=(\infty,\infty,\infty,29,\infty)^{\top},\]
where \(\widehat{Y}_{2}=\infty\neq 37=Y_{2}\). Hence, test 2 is the only unexplained test. We retrieve the \({\rm PD}(37)\) items in test 2. These are items 3 and 7. Because these items both appear in the same number of tests with outcome 37, namely one, the tropical SCOMP algorithm chooses between them arbitrarily - let's say it chooses item 7 - and assigns the defectivity level of \(\widehat{U}_{7}^{\mathrm{SCOMP}}=37\) to it. Now no tests remain unexplained, and the algorithm terminates. Hence the algorithm returns
\[\widehat{\mathbf{U}}^{\mathrm{SCOMP}}=\big(\infty,\infty,\infty,\infty,\infty,29,37\big).\]

## 4 Simulation results

In this section, we present some simulation results.
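As a rough indication of how experiments of this kind might be set up, the following Python sketch (our own illustration, not the authors' code) draws a Bernoulli design, computes tropical outcomes as the minimum level present in each test, and estimates the success probability of the tropical DD sketch given after Algorithm 2; the parameter values in the comment simply echo those quoted in the figure captions below.

```python
import math, random

def bernoulli_design(n, T, p, levels):
    """Bernoulli(p) design with tropical outcomes: Y_t is the minimum level in test t."""
    tests = [{i for i in range(n) if random.random() < p} for _ in range(T)]
    outcomes = [min((levels[i] for i in test), default=math.inf) for test in tests]
    return tests, outcomes

def dd_success_rate(n, T, p, K_vec, trials=1000):
    """Monte Carlo estimate of P(tropical DD recovers the full level vector U)."""
    hits = 0
    for _ in range(trials):
        levels = [math.inf] * n
        defectives = random.sample(range(n), sum(K_vec))
        pos = 0
        for r, K_r in enumerate(K_vec, start=1):   # K_r items at level r
            for i in defectives[pos:pos + K_r]:
                levels[i] = r
            pos += K_r
        tests, outcomes = bernoulli_design(n, T, p, levels)
        if tropical_dd(tests, outcomes, n) == levels:
            hits += 1
    return hits / trials

# e.g. parameters matching Figure 2: N = 500, K = 10, p = 0.1, K = (2, 2, 2, 2, 2)
# print(dd_success_rate(500, 150, 0.1, [2, 2, 2, 2, 2], trials=200))
```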
We empirically compare the performance of the tropical and classical algorithms, and investigate how changing the probability \(p\) and the sequence \(\mathbf{K}=(K_{1},\ldots,K_{d})\) affects their performance. We also investigate the effect of using a combinatorial model with random defectivity levels for defective items, as opposed to the model with fixed \(K_{r}\) introduced in Section 2. Finally, we compare the Bernoulli design and the near-constant column weight design, described in [4, Section 2.7].

Figure 2 shows the performance of the tropical algorithms, relative to the performance of the classical algorithms and to the counting bounds (5) and (6).

Figure 2: Empirical performance of the classical COMP, DD and SCOMP algorithms together with their tropical counterparts, through simulation with a Bernoulli design. For comparison, we plot the classical and tropical counting bounds of (5) and (6). The parameters chosen are \(N=500,K=10,p=0.1,\mathbf{K}=(2,2,2,2,2)\). Each point is obtained through \(10^{4}\) simulations.

Figure 2 shows for the chosen set of parameters that the tropical COMP algorithm performs almost identically to its classical counterpart (the lines are so close that they are difficult to distinguish), and the tropical DD and SCOMP algorithms perform better than their classical counterparts. We also notice that in this case tropical SCOMP beats the classical counting bound (5) for small values of \(T\), showing that the tropical model can allow genuine performance gains over even adaptive classical group testing algorithms.

Figure 3 shows how the performance of the tropical algorithms varies with \(p\), for fixed \(N\), \(T\) and \(\mathbf{K}\). Figure 3 shows that the performance of tropical DD and of tropical SCOMP has a relatively wide plateau near its peak, indicating some robustness to misspecification of \(p\), and showing that in general the choice \(p=1/K\) is close to optimal for each algorithm.

Figure 3: Simulation of the tropical COMP, DD and SCOMP algorithms with a Bernoulli design to investigate the effect of changing the parameter \(p\) for the Bernoulli design. The parameters are \(N=500,K=10,T=125\) and \(\mathbf{K}=(2,2,2,2,2)\). Each point is obtained through \(10^{4}\) simulations.

Figure 4 shows how the performance of the tropical algorithms varies as \(\mathbf{K}\) is varied, for fixed \(N,K,T\) and \(d\). We note that there are \(\binom{K-1}{d-1}\) possible vectors \(\mathbf{K}\) that sum to \(K\) while having each \(K_{i}>0\). Also, a \(d\)-dimensional plot is required to visualize the performance of the algorithms for all the \(\mathbf{K}\) simultaneously. Hence, for simplicity of exposition, we only present the case \(d=2\).

Figure 4 shows that changing \(\mathbf{K}\) has very little effect on the performance of tropical COMP. This is quantified later in Section 5, where we find that, for tropical COMP, the error contribution of the \(K\) defective items is small compared to that of the \(N-K\) non-defective items. Figure 4 also shows that the performance of tropical DD and tropical SCOMP improves as \(K_{1}\) increases, until reaching a peak.

Figure 5 shows the effect on the performance of the tropical algorithms when the defective set \(\mathcal{K}\) is chosen with a combinatorial prior, and the defectivity level for each defective item is drawn uniformly and independently from \(\{1,\ldots,d\}\).
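For completeness, the random-levels model just described is easy to sample; a minimal sketch (names and encoding are our own) is:

```python
import math, random

def random_level_instance(n, K, d):
    """Combinatorial prior with i.i.d. uniform levels, as used for Figure 5 (a sketch)."""
    levels = [math.inf] * n                    # math.inf encodes 'non-defective'
    for i in random.sample(range(n), K):       # choose the defective set uniformly
        levels[i] = random.randint(1, d)       # each defective gets a uniform level in {1,...,d}
    return levels
```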
We note that that the performance of tropical DD and tropical SCOMP improves as \(d\) increases, until reaching a peak, while the performance of tropical COMP does not change. Finally, Figure 6 compares the performance of the tropical algorithms with the Bernoulli design and with a near-constant column weight design. We see that the performance mirrors the classical case (cf. [4, Figure 2.3]). ## 5 Analysis of tropical COMP algorithm In this section, we give an analysis of the performance of tropical COMP. The main result of this section (Theorem 5.1) shows that the number of tests needed to ensure a vanishing error probability using a Bernoulli design is asymptotically identical to that needed in the classical case. Figure 4: Simulation of the tropical COMP, DD and SCOMP algorithms with a Bernoulli design to investigate the effect of changing \(\mathbf{K}\) on the performance. The parameters are \(N=1000,K=20,T=400,p=0.1\) and \(\mathbf{K}=(K_{1},20-K_{1})\). Each point is obtained through \(10^{4}\) simulations. Figure 5: Simulation of the tropical COMP, DD and SCOMP algorithms to investigate their performance with a Bernoulli design when the set of defective items, \(\mathcal{K}\), is chosen with a combinatorial prior, and the defectivity level for each defective item is drawn uniformly and independently from \(\{1,\ldots,d\}\). The parameters are \(N=500,K=10,T=120,p=0.1\). Each point is obtained through \(10^{4}\) simulations. Figure 6: Simulation of the tropical COMP, DD and SCOMP algorithms to investigate their performance with a Bernoulli design as well as with a near-constant column weight design. The parameters are \(N=500,K=10,p=0.1,\nu=\ln 2,L=\lfloor\nu T/K\rfloor,\mathbf{K}=(2,2,2,2,2)\). Each point is obtained through \(10^{4}\) simulations. ### Achievability result Our main result for tropical COMP is the following (cf. [4, Eq. (2.9)]): **Theorem 5.1**.: _Let \(\delta>0\). Let \(p=\nu/K\), for \(0<\nu<K\). Taking_ \[T\geq(1+\delta)\,\frac{\mathrm{e}^{\nu}}{\nu}K\ln N\] _ensures that the tropical COMP error probability \(\mathbb{P}(\mathrm{err})\) is asymptotically at most \(N^{-\delta}\)._ _Remark 5.2_.: Note that \(T=(1+\delta)\frac{\mathrm{e}^{\nu}}{\nu}K\ln N\) is minimised over \(\nu\) when \(\nu=1\). This corresponds to choosing the same optimal \(p\) as in the classical case. We note that tropical COMP, similar to classical COMP, is reasonably robust to misspecification of \(p\) (cf. [4, Remark 2.3]). To reach the result of Theorem 5.1, we find a bound on the error probability of tropical COMP using a Bernoulli design. This bound, given below, extends the corresponding bound by Chan _et al._ for classical COMP, given in [6, Eq. (8)], to the tropical setting. **Lemma 5.3**.: _For a Bernoulli test design with parameter \(p\), we can bound the error probability of Tropical COMP from above by_ \[\mathbb{P}(\mathrm{err};\mathrm{COMP},T)\leq\sum_{r\in\mathcal{D}}K_{r}(1-p(1- p)^{\sum_{i<r}K_{i}})^{T}. \tag{8}\] Proof.: To obtain an upper bound on the error probability of tropical COMP, we consider each item in turn, using the fact that the union bound \[\mathbb{P}(\mathrm{err})=\mathbb{P}\left(\bigcup_{i}\{\widehat{U}_{i}\neq U_{ i}\}\right)\leq\sum_{i}\mathbb{P}(\widehat{U}_{i}\neq U_{i}), \tag{9}\] tells us that we only need to control the individual probabilities that an item is misclassified. Any given item \(i\) is misclassified only if it is intruding. This happens if every test which it appears in contains at least one of the \(\sum_{i<r}K_{i}\) items of lower level. 
For a given test, the chance that it contains \(i\) but doesn't contain such an item is \(p(1-p)^{\sum_{i<r}K_{i}}\). Hence, using independence between tests, we have that
\[\mathbb{P}(\widehat{U}_{i}\neq U_{i})\leq(1-p(1-p)^{\sum_{i<r}K_{i}})^{T}. \tag{10}\]
The result follows on substituting (10) in (9).

We can now prove the main result.

Proof of Theorem 5.1.: This proof is adapted from the one given in the classical case by Chan _et al._ in [6]. Let \(T=\beta K\ln N\). The key is to observe for a given \(T\) and \(p\) that the function \(f(\ell)=(1-p(1-p)^{\ell})^{T}\) is increasing in \(\ell\). Hence we can write (8) as
\[\mathbb{P}(\mathrm{err})\leq\sum_{r\in\mathcal{D}}K_{r}f\left(\sum_{i<r}K_{i}\right)\leq\sum_{r\in\mathcal{D}}K_{r}f\left(K\right)=Nf(K). \tag{11}\]
Then, setting \(p=\nu/K\) in Lemma 5.3, we have
\[\mathbb{P}(\mathrm{err}) \leq N\exp(-Tp(1-p)^{K})\]
\[=N\exp(-\beta\nu(1-\nu/K)^{K}\ln N)\]
\[\simeq N\exp(-\beta\nu\mathrm{e}^{-\nu}\ln N)\qquad\text{as }K\to\infty\]
\[=N^{1-\beta\nu\mathrm{e}^{-\nu}}.\]
Hence, taking \(\beta=(1+\delta)\frac{\mathrm{e}^{\nu}}{\nu}\) ensures that \(\mathbb{P}(\mathrm{err})\) is asymptotically at most \(N^{-\delta}\).

### Contributions to the error probability

Figure 7 illustrates the contribution of each summand to the bound (8) for a range of values of \(T\). It is clear that the dominant term contributing to the error bound is \(r=\infty\) (that is, the dominant error event is for a non-defective item to be wrongly classified as defective, and the defective items are more typically correctly classified). Indeed, (11) implies that the proportion of the bound (8) arising from the \(r=\infty\) term is at least \(1-K/N\), since this result gives
\[\frac{(N-K)f(K)}{\sum_{r\in\mathcal{D}}K_{r}f\left(\sum_{i<r}K_{i}\right)}\geq\frac{(N-K)f(K)}{Nf(K)}=1-\frac{K}{N}.\]

Figure 7: Plot illustrating the variation of \(\min\{1,K_{r}f(\sum_{i<r}K_{i})\}\) with \(T\), for each \(r\in\mathcal{D}\). The parameters chosen are \(N=500,K=10,p=0.1\) and \(\mathbf{K}=(2,2,2,2,2)\).

In fact in the 'uniform' case where \(\mathbf{K}=(K/d,\ldots,K/d)\), the contributions to the bound
\[K_{r}f\left(\sum_{i<r}K_{i}\right)\simeq K_{r}\exp\left(-Tp(1-p)^{(r-1)K/d}\right)\simeq\frac{K}{d}\exp\left(-Tp\mathrm{e}^{-p(r-1)K/d}\right) \tag{12}\]
decay doubly-exponentially as \(r\) gets smaller, meaning that the errors are overwhelmingly likely to arise from wrongly classifying items with high \(r\).

### Error probability for different defectivity sequences

For a fixed number of defective items \(K\), it would be interesting to know what defectivity sequences \(\mathbf{K}\) make the tropical group testing problem hardest or easiest. This is explored via simulation in Figure 4, but we would also like to understand which sequences \(\mathbf{K}\) lead to the largest and smallest error probability for tropical COMP. Unfortunately, we cannot directly control the error probability in this way. However, we can use the COMP error bound (8) to induce a partial order on sequences \(\mathbf{K}\) with the same sum. This will show that the error bound is smallest in the classical case where all items have the same level and largest in the case where each item has distinct levels.

Given a sequence \(\mathbf{K}=(K_{1},\ldots,K_{d})\), we can sort the items in increasing order of level, and for each item \(k\) write \(L_{k}\) for the number of items with a strictly lower level.
For example, with \(K=8\), the sequence \(\mathbf{K}^{(1)}=(2,2,2,2)\) induces the sequence \(\mathbf{L}^{(1)}=(0,0,2,2,4,4,6,6)\), while the sequence \(\mathbf{K}^{(2)}=(1,1,1,\ldots,1)\) induces the sequence \(\mathbf{L}^{(2)}=(0,1,2,3,4,5,6,7)\). We can compare two sequences \(\mathbf{K}^{(i)}=\left(K_{1}^{(i)},\ldots,K_{d}^{(i)}\right)\), for \(i=1,2\), and define a partial order \(\mathbf{K}^{(1)}\preceq\mathbf{K}^{(2)}\) if the corresponding sequences \(L_{k}^{(1)}\leq L_{k}^{(2)}\) for all \(k\). Hence in the example above, \(\mathbf{K}^{(1)}\preceq\mathbf{K}^{(2)}\). In this partial ordering, for a given \(K\) the minimal sequence will be \(\mathbf{K}=(K)\), giving \(\mathbf{L}=(0,0,\ldots,0)\), and the sequence \(\mathbf{K}=(1,1,\ldots,1)\) as seen above will be maximal. Now, note that the bound on the RHS of (8) respects this partial order. That is, since the \(r=\infty\) term will be the same for all such sequences, we can regard the variable part of the bound (11) as a sum over defective items \[\sum_{k\in\mathcal{K}}f(L_{k}), \tag{13}\] and use the fact that the function \(f(\ell)\) is increasing in \(\ell\) to deduce that: **Lemma 5.4**.: _If \(\mathbf{K}^{(1)}\preceq\mathbf{K}^{(2)}\) then the corresponding error bound (13) is lower for \(\mathbf{K}^{(1)}\) than \(\mathbf{K}^{(2)}\)._ Hence, for fixed \(K\) the error bound (13) is smallest for the minimal sequence \(\mathbf{K}=(K)\) corresponding to the classical case and largest for the sequence \(\mathbf{K}=(1,1,\ldots,1)\) where each defective item has its own unique level. ## 6 Analysis of tropical DD algorithm: achievability In this section we give an analysis of the performance of tropical DD, which extends that given in [2] for the classical \(d=1\) case by taking advantage of the information provided by the more varied test outcomes of the tests. Our main achievability result ensures success with high probability when we have a number of tests above a certain threshold. **Theorem 6.1**.: _For \(\nu>0\), write_ \[\psi_{r}:=\psi_{r}(\nu)=\Big{(}1-\frac{\nu}{K}\Big{)}^{\sum_{t\leq r}K_{t}}\,.\] _Also define_ \[T_{\infty}(\nu):=\frac{1}{\nu\psi_{d}}K\ln\frac{N}{K}\] _and_ \[T_{r}(\nu):=\frac{1}{\nu\psi_{r}}K\ln K_{r}.\] _If we take_ \[T\geq(1+\delta)\max\big{\{}T_{\infty}(\nu),T_{d}(\nu),T_{d-1}(\nu),\ldots,T_{1 }(\nu)\big{\}},\] _tests, then the error probability of the tropical DD algorithm for a Bernoulli design with \(p=\nu/K\) tends to \(0\)._ _Remark 6.2_.: Note that in the uniform case where \(K_{r}=K/d\) for all \(r\), in regimes where \(K=N^{\alpha}\) and \(\alpha>1/2\), the dominant term in Theorem 6.1 is \[T_{d}(\nu)=\frac{1}{\nu(1-\nu/K)^{K}}\,K\ln\frac{K}{d};\] that is, the maximum over \(r\) is achieved at \(r=d\), since \(\psi_{r}\) is decreasing in \(r\) and \(\ln K_{r}\) is constant. Further, in this case, since we are able to choose \(\nu\) to minimise \(T_{\mathrm{fin}}(\nu)\), we can maximise \(\nu(1-\nu/K)^{K}\) by taking \(\nu=K/(K+1)\) or \(p=1/(K+1)\). Note that this does not necessarily mean that we minimise the error probability, since we are minimising the number of tests to control a bound on \(\mathbb{P}(\mathrm{err})\), not \(\mathbb{P}(\mathrm{err})\) itself. In other words, asymptotically we require \(\mathrm{e}K\ln(K/d)\) tests. This means that the tropical performance bound of Theorem 6.1 represents an asymptotic reduction of \(\mathrm{e}K\ln d\) tests over the classical bound of \(\mathrm{e}K\ln K\) tests (see [1, Theorem 1]). 
While this is a second-order asymptotic term compared with the leading order \(\mathrm{e}K\ln K\) term, it may still represent a valuable improvement in problems of finite size. In the next section, we will see a converse result Theorem 7.1, showing that success probability can't be high with a number of tests below a threshold. Further, we show that for certain parameter regimes these two thresholds coincide, showing that we have sharp performance bounds on tropical DD. ### Proof outline To prove Theorem 6.1, we need a general upper bound on the error probability of DD. The key idea is that DD succeeds if and only if each defective item is proven to be such in the second stage of the algorithm. **Definition 6.3**.: For any \(1\leq s\leq K_{r}\), we write \(L_{r,s}\) for the number of tests that contain item \(i(r,s)\), no other defective item \(i(t,u)\) with \(t\leq r\), and also no non-defective \(\mathrm{PD}(r)\) item. A test that counts towards \(L_{r,s}\) is precisely one that discovers \(i(r,s)\) to be defective at level \(r\). So with this definition, we can say that the tropical DD algorithm succeeds if and only if \(L_{r,s}\geq 1\) for all \((r,s)\). Hence we have \[\mathbb{P}(\mathrm{err})=\mathbb{P}\left(\bigcup_{r,s}\{L_{r,s}=0\}\right) \leq\sum_{r=1}^{d}\mathbb{P}\left(\bigcup_{s=1}^{K_{r}}\{L_{r,s}=0\}\right), \tag{14}\] One way we could get \(L_{r,s}=0\) is if there is a large number of potentially intruding non-defectives at this outcome level. Recall from Definition 2.4.3 that this number of intruding non-defectives is \(G_{r}\). However, provided we use sufficiently many tests, \(G_{r}\) is unlikely to be large. Hence, we will condition on \(G_{r}\) being no larger than some threshold level \(g_{r}^{*}\), to be chosen later. So for each level \(r\), the summand in (14) can be bound as \[\mathbb{P}\left(\bigcup_{s=1}^{K_{r}}\{L_{r,s}=0\}\right)=\mathbb{P} \left(\bigcup_{s=1}^{K_{r}}\{L_{r,s}=0\}\left|\ G_{r}\leq g_{r}^{*} \right.\right)\mathbb{P}(G_{r}\leq g_{r}^{*})\] \[\qquad+\mathbb{P}\left(\bigcup_{s=1}^{K_{r}}\{L_{r,s}=0\}\left|\ G_ {r}>g_{r}^{*}\right.\right)\mathbb{P}(G_{r}>g_{r}^{*})\] \[\leq\mathbb{P}\left(\bigcup_{s=1}^{K_{r}}\{L_{r,s}=0\}\left|\ G_ {r}\leq g_{r}^{*}\right.\right)+\mathbb{P}(G_{r}>g_{r}^{*})\] \[\leq K_{r}\,\mathbb{P}\left(L_{r,s}=0\mid G_{r}\leq g_{r}^{*} \right)+\mathbb{P}(G_{r}>g_{r}^{*}), \tag{15}\] where we used the union bound in the last line. We need to show that both terms in (15) are small. The first term being small tells us we're likely to find the level-\(r\) defectives provided \(G_{r}\) is not too big; we will show this happens when \(T\geq(1+\delta)\max\{T_{\infty},T_{r}\}\). The second term being small tells us that \(G_{r}\) is unlikely to be too big; we will show this happens when \(T\) is big; for example, \(T\geq(1+\delta)T_{r}\) will be plenty. In Subsection 6.2 we will analyse the first term \(\mathbb{P}\left(L_{r,s}=0\mid G_{r}\leq g_{r}^{*}\right)\). In Subsection 6.3 we will bound the second term \(\mathbb{P}(G_{r}>g_{r}^{*})\). Then in Subsection 6.4 we will put the pieces together to prove Theorem 6.1. ### Finding defectives We first describe the joint distribution of certain random variables arising in the analysis of DD. We provide additional notation to that used in Section 2. **Definition 6.4**.: We define the following random variables: 1. Write \(M_{\infty}\) for the number of tests which contain no defectives - and so are negative tests with outcome \(\infty\). 2. 
For \(1\leq r\leq d\), write \(M_{r}\) for the total number of positive tests with outcome \(r\). 3. Further, decompose \(M_{r}=\sum_{s=1}^{K_{r}}M_{r,s}+M_{r,+}\) as follows:

1. For \(1\leq s\leq K_{r}\), write \(M_{r,s}\) for the number of tests that contain a single item \(i(r,s)\) at level \(r\) and no other defective item \(i(t,u)\) with \(t\leq r\); note that each such test has outcome \(r\).
2. Write \(M_{r,+}\) for the number of tests which have outcome \(r\) but contain multiple defective items at level \(r\).

Write \(\mathbf{M}\) for the collection of random variables
\[\mathbf{M}=(M_{1,1},M_{1,2},\ldots M_{1,K_{1}},M_{1,+},\ldots,M_{d,1},M_{d,2},\ldots,M_{d,K_{d}},M_{d,+},M_{\infty})\]
(noting this includes the terms in the decompositions of the \(M_{r}\)s, but not the \(M_{r}\)s themselves).

Note that (using Definition 2.4.2) item \(i(r,s)\) is masked if and only if \(M_{r,s}=0\). In particular \(M_{r,s}=0\) means necessarily that \(L_{r,s}=0\), and this is the event we wish to avoid. But first, let us note the joint distribution of \(\mathbf{M}\).

**Lemma 6.5**.: _The random vector \(\mathbf{M}\) is multinomial with parameters \(T\) and \(\mathbf{q}\), where_
\[\mathbf{q}=(q_{1,1},q_{1,2},\ldots q_{1,K_{1}},q_{1,+},\ldots,q_{d,1},q_{d,2},\ldots,q_{d,K_{d}},q_{d,+},q_{\infty})\,.\]
_Here for each \(r\) and \(s\):_
\[q_{\infty} :=(1-p)^{K},\]
\[q_{r,s} :=\prod_{t<r}(1-p)^{K_{t}}\left((1-p)^{K_{r}-1}p\right),\]
\[q_{r} :=\prod_{t<r}(1-p)^{K_{t}}\left(1-(1-p)^{K_{r}}\right),\]
\[q_{r,+} :=q_{r}-K_{r}q_{r,s}.\]
Proof.: First, a test is negative if all defective items are absent, which happens with probability \(q_{\infty}=(1-p)^{K}\). Second, \(q_{r,s}\) is the probability that all items at levels \(t<r\) are absent, that item \(i(r,s)\) is present, and that the \(K_{r}-1\) other items at level \(r\) are absent. Third, \(q_{r}\) is the probability of outcome \(r\), which happens when all items at levels \(t<r\) are absent, and it is not the case that all items at level \(r\) are absent. Fourth, \(q_{r,+}\) is the probability \(q_{r}\) of an outcome \(r\) minus the probabilities of a single level-\(r\) item being the cause.

Although the distribution of the crucial variable \(L_{r,s}\) seems tricky to derive from first principles, it is much easier once we know \(M_{r,s}\) and the number of potentially intruding non-defectives. This is because a test counting towards \(M_{r,s}\) will count also towards \(L_{r,s}\) provided that no non-defectives intrude on the test too.

**Lemma 6.6**.: _The conditional distribution of \(L_{r,s}\), given \(M_{r,s}\) and \(G_{r}\), is_
\[L_{r,s}\mid\{M_{r,s}=m_{r,s},G_{r}=g_{r}\}\sim\mathrm{Bin}(m_{r,s},(1-p)^{g_{r}}). \tag{16}\]
Proof.: There are \(m_{r,s}\) tests which contain item \(i(r,s)\) and no other defective item \(i(t,u)\) with \(t\leq r\). Each of these \(m_{r,s}\) tests independently contributes to \(L_{r,s}\) if and only if none of the \(g_{r}\) potentially intruding non-defective items appear in the test. Because of the Bernoulli design, each of those \(g_{r}\) non-defectives appears in the test with probability \(p\), so the probability none of them appear is \((1-p)^{g_{r}}\), independent over the \(m_{r,s}\) tests.

We can now bound the probability of the undesirable event that \(L_{r,s}=0\).
**Lemma 6.7**.: _Using a Bernoulli design with parameter \(p\), for any \(g_{r}^{*}\), we have the bound_ \[\mathbb{P}(L_{r,s}=0\mid G_{r}\leq g_{r}^{*})\leq\exp\big{(}-q_{r,1}(1-pg_{r}^ {*})T\big{)}, \tag{17}\] _where as in Lemma 6.5 we write \(q_{r,1}=p(1-p)^{\sum_{t\leq r}K_{t}-1}\)._ Proof.: We start by conditioning on equality \(G_{r}=g_{r}\). Noting that by Lemma 6.5\(M_{r,s}\sim\operatorname{Bin}(T,q_{r,1})\) and that \(\mathbb{P}(\operatorname{Bin}(m,q)=0)=(1-q)^{m}\), we can write \[\mathbb{P}(L_{r,s}=0\mid G_{r}=g_{r}) =\sum_{m=0}^{T}\mathbb{P}(M_{r,s}=m)\,\mathbb{P}(L_{r,s}=0\mid G_ {r}=g_{r},M_{r,s}=m)\] \[=\sum_{m=0}^{T}\binom{T}{m}q_{r,1}^{m}(1-q_{r,1})^{T-m}\,(1-(1-p) ^{g_{r}})^{m}\] \[\leq(1-q_{r,1}(1-p)^{g_{r}})^{T}\] \[\leq\exp\left(-q_{r,1}(1-p)^{g_{r}}T\right)\] \[\leq\exp\left(-q_{r,1}(1-pg_{r})T\right). \tag{18}\] From the second to the third line, we used the binomial theorem, and then we used Bernoulli's inequality in the form \((1-p)^{g}\geq 1-pg\). Note that (18) is increasing in \(g_{r}\). Thus we can bound (17) by the worst-case conditioning, where \(G_{r}=g_{r}^{*}\). ### Intruding non-defectives Recall from Definition 2.4.3 that \(G_{r}\) is the number of non-defectives that could intrude into tests with outcome \(r\). Our goal is to bound that tail probability of \(G_{r}\) in (15). **Lemma 6.8**.: _Write_ \[\overline{M}_{r}:=M_{\infty}+M_{d}+M_{d-1}+\cdots+M_{r+1}\] _for the number of tests with outcomes higher than \(r\). Then \(\overline{M}_{r}\) has distribution_ \[\overline{M}_{r}\sim\operatorname{Bin}\left(T,\ \prod_{t\leq r}(1-p)^{K_{t}}\right) \tag{19}\] _Further, the conditional distribution of \(G_{r}\) given \(\boldsymbol{M}\) is_ \[G_{r}\ |\ \{\boldsymbol{M}=\boldsymbol{m}\}\sim\operatorname{Bin}(N-K,(1-p)^{m_{r }^{*}}), \tag{20}\] _where \(m_{r}^{*}=m_{r+1}+\dots+m_{d}+m_{\infty}\)._ Proof.: By standard properties of the multinomial (see [2, Lemma 30]), \[\overline{M}_{r}\sim\operatorname{Bin}\left(T,q_{r+1}+\dots+q_{d}+q_{\infty} \right).\] But \[q_{r+1}+\dots+q_{d}+q_{\infty}=\prod_{t\leq r}(1-p)^{K_{t}}, \tag{21}\] since it forms a collapsing sum. This proves the first distribution. Given \(\overline{M}_{r}=m_{r}^{*}\), each of the \(N-K\) non-defectives will be independently counted in \(G_{r}\) provided they don't appear in any of the \(m_{r}^{*}\) tests with outcomes higher than \(r\). By the Bernoulli design structure, each item is independently counted with probability \((1-p)^{m_{r}^{*}}\). This proves the second distribution. We can calculate the expectation of \(G_{r}\) by conditioning on \(\overline{M}_{r}\). **Lemma 6.9**.: \(\mathbb{E}G_{r}\leq(N-K)\exp(-p\psi_{r}T)\)_._ Proof.: We use the facts that (19) gives that \(\overline{M}_{r}\sim\operatorname{Bin}(T,\psi_{r})\) and Lemma 6.8 gives that \(G_{r}\ |\ \{\overline{M}_{r}=m_{r}^{*}\}\sim\operatorname{Bin}\left(N-K,(1-p)^{m_{r }^{*}}\right)\). Hence we can use the binomial theorem to write \[\mathbb{E}G_{r} =\sum_{m=0}^{T}\mathbb{P}(\overline{M}_{r}=m)\,\mathbb{E}[G_{r}\ |\ \overline{M}_{r}=m]\] \[=\sum_{m=0}^{T}\binom{T}{m}\psi_{r}^{m}(1-\psi_{r})^{T-m}\left(N -K\right)(1-p)^{m}\] \[=(N-K)(1-\psi_{r}+\psi_{r}(1-p))^{T},\] and the result follows. We will choose the threshold \(g_{r}^{*}\) to be just slightly bigger than this expectation; specifically, we take \(g_{r}^{*}=(N-K)\exp(-p\psi_{r}T(1-\epsilon))\) for some \(\epsilon>0\) to be determined later. 
**Lemma 6.10**.: _With \(g_{r}^{*}=(N-K)\exp(-p\psi_{r}T(1-\epsilon))\), we have_ \[\mathbb{P}(G_{r}>g_{r}^{*})\leq\exp(-p\psi_{r}T\epsilon).\] Proof.: This is a simple application of Markov's inequality. Using Lemma 6.9, we get \[\mathbb{P}(G_{r}>g_{r}^{*}) \leq\frac{\mathbb{E}G_{r}}{g_{r}^{*}}\] \[\leq\frac{(N-K)\exp(-p\psi_{r}T)}{(N-K)\exp(-pT\psi_{r}(1-\epsilon ))}\] \[=\exp(-p\psi_{r}T\epsilon).\qed\] ### Completing the proof We are now ready to complete the proof of our main result. Proof of Theorem 6.1.: From (14) and (15), we had got as far as the bound \[\mathbb{P}(\mathrm{err})\leq\sum_{r=1}^{d}\big{[}K_{r}\,\mathbb{P}\left(L_{r, s}=0\mid G_{r}\leq g_{r}^{*}\right)+\mathbb{P}(G_{r}>g_{r}^{*})\big{]}, \tag{22}\] and in Subsection 6.3 we had chosen \(g_{r}^{*}=(N-K)\exp(-p\psi_{r}T(1-\epsilon))\), with \(\epsilon\) still to be fixed. We need to show that, in each summand of (22), both the first and second terms tend to \(0\). We begin with the first term. From Lemma 6.7, we have the bound \[K_{r}\,\mathbb{P}(L_{r,s}=0\mid G_{r}\leq g_{r}^{*}) \leq K_{r}\exp\big{(}-q_{r,1}(1-pg_{r}^{*})T\big{)}\] \[=\exp\left(\ln K_{r}-T\psi_{r}p\frac{1-pg_{r}^{*}}{1-p}\right),\] where we have used that \(q_{r,1}=\psi_{r}p/(1-p)\). The condition \(T\geq(1+\delta)T_{r}\) means that \(T\psi_{r}p\geq(1+\delta)\ln K_{r}\), so we get \[K_{r}\,\mathbb{P}(L_{r,s}=0\mid G_{r}\leq g_{r}^{*})\leq\exp\left(\ln K_{r} \left(1-(1+\delta)\frac{1-pg_{r}^{*}}{1-p}\right)\right).\] This tends to \(0\) so long as \(pg_{r}^{*}\) tends to \(0\), which we will now check. Since \(T\geq(1+\delta)T_{\infty}\) and \(\psi_{r}\geq\psi_{d}\), we know that \(Tp\psi_{r}\geq(1+\delta)\ln(N/K)\). With \(g_{r}^{*}=(N-K)\exp(-p\psi_{r}T(1-\epsilon))\), we therefore have \[pg_{r}^{*}\leq\left(\frac{N}{K}\right)\exp\left(-Tp\psi_{r}(1-\epsilon) \right)\leq\left(\frac{N}{K}\right)^{1-(1-\epsilon)(1+\delta)}.\] This means that \(pg_{r}^{*}\to 0\) is indeed guaranteed by choosing \(\epsilon<\delta/(1+\delta)\). Now the second term. From Lemma 6.10, we have \[\mathbb{P}(G_{r}>g_{r}^{*})\leq\exp(-p\psi_{r}T\epsilon)<\exp(-p\psi_{r}T\delta/( 1+\delta)),\] since we have just chosen \(\epsilon<\delta/(1+\delta)\). The condition \(T\geq(1+\delta)T_{r}\) gives us \(p\psi_{r}T/(1+\delta)=\ln K_{r}\), so this term does indeed tend to \(0\). Since we have shown that all the terms in (22) tend to zero, the proof is complete. ## 7 Converse results Our achievability result Theorem 6.1 shows that tropical DD can succeed with \[T\geq(1+\delta)\max\{T_{\infty}(\nu),T_{d}(\nu),T_{d-1}(\nu),\ldots,T_{1}(\nu )\}\qquad\text{tests}. \tag{23}\] We can use similar ideas to provide a converse result for the tropical DD algorithm. **Theorem 7.1**.: _For a given \(\nu>0\), in the limiting regime where_ \[T\leq(1-\delta)\max\{T_{d}(\nu),T_{d-1}(\nu),\ldots,T_{1}(\nu)\}\] _then the error probability of the tropical DD algorithm for a Bernoulli design with \(p=\nu/K\) tends to \(1\)._ Note that the difference between the achievability and the converse results is the lack of the \(T_{\infty}\) term in the converse. We will prove Theorem 7.1 for tropical DD in Subsection 7.1. In addition, we will show in Subsection 7.2 that this same bound acts as a more general 'algorithm-independent' converse for Bernoulli designs. 
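Before turning to the proofs, we note that the thresholds appearing in Theorems 6.1 and 7.1 are straightforward to evaluate numerically. The sketch below is our own helper (not part of the paper); it simply evaluates \(\psi_{r}\), \(T_{\infty}(\nu)\) and \(T_{r}(\nu)\), so the achievability bound is \((1+\delta)\) times the maximum of all these thresholds, while the converse uses only the maximum of the \(T_{r}(\nu)\).

```python
import math

def tropical_dd_thresholds(N, K_vec, nu):
    """Evaluate T_infinity(nu) and T_1(nu),...,T_d(nu) from Theorem 6.1 (a sketch)."""
    K = sum(K_vec)
    d = len(K_vec)
    def psi(r):
        # psi_r = (1 - nu/K)^(K_1 + ... + K_r)
        return (1.0 - nu / K) ** sum(K_vec[:r])
    T_inf = K * math.log(N / K) / (nu * psi(d))
    T_levels = [K * math.log(K_vec[r - 1]) / (nu * psi(r)) for r in range(1, d + 1)]
    return T_inf, T_levels

# Example with parameters matching the simulations: N = 500, K = (2,2,2,2,2), nu = 1.
T_inf, T_levels = tropical_dd_thresholds(500, [2, 2, 2, 2, 2], 1.0)
```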
### Proof for tropical DD

The key to proving Theorem 7.1 is to observe that tropical DD will definitely fail if any \(M_{r,s}=0\), since that means that item \(i(r,s)\) never appears without at least one other defective item with which it could be confused, so \(L_{r,s}\) is certainly \(0\) too. Thus we start with the bound
\[\mathbb{P}(\mathrm{err})\geq\mathbb{P}\left(\bigcup_{r,s}\{M_{r,s}=0\}\right). \tag{24}\]
By picking out just the defective items at a given level \(r=r^{*}\), we have
\[\mathbb{P}(\mathrm{err})\geq\mathbb{P}\left(\bigcup_{s=1}^{K_{r^{*}}}\{M_{r^{*},s}=0\}\right). \tag{25}\]
As in [2, Eq. (13)], we define the function
\[\phi_{K}(q,T):=\sum_{j=0}^{K}(-1)^{j}\binom{K}{j}(1-jq)^{T}. \tag{26}\]
We will bound the error probability in terms of \(\phi_{K}\) as follows.

**Lemma 7.2**.: _For \(1\leq r^{*}\leq d\), the error probability of tropical DD is bounded below by_
\[\mathbb{P}(\mathrm{err})\geq 1-\phi_{K_{r^{*}}}(q_{r^{*},1},T). \tag{27}\]
Proof.: We follow the general idea from [2]. We can calculate the probability of the complementary event, that no item at level \(r^{*}\) is masked, using the inclusion-exclusion formula
\[\mathbb{P}\left(\bigcap_{s=1}^{K_{r^{*}}}\{M_{r^{*},s}\geq 1\}\right)=\sum_{|S|=0}^{K_{r^{*}}}(-1)^{|S|}\,\mathbb{P}\left(\bigcap_{s\in S}\{M_{r^{*},s}=0\}\right), \tag{28}\]
where the sum is over subsets \(S\) of \(\mathcal{K}_{r^{*}}\). By the multinomial distribution form of Lemma 6.5, we have
\[\mathbb{P}\left(\bigcap_{s\in S}\{M_{r^{*},s}=0\}\right) =\binom{T}{0,0,\ldots,0,T}\left(\prod_{s\in S}q_{r^{*},s}^{0}\right)\left(1-\sum_{s\in S}q_{r^{*},s}\right)^{T}\]
\[=\left(1-\sum_{s\in S}q_{r^{*},s}\right)^{T}\]
\[=(1-|S|q_{r^{*},1})^{T}\,. \tag{29}\]
Substituting (29) into (28) gives
\[\mathbb{P}\left(\bigcap_{s=1}^{K_{r^{*}}}\{M_{r^{*},s}\geq 1\}\right)=\sum_{|S|=0}^{K_{r^{*}}}(-1)^{|S|}\,\left(1-|S|q_{r^{*},1}\right)^{T}.\]
Collecting together the summands according to the value of \(|S|=j\) shows that the right-hand side equals \(\phi_{K_{r^{*}}}(q_{r^{*},1},T)\). Hence \(\mathbb{P}\big(\bigcup_{s=1}^{K_{r^{*}}}\{M_{r^{*},s}=0\}\big)=1-\phi_{K_{r^{*}}}(q_{r^{*},1},T)\), and the result follows from (25).

We bound this quantity from below by deducing an upper bound on \(\phi_{K}\).

**Lemma 7.3**.: _For all \(K\), \(q\) and \(T\) we can bound_
\[\phi_{K}(q,T)\leq\exp\left(-\frac{K(1-q)^{T+1}}{1+Kq(1-q)^{T}}\right). \tag{30}\]
Proof.: See Appendix B.1.

We can now finish our proof of the converse for tropical DD.

Proof of Theorem 7.1.: By hypothesis, \(T\leq(1-\delta)\max_{r}T_{r}\). So pick some level \(r^{*}\) such that \(T\leq(1-\delta)T_{r^{*}}\). We had already reached the bound (27):
\[\mathbb{P}(\mathrm{err})\geq 1-\phi_{K_{r^{*}}}(q_{r^{*},1},T).\]
We now combine this with Lemma 7.3. We deduce that
\[\mathbb{P}(\mathrm{err})\geq 1-\exp\left(-\frac{K_{r^{*}}(1-q_{r^{*},1})^{T+1}}{1+K_{r^{*}}q_{r^{*},1}(1-q_{r^{*},1})^{T}}\right). \tag{31}\]
The exponential term here is of the form \(\exp(-(1-q)u/(1+qu))\), with \(u=K_{r^{*}}(1-q_{r^{*},1})^{T}\). Since \(\exp(-(1-q)u/(1+qu))\) increases as \(u\) decreases (for fixed \(q\)), it suffices to bound \(K_{r^{*}}(1-q_{r^{*},1})^{T}\) from below, which we do now.

Since \(q_{r^{*},1}=p\psi_{r^{*}}/(1-p)\) we know that
\[\frac{q_{r^{*},1}}{1-q_{r^{*},1}}=\frac{p\psi_{r^{*}}}{1-p(1+\psi_{r^{*}})}.\]
Combining this with \(T\leq(1-\delta)T_{r^{*}}=(1-\delta)K\ln K_{r^{*}}/(\nu\psi_{r^{*}})\) gives
\[\frac{Tq_{r^{*},1}}{1-q_{r^{*},1}} \leq\frac{(1-\delta)K}{\nu\psi_{r^{*}}}\frac{p\psi_{r^{*}}}{1-p(1+\psi_{r^{*}})}\ln K_{r^{*}}\]
\[=\frac{(1-\delta)}{1-p(1+\psi_{r^{*}})}\ln K_{r^{*}}\]
\[\leq(1-c)\ln K_{r^{*}}, \tag{32}\]
for some \(c>0\), for \(K_{r^{*}}\) sufficiently large.
This gives us the lower bound \[K_{r^{*}}(1-q_{r^{*},1})^{T} =K_{r^{*}}\exp(T\log(1-q_{r^{*},1}))\] \[\geq K_{r^{*}}\exp\left(-\frac{Tq_{r^{*},1}}{1-q_{r^{*},1}}\right)\] \[\geq K_{r^{*}}^{c}, \tag{33}\] where we used \(\log(1-q)\geq-q/(1+q)\) for \(q>-1\) and (32). Using the bound (33) in (31), we get \[\mathbb{P}(\mathrm{err}) \geq 1-\exp\left(-\frac{K_{r^{*}}(1-q_{r^{*},1})^{T+1}}{1+q_{r^{*}, 1}K(1-q_{r^{*},1})^{T}}\right)\] \[\geq 1-\exp\left(-\frac{K_{r^{*}}^{c}(1-q_{r^{*},1})}{1+q_{r^{*},1 }K_{r^{*}}^{c}}\right)\] \[=1-\exp\left(-\frac{(1-\psi_{r^{*}}/(K-1))K_{r^{*}}^{c}}{1+\psi_{ r^{*}}K_{r^{*}}^{c}/(K-1)}\right). \tag{34}\] where (34) follows since \(q_{r^{*},1}=p\psi_{r^{*}}/(1-p)=\psi_{r^{*}}/(K-1)\). Finally, that bound (34) is asymptotically equivalent to \(1-\exp(-K_{r^{*}}^{c})\), which tends to \(1\) as \(K\to\infty\). This completes the proof. ### Algorithm-independent converse In fact, our DD-specific converse, Theorem 7.1, helps give a converse bound for _all_ algorithms with a Bernoulli design. We can write \(\mathbb{P}(\mathrm{err};\mathrm{optimal},T)\) for the minimum error probability that can be achieved by any algorithm. The key observation is that in Theorem 7.1 we find that with too few tests there is a good chance that some item \(i(r^{*},s)\) appears in no tests without other items of the same level, so a satisfying vector can be formed without it. **Theorem 7.4**.: _For a given \(\nu>0\) and any \(T\leq(1-\delta)T_{\mathrm{fin}}(\nu)\), the error probability of the optimal algorithm \(\mathbb{P}(\mathrm{err};\mathrm{optimal},T)\) for a Bernoulli design with parameter \(\nu/K\) is bounded away from zero, even for algorithms which are given access to the values of \(K_{i}\)._ Proof.: First note that for any \(T^{\prime}\geq 1\) the \(\mathbb{P}(\mathrm{err};\mathrm{optimal},T)\geq\mathbb{P}(\mathrm{err}; \mathrm{optimal},T+T^{\prime})\), since there exists a (possibly suboptimal) algorithm using \(T+T^{\prime}\) tests which simply ignores the last \(T^{\prime}\) tests and applies the optimal \(T\)-test algorithm to the remaining tests. Hence it will be sufficient to bound \(\mathbb{P}(\mathrm{err};\mathrm{optimal},T)\) away from zero for \(T=(1-\delta)\max_{r}T_{r}\), as the same bound will hold for all \(T\leq(1-\delta)\max_{r}T_{r}\). We argue as in [1]. Recall that \(M_{r,s}\) is the number of tests that have a chance of proving \(i(r,s)\) is defective at level \(r\), and \(H_{r}\) is the number of non-defective items in \(\mathrm{PD}(r)\). The key idea is this: Suppose for some \(r^{*}\) that both \(A=\bigcup_{s=1}^{K_{r^{*}}}\{M_{r^{*},s}=0\}\) and \(B=\{H_{r^{*}}\geq 1\}\) occur. The event \(A\) would mean that there is some item \(i(r^{*},t)\) that is masked and the event \(B\) would mean that there is some non-defective item which is a possible defective at that level. So we could form an alternative satisfying vector from the true vector \(\mathbf{U}\) by swapping the entries in \(\mathbf{U}\) of these two items. Hence, if \(A\cap B\) occurs, then there are at least two satisfying vectors with the correct number of items at each level, so we the success probability can only be at most \(1/2\). 
The probability of an intersection can always be bounded with \(\mathbb{P}(A\cap B)\geq\mathbb{P}(A)-\mathbb{P}(B^{c})\), so the error probability for any algorithm satisfies \[\mathbb{P}(\mathrm{err})\geq\frac{1}{2}\,\mathbb{P}\left(\bigcup_{s=1}^{K_{r ^{*}}}\{M_{r^{*},s}=0\}\right)-\frac{1}{2}\,\mathbb{P}(H_{r^{*}}=0).\] Now the first term involves exactly the term we have controlled in Theorem 7.1, so we know it is larger than \(1/4\) in the regime of interest for \(K\) sufficiently large. Hence, to bound the error probability away from zero it will be sufficient to prove that \(\mathbb{P}(H_{r^{*}}=0)\leq 1/4\). We will prove this in a series of technical lemmas in Appendix B.2: 1. In Lemma B.4, we will show that \[\mathbb{P}(H_{r^{*}}=0)\leq\mathbb{P}(G_{r^{*}}=0)+\mathbb{E}(1-p)^{M_{r^{*}}}.\] We deal with the two terms separately. 2. In Lemma B.5, we will show that the first term is bounded by \[\mathbb{P}(G_{r^{*}}=0)\leq\left(1-(1-p)^{m_{r^{*}}^{*}}\right)^{N-K}+\exp \left(-\frac{\delta^{2}T\psi_{r^{*}}}{2}\right),\] (35) where \(m_{\ell}^{*}=T\psi_{\ell}(1+\delta)\). 3. In Lemma B.6, we show that the second term is bounded by \[\mathbb{E}(1-p)^{M_{r^{*}}}\leq\exp(-p\psi_{r^{*}}d_{r^{*}}T),\] (36) where \(d_{\ell}=(1-p)^{-K_{\ell}}-1\). Recall that we consider \(T=(1-\delta)T_{r^{*}}=(1-\delta)K\ln K_{r^{*}}/(\nu\psi_{r^{*}})\), for the maximising \(r^{*}\). Since \(p\psi_{r^{*}}T=(1-\delta)\ln K_{r}^{*}\), we know that \((1-p)^{m_{r^{*}}}\simeq K^{-(1-\delta^{2})}\), so that both terms in (35) tend to zero. Similarly, (36) also tends to zero for this choice of \(\ell\) and \(T\). This completes the proof. ## 8 Discussion In this paper, we have considered the tropical group testing model of Wang _et al._[19] in a small-error setting. We have described small-error algorithms in Section 3. We demonstrated the empirical performance of these algorithms in Section 4, showing that tropical DD and tropical SCOMP outperform their classical counterparts. We performed theoretical analysis of the tropical COMP algorithm in Section 5 and of the DD algorithm in Sections 6 and 7, proving that in certain parameter regimes the tropical DD algorithm is asymptotically optimal. We briefly mention some open problems. Further work could explore test designs with near-constant column weights in the tropical setting, as these designs show a gain in performance in the classical case (see [4]), and Figure 6 suggests the same may well be true here. The results could be made more practically valuable by developing bounds in a noisy setting, under a variety of noise models similar to those described in [4, Chapter 4]. Also, there is potential to extend the results in this paper by considering models with random defectivity levels, as illustrated in Figure 5. It may also be mathematically interesting to develop small-error algorithms and bounds using the delay matrix approach of [19]. ## Acknowledgements This work was carried out while Vivekanand Paligadu was on a University of Bristol School of Mathematics undergraduate summer bursary placement, funded by Mark Williams Alumni Funds.
Motivated by PCR testing, we consider a version of the classical group testing problem in which the outcome of a test is the lowest cycle threshold (Ct level) among the individuals it contains, rather than a simple binary indicator. We introduce tropical versions of the three classical non-adaptive algorithms (COMP, DD and SCOMP), and analyse their behaviour through simulations and bounds on the error probability. By comparing the results of the tropical and classical algorithms, we gain insight into the extra information obtained by learning the test outcomes (Ct levels). We show that in a limiting regime the tropical COMP algorithm requires the same number of tests as its classical counterpart, but that for sufficiently dense problems tropical DD can recover more information with fewer tests, and that in certain regimes tropical DD is essentially optimal.
2301.13433
On well-posedness results for the cubic-quintic NLS on $\mathbb{T}^3$
We consider the periodic cubic-quintic nonlinear Schr\"odinger equation \begin{align}\label{cqnls_abstract} (i\partial_t +\Delta )u=\mu_1 |u|^2 u+\mu_2 |u|^4 u\tag{CQNLS} \end{align} on the three-dimensional torus $\mathbb{T}^3$ with $\mu_1,\mu_2\in \mathbb{R} \setminus\{0\}$. As a first result, we establish the small data well-posedness of \eqref{cqnls_abstract} for arbitrarily given $\mu_1$ and $\mu_2$. By adapting the crucial perturbation arguments in \cite{zhang2006cauchy} to the periodic setting, we also prove that \eqref{cqnls_abstract} is always globally well-posed in $H^1(\mathbb{T}^3)$ in the case $\mu_2>0$.
Yongming Luo, Xueying Yu, Haitian Yue, Zehua Zhao
2023-01-31T06:00:31
http://arxiv.org/abs/2301.13433v1
# On well-posedness results for the cubic-quintic NLS on \(\mathbb{T}^{3}\) ###### Abstract. We consider the periodic cubic-quintic nonlinear Schrodinger equation (CQNLS) \[(i\partial_{t}+\Delta)u=\mu_{1}|u|^{2}u+\mu_{2}|u|^{4}u\] on the three-dimensional torus \(\mathbb{T}^{3}\) with \(\mu_{1},\mu_{2}\in\mathbb{R}\setminus\{0\}\). As a first result, we establish the small data well-posedness of (CQNLS) for arbitrarily given \(\mu_{1}\) and \(\mu_{2}\). By adapting the crucial perturbation arguments in [33] to the periodic setting, we also prove that (CQNLS) is always globally well-posed in \(H^{1}(\mathbb{T}^{3})\) in the case \(\mu_{2}>0\). Key words and phrases:Nonlinear Schrodinger equation, global well-posedness, perturbation theory 2020 Mathematics Subject Classification: Primary: 35Q55; Secondary: 35R01, 37K06, 37L50 ###### Contents * 1 Introduction and main results * 2 Preliminaries * 3 Proof of Theorem 1.1 * 4 Proof of Theorem 1.2 ## 1. Introduction and main results In this paper, we study the cubic-quintic nonlinear Schrodinger equation (CQNLS) \[(i\partial_{t}+\Delta_{x})u=\mu_{1}|u|^{2}u+\mu_{2}|u|^{4}u \tag{1.1}\] on the three-dimensional torus \(\mathbb{T}^{3}\), where \(\mu_{1},\mu_{2}\in\mathbb{R}\setminus\{0\}\) and \(\mathbb{T}=\mathbb{R}/2\pi\mathbb{Z}\). The CQNLS (1.1) arises in numerous physical applications such as nonlinear optics and Bose-Einstein condensate. Physically, the nonlinear potentials \(|u|^{2}u\) and \(|u|^{4}u\) model the two- and three-body interactions respectively and the positivity or negativity of \(\mu_{1}\) and \(\mu_{2}\) indicates whether the underlying nonlinear potential is repulsive (defocusing) or attractive (focusing). We refer to, for instance, [10, 11, 27] and the references therein for a more comprehensive introduction on the physical background of the CQNLS (1.1). Mathematically, the CQNLS model (1.1) on Euclidean spaces \(\mathbb{R}^{d}\) (\(d\leq 3\)) has been intensively studied in [3, 4, 6, 18, 19, 22, 23, 25, 26, 28, 33], where well-posedness and long time behavior results for solutions of (1.1) as well as results for existence and (in-)stability of soliton solutions of (1.1) were well established. In this paper, we aim to give some first well-posedness results for (1.1) in the periodic setting, which, to the best of our knowledge, have not existed to that date. We also restrict ourselves to the most appealing case \(d=3\), where the quintic potential is energy-critical. (By 'energy-critical', we mean the energy of solution is invariant under the scaling variance. See [9] for more details.) In this case, the well-posedness of (1.1) shall also depend on the profile of the initial data and the analysis becomes more delicate and challenging. Our first result deals with the small data well-posedness of (1.1), which is given in terms of the function spaces \(Z^{\prime}(I),X^{1}(I)\) defined in Section 2 for a given time slot \(I\). **Theorem 1.1** (Small data well-posedness).: _Consider (1.1) on a time slot \(I=(-T,T)\subset\mathbb{R}\) with some \(T\in(0,\infty)\). Let \(u_{0}\in H^{1}(\mathbb{T}^{3})\) satisfies \(\|u_{0}\|_{H^{1}(\mathbb{T}^{3})}\leq E\) for some \(E>0\). Then there exists \(\delta=\delta(E,T)>0\) such that if_ \[\|e^{it\Delta}u_{0}\|_{Z^{\prime}(I)}\leq\delta, \tag{1.2}\] _then (1.1) admits a unique strong solution \(u\in X^{1}(I)\) with initial data \(u(0,x)=u_{0}(x)\)._ The proof of Theorem 1.1 is based on a standard application of the contraction principle. 
Nonetheless, one of the major challenges in proving well-posedness of dispersive equations on tori is the rather exotic Strichartz estimates, leading in most cases to very technical and cumbersome proofs. In the energy-subcritical setting, Strichartz estimates for periodic nonlinear Schrodinger equations (NLS) were first established by Bourgain [1] by appealing to the number-theoretical methods. In our case, where an energy-critical potential is present, we shall make use of the Strichartz estimates introduced by Herr-Tataru-Tzvetkov [14] based on the atomic space theory, which in turn initiates applications of the function spaces defined in Section 2. Notice also that in comparison to the purely quintic NLS model studied in [14], an additional cubic term should also be dealt in our case. A new bilinear estimate on \(\mathbb{T}^{3}\) will therefore be proved in order to obtain a proper estimate for the cubic potential, and we refer to Lemma 3.2 for details. For interested readers, we also refer to [7, 8, 14, 15, 16, 17, 29, 30, 32, 34, 35, 36] for further well-posedness results for NLS (with single nonlinear potential) on tori or waveguide manifolds based on the atomic space theory. (See [24, 31] for other dispersive equations on waveguides.) Despite that small data well-posedness results are satisfactory to certain extent, it is more interesting (and hence also more challenging) to deduce well-posedness results where the initial data are not necessarily small. We focus here on the particular scenario where the quintic potential is repulsive (\(\mu_{2}>0\)), which is motivated by the following physical concern: Consider for instance the focusing cubic NLS1 Footnote 1: When \(d=1\), the mass-subcritical nature of the nonlinear potential, combining with conservation of mass and energy, guarantees the global well-posedness of (1.3) in \(H^{1}(\mathbb{R})\) as well as \(H^{1}(\mathbb{T})\). \[(i\partial_{t}+\Delta)u=-|u|^{2}u \tag{1.3}\] on \(\mathbb{R}^{d}\) with \(d\in\{2,3\}\). By invoking the celebrated Glassey's identity [12] one may construct finite time blow-up solutions of (1.3) for initial data lying in weighted \(L^{2}\)-spaces or satisfying radial symmetric conditions, see for instance [5] for a proof. Surprisingly, in contrast to the rigorously derived blow-up results, collapse of the wave function does not appear in many actual experiments. It is therefore suggested to incorporate a higher order repulsive potential into (1.3), the case that the repulsive potential is taken as the three-body interaction leads to the study of CQNLS (1.1). More interestingly, it turns out that in the presence of a quintic stabilizing potential, (1.1) is in fact globally well-posed for arbitrary initial data in \(H^{1}(\mathbb{R}^{d})\). While for \(d=2\) this follows already from conservation laws and the energy-subcritical nature of (1.1) on \(\mathbb{R}^{2}\), the proof in the case \(d=3\), where the quintic potential becomes energy-critical, is more involved. A rigorously mathematical proof for confirming such heuristics in \(d=3\) was first given by Zhang [33]. The idea from [33] can be summarized as follows: We may consider (1.1) as a perturbation of the three dimensional defocusing quintic NLS \[(i\partial_{t}+\Delta)u=|u|^{4}u \tag{1.4}\] whose global well-posedness in \(\dot{H}^{1}(\mathbb{R}^{3})\) was shown in [9]. We then partition the time slot \(I\) into disjoint adjacent small intervals \(I=\cup_{j=0}^{m}I_{j}\). 
On each of these intervals, the cubic term is expected to be "small" because of the smallness of the subinterval, and by invoking a stability result we may prove that (1.1) is well-posed on a given \(I_{j}\). Based on the well-posedness result on \(I_{j}\) we are then able to prove the same result for the consecutive interval \(I_{j+1}\) and so on. Starting from the interval \(I_{0}\) and repeating the previous procedure inductively over all \(I_{j}\) follows then the desired claim. Inspired by the result given in [33], we aim to prove the following analogous global well-posedness result for (1.1) on \(\mathbb{T}^{3}\) in the case \(\mu_{2}>0\). **Theorem 1.2** (Global well-posedness in the case \(\mu_{2}>0\)).: _Assume that \(\mu_{2}>0\). Then (1.1) is globally well-posed in \(H^{1}(\mathbb{T}^{3})\) in the sense that for any \(T>0\) and \(u_{0}\in H^{1}(\mathbb{T}^{3})\), (1.1) possesses a solution \(u\in X^{1}(I)\) on \(I=(-T,T)\) with \(u(0)=u_{0}\)._ _Remark 1.3_.: We note that one can also obtain the waveguide analogues of Theorem 1.2, (i.e. considering (1.1) posed on \(\mathbb{R}^{2}\times\mathbb{T}\) and \(\mathbb{R}\times\mathbb{T}^{2}\)) with suitable modifications. Moreover, for the \(\mathbb{R}^{2}\times\mathbb{T}\) case, scattering behavior is also expected according to existing literature (see [35]). However, the scattering result require a lot more than this GWP scheme and we leave it for future considerations. _Remark 1.4_.: It is worth mentioning that the same global well-posedness result for the supercubic-quintic NLS \[(i\partial_{t}+\Delta)u=\mu_{1}|u|^{p-1}u+\mu_{2}|u|^{4}u,\quad\text{for}\quad 3 <p<5,\] is expected to be yielded by adapting the nonlinear estimates in Section 3 into the fractional product case (see [21] for reference, see also [33] for the Euclidean case). We leave it for interested readers. We follow closely the same lines from [33] to prove Theorem 1.2. In comparison to the Euclidean case, there are essentially two main new ingredients needed for the proof of Theorem 1.2: 1. The Black-Box-Theory from [9] is replaced by the one from [16] for (1.4) on \(\mathbb{T}^{3}\). 2. The estimates are correspondingly modified (in a very technical and subtle way) in order to apply the Strichartz estimates based on the atomic space theory. We refer to Section 4 for the proof of Theorem 1.2 in detail. For further applications of such interesting perturbation arguments on NLS with combined powers, we also refer to [28]. _Remark 1.5_.: By a straightforward scaling argument it is not hard to see that both Theorems 1.1 and 1.2 extend verbatim to the case where \(\mathbb{T}^{3}\) is replaced by any rational torus. Such direct scaling argument, however, does not apply to irrational tori. Nevertheless, thanks to the ground breaking work of Bourgain and Demeter [2] we also know that the Strichartz estimates established in [14] are in fact available for irrational tori, by which we are thus able to conclude that Theorems 1.1 and 1.2 indeed remain valid for arbitrary tori regardless of their rationality. For simplicity we will keep working with the torus \(\mathbb{T}^{3}\) in the rest of the paper. We outline the structure of the rest of the paper. In Section 2, we summarize the notations and definitions which will be used throughout the paper and define the function spaces applied in the Cauchy problem (1.1). In Sections 3 and 4, we prove Theorems 1.1 and 1.2 respectively. **Acknowledgment.** Y. 
Luo was funded by Deutsche Forschungsgemeinschaft (DFG) through the Priority Programme SPP-1886 (No. NE 21382-1). H. Yue was supported by the Shanghai Technology Innovation Action Plan (No. 22JC1402400), a Chinese overseas high-level young talents program (2022) and the start-up funding of ShanghaiTech University. Z. Zhao was supported by the NSF grant of China (No. 12101046, 12271032), Chinese overseas high-level young talents program (2022) and the Beijing Institute of Technology Research Fund Program for Young Scholars. ## 2. Preliminaries In this section, we first discuss notations used in the rest of the paper, introduce the function spaces with their properties that we will be working on, and list some useful tools from harmonic analysis. ### Notations We use the notation \(A\lesssim B\) whenever there exists some positive constant \(C\) such that \(A\leq CB\). Similarly we define \(A\gtrsim B\) and we use \(A\sim B\) when \(A\lesssim B\lesssim A\). For simplicity, we hide in most cases the dependence of the function spaces on their spatial domain in their indices. For example \(L^{2}_{x}=L^{2}(\mathbb{T}^{3})\), \(\ell^{2}_{k}=\ell^{2}(\mathbb{Z}^{3})\) and so on. However, when the space is involved with time we still display the underlying temporal interval such as \(L^{p}_{t,x}(I)\), \(L^{p}_{t}L^{q}_{x}(I)\), \(L^{\infty}_{t}\ell^{2}_{k}(\mathbb{R})\) etc. We also frequently write \(\|\cdot\|_{p}:=\|\cdot\|_{L^{p}_{x}}\). ### Fourier transforms and Littlewood-Paley projections Throughout the paper we use the following Fourier transformation on \(\mathbb{T}^{3}\): \[(\mathcal{F}f)(\xi)=\widehat{f}(\xi)=(2\pi)^{-\frac{3}{2}}\int_{\mathbb{T}^{d}}f (x)e^{-ix\cdot\xi}\,dx\] for \(\xi\in\mathbb{Z}^{3}\). The corresponding Fourier inversion formula is then given by \[f(x)=(2\pi)^{-\frac{3}{2}}\sum_{\xi\in\mathbb{Z}^{3}}(\mathcal{F}f)(\xi)e^{ix \cdot\xi}.\] By definition, the Schrodinger propagator \(e^{it\Delta}\) is defined by \[\left(\mathcal{F}e^{it\Delta}f\right)(\xi)=e^{-it|\xi|^{2}}(\mathcal{F}f)(\xi).\] Next we define the Littlewood-Paley projectors. We fix some even decreasing function \(\eta\in C_{c}^{\infty}(\mathbb{R};[0,1])\) satisfying \(\eta(t)\equiv 1\) for \(|t|\leq 1\) and \(\eta(t)\equiv 0\) for \(|t|\geq 2\). For a dyadic number \(N\geq 1\) define \(\eta_{N}:\mathbb{Z}^{3}\to[0,1]\) by \[\eta_{N}(\xi)=\eta(|\xi|/N)-\eta(2|\xi|/N),\quad N\geq 2,\] \[\eta_{N}(\xi)=\eta(|\xi|),\quad N=1.\] Then the Littlewood-Paley projector \(P_{N}\) (\(N\geq 1\)) is defined as the Fourier multiplier with symbol \(\eta_{N}\). For any \(N\in(0,\infty)\), we also define \[P_{\leq N}:=\sum_{M\in\mathbb{Z}^{N},M\leq N}P_{M},\quad P_{>N}:=\sum_{M\in \mathbb{Z}^{N},M>N}P_{M}.\] ### Strichartz estimates As already pointed out in the introductory section, unlike the Euclidean case, the Strichartz estimates on (rational or irrational) tori are generally proved in a highly non-trivial way and in most cases only frequency-localized estimates can be deduced. For our purpose we will make use of the following Strichartz estimate proved by Bourgain and Demeter [2] (see also [1, 20]). **Proposition 2.1** (Frequency-localized Strichartz estimates on \(\mathbb{T}^{3}\), [2]).: _Consider the linear Schrodinger propagator \(e^{it\Delta}\) on a (rational or irrational) three-dimensional torus. 
Then for \(p>\frac{10}{3}\) we have for any time slot \(I\) with \(|I|\leq 1\)_ \[\|e^{it\Delta}P_{N}f\|_{L^{p}_{t,x}(I\times\mathbb{T}^{3})}\lesssim_{p}N^{ \frac{3}{2}-\frac{5}{p}}\|P_{N}f\|_{L^{2}_{x}(\mathbb{T}^{3})}. \tag{2.1}\] ### Function spaces Next, we define the function spaces and collect some of their useful properties which will be used for the Cauchy problem (1.1). We begin with the definitions of \(U^{p}\)- and \(V^{p}\)-spaces introduced in [13]. **Definition 2.2** (\(U^{p}\)-spaces).: Let \(1\leq p<\infty\), \(\mathcal{H}\) be a complex Hilbert space and \(\mathcal{Z}\) be the set of all finite partitions \(-\infty<t_{0}<t_{1}<...<t_{K}\leq\infty\) of the real line. A \(U^{p}\)-atom is a piecewise constant function \(a:\mathbb{R}\to\mathcal{H}\) defined by \[a=\sum_{k=1}^{K}\chi_{[t_{k-1},t_{k})}\phi_{k-1},\] where \(\{t_{k}\}_{k=0}^{K}\in\mathcal{Z}\) and \(\{\phi_{k}\}_{k=0}^{K-1}\subset\mathcal{H}\) with \(\sum_{k=0}^{K}\|\phi_{k}\|_{\mathcal{H}}^{p}=1\). The space \(U^{p}(\mathbb{R};\mathcal{H})\) is then defined as the space of all functions \(u:\mathbb{R}\to\mathcal{H}\) such that \(u=\sum_{j=1}^{\infty}\lambda_{j}a_{j}\) with \(U^{p}\)-atoms \(a_{j}\) and \(\{\lambda_{j}\}\in\ell^{1}\). We also equip the space \(U^{p}(\mathbb{R};\mathcal{H})\) with the norm \[\|u\|_{U^{p}}:=\inf\{\sum_{j=1}^{\infty}|\lambda_{j}|:u=\sum_{j=1}^{\infty} \lambda_{j}a_{j},\,\lambda_{j}\in\mathbb{C},\,a_{j}\text{ are $U^{p}$-atoms}\}.\] **Definition 2.3** (\(V^{p}\)-spaces).: We define the space \(V^{p}(\mathbb{R},\mathcal{H})\) as the space of all functions \(v:\mathbb{R}\to\mathcal{H}\) such that \[\|v\|_{V^{p}}:=\sup_{\{t_{k}\}_{k=0}^{K}\in\mathbb{Z}}(\sum_{k=1}^{K}\|v(t_{k})- v(t_{k-1})\|_{\mathcal{H}}^{p})^{\frac{1}{p}}<+\infty,\] where we use the convention \(v(\infty)=0\). Also, we denote by \(V^{p}_{re}(\mathbb{R},\mathcal{H})\) the closed subspace of \(V^{p}(\mathbb{R},\mathcal{H})\) containing all right-continuous functions \(v\) with \(\lim_{t\to-\infty}v(t)=0\). In our context we shall set the Hilbert space \(\mathcal{H}\) to be the Sobolev space \(H^{s}_{x}\) with \(s\in\mathbb{R}\), which will be the case in the remaining parts of the paper. **Definition 2.4** (\(U^{p}_{\Delta}\)- and \(V^{p}_{\Delta}\)-spaces in [13]).: For \(s\in\mathbb{R}\) we let \(U^{p}_{\Delta}H^{s}_{x}(\mathbb{R})\) resp. \(V^{p}_{\Delta}H^{s}_{x}(\mathbb{R})\) be the spaces of all functions such that \(e^{-it\Delta}u(t)\) is in \(U^{p}(\mathbb{R},H^{s}_{x})\) resp. \(V^{p}_{re}(\mathbb{R},H^{s}_{x})\), with norms \[\|u\|_{U^{p}_{\Delta}H^{s}_{x}(\mathbb{R})}=\|e^{-it\Delta}u\|_{U^{p}(\mathbb{ R},H^{s}_{x})},\quad\|u\|_{V^{p}_{\Delta}H^{s}_{x}(\mathbb{R})}=\|e^{-it\Delta}u \|_{V^{p}(\mathbb{R},H^{s}_{x})}.\] Having defined the \(U^{p}_{\Delta}\)- and \(V^{p}_{\Delta}\)-spaces we are now ready to formulate the function spaces for studying the Cauchy problem (1.1). For \(C=[-\frac{1}{2},\frac{1}{2})^{3}\in\mathbb{R}^{3}\) and \(z\in\mathbb{R}^{3}\) let \(C_{z}=z+C\) be the translated unit cube centered at \(z\) and define the sharp projection operator \(P_{C_{z}}\) by \[\mathcal{F}(P_{C_{z}}f)(\xi)=\chi_{C_{z}}(\xi)\mathcal{F}(f)(\xi),\quad\xi\in \mathbb{Z}^{3},\] where \(\chi_{C_{z}}\) is the characteristic function restrained on \(C_{z}\). 
We then define the \(X^{s}\)- and \(Y^{s}\)-spaces as follows:

**Definition 2.5** (\(X^{s}\)- and \(Y^{s}\)-spaces).: For \(s\in\mathbb{R}\) we define the \(X^{s}(\mathbb{R})\)- and \(Y^{s}(\mathbb{R})\)-spaces through the norms \[\|u\|_{X^{s}(\mathbb{R})}^{2}:=\sum_{z\in\mathbb{Z}^{3}}\langle z\rangle^{2s}\|P_{C_{z}}u\|_{U^{2}_{\Delta}(\mathbb{R};L^{2}_{x})}^{2},\qquad\|u\|_{Y^{s}(\mathbb{R})}^{2}:=\sum_{z\in\mathbb{Z}^{3}}\langle z\rangle^{2s}\|P_{C_{z}}u\|_{V^{2}_{\Delta}(\mathbb{R};L^{2}_{x})}^{2}.\] For an interval \(I\subset\mathbb{R}\) we also consider the restriction spaces \(X^{s}(I)\), \(Y^{s}(I)\), etc. For these spaces we have the following useful embeddings:

**Proposition 2.6** (Embedding between the function spaces, [13]).: _For \(2<p<q<\infty\) we have_ \[U^{2}_{\Delta}H^{s}_{x}\hookrightarrow X^{s}\hookrightarrow Y^{s}\hookrightarrow V^{2}_{\Delta}H^{s}_{x}\hookrightarrow U^{p}_{\Delta}H^{s}_{x}\hookrightarrow U^{q}_{\Delta}H^{s}_{x}\hookrightarrow L^{\infty}_{t}H^{s}_{x}.\] As usual, the proofs of the well-posedness results rely on the contraction principle, and thus a dual norm estimate for the Duhamel term is needed. In the periodic setting, the dual norm is given by the \(N^{s}\)-norm, which is defined as follows:

**Definition 2.7** (\(N^{s}\)-norm).: On a time interval \(I=[a,b]\) we define the \(N^{s}(I)\)-norm for \(s\in\mathbb{R}\) by \[\|h\|_{N^{s}(I)}=\Big{\|}\int_{a}^{t}e^{i(t-s)\Delta}h(s)\,ds\Big{\|}_{X^{s}(I)}.\] The following proposition sheds light on the duality of the spaces \(N^{1}(I)\) and \(Y^{-1}(I)\).

**Proposition 2.8** (Duality of \(N^{1}(I)\) and \(Y^{-1}(I)\) in [14]).: _The spaces \(N^{1}(I)\) and \(Y^{-1}(I)\) satisfy the following duality inequality_ \[\|f\|_{N^{1}(I)}\lesssim\sup_{\|v\|_{Y^{-1}(I)}\leq 1}\int_{I\times\mathbb{T}^{3}}f(t,x)\overline{v(t,x)}\,dxdt.\] _Moreover, the following estimate holds for any smooth (\(H^{1}_{x}\)-valued) function \(g\) on an interval \(I=[a,b]\):_ \[\|g\|_{X^{1}(I)}\lesssim\|g(a)\|_{H^{1}_{x}}+\Big{(}\sum_{N}\|P_{N}(i\partial_{t}+\Delta)g\|_{L^{1}_{t}H^{1}_{x}(I)}^{2}\Big{)}^{\frac{1}{2}}.\] For our purpose we shall also need to appeal to the \(Z\)-norm, which is defined as follows:

**Definition 2.9** (\(Z\)-norm).: For a time interval \(I\) we define the \(Z(I)\)-norm by \[\|u\|_{Z(I)}=\sup_{J\subset I,|J|\leq 1}\Big{(}\sum_{N\geq 1}N^{3}\|P_{N}u\|_{L^{4}_{t,x}(J)}^{4}\Big{)}^{\frac{1}{4}}.\] As a direct consequence of the Strichartz estimates it is easy to verify that \[\|u\|_{Z(I)}\lesssim\|u\|_{X^{1}(I)}.\] For those readers who are familiar with NLS on the standard Euclidean space \(\mathbb{R}^{d}\), we also note that, intuitively, the \(X^{1}\)- and \(Z\)-norms play exactly the same roles as the norm \(\|\cdot\|_{S^{1}}:=\|\cdot\|_{L^{\infty}_{t}H^{1}_{x}\cap L^{2}_{t}W^{1,6}_{x}}\) and the \(L^{10}_{t,x}\)-norm for the quintic NLS on \(\mathbb{R}^{3}\), respectively. Nevertheless, the \(Z\)-norm cannot be directly applied to prove the well-posedness results. To that end, we introduce the \(Z^{\prime}\)-norm defined by \[\|u\|_{Z^{\prime}}:=\|u\|_{Z}^{\frac{1}{2}}\|u\|_{X^{1}}^{\frac{1}{2}},\] which will be more useful for the proof of Theorem 1.1.

### Conservation laws
We end this section by introducing the mass \(M(u)\) and energy \(E(u)\) associated with the NLS flow (1.1): \[\begin{split} M(u)&=\int_{\mathbb{T}^{3}}|u|^{2}\,dx,\\ E(u)&=\int_{\mathbb{T}^{3}}\frac{1}{2}|\nabla u|^{2}+\frac{\mu_{1}}{4}|u|^{4}+\frac{\mu_{2}}{6}|u|^{6}\,dx.\end{split} \tag{2.2}\] It is well-known that both mass and energy are conserved over time along the NLS flow (1.1).
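To illustrate where mass conservation comes from, the following short formal computation (assuming the solution is smooth enough to justify differentiating under the integral; the computation for the energy is analogous) uses the equation \((i\partial_{t}+\Delta)u=\mu_{1}|u|^{2}u+\mu_{2}|u|^{4}u\), i.e. \(\partial_{t}u=i\Delta u-i\mu_{1}|u|^{2}u-i\mu_{2}|u|^{4}u\):
\[\frac{d}{dt}M(u(t))=2\,\mathrm{Re}\int_{\mathbb{T}^{3}}\bar{u}\,\partial_{t}u\,dx=2\,\mathrm{Re}\left[i\left(\int_{\mathbb{T}^{3}}\bar{u}\Delta u\,dx-\mu_{1}\int_{\mathbb{T}^{3}}|u|^{4}\,dx-\mu_{2}\int_{\mathbb{T}^{3}}|u|^{6}\,dx\right)\right]=0,\]
since after an integration by parts \(\int_{\mathbb{T}^{3}}\bar{u}\Delta u\,dx=-\int_{\mathbb{T}^{3}}|\nabla u|^{2}\,dx\) is real, so the bracket is purely real and its product with \(i\) has vanishing real part.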
As a direct application of the conservation laws and Hölder's inequality, we have the following uniform estimate of the kinetic energy \(\|\nabla u\|_{L^{\infty}_{t}L^{2}_{x}(I\times\mathbb{T}^{3})}^{2}=:\|\nabla u\|_{L^{\infty}_{t}L^{2}_{x}(I)}^{2}\) for a solution \(u\) of (1.1). (As mentioned in Notations, we omit the space \(\mathbb{T}^{3}\) for convenience.) We include the proof below for completeness (see the original argument in [33, Sec. 2.2]).

**Lemma 2.10**.: _Let \(u\in X^{1}(I)\) be a solution of (1.1) with \(u(0)=u_{0}\). Then_ \[\|\nabla u\|_{L^{\infty}_{t}L^{2}_{x}(I)}^{2}\lesssim E(u_{0})+M(u_{0})^{2}.\]

Proof of Lemma 2.10.: Recall the mass and energy defined in (2.2). If both \(\mu_{1}\) and \(\mu_{2}\) are positive, it is easy to see that for any \(t\) \[\|\nabla u(t)\|_{L^{2}_{x}}^{2}\lesssim E.\] If \(\mu_{1}<0\) and \(\mu_{2}>0\), then we use the following pointwise inequality (a consequence of Young's inequality) for some \(C(\mu_{1},\mu_{2})\) \[-\frac{|\mu_{1}|}{4}|u(t,x)|^{4}+\frac{|\mu_{2}|}{6}|u(t,x)|^{6}\geq-C(\mu_{1},\mu_{2})|u(t,x)|^{2}\] to conclude that for any \(t\) \[\|\nabla u(t)\|_{L^{2}_{x}}^{2}\lesssim E+M^{2}.\]

## 3. Proof of Theorem 1.1
In this section we give the proof of Theorem 1.1. As the precise value of \(|I|=2T\) only affects the numerical constants, without loss of generality we may also assume that \(|I|\leq 1\) throughout this section. We begin by recording a trilinear estimate deduced in [16].

**Lemma 3.1** (Trilinear estimate, [16]).: _Suppose that \(u_{i}=P_{N_{i}}u\), for \(i=1,2,3\), satisfying \(N_{1}\geq N_{2}\geq N_{3}\geq 1\). Then there exists some \(\delta>0\) such that_ \[\|u_{1}u_{2}u_{3}\|_{L^{2}_{t,x}(I)}\lesssim\left(\frac{N_{3}}{N_{1}}+\frac{1}{N_{2}}\right)^{\delta}\|u_{1}\|_{Y^{0}(I)}\|u_{2}\|_{Z^{\prime}(I)}\|u_{3}\|_{Z^{\prime}(I)}.\] For dealing with the cubic term, we also need the following bilinear estimate.

**Lemma 3.2** (Bilinear estimate).: _Suppose that \(u_{i}=P_{N_{i}}u\), for \(i=1,2\), satisfying \(N_{1}\geq N_{2}\geq 1\). Then there exists some \(\kappa>0\) such that_ \[\|u_{1}u_{2}\|_{L^{2}_{t,x}(I)}\lesssim\left(\frac{N_{2}}{N_{1}}+\frac{1}{N_{2}}\right)^{\kappa}|I|^{\frac{1}{20}}\|u_{1}\|_{Y^{0}(I)}\|u_{2}\|_{Z^{\prime}(I)}.\]

Proof of Lemma 3.2.: For any cube \(C\) centered at \(\xi\in\mathbb{Z}^{3}\) of size \(N_{2}\), using Hölder's inequality and the Strichartz estimate (2.1), we have \[\|(P_{C}u_{1})u_{2}\|_{L^{2}_{t,x}(I)}\lesssim\|P_{C}u_{1}\|_{L^{4}_{t,x}(I)}\|u_{2}\|_{L^{4}_{t,x}(I)}\lesssim|I|^{\frac{1}{10}}\|P_{C}u_{1}\|_{L^{\frac{20}{3}}_{t,x}(I)}\|u_{2}\|_{L^{4}_{t,x}(I)}\] \[\lesssim|I|^{\frac{1}{10}}\|P_{C}u_{1}\|_{U^{\frac{20}{3}}_{\Delta}L^{2}_{x}(I)}\left(N_{2}^{\frac{3}{2}}\|u_{2}\|_{L^{4}_{t,x}(I)}\right)\lesssim|I|^{\frac{1}{10}}\|P_{C}u_{1}\|_{Y^{0}(I)}\left(N_{2}^{\frac{3}{2}}\|u_{2}\|_{L^{4}_{t,x}(I)}\right).\] Using the orthogonality and summability properties of \(Y^{0}(I)\) and the definition of \(Z(I)\), the above estimate provides \[\|u_{1}u_{2}\|_{L^{2}_{t,x}(I)}^{2}\lesssim|I|^{\frac{1}{5}}\sum_{C}\|P_{C}u_{1}\|_{Y^{0}(I)}^{2}\left(N_{2}^{\frac{3}{2}}\|u_{2}\|_{L^{4}_{t,x}(I)}\right)^{2}\lesssim|I|^{\frac{1}{5}}\|u_{1}\|_{Y^{0}(I)}^{2}\|u_{2}\|_{Z(I)}^{2}, \tag{3.1}\] where the sum is over all \(\xi\in N_{2}^{-1}\mathbb{Z}^{3}\).
It remains to prove \[\|u_{1}u_{2}\|_{L^{2}_{t,x}(I)}\lesssim\left(\frac{N_{2}}{N_{1}}+\frac{1}{N_{2 }}\right)^{\kappa_{0}}\|u_{1}\|_{Y^{0}(I)}\|u_{2}\|_{Y^{1}(I)} \tag{3.2}\] for some \(\kappa_{0}>0\), the desired claim follows then from interpolating (3.1) and (3.2) and the embedding \(X^{1}\hookrightarrow Y^{1}\). Again, using the orthogonality and summability properties of \(Y^{0}(I)\) and Strichartz estimate (2.1), we obtain that \[\|u_{1}u_{2}\|_{L^{2}_{t,x}(I)}^{2} \lesssim\sum_{C}\|(P_{C}u_{1})u_{2}\|_{L^{2}_{t,x}(I)}^{2}\lesssim \sum_{C}\left(N_{2}^{\frac{1}{2}}\|P_{C}u_{1}\|_{U^{4}_{\lambda}L^{2}_{x}(I)} \|u_{2}\|_{U^{4}_{\lambda}L^{2}_{x}(I)}\right)^{2}\] \[\lesssim\sum_{C}\left(N_{2}^{-\frac{1}{2}}\|P_{C}u_{1}\|_{Y^{0}(I )}\|u_{2}\|_{Y^{1}(I)}\right)^{2}\lesssim N_{2}^{-1}\|u_{1}\|_{Y^{0}(I)}^{2} \|u_{2}\|_{Y^{1}(I)}^{2},\] as desired. As a direct consequence of the multilinear estimates deduced from Lemmas 3.1 and 3.2, we immediately obtain the following nonlinear estimates. **Lemma 3.3** (Nonlinear estimates).: _For \(u_{k}\in X^{1}(I)\), \(k=1,2,3,4,5\), the following estimates_ \[\Big{\|}\prod_{i=1}^{5}\widetilde{u}_{i}\Big{\|}_{N^{1}(I)} \lesssim\sum_{\{i_{1},\ldots i_{5}\}=\{1,2,3,4,5\}}\|u_{i_{1}}\|_{X ^{1}(I)}\cdot\prod_{i_{k}\neq i_{1}}\|u_{i_{k}}\|_{Z^{\prime}(I)}, \tag{3.4}\] \[\Big{\|}\prod_{i=1}^{3}\widetilde{u}_{i}\Big{\|}_{N^{1}(I)} \lesssim|I|^{\frac{1}{50}}\sum_{\{i_{1},i_{2},i_{3}\}=\{1,2,3\}} \|u_{i_{1}}\|_{X^{1}(I)}\cdot\prod_{i_{k}\neq i_{1}}\|u_{i_{k}}\|_{Z^{\prime}( I)} \tag{3.3}\] _hold true, where \(\widetilde{u}\in\{u,\bar{u}\}\)._ Proof of Lemma 3.3.: (3.3) and (3.4) can be proved, words by words, by using the arguments from [16, Lem. 3.2] and [17, Lem. 3.2], respectively, which make use of Lemma 3.1 as well as Lemma 3.2. We thus omit the repeating arguments. Having all the preliminaries we are in a position to prove Theorem 1.1. Proof of Theorem 1.1.: We define the contraction mapping \[\Phi(u):=e^{it\Delta}u_{0}-i\mu_{1}\int_{0}^{t}e^{i(t-s)\Delta}|u|^{2}u\,ds-i \mu_{2}\int_{0}^{t}e^{i(t-s)\Delta}|u|^{4}u\,ds.\] We aim to show that by choosing \(\delta_{0}\) sufficiently small, the mapping \(\Phi\) defines a contraction on the metric space \[\mathcal{S}:=\{u\in X^{1}(I):\|u\|_{X^{1}(I)}\leq 2CE,\,\|u\|_{Z^{\prime}(I)}\leq 2 \delta\},\] where \(C\geq 1\) is some universal constant. The space \(\mathcal{S}\) is particularly a complete metric space equipping with the metric \(\rho(u,v):=\|u-v\|_{X^{1}(I)}\). First we show that for \(\delta\) small we have \(\Phi(\mathcal{S})\subset\mathcal{S}\). Indeed, using Lemma 3.3 we obtain \[\|\Phi(u)\|_{X^{1}(I)} \leq\|e^{it\Delta}u_{0}\|_{X^{1}(I)}+C\|u\|_{X^{1}(I)}\|u\|_{Z^{ \prime}(I)}^{2}+C\|u\|_{X^{1}(I)}\|u\|_{Z^{\prime}(I)}^{4}\] \[\leq C\|u_{0}\|_{H^{1}_{x}}+C(2CE)(2C\delta)^{2}+C(2CE)(2C\delta) ^{4}\] \[\leq CE(1+(2C)^{3}\delta^{2}+(2C)^{5}\delta^{4})\leq 2CE,\] \[\|\Phi(u)\|_{Z^{\prime}(I)} \leq\|e^{it\Delta}u_{0}\|_{Z^{\prime}(I)}+C\|u\|_{X^{1}(I)}\|u\|_{ Z^{\prime}(I)}^{2}+C\|u\|_{X^{1}(I)}\|u\|_{Z^{\prime}(I)}^{4}\] \[\leq\delta+C(2CE)(2C\delta)^{2}+C(2CE)(2C\delta)^{4}\leq 2\delta\] by choosing \(\delta\) sufficiently small. It is left to show that \(\Phi\) is a contraction for small \(\delta\). 
Again, using Lemma 3.3 we obtain \[\|\Phi(u)-\Phi(v)\|_{X^{1}(I)} \leq C(\|u\|_{X^{1}(I)}+\|v\|_{X^{1}(I)})(\|u\|_{Z^{\prime}(I)}+ \|v\|_{Z^{\prime}(I)})\|u-v\|_{X^{1}(I)}\] \[\quad+C(\|u\|_{X^{1}(I)}+\|v\|_{X^{1}(I)})(\|u\|_{Z^{\prime}(I)}+ \|v\|_{Z^{\prime}(I)})^{3}\|u-v\|_{X^{1}(I)}\] \[\leq C(4CE)(4C\delta+(4C\delta)^{3})\|u-v\|_{X^{1}(I)}\leq\frac{1 }{2}\|u-v\|_{X^{1}(I)}\] by choosing \(\delta\) small. This completes the proof of Theorem 1.1. ## 4. Proof of Theorem 1.2 In this section we prove Theorem 1.2. Again, without loss of generality, we may assume that \(|I|\leq 1\) and \(\mu_{2}=1\). The goal is therefore to show that (1.1) is well-posed on \(I\) without imposing the smallness condition (1.2). We firstly introduce the following large data Black-Box-Theory for defocusing quintic NLS on \(\mathbb{T}^{3}\) from [16]. **Theorem 4.1** (GWP of the defocusing quintic NLS on \(\mathbb{T}^{3}\), [16]).: _Consider the defocusing quintic NLS_ \[(i\partial_{t}+\Delta)v=|v|^{4}v \tag{4.1}\] _on \(I=(-T,T)\) with \(|I|\leq 1\). Then for any \(v_{0}\in H^{1}_{x}\), (4.1) possesses a unique solution \(v\in X^{1}(I)\) with \(v(0)=v_{0}\). Moreover, we have_ \[\|v\|_{X^{1}(I)}+\|v\|_{Z(I)}\leq C(M(v_{0}),E(v_{0}))<\infty. \tag{4.2}\] We are now ready to prove Theorem 1.2. Proof of Theorem 1.2.: Consider first a subinterval \(J=(a,b)\subset I\) and the difference NLS equation \[(i\partial_{t}+\Delta)w=\mu_{1}|v+w|^{2}(v+w)+|v+w|^{4}(v+w)-|v|^{4}v \tag{4.3}\] on \(J\) with \(w(a)=0\) and \(v\) a solution of (4.1) with \(v(a)=u(a)\). The proof of Theorem 1.2 for the interval \(J\) follows once we are able to prove that (4.3) possesses a unique solution \(w\in X^{1}(J)\). By (4.2) and the definition of the \(Z^{\prime}\)-norm, we may partition \(I\) into disjoint consecutive intervals \(I=\cup_{j=0}^{m}I_{j}\) with \(I_{j}=[t_{j},t_{j+1}]\) such that \[\|v\|_{Z^{\prime}(I_{j})}\leq\eta\] for some to be determined small \(\eta\). From now on we consider those \(I_{j}\) such that \(I_{j}\cap J\neq\varnothing\) (say \(m_{1}\leq j\leq m_{2}\)). Without loss of generality we may also assume that \(J=\cup_{j=m_{1}}^{m_{2}}I_{j}\). Suppose at the moment that for a given \(I_{j}\), the solution \(w\) satisfies \[\max\{\|w\|_{L^{\infty}_{t}H^{1}_{x}(I_{j})},\|w\|_{X^{1}(I_{j})}\}\leq(2C)^{j} |J|^{\frac{1}{20}}\] with some universal constant \(C>0\). We consider the contraction mapping \[\Gamma_{j}w:=e^{i(t-t_{j})\Delta}w(t_{j})-i\int_{t_{j}}^{t}e^{i(t-s)\Delta}( \mu_{1}|v+w|^{2}(v+w)+|v+w|^{4}(v+w)-|v|^{4}v)(s)\,ds\] on the set \[\mathcal{S}_{j}:=\{w\in X^{1}(I_{j}):\max\{\|w\|_{L^{\infty}_{t}H^{1}_{x}(I_{ j})},\|w\|_{X^{1}(I_{j})}\}\leq(2C)^{j}|J|^{\frac{1}{20}}\},\] which is a complete metric space with respect to the metric \[\rho(u_{1},u_{2}):=\|u_{1}-u_{2}\|_{X^{1}(I_{j})}.\] We show that by choosing \(\eta\) and \(|J|\) small, the mapping \(\Gamma_{j}\) defines a contraction on \(\mathcal{S}_{j}\). 
Using Strichartz estimates, Lemma 3.3, the embedding \(X^{1}\hookrightarrow Z^{\prime}\) and the inductive hypothesis \[\|w(t_{j})\|_{H^{1}_{x}}\leq(2C)^{j-1}|J|^{\frac{1}{20}}\] we obtain \[\max\{\|\Gamma_{j}w\|_{L^{\infty}_{t}H^{1}_{x}(I_{j})},\|\Gamma_{j }w\|_{X^{1}(I_{j})}\}\] \[\leq C\|w(t_{j})\|_{H^{1}_{x}}+\widetilde{C}\sum_{i=1}^{4}(\|w\|_{X^ {1}(I_{j})}^{5-i}\|v\|_{Z^{\prime}(I_{j})}^{i}+\|v\|_{X^{1}(I_{j})}\|w\|_{X^{1 }(I_{j})}^{5-i}\|v\|_{Z^{\prime}(I_{j})}^{i-1})\] \[\quad+\widetilde{C}\|w\|_{X^{1}(I_{j})}^{5}+\widetilde{C}|J|^{\frac {1}{20}}\|v+w\|_{X^{1}(I_{j})}\|v+w\|_{Z^{\prime}(I_{j})}^{2}\] \[\leq\left[C((2C)^{j-1}|J|^{\frac{1}{20}})\right]+\left[\widetilde {C}\sum_{i=1}^{3}((2C)^{j}|J|^{\frac{1}{20}})^{5-i}\eta^{i}+\widetilde{C}\|v\|_ {X^{1}(I)}\sum_{i=2}^{3}((2C)^{j}|J|^{\frac{1}{20}})^{5-i}\eta^{i-1}\] \[\quad+\widetilde{C}\|v\|_{X^{1}(I)}((2C)^{j}|J|^{\frac{1}{20}})^{ 4}+\widetilde{C}|J|^{\frac{1}{20}}((2C)^{j}|J|^{\frac{1}{20}})\eta^{2}\] \[\quad+\widetilde{C}|J|^{\frac{1}{20}}\|v\|_{X^{1}(I)}((2C)^{j}|J|^ {\frac{1}{20}})^{2}+\widetilde{C}|J|^{\frac{1}{20}}((2C)^{j}|J|^{\frac{1}{20}}) ^{3}\right]\] \[\quad+\left[\widetilde{C}|J|^{\frac{1}{20}}\|v\|_{X^{1}(I)}\eta^ {2}+\widetilde{C}((2C)^{j}|J|^{\frac{1}{20}})\eta^{4}+\widetilde{C}\|v\|_{X^{1 }(I)}((2C)^{j}|J|^{\frac{1}{20}})\eta^{3}\right]\] \[=:A_{1}+A_{2}+A_{3}\] for some \(\widetilde{C}>0\). We have \(A_{1}=\frac{1}{2}(2C)^{j}|J|^{\frac{1}{20}}\). By choosing \(\eta=\eta(\|v\|_{X^{1}(I)})=\eta(\|u(a)\|_{H^{1}_{x}})\) sufficiently small depending on \(\|u(a)\|_{H^{1}_{x}}\) we have \(A_{3}\leq\frac{1}{4}(2C)^{j}|J|^{\frac{1}{20}}\). For \(A_{2}\), we may choose \(|J|\leq\widetilde{\eta}\) with \(\widetilde{\eta}\) depending on \(0\leq j\leq m\) so that \(A_{2}\leq\frac{1}{4}(2C)^{j}|J|^{\frac{1}{20}}\) is valid for all \(j\). Indeed, the dependence of \(J\) on \(j\) can be expressed as on \(\|u(a)\|_{H^{1}_{x}}\) via \(j\leq m\leq C(\|v\|_{Z^{\prime}(I)})=C(\|u(a)\|_{H^{1}_{x}})\), where the last equality is deduced from Theorem 4.1. Similarly we are able to show that by shrinking \(\eta\) and \(\widetilde{\eta}\) if necessary, we have \[\|\Gamma_{j}(u_{1})-\Gamma_{j}(u_{2})\|_{X^{1}(I_{j})}\leq\frac{1}{2}\|u_{1}- u_{2}\|_{X^{1}(I_{j})}\] for all \(0\leq j\leq m-1\). The proof is analogous and we hence omit the details here. The claim then follows from the Banach fixed point theorem. Now we close our proof by removing the smallness of \(|J|\). By Lemma 2.10 we have \(\|u\|_{L^{\infty}_{t}H^{1}_{x}(I)}<\infty\). Thus we may choose \((\eta,\widetilde{\eta})=(\eta,\widetilde{\eta})(\|u\|_{L^{\infty}_{t}H^{1}_{ x}(I)})\) in a way such that the previous proof is valid for all \(J=[a,b]\) for any \(a\in I\) with \(|J|\leq\widetilde{\eta}\). We now partition \(I\) into disjoint consecutive subintervals \(I=\cup_{j=0}^{n}J_{j}\) with \(|J_{j}|\leq\widetilde{\eta}\) for all \(0\leq j\leq n\), and the proof follows by applying the previous step to each \(J_{j}\) and summing up.
```
On the three-dimensional torus $\mathbb{T}^3$, we prove small-data well-posedness for the periodic cubic-quintic nonlinear Schrödinger equation
$$ (i\partial_t +\Delta )u=\mu_1 |u|^2 u+\mu_2|u|^4 u\tag{CQNLS} $$
where $\mu_1$ and $\mu_2$ are nonzero real numbers.
```
2309.10718
DRIVE: Data-driven Robot Input Vector Exploration
An accurate motion model is a fundamental component of most autonomous navigation systems. While much work has been done on improving model formulation, no standard protocol exists for gathering empirical data required to train models. In this work, we address this issue by proposing Data-driven Robot Input Vector Exploration (DRIVE), a protocol that enables characterizing uncrewed ground vehicles (UGVs) input limits and gathering empirical model training data. We also propose a novel learned slip approach outperforming similar acceleration learning approaches. Our contributions are validated through an extensive experimental evaluation, cumulating over 7 km and 1.8 h of driving data over three distinct UGVs and four terrain types. We show that our protocol offers increased predictive performance over common human-driven data-gathering protocols. Furthermore, our protocol converges with 46 s of training data, almost four times less than the shortest human dataset gathering protocol. We show that the operational limit for our model is reached in extreme slip conditions encountered on surfaced ice. DRIVE is an efficient way of characterizing UGV motion in its operational conditions. Our code and dataset are both available online at this link: https://github.com/norlab-ulaval/DRIVE.
Dominic Baril, Simon-Pierre Deschênes, Luc Coupal, Cyril Goffin, Julien Lépine, Philippe Giguère, François Pomerleau
2023-09-19T16:02:23
http://arxiv.org/abs/2309.10718v2
# DRIVE: Data-driven Robot Input Vector Exploration ###### Abstract An accurate motion model is a fundamental component of most autonomous navigation systems. While much work has been done on improving model formulation, no standard protocol exists for gathering empirical data required to train models. In this work, we address this issue by proposing Data-driven Robot Input Vector Exploration (DRIVE), a protocol that enables characterizing uncrewed ground vehicles (UGVs) input limits and gathering empirical model training data. We also propose a novel learned slip approach outperforming similar acceleration learning approaches. Our contributions are validated through an extensive experimental evaluation, cumulating over 7 km and 1.8 h of driving data over three distinct UGVs and four terrain types. We show that our protocol offers increased predictive performance over common human-driven data-gathering protocols. Furthermore, our protocol converges with 46 s of training data, almost four times less than the shortest human dataset gathering protocol. We show that the operational limit for our model is reached in extreme slip conditions encountered on surfaced ice. DRIVE is an efficient way of characterizing UGV motion in its operational conditions. Our code and dataset are both available online at this link: [https://github.com/norlab-ulaval/DRIVE](https://github.com/norlab-ulaval/DRIVE). ## I Introduction The ability to model the motion of uncrewed ground vehicles (UGVs) is fundamental to enabling localization [1], path planning [2] and path following [3]. Poor vehicle-terrain characterization will lead to significant modeling errors, potentially causing system failure [4]. With limited available information and sensory measurements on vehicle and ground properties, generating a reliable UGV motion model remains challenging. For most models, training on empirical data is required to reduce modeling error [5]. This task requires deploying a UGV in its operational environment and manually drive it for an extended period [6]. Since energy consumption and deployment time are critical for various UGV applications, facilitating this task is of high importance. Additionally, standardizing this process could help engineers to ensure that their systems are satisfactory to norm ISO 34502:2022(E) on autonomous navigation.2 Footnote 2: “ISO 34502:2022(E): Road vehicles — Test scenarios for automated driving systems — Scenario-based safety evaluation framework*, 2022 Most work on UGV motion modeling relies on manual driving to gather a training dataset, with little to no details on the driving protocol. Thus, we propose the _Data-driven Robot Input Vector Exploration (DRIVE)_, a protocol aiming to facilitate and standardize vehicle characterization with respect to the terrain, as illustrated in Figure 1. We start by identifying the true vehicle's input space, differing from the manufacturer's specifications. We then automatically send commands to the UGV to cover the entire true input space. This differs from the common manual driving approach, which tends to cover only forward driving, as shown by the red dots representing our previous work [7]. We show that broad input-space coverage offers significant modeling error reduction compared to narrow coverage. With this dataset, we train a learned vehicle slip model that maps UGV commands to resulting body velocities. 
The resulting trained parameters vary significantly depending on terrain, as highlighted by the green and blue diamond areas in Figure 1, representing navigation on gravel and snow respectively. Fig. 1: Vehicle and terrain characterization done through DRIVE. The manufacturer-defined Naive input-space region is drawn in gray. The vehicle's true input-space, characterized through internal measurements, is shown in orange. Typical human driving is shown in red. The resulting body velocities are represented in green for gravel and blue for snow. The specific contributions of this paper are (i) DRIVE, a standardized UGV characterization and motion data generation protocol that allows training motion models on the entire vehicle input space; and (ii) a novel slip-based UGV motion prediction model, leveraging the accuracy of model-based approaches and the minimal system characterization requirement of learning-based approaches. We validate our contributions with an extensive experimental evaluation featuring three distinct UGVs, with weights ranging from \(75\,\mathrm{kg}\) to \(470\,\mathrm{kg}\), two types of ground interaction (i.e., wheels and tracks) and four different terrain types. Our observations rely on driving data totaling \(7\,\mathrm{km}\) and \(1.8\,\mathrm{h}\).

## II Related Work
Most vehicle motion modeling approaches can be divided into two distinct categories: model-based and learning-based. Both categories share the requirement of using empirical driving data to train their parameters and reduce modeling errors. For both categories, there exists no standardized protocol for training dataset generation. **Model-based approaches** can be split into two distinct categories: _kinematic_ and _dynamic_. Kinematic models remain the most popular for UGVs due to their low computational complexity and small number of parameters to train. For skid-steering mobile robots (SSMRs), Mandow _et al._ [8] reduced model prediction error by \(15\,\%\) compared to the manufacturer's model using a kinematic model that empirically identifies vehicle slip and skid. Seegmiller _et al._ [9] proposed a similar additive slip approach, computing slip based on kinematic quantities, yielding a prediction error reduction between \(70\,\%\) and \(90\,\%\) depending on terrain type, again compared to the manufacturer's model. Bussmann _et al._ [10] extended the experimental validation for additive slip approaches and showed similar performance for a \(900\,\mathrm{m}\) experiment on off-road terrain. On the other hand, dynamic models account for various forces acting on the vehicle's body. Seegmiller _et al._ [4] proposed a multi-body full dynamic motion model with a generic formulation based on vehicle geometry and properties. This work has been extended by Yang _et al._ [11], showing simulation errors of less than \(3.4\,\%\) for the vehicle slip ratio. While being more accurate than kinematic models, dynamic models require extensive vehicle characterization effort and expertise. For all the work mentioned above, empirical training data is acquired through a human driving the UGV with little to no guidelines, which motivates our standardized protocol. Alternatively, **learning-based approaches** have been explored in the literature, yielding more accurate models for extreme UGV motion. In these approaches, part of the prediction is done through a nominal model, often represented by a unicycle model, with a module that learns the system dynamics.
Gaussian processes (GPs) have become a popular approach to learn system dynamics, both for vehicle slip in off-road driving [12] and tire forces in high-speed road racing [13]. McKinnon _et al._ [14] have proposed a similar learning approach, replacing GPs with Bayesian linear regression (BLR). The lower computational complexity of BLR, when compared to GPs, makes it a more suitable approach for real-time UGV motion prediction. Alternatively, Djeumou _et al._ [15] have proposed a tire-force learning framework that enables autonomous drifting with \(3\,\mathrm{min}\) of driving data. Deep learning has also been explored for motion prediction in off-road terrain. Williams _et al._ [6] have shown the ability to perform aggressive driving when relying on a \(30\,\mathrm{min}\) training dataset. For increased resilience to sensor failure, Tremblay _et al._ [16] have proposed a multi-modal learned-dynamics model that leverages the various sensor measurements available for UGVs. Due to the importance of prediction uncertainty in enabling robust control for UGVs [3], this work focuses on BLR, which provides prediction uncertainty estimations [14]. Our novel _slip-based BLR_ model allows us to leverage the minimal requirements of learning-based approaches in terms of system characterization [14], as well as the better accuracy of model-based approaches [9]. In this work, the approach of McKinnon _et al._ [14] is used as a comparison point, as it is the closest to our model formulation. Although both model-based and learning-based approaches require empirical training data, only a few **dataset-gathering protocols** have been published. Voser _et al._ [17] have proposed to maintain a steady forward velocity while slowly increasing angular velocity, enabling generation of a quasi-steady-state empirical dataset. Wang _et al._ [18] have proposed a similar approach with a varying commanded curvature radius to empirically identify the relation between angular velocity and SSMR skid. These approaches only cover a small subset of the vehicle's input space. One can also find large, multimodal datasets that allow training and evaluating models for off-road and extreme driving [19]. However, such datasets overrepresent forward motion, are limited to a specific UGV and would require new training data for any new vehicle configuration. Manual training data gathering guidelines have been proposed by Williams _et al._ [6], asking the driver to vary their driving style. However, these remain time-consuming and subject to input space coverage bias. We demonstrate that training a motion model with the DRIVE protocol allows increased motion prediction performance and fast training dataset gathering.

## III Methodology and Theory
In this section, we provide details on DRIVE, our automated vehicle characterization and training dataset-gathering protocol. We then describe our proposed slip-based BLR motion model. Due to the limited number of UGVs accessible to us, we focus on SSMRs. The involved model variables are depicted in Figure 2. We limit the states of the vehicle to planar motion, such that the robot's state \({}^{\mathcal{G}}\mathbf{q}=[x,y,\theta]^{T}\) represents the pose of the vehicle in the global coordinate frame \(\mathcal{G}\). The robot's body frame \(\mathcal{R}\) has its \(x\) and \(y\) axes aligned with the vehicle's longitudinal and lateral directions, respectively.
For most SSMRs, the input vector is defined as \(\mathbf{u}=[\omega_{l},\omega_{r}]^{T}\), representing the left and right wheel angular velocities. State propagation, which computes the next state \(\mathbf{q}_{t+dt}\) from the current state \(\mathbf{q}_{t}\) and input \(\mathbf{u}\), is performed by expressing the resulting body velocity in the global frame \(\mathcal{G}\) and integrating it over the time step \(dt\): \[{}^{\mathcal{G}}\mathbf{q}_{t+dt}={}^{\mathcal{G}}\mathbf{q}_{t}+\mathbf{R}(\theta)\,{}^{\mathcal{R}}\dot{\mathbf{q}}_{t}\,dt,\] where \(\mathbf{R}(\theta)\) is the rotation from the body frame \(\mathcal{R}\) to the global frame \(\mathcal{G}\) and \({}^{\mathcal{R}}\dot{\mathbf{q}}_{t}\) is the resulting body velocity, which combines the commanded body velocity and the slip velocity defined in Section III-B2.

#### Iii-B1 Powertrain model
We model the response of each side's powertrain to a commanded wheel velocity with a first-order plus time delay model as used by Seegmiller _et al._ [5]: \[\begin{split}\hat{\omega}_{t}&=e^{\beta}\,\omega_{t_{0}}+\left(1-e^{\beta}\right)\tilde{\omega}_{t-\tau_{d}},\\ \beta&=-\frac{t-t_{0}-\tau_{d}}{\tau_{c}},\end{split} \tag{3}\] where \(\hat{\omega}\), \(\tilde{\omega}\) and \(\omega\) are the predicted, commanded and measured wheel velocities, respectively. We also define the initial time \(t_{0}\) and the prediction time \(t\). Here, the parameters that require characterization are the time constant \(\tau_{c}\) and the time delay \(\tau_{d}\). One should note that these parameters are not considered symmetrical in our protocol and are trained independently for both sides of SSMRs. Thus, our protocol can identify vehicle powertrain asymmetries.

#### Iii-B2 Body slip model
Next, we define a model that computes both the commanded body velocity \({}^{\mathcal{R}}\mathbf{f}_{t}\) and the resulting slip velocity \({}^{\mathcal{R}}\mathbf{s}_{t}\) with respect to the predicted input \(\tilde{\mathbf{u}}_{t}\). For SSMRs, the commanded body velocity \({}^{\mathcal{R}}\mathbf{f}_{t}(\tilde{\mathbf{u}}_{t})\) can be modeled through the ideal differential-drive model [8] as \[{}^{\mathcal{R}}\mathbf{f}_{t}(\tilde{\mathbf{u}}_{t})=\begin{bmatrix}f_{x}\\ f_{y}\\ f_{\omega}\end{bmatrix}=r\begin{bmatrix}\frac{1}{2}&\frac{1}{2}\\ 0&0\\ -\frac{1}{b}&\frac{1}{b}\end{bmatrix}\begin{bmatrix}\hat{\omega}_{l,t}\\ \hat{\omega}_{r,t}\end{bmatrix}, \tag{4}\] where \(r\) and \(b\) are the SSMR's wheel or track sprocket radius and vehicle width, respectively, as shown in Figure 2. We use the wheel velocities estimated through Equation 3 as the input vector \(\tilde{\mathbf{u}}_{t}\). We consider slip in each dimension of the vehicle separately, \({}^{\mathcal{R}}\mathbf{s}_{t}=[s_{x},s_{y},s_{\omega}]^{T}\), with the form \[s_{t}=\mathbf{\gamma}^{T}\,{}^{\mathcal{R}}\mathbf{x}_{t}+\eta, \tag{5}\] where \(\mathbf{\gamma}\in\mathbb{R}^{k}\) are the weights associated with each slip input and \(\eta\sim\mathcal{N}(0,\sigma^{2})\).
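To make the two building blocks above concrete, here is a minimal Python sketch of the powertrain response (Equation 3) and the ideal differential-drive mapping (Equation 4). It is not the DRIVE reference implementation; the function names, the guard for times before the delay has elapsed, and the numerical values of \(r\), \(b\), \(\tau_{c}\) and \(\tau_{d}\) are illustrative assumptions.

```python
import numpy as np

def predicted_wheel_velocity(t, t0, omega_0, omega_cmd, tau_c, tau_d):
    """First-order plus time delay powertrain response (Equation 3), one side."""
    if t <= t0 + tau_d:  # assumption: the command has not reached the wheel yet
        return omega_0
    beta = -(t - t0 - tau_d) / tau_c
    return np.exp(beta) * omega_0 + (1.0 - np.exp(beta)) * omega_cmd

def commanded_body_velocity(omega_l, omega_r, r, b):
    """Ideal differential-drive model (Equation 4): wheel rates -> [f_x, f_y, f_omega]."""
    J = r * np.array([[0.5, 0.5],
                      [0.0, 0.0],
                      [-1.0 / b, 1.0 / b]])
    return J @ np.array([omega_l, omega_r])

# Example: step commands of 8 rad/s (left) and 6 rad/s (right), evaluated 1 s later.
omega_l = predicted_wheel_velocity(t=1.0, t0=0.0, omega_0=0.0, omega_cmd=8.0,
                                   tau_c=0.3, tau_d=0.1)
omega_r = predicted_wheel_velocity(t=1.0, t0=0.0, omega_0=0.0, omega_cmd=6.0,
                                   tau_c=0.3, tau_d=0.1)
print(commanded_body_velocity(omega_l, omega_r, r=0.3, b=1.2))
```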
We draw inspiration from off-road vehicles dynamics work in the literature to define dynamics-aware basis functions for vehicle slip [9]. As shown by Seegmiller _et al._[4], the following set of basis functions to estimate vehicle slip shows similar performance as fully dynamic models in off-road terrain. Firstly, for longitudinal slip \({}^{\mathcal{R}}\mathbf{s}_{x}\), we use the vehicle's rolling resistance, proportional to commanded body longitudinal velocity \({}^{\mathcal{R}}\mathbf{x}_{x}=f_{x}\). Secondly, for lateral slip \({}^{\mathcal{R}}\mathbf{s}_{y}\), we use centrifugal force \({}^{\mathcal{R}}\mathbf{x}_{y}=\psi=(f_{x}f_{w})\), proportional to commanded longitudinal and angular velocities. Thirdly, for angular slip \({}^{\mathcal{R}}\mathbf{s}_{w}\), we use three distinct slip learning inputs \({}^{\mathcal{R}}\mathbf{x}_{w}=[\psi,f_{x},f_{\omega}]\). The first angular slip input is the vehicle's centrifugal force \(\psi\). We then add UGV asymmetry, which can be caused by manufacturing imperfections and mechanical wear, causing angular velocity error proportional to commanded longitudinal velocity \(f_{x}\). Finally, we account for the vehicle's skid, leading to an error between commanded angular velocity and actual angular velocity \(f_{w}\). It should be noted that the vehicle gravity-dependent parameters, used by Seegmiller _et al._[9], are missing in this work. The reason is that we simplify our calibration protocol to be executed on planar terrain. The remainder of this section describes how we learn slip for a single dimension, but the process is the same for all dimensions of slip. We use Bayesian linear regression (BLR) to estimate the values for \(\mathbf{\gamma}\) and \(\sigma^{2}\). For a more in-depth explanation of BLR, refer to the book written by Murphy [22]. It can be shown that the posterior for learned parameters \(p(\mathbf{\gamma},\sigma^{2}|\mathcal{D}_{\text{d}})\) is distributed according to a Normal Inverse Gamma distribution \(\text{NIG}(\mathbf{\gamma},\sigma^{2}|\mathbf{\gamma},\mathbf{K},a,b)\), where \[\begin{split}\mathbf{\gamma}&=\mathbf{K}\left(\mathbf{K}_{0}^{-1} \mathbf{\gamma}_{0}+\mathbf{X}^{T}\mathbf{s}\right),\\ \mathbf{K}&=(\mathbf{K}_{0}^{-1}+\mathbf{X}^{T}\mathbf{X})^{-1},\\ a&=a_{0}+\frac{n}{2},\\ b&=b_{0}+\frac{1}{2}\left(\mathbf{\gamma}_{0}^{T}\mathbf{K}_{0}^{-1} \mathbf{\gamma}_{0}+\mathbf{s}^{T}\mathbf{s}-\mathbf{\gamma}^{T}\mathbf{K}^{-1}\mathbf{\gamma}\right), \end{split} \tag{6}\] where the estimated covariance of the distribution is represented by \(\mathbf{K}\in\mathbb{R}^{k\times k}\). Priors for all parameters are defined by the \((\cdot)_{0}\) subscript. We define \(\mathcal{D}_{\text{d}}=\{\mathbf{X},\mathbf{s}\}\) as a training dataset consisting of vectors of \(n\) concatenated observed values for slip inputs \(\mathbf{X}\) and observed slip velocities \(\mathbf{s}\) for a specific dimension. The posterior equations can be used to train the BLR slip model for each dimension based on a training dataset \(\mathcal{D}_{\text{d}}\). 
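As an illustration of how the posterior update in Equation (6), and the Student-t prediction that follows from it, translate into code, the following sketch implements them with NumPy for a single slip dimension. This is a hypothetical helper, not the released DRIVE code; variable names follow the notation above, and the prior arguments default to the uninformative choice discussed at the end of this section (one would pass `gamma_0 = np.zeros(k)` and `K_0 = phi * np.linalg.inv(X.T @ X)` for some positive `phi`).

```python
import numpy as np

def blr_posterior(X, s, gamma_0, K_0, a_0=0.0, b_0=0.0):
    """Normal-Inverse-Gamma posterior update of Equation (6) for one slip dimension."""
    n = X.shape[0]
    K0_inv = np.linalg.inv(K_0)
    K_inv = K0_inv + X.T @ X
    K = np.linalg.inv(K_inv)
    gamma = K @ (K0_inv @ gamma_0 + X.T @ s)
    a = a_0 + 0.5 * n
    b = b_0 + 0.5 * (gamma_0 @ K0_inv @ gamma_0 + s @ s - gamma @ K_inv @ gamma)
    return gamma, K, a, b

def blr_predict(X_new, gamma, K, a, b):
    """Student-t predictive mean, scale matrix and degrees of freedom for new slip inputs."""
    mean = X_new @ gamma
    scale = (b / a) * (np.eye(X_new.shape[0]) + X_new @ K @ X_new.T)
    return mean, scale, 2.0 * a
```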
Once the model is trained, we can predict vehicle slip based on \(m\) test inputs \(\tilde{\mathbf{X}}\in\mathbb{R}^{m\times k}\): \[p\left(\tilde{\mathbf{s}}|\tilde{\mathbf{X}},\mathcal{D}_{\text{d}}\right)=\mathcal{T}\left(\tilde{\mathbf{s}}|\tilde{\mathbf{X}}\mathbf{\gamma},\frac{b}{a}\left(\mathbf{I}_{m}+\tilde{\mathbf{X}}\mathbf{K}\tilde{\mathbf{X}}^{T}\right),2a\right), \tag{7}\] where \(\mathcal{T}\) is a Student's t-distribution and \(\tilde{\mathbf{s}}\) represents a vector of \(m\) concatenated predicted slip velocities for a specific direction. In this work, we use an uninformative prior to ensure our protocol requires as little expertise as possible to execute. This consists of setting \(a_{0}=b_{0}=0\), \(\mathbf{\gamma}_{0}=\mathbf{0}\) and \(\mathbf{K}_{0}=\phi(\mathbf{X}^{T}\mathbf{X})^{-1}\) for any positive value \(\phi\). This allows us to initialize our slip-based training model with little knowledge of the UGV except for the wheel radius \(r\) and vehicle width \(b\).

## IV Results
In this section, we evaluate the improvement of motion prediction accuracy when training models with the DRIVE protocol. We also analyze the amount of training data required to reach convergence with our model. Finally, we demonstrate that for off-road navigation of SSMRs, learning vehicle slip based on dynamics-aware basis functions is more accurate than learning vehicle acceleration. Fig. 3: Commanded, encoder-measured and modeled wheel velocities for both sides of an SSMR during two DRIVE training intervals. The powertrain model is described in Section III-B1. Each training step consists of one transient-state window (in light gray) and two steady-state windows (in dark gray). Commands and measurements on the x-axis are acquired at a rate of \(20\,\mathrm{Hz}\).

### _Experimental Setup_
We have conducted an extensive experimental evaluation of our calibration protocol and novel slip-based BLR model. Three distinct UGV platforms were used, as shown in Figure 4. First, we tested on a _Clearpath Robotics_ Warthog on wheels, weighing \(470\,\mathrm{kg}\), on gravel-covered terrain and on an ice rink. The ice rink was leveled and recently resurfaced, leading to extreme vehicle slip. Next, we tested on smaller platforms, namely a wheeled _Clearpath Robotics_ Husky, weighing \(75\,\mathrm{kg}\), and a tracked _Superdroid_ HD2, weighing \(80\,\mathrm{kg}\), both on indoor tile and snow-covered terrain. The Warthog has a top speed of \(5\,\mathrm{m/s}\), which is around five times that of the HD2 at \(1.2\,\mathrm{m/s}\) and of the Husky at \(1\,\mathrm{m/s}\). These platforms and terrains were selected to maximize the difference in properties between experiments. For all platforms, localization ground truth is estimated through point-cloud registration with the iterative closest point (ICP) algorithm. This localization approach was selected to provide a common, centimeter-accurate ground truth [23] across all indoor and outdoor experiments. The localization system for the Husky and HD2 robots is described in [24] and for the Warthog in [25]. For every experiment, the recorded data was split into two halves, the training dataset and the evaluation dataset, to enable extensive model evaluation. Our experimental dataset totals over \(7\,\mathrm{km}\) and \(1.8\,\mathrm{h}\) of driving data across all platforms and terrain types.

### _Protocol performance analysis_
First, we define a function to evaluate model prediction performance.
While learned models are trained on single-step vehicle slips or accelerations, our goal is to use them to predict vehicle motion over a specific horizon. One should note that we train our model on single-step slip velocities to simplify the learning problem. Williams _et al._ [6] showed that this simplification allows sufficient prediction performance for high-speed UGV path following. Thus, we use the multi-step root mean squared error (MRMSE) \(\epsilon\) to evaluate prediction errors [14], with our localization as ground truth: \[\epsilon=\frac{1}{h}\sum_{j=1}^{h}\sqrt{\left({}^{\mathcal{G}}\mathbf{q}_{j}-{}^{\mathcal{G}}\hat{\mathbf{q}}_{j}\right)^{T}\left({}^{\mathcal{G}}\mathbf{q}_{j}-{}^{\mathcal{G}}\hat{\mathbf{q}}_{j}\right)},\] where \(h\) is the number of steps in the prediction horizon, \({}^{\mathcal{G}}\mathbf{q}_{j}\) is the ground-truth pose and \({}^{\mathcal{G}}\hat{\mathbf{q}}_{j}\) is the predicted pose at step \(j\); the error is computed separately for the translational and rotational components. Figure 6 shows the resulting prediction errors for all models and experiments, from indoor tile to gravel, snow, and ice. The rightmost results combine the prediction errors for all experiments conducted in this work. We also show the performance of the model provided by the manufacturers (i.e., Naive) and the improvement obtained through powertrain modeling. When accounting for all datasets, we observe a \(34\,\%\) decrease in the translation prediction error median and a \(38\,\%\) decrease in the rotation prediction error median when comparing the naive model with the powertrain-aware model. Also, our slip-based BLR approach leads to a \(22\,\%\) decrease in the rotation prediction error median and a \(6\,\%\) decrease in the translation prediction error median when compared to acceleration-based BLR. Looking at specific experiments, the Warthog on gravel shows the largest improvement between our slip BLR and acceleration BLR, with \(71\,\%\) in the rotation error median and \(23\,\%\) in the translation error median. In contrast, the HD2 on tile experiment shows a performance decrease for acceleration BLR and similar performance for slip BLR when compared to the powertrain model. Indeed, the indoor tile ground already led to a low prediction error for the powertrain-aware model. Lastly, the ice rink experiment shows similar performance between slip and acceleration BLR. This experiment corresponds to extreme slip, similar to a UGV driving over black ice for an extended duration. This result shows the limit of our slip-based BLR model, which still performs similarly to or better than the other models. In this case, dynamic modeling could improve performance. Overall, we conclude that slip-based BLR offers improved performance for rotation prediction and similar performance for translation prediction over acceleration-based BLR, especially for driving at higher velocities on off-road terrain. For SSMRs in particular, rotation motion is the highest source of error due to the complexity of wheel-terrain skidding interactions [7], justifying the significance of our model. Moreover, generating the training data is time- and energy-consuming, which leads us to look for a trade-off between calibration duration and model prediction accuracy. Thus, we evaluated the relationship between training driving time and prediction accuracy. The results are shown in Figure 7. Three distinct experiments are presented, notably the Husky on snow, the HD2 on tile and the Warthog on gravel. No other experiment is shown, to avoid cluttering, but similar results were observed. As specified in Section III-B2, an uninformative prior is used for every platform, explaining the initially high errors.
As shown by the red vertical line in Figure 7, the prediction accuracy stabilizes after \(46\,\mathrm{s}\) of driving time for all robots. To compute this time, we evaluated the error gradient with respect to calibration time. We then evaluated the maximum time for which the translational and rotational error values for all shown experiments were under \(0.01\,\mathrm{m/s}\) and \(0.01\,\mathrm{rad/s}\), respectively, indicating that all models had converged. Thus, users of the DRIVE protocol can expect that, for SSMRs, the slip-based BLR motion model has converged after \(46\,\mathrm{s}\) of training data, which is almost four times shorter than \(180\,\mathrm{s}\), the shortest training time observed in other work [15].

## V Conclusion
In this paper, we proposed _Data-driven Robot Input Vector Exploration (DRIVE)_, an automated vehicle characterization and training data generation protocol. We also proposed a novel UGV motion prediction model called slip-based BLR. We showed that training our model with our protocol offers improved prediction performance compared to common training approaches and similar learning-based models. We also showed that with our protocol, model convergence is reached with four times less driving time than the shortest similar protocol. We conclude that our protocol represents an efficient option for generating an initial motion model for UGVs. Future work includes generalizing our protocol to any vehicle geometry (e.g., Ackermann steering) and adapting our model formulation to complete dynamic models for extreme slip situations such as driving on surfaced ice. Adaptive modeling, relying on DRIVE to provide the initial training, should also be investigated. Fig. 6: Translational and rotational prediction errors for all models studied in this work. In yellow is the manufacturer-defined naive model, in orange is the powertrain-aware model described in Section III-B1, in red is the acceleration-based BLR model and in purple is our slip-based BLR model. Fig. 7: The relation between training time and our slip-based BLR model prediction performance, for translation and rotation. Three datasets are shown, namely the Husky on snow in teal, the HD2 on tile in pink and the wheeled Warthog on gravel in blue. The red line at \(46\,\mathrm{s}\) highlights the time at which our model has converged for all UGVs tested. For all subplots, both axes are in log scale.
An accurate motion model is a fundamental component of most autonomous navigation systems. While much work has been done on improving model formulation, no standard protocol exists for gathering the empirical data required to train models. In this work, we propose Data-driven Robot Input Vector Exploration (DRIVE), a protocol that enables characterizing the input limits of uncrewed ground vehicles (UGVs) and gathering the empirical data needed for model training. We also propose a novel learned slip approach that outperforms similar acceleration learning approaches. These contributions are validated through an extensive experimental evaluation covering over 7 km and 1.8 h of driving data on three distinct UGVs and four terrain types. This shows that our protocol offers improved predictive performance over common human-driven data-gathering protocols.