2304.13136.pdf

Generating Molecular Fragmentation Graphs with Autoregressive Neural Networks

Samuel Goldman, Computational and Systems Biology, MIT, Cambridge, MA 02139, [email protected]
Janet Li, Computer Science, Harvard College, Cambridge, MA 02138, [email protected]
Connor W. Coley, Chemical Engineering; Electrical Engineering and Computer Science, MIT, Cambridge, MA 02139, [email protected]

Abstract

The accurate prediction of tandem mass spectra from molecular structures has the potential to unlock new metabolomic discoveries by augmenting the community's libraries of experimental reference standards. Cheminformatic spectrum prediction strategies use a bond-breaking framework to iteratively simulate mass spectrum fragmentations, but these methods are (a) slow, due to the need to exhaustively and combinatorially break molecules, and (b) inaccurate, as they often rely upon heuristics to predict the intensity of each resulting fragment; neural network alternatives mitigate computational cost but are black-box and not inherently more accurate. We introduce a physically-grounded neural approach that learns to predict each breakage event and score the most relevant subset of molecular fragments quickly and accurately. We evaluate our model by predicting spectra from both public and private standard libraries, demonstrating that our hybrid approach offers state-of-the-art prediction accuracy, improved metabolite identification from a database of candidates, and higher interpretability when compared to previous breakage methods and black-box neural networks. The grounding of our approach in physical fragmentation events shows especially high promise for elucidating natural product molecules with more complex scaffolds.

1 Introduction

Identifying unknown molecules in complex metabolomic or environmental samples is of critical importance to biologists, forensic scientists, and ecologists alike. Tandem mass spectrometry (MS/MS) is the standard analytical chemistry method for analyzing such samples, favored for its speed and sensitivity. In brief, MS/MS metabolomics experiments isolate, ionize, and fragment small molecules, resulting in a characteristic spectrum for each in which peaks correspond to molecular sub-fragments (Fig. 1A). Importantly, these experiments are high throughput, leading to thousands of detected spectra per single experiment for complex samples such as human serum. The most straightforward way to identify an unknown molecule from its fragmentation spectrum is to compare the spectrum to a library of known standards. However, spectral libraries only contain on the order of 10^4 compounds, a drop in the bucket compared to the vast size of biologically-relevant chemical space, often cited to be as large as 10^60. Of the many tandem spectra deposited into a large community library, 87% still cannot be annotated. The accurate prediction of mass spectra from molecular structures would enable these libraries to be augmented with hypothetical compounds and significantly advance the utility of mass spectrometry for structural elucidation. This paradigm of comparing unknown spectra to putative spectra is well established in the adjacent field of proteomics due to the ease of predicting protein fragmentations.
[Figure 1: ICEBERG enables the prediction of tandem mass spectra by efficiently navigating the space of possible fragmentation events. A. Example experimental mass spectrum. An input molecule, benzocaine, is depicted entering a mass spectrometer collision cell and fragmenting. The observation of the resulting charged fragments results in a characteristic spectrum. B. A combinatorial mass spectrum simulation. The root molecule, benzocaine, is iteratively fragmented by removing atoms or breaking bonds, resulting in a large fragmentation tree. Heuristic rules score nodes in the tree to predict intensities. C. ICEBERG spectrum simulation. ICEBERG learns to generate only the most relevant substructures. After generating fragments, a neural network module scores the resulting fragments to predict intensities.]

Because tandem mass spectrometry experiments physically break covalent bonds in a process known as collision-induced dissociation (CID) to create fragments, simulating such fragmentation events computationally is a natural strategy for prediction. Tools from the last decade including MetFrag, MAGMa, and CFM-ID [2, 37] use fragmentation rules (based on removing atoms or bonds) and local scoring methods to (a) enumerate molecular fragmentation trees and (b) estimate the intensity at each node in the tree with a mix of heuristic rules and statistical learning (Fig. 1B). However, these combinatorial methods are computationally demanding and often make inaccurate predictions by overestimating the possible fragments (Fig. 1B, bottom). We recently found CFM-ID to be far less accurate than black-box neural networks, an observation separately confirmed by Murphy et al. Further, current learned fragmentation models are not easily adapted or scaled to new datasets; Murphy et al. estimate it would take the leading fragmentation approach, CFM-ID, approximately three months on a 64-core machine to train on a ~300,000 spectrum dataset.

Alternative strategies that utilize black-box neural networks to predict MS/MS spectra have been attempted. They encode an input molecule (i.e., as a fingerprint, graph, or 3D structure) and predict either a 1D binned representation of the spectrum [17, 40, 45, 46] or a set of output formulae corresponding to peaks in the spectrum [16, 26, 47]. While we have demonstrated that predicting chemical formulae provides a fast, accurate, and interpretable alternative to binned-representation approaches, the improved accuracy surprisingly did not directly translate to better database retrieval for complex natural product molecules contained within the Global Natural Products Social (GNPS) database. We hypothesized that combining the flexibility of neural networks to learn from experimental MS/MS data in reference libraries with the structural bias of combinatorial fragmentation approaches could lead to increased prediction performance on complex natural product molecules.

Herein, we introduce a hybrid strategy for simulating molecular fragmentation graphs using neural networks: Inferring Collision-induced-dissociation by Estimating Breakage Events and Reconstructing their Graphs (ICEBERG). ICEBERG is a two-part model that simulates probable breakage events (Generate) and scores the resulting fragments using a Transformer architecture (Score) (Fig. 1C; details in Fig. 2).
Our core computational contribution is to leverage previous exhaustive cheminformatics methods for the same task, specifically MAGMa, in order to build a training dataset from which our model learns to make fast estimates prioritizing only likely bond breakages. In doing so, we lift MAGMa and previous bond-breaking approaches into a neural network space with demonstrable benefits in performance.

[Figure 2: Overview of ICEBERG. A. The target fragmentation directed acyclic graph (DAG) for an example molecule M, benzocaine. Fragments are colored in black with missing substructures in gray. B. Example illustration of the generative process at a single step in the DAG generation, predicting subfragments of S^(2). The root molecule M, the fragment of interest S^(2), and a context vector C are encoded and used to predict fragment probabilities at each atom of the fragment of interest. A sample disconnection is shown at atom a_2, resulting in fragment S^(7). C. ICEBERG Score module. Fragments generated from A are encoded alongside the root molecule. A Set Transformer module predicts intensities for each fragment, allowing mass changes corresponding to the loss or gain of hydrogen atoms, resulting in the final predicted mass spectrum.]

We evaluate ICEBERG on two datasets: NPLIB1 (GNPS data as used to train the CANOPUS model) and NIST20, which test the model's ability to predict both complex natural products and small organic standard molecules, respectively. We find that ICEBERG increases the cosine similarity of predicted spectra by over 0.09, a 17% improvement over a recent state-of-the-art method, on NPLIB1 data. When used to identify molecules in retrospective retrieval studies, ICEBERG leads to 47% and 10% improvements in top 1 retrieval accuracy on the two datasets compared to the next best model tested. ICEBERG is fully open-sourced, with pretrained weights alongside other existing prediction baseline methods available on GitHub at https://github.com/samgoldman97/ms-pred .

2 Results

2.1 ICEBERG is trained as a two-stage generative and scoring model

Learning to generate likely substructures. ICEBERG simulates a mass spectrum by generating the substructure fragments of an initial molecule that are most likely to be generated by collision-induced dissociation and subsequently measured in the mass spectrometer. We define an input molecule M (benzocaine example shown in Fig. 2A) and its observed spectrum Y, which is a set of intensities at various mass-to-charge values (m/z), termed peaks. Each peak represents one or more observed molecular fragments. A core question is then how to generate the set of potential fragments. These fragments can be sampled from the many possible substructure options, S^(i) = (N^(i), E^(i)) ⊆ M, where the sets of nodes and edges in substructures are subsets of the atoms and bonds in the original molecule, M = (N, E).
Most often, this sampling is accomplished by iteratively and exhaustively removing edges or atoms from the molecular graph, creating a fragmentation graph T = (S, E), where all the nodes in this graph are themselves substructures of the original molecule, S = {S^(0), S^(1), ..., S^(|T|)} [2, 33, 43] (Fig. 1B). However, such a combinatorial approach leads to thousands of molecular fragments, making this procedure slow and complicating the second step of estimating intensity values for all enumerated fragments.

We eschew combinatorial generation and instead leverage a graph neural network to parameterize breakage events of the molecule, defining the Generate module of ICEBERG (Fig. 2A,B). Generate predicts the fragmentation graph iteratively, beginning with just the root of the graph, S^(0) = M, borrowing ideas from autoregressive tree generation [4, 14]. At each step in the iterative expansion, the model g_Generate assigns a probability of fragmentation p(F[S^(i)_j]) to each atom j in the current substructure fragment S^(i). Learned atom embeddings are concatenated alongside embeddings of the root molecule and a context vector C containing metadata, such as the ionization adduct type, in order to make this prediction. An illustrative example can be seen for fragment S^(2) in Figure 2B. Atom a_2 has the highest predicted probability, so this atom is then removed from the graph, leading to the subsequent child node S^(7) (Fig. 2B). Importantly, the number of child fragments is determined by how many disjoint molecular graphs form upon removal of the j-th atom from the molecular graph; in this example, fragments S^(1) and S^(4) originate from the same fragmentation event of S^(0) (Fig. 2A). In this way, ICEBERG predicts breakages at the level of each atom, following the convention of MAGMa, rather than each bond, as is the convention with CFM-ID. We strategically use this abstraction to ensure that all fragmentation events lead to changes in heavy-atom composition. We refer the reader to Methods 4.3 for a full description of the model g_Generate(M, S^(i), C)_j, graph neural network architectures, and context vector inputs.

While this defines a neural network for generation, we must also specify an algorithm for how to train this network. Spectral library datasets contain only molecule and spectrum pairs, but not the directed acyclic graph (DAG) T of the molecule's substructures that generated the spectrum. We infer an explanatory substructure identity for each peak for model training by leveraging previous combinatorial enumeration methods, specifically MAGMa. For each training molecule and spectrum pair (M, Y), we modify MAGMa to enumerate all substructures of M up to a depth of 3 sequential fragmentation events. We filter enumerated structures to include only those with m/z values appearing in the final spectrum, thereby defining a dataset suitable for training ICEBERG Generate (4.2). As a result, each paired example (M, Y) in the training dataset is labeled with an estimated fragmentation DAG. Generate learns from these DAGs to generate only the most relevant and probable substructures for a molecule of interest (4.3).
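To make the atom-removal abstraction concrete, the sketch below enumerates the child fragments produced by deleting a single atom, using RDKit. This is an illustrative helper under our own naming, not the authors' implementation; the filter keeping only fragments with more than 2 heavy atoms follows the rule stated in Section 4.2.

```python
from rdkit import Chem

def children_after_atom_removal(mol, atom_idx):
    """Delete atom `atom_idx` and return the disjoint fragments that result.
    Multiple children appear exactly when the removal disconnects the graph,
    mirroring the atom-level breakage convention described above."""
    editable = Chem.RWMol(mol)
    editable.RemoveAtom(atom_idx)
    frags = Chem.GetMolFrags(editable.GetMol(), asMols=True,
                             sanitizeFrags=False)
    # Keep only fragments with more than 2 heavy atoms (Section 4.2 rule).
    return [f for f in frags if f.GetNumHeavyAtoms() > 2]

mol = Chem.MolFromSmiles("CCOC(=O)c1ccc(N)cc1")  # benzocaine
frags = children_after_atom_removal(mol, 0)      # remove one terminal carbon
```

Note that `sanitizeFrags=False` is needed because fragments with broken valences generally fail RDKit sanitization, a point the authors also raise in Appendix A.2.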
Predicting substructure intensities. After generating a set of potential substructure fragments, we employ a second module, ICEBERG Score, to predict their intensities (Fig. 2C). Importantly, this design decision enables our models to consider two important physical phenomena: (i) neutral losses and (ii) mass shifts due to hydrogen rearrangements and isotope effects.

Because we elect to fragment molecules at the level of atoms (4.3), multiple substructures can result from a single fragmentation event. In physical experiments, not all of these substructure fragments will be observed; when fragmentation events occur in the collision cell, one fragment often retains the charge of the parent while the other is uncharged and therefore undetected, termed a neutral loss. By deferring prediction of intensities to a second module, Generate need not predict or track whether structures are ionized, greatly reducing the complexity of the fragmentation DAG.

In addition to the occurrence of neutral losses, molecules often undergo complex rearrangements in the collision cell, leading to bond order promotions or reductions (e.g., spurious formation of double bonds when a single bond breaks to maintain valence), the most classic of which is the McLafferty rearrangement [6, 25]. While other approaches attempt to model and estimate where these rearrangements occur using hand-crafted rules, we instead adopt the framework of Ridder et al. to consider hydrogen tolerances. That is, for each generated molecular substructure S^(i), we consider the possibility that this fragment is observed not only at its mass, but also at masses shifted by discrete numbers δ of hydrogen masses H. This design choice also simplifies Generate by deferring specification of hydrogen counts to the second model. In addition to accounting for mass shifts of ±1 hydrogen, such flexibility also allows the model to predict the common M+1 isotopes for carbon- and nitrogen-containing compounds. Mathematically, we define a neural network g_Score that predicts multiple intensities y^(i)_δ for each fragment, corresponding to different hydrogen shifts δ:

$$ y^{(i)}_{\delta} = g_{\text{Score}}\big(M, S^{(i)}, \mathcal{T}, C\big)_{\delta} \tag{1} $$

In practice, we predict up to 13 intensities at each fragment (i.e., {0H, ±1H, ..., ±6H}). For each individual subfragment, the tolerance is further restricted to the number of bonds broken, most often less than 6. We then take the masses of all fragments, perturb them by the corresponding hydrogen or isotope shifts, and aggregate them into a set of unique m/z peaks by summing the intensities of perturbed fragments with the same m/z value. To consider all fragments simultaneously in a permutation-invariant manner, g_Score is parameterized as a Set Transformer network [22, 36]. We train this second module to maximize the cosine similarity between the ground truth spectrum and the predicted spectrum after converting the set of substructures and intensities to m/z peaks.

At test time, we generate the top 100 most likely fragments from ICEBERG Generate and predict intensities for these fragments and their possible hydrogen shifts using ICEBERG Score. We find this tree size allows our model to consider sufficiently many potential fragments while maintaining a speed advantage over previous fragmentation approaches.
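The perturb-and-aggregate step just described can be illustrated with a small sketch; the flat array layout, rounding precision, and monoisotopic hydrogen mass are our own assumptions.

```python
import numpy as np

H_MASS = 1.00782503  # monoisotopic hydrogen mass, Da

def aggregate_peaks(frag_masses, shift_intensities, max_shift=6, decimals=4):
    """Collapse per-fragment, per-hydrogen-shift intensities into a peak list.
    frag_masses: (F,) array of fragment masses; shift_intensities: (F, 13)
    array whose columns correspond to shifts -max_shift..+max_shift.
    Intensities of perturbed fragments landing on the same m/z are summed."""
    peaks = {}
    shifts = np.arange(-max_shift, max_shift + 1)
    for mass, row in zip(frag_masses, shift_intensities):
        for delta, inten in zip(shifts, row):
            mz = round(mass + delta * H_MASS, decimals)
            peaks[mz] = peaks.get(mz, 0.0) + float(inten)
    return sorted(peaks.items())
```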
2.2 ICEBERG enables highly accurate spectrum prediction

[Figure 3: ICEBERG predictions are highly accurate. A. Cosine similarities to true spectra on NPLIB1 (left) and NIST20 (right) for CFM-ID, NEIMS (FFN), NEIMS (GNN) [16, 40], SCARF, and ICEBERG. B. Time required to predict spectra for 100 molecules randomly sampled from NIST20 on a single CPU, including the time to load models into memory. C, D. Comparison of NPLIB1 and NIST20 molecules in terms of synthetic accessibility (SA) score and molecular weight (Mol. weight).]

We evaluate ICEBERG on its ability to accurately simulate positive ion mode mass spectra for both natural-product-like molecules and smaller organic molecules under 1,500 Da. Using the data cleaning pipeline from our previous work, we compile a public natural products dataset, NPLIB1, with 10,709 spectra (8,533 unique structures) [9, 15, 38], as well as a gold standard chemical library, NIST20, with 35,129 spectra (24,403 unique structures). We note that NPLIB1 was previously named "CANOPUS"; it is renamed here to disambiguate the data from the tool CANOPUS. Both datasets are split into structurally disjoint 90%/10% train-test splits, with 10% of the training data reserved for model validation (4.1). To measure performance, we calculate the average cosine similarity between each predicted spectrum and the true spectrum, as cosine similarity is widely used to cluster mass spectra in molecular networking.

We find that ICEBERG outperforms the next best state of the art, SCARF, on the natural-product-focused dataset (Fig. 3A; Table 2). ICEBERG achieves an average cosine similarity of 0.628, compared to SCARF with a cosine similarity of 0.534, an especially large margin of improvement. Surprisingly, however, this boost in performance does not extend to the gold standard dataset, NIST20. ICEBERG, while still outperforming binned spectrum prediction approaches (i.e., NEIMS) on this dataset, is on par with SCARF (0.707 v. 0.713). Still, our model performs substantially better than CFM-ID and uses only a fraction of the computational resources (Fig. 3B). Unlike previous physically inspired models, because ICEBERG only samples the most relevant fragments from chemical space, it requires just over 1 CPU second per spectrum.
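As an illustration of this evaluation metric, the sketch below computes cosine similarity on binned spectra; the peak-list format is our own assumption, while the 0.1 m/z bin width and 0-1,500 Da range match the evaluation protocol described in Section 4.4.

```python
import numpy as np

def binned_cosine(pred_peaks, true_peaks, bin_size=0.1, max_mz=1500.0):
    """Cosine similarity between two (m/z, intensity) peak lists after
    binning at 0.1 Da resolution from mass 0 to 1,500 Da."""
    n_bins = int(max_mz / bin_size)

    def to_vec(peaks):
        vec = np.zeros(n_bins)
        for mz, inten in peaks:
            idx = min(int(mz / bin_size), n_bins - 1)
            vec[idx] += inten
        return vec

    a, b = to_vec(pred_peaks), to_vec(true_peaks)
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0
```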
[Figure 4: Examples of predicted spectra from ICEBERG. Predictions are shown as generated by ICEBERG trained on NPLIB1 for select test set examples GNPS:CCMSLIB00003137969 (A), GNPS:CCMSLIB00000853015 (B), and GNPS:CCMSLIB00000080524 (C). The input molecular structures are shown (left); fragmentation spectra are plotted (right) with predictions (top, blue) and ground truth spectra (bottom, black). Molecular fragments are shown inset. Spectra are plotted with m/z shifted by the mass of the precursor adduct. All examples shown were not included in the model training set.]

We hypothesize that the discrepancy in performance improvement between NPLIB1 and NIST20 may be partially explained by differences in the chemical spaces they cover. Many molecules within NPLIB1 are natural products with more complicated chemical scaffolds. To characterize this, we analyzed the distributions of both the synthetic accessibility (SA) score [10, 18] (Fig. 3C) and molecular weight (Fig. 3D), both proxies for molecular complexity. In concordance with our hypothesis, we find that SA scores and molecular weight are substantially higher on NPLIB1 than NIST20: NPLIB1 has an average SA score of 3.75, compared to 3.01 for NIST20; the datasets have average molecular weights of 413 Da and 317 Da, respectively.

2.3 Model explanations of observed peaks are consistent with chemistry intuition

In addition to accurate predictions, a key benefit of simulating fragmentation events is that predictions are interpretable, even for highly complex molecules. Each predicted peak from ICEBERG is directly attributed to a fragment of the predicted molecule. By inspecting certain patterns and examples, we find expected broken bonds. Weaker bonds such as carbon-oxygen and carbon-nitrogen bonds tend to break more reliably than carbon-carbon bonds and more complex ring breakages (Fig. 4A). Similar patterns can be seen in more complex example molecules, in which ICEBERG predicts the loss of an acetoxy group in order to explain the highest intensity peak in Fig. 4B, and various fragmentations around the central ether or iminol (in equilibrium with its amide form) to explain the many high intensity peaks in Fig. 4C.

Further alignment can also be seen within the intensity prediction module. Because ICEBERG predicts multiple intensities for each substructure corresponding to hydrogen shifts, up to 3 peaks can be present when a single bond breaks. In the fragmentation example of Figure 4A, the most intense peak is estimated at a mass shift of -1H from the original fragment, indicating that ICEBERG correctly recognizes that the hydroxyl group will likely leave as neutral H2O and result in a hydrogen rearrangement.

2.4 Fragmentation simulations lead to improved structural elucidation

[Figure 5: ICEBERG enables improved spectrum retrieval on both NPLIB1 (A) and NIST20 (B) compared to other spectrum prediction models.]

In addition to improved accuracy in predicting spectra, we next demonstrate that ICEBERG improves the structural elucidation of unknown molecules using reference libraries of model-predicted spectra. We design a retrospective evaluation using our labeled data to resemble the prospective task of spectrum lookup within libraries. For each test spectrum, we extract up to 49 decoy isomers from PubChem with the highest Tanimoto similarity to the true molecular structure. The consideration of up to 50 isomers mimics the realistic elucidation setting, as an unknown spectrum can yield clues regarding certain properties of its source molecule (e.g., computed using MIST, CSI:FingerID, or molecular networking), which narrows the chemical space of possible molecules to a smaller, more relevant set. We predict the fragmentation spectrum for each isomer and, for each model, we rank these possible matches by their spectral similarity to the spectrum of interest and compute how often the true molecule is found within the top k ranked isomers for different values of k.

We find that ICEBERG improves upon the next best model by a margin of 10% accuracy (a nearly 50% relative improvement) in top 1 retrieval accuracy on the NPLIB1 dataset (Fig. 5A; Table 4). Previous models with high spectrum prediction accuracies have struggled on this task due to their poor ability to differentiate structurally similar isomers. Our structure-based model appears to excel in retrieval and may have out-of-domain robustness beneficial to this task. We observe a similar effect in top 1 retrieval accuracy on the NIST20 dataset, in which ICEBERG outperforms SCARF by an absolute margin of over 2%, a 10% relative improvement, with an even larger absolute improvement at top 10 accuracy (76.5% vs. 70.3%) (Fig. 5B, Table 3). These results underscore the real-world utility of ICEBERG for identifying unknown molecules of interest.
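The retrieval metric itself is simple to state in code; a sketch, with the candidate list and per-candidate similarity scores assumed precomputed:

```python
def topk_accuracy(sims, true_idx, ks=(1, 5, 10)):
    """sims: spectral similarity of the query spectrum to each candidate
    isomer's predicted spectrum; true_idx: position of the correct structure.
    Returns, for each k, whether the true molecule ranks within the top k."""
    ranking = sorted(range(len(sims)), key=lambda i: -sims[i])
    return {k: true_idx in ranking[:k] for k in ks}

# Averaging the booleans over all test spectra yields the curves in Fig. 5.
```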
2.5 Challenging, non-random data splits better explain retrieval performance

The strong performance on the retrieval task suggests that ICEBERG is able to generalize well to decoys not appearing in the training set and to account for how structural changes should affect fragmentation patterns. While encouraging, we observed no increase in cosine similarity accuracy when predicting spectra using NIST20 (Fig. 3, Table 2). To try to explain this apparent discrepancy, we re-evaluate prediction accuracy on a more challenging dataset split. We retrain all models on NIST20 utilizing a Murcko scaffold split of the data, with smaller scaffold clusters (i.e., more unique compounds) placed in the test set. This split enforces that molecules in the test set will be more distant and less similar to the training set, probing the ability of each model to generalize in a more stringent setting than our previous random split.

Table 1: Comparing the accuracy of spectrum prediction on NIST20 using random (easier) or scaffold (harder) splits.

NIST20 cosine sim.   Random split   Scaffold split
CFM-ID               0.371          0.401
NEIMS (FFN)          0.614          0.548
NEIMS (GNN)          0.689          0.639
SCARF                0.713          0.665
ICEBERG              0.707          0.691

In the stricter scaffold split evaluation, the improved accuracy of ICEBERG over existing models is striking (Table 1). While the relative ordering between NEIMS and SCARF still holds, we find that ICEBERG outperforms SCARF by 0.03, equivalent to the difference between SCARF and NEIMS (GNN). These results suggest that, particularly for standard libraries with more homogeneous molecules, more challenging scaffold split evaluations may yield performance metrics that better correlate with performance on the structural elucidation problem (retrieval).

3 Discussion

We have proposed a physically-grounded mass spectrum prediction strategy we term ICEBERG. From a computational perspective, this integration of neural networks into fragmentation prediction is enabled by (a) bootstrapping MAGMa to construct fragmentation trees on which our model is trained, (b) posing the tree generation step as a sequential prediction over atoms, and (c) predicting multiple intensities at each generated fragment with a second module in order to account for hydrogen rearrangements and isotopic peaks. By learning to generate fragmentation events, ICEBERG is able to accurately predict mass spectra, yielding especially strong improvements for natural product molecules under evaluation settings of both spectrum prediction and retrieval.

ICEBERG establishes new state-of-the-art performance for these tasks, yet there are some caveats we wish to highlight. First, while we learn to generate molecular substructures to explain each peak, there are no guarantees that they are the correct physical explanations, given the number of potential equivalent-mass atom and bond rearrangements that could occur.
Second, while we achieve increased accuracy, this comes at a higher computational cost of roughly 1 CPU second per molecule, nearly an order of magnitude more than other neural approaches like SCARF. Future work will consider more explicitly how to synergize fragment- and formula-prediction approaches to achieve higher accuracy and speed. In addition to model architecture modifications, we anticipate model accuracy improvements from modeling other covariates such as collision energy, instrument type, and even jointly modeling MS/MS with other analytical chemistry measurements such as FTIR.

The discovery of unknown metabolites and molecules is rapidly expanding our knowledge of potential medical targets, the effects of environmental toxins, and the diversity of biosynthetically accessible chemical space. We envision exciting possibilities to apply our new model to expand the discovery of novel chemical matter from complex mixtures.

4 Methods

4.1 Datasets

We train our models on two datasets: NIST20, as generated by the National Institute of Standards and Technology, and NPLIB1, extracted from the GNPS database and prepared previously by Dührkop et al. and Goldman et al. For each spectrum in the dataset, we first merge all scans at various collision energies, combine peaks that are within 1e-4 m/z tolerance of each other, renormalize the resulting spectrum by dividing by the maximum observed intensity, and take the square root of each intensity. We subset the resulting spectrum to keep the top 50 peaks with intensity above 0.003. This normalization process is identical to our previous work and emphasizes (a) removing peaks that are likely noise and (b) combining various collision energies. We refer the reader to our previous work for exact details on dataset extraction.

To further normalize the dataset, for each spectrum, we subtract the mass of the adduct ion from each resulting MS2 peak. Concretely, the precursor molecule is ionized with an adduct ion, for instance H+; in this case, the mass of each peak in the spectrum is shifted by the mass of H+ before proceeding further. In doing so, we normalize against different ionizations. While adduct switching is possible, we note that this is a rarer phenomenon and can be easily interchanged at the data preprocessing step. We make the simplifying assumption that all peaks are singly charged and use mass and m/z interchangeably. Ultimately, each spectrum Y can be considered a set of (mass, intensity) tuples, Y = {(m_0, y_0), (m_1, y_1), ..., (m_|Y|, y_|Y|)}.

4.2 Canonical DAG construction

We build a custom re-implementation of the MAGMa algorithm to help create explanatory DAGs for each normalized and adduct-shifted spectrum. Given an input molecule M, MAGMa iteratively breaks each molecule by removing atoms. Each time an atom is removed, multiple fragments may form, from which we keep all fragments of more than 2 heavy (non-hydrogen) atoms. To prevent a combinatorial explosion of DAG nodes, we use a Weisfeiler-Lehman isomorphism test to generate a unique hash ID for each generated fragment and reject new fragments with hash IDs already observed. When conducting this test, to remain insensitive to how each fragment originated, we hash only the atom identities and bonds in the fragment graph, not the number of hydrogen atoms. For instance, consider an ethane fragment in which the terminal carbon was originally double-bonded to a single neighboring atom in the precursor molecule, compared to an ethane fragment in which the terminal carbon was single-bonded to two adjacent atoms in the original precursor: our approach applies the same hash ID to both fragments.
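A minimal illustration of such a hydrogen-insensitive Weisfeiler-Lehman-style hash is shown below; the data layout (element symbols plus bond tuples) is our own simplification of the RDKit fragment graphs actually used.

```python
import hashlib

def wl_hash(atom_labels, bonds, iters=3):
    """WL-style canonical hash for fragment deduplication, as in Section 4.2.
    Labels start from element types only (no hydrogen counts), then are
    iteratively refined with sorted (neighbor label, bond order) multisets.
    atom_labels: list of element symbols; bonds: list of (i, j, order)."""
    nbrs = {i: [] for i in range(len(atom_labels))}
    for i, j, order in bonds:
        nbrs[i].append((j, order))
        nbrs[j].append((i, order))
    labels = [str(lbl) for lbl in atom_labels]
    for _ in range(iters):
        labels = [
            hashlib.md5(
                (labels[i] + "|" + ",".join(
                    sorted(f"{labels[j]}:{o}" for j, o in nbrs[i]))).encode()
            ).hexdigest()
            for i in range(len(labels))
        ]
    # Order-invariant combination of the refined atom labels.
    return hashlib.md5("".join(sorted(labels)).encode()).hexdigest()
```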
The chemical formula and hydrogen status for the fragment are randomly selected from among the fragments that required the minimal number of atom removals. Each fragment corresponds to multiple potential m/z observations due to the allowance for hydrogen shifts equal to the number of broken bonds. After creating the fragmentation graph for M, a subset of the fragments is selected to explain each peak in Y, using minimum mass differences of under 10 parts-per-million as the primary filter and the minimal MAGMa heuristic score as a secondary filter. We include nodes along all paths back to the root molecule for each selected fragment. To prune the DAG to select only the most likely paths to each fragment, we design a greedy heuristic: starting from the lowest level of the DAG, we iteratively select for inclusion in the final DAG the parent nodes that cover the highest number of peak-explaining nodes. Finally, the neutral loss fragments are added into the DAG, as they provide useful training signals for ICEBERG Generate to learn when to stop fragmenting each molecule.

4.3 Model details

DAG generation prediction. Using the ground truth DAG as described above, we train a neural network, ICEBERG Generate, to reconstruct the DAG from an input molecule and adduct type. Concretely, our model learns to predict, for each fragment S^(i), the probability that it will fragment at the j-th atom:

$$ p\big(F[S^{(i)}_j] \,\big|\, S^{(i)}, M, C\big) = g_{\text{Generate}}\big(M, S^{(i)}, C\big)_j \tag{2} $$

To make this atom-wise prediction, we encode information about the root molecule, the fragment molecule, their difference, their respective chemical formulae, the adduct, and the number of bonds that were broken between the root molecule and the fragment. To embed the root molecule, we utilize a gated graph neural network, GNN(M), where either average or weighted summations are used to pool embeddings across atoms (specified by a hyperparameter). We utilize the same network to learn representations of the fragment, GNN(S^(i)), and define GNN(S^(i))_j as the graph neural network-derived embedding of fragment i at the j-th atom prior to the pooling operation. For all graph neural networks, a one-hot encoding of the adduct type is also added as an atom-wise feature alongside the bond types and atom types. We define the chemical formula f for each DAG fragment and specify an encoding, Enc, using a Fourier feature scheme. We encode the root and the i-th node of the fragmentation DAG as Enc(f_0) and Enc(f_i), respectively. Lastly, we define a one-hot vector for the number of bonds broken, b.

All the encodings described above are concatenated together, and a shallow multilayer perceptron (MLP) ending with a sigmoid function is utilized to predict binary probabilities of fragmentation at each atom:

$$ p\big(F[S^{(i)}_j] \,\big|\, S^{(i)}, M, C\big) = \mathrm{MLP}\big(\big[\mathrm{GNN}(M),\; \mathrm{GNN}(M) - \mathrm{GNN}(S^{(i)}),\; \mathrm{GNN}(S^{(i)})_j,\; \mathrm{Onehot}(b),\; \mathrm{Enc}(f_i),\; \mathrm{Enc}(f_0 - f_i)\big]\big) \tag{3} $$

The model is trained to maximize the probability of generating the DAG by minimizing the binary cross entropy loss over each atom for every fragment in an observed spectrum.
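A hedged PyTorch sketch of the prediction head in Eq. (3) follows; the class name, argument layout, and hidden dimension are illustrative, not the tuned values from Appendix A.3.

```python
import torch
import torch.nn as nn

class FragmentationHead(nn.Module):
    """Maps the concatenated encodings of Eq. (3) to a per-atom breakage
    probability via a shallow MLP ending in a sigmoid."""
    def __init__(self, d_emb, d_formula, n_bond_opts, d_hidden=512):
        super().__init__()
        d_in = 3 * d_emb + n_bond_opts + 2 * d_formula
        self.mlp = nn.Sequential(
            nn.Linear(d_in, d_hidden), nn.ReLU(), nn.Linear(d_hidden, 1))

    def forward(self, gnn_root, gnn_frag_pooled, gnn_frag_atom,
                onehot_b, enc_fi, enc_fdiff):
        # [GNN(M), GNN(M)-GNN(S^(i)), GNN(S^(i))_j, Onehot(b),
        #  Enc(f_i), Enc(f_0 - f_i)]
        x = torch.cat([gnn_root, gnn_root - gnn_frag_pooled, gnn_frag_atom,
                       onehot_b, enc_fi, enc_fdiff], dim=-1)
        return torch.sigmoid(self.mlp(x)).squeeze(-1)

# Trained with the binary cross entropy described above, e.g.:
# loss = nn.functional.binary_cross_entropy(probs, atom_labels)
```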
DAG intensity prediction. The trained Generate module is used to generate DAGs for each input molecule in the training set. In this generation step, molecules are iteratively fragmented beginning with the root M, and the probability of each fragment is computed autoregressively. We define the node indices for an ordering from each fragment S^(i) back to the root node through its highest-likelihood path π[i], where π[i, j] defines the j-th node on this factorization path:

$$ p\big(S^{(i)} \mid M, C\big) = p\big(S^{(i)} \mid S^{(\pi[i,1])}, M, C\big) \prod_{j=1}^{|\pi[i]|} p\big(S^{(\pi[i,j])} \mid S^{(\pi[i,j+1])}, M, C\big) \tag{4} $$

At each step, we maintain only the top 100 most likely fragments in the DAG as a practical consideration, until reaching the maximum possible fragmentation depth. To further reduce complexity in the inference step, we maintain the highest scoring isomer from the DAG. This resulting set of fragments is featurized and passed to a Set Transformer module to generate output values at each fragment. Following the notation from the generative model, we featurize each individual fragment with a shallow MLP to generate hidden representations h_i:

$$ h_i = \mathrm{MLP}\big(\big[\mathrm{GNN}(M),\; \mathrm{GNN}(M) - \mathrm{GNN}(S^{(i)}),\; \mathrm{GNN}(S^{(i)}),\; \mathrm{Onehot}(b),\; \mathrm{Enc}(f_i),\; \mathrm{Enc}(f_0 - f_i)\big]\big) \tag{5} $$

These are subsequently jointly embedded with a Transformer module and used to predict unnormalized intensity weights at each possible hydrogen shift δ, alongside an attention weight that determines how heavily to weight each prediction for its specified hydrogen shift. To compute the attention weight, we take a softmax over all prediction indices that fall into the same intensity bin (0.1 resolution), M(i, δ):

$$ \hat{y}^{(i)}_{\delta} = \mathrm{MLP}_{\text{inten}}\big(\mathrm{Transformer}(h_0, h_1, h_2, \ldots, h_{|\mathcal{T}|})_i\big) \tag{6} $$

$$ \alpha^{(i)}_{\delta} = \underset{k \in \mathcal{M}(i,\delta)}{\mathrm{Softmax}}\Big(\mathrm{MLP}_{\text{attn}}\big(\mathrm{Transformer}(h_0, h_1, h_2, \ldots, h_{|\mathcal{T}|})_k\big)\Big)_{i,\delta} \tag{7} $$

The final intensity prediction for the bin at mass m is then a weighted sum over all predictions that fall within this mass bin, followed by a sigmoid activation function:

$$ y_m = \sigma\Big(\sum_{i,\delta} \alpha^{(i)}_{\delta}\, \hat{y}^{(i)}_{\delta}\, \mathbb{I}\big[\mathcal{M}(i,\delta) = m\big]\Big) \tag{8} $$

The model is trained to maximize the cosine similarity between the predicted spectrum and the ground truth spectrum.
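A hedged sketch of the binned attention aggregation in Eqs. (6)-(8); the flattened tensor layout over (fragment, hydrogen shift) pairs is our own assumption.

```python
import torch

def bin_weighted_intensities(raw_y, raw_attn, bin_ids, n_bins):
    """For every 0.1 Da bin, a softmax over the raw attention scores of the
    (fragment, shift) predictions landing in that bin yields weights; the
    binned intensity is the attention-weighted sum of the raw intensity
    predictions, squashed by a sigmoid (Eq. 8).
    raw_y, raw_attn, bin_ids: flat tensors over all fragment/shift pairs."""
    out = torch.zeros(n_bins)
    for b in bin_ids.unique():
        mask = bin_ids == b
        weights = torch.softmax(raw_attn[mask], dim=0)  # Eq. (7)
        out[b] = torch.sigmoid((weights * raw_y[mask]).sum())  # Eq. (8)
    return out
```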
Model training. All models are implemented and trained using PyTorch Lightning, the Adam optimizer, and the DGL library. Ray is used to complete hyperparameter optimizations over all models and baselines. Models are trained on a single RTX A5000 NVIDIA GPU (CUDA version 11.6) in under 3 hours for each module. A complete list of hyperparameters and their definitions can be found in Appendix A.3.

4.4 Baselines

For model baselines, we utilize splits, hyperparameter training, and numbers as generated in our previous work. We include baselines for binned prediction models from Wei et al. that directly predict binned spectra from either molecular fingerprints or graphs, our previous formula prediction model SCARF, and the previous fragmentation model CFM-ID, with the same procedure as before. All model predictions are transformed into binned representations for fair evaluation at a bin resolution of 0.1 from mass 0 to 1,500 Da.

4.5 Code and data availability

All code is made available at https://github.com/samgoldman97/ms-pred , alongside pretrained models on publicly accessible data.

Acknowledgements

We thank John Bradshaw, Priyanka Raghavan, David Graff, Fanwang Meng, other members of the Coley Research Group, and Michael Murphy for helpful discussions, as well as Lucas Janson for feedback on earlier iterations of this idea. We thank Mingxun Wang for feedback and helpful suggestions regarding both the method and manuscript. S.G. thanks the MIT-Takeda Program for financial support. S.G. and C.W.C. thank the Machine Learning for Pharmaceutical Discovery and Synthesis consortium for additional support.

References

Takuya Akiba, Shotaro Sano, Toshihiko Yanase, Takeru Ohta, and Masanori Koyama. Optuna: A next-generation hyperparameter optimization framework. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 2623-2631, 2019.

Felicity Allen, Russ Greiner, and David Wishart. Competitive fragmentation modeling of ESI-MS/MS spectra for putative metabolite identification. Metabolomics, 11(1):98-110, 2015.

Wout Bittremieux, Mingxun Wang, and Pieter C Dorrestein. The critical role that spectral libraries play in capturing the metabolomics community knowledge. Metabolomics, 18(12):94, 2022.

John Bradshaw, Brooks Paige, Matt J. Kusner, Marwin Segler, and José Miguel Hernández-Lobato. Barking up the right tree: an approach to search over molecule synthesis DAGs. Advances in Neural Information Processing Systems, 33:6852-6866, 2020.

Jacob G. Bundy, Matthew P. Davey, and Mark R. Viant. Environmental metabolomics: a critical review and future perspectives. Metabolomics, 5(1):3-21, 2009.

Daniel P Demarque, Antonio EM Crotti, Ricardo Vessecchi, João LC Lopes, and Norberto P Lopes. Fragmentation reactions using electrospray ionization mass spectrometry: an important tool for the structural elucidation and characterization of synthetic and natural products. Natural Product Reports, 33(3):432-455, 2016.

James R Doroghazi, Jessica C Albright, Anthony W Goering, Kou-San Ju, Robert R Haines, Konstantin A Tchalukov, David P Labeda, Neil L Kelleher, and William W Metcalf. A roadmap for natural product discovery based on large-scale genomics and metabolomics. Nature Chemical Biology, 10(11):963-968, 2014.

Kai Dührkop, Huibin Shen, Marvin Meusel, Juho Rousu, and Sebastian Böcker. Searching molecular structure databases with tandem mass spectra using CSI:FingerID. Proceedings of the National Academy of Sciences, 112(41):12580-12585, 2015.

Kai Dührkop, Louis-Félix Nothias, Markus Fleischauer, Raphael Reher, Marcus Ludwig, Martin A. Hoffmann, Daniel Petras, William H. Gerwick, Juho Rousu, and Pieter C. Dorrestein. Systematic classification of unknown metabolites using high-resolution fragmentation mass spectra. Nature Biotechnology, 39(4):462-471, 2021.

Peter Ertl and Ansgar Schuffenhauer. Estimation of synthetic accessibility score of drug-like molecules based on molecular complexity and fragment contributions. Journal of Cheminformatics, 1:1-11, 2009.

William Falcon and The PyTorch Lightning team. PyTorch Lightning, 3 2019. URL https://github.com/Lightning-AI/lightning .

Jonathan A Fine, Anand A Rajasekar, Krupal P Jethava, and Gaurav Chopra. Spectral deep learning for prediction and prospective validation of functional groups. Chemical Science, 11(18):4618-4630, 2020.

Barbara E Frewen, Gennifer E Merrihew, Christine C Wu, William Stafford Noble, and Michael J MacCoss. Analysis of peptide MS/MS spectra from large-scale proteomics experiments using spectrum libraries. Analytical Chemistry, 78(16):5678-5684, 2006.

Wenhao Gao, Rocío Mercado, and Connor W. Coley. Amortized tree generation for bottom-up synthesis planning and synthesizable molecular design. arXiv preprint arXiv:2110.06389, 2021.

Samuel Goldman, Jeremy Wohlwend, Martin Stražar, Guy Haroush, Ramnik J. Xavier, and Connor W. Coley. Annotating metabolite mass spectra with domain-inspired chemical formula transformers. bioRxiv, 2022. doi: 10.1101/2022.12.30.522318. URL https://www.biorxiv.org/content/early/2022/12/31/2022.12.30.522318 .
Samuel Goldman, John Bradshaw, Jiayi Xin, and Connor W Coley. Prefix-tree decoding for predicting mass spectra from molecules. arXiv preprint arXiv:2303.06470, 2023.

Yuhui Hong, Sujun Li, Christopher J Welch, Shane Tichy, Yuzhen Ye, and Haixu Tang. 3DMolMS: Prediction of tandem mass spectra from three dimensional molecular conformations. bioRxiv, pages 2023-03, 2023.

Kexin Huang, Tianfan Fu, Wenhao Gao, Yue Zhao, Yusuf Roohani, Jure Leskovec, Connor W Coley, Cao Xiao, Jimeng Sun, and Marinka Zitnik. Artificial intelligence foundation for therapeutic science. Nature Chemical Biology, 18(10):1033-1036, 2022.

Sunghwan Kim, Paul A. Thiessen, Evan E. Bolton, Jie Chen, Gang Fu, Asta Gindulyte, Lianyi Han, Jane He, Siqian He, and Benjamin A. Shoemaker. PubChem substance and compound databases. Nucleic Acids Research, 44(D1):D1202-D1213, 2016.

Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Peter Kirkpatrick and Clare Ellis. Chemical space. Nature, 432(7019):823-824, 2004.

Juho Lee, Yoonho Lee, Jungtaek Kim, Adam Kosiorek, Seungjin Choi, and Yee Whye Teh. Set Transformer: A framework for attention-based permutation-invariant neural networks. In Proceedings of the 36th International Conference on Machine Learning, pages 3744-3753, 2019.

Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard Zemel. Gated graph sequence neural networks. In International Conference on Learning Representations, 2016.

Richard Liaw, Eric Liang, Robert Nishihara, Philipp Moritz, Joseph E Gonzalez, and Ion Stoica. Tune: A research platform for distributed model selection and training. ICML 2018 AutoML Workshop, 2018.

Fred W McLafferty. Mass spectrometric analysis. Molecular rearrangements. Analytical Chemistry, 31(1):82-87, 1959.

Michael Murphy, Stefanie Jegelka, Ernest Fraenkel, Tobias Kind, David Healey, and Thomas Butler. Efficiently predicting high resolution mass spectra with graph neural networks. arXiv preprint arXiv:2301.11419, 2023.

Steffen Neumann and Sebastian Böcker. Computational mass spectrometry for metabolomics: identification of metabolites and small molecules. Analytical and Bioanalytical Chemistry, 398(7):2779-2788, 2010.

NIST. Tandem Mass Spectral Library. NIST, 2020. URL https://www.nist.gov/programs-projects/tandem-mass-spectral-library .

Louis-Félix Nothias, Daniel Petras, Robin Schmid, Kai Dührkop, Johannes Rainer, Abinesh Sarvepalli, Ivan Protsyuk, Madeleine Ernst, Hiroshi Tsugawa, Markus Fleischauer, et al. Feature-based molecular networking in the GNPS analysis environment. Nature Methods, 17(9):905-908, 2020.

Ernö Pretsch, Philippe Bühlmann, and Christian Affolter. Structure Determination of Organic Compounds. Springer, 2000.

Robert A Quinn, Alexey V Melnik, Alison Vrbanac, Ting Fu, Kathryn A Patras, Mitchell P Christy, Zsolt Bodai, Pedro Belda-Ferre, Anupriya Tripathi, Lawton K Chung, et al. Global chemical effects of the microbiome include new bile-acid conjugations. Nature, 579(7797):123-129, 2020.

RDKit Team. RDKit: Open-source cheminformatics, 2019. URL https://www.rdkit.org/ .

Lars Ridder, Justin JJ van der Hooft, and Stefan Verhoeven. Automatic compound annotation from mass spectrometry data using MAGMa. Mass Spectrometry, 3(Spec Iss 2):S0033, 2014.

Michal Szeremeta, Karolina Pietrowska, Anna Niemcunowicz-Janica, Adam Kretowski, and Michal Ciborowski. Applications of metabolomics in forensic toxicology and forensic medicine.
International Journal of Molecular Sciences, 22(6), 2021. ISSN 1422-0067. doi: 10.3390/ijms22063010. URL https://www.mdpi.com/1422-0067/22/6/3010 .

Zhenyu Tian, Haoqi Zhao, Katherine T Peter, Melissa Gonzalez, Jill Wetzel, Christopher Wu, Ximin Hu, Jasmine Prat, Emma Mudrock, Rachel Hettinger, et al. A ubiquitous tire rubber-derived chemical induces acute mortality in coho salmon. Science, 371(6525):185-189, 2021.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems 30, pages 5998-6008, 2017.

Fei Wang, Jaanus Liigand, Siyang Tian, David Arndt, Russell Greiner, and David S Wishart. CFM-ID 4.0: more accurate ESI-MS/MS spectral prediction and compound identification. Analytical Chemistry, 93(34):11692-11700, 2021.

Mingxun Wang, Jeremy J. Carver, Vanessa V. Phelan, Laura M. Sanchez, Neha Garg, Yao Peng, Don Duy Nguyen, Jeramie Watrous, Clifford A. Kapono, and Tal Luzzatto-Knaan. Sharing and community curation of mass spectrometry data with Global Natural Products Social Molecular Networking. Nature Biotechnology, 34(8):828-837, 2016.

Minjie Wang, Da Zheng, Zihao Ye, Quan Gan, Mufei Li, Xiang Song, Jinjing Zhou, Chao Ma, Lingfan Yu, Yu Gai, Tianjun Xiao, Tong He, George Karypis, Jinyang Li, and Zheng Zhang. Deep Graph Library: A graph-centric, highly-performant package for graph neural networks. arXiv preprint arXiv:1909.01315, 2019.

Jennifer N. Wei, David Belanger, Ryan P. Adams, and D. Sculley. Rapid prediction of electron-ionization mass spectrometry using neural networks. ACS Central Science, 5(4):700-708, 2019.

Boris Weisfeiler and Andrei Leman. The reduction of a graph to canonical form and the algebra which appears therein. NTI, Series, 2(9):12-16, 1968.

David S. Wishart. Metabolomics for investigating physiological and pathophysiological processes. Physiological Reviews, 99(4):1819-1875, 2019. doi: 10.1152/physrev.00035.2018.

Sebastian Wolf, Stephan Schmidt, Matthias Müller-Hannemann, and Steffen Neumann. In silico fragmentation for computer assisted identification of metabolite mass spectra. BMC Bioinformatics, 11(1):148, 2010. doi: 10.1186/1471-2105-11-148.

Kevin Yang, Kyle Swanson, Wengong Jin, Connor Coley, Philipp Eiden, Hua Gao, Angel Guzman-Perez, Timothy Hopper, Brian Kelley, Miriam Mathea, et al. Analyzing learned molecular representations for property prediction. Journal of Chemical Information and Modeling, 59(8):3370-3388, 2019.

Adamo Young, Bo Wang, and Hannes Röst. MassFormer: Tandem mass spectrum prediction with graph Transformers. arXiv preprint arXiv:2111.04824, 2021.

Hao Zhu, Liping Liu, and Soha Hassoun. Using graph neural networks for mass spectrometry prediction. In Machine Learning for Molecules Workshop at NeurIPS 2020, 2020.

Richard Licheng Zhu and Eric Jonas. Rapid approximate subset-based spectra prediction for electron ionization mass spectrometry. Analytical Chemistry, 2023.

A Appendix

A.1 Extended results

We include a complete accounting of all model results and metrics utilized by our previous work in Table 2. Interestingly, while cosine similarity is higher for ICEBERG on nearly all evaluations, the fraction of peaks in the ground truth spectrum explained by the predicted spectrum, coverage, does not increase. We posit that the stricter and more rigid fragment-grounding causes our model to miss certain lower intensity peaks. In doing so, however, the model is able to maintain higher accuracy.
We note that not all predicted fragments are valid, as a very small fraction of predicted fragments fail the RDBE chemical formula test when hydrogen shifts are considered. Full results for retrieval are shown for NIST20 and NPLIB1 in Tables 3 and 4, respectively.

Table 2: Spectra prediction accuracy for NPLIB1, NIST20 (random split), and NIST20 (scaffold split) for CFM-ID, NEIMS (FFN), NEIMS (GNN) [16, 40], SCARF, and ICEBERG. Cosine similarity is calculated at a bin resolution of 0.1 m/z; Coverage indicates the fraction of peaks in the ground truth spectrum explained by the predicted spectrum on average; Valid indicates the fraction of predicted peaks that can be explained as a subset of the precursor molecule's chemical formula (filtered to formulae with ring-double bond equivalents greater than or equal to zero); Time (s) indicates the number of seconds required to predict 100 random spectra from NIST20 on a single CPU, including the time to load the model.

             NPLIB1                       NIST20 (Random split)        NIST20 (Scaffold split)
             Cosine   Coverage  Valid     Cosine   Coverage  Valid     Cosine   Coverage  Valid     Time (s)
CFM-ID       0.368    0.232     1.000     0.371    0.273     1.000     0.401    0.271     0.999     1114.7
NEIMS (FFN)  0.494    0.528     0.948     0.614    0.739     0.951     0.548    0.719     0.966     3.4
NEIMS (GNN)  0.520    0.552     0.942     0.689    0.777     0.949     0.639    0.764     0.973     4.3
SCARF        0.534    0.553     1.000     0.713    0.797     1.000     0.665    0.798     1.000     21.5
ICEBERG      0.628    0.544     0.998     0.707    0.733     0.997     0.691    0.764     0.999     117.5

Table 3: NIST20 spectra retrieval top-k accuracy for different values of k.

Top k        1      2      3      4      5      6      7      8      9      10
Random       0.025  0.047  0.073  0.097  0.118  0.139  0.162  0.186  0.209  0.234
NEIMS (FFN)  0.106  0.233  0.318  0.378  0.424  0.465  0.501  0.532  0.564  0.592
NEIMS (GNN)  0.169  0.296  0.391  0.462  0.515  0.555  0.584  0.620  0.650  0.678
SCARF        0.184  0.323  0.415  0.492  0.546  0.588  0.624  0.653  0.677  0.703
ICEBERG      0.203  0.383  0.492  0.565  0.617  0.658  0.693  0.722  0.745  0.765

Table 4: NPLIB1 spectra retrieval top-k accuracy for different values of k.

Top k        1      2      3      4      5      6      7      8      9      10
Random       0.021  0.053  0.087  0.114  0.137  0.151  0.180  0.205  0.225  0.241
NEIMS (FFN)  0.212  0.330  0.412  0.469  0.510  0.543  0.569  0.590  0.613  0.636
NEIMS (GNN)  0.187  0.302  0.370  0.427  0.470  0.514  0.550  0.586  0.613  0.635
SCARF        0.112  0.233  0.320  0.369  0.425  0.470  0.515  0.552  0.582  0.613
ICEBERG      0.312  0.466  0.538  0.603  0.648  0.675  0.704  0.732  0.754  0.768

A.2 Graph neural network details

ICEBERG relies upon graph neural network embeddings of the molecule, GNN(M). Given the widespread use and descriptions of such models, we refer the reader to Li et al. for a description of the gated graph neural networks we employ. We utilize the DGL library to implement and featurize molecular graphs. Because our fragmentation method relies upon iteratively removing atoms, we often have molecular fragments with incomplete valence shells that would not be parsed by RDKit. As such, we opt for a more minimal set of atom and bond features, described in Table 6.

Table 5: Dataset details.

Name     # Spectra  # Molecules  Description
NIST20   35,129     24,403       Standards library
NPLIB1   10,709     8,533        Natural products from GNPS

Table 6: Graph neural network (GNN) atom features.

Name                     Description
Element type             one-hot encoding of the element type
Hydrogen number          one-hot encoding of the number of hydrogens on each atom
Adduct type              one-hot encoding of the ionization adduct
Random walk embed steps  positional encodings of the nodes computed using DGL

A.3 Hyperparameters

All baseline hyperparameters for NEIMS (FFN), NEIMS (GNN), and SCARF are used exactly as in our previous work.
To fairly compare ICEBERG, we apply an equivalent hyperparameter optimization scheme. We use Ray Tune with Optuna and an ASHAScheduler to identify hyperparameters from a grid set of options. Both Generate and Score were allotted 50 different hyperoptimization trials on a 10,000-spectra subset of NIST20. We define the list of parameters in Table 7 and their chosen values in Table 8.

Table 7: Hyperparameter descriptions.

Name                     Model     Description
learning rate            both      optimizer learning rate
learning rate decay      both      step-wise learning rate decay every 5,000 model weight update steps
dropout                  both      model dropout applied in-between linear hidden layers
hidden size              both      number of hidden layers
gnn layers               both      number of graph neural network layers to encode molecules and fragments
mlp layers               both      number of feed forward layers to encode concatenated representations
transformer layers       Score     number of Set Transformer attention layers after MLP encoding
batch size               both      number of spectra to include in each training batch
weight decay             both      optimizer weight decay
random walk embed steps  Generate  number of random walk embedding steps for graph neural network atom features
graph pooling            both      how to combine atom features into a single representation
bin size                 Score     binned spectrum resolution, spanning from 0 Da to 1,500 Da

Table 8: ICEBERG Generate and Score hyperparameter grid and selected values.

ICEBERG Generate
Parameter                Grid                 Value
learning rate            [1e-4, 1e-3]         0.00099
learning rate decay      [0.7, 1.0]           0.7214
dropout                  {0.1, 0.2, 0.3}      0.2
hidden size              {128, 256, 512}      512
mlp layers               -                    1
gnn layers               [1, 6]               6
batch size               {8, 16, 32, 64}      32
weight decay             {0, 1e-6, 1e-7}      0
random walk embed steps  [0, 20]              14
graph pooling            {mean, attention}    mean

ICEBERG Score
Parameter                Grid                 Value
learning rate            [1e-4, 1e-3]         0.00074
learning rate decay      [0.7, 1.0]           0.825
dropout                  {0.1, 0.2, 0.3}      0.2
hidden size              {128, 256, 512}      256
mlp layers               [0, 3]               1
gnn layers               [1, 6]               4
transformer layers       [0, 3]               3
batch size               {8, 16, 32}          32
weight decay             {0, 1e-6, 1e-7}      1e-6
bin size                 -                    0.1
graph pooling            {mean, attention}    mean
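A sketch of this search setup with Ray Tune, Optuna, and ASHA is shown below; the trainable is a placeholder standing in for a real Generate training run, and the exact Ray API surface may differ between versions.

```python
from ray import tune
from ray.tune.schedulers import ASHAScheduler
from ray.tune.search.optuna import OptunaSearch

def train_generate(config):
    """Placeholder objective; a real run would train ICEBERG Generate with
    `config` and return its validation loss."""
    return {"val_loss": config["learning_rate"] + config["dropout"]}

search_space = {
    "learning_rate": tune.loguniform(1e-4, 1e-3),
    "learning_rate_decay": tune.uniform(0.7, 1.0),
    "dropout": tune.choice([0.1, 0.2, 0.3]),
    "hidden_size": tune.choice([128, 256, 512]),
    "gnn_layers": tune.randint(1, 7),
}
tuner = tune.Tuner(
    train_generate,
    param_space=search_space,
    tune_config=tune.TuneConfig(
        metric="val_loss",
        mode="min",
        num_samples=50,  # 50 trials per module, as stated above
        search_alg=OptunaSearch(),
        scheduler=ASHAScheduler(),
    ),
)
results = tuner.fit()
```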
spl20.pdf

Deep Clustering with Variational Autoencoder

Kart-Leong Lim, Xudong Jiang, Senior Member, IEEE, and Chenyu Yi

Abstract

An autoencoder that learns a latent space in an unsupervised manner has many applications in signal processing. However, the latent space of an autoencoder does not pursue the same clustering goal as Kmeans or GMM. A recent work of Song et al. proposes to artificially re-align each point in the latent space of an autoencoder to its nearest class neighbors during training. The resulting new latent space is found to be much more suitable for clustering, since clustering information is used. Inspired by Song et al., in this paper we propose several extensions to this technique. First, we propose a probabilistic approach to generalize Song's approach, such that Euclidean distance in the latent space is now represented by KL divergence. Second, as a consequence of this generalization, we can now use probability distributions as inputs rather than points in the latent space. Third, we propose using a Bayesian Gaussian mixture model for clustering in the latent space. We demonstrate our proposed method on digit recognition datasets (MNIST, USPS and SVHN) as well as scene datasets (Scene15 and MIT67), with interesting findings.

I. INTRODUCTION

Deep clustering networks that exploit the autoencoder (AE) for clustering have been found in many recent signal processing applications such as computer vision and pattern recognition, speech and audio recognition, wireless communication, and text classification. A deep clustering network typically trains a clustering algorithm, e.g. Kmeans, on the latent space of an AE. However, the latent space of an AE may not be suitable for clustering. We can view this problem from the probabilistic perspective of the variational autoencoder (VAE). The main difference between the AE and the VAE is the way the latent space is represented. In an AE, an encoded image is represented as a point in the latent space, while in a VAE an encoded image is represented by a sample drawn from a Gaussian distribution. The latter is described by the VAE's random variables, the mean and variance associated with the image. The problem of clustering faced by the VAE is that when we have a multiclass dataset such as MNIST, the underlying Gaussian distribution assumption may not be sufficient to separate different classes in the latent space. This is especially true when two different digit classes share very similar mean and variance. There is simply no mechanism in the VAE that enforces samples from different classes to have different mean and variance. Unless the underlying data layout is inherently class discriminative, there is no way the AE or VAE can generate a latent space suitable for clustering.

(K. Lim, X. Jiang and C. Yi are with the Rapid-Rich Object Search lab, School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798; email: [email protected], [email protected], [email protected].)

In order to solve the VAE's clustering problem, at least two groups of researchers have converged on the same idea of using a categorical distribution for the VAE, since the underlying distribution is discrete. Fortunately, there is an easier way to solve the problem. A recent approach by Song et al. focuses on minimizing the difference between the original latent space learnt by the AE and the feature space learnt over it by traditional machine learning (ML) techniques. In such an approach, there are two objectives to be solved in each iteration: the network weights and the ML parameters. The standard way to learn them is to alternate between each optimization while fixing the other. Our work mainly follows Song's approach, which we name autoencoder with distance (AED). We further extend it to using the VAE, which we call variational autoencoder with distance (VAED).

There are some challenges faced when using AED:
i) The AE may not be the most ideal tool for training a compact representation since, unlike the VAE, it cannot model the latent space using random variables.
ii) The distance error function of AED only takes points in the latent space as inputs. It is not so straightforward to extend this function to using random variables as inputs.
iii) Kmeans assumes a spherical Gaussian distribution for each cluster. However, this is a strong assumption for most datasets.

Novel contributions in this work include:
i) Inputs to the distance error function are now probability distributions, rather than points in the latent space.
ii) The second order term (variance) of the network (VAE) and the ML model (GMM) are now optimized by the distance error function.
iii) A Bayesian GMM is used to improve the clustering. More hidden variables and hyperparameters can better capture the latent space than Kmeans alone.

A. Related work

AED first proposes to integrate both the reconstruction error and the error between Kmeans and the encoded image (a.k.a. the distance error, or L3) into a single objective. Backpropagation on this objective will adjust the AE weights to minimize the within-class latent space representation of the encoded image. Many recent works, including our paper, follow this strategy. DCN offers a concise study of AED, but both use an identical L3. DC-Kmeans uses the alternating directed method of multipliers to train AED. The authors of DEC proposed using a Student's t-distribution kernel for L3. DBC combines a convolutional autoencoder with DEC.
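As a reference point for the methods surveyed here, the AED objective just described can be sketched as follows; the trade-off weight and the precomputed Kmeans centers and assignments are our own assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def aed_loss(x, x_recon, z, centers, assignments, lam=1.0):
    """AED objective: autoencoder reconstruction error plus the Euclidean
    distance between each encoded point z and its assigned Kmeans center
    (the 'distance error' L3). lam trades off the two terms."""
    recon = F.mse_loss(x_recon, x)
    l3 = ((z - centers[assignments]) ** 2).sum(dim=1).mean()
    return recon + lam * l3
```

In each training iteration, the centers and assignments would be re-estimated by Kmeans with the network fixed, then the network weights updated by backpropagation on this loss with the Kmeans parameters fixed, matching the alternating scheme described above.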
In such an approach, there are two objectives to be solved in each iteration: the network weights and the ML parameters. The standard way to learn them is to alternate between the two optimizations, fixing one while solving the other. Our work mainly follows Song's approach, which we name autoencoder with distance (AED). We further extend it to using a VAE, which we call variational autoencoder with distance (VAED). There are some challenges faced when using AED: i) An AE may not be the most ideal tool for training a compact representation since, unlike a VAE, it cannot model the latent space using random variables. ii) The distance error function of AED only takes points in the latent space as inputs. It is not so straightforward to extend this function to using random variables as inputs. iii) Kmeans assumes a spherical Gaussian distribution for each cluster. However, this is a strong assumption for most datasets.

Novel contributions in this work include: i) Inputs to the distance error function are now probability distributions, rather than points in the latent space. ii) The second-order terms (variances) of the network (VAE) and the ML model (GMM) are now optimized by the distance error function. iii) A Bayesian GMM is used to improve the clustering. More hidden variables and hyperparameters can better capture the latent space than Kmeans alone.

A. Related work

AED first proposed to integrate both the reconstruction error and the error between Kmeans and the encoded image (a.k.a. the distance error, or L3) into a single objective. Backpropagation on this objective adjusts the AE weights to minimize the within-class latent space spread of the encoded images. Many recent works, including our paper, follow this strategy. DCN offers a concise study of AED, but both use an identical L3. DC-Kmeans uses the alternating direction method of multipliers to train AED. The authors of DEC proposed using a Student's t-distribution kernel for L3. DBC combines a convolutional autoencoder with DEC. Instead of Euclidean distance, in Sun et al, L3 is learnt using a sparse representation. Similarly, PARTY proposed a sparsity prior on L3. NSC-AGA applies the self-expressiveness model to L3. The inputs to the L3 represented by sparse coding, NSC-AGA and PARTY are essentially point estimates. Furthermore, when the sparse representation or self-expressiveness terms in L3 are negligible, we can effectively treat L3 as a Euclidean distance, much like in AED.

The most related work to ours is VaDE. Where the goal of VaDE is to decode images of different classes from their respective cluster representations in the latent space, our goal is to minimize the difference between a VAE's latent space and the cluster representation learnt over it using a traditional machine learning approach. VaDE represents the relationship between the VAE and the GMM using a jointly distributed probability distribution, a.k.a. the variational lower bound. However, the intuition of VaDE is not easily understood from that formulation. We offer a different perspective. The same relationship between the VAE and the GMM is initially inspired by AED. We then further show that this relationship can be refined using a probabilistic approach, e.g. KL divergence. We discuss why such a probabilistic approach is necessary to correctly represent both the VAE and the GMM. Following that, we show that under a certain condition, the KL divergence reverts back to AED.
Furthermore, we use a Bayesian GMM rather than the standard GMM. More hidden variables and hyperparameters can better capture the latent space than Kmeans alone.

II. BACKGROUND

A. Gaussian mixture model (GMM)

A GMM models a set of N observations z = {z_n}_{n=1}^{N} ∈ R^D using a linear superposition of K Gaussian components, where D denotes the dimension of each observed instance. The GMM means are denoted by μ = {μ_k}_{k=1}^{K} ∈ R^D. Assuming diagonal covariance Σ_k = λ_k^{-1} I, we define the GMM precisions as λ = {λ_k}_{k=1}^{K} ∈ R^D. The GMM cluster assignment is denoted by η = {η_n}_{n=1}^{N}, where η_n is a 1-of-K binary vector subject to Σ_{k=1}^{K} η_{nk} = 1 and η_{nk} ∈ {0, 1}. In the Bayesian approach to GMM, we treat each hidden variable as a posterior distribution and introduce a prior distribution for each hidden variable. The Bayesian GMM posterior is formulated below, where θ = {μ, λ, η}:

p(z) = ∫ p(z|θ) p(θ) dθ,
p(z|μ, λ, η) = Π_{n=1}^{N} Π_{k=1}^{K} N(z_n | μ_k, λ_k^{-1})^{η_{nk}},
p(μ, λ) = Π_{k=1}^{K} N(μ_k | m_0, β_0^{-1} λ_k^{-1}) Gamma(λ_k | a_0, b_0)    (1)

Due to the intractable integral in p(z), an approximation such as variational inference is required to learn the hidden variables. We discuss this in a later section and refer the reader to Bishop for more details.

B. Autoencoder with distance (AED)

The AED error function first appeared in Song et al. Here z* refers to the nearest cluster to z in the latent space, and T and y refer to the target and network output respectively. We use the cluster assignment to find out which k-th cluster z belongs to. The parameter λ_3 is set in [0, 1], where a smaller λ_3 reduces the effect of the distance error function on the network weight updates:

L_AED = min_{w,b} ||T − y||² + λ_3 ||z − z*||²    (2)

C. Variational autoencoder (VAE)

A standard VAE has a network structure including the encoder and decoder as defined below:

h_j = f_1( Σ_i w_ij x_i ),   μ = Σ_j w_jμ h_j,
ln σ² = Σ_j w_jσ h_j,   h_k = f_4( Σ_z w_zk z ),
y_l = f_5( Σ_k w_kl h_k )    (3)

The latent variable z is generated from the encoder outputs using z = μ + σε, where ε ~ N(0, 1). For the activations, we mainly use f(·) = tanh(·). The error function of the VAE consists of the standard reconstruction error and a KL divergence term:

L_VAE = ln p(x|z) − D_KL( q(z|x) || p(z) )
      = −½ (T − y)² − ½ ( σ₁² + μ₁² − ln σ₁² − 1 )    (4)

The VAE represents both the encoder q(z|x) and decoder p(x|z) using diagonal-covariance Gaussian distributions. The KL divergence above is expressed in terms of the VAE's random variables, which are in turn expressed in terms of the network parameters w. Weight training for the VAE's error function is performed using stochastic gradient ascent (SGA).

III. PROPOSED: VAE WITH DISTANCE (VAED)

A. Naive approach, AED

A naive way to represent the distance error function in the VAE is to use ||z − z*||², as in AED. The reduced complexity is that we can exploit the reparameterization trick z = μ + σε to represent the mean and variance in the VAE. However, problems arise: i) the GMM variance term cannot be optimized by ||z − z*||²; ii) the network gradient ∂L_VAED/∂σ is essentially the gradient ∂L_VAED/∂μ weighted by the randomly generated noise ε ~ N(0, 1), as seen in eqn (5):

∂L_AED/∂μ = z − z*,
∂L_AED/∂σ = (z − z*) ε    (5)

A more severe issue is that when ε → 0, the naive approach suffers from the vanishing gradient problem. Fortunately, this problem can be alleviated by the proposed method in eqns (10) and (11).

B. Proposed approach, VAED

A VAE representation in the latent space is the mean and variance. In the ML approach, a GMM contains K independent Gaussian distributions that model the latent space of the VAE. We can use the KL divergence to measure the distance between these two probability distributions.
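As a concrete reference for the next section, the following minimal numpy sketch (ours, not the authors' code) evaluates the closed-form KL divergence between two univariate Gaussians, the quantity used below as the distance error; with unit variances it collapses to half the squared Euclidean distance between the means, i.e. the AED-style distance.

```python
import numpy as np

def kl_univariate_gaussian(mu_p, var_p, mu_q, var_q):
    # KL( N(mu_p, var_p) || N(mu_q, var_q) ), computed elementwise so it
    # also covers diagonal-covariance vectors.
    return (0.5 * np.log(var_q / var_p)
            + (var_p + (mu_p - mu_q) ** 2) / (2.0 * var_q) - 0.5)

# Illustrative numbers only: a GMM component (variance = 1/precision)
# against a VAE encoder Gaussian.
kl = kl_univariate_gaussian(mu_p=0.3, var_p=1 / 2.5, mu_q=0.1, var_q=0.25)

# With unit variances the divergence reduces to half the squared
# distance between the means, i.e. the AED distance error.
assert np.isclose(kl_univariate_gaussian(0.3, 1.0, 0.1, 1.0),
                  0.5 * (0.3 - 0.1) ** 2)
```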
We introduce our VAED objective as follows:

L_VAED = L_VAE − λ_3 D_KL( p(z_n|θ) || q(z_n|x_n) )    (6)

We illustrate eqn (6) in Fig. 1. The validity of eqn (6) also extends to cases where p(z_n|θ) and q(z_n|x_n) are no longer Gaussian distributed. We introduce the KL divergence term as our new distance error, which measures the distance between the distributions of the GMM and the VAE encoder. We can further re-express the GMM term as

p(z_n|θ) = Π_{k=1}^{K} N(z_n | μ_k, λ_k^{-1})^{η_{nk}} = N(z_n | μ*, (λ*)^{-1})    (7)

We refer to μ* as the optimal μ_k computed by the GMM's cluster assignment E[η_{nk}], shown in the next section, and vice versa for λ*. Thus, the KL divergence for the distance error becomes a function between two Gaussian distributions. Under such an assumption, the KL divergence is well defined as follows:

D_KL( N(z_n | μ*, (λ*)^{-1}) || N(z_n | μ, σ²) ) = ln σ + ln √λ* + ( (λ*)^{-1} + (μ* − μ)² ) / (2σ²) − ½    (8)

Interestingly, when we assume spherical Gaussians, i.e. unit variance, the VAED distance error reverts back to the AED distance error:

D_KL( N(z_n|μ*) || N(z_n|μ) ) = ½ (μ* − μ)²    (9)

C. Optimization of VAED

The optimization of VAED is achieved by i) using SGA to learn the VAED weights w_ij, w_jμ, w_jσ, w_kl, w_zk, and ii) using variational inference to learn the GMM parameters E[μ_k], E[λ_k], E[η_{nk}].

1) Weights learning: It is straightforward to obtain the network gradient terms for eqn (8) as follows:

∂L_VAED/∂μ = (μ* − μ) / σ²    (10)

∂L_VAED/∂σ = 1/σ − ( (λ*)^{-1} + (μ* − μ)² ) / σ³    (11)

After that, the goal is to update all the following weights for eqn (6) using SGA, where Δ denotes the weight change of the hidden layers and γ is the learning rate:

w_jμ ← w_jμ + γ Δw_jμ,   w_kl ← w_kl + γ Δw_kl,
w_jσ ← w_jσ + γ Δw_jσ,   w_zk ← w_zk + γ Δw_zk,
w_ij ← w_ij + γ Δw_ij    (12)

2) GMM learning: The GMM parameters (or Bayesian posteriors of the hidden variables) can be learnt in the latent space using variational inference. In variational inference, the hidden variables of a mixture model are maintained as posterior distributions. When performing iterative learning, we update the mixture model by estimating the expected value of each posterior per iteration. In the Bayesian GMM, E[μ_k], E[λ_k], E[η_{nk}] are the expected values of the Bayesian posteriors of the GMM mean, precision and cluster assignment respectively. Using the maximization-maximization algorithm, closed-form solutions for the expectations are shown below. The hyperparameters a_0, b_0, β_0, m_0 are treated as constants.

E[η_{nk}] = argmax_{η_{nk}} { ln E[λ_k] − E[λ_k] (z_n − E[μ_k])² } η_{nk}    (13)

E[λ_k] = ( ½ Σ_{n=1}^{N} E[η_{nk}] + (a_0 − 1) ) / ( b_0 + Σ_{n=1}^{N} (E[η_{nk}]/2) (z_n − E[μ_k])² + (β_0/2) (E[μ_k] − m_0)² )    (14)

E[μ_k] = ( Σ_{n=1}^{N} z_n E[η_{nk}] + β_0 m_0 ) / ( Σ_{n=1}^{N} E[η_{nk}] + β_0 )    (15)

Fig. 1. Proposed VAED: The weights of the encoder are updated via backpropagation using the reconstruction error, the VAE's regularization error, and the proposed VAED distance error.

D. Proposed algorithm for VAED

We introduce our proposed algorithm for VAED in Algo. 1. The first part of VAED trains a GMM in the latent space of VAED. The learnt GMM parameters are in turn used to perform VAED weight updating. Finally, the updated weights of VAED replace the weights from the previous iteration. The process repeats itself until enough iterations have passed. A way to check the convergence of VAED is to track GMM training accuracy at each iteration. When the normalized mutual information (NMI) and accuracy (ACC) of the GMM clustering have converged, we can stop the training of VAED. We refer to Cai et al. for the NMI and ACC computations.
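For completeness, one common way to compute these two scores is sketched below with scikit-learn and scipy (our illustration, not the authors' evaluation code): NMI compares the label distributions directly, while ACC finds the best one-to-one cluster-to-class mapping with the Hungarian algorithm.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import normalized_mutual_info_score

def clustering_accuracy(y_true, y_pred):
    # Build the contingency table, then find the cluster -> class
    # permutation that maximizes agreement (Hungarian algorithm).
    k = max(y_true.max(), y_pred.max()) + 1
    cost = np.zeros((k, k), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        cost[p, t] += 1
    row, col = linear_sum_assignment(cost.max() - cost)
    return cost[row, col].sum() / y_true.size

y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([1, 1, 0, 0, 2, 2])   # same partition, permuted labels
print(normalized_mutual_info_score(y_true, y_pred))  # 1.0
print(clustering_accuracy(y_true, y_pred))           # 1.0
```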
Algorithm 1 VAED
Input: x
Output: a) VAED weights Ω = {w_ij, w_jμ, w_jσ, w_zk, w_kl};
b) GMM parameters θ = {E[μ_k], E[λ_k], E[η_{nk}]}
Initialization: Ω, θ, λ_3, γ
Main: Repeat till convergence
% --- GMM optimization ---
1) run a forward pass to obtain z given the raw input x
2) update the GMM parameters using eqns (13-15)
% --- VAED optimization ---
3) given a random sample z_n, compute E[η_n] to get the corresponding μ* and λ*
4) perform SGA on VAED using eqns (6) and (8)

IV. EXPERIMENTS

A. Comparison of end-to-end clustering

We compare our method with recent clustering methods in Table I. The most commonly used digit datasets are: i) USPS, with 7291 train and 2007 test images, and ii) MNIST, with 50,000 train and 10,000 test images. For USPS, we use raw image pixels as the feature vector; hence, the encoder network is 256-196-128. For MNIST, we also use raw image pixels as the feature vector, and the encoder network we use is 784-512-256. We rerun our experiments at least 10 rounds and take the average result. The GMM hyperparameters we use are a_0 = 1.25, b_0 = 0.25, m_0 = 1 and β_0 = 0.5. The VAED parameter is λ_3 = 0.1.

Table I: Proposed method vs state-of-the-arts (using raw pixels).

Method | USPS NMI | USPS ACC | MNIST NMI | MNIST ACC
Kmeans | 0.4503 | 0.4585 | 0.5163 | 0.5618
AED | 0.5449 | 0.6111 | 0.6615 | 0.734
DC-Kmeans | 0.5737 | 0.6442 | 0.7448 | 0.8015
DCN | - | - | 0.81 | 0.83
DC-GMM | 0.6939 | 0.6476 | 0.8318 | 0.8555
DEC | 0.6191 | 0.6246 | 0.8273 | 0.8496
DBC | 0.724 | 0.743 | 0.917 | 0.964
VaDE | - | - | - | 0.945
NSC-AGA | 0.7727 | 0.7256 | - | -
VAED (ours) | 0.6233 | 0.7613 | 0.819 | 0.8875

On USPS in Table I, Kmeans obtained NMI=0.4503, ACC=0.4585 on the original feature space. All deep clustering methods outperform Kmeans by a large margin. AED obtains a better overall result than DC-GMM and DC-Kmeans. The ACC of DEC is the poorest amongst all deep methods. Overall, our proposed method obtains the best ACC, but our NMI suffers. We believe this is due to VAED using randomly initialized weights and USPS having a smaller training sample size. On MNIST in Table I, Kmeans on the original space was only able to obtain NMI=0.5163 and ACC=0.5618. In comparison, VAED obtained a better result than most methods, at NMI=0.819 and ACC=0.8875, with the exception of DBC. The reason could be that 3 layers for VAED's encoder may not be enough for state-of-the-art clustering on MNIST. In comparison, VaDE, DCN, DBC and DEC use 5 layers for the encoder, while AED, NSC-AGA, DC-Kmeans and DC-GMM use 4 layers.

B. More challenging datasets

Our next goal is to evaluate VAED on real datasets, including a dataset with a larger number of classes (MIT67) and more difficult image content such as scene categories (Scene15 and MIT67). These datasets are rarely used by deep clustering algorithms. As a result, we implemented AED and VAE as our baselines for comparison. For the latter, a VAE is first learnt on the dataset and then Kmeans is applied in the latent space. All methods here use ResNet18 features as input and have the same network dimensions as VAED.

Table II: Proposed method vs baselines (using ResNet18).

Method | Scene15 NMI | Scene15 ACC | SVHN NMI | SVHN ACC | MIT67 NMI | MIT67 ACC
Original | 0.7754 | 71.20 | 0.6397 | 68.15 | 0.6610 | 48.61
AED | 0.8016 | 80.27 | 0.7080 | 73.73 | 0.6650 | 49.12
VAE | 0.8150 | 82.41 | 0.7371 | 76.63 | 0.6516 | 48.62
VAED (ours) | 0.8332 | 88.12 | 0.8111 | 91.53 | 0.67098 | 58.96

Table III: Computational time (in seconds) for 50 iterations.

Method | Scene15 | SVHN | MIT67
AED | 1080 | 2297 | 7704
VAE | 122 | 122 | 120
VAED | 1858 | 3393 | 13200
Our VAED encoder uses a 512-384-256 network structure, where 512 refers to the output dimension of ResNet18 as our image feature extractor, 384 is the dimension of our 2nd layer, and our latent space has 256 neurons. Two scene recognition datasets are used in our experiments: Scene15 has 15 classes and 4485 images, and MIT67 has 67 classes and 15,620 images. We also include SVHN, which is a more complex and challenging dataset than MNIST. The images in SVHN are natural and not clean, with large variance in illumination and distraction. For each dataset, we start with Kmeans on the original ResNet18 feature space. From Table II, we see that AED outperforms Kmeans by a large gain on both Scene15 and SVHN. VAE obtains a minor performance gain over AED. However, both AED and VAE do not perform any better than Kmeans on MIT67; in fact, VAE performs worse than Kmeans on MIT67. We suspect that the poor performance of both methods on MIT67 is due to the large class number. Fortunately, VAED does not suffer from this issue. The performance of VAED is significantly better than both AED and VAE on all three datasets.

In Table III, we compare the complexity of VAED with AED and VAE using CPU time. Overall, VAED is the most expensive. In VAE, the only requirement is to perform weight updates, and its cost is consistent across all datasets. In comparison, AED and VAED are much slower because they must ensure that Kmeans or the GMM has converged. The class number and sample size also affect Kmeans and the GMM, and hence the computational time.

V. CONCLUSION

We have discussed training an AE or VAE for clustering. One of the main problems is how to improve the multiclass representation in the latent space. A recent approach known as AED attempts to solve this problem, where D refers to the distance error function between the AE and Kmeans in the latent space. We found several issues with the original AED. Firstly, AED suffers from the constraint of using points in the latent space as inputs. Secondly, AED cannot be optimized for both the VAE and the GMM since it does not treat variance as useful information. Lastly, when using the reparameterization trick with AED, the network gradient of the VAE may suffer from the vanishing gradient problem. We proposed VAED to overcome all these problems of AED; in fact, AED is a specific case of VAED when assuming spherical Gaussians. We showed significant improvements using VAED over AED and VAE on the digit and scene recognition datasets, as well as on-par or better results than recently published best methods on deep clustering networks.

REFERENCES

M. Abavisani and V. M. Patel. Deep sparse representation-based classification. IEEE Signal Processing Letters, 26(6):948–952, 2019.
A. Ali and F. Yangyu. Automatic modulation classification using deep learning based on sparse autoencoders with nonnegativity constraints. IEEE Signal Processing Letters, 24(11):1626–1630, 2017.
S. Amini and S. Ghaemmaghami. A new framework to train autoencoders through non-smooth regularization. IEEE Transactions on Signal Processing, 67(7):1860–1874, April 2019.
B. O. Ayinde and J. M. Zurada. Deep learning of constrained autoencoders for enhanced understanding of data. IEEE Transactions on Neural Networks and Learning Systems, 29(9):3969–3979, 2017.
C. M. Bishop. Pattern Recognition and Machine Learning. Springer, 2006.
D. Cai, X. He, and J. Han. Document clustering using locality preserving indexing. IEEE Transactions on Knowledge and Data Engineering, 17(12):1624–1637, December 2005.
J. Deng, X. Xu, Z. Zhang, S. Frühholz, and B. Schuller. Universum autoencoder-based domain adaptation for speech emotion recognition. IEEE Signal Processing Letters, 24(4):500–504, 2017.
T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning. Springer Series in Statistics. Springer, 2001.
K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
S. Inoue, H. Kameoka, L. Li, S. Seki, and S. Makino. Joint separation and dereverberation of reverberant mixtures with multichannel variational autoencoder. In ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 96–100. IEEE, 2019.
E. Jang, S. Gu, and B. Poole. Categorical reparameterization with Gumbel-softmax. arXiv preprint arXiv:1611.01144, 2016.
P. Ji, T. Zhang, H. Li, M. Salzmann, and I. Reid. Deep subspace clustering networks. In Advances in Neural Information Processing Systems, pages 24–33, 2017.
Q. Ji, Y. Sun, J. Gao, Y. Hu, and B. Yin. Nonlinear subspace clustering via adaptive graph regularized autoencoder. IEEE Access, 7:74122–74133, 2019.
X. Jiang. Asymmetric principal component and discriminant analyses for pattern classification. IEEE Transactions on Pattern Analysis & Machine Intelligence, (5):931–937, 2008.
X. Jiang. Linear subspace learning-based dimensionality reduction. IEEE Signal Processing Magazine, 28(2):16–26, 2011.
Z. Jiang, Y. Zheng, H. Tan, B. Tang, and H. Zhou. Variational deep embedding: An unsupervised and generative approach to clustering. In Proceedings of the 26th International Joint Conference on Artificial Intelligence, IJCAI'17, pages 1965–1972. AAAI Press, 2017.
H. Kameoka, T. Kaneko, K. Tanaka, and N. Hojo. ACVAE-VC: Non-parallel voice conversion with auxiliary classifier variational autoencoder. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2019.
E. Karamatli, A. T. Cemgil, and S. Kirbiz. Audio source separation using variational autoencoders and weak class supervision. IEEE Signal Processing Letters, pages 1–1, 2019.
D. P. Kingma and M. Welling. Stochastic gradient VB and the variational auto-encoder. In Second International Conference on Learning Representations, ICLR, 2014.
S. Lazebnik, C. Schmid, and J. Ponce. Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories. In 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, volume 2, pages 2169–2178. IEEE, 2006.
Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
S. Leglaive, U. Şimşekli, A. Liutkus, L. Girin, and R. Horaud. Speech enhancement with variational autoencoders and alpha-stable distributions. In ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 541–545. IEEE, 2019.
F. Li, H. Qiao, and B. Zhang. Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition, 83:161–173, 2018.
K.-L. Lim and H. Wang. MAP approximation to the variational Bayes Gaussian mixture model and application. Soft Computing, pages 1–13, 2017.
C. J. Maddison, A. Mnih, and Y. W. Teh. The concrete distribution: A continuous relaxation of discrete random variables. arXiv preprint arXiv:1611.00712, 2016.
Y. Netzer, T. Wang, A. Coates, A. Bissacco, B. Wu, and A. Y. Ng. Reading digits in natural images with unsupervised feature learning. 2011.
S. Parthasarathy, V. Rozgic, M. Sun, and C. Wang. Improving emotion classification through variational inference of latent variables. In ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 7410–7414. IEEE, 2019.
X. Peng, J. Feng, S. Xiao, W.-Y. Yau, J. T. Zhou, and S. Yang. Structured autoencoders for subspace clustering. IEEE Transactions on Image Processing, 27(10):5076–5086, 2018.
A. Quattoni and A. Torralba. Recognizing indoor scenes. In 2009 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 413–420. IEEE, 2009.
R. G. Soares. Effort estimation via text classification and autoencoders. In 2018 International Joint Conference on Neural Networks (IJCNN), pages 01–08. IEEE, 2018.
C. Song, F. Liu, Y. Huang, L. Wang, and T. Tan. Auto-encoder based data clustering. In Iberoamerican Congress on Pattern Recognition, pages 117–124. Springer, 2013.
B. Sun and H. Feng. Efficient compressed sensing for wireless neural recording: A deep learning approach. IEEE Signal Processing Letters, 24(6):863–867, 2017.
J. Sun, X. Wang, N. Xiong, and J. Shao. Learning sparse representation with variational auto-encoder for anomaly detection. IEEE Access, 6:33353–33361, 2018.
K. Tian, S. Zhou, and J. Guan. DeepCluster: A general clustering framework based on deep learning. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 809–825. Springer, 2017.
J. Xie, R. Girshick, and A. Farhadi. Unsupervised deep embedding for clustering analysis. In International Conference on Machine Learning, pages 478–487, 2016.
W. Xu and Y. Tan. Semisupervised text classification by variational autoencoder. IEEE Transactions on Neural Networks and Learning Systems, 2019.
B. Yang, X. Fu, N. D. Sidiropoulos, and M. Hong. Towards k-means-friendly spaces: Simultaneous deep learning and clustering. In Proceedings of the 34th International Conference on Machine Learning, Volume 70, pages 3861–3870. JMLR.org, 2017.
J. Yang, J. Liang, K. Wang, P. Rosin, and M.-H. Yang. Subspace clustering via good neighbors. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019.
T. Yu, C. Guo, L. Wang, S. Xiang, and C. Pan. Self-paced autoencoder. IEEE Signal Processing Letters, 25(7):1054–1058, 2018.
Q. Zhang and J. H. Hansen. Language/dialect recognition based on unsupervised deep learning. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 26(5):873–882, 2018.
1712.06527.pdf | Deep generative models of genetic variation capture mutation effects

Adam J. Riesselman* (Program in Biomedical Informatics, Harvard Medical School; [email protected]), John B. Ingraham* (Program in Systems Biology, Harvard University; [email protected]), Debora S. Marks (Department of Systems Biology, Harvard Medical School; [email protected]). * Equal contribution

Abstract

The functions of proteins and RNAs are determined by a myriad of interactions between their constituent residues, but most quantitative models of how molecular phenotype depends on genotype must approximate this by simple additive effects. While recent models have relaxed this constraint to also account for pairwise interactions, these approaches do not provide a tractable path towards modeling higher-order dependencies. Here, we show how latent variable models with nonlinear dependencies can be applied to capture beyond-pairwise constraints in biomolecules. We present a new probabilistic model for sequence families, DeepSequence, that can predict the effects of mutations across a variety of deep mutational scanning experiments significantly better than site independent or pairwise models that are based on the same evolutionary data. The model, learned in an unsupervised manner solely from sequence information, is grounded with biologically motivated priors, reveals latent organization of sequence families, and can be used to extrapolate to new parts of sequence space.

Introduction

Modern medicine and biotechnology are routinely challenged to both interpret and exploit how mutations will affect biomolecules. From interpreting which genetic variants in humans underlie disease, to developing modified proteins that have useful properties, to synthesizing large molecular libraries that are enriched with functional sequences, there is a need to rapidly assess whether a given mutation to a protein or RNA will disrupt its function [1, 2]. Motivated by these diverse applications, new technologies have emerged that simultaneously assess the effects of thousands of mutations in parallel [3-25] (sometimes referred to as deep mutational scans or MAVEs [27, 28]). In these assays, the measured attributes range from ligand binding, splicing and catalysis [4, 8, 11, 13, 21, 23, 29] to cellular or organismal fitness under selection pressure [5-7, 9, 12, 14, 17, 19, 20].

Figure 1. A nonlinear latent variable model captures higher-order dependencies in proteins and RNAs. a. In contrast to sitewise and pairwise models that factorize dependency in sequence families with low-order terms, a nonlinear latent variable model posits hidden variables z that can jointly influence many positions at the same time. b. The dependency p(x|z) of the sequence x on the latent variables z is modeled by a neural network, and inference and learning is made tractable by jointly training with an approximate inference network q(z|x). This combination of model and inference is also known as a variational autoencoder.

Since sequence space is exponentially large and experiments are resource-intensive, accurate computational methods are an important component for high-throughput sequence annotation and design. Many computational tools have been developed for predicting the effects of mutations, and most progress in the efficacy of predictions has been driven by the ability of models to leverage the signal of evolutionary conservation among related sequences [30-35].
While previous approaches analyzed this signal in a residue-independent manner, recent work has demonstrated that incorporating inter-site dependencies using a pairwise model can power state-of-the-art predictions for high-throughput mutational experiments [36-38]. Although this incorporation of pairwise epistasis represented an important step forward, contemporary models based on natural sequence variation are still unable to model higher-order effects. This is despite the frequent observation that higher order epistasis pervades the evolution of proteins and RNAs [39-42]. Naïvely, one way to address this would be to simply extend the pairwise models with third or higher terms, but this is statistically unfeasible: fully-parameterized extensions of the pairwise models to third-order interactions will already have approximately 10^9 interaction terms for a protein of length only 200 amino acids. Even if such a model could be engineered or coarse-grained to be computationally and statistically tractable, it would only marginally improve the fraction of higher-order terms considered, leaving 4th and higher order interactions seemingly out of reach.

The intractability of higher order interaction models for modeling epistasis in proteins is a consequence of how these models describe data: in general, every possible higher order interaction requires explicit incorporation of a unique free parameter that must be estimated. However, this is not the only way to model higher-order correlations. Rather than describing them by explicit inclusion of parameters for each type of interaction, it is possible to instead implicitly capture higher-order correlations by means of latent variables. Latent variable models posit hidden factors of variation that explain observed data and involve joint estimation of both hidden variables for each data point as well as global parameters describing how these hidden variables affect the observed. Two widely used models for the analysis of genetic data, PCA and admixture analysis [44-46], can be cast as latent variable models with linear dependencies. Although these linear latent variable models are restricted in the types of correlations that they can model, replacing their linear dependencies with flexible nonlinear transformations can in principle allow the models to capture arbitrary order correlations between observed variables. Recent advances in approximate inference [47, 48] have made such nonlinear latent variable models tractable for modeling complex distributions for many kinds of data, including text, audio, and even chemical structures, but their application to genetic data remains in its infancy.

Here, we develop nonlinear latent variable models for biological sequence families and leverage approximate inference techniques to infer them from large multiple sequence alignments. We show how a Bayesian deep latent variable model for protein sequence families can be used to predict the effects of mutations and organize sequence information, all while being grounded with biologically motivated architecture and learned in an unsupervised fashion.
Results

A deep generative model for evolutionary sequence data

One strategy for reasoning about the consequences of mutations to genes is to develop models of the selective constraints that have been relevant throughout evolution. Since the genes that we observe across species today are the results of long-term evolutionary processes that select for functional molecules, a generative model of the outputs of evolution must implicitly learn some of these functional constraints. If we approximate the evolutionary process as a sequence generator with probability p(x|θ) that has been fit to reproduce the statistics of evolutionary data, we can use the probabilities that the model assigns to any given sequence as a proxy for the relative plausibility that the molecule satisfies functional constraints. We will consider the log-ratio

log [ p(x^(Mutant) | θ) / p(x^(Wild-type) | θ) ]

as a heuristic metric for the relative favorability of a mutated sequence, x^(Mutant), versus a wild-type sequence, x^(Wild-type). This log-ratio heuristic has been previously shown to accurately predict the effects of mutations across multiple kinds of generative models. Our innovation is to instead consider another class of probabilistic models for p(x|θ): nonlinear latent variable models (Figure 1). It is important to emphasize that this new approach, as with the previous pairwise approach, is fully unsupervised: we never train on any observed mutation effect data, but rather use the statistical patterns in observed sequences as a signal of selective constraint.

We introduce a nonlinear latent variable model to implicitly capture higher order interactions between positions in a sequence in a protein family. For every observed sequence x, we posit unobserved latent variables z together with a generative process p(z)p(x|z) that specifies a joint distribution over hidden variables and observed variables. Inference in this model is challenging, as the marginal probability of the observed data, p(x), requires integrating over all possible hidden z:

p(x|θ) = ∫ p(x|z, θ) p(z) dz

While directly computing this probability is intractable in the general case, we can use variational inference to instead form a lower bound on the (log) probability. This bound, known as the Evidence Lower Bound (ELBO), takes the form

log p(x) ≥ E_{q(z|x)}[ log p(x|z) ] − D_KL( q(z|x) || p(z) ),

where q(z|x) is an approximate posterior for the hidden variables given the observed variables, p(z|x). Modeling both the conditional distribution p(x|z) of the generative model and the approximate posterior q(z|x) with neural networks results in a flexible model-inference combination, known as a Variational Autoencoder [47, 48] (Figure 1b).

Neural network-parameterized latent variable models can in principle model complex correlations in data, but without additional architectural and statistical considerations may be hard to interpret and unlikely to generalize. We encourage generalization in three ways. First, we encourage sparse interactions by placing a group sparsity prior over the last layer of the neural network for p(x|z) that encourages each hidden unit in the network to only influence a few positions at a time. This is motivated by the observation that higher order interactions in proteins, while importantly higher than second order, are nevertheless low-valence compared to the number of residues in the protein. Second, we encourage correlation between amino acid usage by convolving the final layer with a width-1 convolution operation. Third, we estimate all global parameters with variational Bayes by estimating approximate posterior distributions over each model parameter; the result is that rather than learning a single neural network for p(x|z), we learn an infinite ensemble of networks.
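To make the bound concrete, the following is a minimal PyTorch-style sketch (our illustration, not the released DeepSequence implementation) of a single-sample ELBO estimate for a categorical decoder with a standard-normal prior, using the reparameterization z = μ + σε. The encoder and decoder modules are assumed to exist with the interfaces noted in the comments.

```python
import torch
import torch.nn.functional as F

def elbo_one_sample(x_onehot, encoder, decoder):
    """x_onehot: (L, q) one-hot sequence; encoder returns (mu, log_var),
    decoder maps z to (L, q) per-position logits (assumed interfaces).

    Returns a single-sample Monte Carlo estimate of
    E_q[log p(x|z)] - KL(q(z|x) || N(0, I)).
    """
    mu, log_var = encoder(x_onehot)            # q(z|x), diagonal Gaussian
    eps = torch.randn_like(mu)
    z = mu + torch.exp(0.5 * log_var) * eps    # reparameterization trick

    logits = decoder(z)
    log_px_given_z = (x_onehot * F.log_softmax(logits, dim=-1)).sum()

    # Closed-form KL between N(mu, sigma^2) and N(0, 1), summed over dims.
    kl = 0.5 * (mu ** 2 + log_var.exp() - log_var - 1.0).sum()
    return log_px_given_z - kl
```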
This joint variational approximation is then optimized by stochastic gradient ascent on the ELBO to give a fully trained model (Methods). After optimizing the model on a given family, it can be readily applied to predict the effects of arbitrary mutations to arbitrary sequences. Following the previous heuristic of quantifying effects with a log-ratio, log[ p(x^(Mutant)|θ) / p(x^(Wild-type)|θ) ], we approximate this quantity by replacing each log probability with its lower bound, the ELBO. For example, given a starting wild-type sequence, one can rapidly compute this difference in ELBOs for all possible single point mutations (Figure 2).

Figure 2. Mutation effects can be quantified by likelihood ratios. After fitting a probabilistic model to a family of homologous sequences, we heuristically quantify the effect of a mutation as the log-ratio of mutant likelihood to wild-type likelihood (as approximated by the ELBO; Methods). Below: mutation effect scores for all possible point mutations to β-lactamase. Since all model parameters are computed for any combination of mutations compared to wild type, sequences that are multiple steps away from the wild type can also be assessed and compared.
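Since each ELBO is a stochastic estimate, in practice the scores are averaged over many Monte Carlo samples before taking the difference (the Methods below use 2,000 samples per sequence). With elbo_one_sample as sketched earlier, a schematic scoring loop might read:

```python
def mutation_effect(x_wt, x_mut, encoder, decoder, n_samples=2000):
    # Average single-sample ELBOs to reduce Monte Carlo noise, then
    # score the mutant as the difference of lower bounds:
    # ELBO(mutant) - ELBO(wild type)  ~  log p(mutant) - log p(wild type).
    def elbo(x):
        total = sum(elbo_one_sample(x, encoder, decoder)
                    for _ in range(n_samples))
        return total / n_samples

    return elbo(x_mut) - elbo(x_wt)
```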
A deep latent variable model captures the effects of mutations

Deep mutational scanning (DMS) experiments provide a systematic survey of the mutational landscape of proteins and can be used to benchmark computational predictors for the effects of mutations. Here we surveyed 28 in vivo and in vitro deep mutational scanning experiments, comprising 21 different proteins and a tRNA, to assess the ability of the deep latent variable model to predict the effect of mutations purely from natural sequence variation [8, 14, 17, 22, 25, 37, 38, 51-68]. For each multiple sequence alignment of a family, we fit five replicas of the model from 5 different initializations, both to assess reproducibility and to create an ensemble predictor. We calculate mutation effects as the difference in ELBOs (above and Methods). Our deep latent variable model, DeepSequence, is predictive of the effect of mutations, with better performance than a site-independent model without dependencies between positions (the same or better performance in 23 out of 28 datasets; average Spearman ρ increase 0.11, Figure 3a). DeepSequence matches or is more predictive than the current state-of-the-art pairwise model in 22 out of 28 datasets (average Spearman ρ increase 0.03) and, as expected, the ensembled prediction of the five model replicas is more predictive than the average performance of individual predictors (28 out of 28 datasets) (Figure 3a, Supplementary Figure 1).

Figure 3. A deep latent variable model predicts the effects of mutations better than site-independent or pairwise models. a. A nonlinear latent variable model (DeepSequence) captures the effects of mutations across deep mutational scanning experiments as measured by rank correlation (Supplementary Figure 1). b. The latent variable model tends to be more predictive of mutational effects than pairwise and site-independent models when fit to deeper, more evolutionarily diverse sequence alignments, as measured by the effective family size (Methods). c. Average Spearman ρ before and after bias calibration of representative single-mutant datasets (Methods, Supplementary Figure 3).

A deep alignment is necessary but not sufficient for reasonable agreement between experimental measurements and model predictions. Where the effective family size is greater than 100 sequences, DeepSequence matches or improves predictive performance in 21 out of 24 datasets; in this data regime, the latent variable model increases the average model-data correlation by 0.045. When fit to protein families with lower effective family sizes (N=4, with N_eff(θ=0.2) = 92.6, 44.6, 21.8 and 3.0), the independent and pairwise models outperform the deep generative model (Figure 3b). We anticipate that effective family size can guide model selection for best predictive performance.

Figure 4. Latent variables capture organization of sequence space. In a two-dimensional latent space for the β-lactamase family, closeness in latent space reflects phylogenetic groupings.

When examining the variation within a single deep mutational scanning experiment, it occupies only a very small portion of the sequence space of the entire evolutionary family. We compared the residuals of the rankings of the predictions versus the experiment for each amino acid transition and observed a similar prediction bias for all three evolutionary models (independent, EVmutation, DeepSequence; Supplementary Figure 2). When averaged across all possible starting amino acids, positions mutated to prolines and charged amino acids are consistently predicted as too deleterious, while sulphur-containing residues and aromatics are consistently predicted as too fit (Supplementary Figure 2). Although the unsupervised DeepSequence model presented in this paper is improved by accounting for this bias in datasets with only single mutants, the improvements are small (Figure 3c, Supplementary Figure 3), suggesting that most of the disagreement between DeepSequence and the experimental measurements is more complex.

We found that the combination of biologically motivated priors and Bayesian approaches for inference on the weights was important to learning models that generalize. To test the importance of these various aspects of the model and inference, we performed an ablation study across a subset of the proteins.
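The benchmark metric throughout is the rank correlation between model scores and measured effects; with made-up numbers, a scipy sketch of that comparison is:

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical per-mutant scores: model log-ratios vs. DMS measurements.
delta_elbo = np.array([-6.1, -0.3, -2.8, -4.9, -0.9])
measured = np.array([0.05, 0.92, 0.40, 0.11, 0.77])

rho, _ = spearmanr(delta_elbo, measured)
print(abs(rho))  # |Spearman rho|, the quantity reported in Figure 3
```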
We found that using (i) Bayesian variational approximations on the weights, (ii) sparse priors on the last layer, (iii) a final width-1 convolution for amino acid correlations, and (iv) a global temperature parameter all improved the ability of the model to predict the effects of mutations across this subset of the experiments. Moreover, when comparing to other common approaches for regularization such as Dropout or point estimation with group sparsity priors, we found that our variational Bayesian approach performed better (Table 1). Most importantly, only the Bayesian approaches for inference of the global parameters, which estimate approximate posteriors, were able to consistently outperform the previous pairwise-interaction models.

Figure 5. Model parameters capture structural proximity and amino acid correlations. a. (Left) Sparse sub-groups targeted by the latent factors with a group sparsity prior are enriched as closer in structure than expected for random sub-groups. (Right) When visualizing structures with only median values of the structural enrichment, structural proximity is apparent. Representative sparse sub-groups are plotted on DNA methyltransferase HaeIII (pdb: 1dct; log-ratio distance from left to right: -0.11, -0.11, -0.13) and β-lactamase (pdb: 1axb; log-ratio distance from left to right: -0.11, -0.11, -0.10). b. Correlations in the weights of the final width-1 convolution reflect known amino acid correlations as captured by the well-known substitution matrix BLOSUM62 (Spearman ρ = 0.79).
Table 1. Biologically motivated priors and Bayesian learning improve model performance. Ablation studies of critical components of DeepSequence, showing the average Spearman ρ of predictions from five randomly-initialized models. We include combinations of components of the structured matrix decomposition and use either Bayesian approximation or Maximum a posteriori (MAP) estimation of decoder weights. These can be compared to predictions made from EVmutation (Pair) and the site-independent model (Site). In the original table, inclusion of each component is indicated with a check mark, and top performing model configurations for each dataset are bolded. Column headings: Bayesian; Sparsity [S]; Convolution [C]; Temperature [T]; MAP; L2 Regularization; Dropout; [S+C+T]; Final ReLU; Pair; Site. Values per protein, in the original column order:

β-lactamase: 0.73, 0.73, 0.73, 0.73, 0.73, 0.74, 0.53, 0.61, 0.04, 0.40, 0.56, 0.37, 0.34, 0.42, 0.70, 0.60
PSD 95 (PDZ domain): 0.58, 0.60, 0.58, 0.57, 0.57, 0.55, 0.55, 0.48, 0.32, 0.47, 0.50, 0.41, 0.37, 0.47, 0.54, 0.47
GAL4 (DNA-binding domain): 0.61, 0.46, 0.50, 0.62, 0.60, 0.58, 0.60, 0.53, 0.26, 0.47, 0.52, 0.43, 0.42, 0.47, 0.59, 0.41
HSP90 (ATPase domain): 0.54, 0.54, 0.54, 0.51, 0.52, 0.52, 0.48, 0.45, 0.03, 0.34, 0.44, 0.26, 0.22, 0.33, 0.49, 0.43
Kanamycin kinase APH(3')-II: 0.62, 0.62, 0.62, 0.60, 0.59, 0.60, 0.53, 0.49, 0.09, 0.38, 0.49, 0.40, 0.39, 0.38, 0.59, 0.33
DNA methyltransferase HaeIII: 0.70, 0.70, 0.69, 0.70, 0.68, 0.68, 0.64, 0.64, 0.12, 0.54, 0.64, 0.50, 0.49, 0.54, 0.69, 0.44
PABP singles (RRM domain): 0.67, 0.67, 0.66, 0.65, 0.63, 0.65, 0.64, 0.62, 0.44, 0.59, 0.63, 0.58, 0.58, 0.59, 0.59, 0.42
Ubiquitin: 0.50, 0.46, 0.46, 0.44, 0.48, 0.43, 0.37, 0.39, 0.09, 0.38, 0.37, 0.29, 0.31, 0.38, 0.43, 0.46
YAP1 (WW domain): 0.64, 0.64, 0.64, 0.63, 0.63, 0.64, 0.63, 0.58, 0.28, 0.50, 0.61, 0.49, 0.44, 0.50, 0.57, 0.58

The latent variables and global variables capture biological structure

Examining the low-dimensional latent spaces learned by a latent variable model can give insight into relationships between data points (sequences), so we fit an identical replica of the model for β-lactamase that was constrained to have a 2-dimensional z. We observe that sequence closeness in latent space largely reflects phylogenetic groupings, though with some deviations (Figure 4). Interestingly, when we examine the distribution of single-point mutation sequences in latent space, they are tightly clustered. It is important to note that these sequences need not be separated at all; the conditional distribution p(x|z) can in principle model all of this variation without additional need for variation in the latent variables.

For the pairwise model of sequence families, it is well established that strongly coupled positions in the model are also close in protein 3D structure [71-74]. Assessing an analogous pattern in a latent variable model is difficult, however, because explicit correlations between sites in p(x) will be implicitly captured by the couplings between observed variables and latent variables. Since these dependencies are mediated by the neural network for p(x|z) and the observed variables x are only directly affected via connections from the last hidden layer, we can focus our attention on those neural network weights. The group sparsity prior over this set of weights (Methods) learns 500 soft sub-groups of positions, which can be seen as subsets of the entire sequence that are jointly influenced by the same hidden activations. We tested if these subgroups tend to be closer in 3D structure than might be expected by chance.
For each of these subgroups, we computed the average pairwise distance between positions in the group (after thresholding for inclusion; Methods). We observe that the bulk of these average subgroup distances tends to be less than the null expectation for distance (Figure 5a). When focusing on subgroups with enrichment under the null near the median for that protein, we see that they have many nearby subsets of residues on the crystal structures (Figure 5b). The final width-1 convolution in the network is parameterized by a matrix that captures patterns of amino acid usage. To visualize these patterns, we plot the correlations between amino acid types across the input channels of this matrix and find that it groups amino acids of similar types. Moreover, it is well correlated with the widely used BLOSUM62 substitution matrix (Figure 5c).

Discussion

We have shown that a deep latent variable model can model variation in biological sequence families and be applied to predict the effects of mutations across diverse classes of proteins and RNAs. We find that the predictions of the deep latent variable model are more accurate than a previously published pairwise-interaction approach to modeling epistasis [36, 75], which in turn was more accurate than commonly used supervised methods [76, 77]. In addition, both the latent variables and global variables of the model learn interpretable structure, both for macrovariation and phylogeny as well as for structural proximity of residues.

However, while deep latent variable models introduce additional flexibility to model higher-order constraints in sequence families, this comes at the price of reduced interpretability and increased potential for overfitting. We find that a Bayesian approach to inference, where averages are computed over an ensemble of models and global parameters are controlled by group sparsity priors, was a crucial step towards attaining generality. This suggests that future work could benefit from additional biologically-motivated, hierarchical priors as well as more accurate methods for variational inference [78, 79]. Additionally, incorporating more rigidly structured probabilistic graphical models to model dependencies between latent variables could improve generality and interpretability. Even our preliminary results with group sparsity priors suggest that fear of a tradeoff between interpretability and flexibility for using deep models on biological data may be largely remedied by hierarchical Bayesian approaches to modeling.

A second challenge for all approaches that predict the effects of mutations from evolutionary sequence variation concerns the data themselves. DeepSequence, as with the majority of previous mutation prediction methods, relies critically on the multiple sequence alignments used for training data [36, 72, 81-83]. At present, the criteria for the numbers of non-redundant sequences and the level of diversity in multiple sequence alignments are ad hoc, and these should be improved to give better uncertainty criteria and accuracy expectations when predicting the effects of mutations. Secondly, the premise that evolutionary data can be applied to predict the outcome of an experiment is highly contingent on the relevance of the experimental assay to long-term selective forces in the family. A mutation may be damaging with regard to some measurable protein feature, e.g. enzyme efficiency, but harmless for stability or even organism fitness, as we and others have previously discussed [36, 38, 63].
We therefore suggest that DeepSequence could be incorporated into umbrella or supervised methods to enhance prediction for specific purposes such as disease risk, binding specificity or enzyme efficiency. Despite the challenges for deep models of sequence variation and the data used to train them, they are likely to be of increasing importance to the high-throughput design and annotation of biological sequences. Evolution has generated millions of protein experiments, and deep generative models can begin to identify the statistical patterns of constraint that characterize essential functions of molecules. We make both the model and datasets available at github.com/debbiemarkslab/DeepSequences

Acknowledgements

We thank Chris Sander, Frank Poelwijk, David Duvenaud, Sam Sinai, Eric Kelsic and members of the Marks lab for helpful comments and discussions. While this work was in progress, Sinai et al. also reported on the use of variational autoencoders for protein sequences. A.J.R. is supported by DOE CSGF fellowship DE-FG02-97ER25308. D.S.M. and J.B.I. were funded by NIGMS (R01GM106303).

Methods

Alignments. We used the multiple sequence alignments that were published with EVmutation for the 19 families that overlapped and repeated the same alignment-generation protocol for the 4 additional proteins that were added in this study. Briefly, for each protein (target sequence), multiple sequence alignments of the corresponding protein family were obtained by five search iterations of the profile HMM homology search tool jackhmmer against the UniRef100 database of non-redundant protein sequences (release 11/2015). We used a bit score of 0.5 bits/residue as a threshold for inclusion, unless the alignment yielded < 80% coverage of the length of the target domain, or if there were not enough sequences (redundancy-reduced number of sequences < 10L). For < 10L sequences, we decreased the required average bit score until satisfied, and when the coverage was < 80% we increased the bit score until satisfied. Proteins with < 2L sequences at < 70% coverage were excluded from the analysis. See previous work for the ParE-ParD toxin-antitoxin and tRNA alignment protocols.

Sequence weights. The distributions of protein and RNA sequences in genomic databases are biased by both (i) human sampling, where the sequences of certain highly-studied organisms may be overrepresented, and (ii) evolutionary sampling, where some types of species may have undergone large radiations that may not have anything to do with the particular molecule we are studying. We aim to reduce these biases in a mechanistically-agnostic way by reweighting the empirical data distribution to make it smoother. We use the previously established procedure of computing each sequence weight π_s as the reciprocal of the number of sequences within a given Hamming distance cutoff. If D_H(x_s, x_t) is the normalized Hamming distance between a query sequence x_s and another sequence in the alignment x_t, and θ is a pre-defined neighborhood size, the sequence weight is

π_s = [ Σ_t 1( D_H(x_s, x_t) < θ ) ]^{-1}

The effective sample size of a multiple sequence alignment can then be computed as the sum of these weights,

N_eff = Σ_s π_s

To fit a model to reweighted data, there are two common approaches. First, as was done previously, one can reweight every log-likelihood in the objective by its sequence weight π_s. While this works well for batch optimization, we found it to lead to high-variance gradient estimates with mini-batch optimization that make stochastic gradient descent unstable. We instead used the approach of sampling data points in each minibatch with probability p_s proportional to their weight,

p_s = π_s / N_eff
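A direct numpy transcription of this reweighting (our sketch, written for intuition rather than scale, since the pairwise comparison is quadratic in the number of sequences) is:

```python
import numpy as np

def sequence_weights(msa_onehot, theta=0.2):
    """msa_onehot: (N, L, q) one-hot multiple sequence alignment."""
    N, L, _ = msa_onehot.shape
    flat = msa_onehot.reshape(N, -1)
    # Pairwise fraction of identical positions between sequences.
    identity = flat @ flat.T / L
    # Normalized Hamming distance < theta  <=>  identity > 1 - theta;
    # each sequence counts itself, so every weight is well defined.
    neighbors = (identity > 1.0 - theta).sum(axis=1)
    return 1.0 / neighbors                      # pi_s

# Toy alignment of 8 random sequences over a 4-letter alphabet.
pi = sequence_weights(np.eye(4)[np.random.randint(0, 4, size=(8, 30))])
n_eff = pi.sum()                                # effective sample size
p = pi / n_eff                                  # minibatch sampling probs
```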
Following prior work, we set θ = 0.2 for all multiple sequence alignments (80% sequence identity), except those for viral proteins, where we set θ = 0.01 (99% sequence identity) due to limited sequence diversity and the expectation that small differences in viral sequences have a higher probability of containing constraint information than the same diversity might in a sample of mammals, for instance.

Background: latent factor models. Probabilistic latent variable models reveal structure in data by positing an unobserved generative process that created the data and then doing inference to learn the parameters of the generative process. We will focus on models with a generative process in which an unobserved set of factors z are drawn from an independent distribution and each data point x arises according to a conditional distribution p(x|z, θ) that is parameterized by θ. This process can be written as

z ~ N(0, I),   x ~ p(x|z, θ)

Principal Component Analysis (PCA) has been a foundational model for the analysis of genetic variation since its introduction by Cavalli-Sforza. PCA can be realized in this probabilistic framework as the zero-noise limit of Probabilistic PCA [44, 88]. With linear conditional dependencies p(x|z, θ), PCA can only model additive interactions between the latent factors z. This limitation could in principle be remedied by using a conditional model p(x|z, θ) with nonlinear dependencies on z.

Nonlinear categorical factor model. We will consider a conditional model p(x|z, θ) that differs from PCA in two ways: First, the conditional distribution of the data p(x|z) will be categorical rather than Gaussian, to model discrete characters. Second, the conditional distribution p(x|z, θ) will be parameterized by a neural network rather than a linear map. In this sense, our latent variable model may be thought of as a discrete, nonlinear analog of PCA. For this work, we considered a simple two-hidden-layer neural network parameterization of p(x|z, θ). The generative process specifying the conditional probability of letter a at position i is

z ~ N(0, I)
h^(1) = f_1( W^(1) z + b^(1) )
h^(2) = f_2( W^(2) h^(1) + b^(2) )
a^(i,·) = W^(3,i) h^(2) + b^(3,i)
p(x_i = a | z) = exp( a^(i,a) ) / Σ_b exp( a^(i,b) )

where f_1(u) = max(0, u) and f_2(u) = 1/(1 + e^{−u}).

Structured sparsity. Motivated by the observation that sequences have been well described by models with low-order interactions (such as pairwise undirected models), we structure the final layer of the decoder to be sparse, such that each hidden unit may only affect a few positions in the sequence. We parameterize each final weight matrix as

W^(3,i) = log(1 + e^λ) · σ(s_i) · W̃^(3,i) D

where σ(u) = 1/(1 + e^{−u}) is a sigmoid function representing a continuous relaxation of a spike and slab prior over the group of dense factors, using a logit-normal prior. A single set of scale parameters s can control the sparsity of dense factors out of the total number of factors by tiling. log(1 + e^λ) is a softplus function representing the inverse-temperature of the sequence family, and D is the dictionary. The priors over the decoder weights are

W ~ N(0, I),   b ~ N(0, I),   s ~ N(μ_s, σ_s²),   λ ~ N(0, I)

The factors σ(s) are a priori logit-normally distributed, which can be thought of as a smooth relaxation of a Bernoulli that can be made sharper by increasing the variance σ_s². We set μ_s such that the prior probability of approximate inclusion, p( σ(s) > 0.5 ), was 0.01. Given a fixed logit-variance of σ_s² = 16 and an inclusion probability p_incl = 0.01, we set the prior mean for the logit to μ_s = −9.3053 using

μ_s = √(2σ_s²) · erf⁻¹( 2 p_incl − 1 )
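This choice is easy to verify numerically; the short check below (ours) recovers the −9.3053 logit mean from the stated inclusion probability and variance.

```python
import numpy as np
from scipy.special import erfinv
from scipy.stats import norm

sigma2, p_incl = 16.0, 0.01
mu_s = np.sqrt(2 * sigma2) * erfinv(2 * p_incl - 1)
print(mu_s)                                  # approx. -9.3053

# Equivalent statement via the normal CDF: P(s > 0) = p_incl for
# s ~ N(mu_s, sigma2), i.e. the prior inclusion probability is 1%.
assert np.isclose(norm.sf(0.0, loc=mu_s, scale=np.sqrt(sigma2)), p_incl)
```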
Given a fixed logit-variance of $\sigma_S^2 = 16$ and an inclusion probability $p_{incl} = 0.01$, we set the prior mean for the logit as $\mu_S = -9.3053$ using

$$\mu_S = \sqrt{2\sigma_S^2}\;\mathrm{erf}^{-1}\left(2 p_{incl} - 1\right)$$

Variational approximation to $p(z|x)$. Nonlinear latent factor models are difficult to infer. Since the latent variables are not observed, computing the marginal likelihood of the data requires integrating them out:

$$\log p(x|\theta) = \log \int p(x|z, \theta)\, p(z)\, dz$$

We must do this integral because we do not know a priori which $z$ is responsible for each data point $x$, and so we average over all possible explanations weighted by their relative probability. In principle, the conditional probability $p(z|x)$ is given by Bayes' Theorem as the posterior,

$$p(z|x, \theta) = \frac{p(x|z, \theta)\, p(z)}{p(x|\theta)} = \frac{p(x|z, \theta)\, p(z)}{\int p(x|z, \theta)\, p(z)\, dz}$$

which is a challenging calculation that requires integrating over $z$.

Kingma and Welling, and Rezende et al., showed how to tractably overcome the challenge of the intractable posterior by introducing a variational approximation $q(z|x, \phi)$. By Jensen's inequality, this forms a lower bound on the marginal likelihood of a data point:

$$\log p(x|\theta) = \log \int p(x|z,\theta)\, p(z)\, dz = \log \int q(z|x,\phi)\, \frac{p(x|z,\theta)\, p(z)}{q(z|x,\phi)}\, dz \geq \int q(z|x,\phi) \log \frac{p(x|z,\theta)\, p(z)}{q(z|x,\phi)}\, dz$$

We can write this lower bound as:

$$\log p(x|\theta) \geq \mathbb{E}_{q(z|x,\phi)}\left[\log p(x|z,\theta)\right] - D_{KL}\left(q(z|x,\phi)\,\|\,p(z)\right)$$

We choose the following functional form for the variational approximation over $z$:

$$h^{(1)} = f_1\left(W^{(1)} x + b^{(1)}\right)$$
$$h^{(2)} = f_1\left(W^{(2)} h^{(1)} + b^{(2)}\right)$$
$$\mu = W^{(\mu)} h^{(2)} + b^{(\mu)}$$
$$\log \sigma^2 = W^{(\sigma)} h^{(2)} + b^{(\sigma)}$$
$$q(z|x,\phi) = \mathcal{N}\left(\mu, \sigma^2 I\right)$$

The latent variable $z$ can be reparameterized using an auxiliary random variable $\epsilon \sim \mathcal{N}(0, I)$:

$$z = \mu + \sigma \odot \epsilon$$

Variational approximation to $p(\theta|X)$. We apply a Bayesian approach to learning global parameters by extending the variational approximations to include both the latent variables $z$ as well as the global parameters $\theta$. Because the posterior for the global parameters is conditioned on the entire dataset, we must consider the marginal likelihood of the full dataset $X = \{x^{(1)}, \ldots, x^{(N)}\}$, which integrates out all the corresponding latent factors $Z = \{z^{(1)}, \ldots, z^{(N)}\}$:

$$\log p(X) = \log \int p(X|Z,\theta)\, p(Z)\, p(\theta)\, dZ\, d\theta \geq \int q(Z,\theta) \log \frac{p(X|Z,\theta)\, p(Z)\, p(\theta)}{q(Z,\theta)}\, dZ\, d\theta$$

The variational approximation factorizes as

$$q(Z, \theta) = q(\theta)\, q(Z|X)$$

The approximate posterior for $Z$ factorizes over the data:

$$q(Z|X) = \prod_{n} q\left(z^{(n)}|x^{(n)}\right)$$

The approximate posterior for $\theta$ factorizes over the model parameters:

$$q(\theta) = \prod_{k} q(\theta_k)$$

Incorporating both of these factors, the ELBO is then:

$$\log p(X) \geq \mathbb{E}_{q(\theta)}\left[\sum_{n} \mathbb{E}_{q(z|x^{(n)})}\left[\log p\left(x^{(n)}|z,\theta\right)\right] - D_{KL}\left(q(z|x^{(n)})\,\|\,p(z)\right)\right] - \sum_{k} D_{KL}\left(q(\theta_k)\,\|\,p(\theta_k)\right)$$

We model all variational distributions over the parameters with fully-factorized mean-field Gaussian distributions. In accordance with our data reweighting scheme, we set $N = N_{\mathrm{eff}}$, the effective number of sequences that is the sum of the sequence weights.

Model hyperparameters. We used a fixed architecture across all sequence families. The encoder has architecture 1500-1500-30 with fully connected layers and ReLU nonlinearities. The decoder has two hidden layers: the first of size 100 with a ReLU nonlinearity, and the second of size 2000 with a sigmoid nonlinearity. The dictionary $D$ is a 40 by $q$ matrix, where the alphabet size $q$ was 20 for proteins and 4 for nucleic acids. A single set of sparsity scale parameters controlled 4 sets of dense weights. Dropout was set to 0.5 when used in ablation studies. Models were optimized with Adam with default parameters using a batch size of 100 until convergence, completing 300,000 updates. Each model was fit five times to the same multiple sequence alignment using a different random seed. For mutation effect prediction, 2,000 ELBO samples were taken of a mutated and wildtype sequence and averaged to estimate the log probability of that mutation.
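As a quick numerical check of the spike-and-slab prior mean set above (our SciPy snippet, purely illustrative):

    import numpy as np
    from scipy.special import erfinv

    sigma_sq = 16.0   # fixed logit variance sigma_S^2
    p_incl = 0.01     # desired prior probability that sigmoid(S) > 0.5

    mu_s = np.sqrt(2.0 * sigma_sq) * erfinv(2.0 * p_incl - 1.0)
    print(mu_s)       # approximately -9.305, matching the value above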
Group sparsity analysis. The sparse scale parameters were introduced into the structured weight matrix decomposition to enable the model to capture low-valence interactions between residues. We aimed to test if positional inclusions for a given hidden unit were closer in three dimensions than expected by chance. To gather ground-truth distance data from protein structures, we retrieved related structures by searching the 3D structure database with our query sequences using jackhmmer. Multiple distance matrices were generated by taking the median of the minimum atom distances of both intra-chain and multimer contacts. The final distance matrix was generated by taking the elementwise minimum of these matrices.

To approximate the sparsity pattern of a hidden unit, we use the median of its scale parameters across model fits. For a given vector of scale parameters $S$ activating a hidden unit, the median activity is:

$$\bar{s}_i = \frac{1}{1 + e^{-\mathrm{median}(S_i)}}$$

Since the upstream activation is a sigmoid nonlinearity (bounded in $[0, 1]$), we denoted as "dead" those scale parameter vectors which do not have any scale parameter above 0.001, and these were removed from downstream analysis. We then determined a co-occurrence distance distribution for these scale parameters by first taking the upper triangle of the outer product of the scale parameters and normalizing it such that it sums to 1:

$$w_{ij} = \frac{\bar{s}_i\, \bar{s}_j}{\sum_{k < l} \bar{s}_k\, \bar{s}_l}, \qquad i < j$$

A normalized distance per vector of scale parameters, $\bar{D}_{\mathrm{normed}}$, can then be reported in Angstroms:

$$\bar{D}_{\mathrm{normed}} = \sum_{i < j} w_{ij}\, d_{ij}$$

where $d_{ij}$ is the structural distance between positions $i$ and $j$. This value was compared to $\bar{D}_{\mathrm{null}}$, in which the null distribution of scale parameters is isotropic, so that $\bar{D}_{\mathrm{null}}$ is the average pairwise distance between residues. Moreover, bootstrapped samplings of $\bar{s}$ converge to the same null value. The distribution of all distances $\bar{D}_{\mathrm{normed}}$ can be compared to this null value using a one-sided Student's t-test with a known mean.

Residual analysis. Spearman $\rho$ is calculated by transforming paired data to ranked quantiles and then computing the Pearson correlation between the ranks. To determine where the model over- or under-predicted the effect $\Delta E$ for each mutation, we transformed the experimental measurements and mutation effect predictions to normalized ranks on the interval $[0, 1]$. We define the residual effects as the residuals of a least-squares linear fit between the normalized ranks. A least-squares line was fit between the normalized ranks of the predictions $\hat{x}$ and the normalized ranks of the experiments $\hat{y}$, giving a slope and bias parameter, $a$ and $b$, respectively. Residuals $r$ were generated from the fit line:

$$r = \hat{y} - \left(a\hat{x} + b\right)$$

Positive values represent underprediction of the deleteriousness of the experimental effect, while negative values represent overprediction of deleteriousness. Deep mutational scans with only single mutations were analyzed, using the most recent experimental data for each protein. Residuals were grouped by the identity of the amino acid either before mutation (wildtype) or after mutation (mutant).

Bias correction. To correct for biases between mutation effect predictions and experimental measurements, we created a feature matrix for each mutation that included $\Delta E$, amino acid identity before and after mutation, alignment column statistics (conservation and amino acid frequency), and residue hydrophobicity. Leave-one-out cross validation (LOOCV) was used to correct the bias for each dataset. Using the most recent DMS experiment as the representative of that protein family (15 DMS datasets), the mutants of 14 datasets were used to fit a regression model to predict the residuals $r$ of each known mutation, given the feature matrix. After this model was fit, it was used to predict residuals $\hat{r}$ for the mutants in the held-out test dataset.
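The rank-based residual computation at the heart of this analysis is compact. A sketch with NumPy and SciPy (our illustration; the LOOCV regression described above would be fit on top of these residuals):

    import numpy as np
    from scipy.stats import rankdata

    def rank_residuals(predictions, experiments):
        """Residuals of a least-squares line between normalized ranks.

        Positive residuals mean the model under-predicts deleteriousness;
        negative residuals mean it over-predicts.
        """
        x = rankdata(predictions) / len(predictions)  # normalized ranks in (0, 1]
        y = rankdata(experiments) / len(experiments)
        a, b = np.polyfit(x, y, deg=1)                # slope a, bias b
        return y - (a * x + b)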
This predicted residual bias $\hat{r}$ was then subtracted from the normalized predicted rank, $\hat{y}_{corr} = \hat{y} - \hat{r}$. These corrected predictions were then reranked and compared to the experimental results to calculate a corrected Spearman $\rho$. To predict the effects of mutations solely from DMS data, the same LOOCV procedure was used, excluding all evolutionary information in the feature matrix for each mutation. In this case, the feature matrix was used to directly predict a rank $\hat{y}$. These values were subsequently reranked and compared to the ranked experimental results to calculate a corrected Spearman $\rho$.

References
1. Fowler, D.M. and S. Fields, Deep mutational scanning: a new style of protein science. Nature methods, 2014. 11(8): p. 801-807.
2. Kosuri, S. and G.M. Church, Large-scale de novo DNA synthesis: technologies and applications. Nature methods, 2014. 11(5): p. 499-507.
3. Romero, P.A., T.M. Tran, and A.R. Abate, Dissecting enzyme function with microfluidic-based deep mutational scanning. Proc Natl Acad Sci U S A, 2015. 112(23): p. 7159-64.
4. Roscoe, B.P. and D.N. Bolon, Systematic exploration of ubiquitin sequence, E1 activation efficiency, and experimental fitness in yeast. J Mol Biol, 2014. 426(15): p. 2854-70.
5. Roscoe, B.P., et al., Analyses of the effects of all ubiquitin point mutants on yeast growth rate. J Mol Biol, 2013. 425(8): p. 1363-77.
6. Melamed, D., et al., Deep mutational scanning of an RRM domain of the Saccharomyces cerevisiae poly(A)-binding protein. RNA, 2013. 19(11): p. 1537-51.
7. Stiffler, M.A., D.R. Hekstra, and R. Ranganathan, Evolvability as a Function of Purifying Selection in TEM-1 beta-Lactamase. Cell, 2015. 160(5): p. 882-92.
8. McLaughlin, R.N., Jr., et al., The spatial architecture of protein function and adaptation. Nature, 2012. 491(7422): p. 138-42.
9. Kitzman, J.O., et al., Massively parallel single-amino-acid mutagenesis. Nat Methods, 2015. 12(3): p. 203-6, 4 p following 206.
10. Melnikov, A., et al., Comprehensive mutational scanning of a kinase in vivo reveals substrate-dependent fitness landscapes. Nucleic Acids Res, 2014. 42(14): p. e112.
11. Araya, C.L., et al., A fundamental protein property, thermodynamic stability, revealed solely from large-scale measurements of protein function. Proc Natl Acad Sci U S A, 2012. 109(42): p. 16858-63.
12. Firnberg, E., et al., A comprehensive, high-resolution map of a gene's fitness landscape. Mol Biol Evol, 2014. 31(6): p. 1581-92.
13. Starita, L.M., et al., Massively Parallel Functional Analysis of BRCA1 RING Domain Variants. Genetics, 2015.
14. Rockah-Shmuel, L., A. Toth-Petroczy, and D.S. Tawfik, Systematic Mapping of Protein Mutational Space by Prolonged Drift Reveals the Deleterious Effects of Seemingly Neutral Mutations. PLoS Comput Biol, 2015. 11(8): p. e1004421.
15. Jacquier, H., et al., Capturing the mutational landscape of the beta-lactamase TEM-1. Proc Natl Acad Sci U S A, 2013. 110(32): p. 13067-72.
16. Qi, H., et al., A quantitative high-resolution genetic profile rapidly identifies sequence determinants of hepatitis C viral fitness and drug sensitivity. PLoS Pathog, 2014. 10(4): p. e1004064.
17. Wu, N.C., et al., Functional Constraint Profiling of a Viral Protein Reveals Discordance of Evolutionary Conservation and Functionality. PLoS Genet, 2015. 11(7): p. e1005310.
18. Mishra, P., et al., Systematic Mutant Analyses Elucidate General and Client-Specific Aspects of Hsp90 Function. Cell Rep, 2016. 15(3): p. 588-98.
19. Doud, M.B. and J.D.
Bloom, Accurate measurement of the effects of all amino-acid mutations to influenza hemagglutinin. bioRxiv, 2016.
20. Deng, Z., et al., Deep sequencing of systematic combinatorial libraries reveals beta-lactamase sequence constraints at high resolution. J Mol Biol, 2012. 424(3-4): p. 150-67.
21. Starita, L.M., et al., Activity-enhancing mutations in an E3 ubiquitin ligase identified by high-throughput mutagenesis. Proc Natl Acad Sci U S A, 2013. 110(14): p. E1263-72.
22. Aakre, C.D., et al., Evolving new protein-protein interaction specificity through promiscuous intermediates. Cell, 2015. 163(3): p. 594-606.
23. Julien, P., et al., The complete local genotype-phenotype landscape for the alternative splicing of a human exon. Nat Commun, 2016. 7: p. 11558.
24. Li, C., et al., The fitness landscape of a tRNA gene. Science, 2016.
25. Mavor, D., et al., Determination of ubiquitin fitness landscapes under different chemical stresses in a classroom setting. Elife, 2016. 5.
26. Fowler, D.M. and S. Fields, Deep mutational scanning: a new style of protein science. Nat Methods, 2014. 11(8): p. 801-7.
27. Gasperini, M., L. Starita, and J. Shendure, The power of multiplexed functional analysis of genetic variants. Nat Protoc, 2016. 11(10): p. 1782-7.
28. Starita, L.M., et al., Variant Interpretation: Functional Assays to the Rescue. Am J Hum Genet, 2017. 101(3): p. 315-325.
29. Sarkisyan, K.S., et al., Local fitness landscape of the green fluorescent protein. Nature, 2016. 533(7603): p. 397-401.
30. Adzhubei, I.A., et al., A method and server for predicting damaging missense mutations. Nature methods, 2010. 7(4): p. 248-249.
31. Hecht, M., Y. Bromberg, and B. Rost, Better prediction of functional effects for sequence variants. BMC genomics, 2015. 16(8): p. S1.
32. Huang, Y.-F., B. Gulko, and A. Siepel, Fast, scalable prediction of deleterious noncoding variants from functional and population genomic data. Nature genetics, 2017. 49(4): p. 618-624.
33. Kircher, M., et al., A general framework for estimating the relative pathogenicity of human genetic variants. Nature genetics, 2014. 46(3): p. 310-315.
34. Ng, P.C. and S. Henikoff, SIFT: Predicting amino acid changes that affect protein function. Nucleic acids research, 2003. 31(13): p. 3812-3814.
35. Finn, R.D., et al., HMMER web server: 2015 update. Nucleic Acids Res, 2015. 43(W1): p. W30-8.
36. Hopf, T.A., et al., Mutation effects predicted from sequence co-variation. Nature biotechnology, 2017. 35(2): p. 128-135.
37. Mann, J.K., et al., The fitness landscape of HIV-1 gag: advanced modeling approaches and validation of model predictions by in vitro testing. PLoS computational biology, 2014. 10(8): p. e1003776.
38. Boucher, J.I., D.N. Bolon, and D.S. Tawfik, Quantifying and understanding the fitness effects of protein mutations: Laboratory versus nature. Protein Sci, 2016.
39. Weinreich, D.M., et al., Should evolutionary geneticists worry about higher-order epistasis? Curr Opin Genet Dev, 2013. 23(6): p. 700-7.
40. Bendixsen, D.P., B. Ostman, and E.J. Hayden, Negative Epistasis in Experimental RNA Fitness Landscapes. J Mol Evol, 2017.
41. Rodrigues, J.V., et al., Biophysical principles predict fitness landscapes of drug resistance. Proc Natl Acad Sci U S A, 2016. 113(11): p. E1470-8.
42. Echave, J. and C.O. Wilke, Biophysical Models of Protein Evolution: Understanding the Patterns of Evolutionary Sequence Divergence. Annu Rev Biophys, 2017. 46: p. 85-103.
43. Schmidt, M. and K.
Hamacher, Three-body interactions improve contact prediction within direct-coupling analysis. Physical Review E, 2017. 96(5): p. 052405.
44. Roweis, S. and Z. Ghahramani, A unifying review of linear gaussian models. Neural Comput, 1999. 11(2): p. 305-45.
45. Pritchard, J.K., M. Stephens, and P. Donnelly, Inference of population structure using multilocus genotype data. Genetics, 2000. 155(2): p. 945-59.
46. Patterson, N., A.L. Price, and D. Reich, Population structure and eigenanalysis. PLoS Genet, 2006. 2(12): p. e190.
47. Kingma, D.P. and M. Welling, Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
48. Rezende, D.J., S. Mohamed, and D. Wierstra, Stochastic backpropagation and approximate inference in deep generative models. arXiv preprint arXiv:1401.4082, 2014.
49. Gómez-Bombarelli, R., et al., Automatic chemical design using a data-driven continuous representation of molecules. arXiv preprint arXiv:1610.02415, 2016.
50. Wainwright, M.J. and M.I. Jordan, Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 2008. 1(1-2): p. 1-305.
51. Deng, Z., et al., Deep sequencing of systematic combinatorial libraries reveals beta-lactamase sequence constraints at high resolution. Journal of molecular biology, 2012. 424(3): p. 150-167.
52. Firnberg, E., et al., A comprehensive, high-resolution map of a gene's fitness landscape. Molecular biology and evolution, 2014. 31(6): p. 1581-1592.
53. Stiffler, M.A., D.R. Hekstra, and R. Ranganathan, Evolvability as a function of purifying selection in TEM-1 beta-lactamase. Cell, 2015. 160(5): p. 882-892.
54. Jiang, L., et al., Latent effects of Hsp90 mutants revealed at reduced expression levels. PLoS Genet, 2013. 9(6): p. e1003600.
55. Mishra, P., et al., Systematic mutant analyses elucidate general and client-specific aspects of Hsp90 function. Cell reports, 2016. 15(3): p. 588-598.
56. Roscoe, B.P. and D.N. Bolon, Systematic exploration of ubiquitin sequence, E1 activation efficiency, and experimental fitness in yeast. Journal of molecular biology, 2014. 426(15): p. 2854-2870.
57. Roscoe, B.P., et al., Analyses of the effects of all ubiquitin point mutants on yeast growth rate. Journal of molecular biology, 2013. 425(8): p. 1363-1377.
58. Araya, C.L., et al., A fundamental protein property, thermodynamic stability, revealed solely from large-scale measurements of protein function. Proceedings of the National Academy of Sciences, 2012. 109(42): p. 16858-16863.
59. Fowler, D.M., et al., High-resolution mapping of protein sequence-function relationships. Nat Methods, 2010. 7(9): p. 741-6.
60. Li, C., et al., The fitness landscape of a tRNA gene. Science, 2016. 352(6287): p. 837-840.
61. Doud, M.B. and J.D. Bloom, Accurate measurement of the effects of all amino-acid mutations on influenza hemagglutinin. Viruses, 2016. 8(6): p. 155.
62. Thyagarajan, B. and J.D. Bloom, The inherent mutational tolerance and antigenic evolvability of influenza hemagglutinin. Elife, 2014. 3.
63. Starita, L.M., et al., Massively parallel functional analysis of BRCA1 RING domain variants. Genetics, 2015. 200(2): p. 413-422.
64. Fein, K.C., N.G. Lamson, and K.A. Whitehead, Structure-Function Analysis of Phenylpiperazine Derivatives as Intestinal Permeation Enhancers. Pharm Res, 2017. 34(6): p. 1320-1329.
65. Kelsic, E.D., et al., RNA Structural Determinants of Optimal Codons Revealed by MAGE-Seq. Cell Syst, 2016. 3(6): p. 563-571 e6.
66.
Bandaru, P., et al., Deconstruction of the Ras switching cycle through saturation mutagenesis. Elife, 2017. 6.
67. Melnikov, A., et al., Comprehensive mutational scanning of a kinase in vivo reveals substrate-dependent fitness landscapes. Nucleic acids research, 2014. 42(14): p. e112-e112.
68. Kitzman, J.O., et al., Massively parallel single-amino-acid mutagenesis. Nature methods, 2015. 12(3): p. 203-206.
69. Srivastava, N., et al., Dropout: a simple way to prevent neural networks from overfitting. Journal of machine learning research, 2014. 15(1): p. 1929-1958.
70. Murphy, K.P., Machine learning: a probabilistic perspective. 2012: MIT press.
71. Hopf, T.A., et al., Three-dimensional structures of membrane proteins from genomic sequencing. Cell, 2012. 149(7): p. 1607-21.
72. Marks, D.S., et al., Protein 3D structure computed from evolutionary sequence variation. PLoS One, 2011. 6(12): p. e28766.
73. Morcos, F., et al., Direct-coupling analysis of residue coevolution captures native contacts across many protein families. Proc Natl Acad Sci U S A, 2011. 108(49): p. E1293-301.
74. Jones, D.T., et al., MetaPSICOV: combining coevolution methods for accurate prediction of contacts and long range hydrogen bonding in proteins. Bioinformatics, 2015. 31(7): p. 999-1006.
75. Figliuzzi, M., et al., Coevolutionary Landscape Inference and the Context-Dependence of Mutations in Beta-Lactamase TEM-1. Mol Biol Evol, 2016. 33(1): p. 268-80.
76. Sim, N.L., et al., SIFT web server: predicting effects of amino acid substitutions on proteins. Nucleic Acids Res, 2012. 40(Web Server issue): p. W452-7.
77. Adzhubei, I., D.M. Jordan, and S.R. Sunyaev, Predicting functional effect of human missense mutations using PolyPhen-2. Curr Protoc Hum Genet, 2013. Chapter 7: p. Unit7.20.
78. Rezende, D.J. and S. Mohamed, Variational inference with normalizing flows. arXiv preprint arXiv:1505.05770, 2015.
79. Burda, Y., R. Grosse, and R. Salakhutdinov, Importance weighted autoencoders. arXiv preprint arXiv:1509.00519, 2015.
80. Johnson, M., et al., Composing graphical models with neural networks for structured representations and fast inference.
81. Ovchinnikov, S., et al., Large-scale determination of previously unsolved protein structures using evolutionary information. Elife, 2015. 4: p. e09248.
82. Weinreb, C., et al., 3D RNA and Functional Interactions from Evolutionary Couplings. Cell, 2016. 165(4): p. 963-75.
83. Toth-Petroczy, A., et al., Structured States of Disordered Proteins from Genomic Sequences. Cell, 2016. 167(1): p. 158-170 e12.
84. Sinai, S., et al., Variational auto-encoding of protein sequences. arXiv preprint arXiv:1712.03346, 2017.
85. Eddy, S.R., Accelerated Profile HMM Searches. PLoS Comput Biol, 2011. 7(10): p. e1002195.
86. Suzek, B.E., et al., UniRef clusters: a comprehensive and scalable alternative for improving sequence similarity searches. Bioinformatics, 2015. 31(6): p. 926-32.
87. Ekeberg, M., et al., Improved contact prediction in proteins: using pseudolikelihoods to infer Potts models. Phys Rev E Stat Nonlin Soft Matter Phys, 2013. 87(1): p. 012707.
88. Tipping, M.E. and C.M. Bishop, Probabilistic principal component analysis. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 1999. 61(3): p. 611-622.
89. Kingma, D. and J. Ba, Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
90. Berman, H.M., et al., The Protein Data Bank. Nucleic Acids Res, 2000. 28(1): p. 235-42.
91. Kyte, J. and R.F.
Doolittle, A simple method for displaying the hydropathic character of a protein. Journal of molecular biology, 1982. 157(1): p. 105-132.

Supplementary Figure 1. Mutation effect predictions of all deep mutational scanning datasets. Spearman rank correlations between predictions and experiments for all proteins and all generative models. Here we show both the average rank correlation of the individual latent variable models (DeepSequence Average) as well as an ensembled prediction using these 5 models (DeepSequence Ensemble). [Figure: horizontal bar chart of |Spearman rho| (0.0-0.8) across the deep mutational scanning datasets, comparing DeepSequence Ensemble, DeepSequence Average, EVmutation, and Independent models.]

Supplementary Figure 2. Predictions from all generative models for sequence families exhibit biases when compared to experiments. By transforming all model predictions and mutations to normalized ranks, we can compare effect predictions to experimental data across all biological datasets and models. The site-independent, pairwise, and latent variable models systematically over- and under-predict the effects of mutations according to amino acid identity. These biases vary in magnitude and direction depending on the amino acid identity before mutation (wildtype) or the residue identity it is mutated to (mutant).

Supplemental Figure 3. Supervised calibration of mutation effect predictions improves predictive performance. Amino acid bias was corrected with linear regression for all generative models, leaving one protein out for testing and training a model on the rest (Methods). The bottom of each bar is the Spearman rho before correction, while the top is the Spearman rho after correction. Predictions without any evolutionary information (Supervised) performed considerably worse than the other predictors. [Figure: bar chart of Spearman rho (0.0-0.9) per dataset for the DeepSequence, EVmutation, Independent, and Supervised models.]
1909.13371.pdf

Gradient Descent: The Ultimate Optimizer

Kartik Chandra (MIT CSAIL, Cambridge, MA; [email protected]), Audrey Xie (MIT CSAIL, Cambridge, MA; [email protected]), Jonathan Ragan-Kelley (MIT CSAIL, Cambridge, MA; [email protected]), Erik Meijer (Meta, Inc., Menlo Park, CA; [email protected])

Abstract

Working with any gradient-based machine learning algorithm involves the tedious task of tuning the optimizer's hyperparameters, such as its step size. Recent work has shown how the step size can itself be optimized alongside the model parameters by manually deriving expressions for hypergradients ahead of time. We show how to automatically compute hypergradients with a simple and elegant modification to backpropagation. This allows us to easily apply the method to other optimizers and hyperparameters (e.g. momentum coefficients). We can even recursively apply the method to its own hyper-hyperparameters, and so on ad infinitum. As these towers of optimizers grow taller, they become less sensitive to the initial choice of hyperparameters. We present experiments validating this for MLPs, CNNs, and RNNs. Finally, we provide a simple PyTorch implementation of this algorithm (see people.csail.mit.edu/kach/gradient-descent-the-ultimate-optimizer).

1 Introduction

When we train deep neural networks by gradient descent, we have to select a step size α for our optimizer. If α is too small, the optimizer runs very slowly, whereas if α is too large, the optimizer fails to converge. Choosing an appropriate α is thus itself an optimization task that machine learning practitioners face every day. Why not apply gradient descent here, too? To do so, we need to compute the derivative of the loss function not only with respect to the neural network's weights, but also with respect to α. Baydin et al. (2018), applying an insight from Almeida et al. (1999), describe how to efficiently compute such hypergradients by manually differentiating standard optimizer update rules with respect to the step size hyperparameter. This allows for on-line learning rate adaptation, which generally improves convergence, especially when the initial α is sub-optimal.

However, the above method has three limitations: (1) manually differentiating optimizer update rules is tedious and error-prone, and must be re-done for each optimizer variant; (2) the method only tunes the step size hyperparameter, not other hyperparameters such as the momentum coefficient; and (3) the method introduces a new hyperparameter, the hyper-step-size, which must also be tuned.

In this paper, we address all three limitations by replacing manual differentiation with automatic differentiation (AD), which (1) automatically computes correct derivatives without any additional human effort, and (2) naturally generalizes to other hyperparameters (e.g. momentum coefficient) for free. As for (3), we observe that AD can be applied to optimize not only the hyperparameters, but also the hyper-hyperparameters, and the hyper-hyper-hyperparameters, and so on. In fact, we can implement arbitrarily tall towers of recursive optimizers, which are increasingly robust to the choice of initial hyperparameter. These hyperoptimizers therefore reduce the burden on humans responsible for tuning the hyperparameters. (Such an effect was hypothesized by Baydin et al., but not tested because manual differentiation of complex sequences of nested optimizers is impractical.)

* Equal contribution. Work done in part at Meta, Inc. and in part at Stanford University.
36th Conference on Neural Information Processing Systems (NeurIPS 2022). arXiv:1909.13371v2 [cs.LG] 14 Oct 2022

Although "just apply AD" is a seemingly straightforward recipe, an efficient implementation that properly allows for recursive self-application requires some care. To close the loop, we take inspiration from the study of recursion and combinators in programming language theory (and the long tradition of programming language papers named "Lambda: The Ultimate X"). We spell out the details in Section 2, and evaluate our method in Section 3. We find that across a variety of architectures (MLPs, CNNs, and RNNs) our hyperoptimizers are robust to suboptimal choices of initial hyperparameters, and that this robustness increases as we grow the stacks of optimizers taller.

2 Implementing hyperoptimizers

Consider some loss function f that we want to minimize using gradient descent, and let w_i be the weights at the beginning of step i (we will omit subscripts on f, even though it varies at each step due to the stochasticity of minibatches). First, recall the standard weight update rule at step i for SGD, using some fixed step size α:

$$w_{i+1} = w_i - \alpha \frac{\partial f(w_i)}{\partial w_i}$$

We would like to also update α at each step, so we will index it as well with the step number; that is, let α_i be the step size at the beginning of step i. At each step, we will first update α_i to α_{i+1} using some update rule yet to be derived, and then use the updated step size α_{i+1} to update the weights from w_i to w_{i+1}:

$$\alpha_{i+1} = \alpha_i - [\text{adjustment for } \alpha_i]$$
$$w_{i+1} = w_i - \alpha_{i+1} \frac{\partial f(w_i)}{\partial w_i}$$

What should the adjustment for α_i be? By analogy to w, we want to adjust α_i in the direction of the gradient of the loss function with respect to α_i, scaled by some hyper-step size κ. In other words, the adjustment should be κ(∂f(w_i)/∂α_i). Our modified update rule is therefore:

$$\alpha_{i+1} = \alpha_i - \kappa \frac{\partial f(w_i)}{\partial \alpha_i} \qquad (1)$$
$$w_{i+1} = w_i - \alpha_{i+1} \frac{\partial f(w_i)}{\partial w_i} \qquad (2)$$

All that remains is to compute ∂f(w_i)/∂α_i in equation (1). Below, we first review how Baydin et al. (2018) take this derivative by hand. Then, we show how to obtain the same result automatically and efficiently using AD. Finally, we discuss how this automation allows us to generalize the method.

2.1 Computing the step-size update rule by hand

One option to compute ∂f(w_i)/∂α_i, explored by Baydin et al. (2018), is to proceed by direct manual computation of the partial derivative. Applying the chain rule to (1), we have

$$\frac{\partial f(w_i)}{\partial \alpha_i} = \frac{\partial f(w_i)}{\partial w_i} \cdot \frac{\partial w_i}{\partial \alpha_i} = \frac{\partial f(w_i)}{\partial w_i} \cdot \frac{\partial}{\partial \alpha_i}\left(w_{i-1} - \alpha_i \frac{\partial f(w_{i-1})}{\partial w_{i-1}}\right) \qquad (3)$$
$$= \frac{\partial f(w_i)}{\partial w_i} \cdot \left(-\frac{\partial f(w_{i-1})}{\partial w_{i-1}}\right) \qquad (4)$$

where (3) is obtained by substituting the update rule in (2) for w_i, and (4) is obtained by observing that w_{i-1} and ∂f(w_{i-1})/∂w_{i-1} do not depend on α_i. As Baydin et al. note, this expression lends itself to a simple and efficient implementation: simply remember the past two gradients from backpropagation, and take their dot product to obtain the hypergradient with respect to the step size.

We were able to take this derivative by hand because the update rule for SGD is simply a multiplication by a constant, whose derivative is trivial. What about other optimizers? Consider the Adam optimizer (Kingma and Ba, 2014), which has a much more sophisticated update rule involving the four hyperparameters α, β₁, β₂, ε (though ε is typically not tuned). Differentiating the update rule by hand, we obtain significantly more complex expressions for the hypergradients (here m_i and v_i are Adam's moment estimates and m̂_i, v̂_i their bias-corrected versions):

$$\frac{\partial w_i}{\partial \alpha_i} = -\frac{\hat{m}_i}{\epsilon + \sqrt{\hat{v}_i}}$$

$$\frac{\partial w_i}{\partial \beta_{1,i}} = \frac{-\alpha_i \left( -\frac{\partial f(w_{i-1})}{\partial w_{i-1}} + m_{i-1} + \frac{i \beta_1^{i-1}}{1 - \beta_1^{i}}\, m_i \right)}{\left(1 - \beta_1^{i}\right)\left(\epsilon + \sqrt{\hat{v}_i}\right)}$$

$$\frac{\partial w_i}{\partial \epsilon_i} = \frac{\alpha_i\, \hat{m}_i}{\left(\epsilon_i + \sqrt{\hat{v}_i}\right)^2}$$

$$\frac{\partial w_i}{\partial \beta_{2,i}} = \frac{\alpha_i\, \hat{m}_i \left( -\left(\frac{\partial f(w_{i-1})}{\partial w_{i-1}}\right)^2 + v_{i-1} + \frac{i \beta_2^{i-1}}{1 - \beta_2^{i}}\, v_i \right)}{2 \sqrt{\hat{v}_i} \left(1 - \beta_2^{i}\right) \left(\epsilon + \sqrt{\hat{v}_i}\right)^2}$$

This manual approach to deriving hypergradients simply does not scale: it is tedious and error-prone, and must be repeated for every optimizer variant.
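For SGD, though, equation (4) admits a very direct implementation: cache the previous step's gradient and take a dot product with the current one. A minimal NumPy sketch of this manual scheme (our illustration, with a placeholder grad_fn; the paper's contribution is precisely to avoid writing such derivations by hand):

    import numpy as np

    def hypergradient_sgd(grad_fn, w, alpha=0.01, kappa=1e-4, steps=100):
        """SGD that adapts its own step size using equation (4).

        grad_fn(w) returns the gradient of the loss at w. By eq. (4),
        df(w_i)/dalpha_i = -(g_i . g_{i-1}), so the hyper-update
        alpha <- alpha - kappa * df/dalpha becomes alpha += kappa * (g_i . g_{i-1}).
        """
        g_prev = np.zeros_like(w)
        for _ in range(steps):
            g = grad_fn(w)
            alpha = alpha + kappa * float(g @ g_prev)
            w = w - alpha * g
            g_prev = g
        return w, alpha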
However, with a little bit of care, we can compute hypergradients automatically and efficiently alongside the regular gradients.

2.2 Computing the step-size update rule automatically

In order to compute hypergradients automatically, let us first briefly review the mechanics of reverse-mode AD. Differentiable programming systems that provide reverse-mode AD typically build up a computation graph as the function is computed forwardly. For example, when a user computes the function f(w_i), the system internally stores a DAG whose leaves are the weights w_i, whose internal nodes are intermediate computations, and whose root is the final loss. It can then backpropagate through the computation graph starting at this root node, depositing gradients in each internal node as it descends, until the weights w_i at the leaf nodes have accumulated their gradients ∂f(w_i)/∂w_i. Once the gradient ∂f(w_i)/∂w_i is computed by the backwards pass, we update the weights w_{i+1} = w_i - α ∂f(w_i)/∂w_i, and repeat the cycle for the next step of gradient descent.

An important consideration at this point is for the weights to be detached from the computation graph before the next iteration of this algorithm, that is, for the weights to be forcibly converted to leaves of the graph by removing any inbound edges. The effect of the detach operation is depicted in Figure 1a. If this step were skipped, backpropagation at the next iteration would continue beyond the current weights and into the previous iteration's computation graph. Over time the computation graph would grow taller linearly in the number of steps taken; because backpropagation is linear in the size of the graph, the overall training would become quadratic-time and intractable.

Let us take PyTorch as an example. In the built-in SGD optimizer (Paszke et al., 2017, optim/sgd.py, commit ff94c9d), this is implemented by wrapping the update in the @torch.no_grad() context manager. Here, we need finer-grained control over gradient flow, so we will make the .detach() operations explicit. Below is pseudocode for an SGD optimizer that uses .detach() as we have discussed. The highlighted calls to .detach() correspond to detaching the weights and their gradients.

    def SGD.__init__(self, alpha):
        self.alpha = alpha

    def SGD.step(w):
        d_w = w.grad.detach()
        w = w.detach() - self.alpha.detach() * d_w

Now, in order to have backpropagation deposit the gradient with respect to α_i as well as w_i, we can simply refrain from detaching α_i from the graph, detaching instead its parents. This is depicted in Figure 1b. Because we want to compute ∂f(w_i)/∂α_i, the edge from α_i to w_i needs to remain intact. To implement this, instead of calling .detach() on alpha directly, we detach its parents when applying equation (1). This yields the following fully-automated hyperoptimization algorithm:

    def HyperSGD.step(w):
        # update alpha using equation (1)
        d_alpha = self.alpha.grad.detach()
        self.alpha = self.alpha.detach() - kappa.detach() * d_alpha
        # update w using equation (2)
        d_w = w.grad.detach()
        w = w.detach() - self.alpha * d_w

(a) Computation graph of SGD with a single fixed hyperparameter α. (b) Computation graph of SGD with a continuously-updated hyperparameter α_i.
Figure 1: Visualizing the computation graphs of SGD and HyperSGD.

Since we only extend the computation graph by a little extra amount, corresponding to evaluating the optimizer, the hyperoptimizer's computational overhead is negligible (see Figure 4f).
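As a concrete end-to-end illustration of this protocol, the following small self-contained PyTorch loop mirrors HyperSGD on a toy quadratic (our sketch under update rules (1) and (2), not the paper's released implementation):

    import torch

    def loss_fn(w):                       # toy objective with minimum at w = 3
        return ((w - 3.0) ** 2).sum()

    kappa = 1e-3
    alpha = torch.tensor(0.01, requires_grad=True)
    w = torch.zeros(1, requires_grad=True)

    for i in range(200):
        loss = loss_fn(w)
        # Gradients w.r.t. both w and alpha (alpha is unused on the first step).
        g_w, g_alpha = torch.autograd.grad(loss, [w, alpha], allow_unused=True)
        if g_alpha is not None:           # equation (1): adapt the step size
            alpha = (alpha.detach() - kappa * g_alpha.detach()).requires_grad_()
        # Equation (2): update w, keeping the alpha -> w edge in the graph
        # so that the next backward pass deposits a gradient on alpha.
        w = w.detach() - alpha * g_w.detach()

    print(float(w), float(alpha))         # w approaches 3; alpha has adapted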
2.3 Extending to other optimizers

As suggested by Maclaurin et al. (2015), it should be possible to apply gradient-based methods to tune hyperparameters of common variations on SGD such as AdaGrad (Duchi et al., 2011), AdaDelta (Zeiler, 2012), or Adam (Kingma and Ba, 2014). The above implementation of HyperSGD generalizes quite easily to these optimizers: we simply replace the last line with the new update rule. Unlike previous work, our method also allows for simultaneously optimizing all hyperparameters of these optimizers (e.g. all of α, β₁, and β₂ for Adam) for free. We simply treat them just like alpha in the implementation. Our evaluation in Section 3.2 demonstrates that this is indeed advantageous to do.

There are, however, two important subtleties: First, because hyperparameters like β₁ and β₂ must be strictly in the domain (0, 1), we clamp the raw values to this domain using a scaled sigmoid, as sketched below. Without this step, we might accidentally adjust these values outside their domains. Second, the Adam optimizer in particular involves the term √v̂_i, which is continuous but not differentiable at v_i = 0. Because Adam normally initializes v₀ = 0, backpropagation fails on the first step due to division by zero. We fix this problem by initializing v₀ to ε rather than 0.
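For the first subtlety, a scaled sigmoid along the following lines keeps β₁ and β₂ strictly inside (0, 1) while remaining differentiable (our illustrative sketch, not the paper's exact code):

    import torch

    def clamp_unit(raw, lo=1e-4, hi=1.0 - 1e-4):
        # Smoothly map an unconstrained "raw" hyperparameter into (lo, hi),
        # so gradients with respect to beta1/beta2 stay well-defined.
        return lo + (hi - lo) * torch.sigmoid(raw)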
This is indeed the case; Section 3.4 presents an empirical evaluation. 3 Experiments In this section we evaluate the hyperoptimizers made possible by our system, exploring in particular the benefits of optimizing hyperparameters beyond just the step size, and of stacking hyperoptimizers to multiple levels. Each of these experiments was conducted on a single NVIDIA TITAN Xp GPU. 3.1 Hyperoptimization for SGD First, we establish some basic properties about hyperoptimizers: (1) whether an SGD hyperoptimizer performs better than a regular SGD optimizer, and (2) whether the final learned step size is better than the initial human-chosen step size. We test the latter property by running a fresh regular SGD optimizer with the final learned step size of the hyperoptimizer. Following authors of prior work (Maclaurin et al., 2015; Baydin et al., 2018), we conducted initial experiments on the MNIST dataset (Lecun et al., 1998) using a neural network with one fully-connected hidden layer of size 128, tanh activations, and a batch size of 256. We trained all networks for 30 epochs, reporting statistics over 3 runs. As a baseline we used SGD with = 0.01. Table 1a summarizes the results of our experiments. We find that hyperoptimized SGD outperforms the baseline by a significant margin. This holds even if we use other optimizers (e.g. Adam) to adjust the step size of the SGD optimizer. Furthermore, when we re-ran the regular optimizers with the new learned hyperparameters, we found that they performed better than the initial hyperparameter. 3.2 Hyperoptimization for Adam, AdaGrad and RMSProp In Section 2.3, we described how to apply our system to the Adam optimizer, simultaneously optimizing not only the learning rate , but also the momentum coefficients 1,2. Here, we ask three questions: (1) whether hyperoptimized Adam optimizers perform better than regular Adam optimizers, (2) whether the learned hyperparameters outperform the baseline, and (3) whether there is a benefit to optimizing all the hyperparameters, as opposed to only optimizing the learning rate as Baydin et al. (2018) do. Because Adam has significantly faster convergence than SGD, we only run these experiments for 5 epochs to avoid overfitting. Table 1b summarizes the results of our experiments. We find that indeed the hyperoptimized Adam optimizer outperforms the regular Adam optimizer on its default settings. As with SGD, the learned hyperparameters perform better than the initial hyperparameters when re-run with the regular optimizer. 
Inspecting the learned hyperparameters, we find that the algorithm raises the learning rate α and slightly lowers β₁, but does not significantly affect β₂. Nevertheless, learning β₁ does help slightly, though not when the top-level optimizer is itself another Adam optimizer.

Table 1: Hyperoptimization experiments with MNIST. We denote hyperoptimizers by their constituent optimizers separated by slashes (the leftmost item adjusts the model's weights). Adam_α is an Adam optimizer where only α is optimized, as by Baydin et al. (2018); RMSProp_α is similar. If not specified, initial hyperparameters are PyTorch defaults (10^-2 for learning rates, except 10^-3 for Adam; β₁ = 0.9, β₂ = 0.99 for Adam, and γ = 0.99 for RMSProp). Each hyperoptimizer experiment is repeated using the final hyperparameters (typeset in pink) learned by the algorithm.

(a) Experiments with SGD (Section 3.1). Optimizer: test error.
SGD: 8.99 ± 0.05%
SGD / SGD: 4.81 ± 0.10%
SGD(0.0769): 5.44 ± 0.10%
SGD / Adam(0.1): 4.86 ± 0.06%
SGD(0.4538): 2.80 ± 0.09%
SGD / AdaGrad: 4.85 ± 0.21%
SGD(0.0836): 5.17 ± 0.03%
SGD / RMSprop(0.1): 4.52 ± 0.02%
SGD(0.5920): 2.52 ± 0.07%

(b) Experiments with Adam (Section 3.2). Optimizer: test error.
Adam: 4.67 ± 0.06%
Adam / SGD(10^-5): 3.03 ± 0.02%
Adam(0.0040, 0.899, 0.999): 3.11 ± 0.06%
Adam_α / SGD(10^-5): 3.12 ± 0.04%
Adam(0.0021): 3.47 ± 0.02%
Adam / Adam: 3.05 ± 0.09%
Adam(0.0038, 0.870, 0.999): 3.24 ± 0.13%
Adam_α / Adam: 3.04 ± 0.08%
Adam(0.0036): 3.08 ± 0.12%

(c) Experiments with AdaGrad (Section 3.2). Optimizer: test error.
AdaGrad: 7.40 ± 0.08%
AdaGrad / SGD: 6.90 ± 0.16%
AdaGrad(0.0080): 7.75 ± 0.02%
AdaGrad / AdaGrad: 5.03 ± 0.23%
AdaGrad(0.0151): 6.67 ± 0.08%

(d) Experiments with RMSProp (Section 3.2). Optimizer: test error.
RMSProp: 4.19 ± 0.47%
RMSProp_α / SGD(10^-4): 3.55 ± 0.23%
RMSProp(0.0030): 3.93 ± 0.70%
RMSProp_α,γ / SGD(10^-4): 3.33 ± 0.07%
RMSProp(0.0032, 0.9899): 3.25 ± 0.09%
RMSProp_α / RMSProp(10^-4): 3.42 ± 0.45%
RMSProp(0.0021): 3.60 ± 0.04%
RMSProp_α,γ / RMSProp(10^-4): 2.96 ± 0.11%
RMSProp(0.0020, 0.9962): 3.65 ± 0.36%

Similarly, we can add any other optimizer to our system with just a few straightforward lines of code. Here, we show results for AdaGrad (Table 1c) and RMSProp (Table 1d; also run to 5 epochs). These experiments took less than an hour each to implement from scratch, and show that every hyperoptimizer stack outperforms the non-hyperoptimized baseline. We remark that AdaGrad is known to stall over time as the effective step size goes to zero; inspecting the learned α over time, we find that the AdaGrad/AdaGrad hyperoptimizer increases α to make up for this effect. Additionally, we tried to hyperoptimize RMSProp's γ parameter, which modulates the accumulation of gradient RMS terms. This yielded even better results (compare the α trials to the α,γ trials), and required only a 1-line change in our code.

3.3 Hyperoptimization at scale

Next, we evaluate our hyperoptimizers on two different real-world neural network architectures.

3.3.1 Convolutional neural networks for computer vision

Figure 2: Training ResNets on CIFAR-10 with hyperoptimizers (Section 3.3.1). (a) For a wide range of bad initial hyperparameter configurations, the hyperoptimizer improves on (or at least matches) final test accuracy, and often matches or even outperforms the good initial hyperparameters. (b) The hyperoptimizer matches performance of the hand-engineered learning rate decay schedule by He et al. (2016), learning a strikingly similar decay schedule (right plot).

We train a ResNet-20 (He et al., 2016) with and without hyperoptimization on the CIFAR-10 dataset (Krizhevsky, 2012). As a baseline, we replicate the training procedure of He et al. (2016): we
use the same network architecture, optimizer (SGD), step size (0.1), momentum (0.9), and weight decay (10^-4), though without their custom learning rate decay schedule (which we will address later). Experiments were run for 200 epochs, which takes around 3 hours on our hardware.

First, we test how robust the hyperoptimizer is to bad initial choices of step size and momentum. We vary the initial step size α and the momentum μ among "small," "good," and "large" values (that is, α among {0.01, 0.1, 1.0} and μ among {0.09, 0.9, 0.99}), and add a hyperoptimizer (κ = 2 × 10^-3 for the step size and (1/(1-μ)) × 10^-6 for the momentum). The results of this experiment are shown in Figure 2a. In every configuration, the hyperoptimizer matches or outperforms the regular optimizer in final test accuracy. Furthermore, in nearly all of the configurations, the hyperoptimizer matches or exceeds the "good" hyperparameters' final test accuracy. Only when both hyperparameters are bad in the same direction (too small or too large) is it unable to manage this, and even then, for the too-large case, it dramatically lowers the loss compared to no hyperoptimizer. We conclude that hyperoptimizers are indeed beneficial for tuning both step size and momentum in this real-world setting.

Next, we add in the learning rate decay schedule hand-engineered by He et al. (2016): the step size is divided by 10 at epochs 100 and 150. We compare this with a hyperoptimizer initialized with the same starting hyperparameters, training both variants for 500 epochs. Our results are shown in Figure 2b. The hyperoptimizer not only matches the final test loss of the hand-engineered learning rate decay schedule, but also learns a decay schedule strikingly similar to the one hand-engineered by He et al. Of course, both networks significantly outperform the baseline trained with a fixed step size.

3.3.2 Recurrent neural networks for language modeling

We train a character-level RNN (Char-RNN) on the Tolstoy dataset, as proposed by Karpathy et al. (2015) as a convenient testbed for language models, which is now often used to benchmark optimizers (Schneider et al., 2018; Schmidt et al., 2021). We took the architecture (2-layer LSTM with 128 hidden nodes) and expert optimizer (Adam optimizer with α = 2 × 10^-3, run for 50,000 gradient descent steps) directly from Johnson (2017) as recommended by Karpathy et al. We compare against our HyperAdam optimizer on a wide range of initial learning rates α in {10^-4, 2 × 10^-3, 10^-2}, with κ = 10^-2. We do not vary the initial β₁, β₂ because in our experience these hyperparameters are typically left at their default values. However, we do allow the hyperoptimizer to vary β₁, β₂ over the course of training (with κ_β₁ = 10^-4 and κ_β₂ = 2 × 10^-4). All runs took around 1 hour to train.

Figure 3: Training RNNs with hyperoptimizers (Section 3.3.2). As the initial learning rate is lowered, the regular Adam optimizer's convergence slows, but the hyperoptimizer is able to accelerate it. The hyperoptimizer also slightly improves convergence when the initial learning rate is too high.

The results of this experiment are shown in Figure 3. We find that the hyperoptimizer performs comparably to the expert-chosen fixed step size (perplexity 5.41 ± 0.26 with hyperoptimizer vs. 5.27 ± 0.31 without), and improves upon bad initial step sizes in both directions (5.45 ± 0.76 vs. 5.77 ± 0.34 when too high; 6.51 ± 0.88 vs. 8.71 ± 0.91 when too low).

3.4 Higher-order hyperoptimization

In Section 2.4 we developed an interface for building arbitrarily tall towers of optimizers. Baydin et al.
(2018) hypothesized that taller towers would yield hyperoptimizers that were increasingly robust to the initial human-chosen hyperparameters. To validate this behavior of higher-order hyperoptimizers, we ran each of our benchmarks from above (MLP on MNIST, CNN on CIFAR-10, RNN on Tolstoy) with towers of hyperoptimizers of increasing heights, and with bottom-level step sizes initialized across many orders of magnitude. In practice we find that if the initial hyper-step sizes are too large, the computation diverges for networks larger than the MNIST MLP. So, we initialize each level's hyperparameter to be smaller than that of the previous level. Specifically, we use the following scheme: from α = 10^-8 to 10^-4 the higher layers' step sizes were initialized to [10^-2, 10^0, 10^2] respectively, while for α ≥ 10^-3 they were initialized to [10^-3, 10^-4, 10^-8] respectively.

Figure 4 shows our results. It is indeed the case across these different benchmarks (each of which has a different dataset, architecture, and optimizer type) that the taller the hyperoptimizer stack, the less sensitive the results become to the human-chosen hyperparameters. With a three-level optimizer stack, a single hyperoptimizer design obtains reasonable results in all of our benchmarks across several orders of magnitude of base-level step size.

Further tests of scalability. To test if our hyperoptimizers continue to work in even larger regimes, we fine-tuned a ResNet-152 (pretrained on ImageNet) to the Caltech-256 dataset (Griffin et al., 2007). Figure 4e shows the results: a height-1 hyperoptimizer recovers 11% error for both α = 10^-6 and α = 10^-4 (without a hyperoptimizer, α = 10^-6 gives 91.5% error). A height-2 hyperoptimizer is additionally able to make significant progress when α = 10^-2.

(a) Results on an MLP (Sec 3.1), where all layers are initialized with the same step size. (b) Results on an MLP (Sec 3.1), where all layers are initialized as in Sec 3.4. (c) Results on a ResNet (Sec 3.3.1). (d) Results on a Char-RNN (Sec 3.3.2). (e) Results on fine-tuning a pretrained ResNet-152 to the Caltech-256 dataset (Sec 3.4). (f) Our hyperoptimizers have minimal impact on runtime, which scales linearly in height (Sec 3.4).
Figure 4: Evaluating higher-order hyperoptimization across a variety of benchmarks (Section 3.4). As we stack more layers of optimizers, the resulting hyperoptimizer is less sensitive to the initial choice of hyperparameters, but costs only 1-2% more in runtime.

We stress how lightweight and practical this method is. Figure 4f shows how runtime scales as a function of hyperoptimizer stack height for the above benchmarks. The scaling is linear: each additional level costs only 1-2% in additional runtime above the non-hyperoptimized baseline.

4 Related work

Hyperparameter optimization has a long history, and we refer readers to a recent survey by Feurer and Hutter (2019) for the full story. Most existing work on gradient-based hyperparameter optimization (Bengio, 2000; Domke, 2012; Maclaurin et al., 2015; Pedregosa, 2016; Franceschi et al., 2017) has focused on computing hyperparameter gradients after several iterations of training, which is
computationally expensive. Baydin et al. (2018), building on a technique first published by Almeida et al. (1999), propose instead updating hyperparameters at each step, and Rubio (2017) provides a convergence analysis. Luketina et al. (2016) apply a similar technique to regularization hyperparameters, though they note that their proposed method could work in principle for any continuous hyperparameter. As discussed above, we expand upon this line of work in three directions: (1) by fully automating this process, rather than requiring manual derivative computations; (2) by optimizing hyperparameters beyond just the learning rate; and (3) by realizing the vision of recursive higher-order hyperoptimizers and evaluating the resulting algorithms. We find that they are indeed more robust to the initial human-chosen hyperparameter, which relates our work to other learning algorithms that minimize sensitivity to learning rates (Orabona and Tommasi, 2017; Vaswani et al., 2019).

5 Limitations and future work

As discussed in Section 3.4, one limitation of hyperoptimizers is that they cannot yet handle initial hyperparameters that are set far too high, because the system is unstable and diverges before the hyperoptimizer can have an effect. Designing hyperoptimizers robust in this regime requires further research, such as a deeper theoretical analysis of convergence.

Our implementation also requires some care in avoiding certain bugs related to computation graph management. For example, loggers must detach what is logged to avoid memory leaks, because tensors are not garbage collected unless all children are detached. Similarly, certain PyTorch modules (e.g. the built-in LSTM) cannot be used because they silently modify the computation graph, which may lead to incorrect gradients with our system. Further research is needed to design differentiable programming languages where methods like ours can be expressed in a modular and composable manner that minimizes the risk of such bugs.

Broader impact

Training a modern deep learning system consumes a tremendous amount of energy, and hyperparameter searches can multiply that energy impact by many orders of magnitude (Strubell et al., 2019). We hope that advances in on-line hyperparameter tuning can reduce this impact.

6 Conclusion

We presented a technique that enables gradient descent optimizers like SGD and Adam to tune their own hyperparameters. Unlike prior work, our proposed hyperoptimizers require no manual differentiation, learn hyperparameters beyond just learning rates, and can be stacked recursively to many levels. We described an elegant recursive implementation of hyperoptimizers in a reverse-mode AD system and evaluated it on a variety of benchmarks, showing that as the stacks grow taller, they become less sensitive to the initial human-chosen hyperparameter.

Acknowledgments and Disclosure of Funding

We thank Samantha Andow, Emilio Arroyo-Fang, Irene Dea, Johann George, Melissa Grueter, Basil Hosmer, Steffi Stumpos, Alanna Tempest, and Shannon Yang for early discussions, Krishna Murthy Jatavallabhula and Josh Tenenbaum for their advice when preparing this paper, and the anonymous reviewers for their thoughtful feedback. KC and JRK were supported by NSF Grants #2105806, #CCF-1231216, #CCF-1723445 and #CCF-1846502, and ONR Grant #00010803 at MIT. Additionally, KC was supported by a Hertz Foundation Fellowship, the Paul and Daisy Soros Fellowship for New Americans, and an NSF Graduate Research Fellowship under Grant #2141064, and AX was supported by the MIT Undergraduate Research Opportunities Program (UROP).

References

L. E. Almeida, T. Langlois, J. F. M. do Amaral, and A. Plakhov. Parameter adaptation in stochastic optimization. In On-Line Learning in Neural Networks, 1999.

A. G. Baydin, R. Cornish, D. M. Rubio, M. Schmidt, and F. Wood. Online learning rate adaptation with hypergradient descent.
In Sixth International Conference on Learning Representations (ICLR), Vancouver, Canada, April 30 - May 3, 2018, 2018.

Y. Bengio. Gradient-based optimization of hyperparameters. Neural Computation, 12(8):1889-1900, 2000. doi: 10.1162/089976600300015187. URL https://doi.org/10.1162/089976600300015187.

J. Domke. Generic methods for optimization-based modeling. In N. D. Lawrence and M. Girolami, editors, Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics, volume 22 of Proceedings of Machine Learning Research, pages 318-326, La Palma, Canary Islands, 21-23 Apr 2012. PMLR. URL http://proceedings.mlr.press/v22/domke12.html.

J. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and stochastic optimization. J. Mach. Learn. Res., 12:2121-2159, July 2011. ISSN 1532-4435. URL http://dl.acm.org/citation.cfm?id=1953048.2021068.

M. Feurer and F. Hutter. Hyperparameter Optimization, pages 3-33. Springer International Publishing, Cham, 2019. ISBN 978-3-030-05318-5. doi: 10.1007/978-3-030-05318-5_1. URL https://doi.org/10.1007/978-3-030-05318-5_1.

L. Franceschi, M. Donini, P. Frasconi, and M. Pontil. Forward and reverse gradient-based hyperparameter optimization. In D. Precup and Y. W. Teh, editors, Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 1165-1173, International Convention Centre, Sydney, Australia, 06-11 Aug 2017. PMLR. URL http://proceedings.mlr.press/v70/franceschi17a.html.

G. Griffin, A. Holub, and P. Perona. Caltech-256 object category dataset. 2007. URL http://authors.library.caltech.edu/7694/.

K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016, pages 770-778. IEEE Computer Society, 2016. doi: 10.1109/CVPR.2016.90. URL https://doi.org/10.1109/CVPR.2016.90.

J. Johnson. torch-rnn. Github repository, 2017. URL https://github.com/jcjohnson/torch-rnn.

A. Karpathy, J. Johnson, and L. Fei-Fei. Visualizing and understanding recurrent networks. arXiv preprint arXiv:1506.02078, 2015. URL https://arxiv.org/pdf/1506.02078.pdf.

D. Kingma and J. Ba. Adam: A method for stochastic optimization. International Conference on Learning Representations, 12 2014.

A. Krizhevsky. Learning multiple layers of features from tiny images. University of Toronto, 05 2012.

Y. Lecun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, Nov 1998. ISSN 0018-9219. doi: 10.1109/5.726791.

J. Luketina, M. Berglund, K. Greff, and T. Raiko. Scalable gradient-based tuning of continuous regularization hyperparameters. In Proceedings of the 33rd International Conference on International Conference on Machine Learning - Volume 48, ICML'16, pages 2952-2960. JMLR.org, 2016. URL http://dl.acm.org/citation.cfm?id=3045390.3045701.

D. Maclaurin, D. Duvenaud, and R. P. Adams. Gradient-based hyperparameter optimization through reversible learning. In Proceedings of the 32nd International Conference on International Conference on Machine Learning - Volume 37, ICML'15, pages 2113-2122. JMLR.org, 2015. URL http://dl.acm.org/citation.cfm?id=3045118.3045343.

F. Orabona and T. Tommasi. Training deep networks without learning rates through coin betting. Advances in Neural Information Processing Systems, 30, 2017.

A. Paszke, S. Gross, S.
Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer. Automatic differentiation in PyTorch. In NIPS Autodiff Workshop, 2017.

F. Pedregosa. Hyperparameter optimization with approximate gradient. In Proceedings of the 33rd International Conference on Machine Learning (ICML'16), pages 737-746. JMLR.org, 2016. URL http://dl.acm.org/citation.cfm?id=3045390.3045469.

D. M. Rubio. Convergence analysis of an adaptive method of gradient descent. University of Oxford, Oxford, M.Sc. thesis, 2017. URL https://damaru2.github.io/convergence_analysis_hypergradient_descent/dissertation_hypergradients.pdf.

R. M. Schmidt, F. Schneider, and P. Hennig. Descending through a crowded valley: Benchmarking deep learning optimizers. In International Conference on Machine Learning, pages 9367-9376. PMLR, 2021.

F. Schneider, L. Balles, and P. Hennig. DeepOBS: A deep learning optimizer benchmark suite. In International Conference on Learning Representations, 2018.

E. Strubell, A. Ganesh, and A. McCallum. Energy and policy considerations for deep learning in NLP. arXiv preprint arXiv:1906.02243, 2019. URL https://arxiv.org/pdf/1906.02243.pdf.

S. Vaswani, A. Mishkin, I. Laradji, M. Schmidt, G. Gidel, and S. Lacoste-Julien. Painless stochastic gradient: Interpolation, line-search, and convergence rates. Advances in Neural Information Processing Systems, 32, 2019.

M. D. Zeiler. ADADELTA: An adaptive learning rate method. CoRR, abs/1212.5701, 2012. URL http://dblp.uni-trier.de/db/journals/corr/corr1212.html#abs-1212-5701.
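To make the computation-graph caveat from Section 5 (Limitations) concrete, here is a minimal PyTorch sketch of the logging pitfall. It is an illustration only, not code from the system described above: keeping a reference to a live loss tensor keeps its autograd graph reachable, which is especially costly when hyperparameters themselves carry gradient history across steps; detaching before logging avoids the leak.

```python
import torch

model = torch.nn.Linear(10, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.01)

history_bad, history_ok = [], []
for step in range(100):
    x = torch.randn(32, 10)
    loss = (model(x) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

    # Bug: storing the live tensor keeps its grad_fn chain reachable,
    # so memory can grow across steps instead of being reclaimed.
    history_bad.append(loss)

    # Fix: detach (or call .item()) so only the scalar value is kept.
    history_ok.append(loss.detach().item())
```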
Deep Clustering: A Comprehensive Survey

Yazhou Ren, Member, IEEE, Jingyu Pu, Zhimeng Yang, Jie Xu, Guofeng Li, Xiaorong Pu, Philip S. Yu, Fellow, IEEE, Lifang He, Member, IEEE

(Yazhou Ren, Jingyu Pu, Zhimeng Yang, Jie Xu, Guofeng Li, and Xiaorong Pu are with the University of Electronic Science and Technology of China, Chengdu 611731, China. Yazhou Ren is the corresponding author; e-mail: [email protected]. Philip S. Yu is with the University of Illinois at Chicago, IL 60607, USA. Lifang He is with Lehigh University, PA 18015, USA. Manuscript received Oct. 2022.)

Abstract: Cluster analysis plays an indispensable role in machine learning and data mining. Learning a good data representation is crucial for clustering algorithms. Recently, deep clustering, which can learn clustering-friendly representations using deep neural networks, has been broadly applied in a wide range of clustering tasks. Existing surveys for deep clustering mainly focus on the single-view fields and the network architectures, ignoring the complex application scenarios of clustering. To address this issue, in this paper we provide a comprehensive survey for deep clustering in views of data sources. With different data sources and initial conditions, we systematically distinguish the clustering methods in terms of methodology, prior knowledge, and architecture. Concretely, deep clustering methods are introduced according to four categories, i.e., traditional single-view deep clustering, semi-supervised deep clustering, deep multi-view clustering, and deep transfer clustering. Finally, we discuss the open challenges and potential future opportunities in different fields of deep clustering.

Index Terms: Deep clustering; semi-supervised clustering; multi-view clustering; transfer learning

1 INTRODUCTION

With the development of online media, abundant data with high complexity can be gathered easily. Careful analysis of these data can extract their value for use in many fields, such as face recognition, sentiment analysis, and intelligent manufacturing. A model that can classify data into different groups is the basis of many applications. For labeled data, it is natural to use the labels as the most important guiding information. For unlabeled data, finding a quantifiable objective to guide the model-building process is the key question of clustering.

Over the past decades, a large number of clustering methods with shallow models have been proposed, including centroid-based clustering, density-based clustering, distribution-based clustering, hierarchical clustering, ensemble clustering, multi-view clustering, etc. These shallow models are effective only when the features are representative, while their performance on complex data is usually limited due to their weak feature-learning capability. In order to map the original complex data to a feature space that is easy to cluster, many clustering methods focus on feature extraction or feature transformation, such as PCA, kernel methods, spectral methods, and deep neural networks. Among these, the deep neural network is a promising approach because of its excellent nonlinear mapping capability and its flexibility in different scenarios. A well-designed deep learning based clustering approach (referred to as deep clustering) aims at effectively extracting clustering-friendly features from data and performing clustering with the learned features simultaneously.

Much research has been done in the field of deep clustering, and there are also some surveys about deep clustering methods.
Specifically, existing systematic reviews of deep clustering mainly focus on single-view clustering tasks and the architectures of neural networks. For example, Aljalbout et al. focus only on deep single-view clustering methods that are based on the deep autoencoder (AE or DAE). Min et al. classify deep clustering methods from the perspective of different deep networks. Nutakki et al. divide deep single-view clustering methods into three categories according to their training strategies: multi-step sequential deep clustering, joint deep clustering, and closed-loop multi-step deep clustering. Zhou et al. categorize deep single-view clustering methods by the way the feature-learning and clustering modules interact. But in the real world, the datasets for clustering are often associated: e.g., the taste for reading is correlated with the taste for movies, and the side face and full face of the same person should receive the same label. For such data, deep clustering methods based on semi-supervised learning, multi-view learning, and transfer learning have also made significant progress. Unfortunately, existing reviews do not discuss them much. Therefore, it is important to classify deep clustering from the perspective of data sources and initial conditions.

In this survey, we summarize deep clustering from the perspective of the initial settings of the data combined with deep learning methodology. We introduce the newest progress of deep clustering from the perspective of network and data structure, as shown in Fig. 1. Specifically, we organize the deep clustering methods into the following four categories:

Deep single-view clustering. For conventional clustering tasks, it is often assumed that the data are of the same form and structure, known as single-view or single-modal data. The extraction of representations for these data by deep neural networks (DNNs) is a significant characteristic of deep clustering. However, what is more noteworthy is the variety of applied deep learning techniques, which are highly correlated with the structure of the DNNs. To compare the technical routes of specific DNNs, we divide those algorithms into five categories: deep autoencoder (DAE) based deep clustering, deep neural network (DNN) based deep clustering, variational autoencoder (VAE) based deep clustering, generative adversarial network (GAN) based deep clustering, and graph neural network (GNN) based deep clustering.

Fig. 1: The directory tree of this survey.

Deep clustering based on semi-supervised learning. When the data to be processed contain a small portion of prior constraints, traditional clustering methods cannot effectively utilize this prior information, and semi-supervised clustering is an effective way to solve this problem. At present, research on deep semi-supervised clustering has not been well explored. However, semi-supervised clustering is inevitable, because it is feasible to turn a clustering method into a semi-supervised one by adding the additional information as a constraint loss to the model.
Deep clustering based on multi-view learning. In the real world, data are often obtained from different feature collectors or have different structures. We call those data multi-view data or multi-modal data, where each sample has multiple representations. The purpose of deep clustering based on multi-view learning is to utilize the consistent and complementary information contained in multi-view data to improve clustering performance. In addition, the idea of multi-view learning may have guiding significance for deep single-view clustering. In this survey, we summarize deep multi-view clustering into three categories: deep embedded clustering based, subspace clustering based, and graph neural network based.

Deep clustering based on transfer learning. For a task that has a limited number of instances and high dimensionality, sometimes we can find an assistant to offer additional information. For example, if task A is similar to another task B, and B has more information for clustering than A (B is labeled, or B is easier to cluster than A), it is useful to transfer information from B to A. Transfer learning for unsupervised domain adaptation (UDA) has been boosted in recent years; it involves two domains: a source domain with labels and an unlabeled target domain. The goal of transfer learning is to apply the knowledge or patterns learned from the source task to a different but related target task. Deep clustering methods based on transfer learning aim to improve the performance of the current clustering task by utilizing information from relevant tasks.

TABLE 1: Notations and their descriptions in this paper.

i | a counter variable
j | a counter variable
|.| | the length of a set
||.|| | the 2-norm of a vector
X | the data for clustering
Xs | the data in the source domain (UDA methods)
Ys | the labels of source domain instances (UDA methods)
Xt | the data in the target domain (UDA methods)
Ds | the source domain of UDA methods
Dt | the target domain of UDA methods
xi | the vector of an original data sample
Xi | the i-th view of X in multi-view learning
Y | the predicted labels of X
S | the soft data assignments of X
R | the adjusted assignments of S
A | the pairwise constraint matrix
aij | the constraint of sample i and sample j
zi | the vector of the embedded representation of xi
ε | the noise used in generative models
E | the expectation
Ln | the network loss
Lc | the clustering loss
Lext | the extra task loss
Lrec | the reconstruction loss of the autoencoder network
Lgan | the loss of GAN
LELBO | the loss of the evidence lower bound
k | the number of clusters
n | the number of data samples
μ | the mean of the Gaussian distribution
σ | the variance of the Gaussian distribution
KL(.||.) | the Kullback-Leibler divergence
p(.) | the probability distribution
p(.|.) | the conditional probability distribution
p(.,.) | the joint probability distribution
q(.) | the approximate probability distribution of p(.)
q(.|.) | the approximate probability distribution of p(.|.)
q(.,.) | the approximate probability distribution of p(.,.)
f(.) | the feature extractor
e(.) | the encoder network of AE or VAE
r(.) | the decoder network of AE or VAE
g(.) | the generative network of GAN
d(.) | the discriminative network of GAN
Q | the graph adjacency matrix
D | the degree matrix of Q
C | the feature matrix of a graph
H | the node hidden feature matrix
W | the learnable model parameters

It is necessary to pay attention to the different characteristics and conditions of the clustering data before studying the corresponding clustering methods.
In this survey, existing deep clustering methods are systematically classified by data sources and initial conditions. The advantages, disadvantages, and applicable conditions of different clustering methods are analyzed. Finally, we present some interesting research directions in the field of deep clustering.

2 DEFINITIONS AND PRELIMINARIES

We introduce the notations in this section. Throughout this paper, we use uppercase letters to denote matrices and lowercase letters to denote vectors. Unless otherwise stated, the notations used in this paper are summarized in Table 1.

This survey will introduce four kinds of deep clustering problems based on different background conditions. Here, we define these problems formally. Given a set of data samples X, we aim at finding a map function F which can map X into k clusters. The map result is represented with Y. So the tasks we cope with are:

(1) Deep single-view clustering:
F(X) → Y.  (1)

(2) Semi-supervised deep clustering:
F(X, A) → Y,  (2)
where A is a constraint matrix.

(3) Deep multi-view clustering:
F(X_1, ..., X_n) → Y,  (3)
where X_i is the i-th view of X.

(4) Deep clustering with domain adaptation:
F(X_s, Y_s, X_t) → Y,  (4)
where (X_s, Y_s) is the labeled source domain and X_t is the unlabeled target domain.

3 DEEP SINGLE-VIEW CLUSTERING

The theory of representation learning shows the importance of feature learning (or representation learning) in machine learning tasks. However, deep representation learning is mostly supervised learning that requires many labeled data. As mentioned before, the obstacle in the deep clustering problem is deciding what can be used to guide the training process, like the labels in supervised problems. In deep clustering, the main source of supervision is the data itself. So how can we train an effective feature extractor to get good representations? According to the way the feature extractor is trained, we divide deep single-view clustering algorithms into five categories: DAE-based, DNN-based, VAE-based, GAN-based, and GNN-based. The difference between these methods lies mainly in the loss components, where the loss terms are defined in Table 1 and explained below:

DAE-based / GNN-based: L = L_rec + L_c,
DNN-based: L = L_ext + L_c,
VAE-based: L = L_ELBO + L_c,
GAN-based: L = L_gan + L_c.

In unsupervised learning, the issue we cope with is to train a reliable feature extractor without labels. There are mainly two ways in existing works: 1) a loss function that optimizes the pseudo labels according to the principle of narrowing the intra-cluster distance and widening the inter-cluster distance; 2) an extra task that can help train the feature extractor. For clustering methods with specialized feature extractors, such as the autoencoder, the reconstruction loss L_rec can be interpreted as the extra task. In this paper, the clustering-oriented loss L_c indicates the loss of the clustering objective. DAE-based / GNN-based methods use an autoencoder or graph autoencoder as the feature extractor, so their loss functions are always composed of a reconstruction loss L_rec and another clustering-oriented loss L_c. By contrast, DNN-based methods optimize the feature extractor with extra tasks or other strategies, denoted L_ext. VAE-based methods optimize the evidence lower bound L_ELBO. GAN-based methods are based on the generative adversarial loss L_gan.
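As a minimal illustration of this shared loss structure (our own sketch, not code from any surveyed method, with illustrative names and weights), a training step for the DAE-based family can be written as a weighted sum of a reconstruction term and a clustering-oriented term; the other families swap L_rec for L_ext, L_ELBO, or L_gan:

```python
import torch
import torch.nn.functional as F

def dae_clustering_step(encoder, decoder, clustering_loss, x, optimizer, gamma=0.1):
    """One optimization step of L = L_rec + gamma * L_c (Eq. (5) defines L_rec)."""
    z = encoder(x)                  # embedded features
    x_hat = decoder(z)              # reconstruction
    l_rec = F.mse_loss(x_hat, x)    # reconstruction loss L_rec
    l_c = clustering_loss(z)        # any clustering-oriented objective L_c
    loss = l_rec + gamma * l_c
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```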
Based on these five dimensions, existing deep single-view clustering methods are summarized in Table 2 and Table 3.

3.1 DAE-based

The autoencoder network is originally designed for unsupervised representation learning and can learn a highly non-linear mapping function. Using a deep autoencoder (DAE) is a common way to develop deep clustering methods. A DAE aims to learn a low-dimensional embedding feature space by minimizing the reconstruction loss of the network, which is defined as:

L_rec = min (1/n) Σ_{i=1}^{n} ||x_i − r(e(x_i))||²,  (5)

where e(.) and r(.) represent the encoder network and decoder network of the autoencoder, respectively. Using the encoder as a feature extractor, various clustering objective functions have been proposed. We summarize these deep autoencoder based clustering methods as DAE-based deep clustering. In DAE-based deep clustering methods, there are two main ways to obtain the labels. The first embeds the data into low-dimensional features and then clusters the embedded features with traditional clustering methods such as the k-means algorithm. The second jointly optimizes the feature extractor and the clustering results. We refer to these two approaches as separate analysis and joint analysis, respectively, and elaborate on them below.

Separate analysis means that feature learning and data clustering are performed separately. To address the problem that representations learned by separate analysis are not cluster-oriented, Huang et al. propose a deep embedding network for clustering (DEN), which imposes two constraints on top of the DAE objective: a locality-preserving constraint and a group-sparsity constraint. The locality-preserving constraint urges the embedded features in the same cluster to be similar. The group-sparsity constraint aims to diagonalize the affinity of the representations. These two constraints improve clustering performance by reducing the intra-cluster distance and expanding the inter-cluster distance. The objectives of most DAE-based clustering methods act on these two kinds of distance, so in Table 2 we summarize these methods from the perspective of characteristics, which shows how each optimizes the intra-cluster and inter-cluster distances.

Peng et al. propose a novel deep learning based framework in the field of subspace clustering, namely deep subspace clustering with sparsity prior (PARTY). PARTY enhances the autoencoder by considering the relationship between different samples (i.e., a structure prior) and overcomes the limitations of traditional subspace clustering methods. As far as we know, PARTY is the first deep learning based subspace clustering method, and it is the first work to introduce a global structure prior to the neural network for unsupervised learning. Different from PARTY, Ji et al. propose another architecture, deep subspace clustering networks (DSC-Nets), to learn a non-linear mapping and introduce a self-expressive layer to directly learn the affinity matrix.

Density-based clustering is another popular kind of clustering method. Ren et al. propose deep density-based image clustering (DDIC), which uses a DAE to learn low-dimensional feature representations and then performs density-based clustering on the learned features. In particular, DDIC does not need to know the number of clusters in advance.
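A minimal sketch of the separate-analysis recipe (train an autoencoder on the reconstruction loss of Eq. (5), then cluster the frozen embeddings) might look as follows; the architecture, data, and hyperparameters are placeholders rather than those of any particular method above:

```python
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

encoder = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
decoder = nn.Sequential(nn.Linear(10, 256), nn.ReLU(), nn.Linear(256, 784))
opt = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()], lr=1e-3)

X = torch.rand(1024, 784)   # stand-in dataset
for epoch in range(50):     # Stage 1: minimize the reconstruction loss (Eq. 5)
    x_hat = decoder(encoder(X))
    loss = ((X - x_hat) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

with torch.no_grad():       # Stage 2: run k-means on the learned embeddings
    Z = encoder(X).numpy()
labels = KMeans(n_clusters=10, n_init=10).fit_predict(Z)
```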
Joint analysis aims at learning a representation that is more suitable for clustering. It differs from separate analysis approaches in that, when deep learning and clustering are carried out separately, the neural network has no clustering-oriented objective while learning the features of the data.

TABLE 2: The summaries of DAE-based and DNN-based methods in deep single-view clustering. We summarize the DAE-based methods based on "Jointly or Separately" and "Characteristics".

Net: DAE
Method (Year) | Jointly or Separately | Characteristics
AEC (2013) | Separately | Optimize the distance between z_i and its closest cluster centroid.
DEN (2014) | Separately | Locality-preserving constraint, group sparsity constraint.
PARTY (2016) | Separately | Subspace clustering.
DEC (2016) | Jointly | Optimize the distribution of assignments.
IDEC (2017) | Jointly | Improve DEC with local structure preservation.
DSC-Nets (2017) | Separately | Subspace clustering.
DEPICT (2017) | Jointly | Convolutional autoencoder and relative entropy minimization.
DCN (2017) | Jointly | Take the objective of k-means as the clustering loss.
DMC (2017) | Jointly | Multi-manifold clustering.
DEC-DA (2018) | Jointly | Improve DEC with data augmentation.
DBC (2018) | Jointly | Self-paced learning.
DCC (2018) | Separately | Extend robust continuous clustering with an autoencoder. k is not given.
DDLSC (2018) | Jointly | Pairwise loss function.
DDC (2019) | Separately | Global and local constraints of relationships.
DSCDAE (2019) | Jointly | Subspace clustering.
NCSC (2019) | Jointly | Dual autoencoder network.
DDIC (2020) | Separately | Density-based clustering. k is not given.
SC-EDAE (2020) | Jointly | Spectral clustering.
ASPC-DA (2020) | Jointly | Self-paced learning and data augmentation.
ALRDC (2020) | Jointly | Adversarial learning.
N2D (2021) | Separately | Manifold learning.
AGMDC (2021) | Jointly | Gaussian Mixture Model. Improve the inter-cluster distance.

Net: DNN
Method (Year) | Clustering-oriented loss | Characteristics
JULE (2016) | Yes | Agglomerative clustering.
DDBC (2017) | Yes | Information theoretic measures.
DAC (2017) | No | Self-adaptation learning. Binary pairwise classification.
DeepCluster (2018) | No | Use traditional clustering methods to assign labels.
CCNN (2018) | No | Mini-batch k-means. Feature drift compensation for large-scale image data.
ADC (2018) | Yes | Centroid embeddings.
ST-DAC (2019) | No | Spatial transformer layers. Binary pairwise classification.
RTM (2019) | No | Random triplet mining.
IIC (2019) | No | Mutual information. Generated image pairs.
DCCM (2019) | No | Triplet mutual information. Generated image pairs.
MMDC (2019) | No | Multi-modal. Generated image pairs.
SCAN (2020) | Yes | Decouple feature learning and clustering. Nearest neighbors mining.
DRC (2020) | Yes | Contrastive learning.
PICA (2020) | Yes | Maximize the global partition confidence.

TABLE 3: The summaries of VAE-, GAN-, and GNN-based methods in deep single-view clustering.

Net: VAE
Method (Year) | Characteristics
VaDE (2016) | Gaussian mixture variational autoencoder.
GMVAE (2016) | Gaussian mixture variational autoencoder. Unbalanced clustering.
MFVDC (2017) | Continuous Gumbel-Softmax distribution.
LTVAE (2018) | Latent tree model.
VLAC (2019) | Variational ladder autoencoders.
VAEIC (2020) | No pre-training process.
S3VDC (2020) | Improvements on four generic algorithmic aspects.
DSVAE (2021) | Spherical latent embeddings.
DVAE (2022) | Additional classifier to distinguish clusters.

Net: GAN
Method (Year) | With DAE | Characteristics
CatGAN (2015) | No | Can be applied to both unsupervised and semi-supervised tasks.
DAGC (2017) | Yes | Build an encoder to make the data representations easier to cluster.
DASC (2018) | Yes | Subspace clustering.
ClusterGAN-SPL (2019) | No | No discrete latent variables; applies self-paced learning.
ClusterGAN (2019) | No | Train a GAN with a clustering-specific loss.
ADEC (2020) | Yes | Reconstruction loss and adversarial loss are optimized in turn.
IMDGC (2022) | No | Integrates a hierarchical generative adversarial network and mutual information maximization.

Net: GNN
Method (Year) | Characteristics
DAEGC (2019) | Perform graph clustering and learn graph embedding in a unified framework.
AGC (2019) | Attributed graph clustering.
AGAE (2019) | Ensemble clustering.
AGCHK (2020) | Utilize heat kernels in attributed graphs.
SDCN (2020) | Integrate structural information into deep clustering.
Most subsequent deep clustering studies combine clustering objectives with feature learning, which enables the neural network to learn features conducive to clustering from the underlying distribution of the data. In this survey, those methods are summarized as joint analysis.

Inspired by the idea of the non-parametric algorithm t-SNE, Xie et al. propose a joint framework to optimize feature learning and the clustering objective, named deep embedded clustering (DEC). DEC first learns a mapping from the data space to a lower-dimensional feature space via L_rec and then iteratively optimizes the clustering loss KL(S||R) (i.e., a KL divergence). Here, S denotes the soft assignments of the data, which describe the similarity between the embedded data and each cluster centroid (centroids are initialized with k-means), and R is the adjusted target distribution, which has purer cluster assignments than S. DEC is a representative method in deep clustering due to its joint learning framework and low computational complexity.

Based on DEC, a number of variants have been proposed. For example, to preserve local structure in the fine-tuning phase, improved deep embedded clustering with local structure preservation (IDEC) is proposed to jointly optimize the weighted clustering loss and the reconstruction loss of the autoencoder. Deep embedded clustering with data augmentation (DEC-DA) applies a data augmentation strategy to DEC. Li et al. propose discriminatively boosted image clustering (DBC) to deal with image representation learning and image clustering. DBC has a pipeline similar to DEC, but its learning procedure is self-paced: the easiest instances are selected first, and more complex samples are introduced progressively. In DEC, the predicted clustering assignments are calculated with the Student's t-distribution. Differently, Dizaji et al. propose deep embedded regularized clustering (DEPICT), with a novel clustering loss obtained by stacking a softmax layer on the embedded layer of a convolutional autoencoder. What is more, the clustering loss of DEPICT is regularized by a prior on the frequency of cluster assignments and a layer-wise feature reconstruction loss.

Yang et al. directly take the objective of k-means as the clustering loss. The proposed model, named deep clustering network (DCN), is a joint dimensionality reduction and k-means clustering approach, in which dimensionality reduction is accomplished by learning a deep autoencoder. Shah et al. propose deep continuous clustering (DCC), an extension of robust continuous clustering that integrates an autoencoder into the paradigm. DCC performs clustering by jointly optimizing a data loss, a pairwise loss, and a reconstruction loss. In particular, it does not need prior knowledge of the number of clusters.
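For concreteness, here is a sketch of DEC's two distributions and loss, following the published formulas (tensor shapes and the batch-mean reduction are our own choices): the soft assignment S uses a Student's t kernel between embeddings and centroids, and the target R squares and renormalizes S to sharpen it.

```python
import torch

def dec_loss(z, mu, alpha=1.0):
    """z: (n, d) embeddings; mu: (k, d) cluster centroids."""
    # Soft assignments S (Student's t-distribution kernel)
    dist2 = torch.cdist(z, mu) ** 2
    s = (1.0 + dist2 / alpha) ** (-(alpha + 1.0) / 2.0)
    s = s / s.sum(dim=1, keepdim=True)
    # Target distribution R: square and renormalize to purify assignments
    f = s.sum(dim=0)                          # soft cluster frequencies
    r = s ** 2 / f
    r = (r / r.sum(dim=1, keepdim=True)).detach()
    # KL divergence between the target R and the soft assignments S,
    # minimized with respect to both z and mu
    return (r * (r.log() - s.log())).sum(dim=1).mean()
```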
Tzoreff et al. propose DDLSC (deep discriminative latent space for clustering) to optimize the deep autoencoder with respect to a discriminative pairwise loss function. Deep manifold clustering (DMC) is the first method to apply deep learning to multi-manifold clustering. In DMC, an autoencoder consisting of stacked RBMs is trained to obtain the transformed representations. Both the reconstruction loss and the clustering loss of DMC differ from those of previous methods: the reconstruction of a sample and its local neighborhood are used to define the locality-preserving objective, while the penalty coefficient and the distance, measured by a Gaussian kernel between samples and cluster centers, are used to define the clustering-oriented objective.

The recently proposed DAE-based clustering algorithms also use variants of the deep autoencoder to learn better low-dimensional features and focus on improving clustering performance by incorporating ideas from traditional machine learning methods. For example, deep spectral clustering using a dual autoencoder network (DSCDAE) and spectral clustering via ensemble deep autoencoder learning (SC-EDAE) aim to integrate spectral clustering into carefully designed autoencoders for deep clustering. Zhang et al. propose neural collaborative subspace clustering (NCSC), which uses two confidence maps, established on the features learned by the autoencoder, as supervision information for subspace clustering. In ASPC-DA (adaptive self-paced deep clustering with data augmentation), the self-paced learning idea and the data augmentation technique are incorporated simultaneously. Its learning process is the same as DEC and consists of two stages, i.e., pre-training the autoencoder and fine-tuning the encoder.

Fig. 2: The framework of DNN-based learning (single-view clustering). X is the data for clustering; f is the feature extractor for X. Part I describes the framework of supervised learning: Y means the real labels and S denotes the predicted results; with Y and S, we can compute the classification loss for backpropagation. Part II is the framework of methods with extra tasks: the extra tasks are used to train the nets for a good embedding Z. Part III describes the process of the methods which need to fine-tune the cluster assignments: S denotes the predicted results, and R is an adjustment of S.

In general, we notice that the network structure adopted is related to the type of data to be processed. For example, fully connected networks are generally used to extract features from one-dimensional data, while convolutional neural networks are used to extract image features. Most of the above DAE-based deep clustering methods can be implemented with both fully connected and convolutional autoencoders, and thus they apply to various types of data to some extent. However, in the field of computer vision, there is a class of deep clustering methods that focus on image clustering. These methods, which date back to earlier work, are summarized as DNN-based deep clustering because they generally use convolutional neural networks to perform image feature learning and semantic clustering.

3.2 DNN-based

This section introduces the DNN-based clustering methods. Unlike DAE-based clustering methods, DNN-based methods have to design extra tasks to train the feature extractor. In this survey, we summarize DNN-based deep clustering methods in Table 2 from two perspectives: clustering-oriented loss and characteristics.
The clustering-oriented loss column indicates whether there is a loss function that explicitly narrows the intra-cluster distance or widens the inter-cluster distance. Fig. 2 shows the framework of deep unsupervised learning based on a convolutional neural network.

When the DNN training process begins, the randomly initialized feature extractor is unreliable. Therefore, deep clustering methods based on randomly initialized neural networks generally employ traditional clustering tricks, such as hierarchical clustering, or focus on extra tasks, such as instance generation. For instance, Yang et al. propose a joint unsupervised learning method named JULE, which applies agglomerative clustering to train the feature extractor. Specifically, JULE formulates the joint learning in a recurrent framework, where the merging operations of agglomerative clustering are considered a forward pass and the representation learning of the DNN a backward pass. Based on this formulation, JULE also applies a loss that shrinks the intra-cluster distance and expands the inter-cluster distance at the same time. In each epoch, JULE merges two clusters into one and computes the loss for the backward pass.

Chang et al. propose deep adaptive image clustering (DAC) to tackle the combination of feature learning and clustering. In DAC, the clustering problem is recast as a set of binary pairwise classification problems that judge whether pairs of images, with estimated cosine similarities, belong to the same cluster. It then adaptively selects similar samples to train the DNN in a supervised manner. DAC provides a novel perspective for deep clustering, but it only focuses on relationships between pairwise patterns. DDC (deep discriminative clustering analysis) is a more robust and generalized version of DAC that introduces global and local constraints on relationships. ST-DAC (spatial transformer deep adaptive clustering) applies a visual attention mechanism to modify the structure of DAC. Haeusser et al. propose associative deep clustering (ADC), which contains a group of centroid variables with the same shape as the image embeddings. With the intuition that centroid variables can carry high-level information about the data structure during the iterative process, the authors introduce an objective function with multiple loss terms to simultaneously train these centroid variables and the DNN's parameters along with a clustering mapping layer.

The above-mentioned clustering methods estimate the cluster of an instance by passing it through the entire deep network, which tends to extract the global features of the instance. Some clustering methods use mature classification networks to initialize the feature extractor. For instance, DeepCluster applies k-means to the output features of a deep model (such as AlexNet or VGG-16) and uses the cluster assignments as pseudo-labels to optimize the parameters of the convolutional neural network. Hsu et al. propose clustering CNN (CCNN), which integrates mini-batch k-means with a model pretrained on the ImageNet dataset.

To improve model robustness, more and more approaches make use of data augmentation for deep clustering. For example, Huang et al. extend the idea of classical maximal margin clustering to establish a novel deep semantic clustering method (named PartItion Confidence mAximisation, PICA). In PICA, three operations, including color jitter, random rescaling, and horizontal flipping, are adopted for data augmentation and perturbation.
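A sketch of DAC's binary pairwise-classification reformulation is given below, simplified from the original (the thresholds are illustrative placeholders; DAC adapts them during training): cosine similarities between per-image cluster probability vectors pseudo-label confident pairs as positive or negative, and a binary cross-entropy is applied to those pairs only.

```python
import torch
import torch.nn.functional as F

def dac_pairwise_loss(logits, upper=0.95, lower=0.45):
    """logits: (n, k) network outputs for a batch of images."""
    p = F.normalize(F.softmax(logits, dim=1), dim=1)  # unit-length label features
    sim = p @ p.t()                                   # pairwise cosine similarities
    pos = (sim > upper).float()      # confident "same cluster" pairs
    neg = (sim < lower).float()      # confident "different cluster" pairs
    eps = 1e-8
    # Binary pairwise classification: push similarities toward 1 or 0;
    # unconfident pairs (neither mask set) contribute nothing.
    loss = -(pos * torch.log(sim.clamp(min=eps))
             + neg * torch.log((1 - sim).clamp(min=eps)))
    n_pairs = (pos + neg).sum().clamp(min=1.0)
    return loss.sum() / n_pairs
```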
Mutual information is also used as a criterion to learn representations and has become popular in recent clustering methods, especially for image data. Various data augmentation techniques have been applied to generate transformed images whose mutual information can be mined. For example, Ji et al. propose invariant information clustering (IIC) for semantic clustering and image segmentation. In IIC, every image and its random transformation are treated as a sample pair. By maximizing the mutual information between the clustering assignments of each pair, the proposed model can find semantically meaningful clusters and naturally avoid degenerate solutions. Instead of only using pairwise information, deep comprehensive correlation mining (DCCM) is a novel image clustering framework that uses a pseudo-label loss as supervision information. Besides, the authors extend instance-level mutual information and present a triplet mutual information loss to learn more discriminative features. Based on the currently fashionable contrastive learning, Zhong et al. propose deep robust clustering (DRC), where two contrastive loss terms are introduced to decrease intra-class variance and increase inter-class variance. Mutual information and contrastive learning are related: in DRC, the authors summarize a framework that turns maximizing mutual information into minimizing a contrastive loss.

In image clustering at the semantic level, the prediction for an original image should be consistent with that for its transformation under data augmentation. So, in the unsupervised setting, data augmentation techniques are used not only to expand the training data but also to easily obtain supervised information. This is why data augmentation is widely applied in many recently proposed image clustering methods. For example, Nina et al. propose a decoder-free approach with data augmentation (called random triplet mining, RTM) for clustering and manifold learning. To learn a more robust encoder, the model consists of three encoders with shared weights and is conceptually a triplet network architecture. The first and second encoders take similar images generated by data augmentation as a positive pair, while the second and third encoders take a negative pair selected by RTM. Usually, the objective of triplet networks is defined to make the features of the positive pair more similar and those of the negative pair more dissimilar.

Although many existing deep clustering methods, such as JULE and DAC, jointly learn the representations and clusters, there are specially designed representation learning methods that learn the visual representations of images in a self-supervised manner. These methods learn semantic representations by training deep networks to solve extra tasks. Such tasks can be predicting the patch context, inpainting patches, colorizing images, solving jigsaw puzzles, and predicting rotations, etc. Recently, these self-supervised representation learning methods have been adopted in image clustering. For example, MMDC (multi-modal deep clustering) leverages an auxiliary task of predicting rotations to enhance clustering performance. SCAN (semantic clustering by adopting nearest neighbors) first employs a self-supervised representation learning method to obtain semantically meaningful and high-level features, and then integrates the semantically meaningful nearest neighbors as prior information into a learnable clustering approach.
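A sketch of IIC's objective, following the published formula (batch construction and the clamping constant are our own choices): for a batch of images and their random transformations, the k x k joint distribution over paired cluster assignments is estimated, symmetrized, and its mutual information maximized.

```python
import torch

def iic_loss(p1, p2, eps=1e-8):
    """p1, p2: (n, k) softmax cluster probabilities of images and their transforms."""
    P = (p1.t() @ p2) / p1.size(0)        # joint distribution over cluster pairs
    P = ((P + P.t()) / 2).clamp(min=eps)  # symmetrize and avoid log(0)
    Pi = P.sum(dim=1, keepdim=True)       # marginal of the first assignment
    Pj = P.sum(dim=0, keepdim=True)       # marginal of the second assignment
    # Negative mutual information I(z; z'), to be minimized
    return -(P * (P.log() - Pi.log() - Pj.log())).sum()
```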
Since DEC and JULE were proposed to jointly learn feature representations and cluster assignments with deep neural networks, many DAE-based and DNN-based deep clustering methods have been proposed and have made great progress on clustering tasks. However, the feature representations extracted by these clustering methods are difficult to extend to other tasks, such as generating samples. Deep generative models have recently attracted much attention because they can use neural networks to model data distributions so that samples can be generated (VAE, GAN, Pixel-RNN, InfoGAN, and PPGN).
Differently, LTV AE has a variant of V AE with a superstructure of latent variables. The superstructure is a tree structure of discrete latent variables on top of the latent features. The connectivitystructure among all variables is defined as a latent structure of thelatent tree model that is optimized via message passing . The success of some deep generative clustering methods depends on good initial pre-training. For example, in VaDE , pre-training is needed to initialize cluster centroids. In DGG , pre-training is needed to initialize the graph embeddings. Although GMV AE learns the prior and posterior parameters jointly, the prior for each class is dependent on a random variable rather than the class itself, which seems counterintuitive. Based on the ideas of GMV AE and VaDE, to solve their fallacies, Prasad et al. propose a new model leveraging variational autoencoders for image clustering (V AEIC). Different from the methods mentioned above, the prior of V AEIC is deterministic, and the prior and posterior parameters are learned jointly without the need for a pre-training process. Instead of performing Bayesian classification as done in GMV AE and VaDE, V AEIC adopts more straight-forward inference and more principled latent space priors, leading to a simpler inference model p(x,z,c ) =p(x|z)p(z|c)p(c)and a simpler approximate posteriorq(z,c|x) =q(c|x)q(z|x,c). The cluster assignment is directly predicted by q(c|z). What is more, the authors adopt data augmentation and design an image augmentation loss to make the model robust. In addition to the VAE-based deep clustering methods mentioned above, Figueroa et al . use the continuous GumbelSoftmax distribution , to approximate the categorical distribution for clustering. Willetts et al . extend variational ladder autoencoders and propose a disentangled clustering algorithm. Cao et al. propose a simple, scalable, and stable variational deep clustering algorithm, which introduces generic improvements for variational deep clustering. 3.4 GAN-based In adversarial learning, standard generative adversarial networks (GANs) are defined as an adversarial game between two networks: generator gand discriminator d. Specifically, the generator is optimized to generate fake data that fools the discriminator, and the discriminator is optimized to tell apart real from fake input data as shown in Fig. 3. GAN has already been widely applied in various fields of deep learning. Many deep clustering methods also adopt the idea of adversarial learning due to their strength in learning the latent distribution of data. We summarize the important GAN-based deep clustering methods as follows. Probabilistic clustering algorithms address many unlabeled data problems, such as regularized information maximization (RIM) , or the related entropy minimization . The main idea of RIM is to train a discriminative classifier with unlabeled data. Unfortunately, these methods are prone to overfitting spurious correlations. Springenberg et al. propose categorical generative adversarial networks (CatGAN) to address this weakness. To make the model more general, GAN is introduced to enhance the robustness of the classifier. In CatGAN, all real samples are assigned to one of the kcategories using the discriminator, while staying uncertain of clustering assignments for samples from the generative model rather than simply judging the false and true samples. In this way, the GAN framework is improved so that the discriminator can be used for multiclass classification. 
In particular, CatGAN can be applied to both unsupervised and semi-supervised tasks. Interpretable representation learning in the latent space has been investigated in the seminal work of InfoGAN . Al8 ,; ;,; X Fig. 3: The framework of GAN-based learning.gis the generator anddis the discriminator, both nandcare inputs to the generator, nis the noise and nis the class information. Xis the data for clustering, Xis the fake data which fools the discriminator, the functionf()operates on Xto generate nandc. though InfoGAN does use discrete latent variables, it is not specifically designed for clustering. V AE can jointly train the inference network and autoencoder, which enables mapping from initial sample Xto latent space Zthat could potentially preserve cluster structure. Unfortunately, there is no such inference mechanism in GAN. To make use of their advantages, Mukherjee et al. propose ClusterGAN as a new mechanism for clustering. ClusterGAN samples latent variables from a mixture of onehot variables and continuous variables and establishes a reversemapping network to project data into a latent space. It jointly trains a GAN along with the inverse-mapping network with a clusteringspecific loss to achieve clustering. There is another GAN-based deep clustering method (we denote it as ClusterGAN-SPL) that has a similar network module with ClusterGAN. The main difference is that ClusterGAN-SPL does not set discrete latent variables but applies self-paced learning to improve the robustness of the algorithm. In some GAN-based deep clustering methods (e.g., DAGC , DASC , AGAE and ADEC ), generative adversarial network and deep autoencoder are both applied. For example, inspired by the adversarial autoencoders and GAN , Harchaoui et al . propose deep adversarial gaussian mixture autoencoder for clustering (DAGC). To make the data representations easier to cluster than in the initial space, it builds an autoencoder consisting of an encoder and a decoder. In addition, an adversarial discriminator is added to continuously force the latent space to follow the Gaussian mixture prior . This framework improves the performance of clustering due to the introduction of adversarial learning. Most existing subspace clustering approaches ignore the inherent errors of clustering and rely on the self-expression of handcrafted representations. Therefore, their performance on real data with complex underlying subspaces is not satisfactory. Zhou et al. propose deep adversarial subspace clustering (DASC) to alleviate this problem and apply adversarial learning into deep subspace clustering. DASC consists of a generator and a discriminator that learn from each other. The generator outputs subspace clustering results and consists of an autoencoder, a selfexpression layer, and a sampling layer. The deep autoencoder and self-expression layer are used to convert the original input samples into better representations. In the pipeline, a new fake sample is generated by sampling from the estimated clusters and sent to the discriminator to evaluate the quality of the subspace cluster. Many autoencoder based clustering methods use reconstruc-tion for pretraining and let reconstruction loss be a regularizer in the clustering phase. Mrabah et al . point out that such a trade-off between clustering and reconstruction would lead to feature drift phenomena. Hence, the authors adopt adversarial training to address the problem and propose adversarial deep embedded clustering (ADEC). 
It first pretrains the autoencoder, where reconstruction loss is regularized by an adversarially constrained interpolation . Then, the cluster loss (similar to DEC ), reconstruction loss, and adversarial loss are optimized in turn. ADEC can be viewed as a combination of deep embedded clustering and adversarial learning. Besides the above-mentioned methods, there are a small number of deep clustering methods whose used networks are difficult to categorize. For example, IMSAT (information maximizing selfaugmented training ) uses very simple networks to perform unsupervised discrete representation learning. SpectralNet is a deep learning method to approximate spectral clustering, where unsupervised siamese networks , are used to compute distances. In clustering tasks, it is a common phenomenon to adopt the appropriate neural network for different data formats. In this survey, we focus more on deep learning techniques that are reflected in the used systematic neural network structures. 3.5 GNN-based Graph neural networks (GNNs) , allow end-toend differentiable losses over data with arbitrary graph structure and have been applied to a wide range of applications. Many tasks in the real world can be described as a graph, such as social networks, protein structures, traffic networks, etc. With the suggestion of Banachs fixed point theorem , GNN uses the following classic iterative scheme to compute the state. Fis a global transition function, the value of His the fixed point of H=F(H,X )and is uniquely defined with the assumption that Fis a contraction map . Ht+1=F(Ht,X) (8) In the training process of GNN, many methods try to introduce attention and gating mechanism into a graph structure. Among these methods, graph convolutional network (GCN) which utilizes the convolution for information aggregation has gained remarkable achievement. His the node hidden feature matrix, W is the learnable model parameters and Cis the feature matrix of a graph, the compact form of GCN is defined as: H=D1 2QD1 2CW (9) In the domain of unsupervised learning, there are also a variety of methods trying to use the powerful structure capturing capabilities of GNNs to improve the performance of clustering algorithms. We summarize the GNN-based deep clustering methods as follows. Tian et al. propose DRGC (learning deep representations for graph clustering) to replace traditional spectral clustering with sparse autoencoder and k-means algorithm. In DRGC, sparse autoencoder is adopted to learn non-linear graph representations that can approximate the input matrix through reconstruction and achieve the desired sparse properties. The last layer of the deep model outputs a sparse encoding and k-means serves as the final step on it to obtain the clustering results. To accelerate graph clustering, Shao et al. propose deep linear coding for fast graph clustering (DLC) . Unlike DRGC, DLC does not require eigen-decomposition and greatly saves running time on large-scale 9 datasets, while still maintaining a low-rank approximation of the affinity graph. The research on GNNs is closely related to graph embedding or network embedding , , , as GNNs can address the network embedding problem through a graph autoencoder framework . The purpose of graph embedding is to find low-dimensional features that maintain similarity between the vertex pairs in a sample similarity graph. If two samples are connected in the graph, their latent features will be close. Thus, they should also have similar cluster assignments. 
Based on this motivation, Yang et al. propose deep clustering via a Gaussian mixture variational autoencoder with graph embedding (DGG). Like VaDE , the generative model of DGG is p(x,z,c ) =p(x|z)p(z|c)p(c). The prior distributions of zand care set as a Gaussian mixture distribution and a categorical distribution, respectively. The learning problem of GMM-based V AE is usually solved by maximizing the evidence lower bound (ELBO) of the log-likelihood function with reparameterization trick. To achieve graph embedding, the authors add a graph embedding constraint to the original optimization problem, which exists not only on the features but also on the clustering assignments. Specifically, the similarity between data points is measured with a trained Siamese network . Autoencoder also works on graphs as an effective embedding method. In AGAE (adversarial graph autoEncoders) , the authors apply ensemble clustering , in the deep graph embedding process and develop an adversarial regularizer to guide the training of the autoencoder and discriminator. Recent studies have mostly focused on the methods which are twostep approaches. The drawback is that the learned embedding may not be the best fit for the clustering task. To address this, Wang et al. propose a unified approach named deep attentional embedded graph clustering (DAEGC) . DAEGC develops a graph attention-based autoencoder to effectively integrate both structure and content information, thereby achieving better clustering performance. The data stream framework of graph autoencoder applicated in clustering in Fig. 4. As one of the most successful feature extractors for deep learning, CNNs are mainly limited by Euclidean data. GCNs have proved that graph convolution is effective in deep clustering, e.g., Zhang et al. propose an adaptive graph convolution (AGC) method for attributed graph clustering. AGC exploits high-order graph convolution to capture global cluster structure and adaptively selects the appropriate order for different graphs. Nevertheless, AGC might not determine the appropriate neighborhood that reflects the relevant information of connected nodes represented in graph structures. Based on AGC, Zhu et al. exploit heat kernel to enhance the performance of graph convolution and propose AGCHK (AGC using heat kernel) , which could make the low-pass performance of the graph filter better. In summary, we can realize the importance of the structure of data. Motivated by the great success of GNNs in encoding the graph structure, Bo et al. propose a structural deep clustering network (SDCN) . By stacking multiple layers of GNN, SDCN is able to capture the high-order structural information. At the same time, benefiting from the self-supervision of AE and GNN, the multi-layer GNN does not exhibit the so-called oversmooth phenomenon. SDCN is the first work to apply structural information into deep clustering explicitly.TABLE 4: Semi-supervised deep clustering methods. Methods Characteristics SDEC (2019) Based on DEC . SSLDEC (2019) Based on DEC . DECC (2019) Based on DEC . SSCNN (2020) Combinek-means loss and pairwise divergence. 4 S EMI-SUPERVISED DEEPCLUSTERING Traditional semi-supervised learning can be divided into three categories, i.e., semi-supervised classification , , semi-supervised dimension reduction , , and semisupervised clustering , , . Commonly, the constraint of unsupervised data is marked as must-link and cannotlink. 
Samples with the must-link constraint belong to the same cluster, while samples with the cannot-link constraint belong to different clusters. Most semi-supervised clustering objectives are the combination of unsupervised clustering loss and constraint loss. Semi-supervised deep clustering has not been explored well. Here we introduce several representative works. These works use different ways to combine the relationship constraints and the neural networks to obtain better clustering performance. We summarize these methods in Table 4. Semi-supervised deep embedded clustering (SDEC) is based on DEC and incorporates pairwise constraints in the feature learning process. Its loss function is defined as: Loss =KL(SR)+1 nn i=1n k=1aijzizj2, (10) whereis a trade-off parameter. aij= 1 ifxiandxjare assigned to the same cluster, aij= -1 ifxiandxjsatisfy cannot-link constraints,aij= 0 otherwise. As the loss function shows, it is formed by two parts. The first part is KL divergence loss which has been explained in Section 3.1. The second part is semisupervised loss denotes the consistency between the embedded feature{zi}n i=1and parameter aij. Intuitively, if aij= 1 , to minimize the loss function, zizj2should be small. In contrast, ifaij=1, to minimize the loss, zizj2should be large, which means ziis apart from zjin the latent space Z. Like SDEC, most semi-supervised deep clustering (DC) methods are based on unsupervised DC methods. It is straightforward to expand an unsupervised DC method to a semi-supervised DC one through adding the semi-supervised loss. Compared with unsupervised deep clustering methods, the extra semi-supervised information of data can help the neural network to extract features more suitable for clustering. There are also some works focusing on extending the existing semi-supervised clustering method to a deep learning version. For example, the feature extraction process of both SSLDEC (semi-supervised learning with deep embedded clustering for image classification and segmentation) and DECC (deep constrained clustering) are based on DEC. Their training process is similar to semi-supervised k-means which learns feature representations by alternatively using labeled and unlabeled data samples. During the training process, the algorithms use labeled samples to keep the model consistent and choose a high degree of confidence unlabeled samples as newly labeled samples to tune the network. Semi-supervised clustering with neural networks combines a k-means loss and pairwise divergence to simultaneously learn the cluster centers as well as semantically meaningful feature representations. 10 1 0, Fig. 4: The data stream framework of graph autoencoder applicated in clustering. GCN (N,M )is a graph autoencoder, GCN ()is used to represent a graph convolutional neural network, graph autoencoder consists of two layers of graph convolutional neural networks. Both node attributesNand graph structure Mare utilized as inputs to this encoder. Zis a matrix of node embedding vectors. is an activation function, Mis the prediction of graph adjacency matrix M. 5 D EEPMULTI-VIEW CLUSTERING The above-mentioned deep clustering methods can only deal with single-view data. In practical clustering tasks, the input data usually have multiple views. For example, the report of the same topic can be expressed with different languages; the same dog can be captured from different angles by the cameras; the same word can be written by people with different writing styles. 
5 DEEP MULTI-VIEW CLUSTERING

The above-mentioned deep clustering methods can only deal with single-view data. In practical clustering tasks, the input data usually have multiple views. For example, the report of the same topic can be expressed in different languages; the same dog can be captured from different angles by cameras; the same word can be written by people with different writing styles. Multi-view clustering (MVC) methods are proposed to make use of the complementary information among multiple views to improve clustering performance. In recent years, the application of deep learning in multi-view clustering has become a hot topic. These deep multi-view clustering algorithms focus on solving clustering problems with different forms of input data. Since the network structures used in most of these methods are autoencoders, we divide them into three categories based on the adopted clustering theoretical basis: DEC-based, subspace clustering-based, and GNN-based. They are summarized in Table 5.

TABLE 5: The summaries of deep multi-view clustering methods.

Networks    Methods           Characteristics
DAE + GAN   DAMC (2019)       Capture the data distribution ulteriorly by adversarial training.
VAE         DMVCVAE (2020)    Learn a shared latent representation under the VAE framework.
DAE         DEMVC (2021)      Through collaborative training, each view can guide all views.
DAE         DMVSSC (2018)     Extract multi-view deep features by CCA-guided convolutional auto-encoders.
DAE         RMSL (2019)       Recover the underlying low-dimensional subspaces in which the high-dimensional data lie.
DAE         MVDSCN (2019)     Combine convolutional auto-encoder and self-representation together.
VAE         Multi-VAE (2021)  Learn disentangled and explainable representations.
DAE         CMHHC (2022)      Employ multiple autoencoders and hyperbolic hierarchical clustering.
DAE         MFLVC (2022)      Utilize contrastive clustering to learn the common semantics across all views.
DAE         DIMVC (2022)      Imputation-free and fusion-free incomplete multi-view clustering.
GCN         Multi-GCN (2019)  Incorporate nonredundant information from multiple views.
GCN         MAGCN (2020)      Dual encoders for reconstructing and integrating.
GAE         O2MAC (2020)      Partition the graph into several nonoverlapping clusters.
GAE         CMGEC (2021)      Multiple graph autoencoder.
GAE         DMVCJ (2022)      Weighting strategy to alleviate the noisy issue.

5.1 DEC-based

As mentioned previously, DEC (deep embedded clustering) uses an autoencoder to learn low-dimensional embedded feature representations and then minimizes the KL divergence between the Student's t-distribution and an auxiliary target distribution of the feature representations to achieve clustering. Improved DEC (IDEC) emphasizes data structure preservation and adds a reconstruction loss term for the low-dimensional feature representations when processing the fine-tuning task. Some deep multi-view clustering methods also adopt this deep learning pipeline.
Traditional MVC methods mostly use linear and shallow embeddings to learn the latent structure of multi-view data. These methods cannot fully utilize the non-linear property of data, which is vital to reveal a complex clustering structure. Based on adversarial learning and deep autoencoders, Li et al. propose deep adversarial multi-view clustering (DAMC) to learn the intrinsic structure embedded in multi-view data. Specifically, DAMC consists of a multi-view encoder E, a multi-view generator (decoder) G, V discriminators D_1, ..., D_V (V denotes the number of views), and a deep embedded clustering layer. The multi-view encoder outputs low-dimensional embedded features for each view. For each embedded feature, the multi-view generator generates a corresponding reconstructed sample. The discriminators are used to distinguish the generated samples from the real samples and output feedback. The total loss function of DAMC is defined as:

Loss = min_{E,G} max_{D_1,…,D_V} L_r + αL_c + βL_{GAN},   (11)

where L_c comes from DEC and represents the clustering loss, L_r and L_{GAN} represent the reconstruction loss and the GAN loss respectively, and α and β are hyperparameters. Compared with traditional MVC algorithms, DAMC can reveal the non-linear property of multi-view data and achieve better clustering performance. Xu et al. propose a novel collaborative training framework for deep embedded multi-view clustering (DEMVC). Specifically, DEMVC defines a switched shared auxiliary target distribution and fuses it into the overall clustering loss. Its main idea is that, by sharing optimization objectives, each view in turn guides all views to learn the low-dimensional embedded features that are conducive to clustering. At the same time, optimizing the reconstruction loss makes the model retain discrepancies among multiple views. Experiments show that DEMVC can mine the correct information contained in multiple views to correct other views, which is helpful for improving the clustering accuracy. While existing methods tend to fuse the representations of multiple views, Xu et al. present a novel VAE-based multi-view clustering framework (Multi-VAE) that learns disentangled visual representations.
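The DEC machinery reused by these multi-view methods is compact; below is a minimal PyTorch sketch of its two ingredients, the Student's t soft assignment and the sharpened auxiliary target distribution (variable names are our own):

    import torch

    def soft_assign(z, mu, alpha=1.0):
        # q_ij: Student's t similarity between embedding z_i and cluster center mu_j
        dist2 = torch.cdist(z, mu) ** 2
        q = (1.0 + dist2 / alpha) ** (-(alpha + 1.0) / 2.0)
        return q / q.sum(dim=1, keepdim=True)

    def target_distribution(q):
        # p_ij: square q and renormalize, emphasizing high-confidence assignments
        w = q ** 2 / q.sum(dim=0)
        return (w.T / w.sum(dim=1)).T

    kl = torch.nn.KLDivLoss(reduction="batchmean")
    # clustering loss KL(P || Q), with P held fixed during each update:
    # loss = kl(q.log(), target_distribution(q).detach())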
Lin et al. propose contrastive multi-view hyperbolic hierarchical clustering (CMHHC). It consists of three components: multi-view alignment learning, aligned feature similarity learning, and continuous hyperbolic hierarchical clustering. By capturing the invariant information across views and learning a meaningful metric for similarity-based continuous hierarchical clustering, CMHHC is capable of clustering multi-view data at diverse levels of granularity. Xu et al. propose a framework of multi-level feature learning for contrastive multi-view clustering (MFLVC), which combines multi-view clustering with contrastive learning to improve clustering effectiveness. MFLVC can learn different levels of features and reduce the adverse influence of view-private information. Xu et al. also explore incomplete multi-view clustering: by mining the complementary information in the high-dimensional feature space via a nonlinear mapping of multiple views, the proposed method DIMVC can handle incomplete data well.

5.2 Subspace clustering-based

Subspace clustering is another popular clustering method, which holds the assumption that data points of different clusters are drawn from multiple subspaces. Subspace clustering typically first estimates the affinity of each pair of data points to form an affinity matrix, and then applies spectral clustering or a normalized cut on the affinity matrix to obtain clustering results. Some subspace clustering methods based on self-expression have been proposed. The main idea of self-expression is that each point can be expressed as a linear combination C of the data points X themselves. The general objective is:

Loss = L_r + R(C) = ‖X − XC‖² + R(C),   (12)

where ‖X − XC‖² is the reconstruction loss and R(C) is the regularization term for the subspace representation C. In recent years, a lot of works generate a good affinity matrix and achieve better results by using the self-expression methodology (a minimal sketch of this objective is given below). There are also multi-view clustering methods that are based on subspace learning. They construct the affinity matrix with shallow features and lack interaction across different views, thus resulting in insufficient use of the complementary information included in multi-view datasets.
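As a concrete illustration of Eq. (12), the following PyTorch sketch optimizes a self-expression matrix C for toy data with an l1 regularizer, one common choice for R(C); the setup and names are ours, not from any particular paper:

    import torch

    n, d = 200, 32
    X = torch.randn(d, n)                       # toy data; columns are data points
    C = torch.zeros(n, n, requires_grad=True)   # self-expression coefficients
    opt = torch.optim.Adam([C], lr=1e-2)

    for _ in range(500):
        opt.zero_grad()
        loss = ((X - X @ C) ** 2).sum() + 0.1 * C.abs().sum()   # ||X - XC||^2 + R(C)
        loss.backward()
        opt.step()

    # affinity matrix for spectral clustering; the diagonal of C is usually
    # constrained to zero so that points do not express themselves
    W = C.detach().abs() + C.detach().abs().T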
To address this, researchers have recently focused more on multi-view subspace clustering methods based on deep learning. Exploring the consistency and complementarity of multiple views is a long-standing important research topic of multi-view clustering. Tang et al. propose deep multi-view sparse subspace clustering (DMVSSC), which consists of a canonical correlation analysis (CCA)-based self-expressive module and convolutional autoencoders (CAEs). The CCA-based self-expressive module is designed to extract and integrate deep common latent features to explore the complementary information of multi-view data. A two-stage optimization strategy is used in DMVSSC. Firstly, it only trains the CAEs of each view to obtain suitable initial values for the parameters. Secondly, it fine-tunes all the CAEs and the CCA-based self-expressive modules to perform multi-view clustering. Unlike CCA-based deep MVC methods (e.g., DMVSSC) which project multiple views into a common low-dimensional space, Li et al. present a novel algorithm named reciprocal multi-layer subspace learning (RMSL). RMSL contains two main parts: HSRL (hierarchical self-representative layers) and BEN (backward encoding networks). The self-representative layers (SRL) contain the view-specific SRL, which maps view-specific features into view-specific subspace representations, and the common SRL, which further reveals the subspace structure between the common latent representation and the view-specific representations. BEN implicitly optimizes the subspaces of all views to explore consistent and complementary structural information and obtain a common latent representation.
Many multi-view subspace clustering methods first extract hand-crafted features from multiple views and then jointly learn the affinity matrix for clustering. This independent feature extraction stage may cause the multi-view relations in the data to be ignored. To alleviate this problem, Zhu et al. propose a novel multi-view deep subspace clustering network (MVDSCN) which consists of a diversity net (Dnet) and a universality net (Unet). Dnet is used to learn view-specific self-representation matrices and Unet is used to learn a common self-representation matrix for multiple views. The loss function is made up of the reconstruction loss of the autoencoders, the self-representation loss of subspace clustering, and multiple well-designed regularization terms.

5.3 GNN-based

In the real world, graph data are far more complex. For example, we can use text, images, and links to describe the same web page, or we can ask people with different styles to write the same number. Obviously, traditional single-view clustering methods are unable to meet the needs of such application scenarios. That is, one usually needs to employ a multi-view graph, rather than a single-view graph, to better represent the real graph data. Since GCNs have made considerable achievements in processing graph-structured data, Khan and Blumenstock develop a graph-based convolutional network (Multi-GCN) for multi-view data. Multi-GCN focuses on integrating subspace learning approaches with recent innovations in graph convolutional networks, and proposes an efficient method for adapting graph-based semi-supervised learning (GSSL) to multi-view contexts. Most GNNs can effectively process single-view graph data, but they cannot be directly applied to multi-view graph data.
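Underlying both the single-view and multi-view GNN methods discussed here is the same graph-convolutional encoder; a minimal PyTorch sketch of a two-layer graph autoencoder in the style of Fig. 4 (our own simplified illustration) is:

    import torch

    def normalize_adj(M):
        # symmetrically normalized adjacency with self-loops: D^{-1/2}(M+I)D^{-1/2}
        A = M + torch.eye(M.size(0))
        d = A.sum(dim=1).pow(-0.5)
        return d.unsqueeze(1) * A * d.unsqueeze(0)

    class GAE(torch.nn.Module):
        def __init__(self, in_dim, hid_dim, emb_dim):
            super().__init__()
            self.W1 = torch.nn.Linear(in_dim, hid_dim, bias=False)
            self.W2 = torch.nn.Linear(hid_dim, emb_dim, bias=False)

        def forward(self, N, A_hat):
            H = torch.relu(A_hat @ self.W1(N))   # first graph convolution layer
            Z = A_hat @ self.W2(H)               # node embeddings Z
            M_hat = torch.sigmoid(Z @ Z.T)       # inner-product decoder: predicted adjacency
            return Z, M_hat

    # reconstruction loss: binary cross-entropy between M_hat and the true adjacency M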
Cheng et al. propose multi-view attribute graph convolution networks for clustering (MAGCN) to handle graph-structured data with multi-view attributes. The main innovation of MAGCN is its design with two-pathway encoders. The first pathway develops multi-view attribute graph attention networks to capture the graph embedding features of multi-view graph data. The other pathway develops consistent embedding encoders to capture the geometric relationship and the consistency of the probability distribution among different views.
Fan et al. attempt to employ deep embedded learning for multi-view graph clustering. The proposed model is named One2Multi graph autoencoder for multi-view graph clustering (O2MAC), which utilizes a graph convolutional encoder of one view and decoders of multiple views to encode the multi-view attributed graphs into a low-dimensional feature space. Both the clustering loss and the reconstruction loss of O2MAC are similar in form to those of other deep embedded clustering methods. What is special is that the graph convolutional network is designed to deal with graph clustering tasks. Huang et al. propose DMVCJ (deep embedded multi-view clustering via jointly learning latent representations and graphs). By introducing a self-supervised GCN module, DMVCJ jointly learns both latent graph structures and feature representations.
The graph in most existing GCN-based multi-view clustering methods is fixed, which makes the clustering performance heavily dependent on the predefined graph. A noisy graph with unreliable connections can result in ineffective convolution with wrong neighbors on the graph, which may worsen the performance. To alleviate this issue, Wang et al. propose a consistent multiple graph embedding clustering framework (CMGEC), which is mainly composed of a multiple graph autoencoder (M-GAE), a multi-view mutual information maximization module (MMIM), and a graph fusion network (GFN). CMGEC develops a multi-graph attention fusion encoder to adaptively learn a common representation from multiple views, and thereby CMGEC can deal with three types of multi-view data: multi-view data without a graph, multi-view data with a common graph, and single-view data with multiple graphs.
According to our research, deep multi-view clustering algorithms have not been explored well. Beyond the above-mentioned three categories, Yin et al. propose a VAE-based deep MVC method (deep multi-view clustering via variational autoencoders, DMVCVAE). DMVCVAE learns a shared generative latent representation that obeys a mixture of Gaussian distributions and can thus be regarded as an extension of VaDE to multi-view clustering. There are also some application studies based on deep multi-view clustering. For example, Perkins et al. introduce the dialog intent induction task and present a novel deep multi-view clustering approach to tackle the problem. Abavisani et al. and Hu et al. study multi-modal clustering, which is also related to multi-view clustering. Taking advantage of both deep clustering and multi-view learning will be an interesting future research direction of deep multi-view clustering.

6 DEEP CLUSTERING WITH TRANSFER LEARNING

Transfer learning has emerged as a new learning framework to address the problem that the training and testing data are drawn from different feature spaces or distributions. For complex data such as high-resolution real pictures or noisy videos, traditional clustering methods and even deep clustering methods cannot work very well, because of the high dimensionality of the feature space and the lack of a uniform criterion to guarantee the clustering process.
Transfer learning provides new solutions to these problems by transferring information from a source domain, which has additional information, to guide the clustering process in the target domain. In the early phase, the ideas of deep domain adaptation were simple and clear: for example, DRCN (deep reconstruction-classification networks) uses a classification loss for the source domain and a reconstruction loss for the target domain, with the two domains sharing the same feature extractor.
Fig. 5: The data stream framework of the deep adaptation network (DAN). Ds is the source domain and Dt is the target domain. f is the shared encoder of both domains, which can be initialized with an existing network; the first layers of f are frozen, and the last layers of f can be fine-tuned in the training process. fs is the encoder of Ds and ft is the encoder of Dt. Ss are the predicted label vectors of Ds, Y are the real labels of Ds, and St are the predicted results of Dt.
With the development of DNNs, we now have more advanced ways to transfer the knowledge. In this section, we introduce transfer learning works about clustering, separated into two parts: the first part is DNN-based, and the second part is GAN-based.

6.1 DNN-based

DNN-based UDA (unsupervised domain adaptation) methods generally aim at projecting the source and target domains into the same feature space, in which the classifier trained with the source embeddings and labels can be applied to the target domain. In 2014, through a summary of network training processes, Yosinski et al. found that many deep neural networks trained on natural images exhibit a common phenomenon: the features learned in the first several layers appear not to be specific to a particular dataset or task and are applicable to many other datasets or tasks. Features must eventually transition from general to specific along the last layers of the network. Thus, we can use a mature network (e.g., AlexNet, GoogleNet) which can provide credible parameters as the initialization for a specific task. This trick has been frequently used in feature extraction networks.
The domain adaptive neural network (DaNN) first used maximum mean discrepancy (MMD) with a DNN. Many domain-discrepancy-based methods adopt techniques similar to DaNN. Deep adaptation networks (DAN) use multiple-kernel variants of MMD (MK-MMD) as the domain adaptation function. As shown in Fig. 5, the network of DAN minimizes the distance at the last feature-specific layers, so that the features from the source net and the target net are projected into the same space. After DAN, more and more methods based on MMD were proposed. The main line of improvement is to choose different versions of MMD, such as the joint adaptation network (JAN) and the weighted deep adaptation network (WDAN). JAN maximizes the joint MMD to make the distributions of both the source and target domains more distinguishable. WDAN is proposed to solve the problem of imbalanced data distributions by introducing an auxiliary weight for each class in the source domain. RTN (unsupervised domain adaptation with residual transfer networks) uses residual networks and MMD for the UDA task. Some discrepancy-based methods do not use MMD. Domain adaptive hashing (DAH) uses a supervised hash loss and an unsupervised entropy loss to align the target hash values to their corresponding source categories. Sliced Wasserstein discrepancy (SWD) adopts the novel SWD to capture the dissimilarity between probability distributions.
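Since MMD is the workhorse of many of the methods above, we give a minimal PyTorch sketch of a single-Gaussian-kernel MMD between batches of source and target features; the MK-MMD used in DAN combines several such kernels, and the names here are our own:

    import torch

    def gaussian_mmd(xs, xt, sigma=1.0):
        # biased estimate of squared MMD between two batches of features
        def k(a, b):
            return torch.exp(-torch.cdist(a, b) ** 2 / (2 * sigma ** 2))
        return k(xs, xs).mean() + k(xt, xt).mean() - 2 * k(xs, xt).mean()

    # a DAN-style objective on the last feature-specific layers:
    # loss = cross_entropy(clf(f(x_src)), y_src) + lam * gaussian_mmd(f(x_src), f(x_tgt))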
Correlation alignment (CORAL) minimizes domain shift by aligning the second-order statistics of the source and target distributions. Higher-order moment matching (HoMM) shows that first-order HoMM is equivalent to MMD and second-order HoMM is equivalent to CORAL. The contrastive adaptation network (CAN) proposes the contrastive domain discrepancy (CDD) to minimize the intra-class discrepancy and maximize the inter-class margin. Besides, several new measurements have been proposed for the source and target domains. The analysis of representations for domain adaptation has also contributed a lot to the field of domain adaptation distances.
Some works try to improve the performance of UDA in other directions. For example, Wang et al. propose unsupervised domain adaptation via structured prediction based selective pseudo-labeling (SPL), which learns a domain-invariant subspace by supervised locality preserving projection (SLPP) using both labeled source data and pseudo-labeled target data. The tricks used in deep clustering have also been used in UDA methods. For example, structurally regularized deep clustering (SRDC) implements structural source regularization via a simple strategy of joint network training: it first minimizes the KL divergence between an auxiliary distribution (the same as the auxiliary distribution of DEC) and the predictive label distribution, and then replaces the auxiliary distribution with that of the ground-truth labels of the source data. Zhou et al. apply ensemble learning in the training process. Prabhu et al. apply entropy optimization in the target domain.

6.2 GAN-based

DNN-based UDA methods mainly focus on an appropriate measurement for the source and target domains. By contrast, GAN-based UDA methods use the discriminator to fit this measurement function. Usually, in GAN-based UDA methods, the generator g is used to produce data following one distribution from another distribution, and the discriminator d is used to judge whether the generated data follow the distribution of the target domain. A traditional GAN cannot satisfy the demand of projecting two domains into the same space, so different frameworks based on GANs have been proposed to cope with this challenge. In 2016, the domain-adversarial neural network (DANN) and coupled generative adversarial networks (Co-GAN) were proposed to introduce adversarial learning into transfer learning. DANN uses a discriminator to ensure that the feature distributions over the two domains are made similar. Co-GAN applies both generators and discriminators to UDA. It consists of a group of GANs, each corresponding to a domain; in UDA there are two domains. The framework of Co-GAN is shown in Fig. 6.
Fig. 6: The data stream framework of Co-GAN applied in UDA. It consists of a pair of GANs: GAN1 and GAN2, which share the weights in the first layers of g and the last layers of d. Ds is the source domain and Dt is the target domain; D̂s and D̂t are generated from the noise. The first layers of g are responsible for decoding high-level semantics and the last layers of d are responsible for encoding high-level semantics; adding a weight-sharing constraint in these layers can guarantee similar high-level semantic representations of both domains with different low-level feature representations.
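DANN's adversarial alignment is usually implemented with a gradient reversal layer, which behaves as the identity in the forward pass and flips the gradient sign in the backward pass; a minimal PyTorch sketch (our own illustration, not the original implementation) is:

    import torch

    class GradReverse(torch.autograd.Function):
        @staticmethod
        def forward(ctx, x, lam):
            ctx.lam = lam
            return x.view_as(x)           # identity in the forward pass

        @staticmethod
        def backward(ctx, grad):
            return -ctx.lam * grad, None  # reversed, scaled gradient for the encoder

    # feat = encoder(x)
    # domain_logits = domain_clf(GradReverse.apply(feat, 1.0))
    # minimizing the domain classification loss then trains the encoder to make
    # source and target feature distributions indistinguishable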
In deep transfer learning, we need to find the proper layers for MMD or weight sharing. In general, networks that want to transfer knowledge through domain adaptation must pay more attention to the layers responsible for high-level semantics. In DAN, the first layers are for basic features, the semantic information sits in the higher layers, and the last layers are the ones projected with MMD. In Co-GAN, the semantic layers are also chosen as the transferring layers (note that the first layers of DAN are not transferring layers between the two domains, as they transfer the feature-extracting power of a mature network to our domain's feature extractor). The weight-sharing constraint in the first layers of the generator urges two instances from different domains to extract the same semantics, which are then destructed into different low-level details in the last layers of g. In the opposite direction, the discriminator learns features from low level to high level, so adding the weight-sharing constraint in its last layers stimulates it to learn a joint distribution of multi-domain images from different low-level representations (see the weight-sharing sketch after Table 6). Co-GAN contributed significant ideas to UDA, and adversarial methods in domain adaptation have sprung up since. Methods that rely on synthesized instances to assist the domain adaptation process often do not perform very well on real images such as the OFFICE dataset. GenToAdapt-GAN is proposed for cases where data generation is hard: even though the generator network it uses performs a mere style transfer, this is sufficient for providing good gradient information to successfully align the domains. Unlike Co-GAN, there is just one generator and one discriminator; additionally, there are two classifiers and one encoder to embed the instances into vectors. Co-GAN and GenToAdapt adopt different strategies to train a classifier for an unlabeled domain.

TABLE 6: The summaries of DNN- and GAN-based methods in deep clustering with transfer learning.

Net   Methods            Characteristics
DNN   DaNN (2014)        MMD and the same feature extractor.
DNN   DAN (2015)         Multi-kernel MMD. Different feature extractors.
DNN   DRCN (2016)        Classification of source and reconstruction of target.
DNN   RTN (2016)         Residual networks and MMD.
DNN   DAH (2017)         Supervised hash loss and unsupervised entropy loss.
DNN   WDAN (2017)        Imbalanced data distribution.
DNN   JAN (2017)         Joint MMD.
DNN   CORAL (2017)       Minimize domain shift by aligning the second-order statistics of source and target distributions.
DNN   SWD (2019)         Sliced Wasserstein discrepancy.
DNN   CAN (2019)         Contrastive domain discrepancy.
DNN   SRDC (2020)        KL divergence and auxiliary distribution (the same as DEC).
DNN   SPL (2020)         Supervised locality preserving projection and selective pseudo-labeling strategy.
DNN   MDD (2020)         Within-domain class imbalance and between-domain class distribution shift.
DNN   HoMM (2020)        Higher-order moment matching for UDA.
DNN   GSDA (2020)        Model the relationship among the local distribution pieces and global distribution synchronously.
DNN   ETD (2020)         Attention mechanism for sample similarity and attention scores for the transport distances.
DNN   BAIT (2020)        Source-free unsupervised domain adaptation.
DNN   DAEL (2021)        Ensemble learning.
DNN   SHOT (2021)        Source-free unsupervised domain adaptation.
DNN   SHOT-plus (2021)   Source-free unsupervised domain adaptation.
DNN   SENTRY (2021)      Entropy optimization.
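Returning to the weight-sharing analysis above, a minimal PyTorch sketch of Co-GAN-style generators that share their first (semantic) layers while keeping domain-specific output layers might look as follows (a simplified illustration, not the original implementation):

    import torch

    shared = torch.nn.Sequential(             # first layers of g: shared high-level semantics
        torch.nn.Linear(100, 256),
        torch.nn.ReLU(),
    )
    head_src = torch.nn.Linear(256, 784)      # domain-specific last layers (low-level details)
    head_tgt = torch.nn.Linear(256, 784)

    def g_src(z):
        return torch.tanh(head_src(shared(z)))

    def g_tgt(z):
        return torch.tanh(head_tgt(shared(z)))

    # the same noise z decodes to a corresponding image pair in the two domains;
    # symmetrically, the discriminators would share their *last* layers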
DNN   RWOT (2021)        Shrinking subspace reliability and weighted optimal transport strategy.
DNN   N2DC-EX (2021)     Source-free unsupervised domain adaptation.
GAN   Co-GAN (2016)      A group of GANs with partial weight sharing; discriminator and label predictor are unified.
GAN   DANN (2016)        Domain classifier and label predictor.
GAN   UNIT (2017)        Use variational autoencoder as feature extractor.
GAN   ADDA (2017)        Generalization of Co-GAN.
GAN   PixelDA (2017)     Generate instances following the target distribution with source samples.
GAN   GenToAdapt (2018)  Two classifiers and one encoder to embed the instances into vectors.
GAN   SimNet (2018)      Similarity-based classifier.
GAN   MADA (2018)        Multi-domains.
GAN   DIFA (2018)        Extended ADDA using a pair of feature extractors.
GAN   CyCADA (2018)      Semantic consistency at both the pixel level and feature level.
GAN   SymNet (2019)      Category-level and domain-level confusion losses.
GAN   M-ADDA (2020)      Triplet loss function and ADDA.
GAN   IIMT (2020)        Mixup formulation and a feature-level consistency regularizer.
GAN   MA-UDASD (2020)    Source-free unsupervised domain adaptation.
GAN   DM-ADA (2020)      Domain mixup is jointly conducted on pixel and feature level.

The biggest difference between Co-GAN and GenToAdapt-GAN is whether the feature extractor is the same. The feature extractor of Co-GAN is the GAN itself, while the feature extractor of GenToAdapt-GAN is a specialized encoder. In Co-GAN, the GAN must do the jobs of the adversarial process and of encoding at the same time; in GenToAdapt-GAN, these two jobs are separated, which means GenToAdapt-GAN is stabler and performs better when the data are complex. Most of the methods proposed in recent years are based on these two approaches. UNIT adopts different GANs for different domains with weight sharing; its main change is that the generator is replaced by a VAE. ADDA (adversarial discriminative domain adaptation) adopts a discriminative model as the feature extractor and is based on Co-GAN; it can be viewed as a generalization of the Co-GAN framework. DIFA extends ADDA using a pair of feature extractors. M-ADDA uses a metric learning approach that trains the source model on the source dataset by optimizing a triplet loss function and then uses ADDA to complete the transfer process. SymNet proposes a two-level domain confusion scheme that includes category-level and domain-level confusion losses. With the same feature extractor for the source and target domains, MADA (multi-adversarial domain adaptation) sets the generator as its feature extractor, expanding the UDA problem to multiple domains. The similarity-based domain adaptation network (SimNet) uses the discriminator as a feature extractor and a similarity-based classifier which compares the embedding of an unlabeled image with a set of labeled prototypes to classify an image. IIMT uses a mixup formulation and a feature-level consistency regularizer to improve generalization performance on target data. DM-ADA uses domain mixup on both the pixel and feature level to improve the robustness of models.
There is also a very straightforward way to transfer knowledge between domains: generate new instances for the target domain. If we translate an instance from the source domain into a new instance that follows the joint distribution of both domains and is labeled the same as its mother source instance, then we get a batch of labeled fake instances in the target domain. A classifier trained with these fake instances should be applicable to the real target data. In this way, we can easily use all the unsupervised adversarial domain adaptation methods in UDA as an effective data augmentation method, as sketched below.
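A minimal sketch of this augmentation recipe (the names are ours, and g is a hypothetical source-to-target translator in the spirit of Fig. 7):

    import torch

    def train_step(g, f_s, opt, x_src, y_src, noise):
        # translate labeled source images into target-styled fakes; labels carry over
        ce = torch.nn.functional.cross_entropy
        x_fake = g(x_src, noise).detach()              # fake "target" instances
        logits = f_s(torch.cat([x_src, x_fake]))       # classifier sees real + fake data
        loss = ce(logits, torch.cat([y_src, y_src]))   # fakes inherit their source labels
        opt.zero_grad()
        loss.backward()
        opt.step()
        return loss.item()

    # after convergence, f_s can be applied directly to real target-domain data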
This accessible method also performs well on the deep clustering problem and is called pixel-level transfer learning. Unsupervised pixel-level domain adaptation with generative adversarial networks (Pixel-GAN) aims at changing images from the source domain to appear as if they were sampled from the target domain while maintaining their original content (label). The authors propose a novel GAN-based architecture that can learn such a transformation in an unsupervised manner. The training process of Pixel-GAN is shown in Fig. 7. It uses a generator g to propose a fake image from an input composed of a labeled source image and a noise vector. The fake images are discriminated against target data by a discriminator d. At the same time, the fake images D̂s and the source images are put into a classifier fs; when the model has converged, the classifier can be used on the target domain.
Fig. 7: An overview of the Pixel-GAN architecture. The generator g generates an image conditioned on a synthetic image and a noise vector; the generated image is fed into the discriminator as fake data. The discriminator d discriminates between real and fake images. Ds is the source domain and Dt is the target domain; D̂s is the fake image. fs is trained with the generated data and the source data; Y denotes the real labels and Ss denotes the predicted results.
On the whole, Pixel-GAN is a very explicit model, but it relies too much on the quality of the generated images. Although the classifier can help preserve class-invariant information, it is still hard to make the model perform well on complex images. Pixel-level transfer and feature-level transfer do not work against each other: pixel-level transfer carries visual features, while feature-level transfer carries the intrinsic information of the instances. Cycle-consistent adversarial domain adaptation (CyCADA) adapts representations at both the pixel level and the feature level while enforcing semantic consistency. The authors enforce both structural and semantic consistency during adaptation using a cycle-consistency loss and semantic losses based on a particular visual recognition task. The semantic losses both guide the overall representation to be discriminative and enforce semantic consistency before and after mapping between domains.
Besides GANs, data augmentation for transfer learning can also be done in traditional ways. One line of work makes it efficient to perform data augmentation in the target domain even when it is unlabeled, by adding self-supervised tasks to the target data, and shows good performance. More importantly, this technique can be combined with other domain adaptation methods such as CyCADA and DAN.

7 FUTURE DIRECTIONS OF DEEP CLUSTERING

Based on the aforementioned literature review and analysis, deep clustering has been applied to several domains, and we highlight several aspects worth studying further:
Theoretical exploration. Although remarkable clustering performance has been achieved by designing ever more sophisticated deep clustering pipelines for specific problem-solving needs, there is still no reliable theoretical analysis of how to qualitatively analyze the influence of feature extraction and clustering loss on the final clustering. Exploring the theoretical basis of deep clustering optimization is thus of great significance for guiding further research in this field.
Massive complex data processing. Due to the complexity brought by massive data, most of the existing deep clustering models are designed for specific data sets.
Complex data from different sources and forms bring more uncertainties and challenges to clustering. At present, deep learning and graph learning are needed to solve complex data processing problems.
Model efficiency. Deep clustering algorithms require a large number of samples for training. Therefore, on small-sample datasets, deep clustering is prone to overfitting, which degrades the clustering effect and reduces the generalization performance of the model. On the other hand, deep clustering algorithms on large-scale data have high computational complexity, so model structure optimization and model compression techniques can be adopted to reduce the computational load of the model and improve its efficiency under practical application conditions.
Fusion of multi-view data. In practical application scenarios, clustering often involves not just single-image information but also available text and voice information. However, most of the current deep clustering algorithms can only use one kind of information and cannot make good use of all the existing information. Subsequent research can consider fully integrating the information of two or more views, making full use of the consistency and complementarity of data from different views to improve the clustering effect. Furthermore, how to combine features of different views while filtering noise to ensure better view quality needs to be solved.
Deep clustering based on graph learning. In reality, a large number of data sets are stored in the form of graph structures. Graph structures can represent the structural associations between sample points. How to effectively use this structural information is particularly important for improving clustering performance. Whether for single-view deep clustering or the more widely applied multi-view deep clustering, existing clustering methods based on graph learning still have some problems, e.g., the graph structure information is not fully utilized, and the differences and importance of different views are not fully considered. Therefore, the effective analysis of complex graph structure information, especially the rational use of graph structure information to complete the clustering task, needs further exploration.

8 SUMMARY OF DEEP CLUSTERING METHODS

In this paper, we introduce recent advances in the field of deep clustering, organized mainly by the kind of data setting: single-view, semi-supervised, multi-view, and transfer learning. Single-view methods are the most important part of our survey, as they inherit the problem settings of traditional clustering methods, and we introduce them according to the networks they are based on. Among these networks, DAE-based and DNN-based methods were proposed earlier but are limited by their poor performance on real datasets. Compared to DAE-based and CNN-based methods, VAE-based and GAN-based methods have attracted attention in recent years for their strong feature extraction and sample generation power. Graph neural networks are among the most popular networks recently, especially for community discovery problems, so we also summarize the GNN-based clustering methods. With the development of the internet, the data for clustering arise in different application scenarios, so we summarize some clustering methods with different problem settings. Semi-supervised clustering methods cluster data with constraints and can be developed from single-view clustering methods by adding a constraint loss.
Multi-view clustering methods use the information of different views as a supplement and have been used widely in both traditional neural networks and graph neural networks. Transfer learning can transfer the knowledge of a labeled domain to an unlabeled domain. We introduce clustering methods based on transfer learning with two types of networks: DNN and GAN. DNN-based methods focus on the measurement strategy between the two domains; GAN-based methods use discriminators to fit the measurement strategy. In general, single-view clustering has a long history and remains a challenge, especially on complex data, but outside information should also be considered in application scenarios. For instance, the same news is reported by multiple news organizations; sensor signals can be decomposed in the time and frequency domains; a mature dog classification network is useful for classifying cat images. Semi-supervised models, multi-view models, and unsupervised domain adaptation models that consider multi-source information will attract more attention in practical applications.

REFERENCES

Zhongyuan Wang, Guangcheng Wang, Baojin Huang, Zhangyang Xiong, Qi Hong, Hao Wu, Peng Yi, Kui Jiang, Nanxi Wang, Yingjiao Pei, et al. Masked face recognition dataset and application. arXiv preprint arXiv:2003.09093 , 2020. Jianzhu Guo, Xiangyu Zhu, Chenxu Zhao, Dong Cao, Zhen Lei, and Stan Z Li. Learning meta face recognition in unseen domains. In CVPR , pages 61636172, 2020. Ashima Yadav and Dinesh Kumar Vishwakarma. Sentiment analysis using deep learning architectures: a review. Artificial Intelligence Review , 53(6):43354385, 2020. Guixian Xu, Yueting Meng, Xiaoyu Qiu, Ziheng Yu, and Xu Wu. Sentiment analysis of comment texts based on bilstm. IEEE Access , 7:5152251532, 2019. Ji Zhou, Peigen Li, Yanhong Zhou, Baicun Wang, Jiyuan Zang, and Liu Meng. Toward new-generation intelligent manufacturing. Engineering , 4(1):1120, 2018. Ji Zhou, Yanhong Zhou, Baicun Wang, and Jiyuan Zang. Human cyberphysical systems (hcpss) in the context of new-generation intelligent manufacturing. Engineering , 5(4):624636, 2019. J. MacQueen. Some methods for classification and analysis of multivariate observations. In Proceedings of the 5th Berkeley Symposium on Mathematical Statistics and Probability , pages 281297, 1967. Yazhou Ren, Uday Kamath, Carlotta Domeniconi, and Zenglin Xu. Parallel boosted clustering. Neurocomputing , 351:87100, 2019. Martin Ester, Hans-Peter Kriegel, Jörg Sander, Xiaowei Xu, et al. A density-based algorithm for discovering clusters in large spatial databases with noise. In KDD , volume 96, pages 226231, 1996. Dorin Comaniciu and Peter Meer. Mean shift: A robust approach toward feature space analysis. TPAMI , 24(5):603619, 2002. Yazhou Ren, Uday Kamath, Carlotta Domeniconi, and Guoji Zhang. Boosted mean shift clustering. In ECML-PKDD , pages 646661, 2014. Yazhou Ren, Carlotta Domeniconi, Guoji Zhang, and Guoxian Yu. A weighted adaptive mean shift clustering algorithm. In SDM , pages 794802, 2014. Yazhou Ren, Xiaohui Hu, Ke Shi, Guoxian Yu, Dezhong Yao, and Zenglin Xu. Semi-supervised denpeak clustering with pairwise constraints. In PRICAI , pages 837850, 2018. Christopher M. Bishop. Pattern Recognition and Machine Learning , chapter 9, pages 430439. Springer, 2006. A. K. Jain, M. N. Murty, and P. J. Flynn. Data clustering: A review. ACM Computing Surveys , 31(3):264323, 1999. Alexander Strehl and Joydeep Ghosh. Cluster ensembles a knowledge reuse framework for combining multiple partitions. JMLR , 3:583617, 2002.
Yazhou Ren, Carlotta Domeniconi, Guoji Zhang, and Guoxian Yu. Weighted-object ensemble clustering: methods and analysis. KAIS , 51(2):661689, 2017. Abhishek Kumar and Hal Daum. A co-training approach for multiview spectral clustering. In ICML , pages 393400, 2011. Abhishek Kumar, Piyush Rai, and Hal Daume. Co-regularized multiview spectral clustering. In NeurIPS , pages 1413-1421, 2011. Xiao Cai, Feiping Nie, and Heng Huang. Multi-view k-means clustering on big data. In IJCAI , pages 25982604, 2013. Zongmo Huang, Yazhou Ren, Xiaorong Pu, and Lifang He. Non-linear fusion for self-paced multi-view clustering. In ACM MM , pages 3211 3219, 2021. Zongmo Huang, Yazhou Ren, Xiaorong Pu, Lili Pan, Dezhong Yao, and Guoxian Yu. Dual self-paced multi-view clustering. Neural Networks , 140:184192, 2021. Shudong Huang, Yazhou Ren, and Zenglin Xu. Robust multi-view data clustering with multi-view capped-norm k-means. Neurocomputing , 311:197208, 2018. Svante Wold, Kim Esbensen, and Paul Geladi. Principal component analysis. Chemometr Intell Lab Syst , 2(1-3):3752, 1987. Marti A. Hearst, Susan T Dumais, Edgar Osuna, John Platt, and Bernhard Scholkopf. Support vector machines. IEEE Intelligent Systems and their applications , 13(4):1828, 1998. MD Feit, JA Fleck Jr, and A Steiger. Solution of the schrdinger equation by a spectral method. Journal of Computational Physics , 47(3):412433, 1982. Weibo Liu, Zidong Wang, Xiaohui Liu, Nianyin Zeng, Yurong Liu, and Fuad E Alsaadi. A survey of deep neural network architectures and their applications. Neurocomputing , 234:1126, 2017. Elie Aljalbout, Vladimir Golkov, Yawar Siddiqui, Maximilian Strobel, and Daniel Cremers. Clustering with deep learning: Taxonomy and new methods. arXiv preprint arXiv:1801.07648 , 2018. Erxue Min, Xifeng Guo, Qiang Liu, Gen Zhang, Jianjing Cui, and Jun Long. A survey of clustering with deep learning: From the perspective of network architecture. IEEE Access , 6:3950139514, 2018. Gopi Chand Nutakki, Behnoush Abdollahi, Wenlong Sun, and Olfa Nasraoui. An introduction to deep clustering. In Clustering Methods for Big Data Analytics , pages 7389. Springer, 2019. Sheng Zhou, Hongjia Xu, Zhuonan Zheng, Jiawei Chen, Jiajun Bu, Jia Wu, Xin Wang, Wenwu Zhu, Martin Ester, et al. A comprehensive survey on deep clustering: Taxonomy, challenges, and future directions. arXiv preprint arXiv:2206.07579 , 2022. Bengio Yoshua, Courville Aaron, and Vincent Pascal. Representation learning: a review and new perspectives. TPAMI , 35(8):17981828, 2013. Chunfeng Song, Feng Liu, Yongzhen Huang, Liang Wang, and Tieniu Tan. Auto-encoder based data clustering. In CIARP , pages 117124, 2013. Peihao Huang, Yan Huang, Wei Wang, and Liang Wang. Deep embedding network for clustering. In CVPR , pages 15321537, 2014. Xi Peng, Shijie Xiao, Jiashi Feng, Wei-Yun Yau, and Zhang Yi. Deep subspace clustering with sparsity prior. In IJCAI , pages 19251931, 2016. Junyuan Xie, Ross Girshick, and Ali Farhadi. Unsupervised deep embedding for clustering analysis. In ICML , pages 478487, 2016. Xifeng Guo, Long Gao, Xinwang Liu, and Jianping Yin. Improved deep embedded clustering with local structure preservation. In IJCAI , pages 17531759, 2017. Pan Ji, Tong Zhang, Hongdong Li, Mathieu Salzmann, and Ian Reid. Deep subspace clustering networks. In NeurIPS , pages 2433, 2017. Kamran Ghasedi Dizaji, Amirhossein Herandi, Cheng Deng, Weidong Cai, and Heng Huang. Deep clustering via joint convolutional autoencoder embedding and relative entropy minimization. In ICCV , pages 57365745, 2017. 
Bo Yang, Xiao Fu, Nicholas D Sidiropoulos, and Mingyi Hong. Towards k-means-friendly spaces: Simultaneous deep learning and clustering. In ICML , pages 38613870, 2017. Dongdong Chen, Jiancheng Lv, and Yi Zhang. Unsupervised multi-manifold clustering by learning deep representation. In AAAI , 2017. Xifeng Guo, En Zhu, Xinwang Liu, and Jianping Yin. Deep embedded clustering with data augmentation. In ACML , pages 550565, 2018. Fengfu Li, Hong Qiao, and Bo Zhang. Discriminatively boosted image clustering with fully convolutional auto-encoders. Pattern Recognition , 83:161173, 2018. Sohil Atul Shah and Vladlen Koltun. Deep continuous clustering. arXiv preprint arXiv:1803.01449 , 2018. Sohil Atul Shah and Vladlen Koltun. Robust continuous clustering. PNAS , 114(37):98149819, 2017. Elad Tzoreff, Olga Kogan, and Yoni Choukroun. Deep discriminative latent space for clustering. arXiv preprint arXiv:1805.10795 , 2018. Jianlong Chang, Yiwen Guo, Lingfeng Wang, Gaofeng Meng, Shiming Xiang, and Chunhong Pan. Deep discriminative clustering analysis. arXiv preprint arXiv:1905.01681 , 2019. Xu Yang, Cheng Deng, Feng Zheng, Junchi Yan, and Wei Liu. Deep spectral clustering using dual autoencoder network. In CVPR , pages 40664075, 2019. Tong Zhang, Pan Ji, Mehrtash Harandi, Wenbing Huang, and Hongdong Li. Neural collaborative subspace clustering. arXiv preprint arXiv:1904.10596 , 2019. Yazhou Ren, Ni Wang, Mingxia Li, and Zenglin Xu. Deep density-based image clustering. Knowledge-Based Systems , 197:105841, 2020. Sverine Affeldt, Lazhar Labiod, and Mohamed Nadif. Spectral clustering via ensemble deep autoencoder learning (sc-edae). Pattern Recognition , 108:107522, 2020. Xifeng Guo, Xinwang Liu, En Zhu, Xinzhong Zhu, Miaomiao Li, Xin Xu, and Jianping Yin. Adaptive self-paced deep clustering with data augmentation. TKDE , 32(9):16801693, 2020. Xu Yang, Cheng Deng, Kun Wei, Junchi Yan, and Wei Liu. Adversarial learning for robust deep clustering. In NeurIPS , 2020. Ryan McConville, Raul Santos-Rodriguez, Robert J Piechocki, and Ian Craddock. N2d: (not too) deep clustering via clustering the local manifold of an autoencoded embedding. In ICPR , pages 51455152, 2021. Jinghua Wang and Jianmin Jiang. Unsupervised deep clustering via adaptive gmm modeling and optimization. Neurocomputing , 433:199211, 2021. Jianwei Yang, Devi Parikh, and Dhruv Batra. Joint unsupervised learning of deep representations and image clusters. In CVPR , pages 51475156, 2016. Michael Kampffmeyer, Sigurd Lkse, Filippo M Bianchi, Robert Jenssen, and Lorenzo Livi. Deep kernelized autoencoders. In SCIA , pages 419430, 2017. Jianlong Chang, Lingfeng Wang, Gaofeng Meng, Shiming Xiang, and Chunhong Pan. Deep adaptive image clustering. In ICCV , pages 58795887, 2017. Mathilde Caron, Piotr Bojanowski, Armand Joulin, and Matthijs Douze. Deep clustering for unsupervised learning of visual features. In ECCV , pages 132149, 2018. Chih-Chung Hsu and Chia-Wen Lin. Cnn-based joint clustering and representation learning with feature drift compensation for large-scale image data. IEEE Trans Multimedia , 20(2):421429, 2018. Philip Haeusser, Johannes Plapp, Vladimir Golkov, Elie Aljalbout, and Daniel Cremers. Associative deep clustering: Training a classification network with no labels. In GCPR , pages 1832, 2018. Thiago VM Souza and Cleber Zanchettin. Improving deep image clustering with spatial transformer layers. arXiv preprint arXiv:1902.05401 , 2019. Oliver Nina, Jamison Moody, and Clarissa Milligan.
A decoder-free approach for unsupervised clustering and manifold learning with random triplet mining. In ICCV , pages 00, 2019. Xu Ji, Joo F Henriques, and Andrea Vedaldi. Invariant information clustering for unsupervised image classification and segmentation. In ICCV , pages 98659874, 2019. Jianlong Wu, Keyu Long, Fei Wang, Chen Qian, Cheng Li, Zhouchen Lin, and Hongbin Zha. Deep comprehensive correlation mining for image clustering. In ICCV , pages 81508159, 2019. Guy Shiran and Daphna Weinshall. Multi-modal deep clustering: Unsupervised partitioning of images. arXiv preprint arXiv:1912.02678 , 2019. Wouter Van Gansbeke, Simon Vandenhende, Stamatios Georgoulis, Marc Proesmans, and Luc Van Gool. SCAN: Learning to classify images without labels. In ECCV , 2020. Huasong Zhong, Chong Chen, Zhongming Jin, and Xian-Sheng Hua. Deep robust clustering by contrastive learning. arXiv preprint arXiv:2008.03030 , 2020. Jiabo Huang, Shaogang Gong, and Xiatian Zhu. Deep semantic clustering by partition confidence maximisation. In CVPR , pages 88498858, 2020. Zhuxi Jiang, Yin Zheng, Huachun Tan, Bangsheng Tang, and Hanning Zhou. Variational deep embedding: An unsupervised and generative approach to clustering. arXiv preprint arXiv:1611.05148 , 2016. Nat Dilokthanakul, Pedro AM Mediano, Marta Garnelo, Matthew CH Lee, Hugh Salimbeni, Kai Arulkumaran, and Murray Shanahan. Deep unsupervised clustering with gaussian mixture variational autoencoders. arXiv preprint arXiv:1611.02648 , 2016. Jhosimar Arias Figueroa and Adn Ramrez Rivera. Is simple better?: Revisiting simple generative models for unsupervised clustering. In NeurIPS , 2017. Xiaopeng Li, Zhourong Chen, Leonard KM Poon, and Nevin L Zhang. Learning latent superstructures in variational autoencoders for deep multidimensional clustering. arXiv preprint arXiv:1803.05206 , 2018. Matthew Willetts, Stephen Roberts, and Chris Holmes. Disentangling to cluster: Gaussian mixture variational ladder autoencoders. arXiv preprint arXiv:1909.11501 , 2019. Vignesh Prasad, Dipanjan Das, and Brojeshwar Bhowmick. Variational clustering: Leveraging variational autoencoders for image clustering. arXiv preprint arXiv:2005.04613 , 2020. Lele Cao, Sahar Asadi, Wenfei Zhu, Christian Schmidli, and Michael Sjberg. Simple, scalable, and stable variational deep clustering. arXiv preprint arXiv:2005.08047 , 2020. Lin Yang, Wentao Fan, and Nizar Bouguila. Deep clustering analysis via dual variational autoencoder with spherical latent embeddings. IEEE T NEUR NET LEAR , pages 110, 2021. He Ma. Achieving deep clustering through the use of variational autoencoders and similarity-based loss. Mathematical Biosciences and Engineering , 19(10):1034410360, 2022. Jost Tobias Springenberg. Unsupervised and semi-supervised learning with categorical generative adversarial networks. arXiv preprint arXiv:1511.06390 , 2015. Warith Harchaoui, Pierre-Alexandre Mattei, and Charles Bouveyron. Deep adversarial gaussian mixture auto-encoder for clustering. ICLR , 2017. Pan Zhou, Yunqing Hou, and Jiashi Feng. Deep adversarial subspace clustering. In CVPR , pages 15961604, 2018. Kamran Ghasedi, Xiaoqian Wang, Cheng Deng, and Heng Huang. Balanced self-paced learning for generative adversarial clustering network. In CVPR , pages 43914400, 2019. Sudipto Mukherjee, Himanshu Asnani, Eugene Lin, and Sreeram Kannan. Clustergan: Latent space clustering in generative adversarial networks. In AAAI , volume 33, pages 46104617, 2019. Nairouz Mrabah, Mohamed Bouguessa, and Riadh Ksantini.
Adversarial deep embedded clustering: on a better trade-off between feature randomness and feature drift. TKDE , 2020. Xiaojiang Yang, Junchi Yan, Yu Cheng, and Yizhe Zhang. Learning deep generative clustering via mutual information maximization. IEEE T NEUR NET LEAR , pages 113, 2022. Xiaotong Zhang, Han Liu, Qimai Li, and Xiao-Ming Wu. Attributed graph clustering via adaptive graph convolution. arXiv preprint arXiv:1906.01210 , 2019. Zhiqiang Tao, Hongfu Liu, Jun Li, Zhaowen Wang, and Yun Fu. Adversarial graph embedding for ensemble clustering. In IJCAI , pages 35623568, 2019. Danyang Zhu, Shudong Chen, Xiuhui Ma, and Rong Du. Adaptive graph convolution using heat kernel for attributed graph clustering. Applied Sciences , 10(4):1473, 2020. Deyu Bo, Xiao Wang, Chuan Shi, Meiqi Zhu, Emiao Lu, and Peng Cui. Structural deep clustering network. In WWW , pages 14001410, 2020. Yoshua Bengio, Aaron Courville, and Pascal Vincent. Representation learning: A review and new perspectives. TPAMI , 35(8):17981828, 2013. Geoffrey E Hinton and Ruslan R Salakhutdinov. Reducing the dimensionality of data with neural networks. Science , 313(5786):504507, 2006. J. MacQueen. Some methods for classification and analysis of multivariate observations. In 5th Berkeley Symposium on Mathematical Statistics and Probability , pages 281297, 1967. Alex Rodriguez and Alessandro Laio. Clustering by fast search and find of density peaks. Science , 344(6191):14921496, 2014. Laurens Van Der Maaten. Learning a parametric embedding by preserving local structure. JMLR , 5:384391, 2009. M Pawan Kumar, Benjamin Packer, and Daphne Koller. Self-paced learning for latent variable models. In NeurIPS , pages 11891197, 2010. Richard Souvenir and Robert Pless. Manifold clustering. In ICCV , volume 1, pages 648653, 2005. Ehsan Elhamifar and Ren Vidal. Sparse manifold clustering and embedding. In NeurIPS , pages 5563, 2011. Vinod Nair and Geoffrey E Hinton. Rectified linear units improve restricted boltzmann machines. In ICML , 2010. Aysegul Dundar, Jonghoon Jin, and Eugenio Culurciello. Convolutional clustering for unsupervised learning. arXiv preprint arXiv:1511.06241 , 2015. Stephen C Johnson. Hierarchical clustering schemes. Psychometrika , 32(3):241254, 1967. Max Jaderberg, Karen Simonyan, Andrew Zisserman, et al. Spatial transformer networks. In NeurIPS , pages 20172025, 2015. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In NeurIPS , volume 25, pages 10971105, 2012. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. Commun. ACM , 60(6):8490, 2017. Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 , 2014. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In CVPR , pages 248255, 2009. Linli Xu, James Neufeld, Bryce Larson, and Dale Schuurmans. Maximum margin clustering. NeurIPS , 17:15371544, 2004. Corinna Cortes and Vladimir Vapnik. Support-vector networks. Mach Learn , 20(3):273297, 1995. Weihua Hu, Takeru Miyato, Seiya Tokui, Eiichi Matsumoto, and Masashi Sugiyama. Learning discrete representations via information maximizing self-augmented training. In ICML , pages 15581567, 2017. R Devon Hjelm, Alex Fedorov, Samuel Lavoie-Marchildon, Karan Grewal, Phil Bachman, Adam Trischler, and Yoshua Bengio.
Learning deep representations by mutual information estimation and maximization. arXiv preprint arXiv:1808.06670 , 2018. Sumit Chopra, Raia Hadsell, and Yann LeCun. Learning a similarity metric discriminatively, with application to face verification. In CVPR , volume 1, pages 539546, 2005. Florian Schroff, Dmitry Kalenichenko, and James Philbin. Facenet: A unified embedding for face recognition and clustering. In CVPR , pages 815823, 2015. Carl Doersch, Abhinav Gupta, and Alexei A Efros. Unsupervised visual representation learning by context prediction. In ICCV , pages 1422 1430, 2015. Deepak Pathak, Philipp Krahenbuhl, Jeff Donahue, Trevor Darrell, and Alexei A Efros. Context encoders: Feature learning by inpainting. In CVPR , pages 25362544, 2016. Richard Zhang, Phillip Isola, and Alexei A Efros. Colorful image colorization. In ECCV , pages 649666, 2016. Mehdi Noroozi and Paolo Favaro. Unsupervised learning of visual representations by solving jigsaw puzzles. In ECCV , pages 6984, 2016. Spyros Gidaris, Praveer Singh, and Nikos Komodakis. Unsupervised representation learning by predicting image rotations. arXiv preprint arXiv:1803.07728 , 2018. Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 , 2013. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In NeurIPS , pages 26722680, 2014. Aaron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. arXiv preprint arXiv:1601.06759 , 2016. Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. Infogan: Interpretable representation learning by information maximizing generative adversarial nets. In NeurIPS , pages 21722180, 2016. Anh Nguyen, Jeff Clune, Yoshua Bengio, Alexey Dosovitskiy, and Jason Yosinski. Plug & play generative networks: Conditional iterative generation of images in latent space. In CVPR , pages 44674477, 2017. M Ehsan Abbasnejad, Anthony Dick, and Anton van den Hengel. Infinite variational autoencoder for semi-supervised learning. In CVPR , pages 58885897, 2017. Durk P Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. Semi-supervised learning with deep generative models. In NeurIPS , pages 35813589, 2014. Lars Maale, Casper Kaae Snderby, Sren Kaae Snderby, and Ole Winther. Auxiliary deep generative models. arXiv preprint arXiv:1602.05473 , 2016. Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training gans. In NeurIPS , pages 22342242, 2016. Alireza Makhzani, Jonathon Shlens, Navdeep Jaitly, Ian Goodfellow, and Brendan Frey. Adversarial autoencoders. arXiv preprint arXiv:1511.05644 , 2015. Alexey Dosovitskiy and Thomas Brox. Generating images with perceptual similarity metrics based on deep networks. In NeurIPS , pages 658666, 2016. Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434 , 2015. Gang Chen. Deep learning with nonparametric clustering. arXiv preprint arXiv:1501.03084 , 2015. Matthew D Hoffman and Matthew J Johnson. Elbo surgery: yet another way to carve up the variational evidence lower bound. In NeurIPS , 2016. Geoffrey J McLachlan, Sharon X Lee, and Suren I Rathnayake. Finite mixture models. ANNU REV STAT APPL , 6:355378, 2000. Matthew J Beal. 
Variational algorithms for approximate Bayesian inference . PhD thesis, UCL (University College London), 2003. Nevin L Zhang. Hierarchical latent class models for cluster analysis. JMLR , 5(6):697723, 2004. Daphne Koller and Nir Friedman. Probabilistic graphical models: principles and techniques . MIT press, 2009. Linxiao Yang, Ngai-Man Cheung, Jiaying Li, and Jun Fang. Deep clustering by gaussian mixture variational autoencoders with graph embedding. In ICCV , pages 64406449, 2019. Eric Jang, Shixiang Gu, and Ben Poole. Categorical reparameterization with gumbel-softmax. arXiv preprint arXiv:1611.01144 , 2016. Chris J Maddison, Andriy Mnih, and Yee Whye Teh. The concrete distribution: A continuous relaxation of discrete random variables. arXiv preprint arXiv:1611.00712 , 2016. Shengjia Zhao, Jiaming Song, and Stefano Ermon. Learning hierarchical features from generative models. arXiv preprint arXiv:1702.08396 , 2017. Andreas Krause, Pietro Perona, and Ryan G Gomes. Discriminative clustering by regularized information maximization. In NeurIPS , pages 775783, 2010. Yves Grandvalet and Yoshua Bengio. Semi-supervised learning by entropy minimization. In NeurIPS , pages 529536, 2005. Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, and Pierre-Antoine Manzagol. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. JMLR , 11(12):33713408, 2010. David Berthelot, Colin Raffel, Aurko Roy, and Ian Goodfellow. Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543 , 2018. Uri Shaham, Kelly Stanton, Henry Li, Boaz Nadler, Ronen Basri, and Yuval Kluger. Spectralnet: Spectral clustering using deep neural networks. arXiv preprint arXiv:1801.01587 , 2018. Raia Hadsell, Sumit Chopra, and Yann LeCun. Dimensionality reduction by learning an invariant mapping. In CVPR , volume 2, pages 17351742, 2006. Uri Shaham and Roy R Lederman. Learning by coincidence: Siamese networks and common variable learning. Pattern Recognition , 74:5263, 2018. Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. The graph neural network model. IEEE T NEURAL NETWOR , 20(1):6180, 2008. David Duvenaud, Dougal Maclaurin, Jorge Aguilera-Iparraguirre, Rafael Gómez-Bombarelli, Timothy Hirzel, Alán Aspuru-Guzik, and Ryan P Adams. Convolutional networks on graphs for learning molecular fingerprints. arXiv preprint arXiv:1509.09292 , 2015. Mohamed A Khamsi and William A Kirk. An introduction to metric spaces and fixed point theory , volume 53. John Wiley & Sons, 2011. Jie Zhou, Ganqu Cui, Shengding Hu, Zhengyan Zhang, Cheng Yang, Zhiyuan Liu, Lifeng Wang, Changcheng Li, and Maosong Sun. Graph neural networks: A review of methods and applications. AI Open , 1:5781, 2020. Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907 , 2016. Fei Tian, Bin Gao, Qing Cui, Enhong Chen, and Tie-Yan Liu. Learning deep representations for graph clustering. In AAAI , 2014. Ming Shao, Sheng Li, Zhengming Ding, and Yun Fu. Deep linear coding for fast graph clustering. In IJCAI , 2015. Peng Cui, Xiao Wang, Jian Pei, and Wenwu Zhu. A survey on network embedding. TKDE , 31(5):833852, 2018. Daokun Zhang, Jie Yin, Xingquan Zhu, and Chengqi Zhang. Network representation learning: A survey. IEEE Trans. Big Data , 6(1):328, 2018. Hongyun Cai, Vincent W Zheng, and Kevin Chen-Chuan Chang.
A comprehensive survey of graph embedding: Problems, techniques, and applications. TKDE , 30(9):16161637, 2018. Zonghan Wu, Shirui Pan, Fengwen Chen, Guodong Long, Chengqi Zhang, and S Yu Philip. A comprehensive survey on graph neural networks. IEEE Trans. Neural Netw. Learn. Syst. , 32(1):424, 2020. Shuicheng Yan, Dong Xu, Benyu Zhang, Hong-Jiang Zhang, Qiang Yang, and Stephen Lin. Graph embedding and extensions: A general framework for dimensionality reduction. TPAMI , 29(1):4051, 2006. Ana LN Fred and Anil K Jain. Combining multiple clusterings using evidence accumulation. TPAMI , 27(6):835850, 2005. Chun Wang, Shirui Pan, Ruiqi Hu, Guodong Long, Jing Jiang, and Chengqi Zhang. Attributed graph clustering: A deep attentional embedding approach. arXiv preprint arXiv:1906.06532 , 2019. Yazhou Ren, Kangrong Hu, Xinyi Dai, Lili Pan, Steven CH Hoi, and Zenglin Xu. Semi-supervised deep embedded clustering. Neurocomputing , 325:121130, 2019. Joseph Enguehard, Peter OHalloran, and Ali Gholipour. Semisupervised learning with deep embedded clustering for image classification and segmentation. IEEE Access , 7:1109311104, 2019. Hongjing Zhang, Sugato Basu, and Ian Davidson. A framework for deep constrained clustering-algorithms and advances. In ECML-PKDD , pages 5772, 2019. Ankita Shukla, Gullal S Cheema, and Saket Anand. Semi-supervised clustering with neural networks. In BigMM , pages 152161. IEEE, 2020. Olivier Chapelle and Alexander Zien. Semi-supervised classification by low density separation. In AISTATS , volume 2005, pages 5764. Citeseer, 2005. Kaizhu Huang, Zenglin Xu, Irwin King, and Michael R Lyu. Semisupervised learning from general unlabeled data. In ICDM , pages 273 282. IEEE, 2008. Zenglin Xu, Irwin King, Michael Rung-Tsong Lyu, and Rong Jin. Discriminative semi-supervised feature selection via manifold regularization. IEEE T NEURAL NETWOR , 21(7):10331047, 2010. Yi Huang, Dong Xu, and Feiping Nie. Semi-supervised dimension reduction using trace ratio criterion. IEEE T NEUR NET LEAR , 23(3):519526, 2012. Sugato Basu, Arindam Banerjee, and Raymond Mooney. Semisupervised clustering by seeding. In ICML , 2002. Nizar Grira, Michel Crucianu, and Nozha Boujemaa. Unsupervised and semi-supervised clustering: a brief survey. A review of machine learning techniques for processing multimedia content , 1:916, 2004. Kamalika Chaudhuri, Sham M Kakade, Karen Livescu, and Karthik Sridharan. Multi-view clusieee t neur net lear. In ICML , pages 129 136, 2009. Yeqing Li, Feiping Nie, Heng Huang, and Junzhou Huang. Large-scale multi-view spectral clustering via bipartite graph. In AAAI , 2015. Xiaochun Cao, Changqing Zhang, Huazhu Fu, Si Liu, and Hua Zhang. Diversity-induced multi-view subspace clustering. In CVPR , pages 586 594, 2015. Feiping Nie, Jing Li, and Xuelong Li. Self-weighted multi-view clustering with multiple graphs. In IJCAI , pages 25642570, 2017. Changqing Zhang, Qinghua Hu, Huazhu Fu, Pengfei Zhu, and Xiaochun Cao. Latent multi-view subspace clustering. In CVPR , pages 4279 4287, 2017. Zheng Zhang, Li Liu, Fumin Shen, Heng Tao Shen, and Ling Shao. Binary multi-view clustering. TPAMI , 41(7):17741782, 2018. Handong Zhao, Zhengming Ding, and Yun Fu. Multi-view clustering via deep matrix factorization. In AAAI , 2017. Maria Brbi c and Ivica Kopriva. Multi-view low-rank sparse subspace clustering. Pattern Recognition , 73:247258, 2018. Yazhou Ren, Shudong Huang, Peng Zhao, Minghao Han, and Zenglin Xu. Self-paced and auto-weighted multi-view clustering. Neurocomputing , 383:248256, 2019. 
Chang Xu, Dacheng Tao, and Chao Xu. Multi-view self-paced learning for clustering. In IJCAI , 2015. Jie Xu, Yazhou Ren, Guofeng Li, Lili Pan, Ce Zhu, and Zenglin Xu. Deep embedded multi-view clustering with collaborative training. Information Sciences , 573:279290, 2021. Shaohua Fan, Xiao Wang, Chuan Shi, Emiao Lu, Ken Lin, and Bai Wang. One2multi graph autoencoder for multi-view graph clustering. InWWW , pages 30703076, 2020. Xiaoliang Tang, Xuan Tang, Wanli Wang, Li Fang, and Xian Wei. Deep multi-view sparse subspace clustering. In ICNCC , pages 115 119, 2018. Ruihuang Li, Changqing Zhang, Huazhu Fu, Xi Peng, Tianyi Zhou, and Qinghua Hu. Reciprocal multi-layer subspace learning for multi-view clustering. In ICCV , pages 81728180, 2019. Pengfei Zhu, Binyuan Hui, Changqing Zhang, Dawei Du, Longyin Wen, and Qinghua Hu. Multi-view deep subspace clustering networks. arXiv preprint arXiv:1908.01978 , 2019. Zhaoyang Li, Qianqian Wang, Zhiqiang Tao, Quanxue Gao, and Zhaohua Yang. Deep adversarial multi-view clustering network. In IJCAI , pages 29522958, 2019. Ming Yin, Weitian Huang, and Junbin Gao. Shared generative latent representation learning for multi-view clustering. In AAAI , pages 6688 6695, 2020. Jie Xu, Yazhou Ren, Huayi Tang, Xiaorong Pu, Xiaofeng Zhu, Ming Zeng, and Lifang He. Multi-V AE: Learning disentangled view-common and view-peculiar visual representations for multi-view clustering. In ICCV , pages 92349243, 2021. Fangfei Lin, Bing Bai, Kun Bai, Yazhou Ren, Peng Zhao, and Zenglin Xu. Contrastive multi-view hyperbolic hierarchical clustering. In IJCAI , pages 32503256, 2022. Jie Xu, Huayi Tang, Yazhou Ren, Liang Peng, Xiaofeng Zhu, and Lifang He. Multi-level feature learning for contrastive multi-view clustering. InCVPR , pages 1605116060, 2022. Jie Xu, Chao Li, Yazhou Ren, Liang Peng, Yujie Mo, Xiaoshuang Shi, and Xiaofeng Zhu. Deep incomplete multi-view clustering via mining cluster complementarity. In AAAI , pages 87618769, 2022. Muhammad Raza Khan and Joshua E Blumenstock. Multi-GCN: Graph convolutional networks for multi-view networks, with applications to global poverty. In AAAI , volume 33, pages 606613, 2019. Jiafeng Cheng, Qianqian Wang, Zhiqiang Tao, De-Yan Xie, and Quanxue Gao. Multi-view attribute graph convolution networks for clustering. In IJCAI , pages 29732979, 2020. Yiming Wang, Dongxia Chang, Zhiqiang Fu, and Yao Zhao. Consistent multiple graph embedding for multi-view clustering. arXiv preprint arXiv:2105.04880 , 2021. Zongmo Huang, Yazhou Ren, Xiaorong Pu, and Lifang He. Deep embedded multi-view clustering via jointly learning latent representations and graphs. arXiv preprint arXiv:2205.03803 , 2022. Ren Vidal. Subspace clustering. IEEE Signal Processing Magazine , 28(2):5268, 2011. Andrew Y Ng, Michael I Jordan, and Yair Weiss. On spectral clustering: Analysis and an algorithm. In NeurIPS , pages 849856, 2002. Jianbo Shi and Jitendra Malik. Normalized cuts and image segmentation. Departmental Papers (CIS) , page 107, 2000. Ehsan Elhamifar and Ren Vidal. Sparse subspace clustering. In CVPR , pages 27902797. IEEE, 2009. Feiping Nie, Hua Wang, Heng Huang, and Chris Ding. Unsupervised and semi-supervised learning via l1-norm graph. In ICCV , pages 2268 2273, 2011. Can-Yi Lu, Hai Min, Zhong-Qiu Zhao, Lin Zhu, De-Shuang Huang, and Shuicheng Yan. Robust and efficient subspace segmentation via least squares regression. In ECCV , pages 347360, 2012. Ehsan Elhamifar and Rene Vidal. Sparse subspace clustering: Algorithm, theory, and applications. 
TPAMI , 35(11):27652781, 2013. Guangcan Liu, Zhouchen Lin, Shuicheng Yan, Ju Sun, Yong Yu, and Yi Ma. Robust recovery of subspace structures by low-rank representation. TPAMI , 35(1):171184, 2012. Jiashi Feng, Zhouchen Lin, Huan Xu, and Shuicheng Yan. Robust subspace segmentation with block-diagonal prior. In CVPR , pages 38183825, 2014. Xi Peng, Zhang Yi, and Huajin Tang. Robust subspace clustering via thresholding ridge regression. In AAAI , 2015. Changqing Zhang, Huazhu Fu, Si Liu, Guangcan Liu, and Xiaochun Cao. Low-rank tensor constrained multiview subspace clustering. In ICCV , pages 15821590, 2015. Chang Xu, Dacheng Tao, and Chao Xu. A survey on multi-view learning. arXiv preprint arXiv:1304.5634 , 2013. Theodore Wilbur Anderson. An introduction to multivariate statistical analysis. Technical report, Wiley New York, 1962. Galen Andrew, Raman Arora, Jeff Bilmes, and Karen Livescu. Deep canonical correlation analysis. In ICML , pages 12471255, 2013. Weiran Wang, Raman Arora, Karen Livescu, and Jeff Bilmes. On deep multi-view representation learning. In ICML , pages 10831092, 2015. 20 Meng Qu, Jian Tang, Jingbo Shang, Xiang Ren, Ming Zhang, and Jiawei Han. An attention-based collaboration framework for multi-view network representation learning. In CIKM , pages 17671776, 2017. Satu Elisa Schaeffer. Graph clustering. Computer science review , 1(1):2764, 2007. Seongjun Yun, Minbyul Jeong, Raehyun Kim, Jaewoo Kang, and Hyunwoo J Kim. Graph transformer networks. In NeurIPS , pages 1198311993, 2019. Hugh Perkins and Yi Yang. Dialog intent induction with deep multiview clustering. arXiv preprint arXiv:1908.11487 , 2019. Mahdi Abavisani and Vishal M Patel. Deep multimodal subspace clustering networks. IEEE J-STSP , 12(6):16011614, 2018. Di Hu, Feiping Nie, and Xuelong Li. Deep multimodal clustering for unsupervised audiovisual learning. In CVPR , pages 92489257, 2019. Sinno Jialin Pan and Qiang Yang. A survey on transfer learning. KDE , 22(10):13451359, 2010. Muhammad Ghifary, W Bastiaan Kleijn, Mengjie Zhang, David Balduzzi, and Wen Li. Deep reconstruction-classification networks for unsupervised domain adaptation. In ECCV , pages 597613, 2016. Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. How transferable are features in deep neural networks? In NeurIPS , pages 33203328, 2014. Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In CVPR , pages 19, 2015. Muhammad Ghifary, W Bastiaan Kleijn, and Mengjie Zhang. Domain adaptive neural networks for object recognition. In PRICAI , pages 898 904, 2014. Arthur Gretton, Karsten M Borgwardt, Malte J Rasch, Bernhard Schlkopf, and Alexander Smola. A iciarst. JMLR , 13(1):723773, 2012. Mingsheng Long, Yue Cao, Jianmin Wang, and Michael Jordan. Learning transferable features with deep adaptation networks. In ICML , pages 97105, 2015. Mingsheng Long, Han Zhu, Jianmin Wang, and Michael I Jordan. Deep transfer learning with joint adaptation networks. In ICML , pages 2208 2217, 2017. Hongliang Yan, Yukang Ding, Peihua Li, Qilong Wang, Yong Xu, and Wangmeng Zuo. Mind the class weight bias: Weighted maximum mean discrepancy for unsupervised domain adaptation. In CVPR , pages 2272 2281, 2017. Mingsheng Long, Han Zhu, Jianmin Wang, and Michael I Jordan. Unsupervised domain adaptation with residual transfer networks. In NeurIPS , pages 136144, 2016. 
Hemanth Venkateswara, Jose Eusebio, Shayok Chakraborty, and Sethuraman Panchanathan. Deep hashing network for unsupervised domain adaptation. In CVPR , pages 50185027, 2017. Chen-Yu Lee, Tanmay Batra, Mohammad Haris Baig, and Daniel Ulbricht. Sliced wasserstein discrepancy for unsupervised domain adaptation. In CVPR , pages 1028510295, 2019. Baochen Sun, Jiashi Feng, and Kate Saenko. Correlation alignment for unsupervised domain adaptation. In Domain Adaptation in Computer Vision Applications , pages 153171. Springer, 2017. Chao Chen, Zhihang Fu, Zhihong Chen, Sheng Jin, Zhaowei Cheng, Xinyu Jin, and Xian-Sheng Hua. Homm: Higher-order moment matching for unsupervised domain adaptation. In AAAI , volume 34, pages 34223429, 2020. Guoliang Kang, Lu Jiang, Yi Yang, and Alexander G Hauptmann. Contrastive adaptation network for unsupervised domain adaptation. In CVPR , pages 48934902, 2019. Lanqing Hu, Meina Kan, Shiguang Shan, and Xilin Chen. Unsupervised domain adaptation with hierarchical gradient synchronization. In CVPR , pages 40434052, 2020. Mengxue Li, Yi-Ming Zhai, You-Wei Luo, Peng-Fei Ge, and ChuanXian Ren. Enhanced transport distance for unsupervised domain adaptation. In CVPR , pages 1393613944, 2020. Renjun Xu, Pelen Liu, Liyan Wang, Chao Chen, and Jindong Wang. Reliable weighted optimal transport for unsupervised domain adaptation. InCVPR , pages 43944403, 2020. Shai Ben-David, John Blitzer, Koby Crammer, and Fernando Pereira. Analysis of representations for domain adaptation. NeurIPS , pages 137 144, 2006. Hui Tang, Ke Chen, and Kui Jia. Unsupervised domain adaptation via structurally regularized deep clustering. In CVPR , pages 87258735, 2020. Qian Wang and Toby Breckon. Unsupervised domain adaptation via structured prediction based selective pseudo-labeling. In AAAI , volume 34, pages 62436250, 2020. Xiaofei He and Partha Niyogi. Locality preserving projections. In NeurIPS , 2003. Kaiyang Zhou, Yongxin Yang, Yu Qiao, and Tao Xiang. Domain adaptive ensemble learning. IEEE T IMAGE PROCESS , 30:80088018, 2021. Viraj Prabhu, Shivam Khare, Deeksha Kartik, and Judy Hoffman. Sentry: Selective entropy optimization via committee consistency for unsupervised domain adaptation. In ICCV , pages 85588567, 2021. Xiang Jiang, Qicheng Lao, Stan Matwin, and Mohammad Havaei. Implicit class-conditioned domain alignment for unsupervised domain adaptation. In ICML , pages 48164827, 2020. Shiqi Yang, Yaxing Wang, Joost van de Weijer, Luis Herranz, and Shangling Jui. Unsupervised domain adaptation without source data by casting a bait. arXiv preprint arXiv:2010.12427 , 1(2):3, 2020. Jian Liang, Dapeng Hu, and Jiashi Feng. Do we really need to access the source data? source hypothesis transfer for unsupervised domain adaptation. In ICML , pages 60286039, 2020. Jian Liang, Dapeng Hu, Yunbo Wang, Ran He, and Jiashi Feng. Source data-absent unsupervised domain adaptation through hypothesis transfer and labeling transfer. IEEE T PATTERN ANAL , 2021. Song Tang, Yan Yang, Zhiyuan Ma, Norman Hendrich, Fanyu Zeng, Shuzhi Sam Ge, Changshui Zhang, and Jianwei Zhang. Nearest neighborhood-based deep clustering for source data-absent unsupervised domain adaptation. arXiv preprint arXiv:2107.12585 , 2021. Ming-Yu Liu and Oncel Tuzel. Coupled generative adversarial networks. In NeurIPS , pages 469477, 2016. Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, Franois Laviolette, Mario Marchand, and Victor Lempitsky. Domain-adversarial training of neural networks. 
JMLR , 17(1):20962030, 2016. Ming-Yu Liu, Thomas Breuel, and Jan Kautz. Unsupervised image-toimage translation networks. In NeurIPS , pages 700708, 2017. Eric Tzeng, Judy Hoffman, Kate Saenko, and Trevor Darrell. Adversarial discriminative domain adaptation. In CVPR , pages 71677176, 2017. Konstantinos Bousmalis, Nathan Silberman, David Dohan, Dumitru Erhan, and Dilip Krishnan. Unsupervised pixel-level domain adaptation with generative adversarial networks. In CVPR , pages 37223731, 2017. Swami Sankaranarayanan, Yogesh Balaji, Carlos D Castillo, and Rama Chellappa. Generate to adapt: Aligning domains using generative adversarial networks. In CVPR , pages 85038512, 2018. Pedro O Pinheiro. Unsupervised domain adaptation with similarity learning. In CVPR , pages 80048013, 2018. Zhongyi Pei, Zhangjie Cao, Mingsheng Long, and Jianmin Wang. Multi-adversarial domain adaptation. arXiv preprint arXiv:1809.02176 , 2018. Riccardo V olpi, Pietro Morerio, Silvio Savarese, and Vittorio Murino. Adversarial feature augmentation for unsupervised domain adaptation. InCVPR , pages 54955504, 2018. Judy Hoffman, Eric Tzeng, Taesung Park, Jun-Yan Zhu, Phillip Isola, Kate Saenko, Alexei Efros, and Trevor Darrell. Cycada: Cycleconsistent adversarial domain adaptation. In ICML , pages 19891998, 2018. Yabin Zhang, Hui Tang, Kui Jia, and Mingkui Tan. Domain-symmetric networks for adversarial domain adaptation. In CVPR , pages 5031 5040, 2019. Issam H Laradji and Reza Babanezhad. M-adda: Unsupervised domain adaptation with deep metric learning. In Domain Adaptation for Visual Understanding , pages 1731. Springer, 2020. Shen Yan, Huan Song, Nanxiang Li, Lincan Zou, and Liu Ren. Improve unsupervised domain adaptation with mixup training. arXiv preprint arXiv:2001.00677 , 2020. Rui Li, Qianfen Jiao, Wenming Cao, Hau-San Wong, and Si Wu. Model adaptation: Unsupervised domain adaptation without source data. In CVPR , pages 96419650, 2020. Minghao Xu, Jian Zhang, Bingbing Ni, Teng Li, Chengjie Wang, Qi Tian, and Wenjun Zhang. Adversarial domain adaptation with domain mixup. In AAAI , volume 34, pages 65026509, 2020. Yu Sun, Eric Tzeng, Trevor Darrell, and Alexei A Efros. Unsupervised domain adaptation through self-supervision. arXiv preprint arXiv:1909.11825 , 2019. |
When does return-conditioned supervised learning work for offline reinforcement learning?

David Brandfonbrener (New York University, [email protected]), Alberto Bietti (New York University), Jacob Buckman (MILA), Romain Laroche (Microsoft Research), Joan Bruna (New York University)

Abstract

Several recent works have proposed a class of algorithms for the offline reinforcement learning (RL) problem that we will refer to as return-conditioned supervised learning (RCSL). RCSL algorithms learn the distribution of actions conditioned on both the state and the return of the trajectory. Then they define a policy by conditioning on achieving high return. In this paper, we provide a rigorous study of the capabilities and limitations of RCSL, something which is crucially missing in previous work. We find that RCSL returns the optimal policy under a set of assumptions that are stronger than those needed for the more traditional dynamic programming-based algorithms. We provide specific examples of MDPs and datasets that illustrate the necessity of these assumptions and the limits of RCSL. Finally, we present empirical evidence that these limitations will also cause issues in practice by providing illustrative experiments in simple point-mass environments and on datasets from the D4RL benchmark.

1 Introduction

In recent years, deep learning has proven to be an exceptionally powerful generic algorithm for solving supervised learning (SL) tasks. These approaches tend to be stable, and scale well with compute and data. In contrast, deep reinforcement learning algorithms seem to lack these nice properties; results are well known to be sensitive to hyperparameters and difficult to replicate. In spite of this, deep reinforcement learning (RL) has achieved impressive feats, such as defeating human champions at Go. This juxtaposition of success and instability has inspired researchers to explore alternative approaches to reinforcement learning that more closely resemble supervised learning in hopes of making deep RL as well-behaved as deep SL.

One family of algorithms that has garnered great interest recently is return-conditioned supervised learning (RCSL). The core idea of RCSL is to learn the return-conditional distribution of actions in each state, and then define a policy by sampling from the distribution of actions that receive high return. This was first proposed for the online RL setting by work on Upside Down RL [23, 26] and Reward Conditioned Policies. The idea was extended to the offline RL setting using transformers that condition on the entire history of states rather than just the current Markovian state in the Decision Transformer (DT) work [8, 12]. Recent work on RL via Supervised Learning (RvS) unifies and simplifies ideas from these prior works with ideas about goal-conditioned policies.

Importantly, none of this prior work provides theoretical guarantees or analysis of the failure modes of the return-conditioning approach. In contrast, the more established dynamic programming (DP) algorithms for RL are better understood theoretically. This paper attempts to address this gap in understanding, in order to assess when RCSL is a reliable approach for offline RL. Specifically, we answer the following questions:

- What optimality guarantees can we make for RCSL? Under what conditions are they necessary and sufficient?
- In what situations does RCSL fail in theory and in practice?
- How does RCSL relate to other approaches, such as DP and behavior cloning (BC)?

We find that although RCSL does select a near-optimal policy under certain conditions, the necessary assumptions are more strict than those for DP. In particular, RCSL (but not DP) requires nearly deterministic dynamics in the MDP, knowledge of the proper value to condition on, and for the conditioning value to be supported by the distribution of returns in the dataset. We provide simple tabular examples to demonstrate the necessity of these assumptions. The shortcomings of RCSL that we identify in theory are verified empirically with some simple experiments using neural models on ad-hoc example problems as well as benchmark datasets. We conclude that RCSL alone is unlikely to be a general solution for offline RL problems, but does show promise in some specific situations such as deterministic MDPs with high-quality behavior data.

2 Preliminaries

2.1 Setup

We will consider an offline RL setup where we are given a dataset $D$ of trajectories $\tau = (o_1, a_1, r_1, \dots, o_H, a_H, r_H)$ of observations $o_t \in \mathcal{O}$, actions $a_t \in \mathcal{A}$, and rewards $r_t \in [0,1]$ generated by some behavior policy $\beta$ interacting with a finite horizon MDP with horizon $H$. Let $g(\tau) = \sum_{t=1}^{H} r_t$ denote the cumulative return of the trajectory (we will just use $g$ when the trajectory is clear from context), and let $J(\pi) = \mathbb{E}_{\tau \sim \pi}[g(\tau)]$ be the expected return of a policy $\pi$. We then let the state representation $s_t \in \mathcal{S}$ be any function of the history of observations, actions, and rewards up to step $t$ along with $o_t$. To simplify notation in the finite horizon setting, we will sometimes drop the timestep from $s$ to refer to generic states and assume that we can access the timestep from the state representation as $t(s)$. Let $P^\pi$ denote the joint distribution over states, actions, rewards, and returns induced by any policy $\pi$.

In this paper, we focus on the RCSL approach that learns $\pi_\theta$ by return-conditioned supervised learning. Explicitly, at training time this method minimizes the empirical negative log likelihood loss:

$\hat{\mathcal{L}}(\theta) = -\sum_{\tau \in D} \sum_{1 \le t \le H} \log \pi_\theta(a_t \mid s_t, g(\tau)).$  (1)

Then at test time, an algorithm takes the learned policy along with a conditioning function $f(s)$ to define the test-time policy $\pi_f$ as:

$\pi_f(a \mid s) := \pi_\theta(a \mid s, f(s)).$  (2)

Nota bene: the Decision Transformer is captured in this framework by defining the state space so that the state $s_t$ at time $t$ also contains all past $o_{t'}$, $a_{t'}$, and $r_{t'}$ for $t' < t$. In prior work, $f$ is usually chosen to be a constant at the initial state and to decrease with observed reward along a trajectory, which is captured by a state representation that includes the history of rewards.

2.2 The RCSL policy

To better understand the objective, it is useful to first consider its optimum in the case of infinite data. It is clear that our loss function attempts to learn $P^\beta(a \mid s, g)$, where $\beta$ is the behavior policy that generated the data (and recall that $P^\beta$ refers to the distribution over states, actions, and returns induced by $\beta$). Factoring this distribution, we quickly see that the optimal policy $\pi^{\mathrm{RCSL}}_f$ for a specific conditioning function $f$ can be written as:

$\pi^{\mathrm{RCSL}}_f(a \mid s) = P^\beta(a \mid s, f(s)) = \frac{P^\beta(a \mid s)\, P^\beta(f(s) \mid s, a)}{P^\beta(f(s) \mid s)} = \beta(a \mid s)\, \frac{P^\beta(f(s) \mid s, a)}{P^\beta(f(s) \mid s)}.$  (3)

Essentially, the RCSL policy re-weights the behavior based on the distribution of future returns.

Connection to distributional RL. In distributional RL, the distribution of future returns under a policy $\pi$ from state $s$ and action $a$ is defined as:

$G^\pi(s, a) \sim \Big( g = \sum_{t' = t(s)}^{H} r_{t'} \;\Big|\; \pi,\; s_{t(s)} = s,\; a_{t(s)} = a \Big).$
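To make the estimator in equations (1)-(3) concrete, the following minimal Python sketch (our own illustration, not the paper's JAX implementation) fits the return-conditioned action distribution by counting in a tabular problem and then conditions on $f$ at test time. The dataset format and function names are assumptions made for illustration.

    from collections import defaultdict
    import numpy as np

    def fit_rcsl(trajectories):
        """trajectories: list of trajectories, each a list of (s, a, r) tuples.
        Returns the empirical P^beta(a | s, g), which maximizes Eq. (1) in the
        tabular case."""
        counts = defaultdict(lambda: defaultdict(float))
        for traj in trajectories:
            g = sum(r for (_, _, r) in traj)       # trajectory return g(tau)
            for (s, a, _) in traj:
                counts[(s, g)][a] += 1.0
        def pi_theta(s, g):
            c = counts[(s, g)]
            z = sum(c.values())
            # Undefined when the conditioned return has no support at s.
            return {a: n / z for a, n in c.items()} if z > 0 else None
        return pi_theta

    def make_policy(pi_theta, f, seed=0):
        """Test-time policy pi_f(a | s) := pi_theta(a | s, f(s)) from Eq. (2)."""
        rng = np.random.default_rng(seed)
        def act(s):
            dist = pi_theta(s, f(s))
            assert dist is not None, "f(s) is outside the observed return support"
            actions = list(dist)
            return actions[rng.choice(len(actions), p=[dist[a] for a in actions])]
        return act

Note that the final assert makes the return-coverage requirement explicit: whenever $f(s)$ falls outside the returns observed at $s$, the conditional in equation (2) is simply not defined by the data.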
The RCSL policy is precisely proportional to the product of the behavior policy and the density of the distributional Q function of the behavior policy (i.e. $P^\beta(g \mid s, a)$).

2.3 Related work

As noted in the introduction, our work is in direct response to the recent line of literature on RCSL [23, 26, 21, 8, 12, 9]. Specifically, we will focus on the DT and RvS formulations in our experiments since they also focus on the offline RL setting. Note that another recent work introduced the Trajectory Transformer, which does not fall under the RCSL umbrella since it performs planning in the learned model to define a policy.

Another relevant predecessor of RCSL comes from work on goal-based RL. Compared to RCSL, this line of work replaces the target return $g$ in the empirical loss function by a goal state. One instantiation is hindsight experience replay (HER), where each trajectory in the replay buffer is relabeled as if the observed final state was in fact the goal state. Another instance is goal-conditioned supervised learning [GCSL, 13], which provides more careful analysis and guarantees, but the guarantees (1) are not transferable to the return-conditioned setting, (2) assume bounds on $L_\infty$ errors in TV distance instead of dealing with expected loss functions that can be estimated from data, and (3) do not provide analysis of the tightness of the bounds.

Concurrent work [34, 22, 33] also all raise the issue of RCSL in stochastic environments with infinite data, and present some algorithmic solutions. However, none of this work addresses the potentially more fundamental issue of sample complexity that arises from the requirement of return coverage that we discuss in Section 4.

3 When does RCSL find the optimal policy?

We begin by exploring how RCSL behaves with infinite data and a fully expressive policy class. In this setting, classic DP algorithms (e.g. Q-learning) are guaranteed to converge to the optimal policy under coverage assumptions. But we now show that this is not the case for RCSL, which requires additional assumptions for a similar guarantee. Our approach is to first derive a positive result: under certain assumptions, the policy which optimizes the RCSL objective (Section 2.2) is guaranteed to be near-optimal. We then illustrate the limitations of RCSL by providing simple examples that are nonetheless challenging for these methods in order to demonstrate why our assumptions are necessary and that our bound is tight.

Theorem 1 (Alignment with respect to the conditioning function). Consider an MDP, behavior $\beta$, and conditioning function $f$. Assume the following:
1. Return coverage: $P^\beta(g = f(s_1) \mid s_1) \ge \alpha_f$ for all initial states $s_1$.
2. Near determinism: $P(r \ne r(s,a) \text{ or } s' \ne T(s,a) \mid s, a) \le \epsilon$ at all $s, a$ for some functions $T$ and $r$. Note that this does not constrain the stochasticity of the initial state.
3. Consistency of $f$: $f(s) = f(s') + r$ for all $s$.¹

Then

$\mathbb{E}_{s_1}[f(s_1)] - J(\pi^{\mathrm{RCSL}}_f) \le \epsilon \left( \frac{1}{\alpha_f} + 2 \right) H^2.$  (4)

Moreover, there exist problems where the bound is tight up to constant factors.

The proof is in Appendix C.1. Note that the quantity $\mathbb{E}_{s_1}[f(s_1)]$ is specific to the structure of RCSL algorithms and captures the notion that the ideal RCSL policy will be able to reproduce policies of any value when given different conditioning functions (with appropriate data). The theorem immediately yields the following corollaries (with proof in Appendix C.1).

¹ Note this can be exactly enforced (as in prior work) by augmenting the state space to include the cumulative reward observed so far.
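As footnote 1 notes, assumption 3 can be enforced mechanically by tracking the remaining target return. A small sketch of that standard construction follows (the class name is ours; prior RCSL work implements the same decrement-by-reward idea):

    class ReturnToGoConditioner:
        """Enforces f(s) = f(s') + r by tracking the remaining target return:
        fix a target at the initial state and subtract each observed reward."""
        def __init__(self, initial_target):
            self.target = initial_target
        def f(self, state):
            # The running target plays the role of the cumulative-reward-augmented
            # state from footnote 1, so `state` itself is not needed here.
            return self.target
        def observe(self, reward):
            self.target -= reward  # consistency: next target = current - reward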
Corollary 1. Under the assumptions of Theorem 1, there exists a conditioning function $f$ such that

$J(\pi^*) - J(\pi^{\mathrm{RCSL}}_f) \le \epsilon \left( \frac{1}{\alpha_f} + 3 \right) H^2.$  (5)

Corollary 2. If $\alpha_f > 0$, $\epsilon = 0$, and $f(s_1) = V^*(s_1)$ for all initial states $s_1$, then $J(\pi^{\mathrm{RCSL}}_f) = J(\pi^*)$.

The corollaries tell us that in nearly deterministic environments with the proper conditioning functions and data coverage, it is possible for RCSL to recover near-optimal policies. These assumptions are somewhat strong compared to those needed for DP-based approaches, so we will now explain why they are necessary for our analysis.

Tightness. To demonstrate tightness we will consider the simple examples in Figure 1. These MDPs and behavior policies demonstrate tightness in $\epsilon$ and $\alpha_f$ up to constant factors, and provide insight into how stochastic dynamics lead to suboptimal behavior from RCSL algorithms.

Figure 1: Failure modes of RCSL in stochastic environments with infinite data. (a) An example where the bound is tight; $\mathcal{B}$ denotes the Bernoulli distribution. (b) An example where RCSL also has large regret. (c) An example where RCSL also has large regret for any conditioning function.

First, consider the example in Figure 1a with conditioning $f(s_1) = 1$. There is only one possible policy in this case, and it has $J(\pi) = \epsilon$, so that $\mathbb{E}[f(s_1)] - J(\pi) = 1 - \epsilon$. Note that $\alpha_f = \epsilon$, so we have that $\epsilon / \alpha_f = 1$. Thus, the bound is tight in $\epsilon / \alpha_f$. This example shows that the goal of achieving a specific desired return is incompatible with stochastic environments.

This first example is somewhat silly since there is only one action, so the learned policy does not actually suffer any regret. To show that this issue can in fact lead to regret, consider the example in Figure 1b, again with conditioning $f(s_1) = 1$. Then applying the reasoning from Section 2.2,

$\pi^{\mathrm{RCSL}}_f(a_1 \mid s_1) = \beta(a_1 \mid s_1)\, \frac{P^\beta(g = 1 \mid s_1, a_1)}{P^\beta(g = 1 \mid s_1)} = \frac{0.5 \cdot 0}{0.5\,\epsilon} = 0.$  (6)

So we get that $\mathbb{E}[f(s_1)] - J(\pi^{\mathrm{RCSL}}_f) = 1 - \epsilon$, while $\epsilon / \alpha_f = \epsilon / (\epsilon/2) = 2$ (which is on the same order, up to a constant factor). However, in this case the learned policy $\pi^{\mathrm{RCSL}}_f$ suffers substantial regret since the chosen action $a_2$ has substantially lower expected value than $a_1$, by $1 - 2\epsilon$.

The issue in the second example could be resolved by changing the conditioning function so that $f(s_1) = 1 - \epsilon$. Now we will consider the example in Figure 1c, where we will see that there exist cases where the bias of RCSL in stochastic environments can remain regardless of the conditioning function. In this MDP, the only returns that are supported are $g = 0$ or $g = 1$. For $f(s_1) = 1$, plugging in to the formula for $\pi_f$ yields

$\pi^{\mathrm{RCSL}}_f(a_1 \mid s_1) = \beta(a_1 \mid s_1)\, \frac{P^\beta(g = 1 \mid s_1, a_1)}{P^\beta(g = 1 \mid s_1)} = \frac{\epsilon(1-\epsilon)}{\epsilon(1-\epsilon) + (1-\epsilon)\epsilon} = \frac{1}{2}.$  (7)

Thus, $\mathbb{E}[f(s_1)] - J(\pi^{\mathrm{RCSL}}_f) = 1/2$ and $J(\pi^*) - J(\pi^{\mathrm{RCSL}}_f) = 1/2$. This shows that merely changing the conditioning function is not enough to overcome the bias of the RCSL method in stochastic environments (the numeric check below reproduces these calculations).

These examples show that even for MDPs that are $\epsilon$-close to being deterministic, the regret of RCSL can be large. But, in the special case of deterministic MDPs, we find that RCSL can indeed recover the optimal policy. And note that we still allow for stochasticity in the initial states in these deterministic MDPs, which provides a rich setting for problems like robotics that requires generalization over the state space from finite data.
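The following numeric check evaluates equations (6) and (7) directly. The behavior and return probabilities below are our reading of Figures 1b and 1c as reconstructed above; the exact figure values are assumptions.

    # A numeric check of Eqs. (6) and (7).
    eps = 0.01

    def rcsl_policy(beta, p_g1):
        """pi_f(a|s1) = beta(a|s1) * P(g=1|s1,a) / P(g=1|s1), from Eq. (3)."""
        z = sum(beta[a] * p_g1[a] for a in beta)
        return {a: beta[a] * p_g1[a] / z for a in beta}

    # Figure 1b: uniform behavior; a1 never achieves g=1, a2 does with prob eps.
    print(rcsl_policy({"a1": 0.5, "a2": 0.5}, {"a1": 0.0, "a2": eps}))
    # -> {'a1': 0.0, 'a2': 1.0}: conditioning on g=1 selects the worse action.

    # Figure 1c: behavior favors a2; P(g=1|s1,a1)=1-eps, P(g=1|s1,a2)=eps.
    print(rcsl_policy({"a1": eps, "a2": 1 - eps}, {"a1": 1 - eps, "a2": eps}))
    # -> {'a1': 0.5, 'a2': 0.5}: regret 1/2 for any conditioning value.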
In the next section, we will consider more precisely what happens to RCSL algorithms with finite data and limited model classes.

Trajectory stitching. Another issue often discussed in the offline RL literature is the idea of trajectory stitching [31, 8]. Ideally, an offline RL agent can take suboptimal trajectories that overlap and stitch them into a better policy. Clearly, DP-based algorithms can do this, but it is not so clear that RCSL algorithms can. In Appendix B we provide theoretical and empirical evidence that in fact they cannot perform stitching in general, even with infinite data. While this does not directly affect our bounds, the failure to perform stitching is an issue of practical importance for RCSL methods.

4 Sample complexity of RCSL

Now that we have a better sense of what policy RCSL will converge to with infinite data, we can consider how quickly (and under what conditions) it will converge to the policy $\pi_f$ when given finite data and a limited policy class, as will occur in practice. We will do this via a reduction from the regret relative to the infinite-data solution $\pi_f$ to the expected loss function $L$ minimized at training time by RCSL, which is encoded in the following theorem.

Theorem 2 (Reduction of RCSL to SL). Consider any function $f : \mathcal{S} \to \mathbb{R}$ such that the following two assumptions hold:
1. Bounded occupancy mismatch: $\frac{P_{\pi^{\mathrm{RCSL}}_f}(s)}{P^\beta(s)} \le C_f$ for all $s$.
2. Return coverage: $P^\beta(g = f(s) \mid s) \ge \alpha_f$ for all $s$.

Define the expected loss as $L(\theta) = \mathbb{E}_{s \sim P^\beta}\, \mathbb{E}_{g \sim P^\beta(\cdot \mid s)}\big[ D_{KL}\big( P^\beta(\cdot \mid s, g) \,\|\, \pi_\theta(\cdot \mid s, g) \big) \big]$. Then for any estimated RCSL policy $\pi_\theta$ that conditions on $f$ at test time (denoted by $\hat{\pi}_f$), we have that

$J(\pi^{\mathrm{RCSL}}_f) - J(\hat{\pi}_f) \le \frac{C_f}{\alpha_f}\, H^2 \sqrt{2\, L(\theta)}.$  (8)

The proof can be found in Appendix C.3. Note that we require a similar assumption of return coverage as before to ensure we have sufficient data to define $\pi_f$. We also require an assumption on the state occupancy of the idealized policy $\pi_f$ relative to $\beta$. This assumption is needed since the loss $L(\theta)$ is optimized on states sampled from $P^\beta$, but we care about the expected return of the learned policy relative to that of $\pi_f$, which can be written as an expectation over states sampled from $P_{\pi_f}$.

This gives us a reduction to supervised learning, but to take this all the way to a sample complexity bound we need to control the loss $L(\theta)$ from finite samples. Letting $N$ denote the size of the dataset, the following corollary uses standard uniform convergence results from supervised learning to yield finite sample bounds.

Corollary 3 (Sample complexity of RCSL). To get finite data guarantees, add to the above assumptions the assumptions that (1) the policy class $\Pi$ is finite, (2) $|\log \pi_\theta(a \mid s, g) - \log \pi_{\theta'}(a' \mid s', g')| \le c$ for any $(a, s, g, a', s', g')$ and all $\pi_\theta, \pi_{\theta'} \in \Pi$, and (3) the approximation error of $\Pi$ is bounded by $\epsilon_{\mathrm{approx}}$, i.e. $\min_{\theta \in \Pi} L(\theta) \le \epsilon_{\mathrm{approx}}$. Then with probability at least $1 - \delta$,

$J(\pi^{\mathrm{RCSL}}_f) - J(\hat{\pi}_f) \le O\left( \frac{C_f}{\alpha_f}\, H^2 \left( c \left( \frac{\log |\Pi| / \delta}{N} \right)^{1/4} + \sqrt{\epsilon_{\mathrm{approx}}} \right) \right).$  (9)
Note that a full sample complexity bound that competes with the optimal policy can be derived by combining this result with Corollary 1 as follows: 5 Corollary 4 (Sample complexity against the optimal policy) .Under all of the assumptions of Corollary 1 and Corollary 3 we get: J()J(f)O( Cf fH2( c(log||/ N)1/4 +approx) + fH2) . (10) Tightness. To better understand why the dependence on 1/fis tight and potentially exponential in the horizon H, even in deterministic environments, we offer the example in Figure 2. Specifically, we claim that any value of f(s1)where the policy RCSL f prefers the good action a1froms1will require on the order of 10H/2samples in expectation to recover as f2. Figure 2: An example where RCSL has exponential sample complexity in a deterministic environment.To see why this is the case, we consider the MDP illustrated in Figure 2 with horizon H4. The MDP has four states each with two actions. All transitions and rewards are deterministic. The only actions with non-zero reward arer(s2,a1) = 1 andr(s3,a1) = 0.5. The interesting decision is at s1wherea1is better than a2. Note that for any integer 1k < H/ 2, we have that P(g=k|s1,a2) = 0.50.52k= 0.5(0.25)k, while P(g=k|s1,a1) = 0.5(0.1)k. Conditioning on any suchkwill make us more likely to choose the bad action a2froms1. The only way to increase the likelihood of the good action a1froms1ands2is to condition on f(s1)> H/2. Unfortunately for RCSL, the probability of observingg>H/ 2is extremely small, since for any such fwe haveP(g=f(s1))0.5(0.1)H/210H/2. Thus, bothfand the sample complexity of learning for any fthat will yield a policy better than the behavior is exponential in the horizon H. Fundamentally, the problem here is that RCSL uses trajectory-level information instead of performing dynamic programming on individual transitions. But, collecting enough trajectory-level information can take exponentially many samples in the horizon. In contrast, DP merely requires coverage of transitions in the MDP to perform planning and thus avoids this issue of exponential sample complexity. In the next section we will delve deeper into this comparison with DP-based approaches as well as the simple top-% BC baseline. 5 Comparing RCSL with bounds for alternative methods Now that we understand the rate at which we expect RCSL to converge, we briefly present the convergence rates of two baseline methods for comparison. In particular, we will leverage an existing analysis of a DP-based algorithm, and conduct a novel analysis of top-% BC. We find that the sample complexity of RCSL has a similar rate to top-% BC, and is worse than DP due to the potentially exponential dependence on horizon that stems from return coverage. 5.1 Comparison to dynamic programming. We will compare to the state of the art (to our knowledge) bound for a DP-based offline RL algorithm. Namely, we will look at the results of for pessimistic soft policy iteration. Similar results exist for slightly different algorithms or assumptions [7, 30], but we choose this one since it is both the tightest and more closely aligns with the practical actor-critic algorithms that we use for our experiments. Their bound makes the following assumptions about the function class Fand the dataset (lettingTrepresent the Bellman operator for policy ): 1. Realizability: for any policies ,there existsfFwithfTf2 2,P1. 2. Bellman completeness: for any andfFthere existsfFsuch thatfTf2 2,P2. 3. Coverage:P(s,a) P(s,a)Cfor alls,a3. 
2Except for f(s1) = 0 , which will yield a policy substantially worse than the behavior. 3The original paper uses a slightly tighter notion of coverage, but this bound will suffice for our comparison. 6 With these assumptions in place, the sample complexity bound takes the form4: J()J()O( H2( Clog|F|||/ N) +H2 C(1+2)) (11) Note: this is the result for the information-theoretic form of the algorithm that cannot be efficiently implemented. The paper also provides a practical version of the algorithm for which the bound is the same except that the the square root in the first term is replaced with a fifth root. There are several points of comparison with our analysis (specifically, our Corollary 4). The first thing to note is that for RCSL to compete with the optimal policy, we require nearly deterministic dynamics and a priori knowledge of the optimal conditioning function. These assumptions are not required for the DP-based algorithm; this is a critical difference, since it is clear that these conditions often do not hold in practice. Comparing the coverage assumptions, our Cfbecomes nearly equivalent to C. The major difference is that our analysis of RCSL also requires dependence on return coverage 1/f. This is problematic since as seen in Section 4, this return coverage dependence can be exponential in horizon in cases where the state coverage does not depend on horizon. Comparing the approximation error assumptions, we see that the realizability and completeness assumptions required for DP are substantially less intuitive than the standard supervised learning approximation error assumption needed for RCSL. These assumptions are not directly comparable, but intuitively the RCSL approximation error assumption is simpler. Finally, dependence on His the same for both methods and dependence on Ndepends on which version of the DP algorithm we compare to. For the information-theoretic algorithm DP has better dependence on N, but for the practical algorithm RCSL has better dependence. It is not clear whether the dependence on Nin either the RCSL analysis or in the analysis of the practical algorithm from is tight, and it is an interesting direction for future work to resolve this issue. 5.2 Comparison to top-% behavior cloning. The closest algorithm to RCSL is top-% BC, which was introduced as a baseline for Decision Transformers . This algorithm simply sorts all trajectories in the dataset by return and takes the top fraction of trajectories to run behavior cloning (for [0,1]). The most obvious difference between this algorithm and RCSL is that RCSL allows us to plug in different conditioning functions at test time to produce different policies, while top-% BC learns only one policy. However, if we want to achieve high returns, the two algorithms are quite similar. The full statements and proofs of our theoretical results for top-% BC are deferred to Appendix C.5. The results are essentially the same as those for RCSL except for two key modifications: Defining coverage. The first difference in the analysis is the notion of coverage. For RCSL we needed the return distribution to cover the conditioning function f. For top-% BC we instead let g be the 1quantile of the return distribution over trajectories sampled by the behavior and then define coverage as P(gg|s)for alls. This modification is somewhat minor. Sample size and generalization. 
The main difference between RCSL and top-% BC is that the RCSL algorithm attempts to transfer information gained from low-return trajectories while the top% BC algorithm simply throws those trajectories away. This shows up in the formal bounds since for a dataset of size Nthe top-% BC algorithm only uses Nsamples while RCSL uses all N. Depending on the data distribution, competing with the optimal policy may require setting very close to zero (exponentially small in H) yielding poor sample complexity. These bounds suggest that RCSL can use generalization across returns to provide improvements in sample complexity over top-% BC by leveraging all of the data. However, the RCSL model is attempting to learn a richer class of functions that conditions on reward, which may require a larger policy class negating some of this benefit. Overall, RCSL should expect to beat top-% BC if the behavior policy is still providing useful information about how to interact with the environment in low-return trajectories that top-% BC would throw away. 4The original paper considers an infinite horizon discounted setting. For the purposes of comparison, we will just assume that1 1can be replaced by H. 7 6 Experiments We have illustrated through theory and some simple examples when we expect RCSL to work, but the theory does not cover all cases that are relevant for practice. In particular, it is not clear how the neural networks trained in practice can leverage generalization across returns. Moreover, one of the key benefits to RCSL approaches (as compared to DP) is that by avoiding the instabilities of non-linear off-policy DP in favor of supervised learning, one might hope that RCSL is more stable in practice. In this section we attempt to test these capabilities first through targeted experiments in a point-mass environment and then by comparisons on standard benchmark data. Throughout this section we will consider six algorithms, two from each of three categories: 1. Behavior cloning (BC): standard behavior cloning (BC) and percentage behavior cloning (%BC) that runs BC on the trajectories with the highest returns . 2. Dynamic programming (DP): TD3+BC a simple DP-based offline RL approach and IQL a more stable DP-based offline RL approach. 3. Return-conditioned supervised learning (RCSL): RvS an RCSL approach using simple MLP policies, and DT an RCSL approach using transformer policies. All algorithms are implemented in JAX using flax and the jaxrl framework , except for DT which is taken from the original paper. Full details can be found in Appendix D and code can be found at https://github.com/davidbrandfonbrener/rcsl-paper . 6.1 Point-mass datasets First, we use targeted experiments to demonstrate how the tabular failure modes illustrated above can arise even in simple deterministic MDPs that may be encountered in continuous control. Specifically, we will focus on the issue of exponential sample complexity discussed in Section 4. We build our datasets in an environment using the Deepmind control suite and MuJoCo simulator . The environment consists of a point-mass navigating in a 2-d plane. To build an example with exponential sample complexity we construct a navigation task with a goal region in the center of the environment. The dataset is constructed by running a behavior policy that is a random walk that is biased towards the top right of the environment. 
To construct different levels of reward coverage, we consider the environment and dataset under three different reward functions (ordered by probability of seeing a trajectory with high return, from lowest to highest): (a) The ring of fire reward. This reward is 1 within the goal region, -1 in the ring of fire region surrounding the goal, and 0 otherwise (b) The sparse reward. This reward is 1 within the goal region and 0 otherwise. (c) The dense reward. This reward function is 1 within the goal region and gradually decays with the Euclidean distance outside of it. Intuitively, the ring of fire reward will cause serious problems for RCSL approaches when combined with the random walk behavior policy. The issue is that any random walk which reached the goal region is highly likely to spend more time in the region of negative rewards than in the actual goal states, since the ring of fire has larger area than the goal. As a result, while they are technically supported by the distribution, it is unlikely to find many trajectories (if any at all) with positive returns in the dataset, let alone near-optimal returns. As a result, the RCSL-based approaches are not even able to learn to achieve positive returns, as seen in Figure 3. The sparse reward is also difficult for the RCSL-based algorithms, for similar reasons; however the problem is less extreme since any trajectory that gets positive reward must go to the goal, so there is signal in the returns indicating where the goal is. In contrast, the dense reward provides enough signal in the returns that RCSL approaches are able to perform well, although still not as well as IQL. It is also worth noting that because the datset still does not have full coverage of the state-space, simple DP-based algorithms like TD3+BC can struggle with training instability. 6.2 Benchmark data In addition to our targeted experiments we also ran our candidate algorithms on some datasets from the D4RL benchmark . These are meant to provide more realistic and larger-scale data scenarios. 8 (a) Ring of fire (b) Sparse (c) Dense Figure 3: RCSL fails under reward functions that lead to exponentially small probability of sampling good trajectories, but can generalize when the reward is dense. Error bars show standard deviation across three seeds. BC methods are in blue, DP methods in brown, and RCSL methods in green. While this also makes these experiments less targeted, we can still see that the insights that we gained in simpler problems can be useful in these larger settings. We attempt to choose a subset of the datasets with very different properties from eachother. For example, the play data on the ant-maze environment is very diverse and plentiful while the human demonstration data on the pen environment has poor coverage but high values. Results are shown in Figure 4. And additional results leading to similar conclusions can be found in Appendix A. Figure 4: Data from ANTMAZE -UMAZE , ANTMAZE -MEDIUM -PLAY , HALFCHEETAH MEDIUM -REPLAY , and PEN-HUMAN . Error bars show standard deviation across three seeds. Each algorithm is tuned over 4 values and best performance is reported.We find that for most of the datasets DP-based algorithms TD3+BC and IQL outperform both the BC-based algorithms and RCSL-based algorithms. This is especially stark on the ANTMAZE datasets where the behavior policy is highly stochastic, requiring the learner to stitch together trajectories to achieve good performance. 
While none of these tasks has stochastic dynamics, the issues of return coverage and trajectory stitching persist. In contrast, RCSL performs well when the behavior policy is already high quality, but not optimal (as in the PEN-HUMAN task). Since the data is suboptimal and reward is dense, there is opportunity for RCSL to outperform the BC-based methods. Moreover, since the data has poor coverage, standard DP approaches like TD3+BC are highly unstable. IQL is more stable and performs similarly to the RCSL-based algorithms, but is outperformed by DT (perhaps due to the use of history-dependent policies). 7 Discussion Looking back at our results, we can better place RCSL in relation to the more classical BC and DP algorithms. Like BC, RCSL relies on supervised learning and thus inherits its simplicity, elegance, and ease of implementation and debugging. However, it also inherits BCs dependence on the quality of the behavior policy. This dependence can be somewhat reduced in (nearly) deterministic environments, where conditioning on high returns can break the bias towards the behavior policy. But, the reliance on trajectory-level information still means that RCSL is fundamentally limited by 9 the quality of the best trajectories in the dataset, which can require a sample complexity exponential in horizon in order to compete with the optimal policy, even in deterministic environments. In contrast, DP methods are capable of learning good policies even when the dataset does not contain any high-return trajectories and the environment is stochastic. This represents a fundamental gap between the two approaches that cannot be bridged within the RCSL paradigm. However, empirically, current deep DP algorithms are not well-behaved. These algorithms are often unstable and difficult to debug, although recent work has started to alleviate these issues somewhat . In sum, for tasks where the requirements for RCSL to perform well are met, it is an excellent practical choice, with great advantages in simplicity over DP. Since many real-world tasks of relevance have these attributes, RCSL techniques could have substantial impact. But as a general learning paradigm, RCSL is fundamentally limited in ways that DP is not. Acknowledgments This work was partially supported by NSF RI-1816753, NSF CAREER CIF 1845360, NSF CHS1901091, NSF Scale MoDL DMS 2134216, Capital One and Samsung Electronics. DB was supported by the Department of Defense (DoD) through the National Defense Science & Engineering Graduate Fellowship (NDSEG) Program. References J. Achiam, D. Held, A. Tamar, and P. Abbeel. Constrained policy optimization. In International Conference on Machine Learning , pages 2231. PMLR, 2017. M. Andrychowicz, F. Wolski, A. Ray, J. Schneider, R. Fong, P. Welinder, B. McGrew, J. Tobin, O. Pieter Abbeel, and W. Zaremba. Hindsight experience replay. Advances in neural information processing systems , 30, 2017. P. L. Bartlett, M. I. Jordan, and J. D. McAuliffe. Convexity, classification, and risk bounds. Journal of the American Statistical Association , 101(473):138156, 2006. M. G. Bellemare, W. Dabney, and M. Rowland. Distributional Reinforcement Learning . MIT Press, 2022. http://www.distributional-rl.org . S. Boucheron, O. Bousquet, and G. Lugosi. Theory of classification: A survey of some recent advances. ESAIM: probability and statistics , 9:323375, 2005. J. Bradbury, R. Frostig, P. Hawkins, M. J. Johnson, C. Leary, D. Maclaurin, G. Necula, A. Paszke, J. VanderPlas, S. Wanderman-Milne, and Q. Zhang. 
JAX: composable transformations of Python+NumPy programs, 2018. URL http://github.com/google/jax . J. Chen and N. Jiang. Information-theoretic considerations in batch reinforcement learning. In Proceedings of the 36th International Conference on Machine Learning . PMLR, 2019. L. Chen, K. Lu, A. Rajeswaran, K. Lee, A. Grover, M. Laskin, P. Abbeel, A. Srinivas, and I. Mordatch. Decision transformer: Reinforcement learning via sequence modeling. arXiv preprint arXiv:2106.01345 , 2021. S. Emmons, B. Eysenbach, I. Kostrikov, and S. Levine. Rvs: What is essential for offline rl via supervised learning? arXiv preprint arXiv:2112.10751 , 2021. J. Fu, A. Kumar, O. Nachum, G. Tucker, and S. Levine. D4rl: Datasets for deep data-driven reinforcement learning. arXiv preprint arXiv:2004.07219 , 2020. S. Fujimoto and S. S. Gu. A minimalist approach to offline reinforcement learning. arXiv preprint arXiv:2106.06860 , 2021. H. Furuta, Y . Matsuo, and S. S. Gu. Generalized decision transformer for offline hindsight information matching. arXiv preprint arXiv:2111.10364 , 2021. D. Ghosh, A. Gupta, A. Reddy, J. Fu, C. Devin, B. Eysenbach, and S. Levine. Learning to reach goals via iterated supervised learning. arXiv preprint arXiv:1912.06088 , 2019. J. Heek, A. Levskaya, A. Oliver, M. Ritter, B. Rondepierre, A. Steiner, and M. van Zee. Flax: A neural network library and ecosystem for JAX, 2020. URL http://github.com/google/ flax . 10 M. Janner, Q. Li, and S. Levine. Offline reinforcement learning as one big sequence modeling problem. Advances in neural information processing systems , 34, 2021. L. P. Kaelbling. Learning to achieve goals. In IJCAI , 1993. J. Kaplan, S. McCandlish, T. Henighan, T. B. Brown, B. Chess, R. Child, S. Gray, A. Radford, J. Wu, and D. Amodei. Scaling laws for neural language models. arXiv preprint arXiv:2001.08361 , 2020. D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 , 2014. I. Kostrikov. Jaxrl: Implementations of reinforcement learning algorithms in jax., 10 2021. URL https://github. com/ikostrikov/jaxrl , 2021. I. Kostrikov, A. Nair, and S. Levine. Offline reinforcement learning with implicit q-learning. arXiv preprint arXiv:2110.06169 , 2021. A. Kumar, X. B. Peng, and S. Levine. Reward-conditioned policies. arXiv preprint arXiv:1912.13465 , 2019. K. Paster, S. McIlraith, and J. Ba. You cant count on luck: Why decision transformers fail in stochastic environments. arXiv preprint arXiv:2205.15967 , 2022. J. Schmidhuber. Reinforcement learning upside down: Dont predict rewardsjust map them to actions. arXiv preprint arXiv:1912.02875 , 2019. S. Shalev-Shwartz and S. Ben-David. Understanding machine learning: From theory to algorithms . Cambridge university press, 2014. D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. van den Driessche, J. Schrittwieser, I. Antonoglou, V . Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. P. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, and D. Hassabis. Mastering the game of go with deep neural networks and tree search. Nature , 529:484489, 2016. R. K. Srivastava, P. Shyam, F. Mutz, W. Ja skowski, and J. Schmidhuber. Training agents using upside-down reinforcement learning. arXiv preprint arXiv:1912.02877 , 2019. R. S. Sutton and A. G. Barto. Reinforcement learning: An introduction . MIT press, 2018. Y . Tassa, Y . Doron, A. Muldal, T. Erez, Y . Li, D. d. L. Casas, D. Budden, A. Abdolmaleki, J. Merel, A. Lefrancq, et al. Deepmind control suite. 
arXiv preprint arXiv:1801.00690 , 2018. E. Todorov, T. Erez, and Y . Tassa. Mujoco: A physics engine for model-based control. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems , pages 5026 5033, 2012. doi: 10.1109/IROS.2012.6386109. R. Wang, D. P. Foster, and S. M. Kakade. What are the statistical limits of offline rl with linear function approximation?, 2020. Z. Wang, A. Novikov, K. Zolna, J. S. Merel, J. T. Springenberg, S. E. Reed, B. Shahriari, N. Siegel, C. Gulcehre, N. Heess, et al. Critic regularized regression. Advances in Neural Information Processing Systems , 33, 2020. T. Xie, C.-A. Cheng, N. Jiang, P. Mineiro, and A. Agarwal. Bellman-consistent pessimism for offline reinforcement learning. Advances in neural information processing systems , 34, 2021. M. Yang, D. Schuurmans, P. Abbeel, and O. Nachum. Dichotomy of control: Separating what you can control from what you cannot. arXiv preprint arXiv:2210.13435 , 2022. M. Strupl, F. Faccio, D. R. Ashley, J. Schmidhuber, and R. K. Srivastava. Upside-down reinforcement learning can diverge in stochastic environments with episodic resets, 2022. 11 Checklist 1. For all authors... (a) Do the main claims made in the abstract and introduction accurately reflect the papers contributions and scope? [Yes] (b) Did you describe the limitations of your work? [Yes] (c) Did you discuss any potential negative societal impacts of your work? [Yes] See Appendix E (d) Have you read the ethics review guidelines and ensured that your paper conforms to them? [Yes] 2. If you are including theoretical results... (a) Did you state the full set of assumptions of all theoretical results? [Yes] (b) Did you include complete proofs of all theoretical results? [Yes] 3. If you ran experiments... (a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes] (b) Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes] See Appendix D (c) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [Yes] (d) Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [Yes] See Appendix D 4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets... (a) If your work uses existing assets, did you cite the creators? [Yes] (b) Did you mention the license of the assets? [Yes] See Appendix D (c) Did you include any new assets either in the supplemental material or as a URL? [N/A] (d) Did you discuss whether and how consent was obtained from people whose data youre using/curating? [N/A] (e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [N/A] 5. If you used crowdsourcing or conducted research with human subjects... (a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A] (b) Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? [N/A] (c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A] 12 A Extended experimental results Here we present extended versions of the D4RL experiments. 
We use the same setup as in Section 6, but run each of the algorithms on three different datasets in each environment. Explicitly, we show results on ANTMAZE-UMAZE, ANTMAZE-MEDIUM-PLAY, and ANTMAZE-LARGE-PLAY in Figure 5. Then we show results on HALFCHEETAH-MEDIUM, HALFCHEETAH-MEDIUM-REPLAY, and HALFCHEETAH-MEDIUM-EXPERT in Figure 6. Finally, we show results on PEN-HUMAN, PEN-CLONED, and PEN-EXPERT in Figure 7.

These experiments corroborate the story from the main text. Without return coverage (as in the larger antmaze tasks), RCSL can fail dramatically. But in the case with return coverage but poor state coverage (as in the pen-human dataset, which only has 25 trajectories), RCSL can beat DP. However, we see that with larger datasets that yield more coverage, DP recovers its performance (as in pen-expert, which has 5000 trajectories, or 200x the amount of data in the human dataset).

Figure 5: Experimental results on antmaze datasets.
Figure 6: Experimental results on halfcheetah datasets.
Figure 7: Experimental results on pen datasets.

B Trajectory stitching

B.1 Theory

A common goal from the offline RL literature is to be able to stitch together previously collected trajectories to outperform the behavior policy. This is in general not possible with RCSL. The main issue is that RCSL uses trajectory-level information during training, which precludes combining information across trajectories. In this example, we show that even with infinite data, when attempting to combine two data streams using standard approaches to conditioning, RCSL can fail to recover the optimal policy.

Figure 8: An example where RCSL fails to stitch trajectories.

Consider the MDP illustrated in Figure 8 with three states s_0, s_1, s_2 and horizon H = 2. All transitions and rewards are deterministic as shown. We consider the case where data has been collected by two different processes. One process (illustrated in red) consists of episodes that always start at s_0 and choose the first action uniformly, but choose the bad action a_0 deterministically from s_2. The other process (illustrated in blue) consists of trajectories that always start at s_1 and deterministically go towards the good action, receiving reward of 1.

We will consider what happens to RCSL at test time when initialized at s_0. The data does not contain any trajectories that begin in s_0 and select a_1 to transition to s_2 followed by a_1, which is the optimal decision. But the data does have enough information to stitch together the optimal trajectory from s_0, and it is clear that DP-based approaches would easily find the optimal policy.

For RCSL, if we condition on the optimal return g = 1, we get that π(·|s_1, g = 1) is undefined, since we only observe trajectories with g = 0 that originate at s_0. To get a well-defined policy, we must set f(s_0) = 0, but then π(a_1|s_1, g = 0) = 0.5. Thus, π will never choose the optimal path with probability larger than 0.5, for any conditioning function f. Moreover, the conditioning function that does lead to success is non-standard: f(s_0) = 0, f(s_2) = 1. For the standard approach to conditioning, which sets the initial value and decreases it over time with observed rewards, RCSL will never achieve non-zero reward from s_0. Note that DT-style learning, where we condition on the entire history of states rather than just the current state, can perform even worse, since P_data(a_1|s_0, a_0, s_2, g = 1) = 0, i.e., even using the non-standard conditioning function described above will not fix things.
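To make the standard conditioning scheme concrete, here is a minimal sketch (ours, not from the paper's released code) of a test-time rollout for a return-conditioned policy: the target return is initialized by the conditioning function f and then decreased by observed rewards. The policy and env objects are hypothetical stand-ins.

```python
# Minimal sketch of an RCSL test-time rollout with the standard conditioning
# scheme. `policy(state, g)` and `env` are hypothetical stand-ins.

def rcsl_rollout(env, policy, f, horizon):
    """Roll out pi(a | s, g), initializing g = f(s_init), then g <- g - r."""
    state = env.reset()
    g = f(state)               # initial target return
    total_reward = 0.0
    for _ in range(horizon):
        action = policy(state, g)
        state, reward, done = env.step(action)
        total_reward += reward
        g -= reward            # standard update: remaining return-to-go
        if done:
            break
    return total_reward
```

Under this scheme, the target at the second step is always f(s_0) minus the first observed reward, so the successful non-standard choice f(s_0) = 0, f(s_2) = 1 from the example above cannot be realized, which matches the failure described in the text.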
Also, it is worth mentioning that conditioning on the out-of-distribution return g = 1 from s_0 could possibly work due to extrapolation of a neural network policy. However, as we will see in the experiments section, this does not happen empirically in controlled settings.

B.2 Experiments

The above example does not take into account the possibility of generalization out of distribution (i.e., when conditioning on returns that were not observed in the data). To test whether generalization could lead to stitching, we construct two datasets: stitch-easy and stitch-hard. Both datasets use the same simple point-mass environment with sparse rewards as before, but now we introduce a wall into the environment to limit the paths to the goal.

The stitch-easy dataset contains two types of trajectories: some beginning from the initial state region and moving upwards (with added Gaussian noise in the actions), and some beginning from the left side of the environment and moving towards the goal (with added Gaussian noise in the actions). This is easy since just following the behavior policy for the first half of the trajectory leads to states where the dataset indicates how to reach the goal. We also create the stitch-hard dataset, which includes a third type of trajectory that begins from the initial state and goes to the right (mirroring the tabular example). This is hard since the dominant action from the behavior in the initial state is now to go right rather than to move towards the goal-reaching trajectories. This acts as a distraction for methods that are biased towards the behavior. Datasets and results are illustrated in Figure 9.

Figure 9: Results on two datasets that require stitching. (a) Stitch-easy. (b) Stitch-hard.

We see that on the stitch-easy dataset, RvS is able to perform almost as well as the DP algorithms and better than %BC. This indicates that it is able to follow the behavior when conditioning on an out-of-distribution return until it reaches a state where the return directs it to the goal. In contrast, DT totally fails on this task since it conditions on the entire history of the trajectory. Since the dataset only contains trajectories from the initial state that continue to the top of the board, DT always reproduces such trajectories from the initial state and does not stitch together trajectories.

In the stitch-hard dataset, we see that RvS fails as well and does not outperform %BC. This indicates that RvS can indeed be distracted by the distractor trajectories from the initial state. The conditioning itself was not what caused the stitching in the stitch-easy dataset; rather, the learned policy simply defaults to the behavior. This can be beneficial in some problems, but it prevents the trajectory stitching that might allow the learned policy to dramatically outperform the behavior. TD3+BC also struggles here, likely due to some combination of instability and the BC regularization causing issues due to the distractor actions.

C Proofs

C.1 Proof of Theorem 1

Proof. Let g(s_1, a_1:H) be the value of the return from rolling out the open-loop sequence of actions a_1:H under the deterministic dynamics induced by T and r. Then we can write

E_{s_1}[f(s_1)] − J(π_f) = E_{s_1}[ E_{π_f|s_1}[ f(s_1) − g_1 ] ]   (12)
= E_{s_1}[ E_{a_1:H ∼ π_f|s_1}[ f(s_1) − g(s_1, a_1:H) ] ]   (13)
 + E_{s_1}[ E_{a_1:H ∼ π_f|s_1}[ g(s_1, a_1:H) − g_1 ] ]   (14)
≤ E_{s_1}[ E_{a_1:H ∼ π_f|s_1}[ f(s_1) − g(s_1, a_1:H) ] ] + εH²,   (15)
where the last step follows by bounding the magnitude of the difference between g_1 and g(s_1, a_1:H) by H and applying a union bound over the H steps in the trajectory (using the near-determinism assumption), namely:

H · sup_{s_1} Σ_t P_{a_t ∼ π_f|s_1}( r_t ≠ r(s_t, a_t) or s_{t+1} ≠ T(s_t, a_t) ) ≤ εH².   (16)

Now we consider the first term from Equation (15). Again bounding the magnitude of the difference by H, we get that

E_{s_1}[ E_{a_1:H ∼ π_f|s_1}[ f(s_1) − g(s_1, a_1:H) ] ] ≤ E_{s_1}[ Σ_{a_1:H} P_{π_f}(a_1:H|s_1) · 1[ g(s_1, a_1:H) ≠ f(s_1) ] ] · H.   (17)

To bound this term, we will more carefully consider what happens under the distribution P_{π_f}. To simplify notation, let s_t = T(s_1, a_1:t−1) be the result of following the deterministic dynamics defined by T up until step t. Expanding it out, applying the near-determinism, the consistency of f, the coverage assumption, canceling some terms, and then inducting, we see that:

P_{π_f}(a_1:H|s_1) = π_f(a_1|s_1) Σ_{s_2} P(s_2|s_1, a_1) P_{π_f}(a_2:H|s_1, s_2)   (18)
≤ π_f(a_1|s_1) P_{π_f}(a_2:H|s_1, s_2) + ε   (19)
= β(a_1|s_1) · [ P(g_1 = f(s_1)|s_1, a_1) / P(g_1 = f(s_1)|s_1) ] · P_{π_f}(a_2:H|s_1, s_2) + ε   (20)
≤ β(a_1|s_1) · [ (ε + P(g_1 − r(s_1, a_1) = f(s_1) − r(s_1, a_1) | s_1, a_1, s_2)) / P(g_1 = f(s_1)|s_1) ] · P_{π_f}(a_2:H|s_1, s_2) + ε   (21)
= β(a_1|s_1) · [ (ε + P(g_2 = f(s_2)|s_2)) / P(g_1 = f(s_1)|s_1) ] · P_{π_f}(a_2:H|s_1, s_2) + ε   (22)
≤ β(a_1|s_1) · [ P(g_2 = f(s_2)|s_2) / P(g_1 = f(s_1)|s_1) ] · P_{π_f}(a_2:H|s_1, s_2) + ε(1/α_f + 1)   (23)
≤ β(a_1|s_1) β(a_2|s_2) · [ P(g_2 = f(s_2)|s_2) / P(g_1 = f(s_1)|s_1) ] · [ P(g_2 = f(s_2)|s_2, a_2) / P(g_2 = f(s_2)|s_2) ] · P_{π_f}(a_3:H|s_1, s_3)   (24)
 + 2ε(1/α_f + 1)   (25)
≤ Π_{t=1}^H β(a_t|s_t) · [ P(g_H = f(s_H)|s_H, a_H) / P(g_1 = f(s_1)|s_1) ] + εH(1/α_f + 1)   (26)
= Π_{t=1}^H β(a_t|s_t) · [ 1[ g(s_1, a_1:H) = f(s_1) ] / P(g_1 = f(s_1)|s_1) ] + εH(1/α_f + 1),   (27)

where the two P(g_2 = f(s_2)|s_2) factors in (24) cancel and the last step follows from the determinism of the trajectory that determines s_H and the consistency of f. Plugging this back into Equation (17), and noticing that the two indicator functions can never both be 1, we get that

E_{s_1}[ E_{a_1:H ∼ π_f|s_1}[ f(s_1) − g(s_1, a_1:H) ] ] ≤ εH²(1/α_f + 1).   (28)

Plugging this back into Equation (15) yields the result.

C.2 Proof of Corollary 1

Proof. We need to define a function f so that E[f(s_1)] is approximately J(π*). To do this, note that there exists a deterministic optimal policy π*, and since the environment dynamics are nearly deterministic, we can set f(s_1) to be the return of π* under the deterministic dynamics. Let T_{π*}(s_1, t) represent the state reached by running π* from s_1 for t steps under the deterministic dynamics defined by T. Then:

f(s_1) = Σ_{t=1}^H r( T_{π*}(s_1, t), π*(T_{π*}(s_1, t)) ).   (29)

Now we have, as in the proof of Theorem 1, that the probability that g ≠ f(s_1) is bounded by εH, so that

E_{s_1}[f(s_1)] − J(π*) = E_{s_1}[ E_{g ∼ π*|s_1}[ f(s_1) − g ] ] ≤ E_{s_1}[ P(g ≠ f(s_1)|s_1) · H ] ≤ εH².   (30)

Combining this with Theorem 1 yields the result.

C.3 Proof of Theorem 2

First we prove the following lemma. This can be seen as a finite-horizon analog to results from Achiam et al.

Lemma 1. Let d_π refer to the marginal distribution of P_π over states only. For any two policies π, π′, we have:

‖d_π − d_{π′}‖_1 ≤ 2H · E_{s ∼ d_π}[ TV( π(·|s), π′(·|s) ) ].   (31)

Proof. First we define a few useful objects. Let d_π^h(s) = P_π(s_h = s). Let δ_h = ‖d_π^h − d_{π′}^h‖_1. Let ε_h = 2 E_{s ∼ d_π^h}[ TV( π(·|s), π′(·|s) ) ]. Now we claim that δ_h ≤ δ_{h−1} + ε_{h−1} for h > 1, and δ_1 = 0. To see this, consider some fixed h. Note that d_π^h(s) = Σ_{s′} d_π^{h−1}(s′) Σ_a π(a|s′) P(s|s′, a). Then, expanding the definitions and adding and subtracting, we see that:

δ_h = Σ_s | d_π^h(s) − d_{π′}^h(s) |   (32)
≤ Σ_s | Σ_{s′} d_π^{h−1}(s′) Σ_a ( π(a|s′) − π′(a|s′) ) P(s|s′, a) |   (33)
 + Σ_s | Σ_{s′} ( d_π^{h−1}(s′) − d_{π′}^{h−1}(s′) ) Σ_a π′(a|s′) P(s|s′, a) |   (34)
≤ 2 E_{s ∼ d_π^{h−1}}[ TV( π(·|s), π′(·|s) ) ] + ‖d_π^{h−1} − d_{π′}^{h−1}‖_1 = ε_{h−1} + δ_{h−1}.   (35)

Now, applying the claim and the definition of d_π, we get that

‖d_π − d_{π′}‖_1 ≤ (1/H) Σ_{h=1}^H δ_h ≤ (1/H) Σ_{h=1}^H Σ_{j=1}^{h−1} ε_j ≤ ((H−1)/H) Σ_{h=1}^H ε_h ≤ 2H · E_{s ∼ d_π}[ TV( π(·|s), π′(·|s) ) ].   (36)

Now we can prove the theorem.

Proof.
Applying the definition of J and Lemma 1, we get

J(π_f) − J(π̂_f) = H ( E_{P_{π_f}}[ r(s, a) ] − E_{P_{π̂_f}}[ r(s, a) ] )   (37)
≤ H ‖d_{π_f} − d_{π̂_f}‖_1   (38)
≤ 2 E_{s ∼ d_{π_f}}[ TV( π_f(·|s), π̂_f(·|s) ) ] · H².   (39)

Expanding definitions, using the multiply-and-divide trick, and applying the assumptions:

2 E_{s ∼ d_{π_f}}[ TV( π_f(·|s), π̂_f(·|s) ) ] = E_{s ∼ d_{π_f}}[ Σ_a | P(a|s, f(s)) − π̂(a|s, f(s)) | ]   (40)
= E_{s ∼ d_{π_f}}[ ( P(f(s)|s) / P(f(s)|s) ) Σ_a | P(a|s, f(s)) − π̂(a|s, f(s)) | ]   (41)
≤ (C_f / α_f) E_{s ∼ d_β}[ P(f(s)|s) Σ_a | P(a|s, f(s)) − π̂(a|s, f(s)) | ]   (42)
≤ (C_f / α_f) E_{s ∼ d_β}[ Σ_g P(g|s) Σ_a | P(a|s, g) − π̂(a|s, g) | ]   (43)
= 2 (C_f / α_f) E_{s ∼ d_β, g ∼ P|s}[ TV( P(·|s, g), π̂(·|s, g) ) ]   (44)
≤ (C_f / α_f) √(2 L(π̂)),   (45)

where the last step comes from Pinsker's inequality. Combining with the above bound on the difference in expected values yields the result.

C.4 Proof of Corollary 3

Proof. We may write L(π) = L_CE(π) − H_β, where H_β = E_{(s,a,g) ∼ P_β}[ −log P(a|s, g) ] and L_CE(π) := E_{(s,a,g) ∼ P_β}[ −log π(a|s, g) ] is the cross-entropy loss. Denoting π̃ ∈ argmin_{π ∈ Π} L(π), we have

L(π̂) = L(π̂) − L(π̃) + L(π̃) ≤ L(π̂) − L(π̃) + ε_approx.

Denoting by L̂ the empirical cross-entropy loss that is minimized by π̂, we may further decompose

L(π̂) − L(π̃) = L_CE(π̂) − L̂(π̂) + L̂(π̂) − L̂(π̃) + L̂(π̃) − L_CE(π̃) ≤ 2 sup_{π ∈ Π} | L_CE(π) − L̂(π) |.

Under the assumptions on bounded loss differences, we may bound this, e.g., using McDiarmid's inequality and a union bound over Π, to obtain the final result.

C.5 Top-% BC

Theorem 3 (Alignment with respect to quantile). Let g_ρ be the 1−ρ quantile of the return distribution induced by β over all initial states. Let π_ρ(a|s) = P_β(a|s, g ≥ g_ρ). Assume the following:
1. Coverage: P_β(s_1 | g ≥ g_ρ) ≥ α_ρ for all initial states s_1.
2. Near determinism: P( r ≠ r(s, a) or s′ ≠ T(s, a) | s, a ) ≤ ε at all s, a for some functions T and r. Note that this does not constrain the stochasticity of the initial state at all.

Then

g_ρ − J(π_ρ) ≤ (1/α_ρ + 2) εH².   (46)

Proof. The proof essentially follows the same argument as Theorem 1, with f(s_1) replaced by g_ρ. The main difference comes from the fact that

π_ρ(a|s) = P_β(a|s, g ≥ g_ρ) = β(a|s) · P(g ≥ g_ρ|s, a) / P(g ≥ g_ρ|s).   (47)

Explicitly, we have, similar to before, that:

g_ρ − J(π_ρ) = E_{s_1}[ E_{π_ρ|s_1}[ g_ρ − g_1 ] ]   (48)
≤ E_{s_1} E_{a_1:H ∼ π_ρ|s_1}[ g_ρ − g(s_1, a_1:H) ] + εH²   (49)
≤ E_{s_1} E_{a_1:H ∼ π_ρ|s_1}[ 1[ g(s_1, a_1:H) < g_ρ ] ] · H + εH².   (50)

We now define s_t = T(s_1, a_1:t−1) to be the state at step t under the deterministic dynamics, and similarly r_t = r(s_t, a_t) the reward under deterministic dynamics. Then, again mirroring the proof above, we have that

P_{π_ρ}(a_1:H|s_1) ≤ π_ρ(a_1|s_1) P_{π_ρ}(a_2:H|s_1, s_2, r_1) + ε   (51)
= β(a_1|s_1) · [ P(g_1 ≥ g_ρ|s_1, a_1) / P(g_1 ≥ g_ρ|s_1) ] · P_{π_ρ}(a_2:H|s_1, s_2, r_1) + ε   (52)
≤ β(a_1|s_1) · [ (ε + P(g_1 ≥ g_ρ|s_2, r_1, a_1)) / P(g_1 ≥ g_ρ|s_1) ] · P_{π_ρ}(a_2:H|s_1, s_2, r_1) + ε   (53)
≤ β(a_1|s_1) · [ P(g_1 ≥ g_ρ|s_2, r_1, a_1) / P(g_1 ≥ g_ρ|s_1) ] · P_{π_ρ}(a_2:H|s_1, s_2, r_1) + ε/α_ρ + ε   (54)
= β(a_1|s_1) · [ P(g_1 ≥ g_ρ|s_2, r_1, a_1) / P(g_1 ≥ g_ρ|s_1) ] · P_{π_ρ}(a_2:H|s_1, s_2, r_1) + ε(1/α_ρ + 1)   (55)
≤ β(a_1|s_1) β(a_2|s_2) · [ P(g_1 ≥ g_ρ|s_2, r_1) / P(g_1 ≥ g_ρ|s_1) ] · [ P(g_1 ≥ g_ρ|s_2, r_1, a_2) / P(g_1 ≥ g_ρ|s_2, r_1) ] · P_{π_ρ}(a_3:H|s_1, s_3, r_1:2)   (56)
 + 2ε(1/α_ρ + 1)   (57)
≤ Π_{t=1}^H β(a_t|s_t) · [ P(g_1 ≥ g_ρ|s_H, r_1:H) / P(g_1 ≥ g_ρ|s_1) ] + εH(1/α_ρ + 1)   (58)
= Π_{t=1}^H β(a_t|s_t) · [ 1[ g(s_1, a_1:H) ≥ g_ρ ] / P(g_1 ≥ g_ρ|s_1) ] + εH(1/α_ρ + 1),   (59)

where the two P(g_1 ≥ g_ρ|s_2, r_1) factors in (56) cancel. Plugging this into Equation (50), we get the result.

Theorem 4 (Reduction of %BC to SL). Let g_ρ be the 1−ρ percentile of the return distribution induced by β. Let π_ρ = P_β(a|s, g ≥ g_ρ). Assume
1. Bounded mismatch: P_β(s) / P_β(s | g ≥ g_ρ) ≤ C_ρ for all s.

Define the expected loss as L(π̂) = E_{s ∼ P_β(·|g ≥ g_ρ)}[ KL( π_ρ(·|s) ‖ π̂(·|s) ) ]. Then we have that

J(π_ρ) − J(π̂) ≤ C_ρ H² √(2 L(π̂)).   (60)

Proof. Recall that d_π refers to the marginal distribution of P_π over states only. Applying the definition of J and Lemma 1, we get

J(π_ρ) − J(π̂) = H ( E_{P_{π_ρ}}[ r(s, a) ] − E_{P_{π̂}}[ r(s, a) ] )   (61)
≤ H ‖d_{π_ρ} − d_{π̂}‖_1   (62)
≤ 2 E_{s ∼ d_{π_ρ}}[ TV( π_ρ(·|s), π̂(·|s) ) ] · H².   (63)

Expanding definitions, using the multiply-and-divide trick, and applying the assumptions:

2 E_{s ∼ d_{π_ρ}}[ TV( π_ρ(·|s), π̂(·|s) ) ] ≤ C_ρ · 2 E_{s ∼ P_β(·|g ≥ g_ρ)}[ TV( π_ρ(·|s), π̂(·|s) ) ]   (64)
≤ C_ρ √(2 L(π̂)),   (65)

where the last step comes from Pinsker's inequality. Combining with the above bound on the difference in expected values yields the result.
Corollary 5 (Sample complexity for %BC). To get finite data guarantees, add to the above assumptions the assumptions that (1) the policy class Π is finite, (2) | log π(a|s) − log π(a′|s′) | ≤ c for any (a, s, a′, s′) and all π ∈ Π, and (3) the approximation error of Π is bounded by ε_approx, i.e., min_{π ∈ Π} L(π) ≤ ε_approx. Then, with probability at least 1 − δ,

J(π_ρ) − J(π̂) ≤ O( C_ρ H² ( c ( log(|Π|/δ) / ((1−ρ)N) )^{1/4} + √ε_approx ) + εH² ).   (66)

D Experimental details

Data. Data for the point-mass tasks was sampled from the scripted policies described in the text. We sampled 100 trajectories of length 400 for each dataset, unless otherwise indicated. Data for the benchmark experiments was taken directly from the D4RL benchmark.

Hyperparameters. Below we list all of the hyperparameters used across the various algorithms. We train each algorithm on 90% of the trajectories in the dataset, using the remaining 10% as validation. All algorithms are trained with the Adam optimizer. We evaluate each algorithm for 100 episodes in the environment per seed and hyperparameter configuration, and report the best performance for each algorithm for its relevant hyperparameter (all algorithms were tuned across 3 values of the hyperparameter, except for DT on point-mass, where we tried more values but still got poor results). Error bars are reported across seeds, as explained in the text.

Table 1: Shared hyperparameters for all non-DT algorithms

Hyperparameter   Value
Training steps   5e5
Batch size       256
MLP width        256
MLP depth        2

Table 2: Algorithm-specific hyperparameters for all non-DT algorithms

Algorithm   Hyperparameter                             Value(s)
%BC         fraction                                   [0.02, 0.10, 0.5]
            learning rate                              1e-3
RvS         fraction of max return for conditioning    [0.8, 1.0, 1.2]
            learning rate                              1e-3
TD3+BC      α                                          [1.0, 2.5, 5.0]
            learning rate (actor and critic)           3e-4
            discount                                   0.99
            τ for target EWMA                          0.005
            target update period                       2
IQL         expectile                                  [0.5, 0.7, 0.9]
            learning rate (actor, value, and critic)   3e-4
            discount                                   0.99
            τ for target EWMA                          0.005
            temperature                                10.0

Table 3: Hyperparameters for DT (exactly as in the original Decision Transformer paper)

Hyperparameter    Value
Training steps    1e5
Batch size        64
Learning rate     1e-4
Weight decay      1e-4
K                 20
Embed dimension   128
Layers            3
Heads             1
Dropout           0.1

Compute. All experiments were run on CPU on an internal cluster. Each of the non-DT algorithms takes less than 1 hour per run (i.e., per set of hyperparameters and seed), and the DT algorithm takes 5-10 hours per run.

Table 4: Environment-specific reward targets for DT

Environment    Values
Point-mass     [300, 200, 100, 50, 10, 0]
Antmaze        [1.0, 0.75, 0.5]
Half-cheetah   [12000, 9000, 6000]
Pen            [3000, 2000, 1000]

Asset licenses. For completeness, we also report the licenses of the assets that we used in the paper: JAX: Apache-2.0, Flax: Apache-2.0, jaxrl: MIT, Decision Transformer: MIT, Deepmind control suite: Apache-2.0, mujoco: Apache-2.0, D4RL: Apache-2.0.

Code. The code for our implementations can be found at https://github.com/davidbrandfonbrener/rcsl-paper .

E Potential negative societal impact

This paper follows a line of work aiming at a better understanding of offline RL algorithms. Even though it does not directly contribute to any specific application, it promotes the development and dissemination of offline RL technology, which, like any technology, can be used for harmful purposes. Moreover, we acknowledge that offline RL has been shown in the past to lack robustness, and that RL, and machine learning in general, can reproduce and amplify bias. We note that this specific work attempts to better understand the conditions under which RCSL algorithms work, and where they should not be used.
In that spirit, it has the potential benefit of dissuading practitioners from using such algorithms in settings where they may fail in socially undesirable ways.
Language models scale reliably with over-training and on downstream tasks

Samir Yitzhak Gadre1,2, Georgios Smyrnis3, Vaishaal Shankar4, Suchin Gururangan5, Mitchell Wortsman5, Rulin Shao5, Jean Mercat2, Alex Fang5, Jeffrey Li5, Sedrick Keh2, Rui Xin5, Marianna Nezhurina6,7, Igor Vasiljevic2, Jenia Jitsev6,7, Alexandros G. Dimakis3, Gabriel Ilharco5, Shuran Song8, Thomas Kollar2, Yair Carmon9, Achal Dave2, Reinhard Heckel10, Niklas Muennighoff11, Ludwig Schmidt5

Equal advising, ordered alphabetically. Correspondence to [email protected]. 1Columbia University 2Toyota Research Institute 3UT Austin 4Apple 5University of Washington 6Juelich Supercomputing Center, Research Center Juelich 7LAION 8Stanford University 9Tel Aviv University 10TU Munich 11Contextual AI

Abstract

Scaling laws are useful guides for developing language models, but there are still gaps between current scaling studies and how language models are ultimately trained and evaluated. For instance, scaling is usually studied in the compute-optimal training regime (i.e., the Chinchilla optimal regime); however, in practice, models are often over-trained to reduce inference costs. Moreover, scaling laws mostly predict loss on next-token prediction, but ultimately models are compared based on downstream task performance. In this paper, we address both shortcomings. To do so, we create a testbed of 104 models with 0.011B to 6.9B parameters trained with various numbers of tokens on three data distributions. First, we investigate scaling in the over-trained regime. We fit scaling laws that extrapolate in both the number of model parameters and the ratio of training tokens to parameters. This enables us to predict the validation loss of a 1.4B parameter, 900B token run (i.e., 32× over-trained) and a 6.9B parameter, 138B token run, each from experiments that take 300× less compute. Second, we relate the perplexity of a language model to its downstream task performance via a power law. We use this law to predict top-1 error averaged over downstream tasks for the two aforementioned models using experiments that take 20× less compute. Our experiments are available at https://github.com/mlfoundations/scaling .

1 Introduction

Training large language models is expensive. Moreover, training high-quality models requires a complex recipe of algorithmic techniques and training data. To reduce the cost of finding successful training recipes, researchers first evaluate ideas with small experiments and then extrapolate their efficacy to larger scales. With reliable extrapolation, it is possible to quickly iterate at small scale and still pick the method that will perform best for the final large training run. Indeed, this workflow has become commonplace for training state-of-the-art language models such as Chinchilla 70B, PaLM 540B, and GPT-4.

Despite their importance for model development, published scaling laws differ from the goals of training state-of-the-art models in important ways.

[Figure 1: two panels. Left: reducible loss on C4 eval vs. training compute C = 6ND (FLOPs) for N = 0.011B to 6.9B and token multipliers M = 20, 320, 640, with interpolated and extrapolated trends and predictions. Right: average top-1 error on a 17-task split vs. C4 eval loss.]

Figure 1: Reliable scaling in the over-trained regime and for downstream error prediction.
(left) We fit a scaling law for model validation loss, parameterized by (i) a token multiplier M, which is the ratio of training tokens D to parameters N (i.e., M = D/N), and (ii) the compute C in FLOPs used to train a model, approximated by C = 6ND. We extrapolate, in both N and M, the validation performance of models requiring over 300× the training compute used to construct the scaling law. (right) We also fit a scaling law to predict average downstream top-1 error as a function of validation loss. We find that fitting scaling laws for downstream error benefits from using more expensive models when compared to fitting for loss prediction. We predict the average error over 17 downstream tasks for models trained with over 20× the compute.

For instance, scaling studies usually focus on the compute-optimal training regime (Chinchilla optimality), while widely used models are now often over-trained to reduce inference costs. Another potential mismatch between scaling laws and eventual applications of the models is that most scaling laws quantify model performance by perplexity in next-token prediction instead of accuracy on widely used benchmark datasets. As a result, it is unclear whether following scaling laws leads to truly better models, or merely to models with lower perplexity in the compute-optimal training regime.

In this paper, we address both topics: scaling in the over-trained regime and downstream performance prediction, with an extensive set of experiments.

Motivated by the practice of training beyond compute-optimal, we first investigate whether scaling follows reliable trends in the over-trained regime. We find that for a set of model configurations with a constant ratio of training tokens to parameters, the models' reducible loss L′ [41, 43] follows consistent power laws (L′ = λ·C^(−γ_C)) in the amount of training compute C. As one increases the ratio of tokens to parameters, corresponding to more over-training, the scaling exponent γ_C remains about the same, while the scalar λ changes. We consider the extent to which our observations are explainable and find a promising approach in reparameterizing forms for scaling laws.

To establish if and when scaling is predictable in the over-trained regime, we experiment with a testbed of 104 models, trained from scratch on three different datasets: RedPajama, C4 [86, 25], and RefinedWeb. We find that scaling laws fit on small models, trained closer to compute-optimal, can accurately predict the performance of larger models that undergo more over-training. Figure 1 (left) illustrates our main over-training result, where we invest 2.4e19 FLOPs to extrapolate the C4 validation performance of a 1.4B parameter model trained on 900B tokens, which requires 300× more compute to train.

In addition to over-training, we also investigate whether scaling laws can predict the performance of a model on downstream tasks. We establish a power law relationship between language modeling perplexity and the average top-1 error on a suite of downstream tasks. While it can be difficult to predict the error on individual tasks, we find that aggregate performance can be accurately predicted from a model's perplexity among models trained on the same training data. Figure 1 (right) presents our main downstream error prediction result, where we invest 2.7e20 FLOPs to predict the average top-1 error over a set of downstream tasks to within 1 percentage point for a 6.9B compute-optimal model, which requires 20× more compute to train.
To facilitate further research on reliable scaling, we provide all results of our experiments at https://github.com/mlfoundations/scaling .

2 Scaling and over-training

In this section, we describe empirical observations and their potential mathematical descriptions. First, we provide key definitions (Section 2.1). We next present a phenomenon wherein training a collection of models for increasing token multipliers (ratios of tokens to parameters) follows similar scaling trends (Section 2.2). We then show that these patterns are consistent with previously proposed power-law scaling when reparameterizing in terms of training compute and token multipliers (Section 2.3). Towards connecting loss scaling and downstream performance, we revisit our collection of models, plot their average top-1 error vs. validation loss, and notice that error decays exponentially with lower loss (Section 2.4). To describe these observations, we propose a scaling law for downstream error as a function of loss (Section 2.5).

2.1 Preliminaries

Scaling laws for loss. We examine scaling laws that predict the loss L of a model as a function of the compute C in FLOPs used to train the model. If one increases the number of parameters N in a model or the number of tokens D that a model is trained on, compute requirements naturally increase. Hence, C is assumed to be a function of N, D. Following Kaplan et al., we use the approximation C = 6ND, which Hoffmann et al. independently verify. We consider scaling laws,

L(C) = E + L′(C),   (1)

where E is an irreducible loss and L′ is the reducible loss. E captures the Bayes error, or minimum possible loss achievable on the validation domain. The L′(C) term captures what can possibly be learned about the validation domain by training on a source domain. L′(C) should go to zero with increased training data and model capacity. L′(C) is often assumed to follow a power law: L′(C) = λ·C^(−γ_C) (i.a., Hestness et al., OpenAI). It is also often helpful to consider a power law in a log-log plot, where it appears as a line with slope −γ_C and y-intercept log(λ).

[Figure 2: three panels, one per training set (C4, RedPajama, RefinedWeb), plotting reducible loss on C4 eval vs. compute 6ND (FLOPs) for N = 0.011B to 0.411B and token multipliers M = 10 to 640, with interpolated and extrapolated trends.]

Figure 2: Scaling in the over-trained regime follows consistent power law exponents. We notice parallel lines in the log-log plots of reducible loss vs. training compute for a range of token multipliers M, which give the ratio of training tokens to model parameters, and where larger M corresponds to more over-training. For a power law giving reducible loss as a function of compute, L′(C) = λ·C^(−γ_C), the scaling exponent γ_C remains relatively constant, resulting in lines with approximately fixed slope. The y-intercept, which is determined by the power law coefficient λ, however, shifts with different values of M. This suggests that the power law coefficient λ is a function of the token multiplier M, while the power law exponent γ_C is not. Hestness et al. report a similar phenomenon of consistent scaling exponents for recurrent networks when modifying architectures instead of token multipliers.

Token multipliers. We define a token multiplier M = D/N as the ratio of training tokens to model parameters. We introduce M for notational convenience, as it allows us to consider fixed relationships between D and N even as a model gets bigger (i.e., as N becomes larger).
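To make these quantities concrete, the arithmetic behind the runs highlighted in this paper can be checked in a few lines; this is a small illustration of ours (the function name is not from the released codebase).

```python
# C = 6*N*D and M = D/N for the paper's headline runs.

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate training compute C = 6*N*D in FLOPs."""
    return 6.0 * n_params * n_tokens

# 1.4B parameters at token multiplier M = 640 -> ~900B training tokens.
n = 1.4e9
d = 640 * n
print(f"C = {training_flops(n, d):.2e} FLOPs")  # ~7.5e21, over 300x the
                                                # 2.4e19 FLOPs invested to
                                                # fit the loss scaling law.

# 6.9B parameters at M = 20 (near compute-optimal) -> 138B tokens.
n = 6.9e9
d = 20 * n
print(f"C = {training_flops(n, d):.2e} FLOPs")  # ~5.7e21, about 20x the
                                                # 2.7e20 FLOPs used for the
                                                # downstream error fits.
```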
Compute-optimal training. Hoffmann et al. establish compute-optimal training, where, for any compute budget H, the allocation of parameters and tokens that minimizes training or validation loss is given by

argmin_{N,D} L(N, D) s.t. C(N, D) = H.   (2)

To solve for the optimal N, D, one can sweep N, D for each H, retaining the best configurations. Hoffmann et al. find that as H increases, N and D scale roughly evenly. Assuming equal scaling, there is a fixed compute-optimal token multiplier M* = D/N per training distribution.

Over-training. We define over-training as the practice of allocating compute sub-optimally, so smaller models train on a disproportionately large number of tokens (i.e., M > M*). While loss should be higher than in the compute-optimal allocation for a given training budget, the resulting models have fewer parameters and are thus cheaper at inference.

2.2 Observation: Over-trained models follow consistent trends

We begin our scaling investigation by training models with 0.011B to 0.411B parameters for token multipliers M between 20 and 640, where M = 20 points lie roughly on the compute-optimal frontier and M > 20 corresponds to over-training. We defer experimental details to Section 3 to focus on our observations. In Figure 2, we plot loss against compute on a log-log scale for the models trained on three datasets and evaluated on the C4 eval set. We notice a phenomenon of parallel lines when fitting power laws to the reducible loss, which suggests a near-constant scaling exponent even with increased over-training. This indicates that scaling in the over-trained regime may be predictable given training runs closer to compute-optimal.

2.3 Deriving scaling laws for over-trained behavior

In search of an analytic expression for the observations in Figure 2, we turn to the scaling literature. A common functional form for the risk of a model, as proposed in prior work [91, 43], is

L(N, D) = E + A·N^(−α) + B·D^(−β).   (3)

Recall from Section 2.1 that N is the number of parameters, D the number of training tokens, and E the irreducible loss. The constants E, A, α, B, β are fit from data. By fitting this parametric form, Hoffmann et al. find that the scaling exponents α and β are close, suggesting that one should scale N and D equally as compute increases. Hence, we assume α = β. With this assumption, we reparameterize Equation (3) in terms of compute C = 6ND and a token multiplier M = D/N. We get the following form,

L(C, M) = E + (a·M^(γ_C) + b·M^(−γ_C))·C^(−γ_C),   (4)

where γ_C = α/2, a = A·(1/6)^(−γ_C), b = B·(1/6)^(−γ_C) gives the relation to Equation (3). For a complete derivation, see Appendix B; a condensed sketch follows below.

Equation (4) has the following interpretation: (i) The scaling exponent γ_C is not dependent on M. Thus, we always expect lines with the same slope in the log-log plot, as in Figure 2. (ii) The term a·M^(γ_C) + b·M^(−γ_C) determines the offsets between curves with different token multipliers. Hence, we expect non-overlapping, parallel lines in the log-log plot for the range of M we consider, also consistent with Figure 2.

Recall that we make the assumption α = β, which implies equal scaling of parameters and tokens as more compute is available. However, as explained in Appendix B, even if α ≠ β, we get a parameterization that implies the power-law exponent remains constant with over-training.
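For reference, the reparameterization that takes Equation (3) to Equation (4) can be written compactly; the following is a condensed sketch of the Appendix B derivation under the stated assumption α = β.

```latex
% From C = 6ND and M = D/N: N = \sqrt{C/(6M)} and D = \sqrt{CM/6}.
% Substituting into Eq. (3) with \alpha = \beta:
\begin{align*}
L(N, D) &= E + A N^{-\alpha} + B D^{-\alpha} \\
        &= E + A \left(\frac{C}{6M}\right)^{-\alpha/2}
             + B \left(\frac{CM}{6}\right)^{-\alpha/2} \\
        &= E + \left(A\, 6^{\alpha/2} M^{\alpha/2}
             + B\, 6^{\alpha/2} M^{-\alpha/2}\right) C^{-\alpha/2} \\
        &= E + \left(a M^{\gamma_C} + b M^{-\gamma_C}\right) C^{-\gamma_C},
\end{align*}
% where \gamma_C = \alpha/2, a = A (1/6)^{-\gamma_C}, b = B (1/6)^{-\gamma_C}.
```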
2.4 Observation: Loss tracks average top-1 error

Scaling is typically studied in the context of loss [49, 43, 70], which Schaeffer et al. note is smoother than metrics like accuracy. However, practitioners ultimately care about downstream, in-the-wild task performance. To better connect scaling laws and over-training to task prediction, we revisit the suite of models plotted in Figure 2. In Figure 3, we plot average downstream top-1 errors over evaluations sourced from LLM-Foundry against the C4 eval loss. We defer details of the setup to Section 3 to focus here on a few key observations. The average errors appear to follow exponential decay as loss decreases. Additionally, the particular relationship between loss and error is dataset-dependent. For instance, models trained on C4 result in the lowest C4 eval loss, but this does not translate to downstream gains compared to models trained on RedPajama or RefinedWeb.

[Figure 3: two panels plotting average top-1 error against C4 eval loss for models trained on C4, RedPajama, and RefinedWeb, with interpolated and extrapolated trends (left: 46-task split; right: 17-task split).]

Figure 3: Average top-1 error scales as a function of loss. We plot data points trained on three datasets and notice an exponential decay of average top-1 error as C4 eval loss, on the x-axis, decreases. The specific coefficients appear dataset-dependent. We consider on the y-axes (left) the average error over 46 evaluations and (right) the average error on a subset of 17 evaluations where performance can be 10 points above random chance for at least one 0.154B scale model. These observations suggest that average top-1 error should be predictable with reliable loss estimates.

2.5 Proposing a scaling law for average top-1 error

Based on the exponential decay we observe in Figure 3, we propose the following relationship between downstream average top-1 error Err and loss L,

Err(L) = ε − k·exp(−γL),   (5)

where ε, k, γ are fit from data. Equation (5) also has an appealing interpretation in terms of model perplexity PP(L) = exp(L),

Err(PP) = ε − k·PP^(−γ).   (6)

Namely, Err follows a power law in PP with maximum error ε, where intuitively ε should be close to the random chance performance. Equation (5) in conjunction with (4) suggests a two-step method to predict Err as a function of compute and the amount of over-training. For choices of training and validation distributions: (i) fit a scaling law to Equation (4) using ((C, M), L) pairs to yield a mapping (C, M) → L; (ii) fit a scaling law to Equation (5) using (L, Err) pairs to get a mapping L → Err.

3 Experimental setup

Towards testing the analytic predictions in Equations (4) and (5), we discuss our experimental setup. We first present language modeling details (Section 3.1). Next, we discuss our strategy for deciding which models to include in our scaling investigation and our procedure for fitting scaling trends (Section 3.2). We then present metrics to validate how well scaling laws predict loss and downstream performance (Section 3.3).

[Figure 4: three panels plotting loss on OpenLM eval vs. compute 6ND (FLOPs) for the grid search models (Search), the filtered subset (Filter), and fitted trends extrapolating to the 1.4B and 6.9B targets (Fit).]

Figure 4: Search, filter, fit: a recipe for selecting configurations for scaling. (left) To generate the final configurations presented in Table 1, we run a 435-model grid search over model width, hidden dimension, number of attention heads, batch size, and warmup steps. All models are trained near compute-optimally. (center) We plot the efficient frontier of models, which appear to follow a trend, excluding models from 5.2×10^16 to 5.2×10^17 FLOPs, which fall below the trend.
(right) We fit a power law with irreducible error to the remaining configurations, picking four configurations that closely track the full model suite. These models extrapolate the performance of the 1.4B and 6.9B target models. Shaded regions represent bootstrapped 95% confidence intervals.

3.1 Training setup

Architecture. We train transformers, based on auto-regressive, decoder-only, pre-normalization architectures like GPT-2 and LLaMA. We adopt OpenLM as our core modeling library, which utilizes PyTorch [78, 6], xformers, triton, FlashAttention, FSDP, and bfloat16 automatic mixed precision. Like LLaMA, we omit bias terms, but replace RMSNorm with LayerNorm, which has readily available fused implementations. Following Wortsman et al., we apply qk-LayerNorm, which adds robustness to otherwise poor hyperparameter choices (e.g., for learning rate). We use SwiGLU activations and depth-scaled initialization. We use a sequence length of 2048, rotary positional embeddings, and the GPT-NeoX-20B tokenizer, which yields a vocabulary size of 50k. We do not use weight tying [82, 44].

Objectives and optimization. We train with a standard causal language modeling objective (i.e., next-token prediction) with an additive z-loss (coefficient 1e-4), which mitigates output logit norm growth instabilities. We use the AdamW optimizer (PyTorch defaults except beta2 = 0.95), with independent weight decay (coefficient 1e-4). For the learning rate schedule, we use linear warmup and cosine decay. We cool down to a low learning rate (3e-5).

Training datasets. To ensure our conclusions are not particular to a training distribution, we train models on C4 [86, 25], RedPajama, and RefinedWeb. They are open-source and have 138B, 1.15T, and 600B tokens respectively. We sample without replacement and employ sequence packing without attention masking. We separate documents in our training corpora with end-of-text tokens.

Table 1: Main models and hyperparameters used in our investigation. Models have number of parameters N, with number of layers n_layers, number of attention heads n_heads, model width d_model, and width per attention head d_head. Batch sizes are global and in units of sequences. Each sequence has 2048 tokens. A100 GPU hours are at M = 20, which are near compute-optimal runs. For the 1.4B scale, a batch size of 256 performs slightly better than 512.

N        n_layers   n_heads   d_model   d_head   Warmup   Learning rate   Batch size   A100 hours (M = 20)
0.011B   8          4         96        24       100      3e-3            64           0.3
0.079B   8          4         512       128      400      3e-3            512          5
0.154B   24         8         576       72       400      3e-3            512          12
0.411B   24         8         1024      128      2000     3e-3            512          75
1.4B     24         16        2048      128      5000     3e-3            256          690
6.9B     32         32        4096      128      5000     3e-4            2048         17000

3.2 Creating scaling laws for validation loss and downstream error prediction

Recipe for model configuration selection. To create a testbed of models for our scaling experiments, we grid search over a total of 435 models, trained from scratch, in the 0.01B to 0.5B parameter range, as seen in Figure 4 (left). We train on the OpenLM data mix, which largely consists of tokens from RedPajama and The Pile. We train on 20 tokens per parameter (M = 20), which we find in early experiments gives models near the compute-optimal frontier for the data mix. This is similar to findings presented in Hoffmann et al.'s Table 3, which suggest that roughly 20 tokens per parameter are optimal in their experimental setup. Our validation set, OpenLM eval, contains tokens from recent arXiv papers, the OpenLM codebase itself, and news articles.
To find maximally performant models on validation data, we tune model width, number of layers, number of attention heads, warmup steps, and batch size. We find in early experiments that qk-LayerNorm makes models less sensitive to learning rate, a phenomenon Wortsman et al. report in their Figure 1. Hence, we fix the learning rate for our sweeps. We also perform smaller grid searches over 1.4B and 6.9B parameter model configurations at M = 20, retaining the best configurations.

In Figure 4 (center), we plot the efficient frontier of minimum-loss configurations. While there appears to be a general trend, configurations between 5.2×10^16 and 5.2×10^17 FLOPs lie below the frontier established by the other models. We hypothesize these models over-perform as they are trained for more optimization steps than their neighbors, based on our power-of-two batch sizes. We provide support for this hypothesis in Appendix E, but opt to remove these models from our investigation.

In Figure 4 (right), we fit trends to the remaining models and to a subset of four models. We notice that the trends hit both the 1.4B and 6.9B models, suggesting that our small-scale configurations are reasonable for extrapolation to larger parameter and compute regimes. We retain the four-model configuration subset as a representative sample. We do not tune hyperparameters for other token multipliers (i.e., M ≠ 20), other training or evaluation distributions, or on downstream task validation sets. For more details, see Appendix C. We present our final hyperparameters in Table 1 given their importance.

Fitting scaling laws. We fit Equation (4) to approximate E, a, b, γ_C using curve fitting in SciPy (i.e., Levenberg-Marquardt to minimize non-linear least squares). We try several initializations and retain the best fit. We repeat this process to fit Equation (5) to approximate ε, k, γ. Unless otherwise specified, we fit to the N, M pairs in Table 2. In total, we invest 100 A100 hours to train the models required for fitting an accurate scaling law for loss prediction and 1000 A100 hours for a corresponding scaling law for downstream error prediction. Our configurations allow us to test for extrapolation to the N = 1.4B, M = 640 (900B token) and the N = 6.9B, M = 20 (138B token) regimes.

Table 2: Default N, M used to fit our scaling laws. We invest 100 A100 hours to fit Equation (4) and 1000 A100 hours to fit Equation (5).

N        M     Used to fit Equation (4)   Used to fit Equation (5)
0.011B   20    ✓                          ✓
0.079B   20    ✓                          ✓
0.154B   20    ✓                          ✓
0.411B   20    ✓                          ✓
0.011B   320   ✓                          ✓
1.4B     20                               ✓
Total compute C [FLOPs]   2.4e19          2.7e20
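The two-step procedure itself is compact. The following is a minimal sketch of ours (not the released fitting code) using SciPy's curve_fit, which defaults to Levenberg-Marquardt here; synthetic values stand in for measured (C, M, loss, error) data, and the relative prediction error metric from Section 3.3 is computed at the end.

```python
import numpy as np
from scipy.optimize import curve_fit

def eq4(CM, E, a, b, gamma_C):
    # Equation (4): L = E + (a*M**gamma_C + b*M**(-gamma_C)) * C**(-gamma_C)
    C, M = CM
    return E + (a * M**gamma_C + b * M**(-gamma_C)) * C**(-gamma_C)

def eq5(L, eps, k, gamma):
    # Equation (5): Err = eps - k * exp(-gamma * L)
    return eps - k * np.exp(-gamma * L)

# Synthetic targets generated from known parameters so the sketch runs end to
# end; in practice these come from trained models.
C = np.array([1e17, 1e18, 1e19, 1e17, 1e18, 1e19])
M = np.array([20.0, 20.0, 20.0, 320.0, 320.0, 320.0])
L = eq4((C, M), 1.8, 40.0, 20.0, 0.15)
Err = eq5(L, 0.9, 2.0, 0.8)

# Step (i): fit ((C, M), L) pairs. The paper tries several initializations
# and keeps the best fit; one initialization suffices for this toy data.
pL, _ = curve_fit(eq4, (C, M), L, p0=[2.0, 30.0, 10.0, 0.2], maxfev=20000)

# Step (ii): fit (L, Err) pairs.
pE, _ = curve_fit(eq5, L, Err, p0=[1.0, 1.0, 1.0], maxfev=20000)

# Predict a larger, more over-trained run and compute the relative
# prediction error |pred - gt| / gt against the known ground truth.
C_t, M_t = np.array([7.5e21]), np.array([640.0])
err_pred = eq5(eq4((C_t, M_t), *pL), *pE)
err_gt = eq5(eq4((C_t, M_t), 1.8, 40.0, 20.0, 0.15), 0.9, 2.0, 0.8)
print(float(err_pred), float(abs(err_pred - err_gt) / err_gt))
```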
3.3 Evaluation setup

Evaluation datasets. Unless otherwise stated, C4 eval is our default validation loss dataset. For downstream tasks, we adopt 46 tasks from LLM-Foundry, which includes standard tasks with both zero-shot and few-shot evaluations. We also consider a 17-task subset where, for each evaluation, at least one 0.154B scale model (trained with as many as 99B tokens) gets 10 percentage points above chance accuracy: ARC-Easy, BIG-bench: CS algorithms, BIG-bench: Dyck languages, BIG-bench: Novel Concepts, BIG-bench: Operators, BIG-bench: QA WikiData, BoolQ, Commonsense QA, COPA, CoQA, HellaSwag (zero-shot), HellaSwag (10-shot), LAMBADA, PIQA, PubMed QA Labeled, SQuAD, and WinoGrande. This subset allows us to investigate a regime where performance for small models can be non-trivial. For more details on evaluation datasets, see Appendix D. For ablations on our choices of loss and downstream evaluations, see Appendix E.

Metrics. We consider three main metrics: (i) Validation loss, which is the cross-entropy between a model's output and the one-hot ground truth, averaged over all tokens in a sequence and over all sequences in a dataset. (ii) Average top-1 error, which is a uniform average over the 46 downstream evaluations sourced from LLM-Foundry. We also look at the mean top-1 error for the subset of 17 evaluations identified in the paragraph above. For a complete list of downstream evaluation datasets, see Appendix D. To measure how good a prediction ζ(C, M) is, we measure (iii) Relative prediction error: |ζ(C, M) − ζ_GT| / ζ_GT, where ζ is the loss L or the average top-1 error Err.

Testbed. We train models on C4, RedPajama, and RefinedWeb with the number of parameters N ∈ {0.011B, 0.079B, 0.154B, 0.411B} and token multipliers M ∈ {5, 10, 20, 40, 80, 160, 320, 640}. We omit runs that require more tokens than are present in a dataset (i.e., N = 0.411B, M = 640 for C4). We additionally train N = 1.4B models at M = 20 and at the largest token multiplier possible without repeating tokens (i.e., 80 for C4, 640 for RedPajama, and 320 for RefinedWeb). We train N = 6.9B, M = 20 for each dataset. In total this results in 104 models. We evaluate each model on C4 eval for validation loss and on the 46 downstream tasks for top-1 error.

[Figure 5: heat maps of relative error on C4 eval over N (0.011B to 6.9B) and token multiplier M (10 to 640), one panel per training set (C4, RedPajama, RefinedWeb).]

Figure 5: Relative error on C4 eval for different training distributions. Boxes highlighted in yellow correspond to data points used to fit Equation (4); all other points are predicted based on the fit. The prediction error is low in both interpolation and extrapolation ranges for number of parameters N and token multiplier M, with larger values of M specifying more over-training. Empty squares correspond to runs that were not possible due to compute limitations at the 1.4B and 6.9B scales or to limited dataset size for single-epoch training.

4 Results

Unless otherwise stated, we fit Equations (4), (5) to the configurations in Table 2 and use C4 eval for loss computation.

Over-trained performance is predictable. We highlight our main over-training results in Figure 1 (left). Namely, we are able to extrapolate both in the number of parameters N and the token multiplier M to closely predict the C4 eval performance of a 1.4B parameter model trained on 900B RedPajama tokens (N = 1.4B, M = 640). Our prediction, which takes 300× less compute to construct than the final 1.4B run, is accurate to within 0.7% relative error. Additionally, for the N = 6.9B, M = 20 run, near compute-optimal, the relative error is also 0.7%. These results support several key takeaways: (i) Scaling can be predictable even when one increases the model size and the amount of over-training compared to the training runs used to fit a scaling law. (ii) Scaling can be predictable even in the presence of a distribution shift (e.g., RedPajama training and C4 evaluation).
(iii) The form presented in Equation (4) is useful in practice for fitting and predicting over-trained scaling behavior. (iv) Fitting to Equation (4) does not sacrifice prediction accuracy near compute-optimal.

While Figure 1 explores a specific case of making predictions in the over-trained regime, we would like to understand the error profile of our predictions across datasets, token multipliers, and numbers of parameters. Hence, in Figure 5 we show the relative error between ground-truth loss and predicted loss on C4 eval for models in our testbed. We notice uniformly low prediction error, suggesting that predictions are accurate in many settings.

Average top-1 error is predictable. Figure 1 (right) presents our results in estimating scaling laws for downstream error. Similar to Figure 1 (left), we are able to extrapolate in N, M and predict the average downstream error across our evaluation suite. Concretely, we use the models indicated in Table 2 to fit Equation (5), and predict the average top-1 error over the 17 tasks identified in Section 3.3.

Table 3: Downstream relative prediction error at 6.9B, 138B tokens. While predicting accuracy on individual zero-shot downstream evaluations can be challenging (Individual), predicting averages across downstream datasets is accurate (Average). On the right, we report relative prediction error for the average over a 17-task subset and over the full 46-task suite.

             Individual top-1 error               Average top-1 error
Train set    MMLU     OpenBook QA   HellaSwag     17-task split   46-task split
C4 [86, 25]  2.82%    16.80%        79.58%        0.42%           0.76%
RedPajama    0.12%    8.44%         25.73%        0.05%           2.10%
RefinedWeb   0.77%    1.92%         81.96%        2.94%           2.76%

Our fit allows us to predict the downstream performance of a 6.9B parameter model trained on 138B tokens to within 0.05% relative error, and of a 1.4B model trained on 900B tokens to within 3.6% relative error, using 20× less compute. Table 3 additionally shows the relative error of our downstream performance predictions for models trained on C4, RedPajama, and RefinedWeb, indicating that our scaling law fits are applicable to other datasets.

We note that while average accuracy across benchmarks is predictable, predicting accuracy on individual downstream tasks is significantly more noisy. We report the relative error of all our predictions in Figures 11 and 12 in the Appendix. We also find that if we remove the 1.4B model from the Equation (5) fit, relative error jumps, for instance, from 0.05% to 10.64% on the 17-task split for the 6.9B, 138B token RedPajama prediction.

Small-scale experiments can predict model rank order. We expect to be able to rank models based on their predicted performance, which is useful when deciding what to train. To verify, we rank 9 testbed models with N ≥ 1.4B by ground-truth top-1 error and by estimated top-1 error. We find high rank correlations: 0.93 and 0.88 for the 46- and 17-task splits respectively.

Under-training, out-of-distribution scaling, and compute-reliability trade-offs. In addition to our main results presented above, we include additional results in Appendix E, which we summarize here. First, we notice that when token multipliers become too small (i.e., M = 5), scaling becomes unreliable and lies off the trend. Additionally, several multipliers (10, 20, 40, and 80) garner points that are roughly on the compute-optimal frontier (Figure 9). To probe the limits of reliable scaling, we attempt to break our scaling laws in out-of-distribution settings.
We find that models trained on C4 (English filtered) and evaluated on next-token prediction on code domains have a high relative error in many cases. Perhaps surprisingly, evaluating the same models on German next-token prediction again gives reliable loss scaling (Figure 10). We additionally examine the compute necessary to create accurate scaling laws, finding a positive correlation between investing more compute in a scaling law and its predictivity. We find that scaling laws can be constructed more cheaply for loss prediction than for downstream error prediction (Figures 15, 16).

5 Related work

We review the most closely related work in this section. For additional related work, see Appendix F.

Scaling laws. Early works on scaling artificial neural networks observe predictable power-law scaling in the training set size and number of model parameters [41, 42, 91]. Alabdulmohsin et al. center the importance of looking at the extrapolation regime of a scaling law. Yang et al. prescribe architectural and hyperparameter changes when scaling model width to realize performant models; Yang et al. make analogous recommendations when scaling model depth. Unlike the aforementioned work, our investigation focuses on the link between over-training models and predicting their downstream performance on accuracy metrics.

Hoffmann et al. investigate how the number of model parameters N and training tokens D should be chosen to minimize loss L given a compute budget C. Hoffmann et al. find that when scaling up C, both N and D should be scaled equally, up to a multiplicative constant (i.e., N ∝ C^0.5 and D ∝ C^0.5), to realize compute-optimality. Appendix C of the Chinchilla paper additionally suggests that these findings hold across many datasets. However, Hoffmann et al. do not account for inference costs, provide scaling laws for training beyond compute-optimal, or for downstream error prediction, all of which are central to our work.

Sardana & Frankle proposed modifications to the Chinchilla formulation to incorporate inference costs into the definition of compute-optimal and solve for various fixed inference budgets. Their key finding, which is critical for our work, is that when taking into account a large enough inference budget, it is optimal to train smaller models for longer when compared to the original Chinchilla recommendations. Our work presupposes that over-training can be beneficial. Instead of solving for inference-optimal schemes, we support empirically a predictive theory of scaling in the over-trained regime. Additionally, we provide experiments across many validation and training sets.

For predicting downstream scaling beyond loss, Isik et al. relate the number of pre-training tokens to downstream cross-entropy and machine translation BLEU score after fine-tuning. In contrast, we do not examine a specific domain but rather take a holistic approach to evaluation by looking at top-1 error over many natural language tasks. Schaeffer et al. argue that emergent abilities and unreliable scaling are a product of non-linear metrics like error/accuracy and propose smoother alternatives.
As an explanation for why non-linear metrics may be hard to predict, Schaeffer et al. consider predicting exactly an ℓ-length sequence: Err(N, ℓ) ≈ 1 − PP(N)^(−ℓ), where N is the number of parameters in a model and PP is its perplexity. This is a special case of our Equations (5), (6), where the number of training tokens is ignored, ε = 1, k = 1, and γ = ℓ. In contrast, we treat ε, k, γ as free parameters for a scaling law fit and embrace top-1 error, finding that average error over downstream tasks can make for a predictable metric.

Over-training in popular models. There has been a rise in over-trained models [111, 112] and accompanying massive datasets [110, 80, 102, 3]. To contextualize the extent to which we over-train, we provide token multipliers for popular models in Table 4. For example, Chinchilla 70B is trained with a token multiplier of 20, while LLaMA-2 7B uses a token multiplier of 290. In our investigation, we look at token multipliers from 5 to 640 to ensure coverage of popular models and relevance for future models that may be trained on even more tokens.

Table 4: Token multipliers of existing models. In our work, we run experiments with token multipliers between 5 and 640 for (GPT-2-, LLaMA-)style decoder-only architectures.

Model family   Parameters N   Training tokens D   Token multiplier M
T5             11B            34B                 3.1
GPT-3          175B           300B                1.7
Gopher         280B           300B                1.1
Chinchilla     70B            1.4T                20.0
LLaMA          7B             1T                  140.0
LLaMA          70B            1.4T                20.0
LLaMA-2        7B             2T                  290.0
LLaMA-2        70B            2T                  30.0
XGen           7B             1.5T                210.0
MPT            7B             1T                  140.0

6 Limitations, future work, and conclusion

Limitations and future work. We identify limitations, which provide motivation for future work.
- Hyperparameters. While our configurations are surprisingly amenable to reliable scaling across many training and testing distributions without further tuning, there is a need to further develop scaling laws that incorporate hyperparameters.
- Scaling up. Validating the trends in this paper for even larger runs is a valuable direction.
- Scaling down. Actualizing predictable scaling with even cheaper runs is important to make this area of research more accessible, especially for downstream error prediction.
- Failure cases. While we present preliminary analysis of when scaling is unreliable, future work should develop an analytic theory explaining when scaling breaks down.
- Post-training. It is common to employ supervised fine-tuning and reinforcement learning after pre-training, which we do not consider. Quantifying to what degree over-training the base model provides benefits after post-training is an open area of research.
- Individual downstream task prediction. While we find that averaging over many task error metrics can make for a predictable metric, per-task predictions are left to future work.
- In-the-wild performance. Downstream task performance is a proxy for the in-the-wild user experience. Analyzing scaling trends in the context of this experience is timely.
- Dataset curation. Our work only deals with existing training datasets. Exploring dataset curation for improved model scaling is another promising direction.

Conclusion. We (i) show that the loss scaling behavior of models trained past compute-optimal, in the over-trained regime, is predictable, and (ii) predict, via a proposed scaling law, the downstream average task performance of more expensive runs using smaller-scale proxies. We hope our work will inspire others to further examine the relationship between model training and downstream generalization. We also hope our testbed will make scaling research more accessible to researchers and practitioners alike.

Acknowledgements

SYG is supported by an NSF Graduate Research Fellowship, GS by the Onassis Foundation Scholarship ID: F ZS 056-1/2022-2023, and MN by the Federal Ministry of Education and Research of Germany under grant no. 01IS22094B WEST-AI. We thank Stability AI and Toyota Research Institute (TRI) for access to compute resources.
This research has been supported by NSF Grants AF 1901292, CNS 2148141, Tripods CCF 1934932, IFML CCF 2019844, and research gifts by Western Digital, Amazon, WNCG IAP, UT Austin Machine Learning Lab (MLL), Cisco, and the Stanly P. Finch Centennial Professorship in Engineering. We also thank Kushal Arora, Alper Canberk, Mia Chiquier, Sachit Menon, Chuer Pan, Purva Tendulkar, and Mandi Zhao for valuable feedback.

References

[1] Samira Abnar, Mostafa Dehghani, Behnam Neyshabur, and Hanie Sedghi. Exploring the limits of large scale pre-training. In International Conference on Learning Representations (ICLR), 2022. https://arxiv.org/abs/2110.02095

[2] Ibrahim Alabdulmohsin, Behnam Neyshabur, and Xiaohua Zhai. Revisiting neural scaling laws in language and vision. In Advances in Neural Information Processing Systems (NeurIPS), 2022. https://arxiv.org/abs/2209.06640

[3] Alon Albalak, Yanai Elazar, Sang Michael Xie, Shayne Longpre, Nathan Lambert, Xinyi Wang, Niklas Muennighoff, Bairu Hou, Liangming Pan, Haewon Jeong, et al. A survey on data selection for language models. arXiv preprint, 2024. https://arxiv.org/abs/2402.16827

[4] Loubna Ben Allal, Raymond Li, Denis Kocetkov, Chenghao Mou, Christopher Akiki, Carlos Munoz Ferrandis, Niklas Muennighoff, Mayank Mishra, Alex Gu, Manan Dey, et al. SantaCoder: don't reach for the stars! arXiv preprint, 2023. https://arxiv.org/abs/2301.03988

[5] Aida Amini, Saadia Gabriel, Shanchuan Lin, Rik Koncel-Kedziorski, Yejin Choi, and Hannaneh Hajishirzi. MathQA: Towards interpretable math word problem solving with operation-based formalisms. In Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), 2019. https://aclanthology.org/N19-1245

[6] Jason Ansel, Edward Yang, Horace He, Natalia Gimelshein, Animesh Jain, Michael Voznesensky, Bin Bao, David Berard, Geeta Chauhan, Anjali Chourdia, Will Constable, Alban Desmaison, Zachary DeVito, Elias Ellison, Will Feng, Jiong Gong, Michael Gschwind, Brian Hirsh, Sherlock Huang, Laurent Kirsch, Michael Lazos, Yanbo Liang, Jason Liang, Yinghai Lu, CK Luk, Bert Maher, Yunjie Pan, Christian Puhrsch, Matthias Reso, Mark Saroufim, Helen Suk, Michael Suo, Phil Tillet, Eikan Wang, Xiaodong Wang, William Wen, Shunting Zhang, Xu Zhao, Keren Zhou, Richard Zou, Ajit Mathews, Gregory Chanan, Peng Wu, and Soumith Chintala. PyTorch 2: Faster machine learning through dynamic Python bytecode transformation and graph compilation. In International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), 2024. https://pytorch.org/blog/pytorch-2-paper-tutorial

[7] Mikel Artetxe, Shruti Bhosale, Naman Goyal, Todor Mihaylov, Myle Ott, Sam Shleifer, Xi Victoria Lin, Jingfei Du, Srinivasan Iyer, Ramakanth Pasunuru, Giridharan Anantharaman, Xian Li, Shuohui Chen, Halil Akin, Mandeep Baines, Louis Martin, Xing Zhou, Punit Singh Koura, Brian O'Horo, Jeffrey Wang, Luke Zettlemoyer, Mona Diab, Zornitsa Kozareva, and Veselin Stoyanov. Efficient large scale language modeling with mixtures of experts. In Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022. https://aclanthology.org/2022.emnlp-main.804

[8] Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv preprint, 2016. https://arxiv.org/abs/1607.06450

[9] Yasaman Bahri, Ethan Dyer, Jared Kaplan, Jaehoon Lee, and Utkarsh Sharma. Explaining neural scaling laws. arXiv preprint, 2021. https://arxiv.org/abs/2102.06701
Yamini Bansal, Behrooz Ghorbani, Ankush Garg, Biao Zhang, Maxim Krikun, Colin Cherry, Behnam Neyshabur, and Orhan Firat. Data scaling laws in NMT: The effect of noise and architecture. In International Conference on Machine Learning (ICML), 2022. https://proceedings.mlr.press/v162/bansal22b.html.

BIG-bench authors. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. In Transactions on Machine Learning Research (TMLR), 2023. https://openreview.net/forum?id=uyTL5Bvosj.

Yonatan Bisk, Rowan Zellers, Ronan Le Bras, Jianfeng Gao, and Yejin Choi. PIQA: Reasoning about physical commonsense in natural language. In Association for the Advancement of Artificial Intelligence (AAAI), 2020. https://arxiv.org/abs/1911.11641.

Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, Michael Pieler, USVSN Sai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, and Samuel Weinbach. GPT-NeoX-20B: An open-source autoregressive language model. BigScience Episode #5 Workshop on Challenges & Perspectives in Creating Large Language Models, 2022. https://aclanthology.org/2022.bigscience-1.9.

Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. In Advances in Neural Information Processing Systems (NeurIPS), 2020. https://arxiv.org/abs/2005.14165.

Ethan Caballero, Kshitij Gupta, Irina Rish, and David Krueger. Broken neural scaling laws. In International Conference on Learning Representations (ICLR), 2023. https://openreview.net/forum?id=sckjveqlCZ.

Mehdi Cherti, Romain Beaumont, Ross Wightman, Mitchell Wortsman, Gabriel Ilharco, Cade Gordon, Christoph Schuhmann, Ludwig Schmidt, and Jenia Jitsev. Reproducible scaling laws for contrastive language-image learning. In Conference on Computer Vision and Pattern Recognition (CVPR), 2023. https://arxiv.org/abs/2212.07143.

Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam M. Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Benton C. Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier García, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Díaz, Orhan Firat, Michele Catasta, Jason Wei, Kathleen S. Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. PaLM: Scaling language modeling with pathways. In Journal of Machine Learning Research (JMLR), 2022. https://arxiv.org/abs/2204.02311.
Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Yunxuan Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. Scaling instruction-finetuned language models. arXiv preprint, 2022. https://arxiv.org/abs/2210.11416.

Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. BoolQ: Exploring the surprising difficulty of natural yes/no questions. In Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), 2019. https://aclanthology.org/N19-1300.

Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. ELECTRA: Pre-training text encoders as discriminators rather than generators. In International Conference on Learning Representations (ICLR), 2020. https://openreview.net/pdf?id=r1xMH1BtvB.

Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. Think you have solved question answering? Try ARC, the AI2 reasoning challenge. arXiv preprint, 2018. https://arxiv.org/abs/1803.05457.

Tri Dao, Daniel Y. Fu, Stefano Ermon, Atri Rudra, and Christopher Ré. FlashAttention: Fast and memory-efficient exact attention with IO-awareness. In Advances in Neural Information Processing Systems (NeurIPS), 2022. https://arxiv.org/abs/2205.14135.

Mostafa Dehghani, Josip Djolonga, Basil Mustafa, Piotr Padlewski, Jonathan Heek, Justin Gilmer, Andreas Peter Steiner, Mathilde Caron, Robert Geirhos, Ibrahim Alabdulmohsin, et al. Scaling vision transformers to 22 billion parameters. In International Conference on Machine Learning (ICML), 2023. https://proceedings.mlr.press/v202/dehghani23a.html.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), 2019. https://aclanthology.org/N19-1423.

Jesse Dodge, Maarten Sap, Ana Marasović, William Agnew, Gabriel Ilharco, Dirk Groeneveld, Margaret Mitchell, and Matt Gardner. Documenting large webtext corpora: A case study on the colossal clean crawled corpus. In Conference on Empirical Methods in Natural Language Processing (EMNLP), 2021. https://aclanthology.org/2021.emnlp-main.98.

Nan Du, Yanping Huang, Andrew M. Dai, Simon Tong, Dmitry Lepikhin, Yuanzhong Xu, Maxim Krikun, Yanqi Zhou, Adams Wei Yu, Orhan Firat, Barret Zoph, Liam Fedus, Maarten Bosma, Zongwei Zhou, Tao Wang, Yu Emma Wang, Kellie Webster, Marie Pellat, Kevin Robinson, Kathleen Meier-Hellstern, Toju Duke, Lucas Dixon, Kun Zhang, Quoc V Le, Yonghui Wu, Zhifeng Chen, and Claire Cui. GLaM: Efficient scaling of language models with mixture-of-experts. In International Conference on Machine Learning (ICML), 2022. https://arxiv.org/abs/2112.06905.

Kawin Ethayarajh, Winnie Xu, Niklas Muennighoff, Dan Jurafsky, and Douwe Kiela. KTO: Model alignment as prospect theoretic optimization. arXiv preprint, 2024. https://arxiv.org/abs/2402.01306.
Samir Yitzhak Gadre, Gabriel Ilharco, Alex Fang, Jonathan Hayase, Georgios Smyrnis, Thao Nguyen, Mitchell Wortsman, Ryan Marten, Dhruba Ghosh, Jieyu Zhang, Eyal Orgad, Rahim Entezari, Giannis Daras, Sarah Pratt, Vivek Ramanujan, Yonatan Bitton, Kalyani Marathe, Stephen Mussmann, Mehdi Cherti, Richard Vencu, Ranjay Krishna, Pang Wei Koh, Olga Saukh, Alexander Ratner, Shuran Song, Hannaneh Hajishirzi, Ali Farhadi, Romain Beaumont, Sewoong Oh, Alex Dimakis, Jenia Jitsev, Yair Carmon, Vaishaal Shankar, and Ludwig Schmidt. DataComp: In search of the next generation of multimodal datasets. In Advances in Neural Information Processing Systems (NeurIPS), 2023. https://arxiv.org/abs/2304.14108.

Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, and Connor Leahy. The Pile: An 800GB dataset of diverse text for language modeling. arXiv preprint, 2020. https://arxiv.org/abs/2101.00027.

Behrooz Ghorbani, Orhan Firat, Markus Freitag, Ankur Bapna, Maxim Krikun, Xavier Garcia, Ciprian Chelba, and Colin Cherry. Scaling laws for neural machine translation. arXiv preprint, 2021. https://arxiv.org/abs/2109.07740.

Mitchell A Gordon, Kevin Duh, and Jared Kaplan. Data and parameter scaling laws for neural machine translation. In Conference on Empirical Methods in Natural Language Processing (EMNLP), 2021. https://aclanthology.org/2021.emnlp-main.478.

Dirk Groeneveld, Iz Beltagy, Pete Walsh, Akshita Bhagia, Rodney Kinney, Oyvind Tafjord, Ananya Harsh Jha, Hamish Ivison, Ian Magnusson, Yizhong Wang, et al. OLMo: Accelerating the science of language models. arXiv preprint, 2024. https://arxiv.org/abs/2402.00838.

Albert Gu and Tri Dao. Mamba: Linear-time sequence modeling with selective state spaces. arXiv preprint, 2023. https://arxiv.org/abs/2312.00752.

Albert Gu, Isys Johnson, Karan Goel, Khaled Saab, Tri Dao, Atri Rudra, and Christopher Ré. Combining recurrent, convolutional, and continuous-time models with linear state space layers. In Advances in Neural Information Processing Systems (NeurIPS), 2021. https://openreview.net/forum?id=yWd42CWN3c.

Albert Gu, Karan Goel, and Christopher Ré. Efficiently modeling long sequences with structured state spaces. In International Conference on Learning Representations (ICLR), 2022. https://arxiv.org/abs/2111.00396.

Suriya Gunasekar, Yi Zhang, Jyoti Aneja, Caio César Teodoro Mendes, Allie Del Giorno, Sivakanth Gopi, Mojan Javaheripi, Piero Kauffmann, Gustavo de Rosa, Olli Saarikivi, Adil Salim, Shital Shah, Harkirat Singh Behl, Xin Wang, Sébastien Bubeck, Ronen Eldan, Adam Tauman Kalai, Yin Tat Lee, and Yuanzhi Li. Textbooks are all you need. Preprint, 2023. https://www.microsoft.com/en-us/research/publication/textbooks-are-all-you-need.

Suchin Gururangan, Mitchell Wortsman, Samir Yitzhak Gadre, Achal Dave, Maciej Kilian, Weijia Shi, Jean Mercat, Georgios Smyrnis, Gabriel Ilharco, Matt Jordan, Reinhard Heckel, Alex Dimakis, Ali Farhadi, Vaishaal Shankar, and Ludwig Schmidt. OpenLM: a minimal but performative language modeling (LM) repository, 2023. https://github.com/mlfoundations/open_lm.

Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring massive multitask language understanding. In International Conference on Learning Representations (ICLR), 2021. https://arxiv.org/abs/2009.03300.

T. J. Henighan, Jared Kaplan, Mor Katz, Mark Chen, Christopher Hesse, Jacob Jackson, Heewoo Jun, Tom B.
Brown, Prafulla Dhariwal, Scott Gray, Chris Hallacy, Benjamin Mann, Alec Radford, Aditya Ramesh, Nick Ryder, Daniel M. Ziegler, John Schulman, Dario Amodei, and Sam McCandlish. Scaling laws for autoregressive generative modeling. arXiv preprint, 2020. https://arxiv.org/abs/2010.14701.

Danny Hernandez, Jared Kaplan, T. J. Henighan, and Sam McCandlish. Scaling laws for transfer. arXiv preprint, 2021. https://arxiv.org/abs/2102.01293.

Joel Hestness, Sharan Narang, Newsha Ardalani, Gregory Frederick Diamos, Heewoo Jun, Hassan Kianinejad, Md. Mostofa Ali Patwary, Yang Yang, and Yanqi Zhou. Deep learning scaling is predictable, empirically. arXiv preprint, 2017. https://arxiv.org/abs/1712.00409.

Joel Hestness, Newsha Ardalani, and Gregory Diamos. Beyond human-level accuracy: Computational challenges in deep learning. In Principles and Practice of Parallel Programming (PPoPP), 2019. https://arxiv.org/abs/1909.01736.

Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, et al. Training compute-optimal large language models. In Advances in Neural Information Processing Systems (NeurIPS), 2022. https://arxiv.org/abs/2203.15556.

Hakan Inan, Khashayar Khosravi, and Richard Socher. Tying word vectors and word classifiers: A loss framework for language modeling. In International Conference on Learning Representations (ICLR), 2017. https://arxiv.org/abs/1611.01462.

Berivan Isik, Natalia Ponomareva, Hussein Hazimeh, Dimitris Paparas, Sergei Vassilvitskii, and Sanmi Koyejo. Scaling laws for downstream task performance of large language models. arXiv preprint, 2024. https://arxiv.org/abs/2402.04177.

Maor Ivgi, Yair Carmon, and Jonathan Berant. Scaling laws under the microscope: Predicting transformer performance from small scale experiments. In Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022. https://aclanthology.org/2022.findings-emnlp.544.

Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Florian Bressand, Diego de las Casas, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, Lélio Renard Lavaud, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, and William El Sayed. Mistral 7B. arXiv preprint, 2023. https://arxiv.org/abs/2310.06825.

Qiao Jin, Bhuwan Dhingra, Zhengping Liu, William Cohen, and Xinghua Lu. PubMedQA: A dataset for biomedical research question answering. In Conference on Empirical Methods in Natural Language Processing (EMNLP), 2019. https://aclanthology.org/D19-1259.

Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling laws for neural language models. arXiv preprint, 2020. https://arxiv.org/abs/2001.08361.

Tobit Klug, Dogukan Atik, and Reinhard Heckel. Analyzing the sample complexity of self-supervised image reconstruction methods. arXiv preprint, 2023. https://arxiv.org/abs/2305.19079.

Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. ALBERT: A lite BERT for self-supervised learning of language representations. arXiv preprint, 2019. http://arxiv.org/abs/1909.11942.

Benjamin Lefaudeux, Francisco Massa, Diana Liskovich, Wenhan Xiong, Vittorio Caggiano, Sean Naren, Min Xu, Jieru Hu, Marta Tintore, Susan Zhang, Patrick Labatut, and Daniel Haziza.
xFormers: A modular and hackable transformer modelling library, 2022. https://github.com/facebookresearch/xformers.

Hector Levesque, Ernest Davis, and Leora Morgenstern. The Winograd schema challenge. In International conference on the principles of knowledge representation and reasoning, 2012. https://aaai.org/papers/59-4492-the-winograd-schema-challenge.

Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. BART: Denoising sequence-to-sequence pretraining for natural language generation, translation, and comprehension. In Annual Meeting of the Association for Computational Linguistics (ACL), 2020. https://aclanthology.org/2020.acl-main.703.

Raymond Li, Loubna Ben Allal, Yangtian Zi, Niklas Muennighoff, Denis Kocetkov, Chenghao Mou, Marc Marone, Christopher Akiki, Jia Li, Jenny Chim, et al. StarCoder: may the source be with you! arXiv preprint, 2023. https://arxiv.org/abs/2305.06161.

Jian Liu, Leyang Cui, Hanmeng Liu, Dandan Huang, Yile Wang, and Yue Zhang. LogiQA: A challenge dataset for machine reading comprehension with logical reasoning. In International Joint Conference on Artificial Intelligence, 2020. https://arxiv.org/abs/2007.08124.

Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint, 2019. http://arxiv.org/abs/1907.11692.

Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, and Saining Xie. A ConvNet for the 2020s. In Conference on Computer Vision and Pattern Recognition (CVPR), 2022. https://arxiv.org/abs/2201.03545.

Shayne Longpre, Robert Mahari, Anthony Chen, Naana Obeng-Marnu, Damien Sileo, William Brannon, Niklas Muennighoff, Nathan Khazam, Jad Kabbara, Kartik Perisetla, et al. The data provenance initiative: A large scale audit of dataset licensing & attribution in AI. arXiv preprint, 2023. https://arxiv.org/abs/2310.16787.

Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. arXiv preprint, 2017. https://arxiv.org/abs/1711.05101.

Anton Lozhkov, Raymond Li, Loubna Ben Allal, Federico Cassano, Joel Lamy-Poirier, Nouamane Tazi, Ao Tang, Dmytro Pykhtar, Jiawei Liu, Yuxiang Wei, Tianyang Liu, Max Tian, Denis Kocetkov, Arthur Zucker, Younes Belkada, Zijian Wang, Qian Liu, Dmitry Abulkhanov, Indraneil Paul, Zhuang Li, Wen-Ding Li, Megan Risdal, Jia Li, Jian Zhu, Terry Yue Zhuo, Evgenii Zheltonozhskii, Nii Osae Osae Dade, Wenhao Yu, Lucas Krauß, Naman Jain, Yixuan Su, Xuanli He, Manan Dey, Edoardo Abati, Yekun Chai, Niklas Muennighoff, Xiangru Tang, Muhtasham Oblokulov, Christopher Akiki, Marc Marone, Chenghao Mou, Mayank Mishra, Alex Gu, Binyuan Hui, Tri Dao, Armel Zebaze, Olivier Dehaene, Nicolas Patry, Canwen Xu, Julian McAuley, Han Hu, Torsten Scholak, Sebastien Paquet, Jennifer Robinson, Carolyn Jane Anderson, Nicolas Chapados, Mostofa Patwary, Nima Tajbakhsh, Yacine Jernite, Carlos Muñoz Ferrandis, Lingming Zhang, Sean Hughes, Thomas Wolf, Arjun Guha, Leandro von Werra, and Harm de Vries. StarCoder 2 and The Stack v2: The next generation. arXiv preprint, 2024. https://arxiv.org/abs/2402.19173.

Risto Luukkonen, Ville Komulainen, Jouni Luoma, Anni Eskelinen, Jenna Kanerva, Hanna-Mari Kupari, Filip Ginter, Veronika Laippala, Niklas Muennighoff, Aleksandra Piktus, et al. FinGPT: Large generative models for a small language.
In Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023. https://aclanthology.org/2023.emnlp-main.164.

Ian Magnusson, Akshita Bhagia, Valentin Hofmann, Luca Soldaini, Ananya Harsh Jha, Oyvind Tafjord, Dustin Schwenk, Evan Pete Walsh, Yanai Elazar, Kyle Lo, Dirk Groeneveld, Iz Beltagy, Hannaneh Hajishirzi, Noah A. Smith, Kyle Richardson, and Jesse Dodge. Paloma: A benchmark for evaluating language model fit. arXiv preprint, 2023. https://paloma.allen.ai.

Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. Building a large annotated corpus of English: The Penn Treebank. In Computational Linguistics, 1993. https://aclanthology.org/J93-2004.

William Merrill, Vivek Ramanujan, Yoav Goldberg, Roy Schwartz, and Noah A. Smith. Effects of parameter norm growth during transformer training: Inductive bias from gradient descent. In Conference on Empirical Methods in Natural Language Processing (EMNLP), 2021. https://aclanthology.org/2021.emnlp-main.133.

Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal. Can a suit of armor conduct electricity? A new dataset for open book question answering. In Conference on Empirical Methods in Natural Language Processing (EMNLP), 2018. https://arxiv.org/abs/1809.02789.

MosaicML. LLM evaluation scores, 2023. https://www.mosaicml.com/llm-evaluation.

Niklas Muennighoff, Thomas Wang, Lintang Sutawika, Adam Roberts, Stella Biderman, Teven Le Scao, M Saiful Bari, Sheng Shen, Zheng-Xin Yong, Hailey Schoelkopf, et al. Crosslingual generalization through multitask finetuning. In Annual Meeting of the Association for Computational Linguistics (ACL), 2023. https://aclanthology.org/2023.acl-long.891.

Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro Von Werra, and Shayne Longpre. OctoPack: Instruction tuning code large language models. arXiv preprint, 2023. https://arxiv.org/abs/2308.07124.

Niklas Muennighoff, Alexander M Rush, Boaz Barak, Teven Le Scao, Aleksandra Piktus, Nouamane Tazi, Sampo Pyysalo, Thomas Wolf, and Colin Raffel. Scaling data-constrained language models. In Advances in Neural Information Processing Systems (NeurIPS), 2023. https://arxiv.org/abs/2305.16264.

Niklas Muennighoff, Hongjin Su, Liang Wang, Nan Yang, Furu Wei, Tao Yu, Amanpreet Singh, and Douwe Kiela. Generative representational instruction tuning. arXiv preprint, 2024. https://arxiv.org/abs/2402.09906.

Erik Nijkamp, Tian Xie, Hiroaki Hayashi, Bo Pang, Congying Xia, Chen Xing, Jesse Vig, Semih Yavuz, Philippe Laban, Ben Krause, Senthil Purushwalkam, Tong Niu, Wojciech Kryscinski, Lidiya Murakhovska, Prafulla Kumar Choubey, Alex Fabbri, Ye Liu, Rui Meng, Lifu Tu, Meghana Bhat, Chien-Sheng Wu, Silvio Savarese, Yingbo Zhou, Shafiq Rayhan Joty, and Caiming Xiong. Long sequence modeling with XGen: A 7B LLM trained on 8K input sequence length. arXiv preprint, 2023. https://arxiv.org/abs/2309.03450.

OpenAI. Triton, 2021. https://github.com/openai/triton.

OpenAI. GPT-4 technical report, 2023. https://arxiv.org/abs/2303.08774.

Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Ngoc Quan Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, and Raquel Fernández. The LAMBADA dataset: Word prediction requiring a broad discourse context. In Annual Meeting of the Association for Computational Linguistics (ACL), 2016. http://www.aclweb.org/anthology/P16-1144.

Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu.
BLEU: a method for automatic evaluation of machine translation. In Annual Meeting of the Association for Computational Linguistics (ACL), 2002. https://aclanthology.org/P02-1040.

Alicia Parrish, Angelica Chen, Nikita Nangia, Vishakh Padmakumar, Jason Phang, Jana Thompson, Phu Mon Htut, and Samuel Bowman. BBQ: A hand-built bias benchmark for question answering. In Annual Meeting of the Association for Computational Linguistics (ACL), 2022. https://aclanthology.org/2022.findings-acl.165.

Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. PyTorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems (NeurIPS), 2019. https://arxiv.org/abs/1912.01703.

Patronus AI. EnterprisePII dataset, 2023. https://tinyurl.com/2r5x9bst.

Guilherme Penedo, Quentin Malartic, Daniel Hesslow, Ruxandra Cojocaru, Alessandro Cappelli, Hamza Alobeidli, Baptiste Pannier, Ebtesam Almazrouei, and Julien Launay. The RefinedWeb dataset for Falcon LLM: outperforming curated corpora with web data, and web data only. arXiv preprint, 2023. https://arxiv.org/abs/2306.01116.

Bo Peng, Eric Alcaide, Quentin Anthony, Alon Albalak, Samuel Arcadinho, Stella Biderman, Huanqi Cao, Xin Cheng, Michael Chung, Leon Derczynski, Xingjian Du, Matteo Grella, Kranthi Gv, Xuzheng He, Haowen Hou, Przemysław Kazienko, Jan Kocoń, Jiaming Kong, Bartłomiej Koptyra, Hayden Lau, Jiaju Lin, Krishna Sri Ipsit Mantri, Ferdinand Mom, Atsushi Saito, Guangyu Song, Xiangru Tang, Johan Wind, Stanisław Woźniak, Zhenyuan Zhang, Qinghua Zhou, Jian Zhu, and Rui-Jie Zhu. RWKV: Reinventing RNNs for the transformer era. In Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023. https://aclanthology.org/2023.findings-emnlp.936.

Ofir Press and Lior Wolf. Using the output embedding to improve language models. In Proceedings of the Conference of the European Chapter of the Association for Computational Linguistics (EACL), 2017. https://aclanthology.org/E17-2025.

Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. Preprint, 2019. https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf.

Jack W. Rae, Sebastian Borgeaud, Trevor Cai, Katie Millican, Jordan Hoffmann, Francis Song, John Aslanides, Sarah Henderson, Roman Ring, Susannah Young, Eliza Rutherford, Tom Hennigan, Jacob Menick, Albin Cassirer, Richard Powell, George van den Driessche, Lisa Anne Hendricks, Maribeth Rauh, Po-Sen Huang, Amelia Glaese, Johannes Welbl, Sumanth Dathathri, Saffron Huang, Jonathan Uesato, John F. J. Mellor, Irina Higgins, Antonia Creswell, Nathan McAleese, Amy Wu, Erich Elsen, Siddhant M. Jayakumar, Elena Buchatskaya, David Budden, Esme Sutherland, Karen Simonyan, Michela Paganini, L. Sifre, Lena Martens, Xiang Lorraine Li, Adhiguna Kuncoro, Aida Nematzadeh, Elena Gribovskaya, Domenic Donato, Angeliki Lazaridou, Arthur Mensch, Jean-Baptiste Lespiau, Maria Tsimpoukelli, N. K. Grigorev, Doug Fritz, Thibault Sottiaux, Mantas Pajarskas, Tobias Pohlen, Zhitao Gong, Daniel Toyama, Cyprien de Masson d'Autume, Yujia Li, Tayfun Terzi, Vladimir Mikulik, Igor Babuschkin, Aidan Clark, Diego de Las Casas, Aurelia Guy, Chris Jones, James Bradbury, Matthew G. Johnson, Blake A. Hechtman, Laura Weidinger, Iason Gabriel, William S.
Isaac, Edward Lockhart, Simon Osindero, Laura Rimell, Chris Dyer, Oriol Vinyals, Kareem W. Ayoub, Jeff Stanway, L. L. Bennett, Demis Hassabis, Koray Kavukcuoglu, and Geoffrey Irving. Scaling language models: Methods, analysis & insights from training Gopher. arXiv preprint, 2021. https://arxiv.org/abs/2112.11446.

Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D Manning, Stefano Ermon, and Chelsea Finn. Direct preference optimization: Your language model is secretly a reward model. In Advances in Neural Information Processing Systems (NeurIPS), 2023. https://arxiv.org/abs/2305.18290.

Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv preprint, 2019. https://arxiv.org/abs/1910.10683.

Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. In The Journal of Machine Learning Research (JMLR), 2020. https://arxiv.org/abs/1910.10683.

Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. SQuAD: 100,000+ questions for machine comprehension of text. In Conference on Empirical Methods in Natural Language Processing (EMNLP), 2016. https://aclanthology.org/D16-1264.

Siva Reddy, Danqi Chen, and Christopher D. Manning. CoQA: A conversational question answering challenge. In Transactions of the Association for Computational Linguistics (TACL), 2019. https://aclanthology.org/Q19-1016.

Melissa Roemmele, Cosmin Adrian Bejan, and Andrew S. Gordon. Choice of plausible alternatives: An evaluation of commonsense causal reasoning. In Association for the Advancement of Artificial Intelligence (AAAI) Spring Symposium, 2011. https://people.ict.usc.edu/~gordon/copa.html.

Jonathan S. Rosenfeld, Amir Rosenfeld, Yonatan Belinkov, and Nir Shavit. A constructive prediction of the generalization error across scales. In International Conference on Learning Representations (ICLR), 2020. https://arxiv.org/abs/1909.12673.

Rachel Rudinger, Jason Naradowsky, Brian Leonard, and Benjamin Van Durme. Gender bias in coreference resolution. In Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), 2018. https://aclanthology.org/N18-2002.

Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. WinoGrande: An adversarial Winograd schema challenge at scale. arXiv preprint, 2019. https://arxiv.org/abs/1907.10641.

Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint, 2019. http://arxiv.org/abs/1910.01108.

Maarten Sap, Hannah Rashkin, Derek Chen, Ronan Le Bras, and Yejin Choi. Social IQa: Commonsense reasoning about social interactions. In Empirical Methods in Natural Language Processing (EMNLP), 2019. https://aclanthology.org/D19-1454.

Nikhil Sardana and Jonathan Frankle. Beyond Chinchilla-optimal: Accounting for inference in language model scaling laws. In NeurIPS Workshop on Efficient Natural Language and Speech Processing (ENLSP), 2023. https://arxiv.org/abs/2401.00448.

Teven Le Scao, Thomas Wang, Daniel Hesslow, Lucile Saulnier, Stas Bekman, M Saiful Bari, Stella Biderman, Hady Elsahar, Niklas Muennighoff, Jason Phang, et al. What language model to train if you have one million GPU hours?
In Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022. https://aclanthology.org/2022.findings-emnlp.54.

Rylan Schaeffer, Brando Miranda, and Sanmi Koyejo. Are emergent abilities of large language models a mirage? In Advances in Neural Information Processing Systems (NeurIPS), 2023. https://arxiv.org/abs/2304.15004.

Utkarsh Sharma and Jared Kaplan. A neural scaling law from the dimension of the data manifold. In Journal of Machine Learning Research (JMLR), 2022. https://arxiv.org/abs/2004.10802.

Noam Shazeer. GLU variants improve transformer. arXiv preprint, 2020. https://arxiv.org/abs/2002.05202.

Shivalika Singh, Freddie Vargus, Daniel Dsouza, Börje F Karlsson, Abinaya Mahendiran, Wei-Yin Ko, Herumb Shandilya, Jay Patel, Deividas Mataciunas, Laura O'Mahony, et al. Aya dataset: An open-access collection for multilingual instruction tuning. arXiv preprint, 2024. https://arxiv.org/abs/2402.06619.

Luca Soldaini, Rodney Kinney, Akshita Bhagia, Dustin Schwenk, David Atkinson, Russell Authur, Ben Bogin, Khyathi Chandu, Jennifer Dumas, Yanai Elazar, et al. Dolma: An open corpus of three trillion tokens for language model pretraining research. arXiv preprint, 2024. https://arxiv.org/abs/2402.00159.

Ben Sorscher, Robert Geirhos, Shashank Shekhar, Surya Ganguli, and Ari S. Morcos. Beyond neural scaling laws: beating power law scaling via data pruning. In Advances in Neural Information Processing Systems (NeurIPS), 2022. https://openreview.net/forum?id=UmvSlP-PyV.

Jianlin Su, Murtadha Ahmed, Yu Lu, Shengfeng Pan, Wen Bo, and Yunfeng Liu. RoFormer: Enhanced transformer with rotary position embedding. arXiv preprint, 2021. https://arxiv.org/abs/2104.09864.

Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. CommonsenseQA: A question answering challenge targeting commonsense knowledge. In Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), 2019. https://aclanthology.org/N19-1421.

Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, and Donald Metzler. Scale efficiently: Insights from pre-training and fine-tuning transformers. In International Conference on Learning Representations (ICLR), 2022. https://openreview.net/forum?id=f2OYVDyfIB.

Yi Tay, Mostafa Dehghani, Samira Abnar, Hyung Won Chung, William Fedus, Jinfeng Rao, Sharan Narang, Vinh Tran, Dani Yogatama, and Donald Metzler. Scaling laws vs model architectures: How does inductive bias influence scaling? In Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023. https://aclanthology.org/2023.findings-emnlp.825.

MosaicML NLP Team. Introducing MPT-7B: A new standard for open-source, commercially usable LLMs, 2023. www.mosaicml.com/blog/mpt-7b.
Romal Thoppilan, Daniel De Freitas, Jamie Hall, Noam Shazeer, Apoorv Kulshreshtha, Heng-Tze Cheng, Alicia Jin, Taylor Bos, Leslie Baker, Yu Du, YaGuang Li, Hongrae Lee, Huaixiu Steven Zheng, Amin Ghafouri, Marcelo Menegali, Yanping Huang, Maxim Krikun, Dmitry Lepikhin, James Qin, Dehao Chen, Yuanzhong Xu, Zhifeng Chen, Adam Roberts, Maarten Bosma, Vincent Zhao, Yanqi Zhou, Chung-Ching Chang, Igor Krivokon, Will Rusch, Marc Pickett, Pranesh Srinivasan, Laichee Man, Kathleen Meier-Hellstern, Meredith Ringel Morris, Tulsee Doshi, Renelito Delos Santos, Toju Duke, Johnny Soraker, Ben Zevenbergen, Vinodkumar Prabhakaran, Mark Diaz, Ben Hutchinson, Kristen Olson, Alejandra Molina, Erin Hoffman-John, Josh Lee, Lora Aroyo, Ravi Rajakumar, Alena Butryna, Matthew Lamm, Viktoriya Kuzmina, Joe Fenton, Aaron Cohen, Rachel Bernstein, Ray Kurzweil, Blaise Aguera-Arcas, Claire Cui, Marian Croak, Ed Chi, and Quoc Le. LaMDA: Language models for dialog applications. arXiv preprint, 2022. https://arxiv.org/abs/2201.08239.

Together Computer. RedPajama: an open dataset for training large language models, 2023. https://github.com/togethercomputer/RedPajama-Data.

Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. LLaMA: Open and Efficient Foundation Language Models. arXiv preprint, 2023. https://arxiv.org/abs/2302.13971.

Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. Llama 2: Open Foundation and Fine-Tuned Chat Models. arXiv preprint, 2023. https://arxiv.org/abs/2307.09288.

Ahmet Üstün, Viraat Aryabumi, Zheng-Xin Yong, Wei-Yin Ko, Daniel Dsouza, Gbemileke Onilude, Neel Bhandari, Shivalika Singh, Hui-Lee Ooi, Amr Kayid, et al. Aya model: An instruction finetuned open-access multilingual language model. arXiv preprint, 2024. https://arxiv.org/abs/2402.07827.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems (NeurIPS), 2017. https://arxiv.org/abs/1706.03762.

Pauli Virtanen, Ralf Gommers, Travis E. Oliphant, Matt Haberland, Tyler Reddy, David Cournapeau, Evgeni Burovski, Pearu Peterson, Warren Weckesser, Jonathan Bright, Stéfan J. van der Walt, Matthew Brett, Joshua Wilson, K. Jarrod Millman, Nikolay Mayorov, Andrew R. J. Nelson, Eric Jones, Robert Kern, Eric Larson, C J Carey, İlhan Polat, Yu Feng, Eric W.
Moore, Jake VanderPlas, Denis Laxalde, Josef Perktold, Robert Cimrman, Ian Henriksen, E. A. Quintero, Charles R. Harris, Anne M. Archibald, Antônio H. Ribeiro, Fabian Pedregosa, Paul van Mulbregt, and SciPy 1.0 Contributors. SciPy 1.0: Fundamental Algorithms for Scientific Computing in Python. Nature Methods, 2020. https://rdcu.be/b08Wh.

Siyuan Wang, Zhongkun Liu, Wanjun Zhong, Ming Zhou, Zhongyu Wei, Zhumin Chen, and Nan Duan. From LSAT: The progress and challenges of complex reasoning. Transactions on Audio, Speech, and Language Processing, 2021. https://arxiv.org/abs/2108.00648.

Jason Wei, Maarten Bosma, Vincent Y Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai, and Quoc V Le. Finetuned language models are zero-shot learners. In International Conference on Learning Representations (ICLR), 2022. https://openreview.net/forum?id=gEZrGCozdqR.

Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, Ed H. Chi, Tatsunori Hashimoto, Oriol Vinyals, Percy Liang, Jeff Dean, and William Fedus. Emergent abilities of large language models. In Transactions on Machine Learning Research (TMLR), 2022. https://openreview.net/forum?id=yzkSU5zdwD.

BigScience Workshop, Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ilić, Daniel Hesslow, Roman Castagné, Alexandra Sasha Luccioni, François Yvon, et al. BLOOM: A 176B-parameter open-access multilingual language model. arXiv preprint, 2022. https://arxiv.org/abs/2211.05100.

Mitchell Wortsman, Peter J Liu, Lechao Xiao, Katie Everett, Alex Alemi, Ben Adlam, John D Co-Reyes, Izzeddin Gur, Abhishek Kumar, Roman Novak, et al. Small-scale proxies for large-scale transformer training instabilities. arXiv preprint, 2023. https://arxiv.org/abs/2309.14322.

Greg Yang, Edward J. Hu, Igor Babuschkin, Szymon Sidor, Xiaodong Liu, David Farhi, Nick Ryder, Jakub Pachocki, Weizhu Chen, and Jianfeng Gao. Tensor programs V: Tuning large neural networks via zero-shot hyperparameter transfer. In Advances in Neural Information Processing Systems (NeurIPS), 2021. https://arxiv.org/abs/2203.03466.

Greg Yang, Dingli Yu, Chen Zhu, and Soufiane Hayou. Feature learning in infinite depth neural networks. In International Conference on Learning Representations (ICLR), 2024. https://openreview.net/forum?id=17pVDnpwwl.

Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. HellaSwag: Can a machine really finish your sentence? In Annual Meeting of the Association for Computational Linguistics (ACL), 2019. https://aclanthology.org/P19-1472.

Xiaohua Zhai, Alexander Kolesnikov, Neil Houlsby, and Lucas Beyer. Scaling vision transformers. In Conference on Computer Vision and Pattern Recognition (CVPR), 2022. https://arxiv.org/abs/2106.04560.

Biao Zhang and Rico Sennrich. Root mean square layer normalization. In Advances in Neural Information Processing Systems (NeurIPS), 2019. https://arxiv.org/abs/1910.07467.

Biao Zhang, Ivan Titov, and Rico Sennrich. Improving deep transformer with depth-scaled initialization and merged attention. In Empirical Methods in Natural Language Processing (EMNLP), 2019. https://aclanthology.org/D19-1083.

Yanli Zhao, Andrew Gu, Rohan Varma, Liangchen Luo, Chien-Chin Huang, Min Xu, Less Wright, Hamid Shojanazeri, Myle Ott, Sam Shleifer, Alban Desmaison, Can Balioglu, Bernard Nguyen, Geeta Chauhan, Yuchen Hao, and Shen Li. PyTorch FSDP: Experiences on scaling fully sharded data parallel.
In Very Large Data Bases Conference (VLDB), 2023. https://dl.acm.org/doi/10.14778/3611540.3611569.

Haoxi Zhong, Chaojun Xiao, Cunchao Tu, Tianyang Zhang, Zhiyuan Liu, and Maosong Sun. JEC-QA: A legal-domain question answering dataset. In Association for the Advancement of Artificial Intelligence (AAAI), 2020. https://arxiv.org/abs/1911.12011.

Wanjun Zhong, Ruixiang Cui, Yiduo Guo, Yaobo Liang, Shuai Lu, Yanlin Wang, Amin Saied, Weizhu Chen, and Nan Duan. AGIEval: A human-centric benchmark for evaluating foundation models. arXiv preprint, 2023. https://arxiv.org/abs/2304.06364.

Terry Yue Zhuo, Armel Zebaze, Nitchakarn Suppattarachai, Leandro von Werra, Harm de Vries, Qian Liu, and Niklas Muennighoff. Astraios: Parameter-efficient instruction tuning code large language models. arXiv preprint, 2024. https://arxiv.org/abs/2401.00788.

Contents

1 Introduction 1
2 Scaling and over-training 3
 2.1 Preliminaries 3
 2.2 Observation: Over-trained models follow consistent trends 4
 2.3 Deriving scaling laws for over-trained behavior 5
 2.4 Observation: Loss tracks average top-1 error 5
 2.5 Proposing a scaling law for average top-1 error 6
3 Experimental setup 6
 3.1 Training setup 7
 3.2 Creating scaling laws for validation loss and downstream error prediction 8
 3.3 Evaluation setup 9
4 Results 10
5 Related work 11
6 Limitations, future work, and conclusion 13
A Contributions 29
B Scaling-law derivations 30
C Additional grid search details 31
D Evaluation dataset details 31
E Additional results 31
F Additional related work 43

A Contributions

Names ordered alphabetically.

Planning. Samir Yitzhak Gadre

Model training and experiment babysitting. Achal Dave (notably, the 1.4B parameter, 900B token run), Samir Yitzhak Gadre

Dataloading. Georgios Smyrnis

Training tokens. Achal Dave, Alex Fang, Samir Yitzhak Gadre, Suchin Gururangan, Jeffrey Li, Vaishaal Shankar (lead), Mitchell Wortsman

Evaluation tokens. Achal Dave, Samir Yitzhak Gadre, Reinhard Heckel, Vaishaal Shankar (lead), Rulin Shao

Loss/perplexity evaluation. Samir Yitzhak Gadre

Downstream evaluation. Vaishaal Shankar

Project-specific infrastructure, plots, and analysis. Samir Yitzhak Gadre

OpenLM open-source infrastructure. Achal Dave (core contributor), Alex Fang, Samir Yitzhak Gadre (core contributor), Suchin Gururangan (core contributor), Jenia Jitsev, Sedrick Keh, Jeffrey Li, Jean Mercat, Marianna Nezhurina, Vaishaal Shankar (core contributor), Georgios Smyrnis (core contributor), Igor Vasiljevic, Mitchell Wortsman (core contributor), Rui Xin

Theory. Yair Carmon (original idea that parallel lines should show up in scaling plots), Samir Yitzhak Gadre (various derivations, empirical verification, related validation loss to average top-1 error as in Equation (5)), Reinhard Heckel (derived a scaling form based on Chinchilla Approach 3, which appears in Equation (4)), Mitchell Wortsman (provided intuition about irreducible loss and why it is critical), Niklas Muennighoff (derived a scaling form based on Chinchilla Approach 3, similar to Equation (4)).
Writing. Yair Carmon, Achal Dave, Reinhard Heckel, Samir Yitzhak Gadre (lead), Niklas Muennighoff, Ludwig Schmidt

Compute. Achal Dave, Jenia Jitsev, Thomas Kollar, Ludwig Schmidt, Vaishaal Shankar

Advising. Yair Carmon (co-lead), Achal Dave (co-lead), Alexandros G. Dimakis, Reinhard Heckel (co-lead), Gabriel Ilharco, Jenia Jitsev, Thomas Kollar, Niklas Muennighoff (co-lead), Ludwig Schmidt (co-lead), Shuran Song

B Scaling-law derivations

We first show that reparameterizing Equation (3) in terms of the compute C and token multiplier M for α = β yields Equation (4). Combining C = 6ND and M = D/N yields N = √(C/(6M)) and D = √(CM/6). Inserting these into Equation (3) yields,

L(C, M) = E + A·(C/(6M))^(−α/2) + B·(CM/6)^(−α/2)
        = E + (A·(1/6)^(−α/2)·M^(α/2) + B·(1/6)^(−α/2)·M^(−α/2))·C^(−α/2).

This is equal to Equation (4), making the substitutions η_C = α/2, a = A·(1/6)^(−η_C), b = B·(1/6)^(−η_C), as noted in the main body.

Relation to compute-optimal training. Recall that we made the assumption α = β, which implies equal scaling of parameters and tokens to realize compute-optimal models. While this assumption is empirically justified, even if α ≠ β we get a parameterization that implies the power-law exponent in Equation (4) remains constant with over-training, while the power-law scalar changes. To find a compute-optimal training setting, Hoffmann et al. propose to minimize the right-hand side of Equation (3) subject to the compute constraint C = 6ND. This yields,

N* = μ^(1/(α+β))·(C/6)^(β/(α+β)) and D* = μ^(−1/(α+β))·(C/6)^(α/(α+β)),

where μ = αA/(βB), for notational convenience. The associated risk is,

L(N*, D*) = E + (A·μ^(−α/(α+β)) + B·μ^(β/(α+β)))·(C/6)^(−αβ/(α+β)).

We now deviate from compute-optimal training by modifying the model size and tokens by multiplication with a constant √m, according to

N_m = (1/√m)·N*,  D_m = √m·D*.  (7)

This modification keeps the compute constant (i.e., 6·N_m·D_m = 6·N*·D*). The risk, then, becomes

L(N_m, D_m) = E + (m^(α/2)·A·μ^(−α/(α+β)) + m^(−β/2)·B·μ^(β/(α+β)))·(C/6)^(−αβ/(α+β)).  (8)

We again expect the same power-law exponent and changing power-law scalar. Note that m in Equation (8) is similar to M in Equation (4). Specifically, m is a multiple of the Chinchilla-optimal token multiplier M* = D*/N*, which is no longer fixed as a compute budget changes for α ≠ β.
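As a quick numerical sanity check on the reparameterization above, the following minimal sketch verifies that the (N, D) and (C, M) forms agree; the constants are illustrative placeholders, not our fitted values:

```python
# Check that Equation (3) and Equation (4) coincide under C = 6ND, M = D/N
# when alpha = beta. Constants E, A, B, alpha are placeholders.
import math

E, A, B, alpha = 1.7, 400.0, 410.0, 0.34

def loss_nd(N, D):
    # Equation (3) with alpha = beta
    return E + A * N ** (-alpha) + B * D ** (-alpha)

def loss_cm(C, M):
    # Equation (4) with eta_C = alpha / 2, a = A*(1/6)**(-eta_C), b = B*(1/6)**(-eta_C)
    eta = alpha / 2
    a, b = A * (1 / 6) ** (-eta), B * (1 / 6) ** (-eta)
    return E + (a * M ** eta + b * M ** (-eta)) * C ** (-eta)

N, D = 1.4e9, 28e9  # e.g., 1.4B parameters at token multiplier M = 20
assert math.isclose(loss_nd(N, D), loss_cm(6 * N * D, D / N))
```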
Figure 6: Understanding over-performing models in our grid search. (left) Models trained with 5.2×10^16 to 5.2×10^17 FLOPs over-perform relative to their neighbors. In looking at the number of optimization steps, we notice that the over-performing models experience more optimization steps than their x-axis neighbors. We hypothesize that the number of optimization steps is important, especially for smaller models, when trying to find models that lie along a trend. (right) A view of the same phenomenon, specifically on the efficient frontier. (Both panels plot loss on the OpenLM eval against compute 6ND in FLOPs, with points colored by number of optimization steps.)

C Additional grid search details

Grid search configuration selection. Recall in Section 3.2, we run a grid search over many configurations. We present the architectures we sweep over in Table 5.

D Evaluation dataset details

All 46 downstream evaluations are based on MosaicML's LLM-foundry evaluation suite. We specifically consider the datasets given in Table 6.

E Additional results

Over-performing grid search models experience more optimization steps. As mentioned in Section 3.2 and Figure 4, we notice that models between 0.011B and 0.079B parameters (i.e., 5.2×10^16 to 5.2×10^17 FLOPs, trained near compute-optimal) over-perform compared to the trend established by other models in our initial grid searches. This results in a bump in the scaling plot. While we choose to exclude this range of models for our scaling study, we additionally investigate this phenomenon. In Figure 6 we color grid search configurations by the number of optimization steps (i.e., number of tokens seen divided by batch size divided by sequence length). For context, Figure 1 (left) in Kaplan et al. also shows a bump; however, there the performance is worse than the general trend instead of better, as in our work. We leave understanding more fully the interactions between hyperparameters, scaling, and performance to future work.

Figure 7: In-distribution (ID) settings. Boxes highlighted in yellow correspond to data points used to fit Equation (4). Relative error is generally low across interpolation and extrapolation regimes. Relative error is largest for the RedPajama N = 1.4B, M = 640 prediction at 15.4%. In this case, we find that our scaling law predicts the model should perform worse than it does in practice. (Heatmap panels: models trained and evaluated on the C4, RedPajama, and RefinedWeb Paloma splits, across parameter scales N and token multipliers M.)

Scaling is largely predictable in-distribution (ID). Prior work focuses on understanding scaling using ID loss, often using training loss directly [49, 43]. Hence, we also consider Paloma loss evaluation sets, which are designed to probe performance in specific domains. We use Paloma's C4 [86, 25], RedPajama, and Falcon-RefinedWeb splits to probe for ID validation loss. As seen in Figure 7, relative error is mostly low. Relative error is largest for the N = 1.4B, M = 640 RedPajama run at 15.4%. Examining this case specifically, we find that the model performs better than the scaling law prediction. We hypothesize that as a model sees more tokens there is an increased likelihood of near-duplicate sequences ID, resulting in performance that is better than predicted.
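As an illustration of how the constants of Equation (4) can be fit to (compute, token multiplier, loss) measurements, the sketch below uses SciPy least squares on synthetic data; this is not our actual fitting code, data, or initialization:

```python
# Sketch: fit Equation (4), L(C, M) = E + (a*M**eta + b*M**(-eta)) * C**(-eta),
# to (compute, token multiplier, loss) triples. Data here is synthetic.
import numpy as np
from scipy.optimize import curve_fit

def eq4(X, E, a, b, eta):
    C, M = X
    return E + (a * M ** eta + b * M ** (-eta)) * C ** (-eta)

rng = np.random.default_rng(0)
C = np.tile([6e16, 6e17, 6e18], 3)            # FLOPs for small-scale runs
M = np.repeat([20.0, 80.0, 320.0], 3)         # token multipliers
L = eq4((C, M), 1.7, 250.0, 250.0, 0.17) + rng.normal(0.0, 0.01, C.shape)

(E, a, b, eta), _ = curve_fit(eq4, (C, M), L, p0=[1.5, 100.0, 100.0, 0.1], maxfev=10_000)
print(f"E={E:.2f}, eta={eta:.3f}")            # recovers ~1.7 and ~0.17
print("extrapolated loss:", eq4((1e21, 640.0), E, a, b, eta))
```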
Relative error is stable across many choices of downstream evaluation suites. To understand how sensitive our investigation is to our choices of the evaluation set, we consider several other options, as seen in Figure 8. We find that our prediction errors are fairly (i) low and (ii) consistent for many choices of downstream evaluation sets.

Figure 8: Downstream evaluation set ablation for 6.9B parameter, 138B token runs. Recall that in addition to the 46 task evaluation suite, we consider a 17 task evaluation suite created by including only test sets where any 0.154B model we trained (for any token multiplier and training dataset) gets t = 10 percentage points above random chance. Here we wish to understand how sensitive our results are to this choice of t. (left) We see that relative prediction error is fairly low before a threshold of t = 35. When too many tasks are excluded (i.e., t > 40), relative error spikes. (right) A parallel view, showing how many tasks are removed as t increases. 40 out of the 46 tasks can be removed and relative error is still fairly stable.

Scaling can break down when under-training. We find that when a token multiplier is too small (i.e., in the under-training regime), scaling appears unreliable. In Figure 9 we see that for M = 5 the scaling trend is different. We hypothesize that tuning hyperparameters (e.g., warmup, batch size) directly for smaller multipliers may help mitigate the breakdown in predictability.

Figure 9: Scaling with small token multipliers. For smaller multipliers (e.g., M = 5 in cyan), scaling does not follow the same trend as that of larger multipliers. Additionally, many token multipliers (e.g., M ∈ {10, 20, 40, 80}) garner points close to the compute-optimal frontier. (Panels: reducible loss on the C4 eval vs. compute 6ND, D = MN, for training sets C4, RedPajama, and RefinedWeb, at N from 0.011B to 0.411B.)

Scaling can be unpredictable out-of-distribution (OOD). Our main result shows reliable C4 eval loss predictions with models trained on RedPajama, which is an OOD evaluation setting. However, C4 and RedPajama both contain tokens sourced from CommonCrawl. To further probe OOD performance, we measure the relative error of scaling laws fit to models trained on C4 and evaluated on Paloma's 100 programming languages, Paloma's Penn Tree Bank (PTB) split, and a German version of C4. Recall that the C4 training set we use has been filtered for English text. Hence we expect that (i) the proportion of code is minimal, (ii) the <unk> substrings in PTB raw text do not appear frequently, and (iii) German is not prevalent. We notice that extrapolation relative error tends to be high for large M, N on programming languages and PTB (Figure 10 (left, center)). In contrast, for German C4, relative error is still low across the extrapolation range, with a maximum relative error of 7.6% at the N = 1.4B, M = 80 scale (Figure 10 (right)). We hypothesize that further modifications to scaling laws are necessary to predict when scaling should be reliable as a function of the training and evaluation distributions.

Small-scale experiments can predict average downstream top-1 error. To verify that chaining Equations (4) and (5) is effective in practice, we collect C4 eval loss and downstream error pairs for the configurations in Table 2. In Figure 11, we look at relative error for our scaling predictions in the context of average top-1 error over our 46 evals, and over our 17 evals in Figure 12. We again notice reliable scaling in interpolation and extrapolation regimes, suggesting the validity of our procedure to predict downstream average top-1 error.
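A minimal sketch of this chaining, assuming the exponential decay form Err(L) = ε − k·exp(−γL) for Equation (5) (consistent with the special case discussed in the related work); all numeric constants below are placeholders, not our fitted values:

```python
# Chain Equation (4) (loss from compute and token multiplier) with Equation (5)
# (average top-1 error from loss). Constants are illustrative placeholders.
import math

def predict_loss(C, M, E=1.7, a=250.0, b=250.0, eta=0.17):
    return E + (a * M ** eta + b * M ** (-eta)) * C ** (-eta)   # Equation (4)

def predict_error(L, eps=0.75, k=0.5, gamma=1.0):
    return eps - k * math.exp(-gamma * L)                       # Equation (5)

N, M = 6.9e9, 20.0                      # target run: 6.9B parameters, 138B tokens
C = 6 * N * (M * N)                     # C = 6ND with D = M * N
predicted = predict_error(predict_loss(C, M))
measured = 0.58                         # hypothetical measured average top-1 error
relative_error = abs(predicted - measured) / measured  # as reported in our heatmaps
```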
Loss evaluation ablations for downstream trends. Figure 13 presents the correlation between downstream error and loss evaluated on different validation sets (C4, RedPajama, and RefinedWeb). Regardless of the validation set (x-axis), models follow the exponential decay relationship given in Equation (5), suggesting the choice of validation loss is not critical for the appearance of this phenomenon.

Investing more compute in a scaling law makes it more predictive. Thus far we have looked at standard configurations from Table 2 to construct our scaling laws, mainly to demonstrate extrapolation to larger N, M. However, for practitioners, the main constraint is often training compute. Hence, we wish to understand the trade-offs between the amount of compute invested in creating a scaling law and the relative error of the resulting law in the over-trained regime. In Figure 14 (left), we see that as one increases the amount of compute, it is possible to get better fits with lower relative error. In Figure 14 (right), we see a similar trend as one increases the number of data points used to fit a scaling law. Blue stars indicate the configurations from Table 2, which provide accurate predictions relative to the general trends, hinting at their usefulness for our investigation. In Figures 15 and 16 we repeat the compute analysis, comparing trade-offs for loss prediction and error prediction reliability. We find that less compute is generally necessary to construct a loss scaling law that achieves the same relative error as that of an error prediction scaling law.

Figure 10: Out-of-distribution (OOD) settings. Boxes highlighted in yellow correspond to data points used to fit Equation (4). Recall that the C4 training set is English-filtered. Relative error can spike, suggesting unreliable scaling, for (left) programming languages and (center) Penn Tree Bank, which contains many frequently occurring, uncommon substrings. However, scaling is relatively reliable when evaluating on (right) German. These results motivate future studies of OOD conditions that affect scaling in the over-trained regime. (Heatmap panels: models trained on C4, evaluated on Paloma's 100 programming languages, Paloma's Penn Tree Bank split, and a German C4 eval.)

Figure 11: Relative error on average top-1 predictions (46 task split). Boxes highlighted in yellow correspond to data points used to fit Equation (5). Using our fits, we accurately predict downstream average top-1 error across interpolation and extrapolation regimes. This result supports that (i) chaining a scaling law and our proposed exponential decay function is a valid procedure and (ii) average top-1 error can be highly predictable. (Heatmap panels: training sets C4, RedPajama, and RefinedWeb; downstream 46-task split.)
Table 5: Topologies for our grid searches. We consider 130 architectures for our grid search, specified by (n_layers, n_heads, d_model) with n_layers ∈ {4, 8, 12, 16, 24}, n_heads ∈ {4, 8, 12, 16}, and d_model ∈ {96, 192, 288, 320, 488, 512, 576, 640, 768, 1024}, giving parameter counts between 0.010B and 0.412B. After sweeping over batch size and warmup, we get a total of 435 configurations. For a complete list of hyperparameter configurations, please see: https://github.com/mlfoundations/scaling.
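As a rough cross-check on Table 5's parameter counts, a standard transformer estimate of ~12·n_layers·d_model² non-embedding parameters plus embedding parameters comes close; the sketch below assumes untied input/output embeddings and a ~50k vocabulary (both assumptions of this sketch, not specifications from the table):

```python
# Rough parameter count for a Table 5 topology. Assumes untied input/output
# embeddings and a ~50k-token vocabulary; n_heads does not affect the count.
def approx_params(n_layers: int, d_model: int, vocab_size: int = 50_432) -> int:
    transformer = 12 * n_layers * d_model ** 2  # attention + MLP blocks
    embeddings = 2 * vocab_size * d_model       # input + output embeddings
    return transformer + embeddings

print(approx_params(24, 1024) / 1e9)  # ~0.41B, close to the largest grid entry
print(approx_params(12, 96) / 1e9)    # ~0.011B, close to the smallest entries
```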
Table 6: 46 downstream tasks. All downstream tasks considered in this work, evaluated via LLM-foundry. For more information on each dataset and specifics about the LLM-foundry category and evaluation type, please see: https://www.mosaicml.com/llm-evaluation

Downstream task | LLM-foundry category | Evaluation type | Shots | Samples | Chance accuracy
AGIEval LSAT AR [129, 128, 116] | symbolic problem solving | multiple choice | 3 | 230 | 0.25
AGIEval LSAT LR [129, 128, 116] | reading comprehension | multiple choice | 3 | 510 | 0.25
AGIEval LSAT RC [129, 128, 116] | reading comprehension | multiple choice | 3 | 268 | 0.25
AGIEval SAT English | reading comprehension | multiple choice | 3 | 206 | 0.25
ARC-Challenge | world knowledge | multiple choice | 10 | 2376 | 0.25
ARC-Easy | world knowledge | multiple choice | 10 | 2376 | 0.25
BBQ | safety | multiple choice | 3 | 58492 | 0.50
BIG-bench: CS algorithms | symbolic problem solving | language modeling | 10 | 1320 | 0.00
BIG-bench: Conceptual combinations | language understanding | multiple choice | 10 | 103 | 0.25
BIG-bench: Conlang translation | language understanding | language modeling | 0 | 164 | 0.00
BIG-bench: Dyck languages | symbolic problem solving | language modeling | 10 | 1000 | 0.00
BIG-bench: Elementary math QA | symbolic problem solving | multiple choice | 10 | 38160 | 0.25
BIG-bench: Language identification | language understanding | multiple choice | 10 | 10000 | 0.25
BIG-bench: Logical deduction | symbolic problem solving | multiple choice | 10 | 1500 | 0.25
BIG-bench: Misconceptions | world knowledge | multiple choice | 10 | 219 | 0.50
BIG-bench: Novel Concepts | commonsense reasoning | multiple choice | 10 | 32 | 0.25
BIG-bench: Operators | symbolic problem solving | language modeling | 10 | 210 | 0.00
BIG-bench: QA WikiData | world knowledge | language modeling | 10 | 20321 | 0.00
BIG-bench: Repeat copy logic | symbolic problem solving | language modeling | 10 | 32 | 0.00
BIG-bench: Strange stories | commonsense reasoning | multiple choice | 10 | 174 | 0.50
BIG-bench: Strategy QA | commonsense reasoning | multiple choice | 10 | 2289 | 0.50
BIG-bench: Understanding fables | reading comprehension | multiple choice | 10 | 189 | 0.25
BoolQ | reading comprehension | multiple choice | 10 | 3270 | 0.50
COPA | commonsense reasoning | multiple choice | 0 | 100 | 0.50
CoQA | reading comprehension | language modeling | 0 | 7983 | 0.00
Commonsense QA | commonsense reasoning | multiple choice | 10 | 1221 | 0.25
Enterprise PII classification | safety | multiple choice | 10 | 3395 | 0.50
HellaSwag (10-shot) | language understanding | multiple choice | 10 | 10042 | 0.25
HellaSwag (zero-shot) | language understanding | multiple choice | 0 | 10042 | 0.25
Jeopardy | world knowledge | language modeling | 10 | 2117 | 0.00
LAMBADA | language understanding | language modeling | 0 | 5153 | 0.00
LogiQA | symbolic problem solving | multiple choice | 10 | 651 | 0.25
MMLU (5-shot) | world knowledge | multiple choice | 5 | 14042 | 0.25
MMLU (zero-shot) | world knowledge | multiple choice | 0 | 14042 | 0.25
MathQA | symbolic problem solving | multiple choice | 10 | 2983 | 0.25
OpenBook QA | commonsense reasoning | multiple choice | 0 | 500 | 0.25
PIQA | commonsense reasoning | multiple choice | 10 | 1838 | 0.50
PubMed QA Labeled | reading comprehension | language modeling | 10 | 1000 | 0.00
SIQA | commonsense reasoning | multiple choice | 10 | 1954 | 0.50
SQuAD | reading comprehension | language modeling | 10 | 10570 | 0.00
Simple Arithmetic: NoSpaces | symbolic problem solving | language modeling | 10 | 1000 | 0.00
Simple Arithmetic: WithSpaces | symbolic problem solving | language modeling | 10 | 1000 | 0.00
WinoGender MC: Female | safety | multiple choice | 10 | 60 | 0.50
WinoGender MC: Male | safety | multiple choice | 10 | 60 | 0.50
WinoGrande | language understanding | schema | 0 | 1267 | 0.50
Winograd | language understanding | schema | 0 | 273 | 0.50
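For clarity, the "average top-1 error" reported for the 17- and 46-task splits can be computed as a plain macro-average over per-task errors; the sketch below assumes unweighted averaging (our reading) and uses made-up accuracies, with the chance accuracies from Table 6 as the natural reference point.

```python
# Illustrative per-task top-1 accuracies and chance accuracies (see Table 6).
task_accuracy = {"ARC-Easy": 0.55, "BoolQ": 0.62, "LAMBADA": 0.41}
task_chance = {"ARC-Easy": 0.25, "BoolQ": 0.50, "LAMBADA": 0.00}

# Unweighted macro-average of per-task errors (assumed aggregation rule).
avg_top1_error = sum(1.0 - a for a in task_accuracy.values()) / len(task_accuracy)
avg_chance_error = sum(1.0 - c for c in task_chance.values()) / len(task_chance)
print(f"avg top-1 error {avg_top1_error:.3f} vs. chance {avg_chance_error:.3f}")
```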
Table 7: Downstream relative prediction error at 6.9B, 138B tokens, with and without the 1.4B data point. Recall that in Table 2 we introduce an N = 1.4B, M = 20 run to get better downstream error predictions. Here we compare prediction errors with and without this model for fitting the scaling law. Note that without this model (i.e., rows with "w/o 1.4B"), average top-1 predictions over a 17 task subset and over the full 46 task suite are less accurate.

Scaling law fit | Train set | MMLU | OpenBook QA | HellaSwag | 17 eval | 46 eval
Table 2 | C4 [86, 25] | 2.82% | 16.80% | 79.58% | 0.14% | 0.07%
Table 2 w/o 1.4B | C4 [86, 25] | 1.86% | 96.16% | 61.79% | 0.42% | 0.76%
Table 2 | RedPajama | 0.12% | 8.44% | 25.73% | 0.05% | 2.10%
Table 2 w/o 1.4B | RedPajama | 1.07% | 7.56% | 30.98% | 10.64% | 7.68%
Table 2 | RefinedWeb | 0.77% | 1.92% | 81.96% | 2.94% | 2.76%
Table 2 w/o 1.4B | RefinedWeb | 2.29% | 6.79% | 6.52% | 15.79% | 8.57%

Figure 12: Relative error on average top-1 predictions (17 task split). Boxes highlighted in yellow correspond to data points used to fit Equation (5). Using our fits, we accurately predict downstream average top-1 error across interpolation and extrapolation regimes. This result supports that (i) chaining a scaling law and our proposed exponential decay function is a valid procedure and (ii) average top-1 error can be highly predictable. [Heatmaps of relative error over $N$ and $M$ for models trained on C4, RedPajama, and RefinedWeb, evaluated on the 17-task split.]

Figure 13: Correlation between average top-1 error and evaluation loss. We observe that regardless of evaluation loss distribution (x-axis), models tend to follow Equation (5). This suggests that there can be several reasonable choices for the validation loss distribution. Additionally, ID models trained on C4 and evaluated on a C4 validation set perform best in terms of loss, but these gains don't necessarily translate to lower error downstream (e.g., left column). This suggests the need to fit Equation (5) per dataset, and also suggests that comparing models trained on different data distributions with a single loss evaluation can be misleading in terms of downstream performance. [Scatter panels: average top-1 error on the 17-task and 46-task splits vs. loss on the C4, RedPajama, and RefinedWeb evals, with interpolated and extrapolated trends.]
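The chained prediction referenced in these captions maps compute to loss with Equation (4), then loss to error with Equation (5). As a hedged illustration of the second step, the sketch below fits a three-parameter exponential decay Err(L) = eps - k * exp(-gamma * L), our reading of the proposed decay function; the (loss, error) pairs are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def err_curve(L, eps, k, gamma):
    # error decays exponentially toward low values as eval loss L shrinks
    return eps - k * np.exp(-gamma * L)

loss = np.array([5.5, 4.8, 4.2, 3.7, 3.3, 3.0])       # C4 eval loss
err = np.array([0.79, 0.77, 0.73, 0.68, 0.62, 0.57])  # avg top-1 error

popt, _ = curve_fit(err_curve, loss, err, p0=(0.85, 5.0, 1.0), maxfev=10000)
print("predicted avg top-1 error at loss 2.8:", err_curve(2.8, *popt))
```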
Figure 14: Trade-offs between scaling-law fitting considerations and reliability for loss prediction. Each red circle represents a scaling law fit to Equation (4) with as many as 29 models trained on RedPajama. Specifically, a grid formed by N in {0.011B, 0.079B, 0.154B, 0.411B} and M in {5, 10, 20, 40, 80, 160, 320} gives 28 models, and an N = 1.4B, M = 20 run gives the last model. We sort models by training FLOPs in increasing order and sample models uniformly from index windows [1, 2, ..., n] for n in [5, 6, ..., 29] to fit Equation (4). The blue star represents the default configuration presented in Table 2. The prediction target is an N = 1.4B, M = 640 (D = 900B) model. As the amount of compute (left) and the number of points (right) used to fit the scaling law increase, relative error trends downwards. Our default configuration keeps compute and number of points low, while still providing low prediction error compared to the trend. [Panels: relative error on the C4 eval vs. compute used for the scaling fit, and vs. number of samples used for the scaling fit.]

Figure 15: Compute vs. relative error for the 1.4B, 900B token RedPajama run. (left) The compute necessary to accurately predict loss is less than that needed to accurately predict (right) average downstream error. This claim is supported by the fact that the slope of the trend for loss is steeper than for top-1 error. These findings corroborate Figure 16. [Panels: relative error on the C4 eval and on the 17-task split vs. compute used for the scaling fit.]

Figure 16: Compute vs. relative error for the 6.9B, 138B token RedPajama run. (left) The compute necessary to accurately predict loss is less than that needed to accurately predict (right) average downstream error. This claim is supported by the fact that the slope of the trend for loss is steeper than for top-1 error. These findings corroborate Figure 15. [Panels: relative error on the C4 eval and on the 17-task split vs. compute used for the scaling fit.]

Figure 17: Scaling exponent vs. token multiplier. In Figure 2 we notice roughly parallel lines (i.e., a roughly constant scaling exponent C) in the log-log plot of loss vs. compute, even as the token multiplier M changes. Here we plot C vs. M directly, where the shaded region gives a 95% bootstrap confidence interval for the trend. This view supports that C is relatively constant. [Panel: scaling exponent C (roughly 0.11-0.15) vs. token multiplier M for C4, RedPajama, and RefinedWeb.]
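The shaded region in Figure 17 comes from a bootstrap. A minimal sketch of such a confidence interval follows; here a straight-line slope in log-log space stands in for the paper's parametric fit, and the run data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# (compute, loss) points for one token multiplier M -- made-up values
C = np.array([1e16, 3e16, 1e17, 3e17, 1e18, 3e18, 1e19])
L = np.array([5.8, 5.1, 4.6, 4.2, 3.9, 3.65, 3.45])
logC, logL = np.log10(C), np.log10(L)

def exponent(x, y):
    # slope of a straight-line fit in log-log space (stand-in for the
    # parametric scaling-law fit used in the paper)
    return np.polyfit(x, y, 1)[0]

boot = []
for _ in range(2000):
    idx = rng.integers(0, len(logC), size=len(logC))  # resample with replacement
    boot.append(exponent(logC[idx], logL[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"exponent {exponent(logC, logL):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```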
Figure 18: Downstream top-1 error vs. C4 eval loss for each of the 46 downstream evals. Here we plot models from our testbed for each scatter plot. We see that some individual evaluations, like ARC-Easy, follow exponential decay. Others, like BIG-bench: CS algorithms, show step function behavior. Still others, like MathQA, hover around random chance. [One scatter panel per task in Table 6, plotting top-1 error against C4 eval loss for models trained on C4, RedPajama, and RefinedWeb, with a random-chance reference line.]

F Additional related work

Language modeling. Language models can be grouped into encoder-only [24, 51, 57, 94, 20], encoder-decoder [54, 87], and decoder-only architectures [83, 111, 112, 108, 47, 36, 72, 7, 109, 26, 62, 97, 119, 4, 55, 61, 32]. Most current implementations are based on the transformer. However, there has been a recent resurgence in scaling language models based on non-transformer architectures [81, 34, 35, 33]. Further, there has been substantial work on adapting pre-trained language models to better follow instructions [117, 18, 68, 59, 69, 130, 85, 27, 113, 101, 71]. However, following prior work [43, 70] and given their overall prevalence, we limit ourselves to GPT-style, decoder-only transformers that have solely been pre-trained.

Scaling laws. Kaplan et al. investigate scaling trends in GPT language models. Bahri et al. investigate different scaling regimes theoretically, and Sharma & Kaplan relate scaling coefficients to data manifold dimensions. Tay et al. [106, 107] elucidate the connection between model architecture and scaling trends, while Hernandez et al. and Tay et al. develop scaling laws for transfer learning. Ivgi et al. also consider transfer-learning scaling laws and highlight the importance of hyperparameter selection in the low-compute regime. Ghorbani et al., Gordon et al., and Bansal et al. develop scaling laws for neural machine translation. Caballero et al. propose a scaling law functional form, which they demonstrate is predictive in several domains.

Scaling beyond language modeling. There is a large body of work on scaling neural networks beyond language modeling, for example in computer vision [58, 124, 103, 1, 2], multimodal learning [39, 16, 28], and image reconstruction.
2305.01625.pdf

Unlimiformer: Long-Range Transformers with Unlimited Length Input

Amanda Bertsch and Uri Alon and Graham Neubig and Matthew R. Gormley
Carnegie Mellon University, USA
{abertsch,ualon,gneubig,mgormley}@cs.cmu.edu

Abstract

Transformer-based models typically have a predefined bound to their input length, because of their need to potentially attend to every token in the input. In this work, we propose Unlimiformer: a general approach that can wrap any existing pretrained encoder-decoder transformer, and offload the attention computation across all layers to a single k-nearest-neighbor index; this index can be kept on either the GPU or CPU memory and queried in sub-linear time. This way, we can index extremely long input sequences, while every attention head in every decoder layer retrieves its top-k keys, instead of attending to every key. We demonstrate Unlimiformer's efficacy on several long-document and multi-document summarization benchmarks, showing that it can summarize even 350k token-long inputs from the BookSum dataset, without any input truncation at test time. Unlimiformer improves pretrained models such as BART (Lewis et al., 2020a) and Longformer (Beltagy et al., 2020a) by extending them to unlimited inputs without additional learned weights and without modifying their code. We make our code and models publicly available.[1]

[1] https://github.com/abertsch72/unlimiformer

1 Introduction

Transformers (Vaswani et al., 2017) are the dominant sequence-to-sequence architecture. Pretrained transformers generally have a context window of 512 (e.g. BERT (Devlin et al., 2019)) or 1024 tokens (e.g. BART (Lewis et al., 2020b)), which are sufficient lengths for many current conditional generation datasets (XSum; Narayan et al., 2018) (CNN/DM; Nallapati et al., 2016).

Figure 1: Long-range transformers can avoid input truncation in some datasets; however, there are datasets with inputs many times longer than these models' maximum input length. The dotted lines represent three common maximum input lengths for models (1024, 4096, and 16384 tokens); the bars are the average or maximum input length in each dataset, as indicated. Averages for datasets from Koh et al. (2022). [Bar chart of input tokens (log scale) for XSum, CNN/DM, ArXiv, GovReport, WikiSum, NarrativeQA, and BookSum averages, plus NarrativeQA, BookSum, and WikiSum maxima.]

For inputs between 1k and 16k tokens, specialized long-context models have been developed. These models employ clever techniques to sparsify or approximate attention (e.g. Longformer (Beltagy et al., 2020b), Performers (Choromanski et al., 2020)), allowing the maximum input length to quadruple while remaining computationally feasible. Datasets in this length include most long-document summarization or question answering datasets, such as arXiv summarization (Cohan et al., 2018). But 16,384 is not the upper limit for the length of context required for generation: tasks that involve long narratives, such as book summarization (Kryściński et al., 2021) or narrative question answering (Kočiský et al., 2018), often have inputs exceeding 100k tokens. A challenge set for Wikipedia article generation (Liu* et al., 2018) contains inputs longer than 500k tokens.
Open-domain tasks in generative question answering could conceivably synthesize information from even larger inputs, e.g. answering a question about the aggregate properties of all living person articles on Wikipedia. Figure 1 shows the size of several popular summarization and question answering datasets, plotted against common context window lengths; the longest inputs are more than 34 times longer than Longformer's context window.

Figure 2: In this example, the encoder's maximum input length is 2 tokens. A 6-token input is encoded in chunks and stored in the datastore. We inject Unlimiformer into each decoder layer prior to cross-attention. In Unlimiformer, we perform kNN search to select a 2-token context for each attention head from the datastore; then, cross-attention is computed using keys and values from the entire input sequence. [Diagram: a long input is encoded in chunks into a datastore of one long input; at each decoder layer, a query performs kNN search over the datastore and the retrieved hidden states feed cross-attention.]

In these extremely-long-input cases, vanilla transformers cannot be feasibly scaled, as naive attention has quadratic complexity. Long-input transformers, though more efficient than standard transformers, require significant computational resources, which increase with increased context window size. Additionally, increasing the context window necessitates re-training the model from scratch with a new context window size, which is computationally and environmentally costly.

We introduce Unlimiformer, a retrieval-based method which augments pretrained language models to accept inputs of unbounded length at test time. Unlimiformer can be injected into any existing encoder-decoder transformer to permit unbounded inputs. Given a long input sequence, Unlimiformer constructs a datastore over the hidden states of all input tokens. Then, the decoder's standard cross-attention queries the datastore, and attends to the top-k input tokens. The datastore can be stored in either GPU or CPU memory and admits sublinear queries. Unlimiformer can be applied directly over a trained model, and can improve an existing checkpoint without any further training. When finetuning Unlimiformer, performance is further improved.

We demonstrate that Unlimiformer can be applied to multiple base models, such as BART (Lewis et al., 2020a) or PRIMERA (Xiao et al., 2022), without adding weights and without re-training. Across a variety of long-range seq2seq datasets, Unlimiformer not only performs better on these datasets than strong long-range transformers such as Longformer (Beltagy et al., 2020b), SLED (Ivgi et al., 2022) and Memorizing Transformers (Wu et al., 2022), but we also find that Unlimiformer can be applied on top of a Longformer-encoder-decoder model for further improvement.

2 Unlimiformer

Transformers are limited in their maximum input length because of the fixed size of the encoder context window. However, at different points in decoding, different information may be relevant; also, different attention heads may be attending to different types of information (Clark et al., 2019). Thus, a fixed context window may waste effort on tokens that an attention head does not attend strongly to. Unlimiformer allows each head to choose a separate context window from the full-length input at each decoding step.
This is formalized by injecting an Unlimiformer lookup into the decoder: prior to cross-attention, the model performs a k-nearest-neighbor (kNN) search in an external datastore to choose a set of per-decoder-layer, per-attention-head tokens to attend to.

2.1 Encoding

To encode an input sequence that is longer than the model's context window, we encode overlapping chunks of the input, following Ivgi et al. (2022), keeping only the middle half of the outputs from each chunk, to ensure that the encodings have sufficient context on both sides. Finally, we index the encoded inputs in a datastore, using a library such as Faiss (Johnson et al., 2019).

2.2 Retrieval-augmented cross-attention

In standard cross-attention, a transformer decoder attends to the encoder's last hidden states, where the encoder usually truncates the input and encodes only the k first tokens in the input sequence. Instead of attending only to this k-token prefix of the input, we retrieve the top-k hidden states from a much longer input sequence for each cross-attention head, and attend only to these top-k. This allows retrieving keys from the entire input sequence instead of truncating. Our approach is also cheaper, in computation and GPU memory, than attending to all input tokens, while usually preserving more than 99% of the attention mass.

Figure 2 displays our generic changes to any sequence-to-sequence transformer's architecture. The full input is encoded using the encoder in chunks and stored in a datastore; then, the datastore of encoded hidden states is queried at each decoding step. The kNN search step is non-parametric and can be injected into any pretrained seq2seq transformer. The search step reformulates attention for space efficiency as detailed below.

2.3 Attention reformulation

The use of a datastore for the encoded tokens, pioneered by Wu et al. (2022), increases the maximum input length significantly. However, this naive approach requires constructing separate datastores for the attention keys and values at each layer and each head, for a total of $2LH$ datastores, where $L$ is the number of decoder layers and $H$ is the number of attention heads.[2] A separate datastore for each attention head in each decoder layer would be both time-intensive to create and space-intensive to store. So, not surprisingly, Wu et al. (2022) apply their memory layer to only a single decoder layer.

[2] See Memorizing Transformers' official implementation at https://github.com/google-research/meliad/blob/main/transformer/memory_factory.py#L78-L79 and https://github.com/google-research/meliad/blob/main/transformer/memory_layer.py#L334-L339

Instead, we present a different order of computing the well-known transformer attention formula, which allows us to store a single datastore across all attention heads and all decoder layers. The standard cross-attention calculation for a single head in a transformer is:

$$\mathrm{Attn}(Q, K, V) = \mathrm{softmax}\left(\frac{QK^\top}{\sqrt{d_k}}\right)V \quad (1)$$

where $Q$ is the product of the decoder state and the query weight matrix, and $K$, $V$ are the product of the last encoder hidden state with the key and value weight matrices respectively. Our goal is to retrieve a set of keys $K_{best}$ that maximize $QK^\top$, with the size of $K_{best}$ fixed to the size of the model's context window, and then perform normal attention computation over $K_{best}$. Let $h_d$ be the decoder state and $h_e$ be the last encoder layer's hidden state.
We can refactor the transformer's attention computation as follows:[3]

$$QK^\top = (h_d W_q)(h_e W_k)^\top = (h_d W_q) W_k^\top h_e^\top = \left(h_d W_q W_k^\top\right) h_e^\top \quad (2)$$

Thus, the retrieval step can be formulated as choosing the encoder hidden states $h_e$ that maximize $(h_d W_q W_k^\top)\, h_e^\top$. This rewriting has two major advantages: first, there is no need to store the keys for each head and layer separately: we can store a single datastore of the hidden states $h_e$ only, and just project the queries to $h_d W_q W_k^\top$ using head-specific $W_q$ and $W_k$; second, the values can be calculated trivially given $h_e$, so there is no need to store the values in a separate datastore from the keys (or compute them at all) before decoding.

Thus, rather than constructing $2LH$ datastores and retrieving from every datastore during each decoding step, we construct a single datastore and retrieve from it by just projecting the decoder hidden states to per-head $h_d W_q W_k^\top$ (see the sketch following Section 3.1). This reformulation has not, to our knowledge, been performed before in retrieval-augmented attention. It allows the application of retrieval-augmented attention at each head and layer with a negligible increase in the time and space required. In contrast to Memorizing Transformers' single-layer retrieval augmentation, which requires constructing two datastores and retrieves the same tokens for each attention head, Unlimiformer uses one datastore and allows retrieval augmentation over any number of layers and individualized retrieval per head.

[3] For brevity, we omit the linear layer's bias term, because the softmax function is invariant to adding the same constant to all inputs.

Table 1: A comparison of the training methodologies, using BART (context window size 1024) as a running example. The dashed line in the original table (reproduced here as a blank line) separates methods that are approximately the same training-time cost as the baseline from those that require significant additional compute.

Method name | Training-time input | Total # tokens in example seen at training time | Validation-time input (e.g. early stopping) | Test-time input
Baseline | 1024 | 1024 | 1024 | 1024
+test Unlimiformer | 1024 | 1024 | 1024 | unlimited
+early stop w/ Unlimiformer | 1024 | 1024 | unlimited | unlimited
Train chunked +test Unlimiformer | 1024 | all | unlimited | unlimited

SLED (Ivgi et al., 2022) | 16k | 16k | 16k | 16k
Longformer (Beltagy et al., 2020a) | 16k | 16k | 16k | 16k
Random-encoded training | 8-16k | 8-16k | unlimited | unlimited
Retrieval training | 8-16k | 8-16k | unlimited | unlimited
Alternating training | 8-16k | 8-16k | unlimited | unlimited

3 Training Methods

The method as described can be used, at test time, on any already-trained model. Next, we turn our focus to training methodologies that further improve the performance of Unlimiformer. Table 1 summarizes and contrasts the methodologies described below, and Appendix A contains further implementation details.

3.1 Low (additional-) Cost Training Methods

We first consider training strategies that do not require significant additional compute as compared to the standard finetuning regime.

+test Unlimiformer: As the simplest case, we use a standard fine-tuning regime, where the input is truncated during training. At inference time only, we inject Unlimiformer into the trained model to process full-length inputs.

+early stop w/ Unlimiformer: We train without Unlimiformer, but when we evaluate the model for early stopping, we use Unlimiformer for generation on the validation set. This results in choosing a slightly different checkpoint to stop training at; the additional computational cost here is minor, and comes only from the application of Unlimiformer during inference over the validation set.

Train chunked +test Unlimiformer: As a data augmentation strategy, we split each training example into chunks of the context-window size, and treat each chunk as its own training example. This is orthogonal to the Unlimiformer model, but has the advantage that all embeddings from the full-length training example are back-propagated into during training, instead of truncated, albeit across several examples. We apply early stopping with Unlimiformer.
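The following is a minimal, self-contained sketch of the mechanics in Sections 2.1-2.3: a single Faiss datastore over encoder hidden states, a per-head query projected by $W_q W_k^\top$ (Equation (2)), and standard attention (Equation (1)) over only the retrieved states. This is an illustration under toy dimensions, not the authors' released implementation; all tensors here are random stand-ins for real encoder and decoder states.

```python
import faiss
import numpy as np
import torch

torch.manual_seed(0)
d_model, topk = 16, 4  # toy sizes; real models use e.g. d_model = 1024

# --- Section 2.1: encoder states for one long input ------------------------
# Stand-in for the states produced by encoding overlapping chunks and
# keeping only the middle half of each chunk's outputs.
h_e_all = torch.randn(64, d_model)

# One datastore indexes the raw encoder states h_e for ALL heads and layers.
index = faiss.IndexFlatIP(d_model)
index.add(h_e_all.numpy().astype(np.float32))

# --- Sections 2.2-2.3: per-head retrieval with the reformulated query ------
W_q = torch.randn(d_model, d_model)  # this head's query projection
W_k = torch.randn(d_model, d_model)  # this head's key projection
W_v = torch.randn(d_model, d_model)  # this head's value projection
h_d = torch.randn(1, d_model)        # current decoder state

# Equation (2): search with (h_d W_q W_k^T), so keys are never stored.
query = (h_d @ W_q @ W_k.T).numpy().astype(np.float32)
_, ids = index.search(query, topk)
h_e = h_e_all[torch.from_numpy(ids[0])]  # top-k retrieved encoder states

# Standard attention (Equation (1)) restricted to the retrieved states;
# keys and values are computed on the fly from h_e.
q, k, v = h_d @ W_q, h_e @ W_k, h_e @ W_v
attn = torch.softmax((q @ k.T) / d_model**0.5, dim=-1)
context = attn @ v  # this head's cross-attention output
```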
Train chunked +test Unlimiformer: As a data augmentation strategy, we split each training example into chunks of the context-window size, and treat each chunk as its own training example. This is orthogonal to the Unlimiformer model, but has the advantage that all embeddings from the full-length training example are back-propagated into during train-ing, instead of truncatedalbeit across several examples. We apply early stopping with Unlimiformer. 3.2 Long-range Training Methods We also consider training Unlimiformer directly, which introduces additional computational cost. Random-encoded training: At each training step, the full (longer-than-context-window) training example is encoded in chunks; then, the keys for each decoder layer are chosen randomly from the encoded hidden states. This weakly simulates a nearest-neighbors search, but is computationally cheaper. Retrieval training: At each training step, the keys for each decoder head and layer are selected using a kNN search. This is not exact if the inputs are longer than 16k tokens, as memory requirements at training-time require the truncation of the input; however, this is closest to the test-time computation. Alternating training: To gain the benefits of each, we alternate epochs of Random-encoded trainingandRetrieval training . The use of random neighbors increases the likelihood that all tokens will be chosen as keys occasionally, while retrieval training is identical to the test-time setting for most inputs. 4 Experimental Settings 4.1 Datasets We experiment with three long-document summarization datasets with varying domains and properties. Table 2 contains summary statistics for Avg # tokens Dataset Domain # examples Input Output Input length distribution GovReport Government 19,402 9,616 597 74 303192 SummScreen TV shows 4,348 8,987 137 2365 22635 BookSum Literature 436 143,301 1294 19406 354006 Table 2: Dataset statistics. The last column is a visualization of the distribution of input example lengths in each dataset; the histogram is binned by powers of 2, with the minimum and maximum input size displayed on either end. Base model Training method ROUGE 1 / 2 / L / BERTScore GovReport SummScreen BART base Standard finetuning 48.7 / 19.2 / 22.8 / 64.3 29.7 / 6.2 / 17.7 / 56.3 BART base +test SLED (Ivgi et al., 2022) 45.8 / 16.1 / 20.2 / 62.7 27.5 / 5.5 / 16.7 / 55.9 BART base +test Unlimiformer 49.7 / 19.6 / 22.0 / 64.8 30.9 / 6.5 / 18.2 / 57.5 BART base +early stop w/ Unlimiformer 51.0 /20.5 / 21.5 / 65.1 32.1 /6.8/18.6 /57.6 BART base Train chunked 46.2 / 17.8 / 21.7 / 63.3 28.1 / 5.6 / 17.0 / 55.6 BART base +test Unlimiformer 53.4 /22.5 /22.5 /66.0 29.3 /6.6/17.6 /57.0 PRIMERA Standard finetuning 55.1 / 23.9 / 25.9 / 67.0 32.3 / 7.1 / 18.3 / 57.1 PRIMERA +test Unlimiformer 56.5 /24.8 /26.3 /67.7 33.3 /7.7/19.1 /57.6 Table 3: Test results on long-document datasets, for low-cost training methods: the training costs are no higher than standard finetuning that truncates the inputs according to the models max input size. The best metric in every dataset and every training category is marked in bold . each dataset. GovReport and SummScreen are included in the Scrolls benchmark (Shaham et al., 2022) . We report ROUGE 1/2/L (Lin, 2004) and BERTScore F1 (Zhang et al., 2019). GovReport (Huang et al., 2021) is a longdocument summarization dataset where the task is to write the executive summary of a US government report. 
SummScreen (Chen et al., 2022) is a long-document summarization dataset where the task is to write the recap of a TV show episode, provided the transcript of the episode as input.

BookSum (Kryściński et al., 2021) is a long-document summarization dataset of public-domain works of literature. BookSum has paragraph, chapter, and book-level settings; we consider only the BookSum-Book setting, where the task is to generate a book-level summary given the full text of the novel as input.

4.2 Baselines

BART (base) (Lewis et al., 2020b) is a pretrained seq2seq model (139M parameters), commonly used for summarization tasks. Its maximum input sequence length is 1024 tokens.

PRIMERA (Xiao et al., 2022) is a Longformer-Encoder-Decoder (LED-large; Beltagy et al., 2020b) (447M parameters), pretrained specifically for multi-document summarization. Its maximum input length is 4096 tokens; in the encoder, the global attention is sparse over the full sequence with dense local attention in a 1024-token window.

SLED (Ivgi et al., 2022) is a method for augmenting pretrained encoder-decoder models for longer contexts by performing fusion-in-decoder (Izacard and Grave, 2021); this allows the use of pretrained models, albeit with an expensive finetuning, and the input sequence length is eventually memory bounded. We replicate the authors' experiments for BART+SLED on several datasets.

Memorizing Transformers (Wu et al., 2022) is the most similar work to ours; they propose a trainable attention gate that moderates between the standard cross-attention and attention over retrieved keys from a datastore in one layer. Since the public implementation[4] for this method is not officially supported and is not fully reproducible, we approximated it by using attention over the datastore
The experiments with PRIMERA show two important points: first, Unlimiformer that is based on BART baseperforms better than the baseline PRIMERA, even though PRIMERA was pretrained on much more data, using a pretraining objective that was designed for summarization; second, these experiments show that not only can Unlimiformer outperform Longformer-based models such as PRIMERA, Unlimiformer can also be applied on top of existing long-range transformers. 5This is an approximation, but Wu et al. (2022) note that in their experiments, most heads learned a value for gsuch that they attended almost exclusively to the external memory.5.2 Book Summarization Table 5 shows the result on BookSum. In BookSum, we also see improvements from applying Unlimiformer, using both BART baseand PRIMERA. Random-encoded ,Retrieval , and Alternating training show competitive performance, with the best method varying across datasets and models. The low-cost training methods underperform these training strategies but outperform the baseline models; even applying Unlimiformer without training modifications improves over the base model in most settings. 6 Analysis 6.1 Entity mentions Unlimiformer outperforms all base models on BookSum (see Table 5), but the truncation baseline (using only the first 1024 tokens of the input) also shows relatively high performance on the automatic metrics. This is strongly counterintuitive for book summarization, where the plot of the book should not be apparent from reading the first pages. In the outputs from this baseline, we observe limited coherence and a high rate of hallucination (see Appendix B for an example with analysis). However, this is not reflected in n-gram based overlaps, and BERTScore does not strongly distinguish between any of the BookSum models. Following the use of entity reference measures in medicial summarization (Zhang et al., 2021), we use an entity-based metric as a proxy for the informativeness of the candidate summaries. We use SpaCy6to tag all named entities in the gold summary and collect a set of unique entities. We then tag each candidate summary and compute the percentage of entities present in this summary (i.e. recall of unique entities). We report this metric (abbreviated as EntMent) in Table 5 for BookSum. The Unlimiformer models exhibit far higher entity recall, and even adding Unlimiformer only at test time without customized training doubles the entity recall of summaries from the base model. 6.2 Input limitation To evaluate the performance gains from using the full input, we artifically impose a maximum datastore size. Figure 3 shows the performance of the best BookSum model as the maximum length increases; entity recall increases with length. 6https://spacy.io Base model Training method ROUGE 1 / 2 / L / BERTScore GovReport SummScreen BART base SLED (Ivgi et al., 2022) 54.7 / 24.4 / 25.4 / 67.0 32.7 / 7.9 / 19.1 / 58.4 LED large PRIMERA (Xiao et al., 2022) 55.1 / 23.9 / 25.9 / 67.0 32.3 / 7.1 / 18.3 / 57.1 BART base Memorizing Transformers 55.2 / 25.1 / 26.4 / 67.5 32.7 / 7.4 / 19.2 / 57.4 BART base Unlimiformer (this work) 56.6 /26.3 /27.6 /68.2 34.7 /8.5/19.9 /58.5 PRIMERA Memorizing transformers 57.0 / 25.3 / 26.5 / 67.7 33.0 / 7.3 / 18.4 / 57.3 PRIMERA Unlimiformer (this work) 57.4 /26.2 /28.0 /68.1 33.3 /7.6/18.9 /57.7 Table 4: Test results on long-document datasets, when allowing compute-costly, long-range training methods, using different base models. The best metric in every dataset and every training category is marked in bold . 
Table 5: Results on BookSum (average input length 143k tokens). Metrics are ROUGE 1 / 2 / L and EntMent (entity recall, described in Section 6.1). Hierarchical summarization is a baseline reported by Kryściński et al. (2021), where chapter summaries are combined and condensed to form a book summary. The best metric in every dataset is marked in bold in the original.

Base model | Training method | ROUGE 1 / 2 / L | EntMent
BART base | Hierarchical (Kryściński et al., 2021) | 30.0 / 6.0 / 11.0 | -
BART base | Standard finetuning | 36.4 / 7.6 / 15.3 | 10.0
BART base | +test Unlimiformer | 35.5 / 7.7 / 15.4 | 21.9
BART base | +early stop w/ Unlimiformer | 35.5 / 7.7 / 15.4 | 21.9
BART base | Memorizing Transformers | 35.6 / 6.4 / 14.6 | 15.7
BART base | Unlimiformer (random-encoded training) | 37.3 / 6.7 / 15.2 | 20.8
BART base | Unlimiformer (alternating training) | 36.7 / 7.3 / 15.5 | 20.3
PRIMERA | Standard finetuning | 38.6 / 7.2 / 15.6 | 11.6
PRIMERA | +test Unlimiformer | 38.3 / 7.5 / 15.9 | 18.9
PRIMERA | +early stop w/ Unlimiformer | 39.5 / 7.3 / 15.8 | 22.2
PRIMERA | Unlimiformer (retrieval training) | 37.9 / 8.2 / 16.3 | 25.5
PRIMERA | Unlimiformer (random-encoded training) | 39.5 / 7.1 / 15.9 | 19.7

6.3 WikiSum

Previous work in dataset analysis has found that the strong signal for many generation tasks is concentrated in only part of the input (e.g. the layout bias in news summarization (Kedzie et al., 2018), or answering questions using a single paragraph in HotpotQA (Jiang and Bansal, 2019)).

We observe this trend in WikiSum, a multi-document summarization dataset where the inputs are all references for a Wikipedia article and the output summary is the intro paragraph of the article (Liu* et al., 2018).[7] As a strong baseline, we follow the paragraph ranking scheme of Liu* et al. (2018), where the paragraphs across documents are presented in order of relevance according to TF-IDF. A baseline using only the first 1024 tokens of this sorted input outperformed Unlimiformer, suggesting that the full input is not necessary to produce the summary on this dataset.

[7] A full copy of WikiSum is not available online; details of our scraped copy are in Appendix A.2.

6.4 Computational cost

Although Unlimiformer does not introduce additional trained parameters, the encoding of the full input, datastore construction, and datastore search increase the processing time necessary during both training and inference. We benchmark the GPU-time necessary to train BART-base for a single epoch and evaluate over the validation set using each training methodology. Table 6 shows the relative cost for each method. The Unlimiformer training methodologies are higher cost than the base training; however, the largest difference occurs during inference, where the full input (in BookSum, an average of 112,885 tokens) must be encoded, instead of the 1,024 tokens encoded in the baseline approach.

Figure 3: As the maximum datastore size increases, the entity recall generally increases. At all datastore sizes, Unlimiformer outperforms the baseline (BART, in red). [Plot: entity mention recall vs. maximum datastore size from 1K to 350K tokens, for Unlimiformer and BART base.]

Figure 4: As the maximum datastore size increases, the inference cost increases sublinearly. Note the log scale. [Plot: time per example (s) vs. maximum datastore size from 1K to 100K tokens, for Unlimiformer and BART base, which truncates to 1024 tokens.]
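For reference, Section 6.3's TF-IDF baseline can be approximated in a few lines of scikit-learn. The sketch below ranks paragraphs by TF-IDF cosine similarity to a query, which we assume to be the article title (the exact query construction in Liu* et al. (2018) may differ); the sorted concatenation would then be truncated to the context window.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def rank_paragraphs(title: str, paragraphs: list[str]) -> list[str]:
    """Sort source paragraphs by TF-IDF cosine similarity to the title."""
    vec = TfidfVectorizer()
    mat = vec.fit_transform([title] + paragraphs)  # row 0 is the query
    sims = cosine_similarity(mat[0], mat[1:]).ravel()
    order = sims.argsort()[::-1]                   # most relevant first
    return [paragraphs[i] for i in order]

paras = ["The 2020 census counted many residents.",
         "Early settlers arrived in the region.",
         "The city of Springfield is the county seat."]
print(rank_paragraphs("Springfield", paras)[0])  # the Springfield paragraph
```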
Table 6: Computational effort per epoch for different training methodologies, relative to the baseline of standard finetuning and inference. All are averaged over 3 runs on BookSum using a single 48 GB A6000 GPU, 32 GB RAM, and 16 CPUs.

Method | Relative GPU-time
Baseline training | 1.00 ± 0.00
Chunked training | 1.02 ± 0.02
+early stop w/ Unlimiformer | 1.00 ± 0.00
Retrieval training | 1.89 ± 0.06
Random-encoded training | 2.87 ± 0.28
Baseline inference | 1.00 ± 0.00
Unlimiformer inference | 4.48 ± 0.56

We graph the computational cost of inference with respect to the input length in Figure 4. When all inputs are restricted to 1,024 tokens, Unlimiformer requires additional time relative to the baseline for datastore construction and search. However, the benefits of Unlimiformer are clear as input length increases: the GPU-time required increases sublinearly with input length.

7 Related Work

Retrieval-augmented transformers. Interpolating language model probabilities with nearest-neighbors retrieval from a datastore was originally proposed by Khandelwal et al. (2019) to improve the language modeling of decoder-only models. Additional work in this space has explored adding structure to this datastore (Alon et al., 2022) to further increase performance and improve efficiency. More recent work has focused on language modeling for long-form text (Wu et al., 2022) and applying retrieval-augmented transformers to downstream classification tasks (Shi et al., 2022). Our work furthers this area by applying retrieval-augmented methods to encoder-decoder models and sequence generation tasks.

Long-context transformers. An orthogonal solution has emerged in the large language models literature: change the transformer model such that its time/space complexity is $O(N \log N)$ or $O(N)$ (Tay et al., 2020). Most solutions achieve this reduction through sparsifying the attention mechanism, as in Sparse Transformers (Child et al., 2019), Reformer (Kitaev et al., 2020), Longformer (Beltagy et al., 2020b), Routing Transformers (Roy et al., 2020), ETC (Ainslie et al., 2020), and Big Bird (Zaheer et al., 2020). In other work, the attention mechanism is either approximated or replaced entirely, as in Linformer (Wang et al., 2020), Linear Transformers (Katharopoulos et al., 2020), Performers (Choromanski et al., 2020), and FNet (Lee-Thorp et al., 2021). For the most part, these are general-purpose language models, not catered to a specific downstream task, because of the high cost of pretraining. These models also are limited to the maximum sequence length chosen during pretraining; that length, often chosen to be 8192 or 16384, is substantially lower than the lengths of many BookSum or WikiSum training examples.

Long-document summarization. Prior work has proposed several strategies for long-document summarization. In particular, many methods select a subsection of the input to summarize, using TF-IDF (Liu* et al., 2018), smaller retriever models (Liu and Lapata, 2019), or sentence similarity metrics (Bajaj et al., 2021). An orthogonal approach is to summarize chunks of the input, then combine and condense these sub-summaries into a global summary, either using vanilla transformer models (Kryściński et al. (2021), Zhang et al. (2022), Zhang et al. (2021)) or a specialized architecture (Liu and Lapata (2019), Grail et al. (2021)). Other work has focused on expanding the amount of text that can be processed, by applying long-context transformers or developing new long-context methods (Huang et al., 2021).
However, these methods all suffer from cascading errors: if the initial trimming or chunk-summarization steps remove important information, there is no way to recover that information in the downstream summary.

8 Conclusions

We present Unlimiformer, a method for augmenting pretrained encoder-decoders with an external datastore to allow for unlimited-length input. We demonstrate the usefulness of this approach for downstream sequence-to-sequence generation tasks, particularly long-document summarization. We examine several training methodologies for finetuning models with this method, and demonstrate that these strategies significantly improve over the base model, without adding weights. We expect that future work will further improve upon this performance, potentially by incorporating structure into the datastore or by retrieving embeddings in chunks. The information retrieval community has developed a wide variety of methods for improving retrieval, and we hope that the application of these methods will further improve the performance of retrieval-augmented LLMs on challenging downstream tasks. Toward this end, we release code[8] for easily injecting Unlimiformer into any model using the HuggingFace Transformers (Wolf et al., 2020) library.

[8] MIT license; see the repository for details.

9 Limitations

In our experiments, we have only considered English-language datasets. While we have no reason to believe the method would suffer from the use of a different high-resourced language, the quality of the nearest-neighbors search depends on the quality of the indexed embeddings; thus, this approach may not be feasible in languages where a strong pretrained model is not available.

Interpretability is a concern in any long-input summarization task, as the input may be infeasibly long for manual inspection. The retrieved embeddings at each step are difficult to interpret; further work here is necessary.

The length of inputs is theoretically bounded by the memory limitations of the computer used. More practically, using a CPU datastore is many times slower than a GPU datastore because of slower search and the need to transfer retrieved embeddings to the GPU. In our experiments, we were able to use a GPU datastore for input examples exceeding 500k tokens (on GPUs no larger than 48 GB), but this may be a concern when using smaller GPUs or even larger inputs. Additionally, CPU datastores are necessary for models with context windows larger than 2048 tokens, as the Faiss GPU datastore implementation does not support retrieving more than 2048 nearest neighbors.

References

Joshua Ainslie, Santiago Ontanon, Chris Alberti, Vaclav Cvicek, Zachary Fisher, Philip Pham, Anirudh Ravula, Sumit Sanghai, Qifan Wang, and Li Yang. 2020. ETC: Encoding long and structured inputs in transformers.

Uri Alon, Frank F. Xu, Junxian He, Sudipta Sengupta, Dan Roth, and Graham Neubig. 2022. Neuro-symbolic language modeling with automaton-augmented retrieval.

Ahsaas Bajaj, Pavitra Dangati, Kalpesh Krishna, Pradhiksha Ashok Kumar, Rheeya Uppaal, Bradford Windsor, Eliot Brenner, Dominic Dotterrer, Rajarshi Das, and Andrew McCallum. 2021. Long document summarization in a low resource setting using pretrained language models. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: Student Research Workshop, pages 71-80, Online. Association for Computational Linguistics.

Iz Beltagy, Matthew E. Peters, and Arman Cohan. 2020a.
Longformer: The long-document transformer. arXiv preprint arXiv:2004.05150.

Iz Beltagy, Matthew E. Peters, and Arman Cohan. 2020b. Longformer: The long-document transformer.

Mingda Chen, Zewei Chu, Sam Wiseman, and Kevin Gimpel. 2022. SummScreen: A dataset for abstractive screenplay summarization. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8602-8615, Dublin, Ireland. Association for Computational Linguistics.

Rewon Child, Scott Gray, Alec Radford, and Ilya Sutskever. 2019. Generating long sequences with sparse transformers.

Krzysztof Choromanski, Valerii Likhosherstov, David Dohan, Xingyou Song, Andreea Gane, Tamas Sarlos, Peter Hawkins, Jared Davis, Afroz Mohiuddin, Lukasz Kaiser, David Belanger, Lucy Colwell, and Adrian Weller. 2020. Rethinking attention with performers.

Kevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher D. Manning. 2019. What does BERT look at? An analysis of BERT's attention. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 276-286, Florence, Italy. Association for Computational Linguistics.

Arman Cohan, Franck Dernoncourt, Doo Soon Kim, Trung Bui, Seokhwan Kim, Walter Chang, and Nazli Goharian. 2018. A discourse-aware attention model for abstractive summarization of long documents. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 615-621, New Orleans, Louisiana. Association for Computational Linguistics.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.

Quentin Grail, Julien Perez, and Eric Gaussier. 2021. Globalizing BERT-based transformer architectures for long document summarization. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1792-1810, Online. Association for Computational Linguistics.

Luyang Huang, Shuyang Cao, Nikolaus Parulian, Heng Ji, and Lu Wang. 2021. Efficient attentions for long document summarization. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1419-1436, Online. Association for Computational Linguistics.

Maor Ivgi, Uri Shaham, and Jonathan Berant. 2022. Efficient long-text understanding with short-text models.

Gautier Izacard and Edouard Grave. 2021. Leveraging passage retrieval with generative models for open domain question answering. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 874-880, Online. Association for Computational Linguistics.

Yichen Jiang and Mohit Bansal. 2019. Avoiding reasoning shortcuts: Adversarial evaluation, training, and model development for multi-hop QA. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2726-2736, Florence, Italy. Association for Computational Linguistics.

Jeff Johnson, Matthijs Douze, and Hervé Jégou. 2019. Billion-scale similarity search with GPUs.
IEEE Transactions on Big Data, 7(3):535-547.

Angelos Katharopoulos, Apoorv Vyas, Nikolaos Pappas, and François Fleuret. 2020. Transformers are RNNs: Fast autoregressive transformers with linear attention.

Chris Kedzie, Kathleen McKeown, and Hal Daumé III. 2018. Content selection in deep learning models of summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1818-1828, Brussels, Belgium. Association for Computational Linguistics.

Urvashi Khandelwal, Omer Levy, Dan Jurafsky, Luke Zettlemoyer, and Mike Lewis. 2019. Generalization through memorization: Nearest neighbor language models.

Nikita Kitaev, Łukasz Kaiser, and Anselm Levskaya. 2020. Reformer: The efficient transformer.

Tomáš Kočiský, Jonathan Schwarz, Phil Blunsom, Chris Dyer, Karl Moritz Hermann, Gábor Melis, and Edward Grefenstette. 2018. The NarrativeQA reading comprehension challenge. Transactions of the Association for Computational Linguistics, 6:317-328.

Huan Yee Koh, Jiaxin Ju, Ming Liu, and Shirui Pan. 2022. An empirical survey on long document summarization: Datasets, models, and metrics. ACM Comput. Surv., 55(8).

Wojciech Kryściński, Nazneen Rajani, Divyansh Agarwal, Caiming Xiong, and Dragomir Radev. 2021. BookSum: A collection of datasets for long-form narrative summarization.

James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, and Santiago Ontanon. 2021. FNet: Mixing tokens with Fourier transforms.

Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020a. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871-7880.

Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020b. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871-7880, Online. Association for Computational Linguistics.

Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74-81, Barcelona, Spain. Association for Computational Linguistics.

Peter J. Liu*, Mohammad Saleh*, Etienne Pot, Ben Goodrich, Ryan Sepassi, Lukasz Kaiser, and Noam Shazeer. 2018. Generating Wikipedia by summarizing long sequences. In International Conference on Learning Representations.

Yang Liu and Mirella Lapata. 2019. Hierarchical transformers for multi-document summarization. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5070-5081, Florence, Italy. Association for Computational Linguistics.

Ramesh Nallapati, Bowen Zhou, Cicero dos Santos, Çağlar Gülçehre, and Bing Xiang. 2016. Abstractive text summarization using sequence-to-sequence RNNs and beyond. In Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning, pages 280-290, Berlin, Germany. Association for Computational Linguistics.

Shashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018. Don't give me the details, just the summary! Topic-aware convolutional neural networks for extreme summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1797-1807, Brussels, Belgium. Association for Computational Linguistics.
Aurko Roy, Mohammad Saffar, Ashish Vaswani, and David Grangier. 2020. Efficient content-based sparse attention with routing transformers.

Uri Shaham, Elad Segal, Maor Ivgi, Avia Efrat, Ori Yoran, Adi Haviv, Ankit Gupta, Wenhan Xiong, Mor Geva, Jonathan Berant, and Omer Levy. 2022. SCROLLS: Standardized comparison over long language sequences.

Weijia Shi, Julian Michael, Suchin Gururangan, and Luke Zettlemoyer. 2022. kNN-Prompt: Nearest neighbor zero-shot inference.

Yi Tay, Mostafa Dehghani, Dara Bahri, and Donald Metzler. 2020. Efficient transformers: A survey.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc.

Sinong Wang, Belinda Z. Li, Madian Khabsa, Han Fang, and Hao Ma. 2020. Linformer: Self-attention with linear complexity.

Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.

Yuhuai Wu, Markus Norman Rabe, DeLesley Hutchins, and Christian Szegedy. 2022. Memorizing transformers. In International Conference on Learning Representations.

Wen Xiao, Iz Beltagy, Giuseppe Carenini, and Arman Cohan. 2022. PRIMERA: Pyramid-based masked sentence pre-training for multi-document summarization. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5245-5263, Dublin, Ireland. Association for Computational Linguistics.

Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, and Amr Ahmed. 2020. Big Bird: Transformers for longer sequences.

Longxiang Zhang, Renato Negrinho, Arindam Ghosh, Vasudevan Jagannathan, Hamid Reza Hassanzadeh, Thomas Schaaf, and Matthew R. Gormley. 2021. Leveraging pretrained models for automatic summarization of doctor-patient conversations. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 3693-3712, Punta Cana, Dominican Republic. Association for Computational Linguistics.

Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2019. BERTScore: Evaluating text generation with BERT.

Yusen Zhang, Ansong Ni, Ziming Mao, Chen Henry Wu, Chenguang Zhu, Budhaditya Deb, Ahmed Awadallah, Dragomir Radev, and Rui Zhang. 2022. Summ^N: A multi-stage summarization framework for long input dialogues and documents. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1592-1604, Dublin, Ireland. Association for Computational Linguistics.

A Implementation Details

A.1 Training details

At training time, we must backpropagate through the operations described above. Thus, the input length is bounded more strictly: the number of tokens in the full input must fit in GPU memory while the model is loaded.
For the computationally expensive methods, we train using batch size 1 and truncate the longest inputs (generally, to 16k tokens). At test time, we use the full input without truncation. We train one model per setting, using the hyperparameter settings from SLED (Ivgi et al., 2022) and early stopping.

A.2 WikiSum scraping

We rescraped the dataset, following the same preprocessing steps as the original authors. We observe that many inputs in the scraped dataset are shorter than reported, likely due to changes in availability of the data since 2017; as a preprocessing step, we remove all inputs that are less than 1457 words, which is the 40th percentile of citation size for the original dataset. We trained on 10,000 randomly selected examples from this version of WikiSum and evaluate on 2,000 randomly sampled examples (1,000 validation, 1,000 test), maintaining the same sample across all experiments. When sampling, we respect the original WikiSum train/validation/test split. We release the subset we trained on as well as our modified version of the scraping code.

A.3 Evaluation details

Vanilla BERTScore is only well-defined up to 512 tokens; for GovReport and SummScreen, we evaluate using facebook/bart-large-mnli instead. This model has context size 1024. For BookSum, we experimented with using allenai/longformer-large-4096 (context size 4096), as many references are longer than 1024 tokens; however, we found that this approach had no distinguishing power between model outputs, ranking all models tested within 0.3 points of each other despite observing significant differences with ROUGE, EntMent, and manual inspection. For the named entity recognition in EntMent, we used SpaCy's en_core_web_lg model.

A.4 Computational Cost

We estimate that the total GPU time for the results presented in this paper did not exceed approximately 116 days on a single 48-GB A6000. The longest-training models, SLED and kNN training for GovReport, took approximately 10 days to train.

B Validation Results

Table 7 shows the validation metrics for GovReport and SummScreen.

Base model | Training method | GovReport (ROUGE 1 / 2 / L / BERTScore) | SummScreen (ROUGE 1 / 2 / L / BERTScore)

Low-cost training methods:
BART base | Standard finetuning | 47.7 / 18.5 / 22.3 / 64.0 | 30.0 / 6.5 / 17.7 / 56.7
BART base | +test SLED | 46.0 / 16.3 / 20.3 / 62.8 | 28.4 / 5.9 / 17.0 / 56.0
BART base | +test Unlimiformer | 49.5 / 19.6 / 21.9 / 64.8 | 31.8 / 7.1 / 18.6 / 57.8
BART base | +early stop w/ Unlimiformer | 51.0 / 20.6 / 21.6 / 65.9 | 32.5 / 7.2 / 19.9 / 57.9
BART base | Train chunked | 48.3 / 18.1 / 22.3 / 63.8 | 29.4 / 6.3 / 17.6 / 56.8
BART base | +test Unlimiformer | 52.9 / 22.2 / 22.4 / 65.8 | 29.4 / 6.3 / 17.6 / 56.8

Long-range training methods:
BART base | SLED (Ivgi et al., 2022) | 55.5 / 24.8 / 25.8 / 66.9 | 34.2 / 8.2 / 19.2 / 58.8
BART base | Memorizing Transformers | 55.8 / 25.6 / 26.9 / 67.7 | 32.8 / 7.6 / 19.3 / 57.7
BART base | Unlimiformer | 57.4 / 26.4 / 27.9 / 68.2 | 35.0 / 8.3 / 19.6 / 58.4

Low-cost training methods:
PRIMERA | Standard finetuning | 55.0 / 23.6 / 25.9 / 66.9 | 33.0 / 7.8 / 18.8 / 57.4
PRIMERA | +test Unlimiformer | 56.4 / 24.7 / 26.4 / 67.6 | 33.1 / 7.9 / 18.7 / 57.4
PRIMERA | +early stop w/ Unlimiformer | 56.4 / 25.0 / 26.4 / 67.6 | 33.5 / 8.2 / 19.3 / 57.7

Long-range training methods:
PRIMERA | Memorizing Transformers | 57.0 / 25.6 / 26.8 / 67.8 | 32.9 / 7.7 / 18.5 / 57.5
PRIMERA | Unlimiformer | 58.0 / 26.5 / 28.6 / 68.3 | 34.1 / 7.9 / 19.0 / 57.8

Table 7: Validation results on long-document datasets (average input length between 4k and 16k tokens). The best metric in every dataset and every training category is marked in bold.
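For the entity-based EntMent evaluation described in A.3, one natural implementation is an entity-recall check over the gold summary; the sketch below assumes SpaCy's en_core_web_lg model is installed, and the function name is ours rather than a library API:

    import spacy

    nlp = spacy.load("en_core_web_lg")

    def entity_mention_recall(reference: str, generated: str) -> float:
        # Named entities appearing in the gold reference summary.
        ref_entities = {ent.text.lower() for ent in nlp(reference).ents}
        if not ref_entities:
            return 0.0
        generated_lower = generated.lower()
        # Fraction of reference entities mentioned in the generated summary.
        hits = sum(1 for ent in ref_entities if ent in generated_lower)
        return hits / len(ref_entities)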
C Sample Outputs

These outputs from BookSum are summaries of The Brothers Karamazov, an elaborate novel about a Russian family. Neither summary is fully factually correct, but the summary from the input-truncated model hallucinates several plotlines (e.g. a lover from the Congo, the many deaths of Pavel) which are not present in the original. The hallucinations in the Unlimiformer output are more constrained; for instance, it incorrectly describes Dmitri as a nobleman instead of a landowner and says he has been sentenced to death instead of jail. This summary features more of the novel's characters and identifies plot details from the later parts of the book, such as Dmitri's trial.

Gold (reference) summary:

The Brothers Karamazov is a family tragedy centered around a father and his sons. Fyodor, the eldest Karamazov, has three sons: Dmitri, Ivan, and Alyosha. Ivan and Alyosha have the same mother, but Dmitri, the oldest, has a different mother. Fyodor is a greedy landowner, a bawdy lecher, and a neglectful father. Hence, the Karamazov brothers end up growing into young men under the care of various other people. But they all have returned home to visit their father, and it is the first time they all have been together for quite some time. Dmitri has a dispute with Fyodor over his inheritance, and Alyosha, who is living in a monastery, suggests that they see Father Zossima, Alyosha's mentor. Alyosha believes that the wise old man can settle the dispute peacefully. Father Zossima is patient and kind, but Fyodor and Dmitri end up quarreling anyway. After Fyodor drives the men to frustration, they leave the monastery separately, and Alyosha worries about their family's future. Alyosha talks to Dmitri, who confesses his complicated situation with women and money. Dmitri promised to marry a girl named Katerina, and she lent him 3,000 rubles. Instead of paying it back, he spent it on another girl named Grushenka. He wants to run away with Grushenka, but he feels that he needs to pay Katerina back before he can do so. This is why he is so interested in getting the money from Fyodor. Back at Fyodor's house, Smerdyakov is talking to the Karamazovs. Smerdyakov is an epileptic servant who was adopted by Grigory and Marfa, Fyodor's other servants. He was born to a woman named Lizaveta who died in childbirth. She was the town idiot, and she lived off charity from the other townspeople. Everyone called her "Stinking Lizaveta," and when the town found out she was pregnant, they were furious at whoever could do such a thing to a helpless girl. They decided Fyodor must have been the culprit. Grigory and Marfa gave birth to a deformed child, and when they buried the child, they found Lizaveta, who had just given birth to Smerdyakov. They adopted the child immediately, and Fyodor named him. Father Zossima is dying, and Alyosha is distraught. Instead of asking Alyosha to stay with him during his last days, however, Father Zossima tells Alyosha he should leave the monastery to be with his family. His life gets even more complicated when a young crippled girl named Lise expresses that she has feelings for him. Alyosha visits Katerina, the girl who is engaged to marry Dmitri. Ivan is in love with her, but he feels that Dmitri is a better match for her. Frustrated and disgusted with his family's situation, Ivan says he is going to leave town. Alyosha sees a boy being picked on by his schoolmates, and he tries to talk to the boy, but he bites Alyosha's hand and runs away.
Later, when Alyosha is bringing money to a man named Captain Snegiryov, who has been beaten by Dmitri, he recognizes the man's son. It is Ilusha, the boy who bit his hand. The family is poor, but Captain Snegiryov refuses to take the money because he feels that he needs to earn his son's respect after being humiliated by Dmitri, and accepting charity, especially from a Karamazov, is out of the question. When Alyosha goes back to see Katerina, he finds Lise, Madame Hohlakov's daughter. The two realize that they love each other, and they decide to get married. Alyosha goes to visit Ivan, and he finds him in a restaurant. Ivan has gone there to get away from his father, and Alyosha sits down with him to have an intimate talk. Ivan tells his brother about his thoughts regarding God and the world. He recites to Alyosha a poem he has written called "The Great Inquisitor." The poem describes Christ returning to earth in the sixteenth century. The Church throws him in jail, and The Great Inquisitor explains to him that his presence is problematic for the world. The Church has spent years trying to replace the sense of freedom Christ gave man with security. He talks about how cruel the world is, especially to innocent children. After their meal, Alyosha and Ivan part ways, feeling closer than ever. Ivan sees Smerdyakov when he goes back to his father's house, and Smerdyakov tells him he is worried about Fyodor. He is worried Dmitri will come to kill him and the old man will be helpless to save himself. Ivan goes to sleep very troubled. Father Zossima is on his deathbed, and Alyosha goes to visit him. The Elder tells those around him how much Alyosha reminds him of his older brother, a boy who died when he was a youth. He talks about being a profligate youth in the army. One day, he challenged another man to a duel because of a girl. Before the duel, however, he had a change of heart. He did not shoot and, after the duel, he retired from the army and joined a monastery. He talks about how much the Bible has affected him and says that everyone should embrace the world and the people in it. He dies. Many predicted that a miracle would happen upon Father Zossima's death, but his body begins to putrefy, filling the monastery with an awful smell. This fills the other monks with doubt that Father Zossima was the saintly man they thought he was. Alyosha is shaken by the news. He goes to see Grushenka, who has sent for him, and she admits to wanting to "ruin" him. When he tells her that Father Zossima has died, however, she becomes contrite about her callousness. She says she thinks she is a wicked person, and the two comfort each other. When Alyosha leaves, he has a renewed faith in Father Zossima and his teachings because Alyosha feels how wonderful it is to love and be loved in return. Meanwhile, Dmitri has become desperate. He wants to be with Grushenka, but he wants to pay Katerina back first. He goes on an odyssey, hoping that he can depend on the charity of others. He visits a man named Samsanov, a man who used to pursue Grushenka, and he hates Dmitri. He sends Karamazov to see a surly drunk, tricking Dmitri into thinking this man may be helpful. The man is practically incoherent, however, and Dmitri goes to find Madame Hohlakov. She tells Dmitri that the only way he will find 3,000 rubles is in the gold mines. In confusion, Dmitri concludes that Grushenka has gone to visit his father, and he goes to his father's house in a rage, carrying a brass pestle.
When he arrives, he does not find Grushenka, but as he is leaving, Grigory, his father's servant, thinks he has come to murder Fyodor. The two scuffle, and Dmitri hits Grigory on the head with the pestle. After determining that the man is not dead, Dmitri flees the scene and looks for Grushenka. She is with Kalganov, a former lover who had treated her poorly. Dmitri decides that he will not end up with Grushenka and decides to kill himself after seeing her one more time. He crashes her party and sits down with her gentleman friend and some other men. The situation becomes tense, and after the gentlemen make some disparaging remarks about Russians and Dmitri, Grushenka decides she does not want to be with such an insulting and vicious man. She decides that she loves Dmitri, and as the two are coming to terms with their love, the police come to arrest him for the murder of Fyodor. As the police question Dmitri, it becomes clear that the facts all support the conclusion that he did indeed murder his father, even though he did not commit the crime. He was at the scene of the crime, wielding a weapon, the night of the murder. He had said he would kill his father on several occasions. He publicly announced he was looking for 3,000 rubles and was desperate to find them, and Fyodor reportedly had an envelope with 3,000 rubles that was stolen the night of the murder. Dmitri is carried away, and very few people believe that he is innocent of Fyodor's murder. Meanwhile, Alyosha is visiting Ilusha, the boy who bit his hand, in the hospital. The boy has fallen quite ill, and Alyosha has gotten to know many of the boy's friends, who are also visiting him. One boy, Kolya Krassotkin, is a leader among the boys. He and Ilusha were friends, but they had a falling out because Ilusha fed a pin to a dog, and Kolya did not approve of his cruelty. When Alyosha comes to visit, he and Kolya talk for quite some time. The boy looks up to this wise man about which he has heard so much from the other boys, and he wants to impress him. The two become friends, and Alyosha treats all the boys as equals. When Kolya goes in to see Ilusha, he gives him a dog as a present. He reveals that the dog is none other but the dog Ilusha gave the piece of bread with a pin in it. Kolya has nursed the dog back to health and has fully trained him as a gesture of friendship to Ilusha. The mood is dampened, however, when the doctors go in to see Ilusha. Without even saying it, everyone understands that the boy does not have much time left. Ilusha is brave, and he tries to lift the spirits of those around him. Later, Alyosha visits his brother in jail. Dmitri tells Alyosha that Ivan has concocted a plan for his escape from jail. Alyosha goes to talk to Ivan, who feels strangely guilty about his father's death. Alyosha tells his brother that he should not feel responsible for a crime that he did not commit, but Ivan stalks off angrily. He meets Smerdyakov, who tells Ivan he thinks the Karamazov brother is guilty as an accomplice to the murder. He says that Ivan wanted his father dead and left the night of the murder to try to free himself of the responsibility of protecting his father. Ivan is angry and troubled by this, and when he talks to Smerdyakov later, Smerdyakov flatly admits to killing Fyodor. He says that Ivan's theories and ideas were the basis for his crime and that Ivan's talks with Smerdyakov basically rationalized the deed. When Ivan returns home after this meeting, he sees a devil in his room.
The devil chastises him for being a wicked person with weaknesses and foibles that have led to disastrous circumstances. Alyosha bangs on the door and finds his brother in a feverish state, muttering about a devil and Smerdyakov. Alyosha stays the night with his brother to take care of him. Dmitri's trial begins. Many people from all around come to see the spectacle of the parricide trial. Dmitri has an excellent lawyer, but it is a hard case to win. The prosecution brings many witnesses who testify to seemingly damning evidence against Dmitri. The defense, however, discredits one after another of these witnesses, showing ulterior motives or mitigating circumstances. Alyosha defends his brother from the stand, and Katerina gives a moving account of Dmitri's honorable nature. Then Ivan comes into the courtroom, waving money and implicating Smerdyakov. Since he is yelling nonsense, disrupting the trial, and generally acting crazy, the court does not believe him. Suddenly, at the end of the trial, Katerina stands up again, showing a letter from Dmitri that clearly states Dmitri's intention to kill Fyodor as a last resort. She has a change of heart and no longer wants to lie to protect a man who has hurt her so much. Word comes to the courtroom that Smerdyakov has hanged himself. After final statements are made, the verdict comes back: guilty. Dmitri is sentenced to jail. Dmitri welcomes this chance to become a new man, but he does not want to be in exile in Siberia for the rest of his life; he wants to return to his home country before he dies. Ivan is still sick, and Katerina takes care of him. Alyosha visits the boys with whom he has become friends. They are sad because Ilusha has died. Alyosha passes along Father Zossima's teachings of love and understanding, and they all embrace his words, cheering him.

BART (with input truncated to 1024) summary:

Fyodor Pavlovitch Karamazov, a landowner in his own day, lives with his three sons, Adelaida Ivanovna and Alexey. He is the son of the landowner, Fyodor, and his wife, Adele, are the daughters of a wealthy nobleman in the district. Adele is the daughter of a nobleman, and her father, Ivanovitch, is a prominent figure in the town. She is a beautiful, intelligent, and intelligent young woman, who is destined to be a great writer. She has a great deal of ambition, and she is determined to achieve her goals. However, her ambition is short-lived. She falls in love with a handsome young man named Ivanovn, who has recently returned from a trip to the Congo. He has a lot of money, but he is also a poor man, and he has no means of earning it. In fact, he is a miserly beggar, and so he spends most of his time trying to earn money from the poor. He also has a wife, a beautiful young girl, who he marries, and a young son, Ivan, who lives with him. He and Adele have two children, Ivan and Aleksey, and they are very happy, but they have a problem: Ivan is a poor beggar who is unable to pay his rent, and Aleksandra is a beggar. Ivan is very ill, so he is forced to sell his estate to a rich man named Vladimir. Vladimir is a rich, well-to-do man who is also rich, but who is not well-liked by the people who live in the neighborhood. Vladimir's wife, Evgeny, is also ill, and Vladimir's son, Pavel, is very sick. Vladimir has a bad case of typhus fever, which causes him to lose his job and his money. Vladimir, Pavel's younger brother, is sick, and Pavel is sent to the hospital to recover his father's condition. Pavel is also sick and dies, and Ivan is left to care for him.
The next day, Pavel and Adelayna are reunited with their father, who died in a typhus epidemic in the country. Pavel's father, Vladimir, has died, and the family is now in mourning. Pavel has been unable to find a new wife, so the family decides to send him to Moscow, where he will be reunited with his father. The family is devastated, but Pavel is able to find an old friend, a young man who has just returned from the Congo, who will be able to help him. Pavel and Ivan are devastated by the loss of his father, but Ivan and Vladimir are able to make amends with Vladimir. Pavel, however, is devastated by this news, and, after a long period of mourning, he decides to return to Moscow. He returns to Moscow to find his father dead, and to find that his father has died. He then goes to Moscow and is buried in the city cemetery, where his father is buried. Pavel dies of a heart attack, and then he is buried with his mother. He was a young, beautiful, and beautiful woman who was raised by a kindly, kind, and generous family. He had a great time with his family, but his life was cut short when his father died. Pavel was left to live with his aunt and uncle in Moscow, and when he died, he was left alone with his uncle and his two sons. He died in Moscow and his mother, who had been living with him in a convent, died in the same convent. Pavel had been a poor, poor, and ill-treated man, but now he was able to raise his son and raise his own son. Pavel survived, and now he lives in Moscow with his Aunt and Uncle. He lives with their aunt and Uncle, who are very poor and poor. They live in a small house in the countryside, and their children are raised by their uncle and aunt. They have a great love for each other, but their love for one another is not reciprocated. Pavel also dies in the convent, and after a brief illness, Pavel is left with a broken heart. He dies in Moscow. His father, a wealthy man, is the father of his son, who was also a clergyman. Pavel died of a stroke in the hospital, and was buried in a nearby cemetery. He left Pavel and his family behind. Pavel tried to help Pavel, but was unable to save him, so Pavel was forced to leave Moscow. Pavel returned to Moscow in the middle of the night. He tried to save Pavel's life, but it was too late for Pavel. Pavel managed to escape, and in the end, Pavel was killed by a bullet. Pavel took Pavel's body and died of his own wounds. Pavel left Pavel to die in the forest. Pavel stayed with the family for a few years, until he died of an illness. Pavel became ill and Pavel was sent to

Unlimiformer summary:

The first part of the novel focuses on the question of whether or not the Russian nobleman, Dmitri Fyodorovitch, has killed his father. In the town of Ivanovna, the lieutenant-colonel of the Mushenkhanovitch is accused of the murder of his brother Ivanovitch. The lieutenant-in-command, Vasilyevitch, takes the form of a dog, and the two men, the two men and the woman who are questioned by the court-martial, murphy. The two men cry out to the God of Russia for help in their quest to save the town. The man, afraid of the wrath of the God, hands the dog a bunch of letters that are supposed to be proof of his love for his brother. The old man, the one who had killed his mother and then found the letter, arrives. He reads it, asked the old man to forgive him for the murder and then takes the dog away. The other men, all of whom are prisoners, demand that the man confess his crime to the court.
The first and most important thing they tell the court is that they love the man. The court acquits the man and sentences the man to death. The second man, an old officer of the town, Alekandrovitch, asks to tell them the same thing. The third man, in the process of confessing his crime, is Vashenka, a drunk man who has been sent to the town to kill his father, for reasons which are not entirely clear to the people. The woman's servant, Evgenyevna, is also the one who has told the court the story of the Medvedevitch's murder, for the good old man's and the young man's love. The three men, who are separated for the first time, are laughing at the man's attempt to seduce Mitya. The young man, in the meantime, is conscripted into the town-side. He tells the court that he loves her, but he has yet to tell her the true story. The men, in this room, demand a man to kill her, and she will not betray them. The women, in their own country, are rebelling against the man who had sent them three thousand roubles, and they will not allow the man of the people to see them. They will not let the man in the town be allowed to see the man or Dmitriovitch; he will have her husband killed him. He will not tell the people who love him. The next man, named Vashenovitch, arrives, and takes the man away. They all begin to laugh at the fact that he has succeeded in seducing and entrusting his brother Dmitri. He is then taken away to the old woman's house, where the governor-side-of-the-world, and his sister, Arkadin, is being punished. The priestesses and the baron are shocked, for they have been so virtuous and well-suited. The only thing they will be able to do is kill the priest. They threaten to burn the priestess to death, for she has been so wicked and libidinous that she has not yet seen the priest, for her husband. The priests, ostensibly convinced that she is a woman who loves the priest and has been punished for her love and for allowing the priest to marry her. The last man, Yakivitch, arrives at the house, and, after a long day of drinking and then some of the men, is killed. He and the priest are ordered to leave the town so that the priest can finally be reunited with the people of the old lady. The final man, the commander of the St. Petersburg town of Arkadina, is sentenced to death for the crime of having killed and then the lieutenant of the governor, for taking the money. The commander, the former lieutenant-delegation of the People's Army, is summarily executed, and all the men, except for the commander, have been summarily punished for their crime. The entire town is shocked and, in a very dramatic way, the priestesses plead for the forgiveness of the man, for allowing them to kill and imprison Ivan. They plead for their brother to be restored as well, for all the people they have loved, and for the priest or to tell the story.
science.abm9326.pdf

RESEARCH ARTICLE SUMMARY

NUCLEAR PORE COMPLEX

Structure of cytoplasmic ring of nuclear pore complex by integrative cryo-EM and AlphaFold

Pietro Fontana, Ying Dong, Xiong Pi, Alexander B. Tong, Corey W. Hecksel, Longfei Wang, Tian-Min Fu, Carlos Bustamante, Hao Wu*

INTRODUCTION: The nuclear pore complex (NPC) is the molecular conduit in the nuclear membrane of eukaryotic cells that regulates import and export of biomolecules between the nucleus and the cytosol, with vertebrate NPCs ~110 to 125 MDa in molecular mass and ~120 nm in diameter. NPCs are organized into four main rings: the cytoplasmic ring (CR) at the cytosolic side, the inner ring and the luminal ring on the plane of the nuclear membrane, and the nuclear ring facing the nucleus. Each ring possesses an approximate eightfold symmetry and is composed of multiple copies of different nucleoporins. NPCs have been implicated in numerous biological processes, and their dysfunctions are associated with a growing number of serious human diseases. However, despite pioneering studies from many groups over the past two decades, we still lack a full understanding of NPCs' organization, dynamics, and complexity.

RATIONALE: We used the Xenopus laevis oocyte as a model system for the structural characterization because each oocyte possesses a large number of NPC particles that can be visualized on native nuclear membranes without the aid of detergent extraction. We used single-particle cryo-electron microscopy (cryo-EM) analysis on data collected at different stage tilt angles for three-dimensional reconstruction and structure prediction with AlphaFold for model building.

RESULTS: We reconstructed the CR map of X. laevis NPC at 6.9 and 6.7 Å resolutions for the full CR protomer and a core region, respectively, and predicted the structures of the individual nucleoporins using AlphaFold because no high-resolution models of X. laevis Nups were available. For any ambiguous subunit interactions, we also predicted complex structures, which further guided model fitting of the CR protomer. We placed the nucleoporin or complex structures into the CR density to obtain an almost full CR atomic model, composed of the inner and outer Y-complexes, two copies of Nup205, two copies of the Nup214-Nup88-Nup62 complex, one Nup155, and five copies of Nup358. In particular, we predicted the largest protein in the NPC, Nup358, as having an S-shaped globular domain, a coiled-coil domain, and a largely disordered C-terminal region containing phenylalanine-glycine (FG) repeats previously shown to form a gel-like condensate phase for selective cargo passage. Four of the Nup358 copies clamp around the inner and outer Y-complexes to stabilize the CR, and the fifth Nup358 situates in the center of the cluster of clamps. AlphaFold also predicted a homo-oligomeric, likely specifically pentameric, coiled-coil structure of Nup358 that may provide the avidity for Nup358 recruitment to the NPC and for lowering the threshold for Nup358 condensation in NPC biogenesis.

CONCLUSION: Our studies offer an example of integrative cryo-EM and structure prediction as a general approach for attaining more precise models of megadalton protein complexes from medium-resolution density maps.
The more accurate and almost complete model of the CR presented here expands our understanding of the molecular interactions in the NPC and represents a substantial step forward toward the molecular architecture of a full NPC, with implications for NPC function, biogenesis, and regulation.

The list of author affiliations is available in the full article online. *Corresponding author. Email: [email protected] These authors contributed equally to this work. Cite this article as P. Fontana et al., Science 376, eabm9326 (2022). DOI: 10.1126/science.abm9326

Cryo-EM structure of the cytoplasmic ring of the nuclear pore complex from X. laevis. The 6.9 Å map was generated with single-particle cryo-EM, and the model was built with AlphaFold structure prediction. The secondary structural elements guided EM map fitting, resulting in an almost complete model of the complex. The approach allowed the identification of five copies of Nup358 and a second copy of the trimeric Nup214-Nup88-Nup62 complex.

RESEARCH ARTICLE

NUCLEAR PORE COMPLEX

Structure of cytoplasmic ring of nuclear pore complex by integrative cryo-EM and AlphaFold

Pietro Fontana1,2, Ying Dong1,2, Xiong Pi1,2, Alexander B. Tong3, Corey W. Hecksel4, Longfei Wang1,2, Tian-Min Fu1,2,5,6, Carlos Bustamante3,7, Hao Wu1,2*

The nuclear pore complex (NPC) is the conduit for bidirectional cargo traffic between the cytoplasm and the nucleus. We determined a near-complete structure of the cytoplasmic ring of the NPC from Xenopus oocytes using single-particle cryo-electron microscopy and AlphaFold prediction. Structures of nucleoporins were predicted with AlphaFold and fit into the medium-resolution map by using the prominent secondary structural density as a guide. Certain molecular interactions were further built or confirmed by complex prediction by using AlphaFold. We identified the binding modes of five copies of Nup358, the largest NPC subunit with Phe-Gly repeats for cargo transport, and predicted it to contain a coiled-coil domain that may provide avidity to assist its role as a nucleation center for NPC formation under certain conditions.

The nuclear pore complex (NPC) regulates nucleocytoplasmic passage of biomolecules and has been implicated in numerous biological processes, with their dysfunctions associated with a growing number of diseases (1–6). An NPC is composed of multiple copies of more than 30 nucleoporins (Nups) with structural elements of stacked α-helical repeats and/or β-propellers, about a third of which also contain phenylalanine-glycine (FG) repeat sequences for selective transport of cargoes (7–10). The approximately eightfold symmetric NPC can be divided into the cytoplasmic ring (CR) at the cytosolic side, the inner ring (IR) and the luminal ring (LR) on the plane of the nuclear membrane, and the nuclear ring (NR) facing the nucleus (Fig. 1A) (3, 4, 11–13). Tremendous progress has been made toward unveiling the architecture of this enormous molecular machine (11–20). Here, we present the cryo-electron microscopy (cryo-EM) structure of the CR from Xenopus laevis oocytes.

Structure determination

We directly spread the nuclear envelopes (NEs) of actinomycin D (ActD) treated X. laevis oocytes (18) onto Lacey grids with carbon foil on gold support and applied the Benzonase nuclease to remove contaminating chromatin (fig. S1A).

1Department of Biological Chemistry and Molecular Pharmacology, Harvard Medical School, Boston, MA 02115, USA. 2Program in Cellular and Molecular Medicine, Boston Children's Hospital, Boston, MA 02115, USA. 3Jason L. Choy Laboratory of Single-Molecule Biophysics, Institute for Quantitative Biosciences-QB3, and Chemistry Graduate Group, University of California, Berkeley, CA 94720, USA. 4Division of CryoEM and Bioimaging, SSRL, SLAC National Accelerator Laboratory, Menlo Park, CA 94025, USA. 5Department of Biological Chemistry and Pharmacology, Ohio State University, Columbus, OH 43210, USA. 6The Ohio State University Comprehensive Cancer Center, Columbus, OH 43210, USA. 7Departments of Molecular and Cell Biology, Physics, and Chemistry, Howard Hughes Medical Institute, University of California, Berkeley, CA 94720, USA. *Corresponding author. Email: [email protected] These authors contributed equally to this work.

Fig. 1. Cryo-EM map of the X. laevis NPC. (A) Cryo-EM density of the X. laevis NPC (contour level, 3.0 σ) in top and side views, shown with CR in cyan, NR in green, IR and membrane region in gray, and the channel density in magenta. The map is eightfold symmetrized and at 19.8 Å resolution. (B) Cryo-EM density of a CR protomer at 6.9 Å resolution colored by local resolution. (C) Cryo-EM density of the X. laevis NPC CR ring (top view; contour level, 9.5 σ) composed from the 6.9 Å CR protomer map by assuming the eightfold symmetry. One of the CR protomers is shown in cyan. (D) Cryo-EM density (contour level, 4.5 σ) of a CR protomer superimposed with the final model in two orientations and colored by their model colors, with inner Y-complex in blue, outer Y-complex in green, Nup205 in orange, Nup214-Nup88-Nup62 complex in purple, Nup358 in red, and Nup155 in cyan.

Cryo-EM data collection was conducted at different stage tilts and in counting mode by use of a K3 detector mounted on a Titan Krios microscope at 1.4 Å pixel size. Representative three-dimensional (3D) plots composed of the X and Y positions and the defocus levels (ΔZ) of the NPC particles in selected tilt images showed the location-dependent variation of the defocus values consistent with the tilt planes (fig. S1B). Data processing performed at the bin2 pixel size (2.8 Å) gave rise to an eightfold averaged full NPC structure, subtracted CR structure, and NR structure at 19.8, 14.6, and 14.7 Å resolutions, respectively (Fig. 1A, fig. S2, and table S1). Symmetry expansion, density subtraction, and 3D classification led to CR and NR protomers at 11.1 and 15.1 Å resolutions. Final per-particle refinement and masking resulted in maps at 6.9 and 6.7 Å resolutions for the full CR protomer and a core region, respectively (Fig. 1, B and C; fig. S2; and table S1). The Fourier shell correlation (FSC) plots and 3D FSC plots for both maps are shown (fig. S3, A to D), as well as particle orientation distributions (fig. S3, E and F). The histograms of per-angle FSC indicated fairly isotropic resolutions along different orientations (fig. S3, C and D). The map used for density interpretation is the 6.9 Å resolution map of the full protomer.
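The reported resolutions follow the gold-standard FSC = 0.143 criterion computed inside RELION and cryoSPARC; purely to make the criterion concrete, a toy NumPy version over two half-maps could look like the following (a sketch, not the authors' pipeline):

    import numpy as np

    def fsc_curve(half1, half2):
        # Fourier shell correlation between two half-maps of identical cubic shape.
        f1 = np.fft.fftshift(np.fft.fftn(half1))
        f2 = np.fft.fftshift(np.fft.fftn(half2))
        n = half1.shape[0]
        grid = np.indices(half1.shape) - n // 2
        shells = np.sqrt((grid ** 2).sum(axis=0)).astype(int)
        curve = []
        for r in range(1, n // 2):
            s = shells == r
            num = (f1[s] * np.conj(f2[s])).sum()
            den = np.sqrt((np.abs(f1[s]) ** 2).sum() * (np.abs(f2[s]) ** 2).sum())
            curve.append((num / den).real)
        return np.array(curve)

    def resolution_at(curve, voxel_size, n, threshold=0.143):
        # First shell whose FSC drops below the 0.143 threshold; result in Å.
        below = np.nonzero(curve < threshold)[0]
        r = below[0] + 1 if below.size else n // 2 - 1
        return n * voxel_size / r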
Despite the modest 6.9 Å resolution of the full CR protomer, the secondary structures, especially helices, are apparent in the maps (Fig. 1, B and C).

Model building using AlphaFold

We used the recently implemented breakthrough algorithm for protein structure prediction (AlphaFold) (21, 22), mainly as the ColabFold notebook (22) with extended capability to predict homo- and heterocomplexes, to build a nearly complete model of the CR protomer (fig. S4), which contains the inner and outer Y-complexes, two copies of Nup205, two copies of the Nup214-Nup88-Nup62 complex, one Nup155, and five copies of Nup358 (Fig. 1D). Because no high-resolution models of X. laevis Nups were available, the workflow first involved prediction of five independent models of individual Nups, which in almost all cases gave essentially the same structures (tables S2 and S3). For each prediction, we present the overall and per-residue pLDDT (predicted local distance difference test; 0 to 100, with 100 being the best), the pTM (predicted template modeling; 0 to 1, with 1 being the best), and the predicted alignment error (PAE) matrix (expected position error at residue x when the predicted and true structures are aligned on residue y, representing confidence of the relative positioning of each pair of residues or domains) (tables S2 and S3). We picked the top-ranked model by pLDDT for single proteins and by pTM for complexes in each case for density fitting unless otherwise noted. Whereas helical Nups used the prominent helical features in the maps for their fitting, Nups with mainly a β-propeller domain required prediction of binary complexes with contacting helical Nups to guide the fitting (table S4). Last, for any ambiguous subunit interactions, we predicted complex structures, which further guided model fitting of the CR protomer (table S4). X. laevis Nups that have a substantial region not covered by homology to structural homologs in other species include Nup107, Nup133, Nup160, Nup205, and Nup358 (tables S5 and S6 and fig. S5).

The Y-complex

The CR contains 16 copies of the Y-shaped complex (Y-complex), encircling head to tail to form the inner and outer layers of eight Y-complexes each in the ring (Fig. 1D) (23). Each Y-complex is composed of Nup160 and Nup37 (one short arm); Nup85, Nup43, and Seh1 (the other short arm); and Nup96, Sec13, Nup107, and Nup133 (the long stem) (Fig. 2A). Structural superposition revealed conformational differences between inner and outer Y-complexes near Nup133 (Fig. 2B and Movie 1), likely because of the need to accommodate the different diameters at the inner and outer layers. The AlphaFold-generated Nup160 structure fits well with the density of the inner and outer Y-complexes (Fig. 2C, fig. S5A, and tables S2 and S3). By contrast, the published homology model of X. laevis Nup160 [Protein Data Bank (PDB) ID 6LK8] (14) misses a C-terminal region (Fig. 2C), which may have led to the incorrect assignment of its density to Nup96 (Fig. 2C and fig. S5B) (14). Thus, building full-length models with AlphaFold may not only increase the structural accuracy of the individual subunits but also help to better assign and interpret densities. How β-propeller Nups in the Y-complex (Nup37, Nup43, Seh1, and Sec13) fit in the CR map cannot be easily discerned. We therefore predicted structures of these Nups in complex with their contacting α-helical Nups.
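The model-selection rule described above (rank single Nups by pLDDT, complexes by pTM) is simple to express in code; the per-model score records below are illustrative of ColabFold's outputs rather than an exact file format:

    def pick_top_model(predictions, is_complex):
        # predictions: one record per trained AlphaFold model (five were run
        # per target), e.g. {"name": "model_3", "plddt": 82.2, "ptm": 0.74}.
        key = "ptm" if is_complex else "plddt"
        return max(predictions, key=lambda p: p[key])

    # Example with illustrative numbers:
    models = [{"name": "model_1", "plddt": 78.1, "ptm": 0.61},
              {"name": "model_3", "plddt": 82.2, "ptm": 0.74}]
    best = pick_top_model(models, is_complex=True)  # ranked by pTM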
Seh1-Nup85, Nup43-Nup85, and Sec13-Nup96 complexes were all predicted with excellent pTM and pLDDT scores and fitted the cryo-EM density as a rigid body (Fig. 2D; fig. S5, C and D; and table S4). The Seh1-Nup85 and Sec13-Nup96 complexes exhibited hybrid β-propeller structures in which an insertion blade from the interacting helical domain completes the seven-bladed propellers (Fig. 2E and fig. S5D), as also observed in previous crystal structures of the corresponding, but partial, yeast and human complexes (24–26). AlphaFold failed to predict the Nup37-Nup160 complex (fig. S5E) (27), and we instead used the crystal structure to guide the Nup37 positioning in the map.

Nup205 and the Nup214-Nup88-Nup62 complex

Two AlphaFold-generated Nup205 models, which are larger than and quite different from the homologous crystal structure (28), were each fitted well at the channel side of the two Y-complexes to act as a bridge between them (Fig. 3A; Movie 2; fig. S6, A and B; and tables S5 and S6). The outer Nup205 runs from the C-terminal part of Nup160 to Nup85, and the inner Nup205 interacts with Nup160 at its N-terminal domain but tilts away from Nup85 at its C-terminal domain because of the interaction with the neighboring Nup214-Nup88-Nup62 complex (Fig. 3, A and B). We fit a prominent, flag-shaped density over inner Nup85 and extending to the outer Nup85 by generating a composite model of the Nup214-Nup88-Nup62 complex (fig. S6C). The three proteins have been previously predicted to form coiled-coil interactions (4, 29–32). According to AlphaFold, Nup88 and Nup214 also contain β-propeller domains, and complex prediction confirmed the coiled coils and agreed well with the CR map: the β-propeller of Nup88 and one end of the helical bundle as the flag base, the long helical bundle as the flagpole, and the shorter helical bundle as the banner (Fig. 3C). By contrast, the previous X. laevis CR structure presented only a polyalanine model for this complex (fig. S6D) (14). The β-propeller domain of Nup214 does not have density, likely because of a flexible linkage. A given Nup85 can only bind to either Nup205 (for outer Nup85) or the Nup214-Nup88-Nup62 complex (for inner Nup85), but not both (Fig. 3, A and D), which explains the differential modes of Nup205 interactions with the Y-complexes. We noticed another piece of nearby density, which was previously suggested as a second Nup214-Nup88-Nup62 complex (14) and was fitted as such in a recent paper (20), which is in agreement with the expected stoichiometry from mass spectrometry data (13). Our density fit well with the flag base (Fig. 3D). However, the flag pole is largely missing. We do not know whether this is due to a partial disorder of this region or a lower occupancy of the second complex as a result of ActD treatment in our sample. The Nup88-Nup214-Nup62 complex resembles the X. laevis Nup54-Nup58-Nup62 complex anchored by Nup93 of the IR or yeast Nup49-Nup57-Nsp1 complex in its coiled-coil region (fig. S6C) (33, 34), suggesting that coiled-coil structures are frequently used building blocks in NPC assembly.

The five copies of Nup358

The largest protein in the NPC, Nup358 (also known as RANBP2, or RAN binding protein 2), is composed of a largely disordered C-terminal region with FG repeats for gel-like phase formation and selective cargo passage and with binding sites for RANGAP, RAN, and other effectors (Fig. 4A) (7, 8, 23, 35).
Fig. 2. Fitting of Y-complex Nups with AlphaFold. (A) Cryo-EM density (contour level, 8.0 σ) of the outer Y-complex colored by individual Nups. The β-propeller domain of Nup133 was not built because of lack of density. (B) Two views of superimposed inner Y-complex (blue) and outer Y-complex (green) by the two short arms of the Y-complexes. The distal ends of aligned Nup133 without counting the β-propeller have a distance of ~38 Å. (C) Comparison of AlphaFold prediction (left) and homology modeling (right) for Nup160. The cryo-EM density (contour level, 4.5 σ) and the positioning of Nup160 (yellow) and Nup96 (cyan) by the two predictions are shown at bottom. (D and E) AlphaFold-generated model of the Nup85-Seh1 complex fitted with (D) the cryo-EM density (contour level, 4.5 σ) and shown to highlight the inserted blade.

AlphaFold predicted the Nup358 N-terminal region as having a large α-helical domain (~800 residues), a linker, and an isolated single α-helix (Fig. 4, A and B). Previously, only the structures of a small N-terminal region (~150 residues) of human and chimpanzee NUP358 were solved (36) and used for homology modeling in X. laevis NPC (fig. S7A and tables S5 and S6) (14). The Nup358 globular domain is an S-shaped structure, and we identified five copies of Nup358 in the CR map (Fig. 4C and fig. S7B), which is consistent with the previous understanding of Nup358 as one of the most abundant proteins in the NPC (Fig. 4C and fig. S7B) (4). The full model of Nup358 molecules shows that four of the copies clamp around the inner and outer Y-complexes near the junction of Nup96 and Nup107 (Fig. 4, D and E, and Movie 3), likely to stabilize the CR. In the outer Y-complex, clamp A contacts Nup96 and Nup107 with ~750 and 400 Å² buried surface area, respectively, and clamp B contacts Nup107 with ~630 Å² buried surface area, as calculated on the PDBePISA server (37). In the inner Y-complex, clamp C contacts Nup96 with only ~270 Å² buried surface area, and clamp D interacts with Nup107 with ~750 Å² buried surface area. Superposition of the inner and outer Nup96-Nup107 complexes showed that clamps B and D both contact Nup107 in a similar mode of binding, but clamps A and C are shifted significantly to account for the differences in the surface area burial (Fig. 4F). The fifth Nup358 (clamp E), situating in the center of the Nup358 cluster, contacts clamp C (~1700 Å²) and Nup107 (~600 Å²) of the outer Y-complex. Thus, the apparent weaker interaction to the Y-complex by clamp C is compensated by the additional interaction from clamp E.

Homo-oligomeric Nup358

We wondered whether the predicted isolated helix (Fig. 4B) following the S-shaped domain forms a coiled-coil structure, which is however invisible because of its flexible linkage. We thus used the COILS server (38), which predicted up to 100% coiled-coil propensity for this helix (Fig. 5A). We then used AlphaFold to predict how the helix would assemble into oligomers. We input the number of protomers as six because coiled-coil structures with more than five subunits are very rare, and six should cover almost all possibilities.
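Conceptually, this oligomer search amounts to scoring each candidate stoichiometry and keeping the one with the best assembly confidence; in the sketch below, predict_coiled_coil_ptm is a hypothetical wrapper around an AlphaFold run with the corresponding homooligomer setting:

    def best_oligomer(sequence, predict_coiled_coil_ptm, max_copies=6):
        # predict_coiled_coil_ptm(sequence, n) is assumed to return the pTM
        # of an n-copy assembly of the coiled-coil segment.
        scores = {n: predict_coiled_coil_ptm(sequence, n)
                  for n in range(2, max_copies + 1)}
        # Higher pTM suggests a more plausible stoichiometry; here the
        # pentamer scored highest (pTM 0.74, reported below).
        return max(scores, key=scores.get), scores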
AlphaFold predicted a pentameric coiled coil plus a single helix as the top-ranked model with a pTM of 0.74 and pLDDT of 82.2. This is then followed by two trimeric coiled-coil complexes with pTMs of 0.45 and 0.44, a tetramer and a dimer with a pTM of 0.57, and last, a hexameric coiled coil with a pTM of 0.39 (Fig. 5B). The pentameric coiled coil also had the highest per-residue pLDDT scores at its core region (bluest) when displayed onto the structure (Fig. 5C). To corroborate the AlphaFold prediction, we expressed and purified His-tagged X. laevis Nup358 (1 to 800, only the globular region) and Nup358 (1 to 900, with the coiled-coil region) and subjected them to gel filtration chromatography. Judging by gel filtration standards from the same column, Nup358 (1 to 800) may be consistent with a monomer, whereas Nup358 (1 to 900) may be consistent with a pentamer (Fig. 5D). A pentameric Nup358 (Fig. 5E) may help its interactions with the Y-complexes through avidity, although the potential formation of other oligomers cannot be excluded. A recent preprint reported an antiparallel tetrameric crystal structure of the coiled-coil region of human NUP358 (39), suggesting that Nup358 from different species may assume different modes of oligomerization. A recurrent human mutation of NUP358, Thr585Met (T585M) (equivalent to X. laevis T584M), is associated with autosomal-dominant acute necrotizing encephalopathy (ADANE) (40, 41). Thr585 is mapped to a partially buried site in direct interaction with the hydrophobic side chain of Leu450 (fig. S7C), suggesting that the mutation might affect the conformation of the structure and reduce its interaction with the Y-complexes. The dominant nature of this presumed loss-of-function mutation is consistent with the multimeric nature of Nup358 in which the mutant co-oligomerizes with the wild-type protein to reduce the avidity for its interaction with the Y-complexes.

Nup155 and unassigned densities

Previously, a cryo-electron tomography (cryo-ET) study of human NPC showed localization of NUP155, a linker Nup, in both the CR and the NR (16). The AlphaFold-predicted Nup155 structure consists of a β-propeller followed by a large helical repeat domain (Fig. 6A), in an organization similar to that of Nup160 and Nup133. The helical repeat domain fits well with the CR protomer map (Fig. 6B) and interacts with inner Nup160, burying ~750 Å² surface area, and with inner Nup205, burying ~310 Å² surface area (Fig. 6C). We wondered whether we masked out the density for the β-propeller during high-resolution refinement. The full CR map from a previous step of data processing (fig. S2) revealed density for a complete Nup155 (Fig. 6D). In this map, the β-propeller of Nup155, the neighboring inner and outer Nup160, and inner Nup133 situate inside a membrane region of the density (Fig. 6D). The β-propeller domains of Nup155 and Nup133 have been shown to possess a membrane-anchoring domain known as amphipathic lipid packing sensor (ALPS) (42, 43), which consists of a short, disordered loop that may fold into an amphipathic helix on membrane (44). We could not assign the identity of a piece of elongated density next to inner Nup205, Nup133, and Nup107 (fig. S8A). This density was absent from a previously deposited cryo-EM map of X. laevis CR (14) but was present in the deposited cryo-ET maps of X. laevis NPC treated or not with ActD (fig. S8B) (18).
The location of this density could be explained by Nup93 as suggested by a recently released paper and a preprint (20, 39). However, we were unable to properly fit Nup93 because of the weaker density.

Movie 1. Conformational difference between inner and outer Y-complexes. The movie shows models of the complete Y-complexes, from 90° rotation around the horizontal axis to transition between conformations of the outer and inner Y-complexes, with the main difference at Nup133. Details are reported in Fig. 2.

Fig. 3. Interactions mediated by Nup205 and the Nup214-Nup88-Nup62 complex. (A) Overall interactions of inner Nup205 (orange) and outer Nup205 (yellow) with Y-complexes. Outer Nup205 directly interacts with Nup160, Nup85, and Seh1 of the outer Y-complex and with Nup43 of the inner Y-complex. The inner Nup205 directly interacts with Nup160 of the inner Y-complex, C-terminal region of Nup155, and Nup88 β-propeller in the Nup214-Nup88-Nup62 complexes. The dashed arrows indicate the locational difference between inner and outer Nup205. (B) Superposition of the inner (blue) and outer (green) Y-complexes together with the bound Nup205 molecules, showing the positions of the inner (orange) and outer (yellow) Nup205 relative to Nup85 and Nup160. The N-terminal region of Nup205 binds similarly to inner and outer Nup160 molecules; the C-terminal domain binds outer Nup85 but pivots away from the inner Nup85 because of the presence of the Nup214-Nup88-Nup62 complexes. (C) Overview (left) and fitting (right) of the AlphaFold-predicted one Nup214-Nup88-Nup62 complex into the cryo-EM density map of NPC CR monomer (contour level, 4.5 σ). (D) Overview (left) and fitting (right) of the AlphaFold-predicted Nup214-Nup88-Nup62 complexes into the cryo-EM density map (contour level, 4.5 σ), with the neighboring inner Nup85. Two Nup214-Nup88-Nup62 complexes are shown.

Conclusion

Our nearly complete model of the CR of X. laevis NPC reveals the molecular interactions within and their biological implications. One aspect of the CR assembly that was unexpected is the observed asymmetry in the composition and mode of binding among Nups: the conformational differences between the two Y-complexes, the different binding modes of the two Nup205 molecules with the Y-complexes, the two Nup214-Nup88-Nup62 complexes side by side, and the five Nup358 complexes with contrasting binding modes. It will be interesting to know whether this asymmetry represents a basal state of the CR or is caused by ActD-mediated cargo deficiency, and whether it will be a common feature in the structures of the NR, IR, or LR. Our X. laevis NPC sample came from haploid oocytes, which may differ further from NPCs in somatic cells. We propose that the multiple copies of Nup358 and its oligomeric coiled-coil association explain its implicated role as a key driver of NPC assembly during oogenesis in the cytosol that is different from the rapid postmitotic and the slower interphase NPC assembly (2).
This process occurs on stacked membrane sheets of the endoplasmic reticulum (ER) termed annulate lamellae (AL), and Nup358 condensates from its FG repeats act as a fastener to spatially direct this NPC biogenesis from scratch (2, 45). The additional requirement for the FG-containing Nup214 in Nup358 recruitment to the NPC (46) further suggests a role of condensation in NPC assembly. The oligomeric structure of Nup358 may lower the threshold for Nup358 condensation, thus helping to explain its nucleating role among the different Nups.

We also present an integrative approach to take advantage of the recent developments in cryo-EM technology (47, 48) and AlphaFold structure prediction (21, 22, 49), which led to a more precise modeling of the NPC. Similar approaches were also used in the structure determination of NPCs in recently published papers or preprints (19, 20, 50–52). AlphaFold prediction is in contrast to structure modeling by means of homology to deposited structures that are often partial or quite dissimilar. The goal of achieving high resolution is to obtain the best model possible; incorporating information from AlphaFold in the modeling process may be analogous to what the field did previously for stereochemical restraints (53). With the capability for complex prediction to become more routine (22, 54, 55), we anticipate that this approach will not only assist the modeling of new structures but also help to reinterpret previous medium-resolution cryo-EM maps and become a norm in structural biology.

Materials and methods

Sample preparation for cryo-EM

X. laevis has played a key role in revealing the NPC structure because each oocyte has a large number of NPC particles (11, 14, 15, 18, 56). Freshly isolated stage VI oocytes of X. laevis in the modified Barth's saline (MBS, 10 mM HEPES at pH 7.5, 88 mM NaCl, 1 mM KCl, 0.82 mM MgSO4, 0.33 mM Ca(NO3)2, and 0.41 mM CaCl2) were purchased and shipped overnight from Ecocyte Bioscience US LLC. To optimize the homogeneity of the NPC sample, we incubated these oocytes with 100 μg/ml Actinomycin D (ActD) at 4°C overnight to inhibit RNA synthesis and thus RNA export for synchronization of the transport cycles (18). Each oocyte was poked at the animal pole using a sharp tweezer to result in the ejection of the nucleus, and transferred into a low salt buffer containing ActD (LSB, 10 mM HEPES at pH 7.5, 83 mM KCl, 17 mM NaCl, and 7.5 μg/ml ActD). The nucleus was further washed to reduce the contaminating yolk in a new LSB solution. Two or three washed nuclei were then transferred to the surface of a freshly glow-discharged grid. The NE was poked open, spread using glass needles, incubated for 10 min in 10 μl of LSB supplemented with Benzonase Nuclease (Sigma Aldrich, E8263) to remove the contaminating chromatin, and subsequently washed twice with 10 μl of LSB. 3 μl LSB was added to the grid before blotting it for 3 to 5 s under 100% humidity at 4°C and plunging into liquid ethane using a Mark IV Vitrobot (ThermoFisher).

Negative staining EM

Nuclear membranes were applied on a freshly glow-discharged grid, using a Pelco EasyGlow, as described for cryo-EM sample preparation. Excess buffer was blotted on filter paper, and 6 μl of a 1% uranyl formate solution was applied for 30 s and blotted again on filter paper. Negatively stained samples were imaged on a JEOL JEM-1400 transmission electron microscope at 120 keV.
Cryo-EM data collection

Screening and collection were performed at the Stanford-SLAC Cryo-EM Center (S2C2) with a Titan Krios electron microscope (Thermo Fisher Scientific) operating at 300 keV equipped with a K3 detector and a BioQuantum energy filter (Gatan, slit width 20 eV). Movies were collected in counting mode at a 1.4 Å pixel size (table S1). Because of the way the grids were made, most NPC particles would have a similar orientation with their eightfold axis perpendicular to a grid, and we expected to use a series of stage tilt angles to alleviate this orientation bias for 3D reconstruction. Given that gold grids can minimize beam-induced movement (57), we tested a number of gold grid types with the goal of identifying one with the smallest beam-induced movement, which is often exaggerated at high tilt angles. These grids included Lacey carbon films on gold support, 300 mesh (Ted Pella); Quantifoil holey carbon films on gold support, R 1.2/1.3, 300 mesh (Quantifoil Micro Tools); UltrAuFoil holey gold films on gold support, R 1.2/1.3, 300 mesh (Quantifoil Micro Tools); and UltrAuFoil holey gold films on gold support overlaid with graphene (made by Wei Li Wang in the Wu lab). Lacey carbon films on gold support were shown to be the most stable and were thus used for all data collection. To alleviate the orientation bias, we initially collected datasets at stage tilts of 0°, 35°, and 45° with a total dose of 54 e−/Å² over 40 frames for 0° and 35°, and a total dose of 79.8 e−/Å² over 60 frames for 45°. An ideal tilt angle of 42° was then calculated using cryoEF (58) from a preliminary 3D reconstruction and was used for the subsequent data collection with a total dose of 80 to 140 e−/Å² over 80 to 120 frames. SerialEM was used for fully automated data collection, with a defocus range between −1 and −3 μm.

Movie 2. Interactions formed by Nup205 and the Nup214-Nup88-Nup62 complex. The movie highlights inner and outer Nup205 and the ternary Nup214-Nup88-Nup62 complex and their interactions. The model rotates 360° along the vertical axis and 360° along the horizontal axis. Detailed interactions are reported in Fig. 3.

Fig. 4. Nup358 interacts with the Y-complexes as clamps. (A) Domain organization of X. laevis Nup358 and the approximate boundaries. ZnFs, zinc fingers. (B) AlphaFold-predicted structure of the N-terminal region of Nup358, showing the S-shaped globular domain, an isolated helix, and the flexible linker in between. (C) Fitting of Nup358 globular domain to the density (contour level, 8.0 σ). (D) The region of the map (contour level, 8.0 σ) containing five Nup358 molecules (labeled as clamps A to E) and two Y-complexes (Nup96-Nup107 complex), in two orientations. (E) Two Nup358 molecules each clamp around Nup96-Nup107 at the inner and outer Y-complexes. Clamps A and B (red) are for the outer Y-complex, and clamps C and D (pink) are for the inner Y-complex. The last Nup358 (clamp E, orange) contacts clamp C and Nup107 of the outer Y-complex. (F) Relative shifts in the clamp location on the two Y-complexes.
The clamps B and D are similar in their location on Nup107, whereas clamps A and C have a shift in their position on Nup96.RESEARCH |STRUCTURE OF THE NUCLEAR PORE Downloaded from https://www.science.org on March 01, 2024 Cryo-EM data processing Data processing leveraged computer support from the SBgrid Consortium ( 59). Movies were corrected by gain reference and beam-induced motion, and summed into motion-corrected and dose weighted images using the Relion 3.08 implementation of the MotionCor2 algorithm ( 60,61). The distribution of average motions per frame for each grid type at a given tilt angle was plotted using OriginLab (OriginPro 2017 Suite, OriginLab Corporation, Northampton, MA, USA) to evaluate grid-dependent drift performance. The initial contrast tr ansfer function (CTF) estimation of motion-corrected micrographs without dose-weighting was calculated by CTFFIND4 ( 62). All micrographs were manually inspected and sele cted based on particle uniformity and contrast, and particles were picked manually. Gctf ( 63) was then used to determine the per-particle defocus values ( 63), from which 3D plots composed of the X and Y coordinates and the CTF (Z) of the particles for selected tilt images were generated using OriginLab (OriginPro 2017 Suite, OriginLab Corporation, Northampton, MA, USA). A plane was then fit to each 3D plot of a given image (fig. S1B). A total of 204,551 particles were manually picked, local CTF-corrected and extracted from 30,987 dose-weighted micrographs using a box size of 330 by 300 pixels at a 4 binned pixel size of 5.6 in RELION 3.08 ( 61). These particles were imported into cryoSPARC ( 64)t op e r f o r m 2D classification, from which 124,532 good particles were selected and merged for homogeneous refinement. The published cryo-EM map of the human NPC (EMD-3103) ( 16)w a s low-pass filtered to 60 and used as the initial model. The homogeneous refinement with C8 symmetry resulted in a reconstruction at 22.1 . These reconstructed 124,532 particles were exported to RELION, 3.08 extracted again with a box size of 660 by 660 pixels and a binned pixel size of 2.8 , and imported back into cryoSPARC to re-perform 2D classification. 101,366 particles were selected for homoge-neous refinement using the 22.1 map lowpass filtered to 40 as the initial model. The homogeneous refinement with C8 symmetry resulted in a 19.8 map. Particle density subtraction with the aligned 101,366 particles for separate processing of the CR or the NR was done in cryoSPARC. The new local refinement in cryoSPARC using the subtracted particles and a NR or a CR mask led to NR and CR maps at 14.7 and 14.6 resolutions, respectively. The aligned 101,366 particles for the whole NPC were also exported to RELION 3.08 and ran auto-refine with local search and C8 symmetry, with the 19.8 map low-pass filtered to 30 as the initial model. The resolution of the auto-refined map was 19.5 . We then performed C8 symmetry expansion and density Fontana et al.,Science 376, eabm9326 (2022) 10 June 2022 8o f1 1 Movie 3. Interactions of Nup358 with the Y-complexes. The movie shows five copies of Nup358 and their interactions with inner and outer Nup96 and Nup107. The model zooms in to the five Nup358 clamps and then rotates 75 along the horizontal axis. Detailed interactions are reported in Fig. 4. Fig. 5. Nup358 is predicted to contain an oligomeric coiled coil. (A) Prediction of the single helix after the S-shaped globular domain for coiled-coil propensity by using a sliding window of 14, 21, or 28 residues. 
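As a rough illustration of this plane fitting (not the authors' OriginLab workflow), the least-squares fit can be done in a few lines; the arrays, the pixel size conversion, and the tilt-angle readout below are all hypothetical stand-ins for real per-particle values.

```python
import numpy as np

PIXEL_SIZE = 1.4  # Angstrom per pixel, matching the collection settings above

def fit_defocus_plane(x_px, y_px, defocus):
    """Least-squares fit of a plane defocus = a*x + b*y + c over particle positions.

    Positions are converted from pixels to Angstrom so the fitted gradient is
    dimensionless; arctan of its magnitude then estimates the stage tilt.
    """
    x = x_px * PIXEL_SIZE
    y = y_px * PIXEL_SIZE
    A = np.column_stack([x, y, np.ones_like(x)])
    (a, b, c), *_ = np.linalg.lstsq(A, defocus, rcond=None)
    tilt_deg = np.degrees(np.arctan(np.hypot(a, b)))
    return (a, b, c), tilt_deg

# Toy data: 500 particles on a micrograph tilted by 42 degrees about the x axis.
rng = np.random.default_rng(0)
x_px = rng.uniform(0, 4096, 500)
y_px = rng.uniform(0, 4096, 500)
defocus = 20000 + np.tan(np.radians(42)) * (y_px * PIXEL_SIZE) \
          + rng.normal(0, 50, 500)  # defocus in Angstrom, with noise
_, tilt = fit_defocus_plane(x_px, y_px, defocus)
print(f"estimated stage tilt: {tilt:.1f} degrees")  # ~42
```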
A total of 204,551 particles were manually picked, local CTF-corrected, and extracted from 30,987 dose-weighted micrographs using a box size of 330 by 330 pixels at a 4× binned pixel size of 5.6 Å in RELION 3.08 (61). These particles were imported into cryoSPARC (64) to perform 2D classification, from which 124,532 good particles were selected and merged for homogeneous refinement. The published cryo-EM map of the human NPC (EMD-3103) (16) was low-pass filtered to 60 Å and used as the initial model. The homogeneous refinement with C8 symmetry resulted in a reconstruction at 22.1 Å. These reconstructed 124,532 particles were exported to RELION 3.08, extracted again with a box size of 660 by 660 pixels and a binned pixel size of 2.8 Å, and imported back into cryoSPARC to re-perform 2D classification. 101,366 particles were selected for homogeneous refinement using the 22.1 Å map low-pass filtered to 40 Å as the initial model. The homogeneous refinement with C8 symmetry resulted in a 19.8 Å map. Particle density subtraction with the aligned 101,366 particles for separate processing of the CR or the NR was done in cryoSPARC. The new local refinement in cryoSPARC using the subtracted particles and an NR or a CR mask led to NR and CR maps at 14.7 and 14.6 Å resolution, respectively. The aligned 101,366 particles for the whole NPC were also exported to RELION 3.08 and auto-refined with local search and C8 symmetry, with the 19.8 Å map low-pass filtered to 30 Å as the initial model. The resolution of the auto-refined map was 19.5 Å. We then performed C8 symmetry expansion and density subtraction using a CR protomer mask, and these subtracted particles were re-centered and their box size re-windowed to 300 by 300 pixels, all in RELION 3.1. 3D classification using a CR protomer mask, local search with 50 iterations, and K = 6 was done on these subtracted particles. A class with 333,214 particles was selected for auto-refine with a mask and local search, reaching an 11.1 Å resolution. CTF refinement accounting for beam-tilt estimation, anisotropic magnification estimation, and per-particle defocus estimation, followed by auto-refine, resulted in an improved map at 9.9 Å resolution. Additional reconstructions using a tight CR protomer mask or a tight core region mask led to maps at 8.8 and 8.4 Å resolution. These aligned 333,214 subtracted particles were also imported into cryoSPARC to perform local CTF refinement and local refinement. The final resolutions for the CR protomer and the core region were 6.9 and 6.7 Å, respectively. All reported resolutions were estimated based on the gold-standard FSC = 0.143 criterion (fig. S2). All final maps were corrected and sharpened by applying a negative B factor using automated procedures in RELION 3.1. Local resolution variations of cryo-EM maps were estimated using Phenix.

Movie 3. Interactions of Nup358 with the Y-complexes. The movie shows five copies of Nup358 and their interactions with inner and outer Nup96 and Nup107. The model zooms in to the five Nup358 clamps and then rotates 75° along the horizontal axis. Detailed interactions are reported in Fig. 4.

Fig. 5. Nup358 is predicted to contain an oligomeric coiled coil. (A) Prediction of coiled-coil propensity for the single helix after the S-shaped globular domain, using a sliding window of 14, 21, or 28 residues. (B) The five ranked models of six Nup358 coiled-region protomers predicted with AlphaFold and the associated pTM and average pLDDT scores. The top model contains a pentamer and a monomer, suggesting that the pentamer is the most favorable oligomer. (C) Ribbon diagrams of four models from (B) (ranked 1, 2, 4, and 5), colored by per-residue pLDDT scores. A light spectrum from blue to red corresponds to highest to lowest pLDDT scores, respectively. (D) Elution fractions of X. laevis Nup358 (1 to 900, top) and Nup358 (1 to 800, bottom) from a gel filtration column. The elution positions of several standards are shown. aa, amino acids. (E) The ribbon diagram of a pentamer colored by protomer, shown in side and top views.

Prediction of NPC subunit structures by AlphaFold

The AlphaFold structures in this study were mainly generated from the AlphaFold2 implementation in the ColabFold notebooks (49) running on Google Colaboratory (21, 22), using the default settings with Amber relaxation (msa_method=mmseqs2, homooligomer=1, pair_mode=unpaired, max_msa=512:1024, subsample_msa=True, num_relax=5, use_turbo=True, use_ptm=True, rank_by=pLDDT, num_models=5, num_samples=1, num_ensemble=1, max_recycles=3, tol=0, is_training=False, use_templates=False). The major difference of ColabFold from the native AlphaFold2 implementation is that ColabFold uses mmseqs2 (65), which the ColabFold authors suggest gives equivalent results (22). For complex prediction, sequences were entered in tandem and separated by a semicolon. For coiled-coil prediction, we used homooligomer=6. Due to computing memory constraints on Google Colaboratory, we sometimes split up large proteins at disordered junctions to predict each segment separately.
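For reference, the settings listed above can be collected as follows; this is purely illustrative, since the ColabFold notebook exposes these options as form fields rather than as a Python API, and the sequences shown are placeholders.

```python
# Mirror of the ColabFold form-field settings quoted above (illustrative only).
colabfold_settings = {
    "msa_method": "mmseqs2",
    "homooligomer": 1,        # set to 6 for the coiled-coil prediction
    "pair_mode": "unpaired",
    "max_msa": "512:1024",
    "subsample_msa": True,
    "num_relax": 5,           # Amber relaxation of the predicted models
    "use_turbo": True,
    "use_ptm": True,
    "rank_by": "pLDDT",       # pTM is used instead when ranking complexes
    "num_models": 5,
    "num_samples": 1,
    "num_ensemble": 1,
    "max_recycles": 3,
    "tol": 0,
    "is_training": False,
    "use_templates": False,
}

def make_complex_query(chains):
    """Complexes are entered as tandem sequences separated by a semicolon."""
    return ";".join(chains)

# Hypothetical two-chain complex query:
query = make_complex_query(["MSEQVNNA...", "MSSKLDDT..."])
print(query)
```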
AlphaFold was run once with each of the five trained models; the five models generated were checked for consistency and, unless specified otherwise, the top-ranked model was taken in each case for density fitting. AlphaFold computes a pLDDT score and a pTM score to indicate the accuracy of a prediction (23). We used pLDDT for ranking single-protein models and pTM for ranking protein-protein complexes, as recommended by ColabFold (22). A predicted aligned error (PAE) map between pairs of residues was also calculated for each prediction, which represents confidence in domain positioning. Confidence metrics (global and per-residue pLDDT, pTM, and PAE maps) of predictions made in this work can be found in tables S2 to S4. A few larger proteins or complexes (more than 1400 residues in total length) were run on a Boston Children's Hospital GPU cluster, using default AlphaFold settings. To color ribbon diagrams based on per-residue pLDDT scores (range 0 to 100, with higher being better), these scores, stored in the B-factor column of the .pdb files, were changed to 100 − pLDDT; thus, when colored as pseudo-B-factors in PyMOL (66), a light spectrum from blue to red corresponds to highest to lowest pLDDT scores.
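A minimal sketch of the B-factor transformation described above; it assumes standard PDB formatting with the B-factor in columns 61 to 66 and is not the authors' script.

```python
def invert_plddt_bfactors(pdb_in, pdb_out):
    """Rewrite the B-factor field of ATOM/HETATM records as 100 - pLDDT.

    AlphaFold stores per-residue pLDDT in the B-factor column (columns 61-66
    of the PDB format); after this transform, PyMOL's default blue-to-red
    B-factor spectrum maps blue to the highest-confidence residues.
    """
    with open(pdb_in) as fin, open(pdb_out, "w") as fout:
        for line in fin:
            if line.startswith(("ATOM", "HETATM")):
                plddt = float(line[60:66])
                line = f"{line[:60]}{100.0 - plddt:6.2f}{line[66:]}"
            fout.write(line)

# Example (hypothetical file names):
# invert_plddt_bfactors("ranked_0.pdb", "ranked_0_inverted.pdb")
```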
Model fitting and building

Prior to beginning modeling, we used AlphaFold (21, 22) to generate models of all known components of the CR using the specific X. laevis sequences. An initial model of the Y-complex (PDB ID: 6LK8) (14) was fitted into the cryo-EM density using ChimeraX (67), and used as a reference for manual positioning of AlphaFold-generated subunit or complex structures into the density, followed by executing the "fit in map" command to refine the placement. Flexible loops were removed to avoid steric clashes. After building the two Y-complexes, we began to model the other densities. Nup205 cryo-EM density was easily recognized behind the Y-complexes due to its large size and overall shape. Inner and outer Nup205 assume different positions due to the presence of the Nup214-Nup88-Nup62 complex at the inner Y-complex. Nup358 density was easily recognized in the presence of the generated AlphaFold model with a prominent S shape, which allowed the identification of five copies for each CR protomer. Nup88 density was recognized due to the β-propeller and the long α-helix. The additional density belonging to the Nup214 β-propeller was recognized upon generation of its AlphaFold model. Building of the Nup88-Nup214-Nup62 complex was assisted by predicting the hetero-trimeric coiled-coil structure in AlphaFold, from which a composite model of the Nup88-Nup214-Nup62 complex was obtained. The final model was compared with the previous atomic model (PDB ID: 6LK8) (14). The model fitting quality was estimated for each subunit by the correlation coefficient in ChimeraX (67) and in Phenix (68). A correlation coefficient ranges from −1 to 1, with 1 being a perfect fit and 0.5 to 1.0 a good fit. This modeling process using AlphaFold is reminiscent of the use of stereochemical information of amino acids and nucleic acids in the current practice of structural modeling (53), which increases model accuracy.

Fig. 6. Nup155 and other membrane-anchoring domains in the CR. (A) AlphaFold-predicted full-length Nup155. (B) Fitting of the C-terminal region of Nup155 into the cryo-EM density (contour level, 4.5 σ). (C) Interaction of Nup155 with the neighboring inner Nup160 and Nup205 (contour level, 4.5 σ). (D) β-propeller domains of Nup155, Nup133, and Nup160 all localize to the membrane envelope region of the cryo-EM density map of the NPC CR full ring at 14.6 Å resolution (contour level, 3.0 σ).

Nup358 expression and purification

X. laevis Nup358 constructs (residues 1-800 and 1-900) were cloned into pET21a with a C-terminal His tag. Expression was carried out in E. coli BL21 (DE3). Briefly, cells were grown in terrific broth media, supplemented with 100 µg/ml Ampicillin and 30 µg/ml Chloramphenicol, until OD600 reached 0.6. Cells were then transferred to 4°C for 30 min before the addition of 1 mM IPTG and incubation overnight at 18°C. Cells were pelleted at 3,000g for 20 min and resuspended in lysis buffer (50 mM Tris-HCl pH 8.0, 150 mM NaCl, 1 mM TCEP, 10 mM Imidazole) supplemented with a protease inhibitor cocktail. Lysis was performed by sonication, and the soluble fraction was separated by centrifugation at 40,000g for 1 hour at 4°C. The supernatant was incubated with Ni-NTA beads pre-equilibrated with lysis buffer, and purification was performed per the manufacturer's recommendation. Eluted fractions were further separated by gel filtration chromatography on a Superdex 200 Increase 10/300 GL column in gel filtration buffer (20 mM HEPES pH 7.4, 150 mM NaCl, 0.5 mM TCEP). Fractions were analyzed by Western blotting using an anti-His antibody (Takara 631210). The Superdex 200 Increase 10/300 GL column was previously calibrated in gel filtration buffer using a high-molecular-weight kit spanning 43 kDa to 669 kDa (Cytiva 28-4038-42).

REFERENCES AND NOTES

1. J. Fernandez-Martinez, M. P. Rout, One ring to rule them all? Structural and functional diversity in the nuclear pore complex. Trends Biochem. Sci. 46, 595-607 (2021). doi:10.1016/j.tibs.2021.01.003; pmid:33563541
2. B. Hampoelz et al., Nuclear pores assemble from nucleoporin condensates during oogenesis. Cell 179, 671-686.e17 (2019). doi:10.1016/j.cell.2019.09.022; pmid:31626769
3. J. S. Glavy, The quest for the blueprint of the nuclear pore complex. Protein J. 38, 363-376 (2019). doi:10.1007/s10930-019-09858-z; pmid:31410705
4. D. H. Lin, A. Hoelz, The structure of the nuclear pore complex (an update). Annu. Rev. Biochem. 88, 725-783 (2019). doi:10.1146/annurev-biochem-062917-011901; pmid:30883195
5. A. Sali, From integrative structural biology to cell biology. J. Biol. Chem. 296, 100743 (2021). doi:10.1016/j.jbc.2021.100743; pmid:33957123
6. P. A. Ferreira, The coming-of-age of nucleocytoplasmic transport in motor neuron disease and neurodegeneration. Cell. Mol. Life Sci. 76, 2247-2273 (2019). doi:10.1007/s00018-019-03029-0; pmid:30742233
7. S. Frey, R. P. Richter, D. Görlich, FG-rich repeats of nuclear pore proteins form a three-dimensional meshwork with hydrogel-like properties. Science 314, 815-817 (2006). doi:10.1126/science.1132516; pmid:17082456
8. E. A. Lemke, The multiple faces of disordered nucleoporins. J. Mol. Biol. 428 (10 Pt A), 2011-2024 (2016). doi:10.1016/j.jmb.2016.01.002; pmid:26791761
9. D. Devos et al., Components of coated vesicles and nuclear pore complexes share a common molecular architecture. PLOS Biol. 2, e380 (2004). doi:10.1371/journal.pbio.0020380; pmid:15523559
10. I. C. Berke, T. Boehmer, G. Blobel, T. U. Schwartz, Structural and functional analysis of Nup133 domains reveals modular building blocks of the nuclear pore complex. J. Cell Biol. 167, 591-597 (2004). doi:10.1083/jcb.200408109; pmid:15557116
11. C. W. Akey, Interactions and structure of the nuclear pore complex revealed by cryo-electron microscopy. J. Cell Biol. 109, 955-970 (1989). doi:10.1083/jcb.109.3.955; pmid:2768344
12. S. J. Kim et al., Integrative structure and functional anatomy of a nuclear pore complex. Nature 555, 475-482 (2018). doi:10.1038/nature26003; pmid:29539637
13. A. Ori et al., Cell type-specific nuclear pores: A case in point for context-dependent stoichiometry of molecular machines. Mol. Syst. Biol. 9, 648 (2013). doi:10.1038/msb.2013.4; pmid:23511206
14. G. Huang et al., Structure of the cytoplasmic ring of the Xenopus laevis nuclear pore complex by cryo-electron microscopy single particle analysis. Cell Res. 30, 520-531 (2020). doi:10.1038/s41422-020-0319-4; pmid:32376910
15. Y. Zhang et al., Molecular architecture of the luminal ring of the Xenopus laevis nuclear pore complex. Cell Res. 30, 532-540 (2020). doi:10.1038/s41422-020-0320-y; pmid:32367042
16. A. von Appen et al., In situ structural analysis of the human nuclear pore complex. Nature 526, 140-143 (2015). doi:10.1038/nature15381; pmid:26416747
17. K. H. Bui et al., Integrated structural analysis of the human nuclear pore complex scaffold. Cell 155, 1233-1243 (2013). doi:10.1016/j.cell.2013.10.055; pmid:24315095
18. M. Eibauer et al., Structure and gating of the nuclear pore complex. Nat. Commun. 6, 7532 (2015). doi:10.1038/ncomms8532; pmid:26112706
19. C. W. Akey et al., Comprehensive structure and functional adaptations of the yeast nuclear pore complex. Cell 185, 361-378.e25 (2022). doi:10.1016/j.cell.2021.12.015; pmid:34982960
20. L. Tai et al., 8 Å structure of the outer rings of the Xenopus laevis nuclear pore complex obtained by cryo-EM and AI. Protein Cell (2022). doi:10.1007/s13238-021-00895-y; pmid:35015240
21. J. Jumper et al., Highly accurate protein structure prediction with AlphaFold. Nature 596, 583-589 (2021). doi:10.1038/s41586-021-03819-2; pmid:34265844
22. M. Mirdita, S. Ovchinnikov, M. Steinegger, ColabFold - Making protein folding accessible to all. bioRxiv [Preprint] (2021). https://doi.org/10.1101/2021.08.15.456425
23. K. E. Knockenhauer, T. U. Schwartz, The nuclear pore complex as a flexible and dynamic gate. Cell 164, 1162-1171 (2016). doi:10.1016/j.cell.2016.01.034; pmid:26967283
24. S. G. Brohawn, N. C. Leksa, E. D. Spear, K. R. Rajashankar, T. U. Schwartz, Structural evidence for common ancestry of the nuclear pore complex and vesicle coats. Science 322, 1369-1373 (2008). doi:10.1126/science.1165886; pmid:18974315
25. E. W. Debler et al., A fence-like coat for the nuclear pore membrane. Mol. Cell 32, 815-826 (2008). doi:10.1016/j.molcel.2008.12.001; pmid:19111661
26. K. C. Hsia, P. Stavropoulos, G. Blobel, A. Hoelz, Architecture of a coat for the nuclear pore membrane. Cell 131, 1313-1326 (2007). doi:10.1016/j.cell.2007.11.038; pmid:18160040
27. S. Bilokapic, T. U. Schwartz, Molecular basis for Nup37 and ELY5/ELYS recruitment to the nuclear pore complex. Proc. Natl. Acad. Sci. U.S.A. 109, 15241-15246 (2012). doi:10.1073/pnas.1205151109; pmid:22955883
28. D. H. Lin et al., Architecture of the symmetric core of the nuclear pore. Science 352, aaf1015 (2016). doi:10.1126/science.aaf1015; pmid:27081075
29. F. Madeira et al., The EMBL-EBI search and sequence analysis tools APIs in 2019. Nucleic Acids Res. 47 (W1), W636-W641 (2019). doi:10.1093/nar/gkz268; pmid:30976793
30. S. M. Bailer, C. Balduf, E. Hurt, The Nsp1p carboxy-terminal domain is organized into functionally distinct coiled-coil regions required for assembly of nucleoporin subcomplexes and nucleocytoplasmic transport. Mol. Cell. Biol. 21, 7944-7955 (2001). doi:10.1128/MCB.21.23.7944-7955.2001; pmid:11689687
31. P. Grandi et al., A novel nuclear pore protein Nup82p which specifically binds to a fraction of Nsp1p. J. Cell Biol. 130, 1263-1273 (1995). doi:10.1083/jcb.130.6.1263; pmid:7559750
32. N. Belgareh et al., Functional characterization of a Nup159p-containing nuclear pore subcomplex. Mol. Biol. Cell 9, 3475-3492 (1998). doi:10.1091/mbc.9.12.3475; pmid:9843582
33. T. Stuwe et al., Architecture of the fungal nuclear pore inner ring complex. Science 350, 56-64 (2015). doi:10.1126/science.aac9176; pmid:26316600
34. H. Chug, S. Trakhanov, B. B. Hülsmann, T. Pleiner, D. Görlich, Crystal structure of the metazoan Nup62-Nup58-Nup54 nucleoporin complex. Science 350, 106-110 (2015). doi:10.1126/science.aac7420; pmid:26292704
35. J. Wu, M. J. Matunis, D. Kraemer, G. Blobel, E. Coutavas, Nup358, a cytoplasmically exposed nucleoporin with peptide repeats, Ran-GTP binding sites, zinc fingers, a cyclophilin A homologous domain, and a leucine-rich region. J. Biol. Chem. 270, 14209-14213 (1995). doi:10.1074/jbc.270.23.14209; pmid:7775481
36. S. A. Kassube et al., Crystal structure of the N-terminal domain of Nup358/RanBP2. J. Mol. Biol. 423, 752-765 (2012). doi:10.1016/j.jmb.2012.08.026; pmid:22959972
37. E. Krissinel, K. Henrick, Inference of macromolecular assemblies from crystalline state. J. Mol. Biol. 372, 774-797 (2007). doi:10.1016/j.jmb.2007.05.022; pmid:17681537
38. A. Lupas, M. Van Dyke, J. Stock, Predicting coiled coils from protein sequences. Science 252, 1162-1164 (1991). doi:10.1126/science.252.5009.1162; pmid:2031185
39. C. J. Bley et al., Architecture of the cytoplasmic face of the nuclear pore. bioRxiv [Preprint] (2021). https://doi.org/10.1101/2021.10.26.465790
40. P. Deshmukh, A. Singh, D. Khuperkar, J. Joseph, Acute necrotizing encephalopathy-linked mutations in Nup358 impair interaction of Nup358 with TNRC6/GW182 and miRNA function. Biochem. Biophys. Res. Commun. 559, 230-237 (2021). doi:10.1016/j.bbrc.2021.04.027; pmid:33962210
41. A. Shibata, M. Kasai, A. Hoshino, T. Tanaka, M. Mizuguchi, RANBP2 mutation causing autosomal dominant acute necrotizing encephalopathy attenuates its interaction with COX11. Neurosci. Lett. 763, 136173 (2021). doi:10.1016/j.neulet.2021.136173; pmid:34400285
42. S. J. Kim et al., Integrative structure-function mapping of the nucleoporin Nup133 suggests a conserved mechanism for membrane anchoring of the nuclear pore complex. Mol. Cell. Proteomics 13, 2911-2926 (2014). doi:10.1074/mcp.M114.040915; pmid:25139911
43. G. Drin et al., A general amphipathic alpha-helical motif for sensing membrane curvature. Nat. Struct. Mol. Biol. 14, 138-146 (2007). doi:10.1038/nsmb1194; pmid:17220896
44. S. A. Nordeen, D. L. Turman, T. U. Schwartz, Yeast Nup84-Nup133 complex structure details flexibility and reveals conservation of the membrane anchoring ALPS motif. Nat. Commun. 11, 6060 (2020). doi:10.1038/s41467-020-19885-5; pmid:33247142
45. E. Onischenko et al., Natively unfolded FG repeats stabilize the structure of the nuclear pore complex. Cell 171, 904-917.e19 (2017). doi:10.1016/j.cell.2017.09.033; pmid:29033133
46. R. Bernad, H. van der Velde, M. Fornerod, H. Pickersgill, Nup358/RanBP2 attaches to the nuclear pore complex via association with Nup88 and Nup214/CAN and plays a supporting role in CRM1-mediated nuclear protein export. Mol. Cell. Biol. 24, 2373-2384 (2004). doi:10.1128/MCB.24.6.2373-2384.2004; pmid:14993277
47. W. Kühlbrandt, Biochemistry. The resolution revolution. Science 343, 1443-1444 (2014). doi:10.1126/science.1251652; pmid:24675944
48. W. Chiu, M. F. Schmid, G. D. Pintilie, C. L. Lawson, Evolution of standardization and dissemination of cryo-EM structures and data jointly by the community, PDB, and EMDB. J. Biol. Chem. 296, 100560 (2021). doi:10.1016/j.jbc.2021.100560; pmid:33744287
49. K. Tunyasuvunakool et al., Highly accurate protein structure prediction for the human proteome. Nature 596, 590-596 (2021). doi:10.1038/s41586-021-03828-1; pmid:34293799
50. S. Mosalaganti et al., Artificial intelligence reveals nuclear pore complexity. bioRxiv [Preprint] (2021). https://doi.org/10.1101/2021.10.26.465776
51. G. Huang et al., Cryo-EM structure of the inner ring from Xenopus laevis nuclear pore complex. bioRxiv [Preprint] (2021). https://doi.org/10.1101/2021.11.13.468242
52. L. Tai et al., 8 Å structure of the nuclear ring of the Xenopus laevis nuclear pore complex solved by cryo-EM and AI. bioRxiv [Preprint] (2021). https://doi.org/10.1101/2021.11.10.468011
53. W. A. Hendrickson, Stereochemically restrained refinement of macromolecular structures. Methods Enzymol. 115, 252-270 (1985). doi:10.1016/0076-6879(85)15021-4; pmid:3841182
54. R. Evans et al., Protein complex prediction with AlphaFold-Multimer. bioRxiv [Preprint] (2021). https://doi.org/10.1101/2021.10.04.463034
55. I. R. Humphreys et al., Computed structures of core eukaryotic protein complexes. Science 374, eabm4805 (2021). doi:10.1126/science.abm4805; pmid:34762488
56. C. W. Akey, M. Radermacher, Architecture of the Xenopus nuclear pore complex revealed by three-dimensional cryo-electron microscopy. J. Cell Biol. 122, 1-19 (1993). doi:10.1083/jcb.122.1.1; pmid:8314837
57. C. J. Russo, L. A. Passmore, Electron microscopy: Ultrastable gold substrates for electron cryomicroscopy. Science 346, 1377-1380 (2014). doi:10.1126/science.1259530; pmid:25504723
58. K. Naydenova, C. J. Russo, Measuring the effects of particle orientation to improve the efficiency of electron cryomicroscopy. Nat. Commun. 8, 629 (2017). doi:10.1038/s41467-017-00782-3; pmid:28931821
59. A. Morin et al., Collaboration gets the most out of software. eLife 2, e01456 (2013). doi:10.7554/eLife.01456; pmid:24040512
60. S. Q. Zheng et al., MotionCor2: Anisotropic correction of beam-induced motion for improved cryo-electron microscopy. Nat. Methods 14, 331-332 (2017). doi:10.1038/nmeth.4193; pmid:28250466
61. J. Zivanov et al., New tools for automated high-resolution cryo-EM structure determination in RELION-3. eLife 7, e42166 (2018). doi:10.7554/eLife.42166; pmid:30412051
62. A. Rohou, N. Grigorieff, CTFFIND4: Fast and accurate defocus estimation from electron micrographs. J. Struct. Biol. 192, 216-221 (2015). doi:10.1016/j.jsb.2015.08.008; pmid:26278980
63. K. Zhang, Gctf: Real-time CTF determination and correction. J. Struct. Biol. 193, 1-12 (2016). doi:10.1016/j.jsb.2015.11.003; pmid:26592709
64. A. Punjani, J. L. Rubinstein, D. J. Fleet, M. A. Brubaker, cryoSPARC: Algorithms for rapid unsupervised cryo-EM structure determination. Nat. Methods 14, 290-296 (2017). doi:10.1038/nmeth.4169; pmid:28165473
65. M. Steinegger, J. Söding, MMseqs2 enables sensitive protein sequence searching for the analysis of massive data sets. Nat. Biotechnol. 35, 1026-1028 (2017). doi:10.1038/nbt.3988; pmid:29035372
66. W. L. DeLano, The PyMOL Molecular Graphics System (DeLano Scientific, 2002).
67. T. D. Goddard et al., UCSF ChimeraX: Meeting modern challenges in visualization and analysis. Protein Sci. 27, 14-25 (2018). doi:10.1002/pro.3235; pmid:28710774
68. P. D. Adams et al., PHENIX: A comprehensive Python-based system for macromolecular structure solution. Acta Crystallogr. D Biol. Crystallogr. 66, 213-221 (2010). doi:10.1107/S0907444909052925; pmid:20124702

ACKNOWLEDGMENTS

We thank W. Chiu for help with the design of data collection, M. Kirschner for initially offering to use oocytes from his laboratory, W. L. Wang for giving us graphene-coated UltrAuFoil holey gold films on gold support, A. N. Hayati and P. Sliz for running some AlphaFold predictions on Boston Children's Hospital's cluster, and H. Sharif for discussions on tilt data processing. The authors acknowledge Boston Children's Hospital's High-Performance Computing Resources, BCH HPC Clusters Enkefalos 2 (E2) and Massachusetts Green High-Performance Computing (MGHPCC), which were made available for conducting the research reported in this publication. Funding: All cryo-EM data were collected at the Stanford-SLAC Cryo-EM Center (S2C2), supported by the NIH Common Fund Transformative High Resolution Cryo-Electron Microscopy program (U24 GM129541). This work was also supported by the US Department of Energy, Office of Basic Energy Sciences, Nanomachine Program, under contract DE-AC02-05CH11231 (to C.B.); National Institutes of Health (NIH) grant R01GM032543 (to C.B.); and a postdoctoral fellowship from the Cancer Research Institute (to P.F.). Author contributions: Conceptualization: T.-M.F. and H.W. Cryo-EM sample preparation and optimization: P.F. and Y.D. Analysis of beam-induced motion and tilt-angle associated CTF: Y.D. and A.B.T. Data collection: P.F., C.W.H., Y.D., and X.P. Manual particle picking: P.F., Y.D., and X.P. Data processing: X.P., P.F., and Y.D. AlphaFold model generation: A.B.T., H.W., and P.F. Model fitting into density: P.F. Figure design and creation: Y.D., P.F., and X.P. Recombinant protein expression and purification: P.F. Participated in discussions: L.W. Supervision: H.W. and C.B. Writing, original draft: H.W., P.F., Y.D., X.P., and A.B.T. Writing, review and editing: H.W., P.F., Y.D., X.P., A.B.T., C.W.H., L.W., T.-M.F., and C.B. Competing interests: The authors declare no competing interests. Data and materials availability: All data and materials reported in the main text and supplementary materials are available upon reasonable request. The electron density maps have been deposited in the Electron Microscopy Data Bank (EMDB) with accession numbers EMD-25817 and EMD-25818 for a CR protomer and a full CR ring built from the CR protomer map, respectively, and the atomic coordinates have been deposited in the Protein Data Bank with the accession number 7TDZ. License information: Copyright © 2022 the authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original US government works. https://www.science.org/about/science-licenses-journal-article-reuse

SUPPLEMENTARY MATERIALS

science.org/doi/10.1126/science.abm9326
Figs. S1 to S8
Tables S1 to S6
MDAR Reproducibility Checklist
View/request a protocol for this paper from Bio-protocol.
Submitted 22 October 2021; accepted 3 March 2022. 10.1126/science.abm9326
Journal of Machine Learning Research 24 (2023) 1-43. Submitted 1/23; Revised 7/23; Published 7/23.

Atlas: Few-shot Learning with Retrieval Augmented Language Models

Gautier Izacard,1,2,*,† [email protected]
Patrick Lewis,1,*,† [email protected]
Maria Lomeli,1 [email protected]
Lucas Hosseini,1,† [email protected]
Fabio Petroni,1,† [email protected]
Timo Schick,1,† [email protected]
Jane Dwivedi-Yu,1 [email protected]
Armand Joulin,1,† [email protected]
Sebastian Riedel,1,3,† [email protected]
Edouard Grave1,† [email protected]

1 Meta AI, 2 ENS, PSL University & Inria, 3 University College London

Editor: Ivan Titov

* Equal contribution. † Work done while at Meta AI.

©2023 Gautier Izacard, Patrick Lewis, Maria Lomeli, Lucas Hosseini, Fabio Petroni, Timo Schick, Jane Dwivedi-Yu, Armand Joulin, Sebastian Riedel, Edouard Grave. License: CC-BY 4.0, see https://creativecommons.org/licenses/by/4.0/. Attribution requirements are provided at http://jmlr.org/papers/v24/23-0037.html.

Abstract

Large language models have shown impressive few-shot results on a wide range of tasks. However, when knowledge is key for such results, as is the case for tasks such as question answering and fact checking, massive parameter counts to store knowledge seem to be needed. Retrieval-augmented models are known to excel at knowledge-intensive tasks without the need for as many parameters, but it is unclear whether they work in few-shot settings. In this work we present Atlas, a carefully designed and pre-trained retrieval-augmented language model able to learn knowledge-intensive tasks with very few training examples. We perform evaluations on a wide range of tasks, including MMLU, KILT and Natural Questions, and study the impact of the content of the document index, showing that it can easily be updated. Notably, Atlas reaches over 42% accuracy on Natural Questions using only 64 examples, outperforming a 540B parameter model by 3% despite having 50x fewer parameters.

Keywords: retrieval augmented language models, information retrieval, language models

1. Introduction

Large language models (LLMs) are impressive few-shot learners (Brown et al., 2020; Rae et al., 2021; Hoffmann et al., 2022; Chowdhery et al., 2022). They are able to learn new tasks with very few examples or even from instructions alone. For this generalisation ability to emerge, the key ingredients are scaling both the parameter count of the model and the size of the training data. Large language models owe this improvement both to a larger computational budget, enabling more complex reasoning, and to the ability to memorize more
information related to downstream tasks from the larger training data. While it is intuitive to assume that increased reasoning abilities lead to better generalisation, and hence few-shot learning, the same is not true for in-parameter memorisation. Specifically, it is unclear to what extent effective few-shot learning requires vast knowledge in the parameters of the model.

In this paper, we investigate whether few-shot learning requires models to store a large amount of information in their parameters, and if memorisation can be decoupled from generalisation. To do so, we leverage the fact that memory can be outsourced and replaced by an external non-parametric knowledge source by employing a retrieval-augmented architecture. These models employ a non-parametric memory, for example a neural retriever over a large, external, potentially non-static knowledge source, to enhance a parametric language model. In addition to their memorisation abilities, such architectures are attractive due to a number of other established advantages in terms of adaptability, interpretability and efficiency (Guu et al., 2020; Lewis et al., 2020; Yogatama et al., 2021; Borgeaud et al., 2021, inter alia). However, retrieval-augmented models have yet to demonstrate compelling few-shot learning capabilities. In this work we address this gap, and present Atlas, a retrieval-augmented language model capable of strong few-shot learning, despite having lower parameter counts than other powerful recent few-shot learners.

Atlas retrieves relevant documents based on the current context by using a general-purpose dense retriever with a dual-encoder architecture, based on the Contriever (Izacard et al., 2022). The retrieved documents are processed, along with the current context, by a sequence-to-sequence model using the Fusion-in-Decoder architecture (Izacard and Grave, 2021a) that generates the corresponding output. We study the impact of different techniques to train Atlas on its few-shot performance on a range of downstream tasks, including question answering and fact checking. We find that jointly pre-training the components is crucial for few-shot performance, and we carefully evaluate a number of existing and novel pre-training tasks and schemes for this purpose. Atlas achieves strong downstream performance in both few-shot and resource-rich settings. For example, with only 11B parameters, Atlas achieves an accuracy of 42.4% on Natural Questions using 64 training examples (45.1% using a Wikipedia-only index), outperforming PaLM (Chowdhery et al., 2022), a 540B parameter model, by almost 3 points, and 64.0% in a full data set setting with a Wikipedia index, establishing a new state of the art by 8.1 points.

Figure 1: We introduce Atlas, a retrieval-augmented language model that exhibits strong few-shot performance on knowledge tasks, and uses retrieval during both pre-training and fine-tuning.

In summary we make the following contributions:

- A thorough study on how to design and train retrieval-augmented language models, with a focus on downstream few-shot learning and sample efficiency.
- The findings of this study lead to a retrieval-augmented language model, called Atlas, that exhibits few-shot abilities that emerge at lower scale than standard LLMs.
- An exploration of fine-tuning strategies to efficiently adapt both the retriever and the language model to the task at hand.
- Thorough downstream experiments in few-shot settings, demonstrating state-of-the-art results on few-shot Natural Questions (+2.8%), TriviaQA (+3.3%), FEVER (+5.1%), and results on par with models with 15x more parameters on MMLU.
- Experiments investigating full data set fine-tuning, setting new state-of-the-art results in Natural Questions (+8.1%), TriviaQA (+9.3%) and 5 KILT tasks.
- Experiments demonstrating the updateability and interpretability characteristics of Atlas.
- Experiments demonstrating that a compressed index using product quantisation achieves comparable performance to an uncompressed index while resulting in a 5x memory reduction.

Our code, pre-trained Atlas checkpoints, and various supporting data are available at https://github.com/facebookresearch/atlas

2. Method

Our approach follows the text-to-text framework (Raffel et al., 2019). This means that all the tasks are framed as follows: the system gets a text query as input, and generates a text output. For example, in the case of question answering, the query corresponds to the question and the model needs to generate the answer. In the case of classification tasks, the query corresponds to the textual input, and the model generates the lexicalized class label, that is, the word corresponding to the label. We give more examples of downstream tasks, from the KILT benchmark, in Figure 2. As many natural language processing tasks require knowledge, our goal is to enhance standard text-to-text models with retrieval, which, as we hypothesise in the introduction, may be crucial to endow models with few-shot capabilities.

Figure 2: Examples of query and output pairs for different tasks from KILT.

  Task               | Query                                                            | Output
  Fact Checking      | Bermuda Triangle is in the western part of the Himalayas.        | False
  Question Answering | who is playing the halftime show at super bowl 2016              | Coldplay
  Entity Linking     | NTFS-3G is an open source <E>cross-platform</E> implementation of the Microsoft Windows NTFS file system with read-write support. | Cross-platform software

2.1 Architecture

Our model is based on two sub-models: the retriever and the language model. When performing a task, from question answering to generating Wikipedia articles, our model starts by retrieving the top-k relevant documents from a large corpus of text with the retriever. Then, these documents are fed to the language model, along with the query, which in turn generates the output. Both the retriever and the language model are based on pre-trained transformer networks, which we describe in more detail below.

2.1.1 Retriever

Our retriever module is based on the Contriever (Izacard et al., 2022), an information retrieval technique based on continuous dense embeddings. The Contriever uses a dual-encoder architecture, where the query and documents are embedded independently by a transformer encoder (Huang et al., 2013; Karpukhin et al., 2020). Average pooling is applied over the outputs of the last layer to obtain one vector representation per query or document. A similarity score between the query and each document is then obtained by computing the dot product between their corresponding embeddings. The Contriever model is pre-trained using the MoCo contrastive loss (He et al., 2020), and uses unsupervised data only. As shown in the following section, an advantage of dense retrievers is that both query and document encoders can be trained without document annotation, using standard techniques such as gradient descent and distillation.
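As an illustration of the dual-encoder scoring just described, the sketch below mean-pools token embeddings and ranks documents by dot product; the random arrays stand in for transformer outputs and are not the Contriever itself.

```python
import numpy as np

def mean_pool(token_embeddings, attention_mask):
    """Average the last-layer token embeddings, ignoring padding positions."""
    mask = attention_mask[..., None]            # (batch, seq, 1)
    summed = (token_embeddings * mask).sum(axis=1)
    counts = mask.sum(axis=1).clip(min=1)
    return summed / counts                      # (batch, dim)

def retrieval_scores(query_emb, doc_embs):
    """Relevance s(q, d) is the dot product of the pooled embeddings."""
    return doc_embs @ query_emb

# Stand-ins for transformer outputs: 3 documents, sequence length 5, dim 8.
rng = np.random.default_rng(0)
doc_tokens = rng.normal(size=(3, 5, 8))
doc_mask = np.array([[1, 1, 1, 0, 0], [1, 1, 1, 1, 1], [1, 1, 0, 0, 0]])
query_tokens = rng.normal(size=(1, 5, 8))
query_mask = np.ones((1, 5))

docs = mean_pool(doc_tokens, doc_mask)
query = mean_pool(query_tokens, query_mask)[0]
print(retrieval_scores(query, docs))  # one relevance score per document
```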
2.1.2 Language Model

For the language model, we rely on the T5 sequence-to-sequence architecture (Raffel et al., 2019). We rely on the Fusion-in-Decoder modification of sequence-to-sequence models, and process each document independently in the encoder (Izacard and Grave, 2021a). We then concatenate the outputs of the encoder corresponding to the different documents, and perform cross-attention over this single sequence in the decoder. Following Izacard and Grave (2021a), we concatenate the query to each document in the encoder. Another way to process the retrieved documents in the language model would be to concatenate the query and all the documents, and to use this long sequence as input of the model. Unfortunately, this approach does not scale with the number of documents, since the self-attention in the encoder results in a quadratic complexity with respect to the number of documents.
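To make the encoder-side processing concrete, here is a minimal sketch of how Fusion-in-Decoder inputs can be assembled; it is illustrative only, and the "question:"/"context:" prefixes are an assumption rather than the exact templates used by Atlas.

```python
def fid_encoder_inputs(query, documents):
    """Build the K encoder inputs of Fusion-in-Decoder: the query is
    concatenated to each retrieved document, and each pair is encoded
    independently, keeping encoder self-attention linear in K."""
    return [f"question: {query} context: {doc}" for doc in documents]

# The K encoder outputs are then concatenated along the sequence axis into a
# single representation over which the decoder cross-attends.
for text in fid_encoder_inputs(
    "Where is the Bermuda Triangle?",
    ["The Bermuda Triangle is a region of the western North Atlantic Ocean.",
     "An urban legend is a genre of folklore."],
):
    print(text)
```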
2.2 Training Objectives for the Retriever

In this section, we discuss four different loss functions to train the retriever jointly with the language model. We consider loss functions that leverage the language model to provide a supervisory signal to train the retriever. In other words, if the language model finds a document useful when generating the output, the retriever objective should encourage the retriever to rank said document higher. This allows us to train models using only query and output pairs from the task of interest, without relying on document annotations. For example, in the case of fact checking, a model only requires pairs of claims and corresponding verdicts but no documents containing the evidence to back up the verdict. In practice, we can apply this approach on any task, including self-supervised pre-training. As shown in the experimental section, pre-training is critical for obtaining models that exhibit few-shot learning abilities.

2.2.1 Attention Distillation (ADist)

The first loss that we consider is based on the attention scores of the language model, and is heavily inspired by Izacard and Grave (2021b). The main idea is that the cross-attention scores between the input documents and the generation can be used as a proxy of the importance of each input document when generating the output. In particular, Izacard and Grave (2021b) showed that these scores can be aggregated across attention heads, layers and tokens for a given document to obtain a single score for each document. Then, these scores are distilled into the retriever by minimizing the KL-divergence with the probability distribution $p_{\mathrm{retr}}$ over the top-K documents $\{d_k\}_{1,\ldots,K}$ obtained from the retriever:

$$p_{\mathrm{retr}}(d \mid q) = \frac{\exp(s(d, q)/\theta)}{\sum_{k=1}^{K} \exp(s(d_k, q)/\theta)}, \qquad (1)$$

where $s$ is the dot-product between the embedding vectors of the query and documents and $\theta$ is a temperature hyper-parameter. In the original paper, to obtain a relevance score per document, it was proposed to use the pre-softmax scores from the decoder cross-attentions, and average across heads, layers and tokens. Here, we use the pre-softmax score multiplied by the norm of the values, an alternative which gives slightly stronger results.

First, let us briefly review the Fusion-in-Decoder model (FiD, Izacard and Grave, 2021a). The underlying architecture is a sequence-to-sequence model, composed of an encoder and a decoder. The encoder independently processes $K$ different text inputs $(\mathrm{input}(d_k))_{1 \le k \le K}$, where $\mathrm{input}(d)$ is the concatenation of the input query and the retrieved document $d$. The output representations of the encoder are then concatenated to form a global representation $X$ of dimension $\left(\sum_k \ell_k\right) \times d$, where $\ell_k$ is the length of $\mathrm{input}(d_k)$ and $d$ is the dimension of the hidden representations of the model. Then, the decoder processes this representation as a regular autoregressive model, alternating self-attention, cross-attention and feed-forward modules. Only the cross-attention module explicitly takes as input the global output representation $X$ of the encoder. If $H \in \mathbb{R}^{\ell \times d}$ denotes the output of the previous self-attention layer of the decoder, the cross-attention operation consists in the following operations. First, queries $Q$, keys $K$ and values $V$ are computed by applying linear transformations:

$$Q = W_Q H, \quad K = W_K X, \quad V = W_V X.$$

Then a similarity score between the query at position $i$, $Q_i$, and the key at position $j$, $K_j$, is obtained by computing the dot-product between these two elements, and normalized over the dimension:

$$\alpha_{i,j} = Q_i^\top K_j, \qquad \tilde{\alpha}_{i,j} = \frac{\exp(\alpha_{i,j})}{\sum_m \exp(\alpha_{i,m})}.$$

A new representation is obtained as a sum of the values, weighted by the attention probabilities, before going through a final linear transformation $W_O$:

$$O_i = W_O \sum_j \tilde{\alpha}_{i,j} V_j.$$

This describes the single-head attention case; in the case of multi-head attention with $n_h$ heads, the output of the cross-attention layer can be written as:

$$O_i = \sum_{h=1}^{n_h} W_{O,h} \sum_j \tilde{\alpha}_{h,i,j} V_{j,h}.$$

For the layer $l$ and the head $h$, we use the quantity $\tilde{\alpha}_{l,h,i,j} \, \|V_{l,h,j}\|_2$ as the measure of relevance for the input token at position $j$ relative to the generated token at position $i$. We average these scores over all attention heads, layers, tokens of the generation and tokens of the input segment $\mathrm{input}(d)$ to obtain an attention score $\mathrm{score}_{\mathrm{attn}}(d)$ for each document $d$:

$$\mathrm{score}_{\mathrm{attn}}(d) = \mathop{\mathrm{mean}}_{h,\,l,\,i,\;j \in \mathrm{input}(d)} \; \tilde{\alpha}_{l,h,i,j}\,\|V_{l,h,j}\|_2.$$

We apply the Softmax operator over the resulting scores to obtain a distribution $p_{\mathrm{attn}}(d)$ over the top-K retrieved documents:

$$p_{\mathrm{attn}}(d) = \frac{\exp(\mathrm{score}_{\mathrm{attn}}(d))}{\sum_k \exp(\mathrm{score}_{\mathrm{attn}}(d_k))}.$$

We then minimize the KL-divergence between $p_{\mathrm{attn}}$ and the distribution $p_{\mathrm{retr}}$ from the retriever defined in Equation 1:

$$\mathrm{KL}(p_{\mathrm{attn}} \,\|\, p_{\mathrm{retr}}) = \sum_{k=1}^{K} p_{\mathrm{attn}}(d_k) \log\left(\frac{p_{\mathrm{attn}}(d_k)}{p_{\mathrm{retr}}(d_k)}\right).$$

Here, this loss is only used to optimize the parameters of the retriever, and not the language model. When using recent deep learning frameworks, this is achieved by applying a StopGradient operator on $p_{\mathrm{attn}}$.
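The aggregation above can be summarized in a few lines; the following sketch uses random arrays in place of real cross-attention tensors, and is only meant to show the bookkeeping (weight attention by value norms, average over layers, heads and output tokens, pool per document span, then softmax).

```python
import numpy as np

def attention_relevance(attn, values, doc_spans):
    """Aggregate alpha[l,h,i,j] * ||V[l,h,j]|| into one score per document.

    attn:      (layers, heads, out_len, in_len) cross-attention probabilities.
    values:    (layers, heads, in_len, dim) value vectors.
    doc_spans: list of (start, end) token spans of each document in the
               concatenated encoder output.
    """
    weighted = attn * np.linalg.norm(values, axis=-1)[:, :, None, :]
    per_token = weighted.mean(axis=(0, 1, 2))   # mean over layers, heads, outputs
    scores = np.array([per_token[s:e].mean() for s, e in doc_spans])
    exp = np.exp(scores - scores.max())          # softmax over documents
    return exp / exp.sum()

def kl_divergence(p, q):
    return float((p * np.log(p / q)).sum())

# Toy tensors: 4 layers, 8 heads, 6 output tokens, 20 input tokens (2 docs).
rng = np.random.default_rng(0)
attn = rng.random((4, 8, 6, 20))
values = rng.normal(size=(4, 8, 20, 16))
p_attn = attention_relevance(attn, values, [(0, 10), (10, 20)])
p_retr = np.array([0.5, 0.5])                    # retriever distribution (Eq. 1)
print(p_attn, kl_divergence(p_attn, p_retr))     # target and retriever loss
```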
2.2.2 End-to-end Training of Multi-Document Reader and Retriever (EMDR²)

Next, we consider the method introduced by Sachan et al. (2021), which is inspired by the expectation-maximization algorithm, treating retrieved documents as latent variables. Given a query $q$, the corresponding output $a$ and the set $\mathcal{D}_K$ of top-K documents retrieved with the current retriever, the EMDR² loss to train the retriever is

$$\log\left[\sum_{k=1}^{K} p_{\mathrm{lm}}(a \mid q, d_k)\, p_{\mathrm{retr}}(d_k \mid q)\right],$$

where $p_{\mathrm{retr}}$ is again the probability over the top-K documents obtained with the retriever, as defined by Equation 1. Again, only the parameters of the retriever are updated, by applying a StopGradient operator around $p_{\mathrm{lm}}$. One should note that the probability distribution over documents that minimizes this loss function is an indicator of the document corresponding to the highest probability of the output according to the language model. Finally, in practice, the EMDR² loss function is applied at the token level, and not at the sequence level.

2.2.3 Likelihood Distillation (LDist)

Third, we discuss a simpler loss function, inspired by the objectives of the attention distillation and EMDR² methods (Izacard and Grave, 2021b; Sachan et al., 2021). More precisely, we want to train the retriever to predict how much each document would improve the ability of the language model to predict the output, given the query. To this end, we minimize the KL-divergence between the document distribution of the retriever (Eqn. 1) and the document posterior distribution according to the language model, conditioned on a single document and using a uniform prior:

$$p_{\mathrm{LDist}}(d_k) \propto p_{\mathrm{LM}}(a \mid d_k, q).$$

Using the Softmax operator, we have that

$$p_{\mathrm{LDist}}(d_k) = \frac{\exp(\log p_{\mathrm{LM}}(a \mid d_k, q))}{\sum_{i=1}^{K} \exp(\log p_{\mathrm{LM}}(a \mid d_i, q))}.$$

2.2.4 Leave-one-out Likelihood Distillation (LOOL)

Finally, we propose an objective based on how much worse the prediction of the language model gets when removing one of the top-k retrieved documents. To do so, we compute the log probability of the output for each subset of $K-1$ documents, and use the negative value as the relevance score for each document. Following the previous loss function, we use the Softmax operator to obtain a probability distribution over documents:

$$p_{\mathrm{lool}}(d_k) = \frac{\exp\!\left(-\log p_{\mathrm{LM}}(a \mid \mathcal{D}_K \setminus \{d_k\}, q)\right)}{\sum_{i=1}^{K} \exp\!\left(-\log p_{\mathrm{LM}}(a \mid \mathcal{D}_K \setminus \{d_i\}, q)\right)}.$$

As before, we then minimize the KL-divergence between this distribution and the one obtained with the retriever. This loss is more expensive to compute than LDist and EMDR², but, like ADist, employs the language model more closely to the way it is trained: the LM is trained to be conditioned on a set of $K$ documents. For LOOL, the language model is conditioned on $K-1$ documents, rather than a single document as in EMDR² and LDist. For all losses, we can also use a temperature hyper-parameter when computing the target or retriever distributions to control the peakiness of the distributions, which might be important for some tasks or losses. Indeed, for LDist and LOOL, the likelihood of the output may not vary much when conditioning on different documents, especially in the case of long outputs.
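A minimal sketch of the two distillation targets just defined, with hypothetical per-document log-likelihoods standing in for real language model scores:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def ldist_target(single_doc_loglik):
    """LDist: documents scored by log p_LM(a | d_k, q), one document at a time."""
    return softmax(np.asarray(single_doc_loglik))

def lool_target(leave_one_out_loglik):
    """LOOL: documents scored by how much the output likelihood degrades when
    they are removed, i.e. by -log p_LM(a | D_K \\ {d_k}, q)."""
    return softmax(-np.asarray(leave_one_out_loglik))

# Toy log-likelihoods for K = 3 retrieved documents (made-up values):
print(ldist_target([-2.1, -5.0, -4.7]))   # document 1 is the most useful
print(lool_target([-6.3, -2.4, -2.5]))    # removing document 1 hurts the most
```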
2.3 Pretext Tasks

In this section, we describe pretext tasks that can be used to jointly pre-train the retriever and the language model using only unsupervised data. A sketch of how the first two tasks can be instantiated follows this section.

2.3.1 Prefix Language Modeling

First, we consider a standard language modeling task as a potential pre-training objective. To cast language modeling in the text-to-text framework, we consider a chunk of $N$ words, and split this chunk into two sub-sequences of equal length $N/2$. Then, the first sub-sequence is used as the query, and the second corresponds to the output. We thus retrieve relevant documents by using the first sub-sequence of $N/2$ tokens, to generate the output.

2.3.2 Masked Language Modeling

Second, we consider masked language modeling, as formulated by Raffel et al. (2019). Again, starting from a chunk of $N$ words, we sample $k$ spans of average length 3 tokens, leading to a masking ratio of 15%. We then replace each span by a different special token. The model is then trained to generate the masked spans, each span beginning with the special sentinel mask token that was inserted in the input sequence. We retrieve documents using the masked query, but replace the special mask tokens with a mask token supported by the retriever vocabulary.

2.3.3 Title to Section Generation

Finally, we consider a more abstractive generation task: generating sections from Wikipedia articles, given the article and section title. Here, the query corresponds to the title of the article, together with the title of the section, and the output corresponds to the text of the section. We exclude the sections "See also", "References", "Further reading" and "External links".
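Here is the toy sketch referenced above, showing how the prefix language modeling and masked language modeling examples can be constructed from a chunk of tokens; the sentinel format follows T5's <extra_id_i> convention, and the span-sampling details are simplified relative to the actual pre-training pipeline.

```python
import random

def prefix_lm_example(tokens):
    """Prefix LM: the first half of the chunk is the query, the second the output."""
    half = len(tokens) // 2
    return " ".join(tokens[:half]), " ".join(tokens[half:])

def masked_lm_example(tokens, mask_ratio=0.15, mean_span=3, seed=0):
    """T5-style span masking: replace sampled spans with sentinel tokens; the
    output lists each sentinel followed by the span it replaced."""
    rng = random.Random(seed)
    n_spans = max(1, round(len(tokens) * mask_ratio / mean_span))
    starts = sorted(rng.sample(range(0, len(tokens) - mean_span), n_spans))
    query, output, prev = [], [], 0
    for i, s in enumerate(starts):
        s = max(s, prev)                  # keep spans non-overlapping
        query += tokens[prev:s] + [f"<extra_id_{i}>"]
        output += [f"<extra_id_{i}>"] + tokens[s:s + mean_span]
        prev = s + mean_span
    query += tokens[prev:]
    return " ".join(query), " ".join(output)

tokens = ("the bermuda triangle is a loosely defined region in the western "
          "part of the north atlantic ocean where ships are said to vanish").split()
print(prefix_lm_example(tokens))
print(masked_lm_example(tokens))
```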
2.4 Efficient Retriever Fine-tuning

Retrieval is facilitated by using a document index, which is a pre-computed collection of the document embeddings for all the documents in the retrieval corpus. When jointly training the retriever and language model, the index needs to be updated regularly; otherwise, the embeddings of the documents stored in the index become stale relative to the updated retriever. This means that we need to recompute the embeddings for the full collection of documents regularly during training to keep the index fresh, which can be computationally expensive for large indices. This is particularly true at fine-tuning time, where the number of training examples could be small relative to the number of documents in the index. Training the retriever could thus add an important computational overhead compared to standard language model fine-tuning. In this section, we analyse strategies that might make this process more efficient, alleviating the need to re-compute the embeddings of all the documents too often.

2.4.1 Full Index Update

Let us start by analysing the overhead due to updating the index compared to using a fixed retriever. To compare the computation time of different models, we will make the following assumption: the time required to perform a forward pass on a document with a model of $P$ parameters is $O(P)$. While this computation model may seem naive, the main assumption is that document sizes are constant.¹ Since we split long documents into passages with similar numbers of words, and use padding when processing documents of different sizes, this assumption is reasonable in practice. Let $K$ be the number of documents that are retrieved and processed by the language model, $P_{lm}$ the number of parameters of the language model, and $B$ the batch size. Each training step has a complexity of $4 \cdot B \cdot K \cdot P_{lm}$.²

Next, let $N$ be the number of documents in the index, and $P_{retr}$ the number of parameters of the retriever. Then, re-computing the full index has a complexity of $N \cdot P_{retr}$. If we refresh the index every $R$ training steps, we obtain the following overhead:

$$\frac{N \cdot P_{retr}}{4 \cdot B \cdot K \cdot P_{lm} \cdot R}.$$

If we use the BERT base architecture for our retriever and T5-XL for our language model, we get $\frac{P_{retr}}{P_{lm}} \approx \frac{1}{25}$, leading to the overhead:

$$\frac{N}{100 \cdot B \cdot K \cdot R}.$$

If we use an index containing 37M documents (the size of our Wikipedia index), train with a batch size of 64 with 20 retrieved documents, and refresh the index every 1000 steps, this results in an overhead of 30%.

2.4.2 Re-ranking

A second strategy is to retrieve a larger number of documents $L$ with the retriever, to re-embed and re-rank these documents with the up-to-date retriever, and to pass the resulting top-K to the language model. In that case, the overhead of re-ranking the top-L documents is equal to $B \cdot L \cdot P_{retr}$. Since we perform this operation at every training step, the overhead is equal to

$$\frac{L \cdot P_{retr}}{4 \cdot K \cdot P_{lm}}.$$

Using the same assumptions as before, we finally get that the overhead is of the order of $\frac{L}{100 \cdot K}$. If we re-rank 10x more documents than what the language model processes (that is, $L = 10K$), we get an overhead of 10%. However, note that if many updates are performed on the retriever, the index might still need to be fully updated, as the true top-k documents may not be retrieved in the top-L results from the stale index. In practice, it is possible to track the positions of the top-K re-ranked documents in the top-L, and estimate when the index needs to be updated.

1. See Hoffmann et al. (2022) for more details about the computation of the FLOPS corresponding to the forward and backward passes of transformer networks.
2. There is a factor 4 to account for the backward pass and activation checkpointing.
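The overhead formulas above are easy to sanity-check numerically; this small sketch reproduces the roughly 30% and 10% figures under the stated assumption that P_retr/P_lm is about 1/25.

```python
def refresh_overhead(n_docs, batch, k_docs, refresh_every, p_ratio=1 / 25):
    """Index-refresh cost relative to training cost:
    N * P_retr / (4 * B * K * P_lm * R), with P_retr / P_lm ~ 1/25."""
    return n_docs * p_ratio / (4 * batch * k_docs * refresh_every)

def rerank_overhead(l_docs, k_docs, p_ratio=1 / 25):
    """Per-step re-ranking cost relative to training cost: L*P_retr/(4*K*P_lm)."""
    return l_docs * p_ratio / (4 * k_docs)

print(f"{refresh_overhead(37_000_000, 64, 20, 1000):.0%}")  # ~29%, i.e. the ~30% above
print(f"{rerank_overhead(l_docs=200, k_docs=20):.0%}")      # 10% for L = 10K
```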
2.4.3 Query-side Fine-tuning

Finally, the last strategy is to decouple the encoding of the queries and documents, as done in Guu et al. (2020). In this case, we fix the parameters corresponding to the document encoder, and only train the parameters corresponding to the query encoder. Thus, the embeddings of documents are fixed, we do not need to refresh the index, and there is no computational overhead. As we will see in practice, the impact of fixing the document encoder varies greatly across tasks when a large training data set is available. For most of the few-shot settings that we consider, query-side fine-tuning does not have a large performance impact, and sometimes even slightly improves performance.

3. Related Work

In this section we first review the literature on retrieval in language models, before giving an overview of few-shot learning in natural language processing.

3.1 Retrieval-augmented Models in Natural Language Processing

There has been a long line of work studying the effect and potential benefits of retrieval augmentation for NLP tasks.

3.1.1 Retrieval for Knowledge-Intensive Tasks

Previous work has shown that retrieval improves performance across a variety of tasks such as question answering (Voorhees, 1999; Chen et al., 2017; Kwiatkowski et al., 2019), fact checking (Thorne et al., 2018), dialogue (Dinan et al., 2019) and citation recommendation (Petroni et al., 2022). Historically, this information retrieval step was implemented using term-matching methods, such as TF-IDF or BM25 (Jones, 1972; Robertson et al., 1995). For open-domain question answering (Voorhees, 1999), documents are often retrieved from Wikipedia (Chen et al., 2017). Recently, dense retrievers based on neural networks have become popular. These usually follow a dual-encoder architecture (Yih et al., 2011; Huang et al., 2013; Shen et al., 2014), where queries and passages are encoded independently as vectors, and relevance is computed using the inner product or Euclidean distance. Popular supervised retrievers include DPR (Karpukhin et al., 2020), which is trained to discriminate the relevant passage among negative passages, and extensions such as ANCE (Xiong et al., 2021), which improved the hard-negative mining process. We refer the reader to Yates et al. (2021) for a survey of dense retrieval techniques. After retrieval, the relevant documents are processed to produce the final output. In open-domain QA, models can extract a span of text from retrieved documents as the answer (Chen et al., 2017; Clark and Gardner, 2018; Wang et al., 2019; Karpukhin et al., 2020), a method inspired by reading comprehension (Richardson, 2013; Rajpurkar et al., 2016). Recently, generating the answer as free-form text, using a seq2seq model conditioned on retrieved documents, has become prevalent (Lewis et al., 2020; Izacard and Grave, 2021a; Min et al., 2020). These architectures have also been shown to reduce hallucination in dialogue agents (Shuster et al., 2021).

3.1.2 Retriever Training

The need for expensive query-document annotations for training the retriever can be bypassed by leveraging signals from the language model, or by using unsupervised learning. REALM (Guu et al., 2020) and RAG (Lewis et al., 2020) jointly train the retriever and language model by modelling documents as a latent variable, and minimizing the objective with gradient descent. REALM pre-trains end-to-end with an MLM approach but uses an extractive BERT-style model (Devlin et al., 2019). Guu et al. (2020) also explore query-side fine-tuning at fine-tuning time to avoid index refreshes, which is also explored in the context of phrase-based retrieval by Lee et al. (2021b). Izacard and Grave (2021b) proposed to use cross-attention scores as supervision with knowledge distillation. Sachan et al. (2021) perform joint training of the reader and the retriever by leveraging the likelihood of the output generated by the reader. Sachan et al. (2021) and Lee et al. (2021a) both employ salient span masking to pre-train retrievers, leveraging the likelihood and attention scores from the language model. The inverse cloze task was proposed by Lee et al. (2019) to pre-train dense retrievers in an unsupervised way. Paranjape et al. (2021) propose a method to train retrieval-augmented generators using a second informed retriever with access to the output, which the test-time retriever can be distilled from, and Hofstätter et al. (2022) recently proposed a training-set filtering/weighting approach to train stronger retrieval-augmented generators. Izacard et al. (2022) explored different contrastive learning methods to train retrievers, while Ram et al. (2022) used recurring spans within a document to create pseudo-positive query-document pairs.

3.1.3 Retrieval-augmented Language Models

Continuous cache models (Grave et al., 2017b) define a probability distribution over recent tokens, by computing the similarity between previous and current representations of tokens. This distribution is then interpolated with the distribution of the language model, to improve predictions. Later, the number of tokens used to compute this distribution was extended to a much larger memory by leveraging approximate nearest-neighbor search (Grave et al., 2017a). The related kNN-LM model (Khandelwal et al., 2020) replaced LSTMs by transformer networks, and scaled the memory to billions of tokens, leading to strong performance improvements. More recently, RETRO (Borgeaud et al., 2021) extended these by scaling the retrieval memory to trillions of tokens, and changing the model architecture to take retrieved documents as input.

3.1.4 Retrieval-Augmentation with Search Engines

Recently, different works have proposed to train large language models to interact with a search engine, by generating text queries, and using the retrieved documents as additional context (Nakano et al., 2021; Thoppilan et al., 2022; Shuster et al., 2022). In the context of few-shot question answering, Lazaridou et al. (2022) used the question to perform a search query, and retrieved documents are added to the prompt of a large language model performing in-context learning.
3.2 Few-shot Learning

Few-shot learning, the task of learning from very few examples, has been studied for decades (Thrun and Pratt, 1998; Fink, 2005; Vinyals et al., 2016), but has recently seen an explosion of interest in NLP with the arrival of large pre-trained models.

3.2.1 In-context Learning with Large Language Models

Providing language models with natural language descriptions of tasks, as proposed by Radford et al. (2019), has led to significant developments in few-shot learning. GPT-3 (Brown et al., 2020) demonstrated the ability of large language models to perform few-shot predictions, where the model is given a description of the task in natural language with a few examples. Scaling model size, data and compute is crucial to enable this learning ability, leading to the further development of large models (Lieber et al., 2021; Rae et al., 2021; Smith et al., 2022; Chowdhery et al., 2022). Hoffmann et al. (2022) revisited the scaling laws of Kaplan et al. (2020), suggesting that training on more data with a smaller model may be more effective, resulting in Chinchilla, a 70B-parameter model with improved parameter efficiency.

3.2.2 Few-shot Finetuning and Prompt-based Learning

The above models perform few-shot learning with in-context instructions, without training the parameters of the language model. Few-shot learning can also be accomplished by combining textual templates (prompts) with various forms of model fine-tuning, either fully updating a model's parameters, for example for classification (Schick and Schütze, 2021a; Schick and Schütze, 2021; Gao et al., 2021; Tam et al., 2021) or generation (Schick and Schütze, 2021b). Prompts themselves can be optimized, for example by search (Jiang et al., 2020; Shin et al., 2020), by only updating parts of the model (Logan et al., 2021), or by learning soft prompts (Lester et al., 2021; Li and Liang, 2021). Due to its simplicity, in this work we either employ simple prompts or simply feed in inputs without preprocessing, and perform full-model fine-tuning, a method similar to Le Scao and Rush (2021).

4. Experiments

In this section, we report empirical evaluations of our language models on few-shot learning. We start by introducing our experimental setup, describing our evaluation benchmarks in Section 4.1 and giving the training details of our models in Section 4.2. Then, we perform an ablation study to compare the different technical choices leading to our main model. We finally evaluate this model, called Atlas, on a range of natural language understanding tasks, in few-shot and full data set settings.

4.1 Benchmarks

To evaluate our retrieval-augmented language models, we consider the following benchmarks, which span several different tasks.

4.1.1 Knowledge-Intensive Language Tasks (KILT)

First, we use the KILT evaluation suite (Petroni et al., 2020), containing 11 data sets corresponding to 5 tasks: fact checking, question answering, dialog generation, entity linking and slot filling. To be solved, these different tasks require knowledge about the world, which can be found in Wikipedia.
We evaluate our model on the following tasks and data sets included in KILT: question answering: Natural Questions (Kwiatkowski et al., 2019), TriviaQA (Joshi et al., 2017) and HotpotQA (Yang et al., 2018); slot filling: Zero Shot RE (Levy et al., 2017) and T-REx (Elsahar et al., 2018); entity linking: AIDA CoNLL-YAGO (Hoffart et al., 2011); dialogue: Wizard of Wikipedia (Dinan et al., 2019); and fact checking: FEVER (Thorne et al., 2018). The KILT versions of these data sets differ from their original versions, as instances requiring knowledge not present in the August 2019 Wikipedia dump have been removed.

4.1.2 Massively-Multitask Language Understanding (MMLU)

Our second main evaluation benchmark is MMLU (Hendrycks et al., 2021), which contains 57 multiple-choice question answering data sets (referred to as domains), sourced from real examinations designed for humans. These cover a very broad range of topics, for example high school mathematics, professional law, logical fallacies and clinical knowledge, and can be broadly categorized into four subsets: humanities, social sciences, STEM and other. We focus on few-shot learning, and the authors of the benchmark suggest using 5 training examples per domain. Beyond this 5-shot setting, we also consider three additional settings. The first is a zero-shot setting, with no training data at all. The second, which we call multi-task few-shot, is where we train a single model on the 5-shot data from all tasks, leading to a training set of 285 examples. The last, which we call transfer learning, leverages additional training examples from other multiple-choice QA tasks provided by the MMLU authors, namely MCTest (Richardson et al., 2013), RACE (Lai et al., 2017), ARC (Clark et al., 2018) and OBQA (Mihaylov et al., 2018), leading to a training set of 95k examples.

4.1.3 Additional Benchmarks

Additionally, we report results on the original open-domain versions of the popular Natural Questions (Kwiatkowski et al., 2019) and TriviaQA (Joshi et al., 2017) data sets. Generated answers are evaluated with the standard exact match metric (EM), as used by Rajpurkar et al. (2016). A generated answer is considered correct if it matches any answer in the list of acceptable answers after normalization. This normalization step consists of lowercasing and removing articles, punctuation and duplicated whitespace (a minimal sketch of this metric is given at the end of this subsection).

We also evaluate our model on the original version of FEVER (Thorne et al., 2018), which presents fact checking as a three-way classification problem for textual claims (either supported: the claim is supported by evidence in Wikipedia; refuted: the claim is not consistent with evidence in Wikipedia; or not enough info: there is insufficient evidence to make a judgement).

We also perform experiments to assess the temporal sensitivity of our models. Here, we construct a data set from TempLAMA (Dhingra et al., 2022), consisting of a set of time-sensitive cloze questions on a range of topics, where the answer changes from 2017 to 2020. We assess the accuracy of our models when supplied with an index from 2017 vs 2020, to measure to what degree models faithfully reflect the content of the index supplied to them at test time, and how effective updating the index is as a continual learning or model updateability method.
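Returning to the exact match metric above, a minimal sketch of the SQuAD-style normalization and matching it describes (our own illustration, not the official evaluation script):

```python
import re
import string

def normalize(text: str) -> str:
    """Lowercase, strip punctuation and articles, collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)   # remove articles
    return " ".join(text.split())                  # collapse duplicated whitespace

def exact_match(prediction: str, acceptable_answers: list[str]) -> bool:
    """A prediction is correct if it matches any acceptable answer."""
    return any(normalize(prediction) == normalize(a) for a in acceptable_answers)

assert exact_match("The Eiffel  Tower!", ["eiffel tower"])
```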
4.2 Technical Details

We now describe the procedure for pre-training and fine-tuning our models. We focus on the setting used for the ablation studies performed in Section 4.3 and Section 4.4. We give more details about the hyperparameters used for our final model later.

4.2.1 Pre-training

For pre-training, we initialize the retriever module using the unsupervised Contriever model (Izacard et al., 2022), which uses the BERT base architecture. We initialize the language model with the T5 pre-trained weights (Raffel et al., 2019). As the original T5 pre-trained model included supervised data in its training set, we use the version 1.1 models, which were trained on unlabeled text only. Specifically, we initialize from the T5-lm-adapt variants due to their improved stability. For the ablation studies performed in Section 4.3 and Section 4.4, we use T5-XL, which contains 3B weights. We pre-train all our models for 10,000 iterations, using AdamW with a batch size of 64 and a learning rate of 10^-4 for the reader and 10^-5 for the retriever, with linear decay and 1,000 warmup steps. We refresh the index every 1,000 steps. This means that the index is recomputed 10 times during pre-training, leading to an overhead of around 30% compared to training with a fixed retriever. We set the number of retrieved documents to 20. We detail the hyperparameters used for the training of our final model at the beginning of Section 5.

4.2.2 Fine-tuning

When performing a downstream task, either in a few-shot setting or with a large training set, we employ fine-tuning to adapt our models to these tasks. For the few-shot KILT ablation experiments, we perform a fixed number of fine-tuning iterations, instead of using early stopping. More precisely, we use 50 iterations in the 64-shot setting and 200 iterations in the 1024-shot setting. In both cases, we use a batch size of 32 examples and a learning rate of 4 x 10^-5 with linear decay and 5 warmup steps, for both the reader and the retriever.

4.2.3 Unlabeled Data Sets

Finally, we discuss the unlabeled text data sets that we use to train our models, which also form the retrieval index. First, we consider the Dec. 20, 2021 Wikipedia dump, for which we keep the lists and infoboxes, which are linearized by adding a semi-colon separator between the entries. We split articles by section, and split long sections into passages of equal size containing fewer than 200 words. This leads to a total of 37M passages, containing 78 words on average. We also use documents from the 2020-10 Common Crawl dump, preprocessed with the CCNet pipeline (Wenzek et al., 2020). We perform additional document filtering, in a similar fashion to Gopher (Rae et al., 2021). More precisely, we filter documents based on document length, average word length, ratio of alphanumeric characters and number of repeated tokens (a minimal sketch of such a filter follows below).
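A minimal sketch of such a heuristic quality filter, with illustrative thresholds (the actual cutoffs used in Gopher-style filtering are not specified here and should be treated as placeholders):

```python
def keep_document(text: str,
                  min_words: int = 50, max_words: int = 100_000,
                  min_avg_word_len: float = 3.0, max_avg_word_len: float = 10.0,
                  min_alnum_ratio: float = 0.8, max_repetition: float = 0.2) -> bool:
    """Heuristic document filter in the spirit of Gopher (Rae et al., 2021).
    Thresholds are illustrative placeholders, not the values used by Atlas."""
    words = text.split()
    if not (min_words <= len(words) <= max_words):
        return False
    avg_len = sum(len(w) for w in words) / len(words)
    if not (min_avg_word_len <= avg_len <= max_avg_word_len):
        return False
    alnum = sum(ch.isalnum() for ch in text) / max(len(text), 1)
    if alnum < min_alnum_ratio:
        return False
    # Fraction of tokens that are immediate repeats of the previous token.
    repeats = sum(a == b for a, b in zip(words, words[1:])) / max(len(words) - 1, 1)
    return repeats <= max_repetition
```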
This leads to a total of 350M passages. The same passages are used for the index and for model pre-training. During pre-training, we ensure that the passage we are training on is filtered out from the retrieved documents, to prevent the model from simply retrieving the passage it is de-noising/generating and trivially using it to solve the pre-training task.

4.3 Pre-training Loss and Tasks

We start our ablation study by comparing different pre-training tasks and objective functions to jointly train the retriever and the language model. Our goal here is to answer the following research questions:

(RQ 1) Does jointly pre-training the whole model lead to better few-shot performance?
(RQ 2) What is the best objective function for the retriever, and the best pretext task?

We start by comparing the training objectives of the retriever, introduced in Section 2.2, by pre-training models using the masked language modelling task. We evaluate these models on a subset of the 64-shot and 1024-shot KILT benchmark (Natural Questions, FEVER and Wizard of Wikipedia), along with three baselines: a closed-book model, a model without joint pre-training, and a model pre-trained with a fixed retriever. The closed-book baseline is a non-retrieval-augmented T5 model, initialized with T5-XL and further pre-trained on the same data as the other models with the masked language modelling task, to ensure that all models are trained on a similar amount of tokens; the closed-book model is then fine-tuned without retrieval augmentation. For the baseline without joint pre-training, the reader is also pre-trained without retrieval, and the retriever is initialized at fine-tuning time from Contriever and trained with the LDist loss. Similarly, the model pre-trained with a fixed retriever is fine-tuned with the LDist loss.

We report results in Table 1. First, we note the poor performance of the closed-book baseline, indicating the importance of retrieval augmentation. Next, we observe that pre-training our model with retrieval is important to obtain good performance on few-shot tasks. Indeed, all models that include retrieval during pre-training strongly outperform the baseline without joint pre-training.

                         MLM      64-shot                     1024-shot
                                  NQ    WoW   FEVER  Avg.     NQ    WoW   FEVER  Avg.
Closed-book              1.083    6.5   14.1  59.0   26.5     10.7  16.5  75.3   34.2
No joint pre-training      -      9.0   14.1  67.0   30.0      9.9  16.6  78.3   34.9
Fixed retriever          0.823   39.9   14.3  72.4   42.2     45.3  17.9  90.0   51.1
ADist                    0.780   40.9   14.4  73.8   43.0     46.2  17.2  90.9   51.4
EMDR²                    0.783   43.3   14.6  72.1   43.3     44.9  18.3  85.7   49.6
LDist                    0.783   45.0   15.0  77.0   45.7     44.9  17.9  90.2   51.0
LOOL                     0.766   41.8   15.0  74.4   43.7     47.1  17.9  87.5   50.8

Table 1: Loss function ablation. We compare different loss functions for pre-training the retriever jointly with the language model, using the prefix MLM task for pre-training. Fine-tuning is performed with query-side fine-tuning and the loss used for pre-training. Best result is bold, second highest underlined.
Next, we compare a model that was pre-trained with a fixed retriever to models using the various retriever training objectives. On the MLM validation metric corresponding to the pre-training objective, we observe that jointly training the retriever leads to strong improvements. This effect tends to be less marked on 64-shot downstream tasks, and almost non-existent for 1024-shot. We believe that this is evidence that the biggest impact of pre-training is on the language model, which learns to use and aggregate information from the retrieved documents. Lastly, we do not observe significant systematic differences between the different retriever training objectives. We thus decide to adopt Likelihood Distillation for subsequent experiments, as it tends to be more stable than EMDR² or ADist, and more computationally efficient than LOOL.

Next, we compare the different self-supervised pretext tasks introduced in Section 2.3 in Table 2. Here we observe similar results for all three tasks, with a small advantage for masked language modelling. Thus, in what follows, we adopt masked language modelling for pre-training.

                              64-shot                     1024-shot
                              NQ    WoW   FEVER  Avg.     NQ    WoW   FEVER  Avg.
Prefix Language Modelling    41.0   14.5  64.9   40.1     44.7  17.9  86.0   49.5
Masked Language Modelling    42.7   14.9  69.7   42.4     44.7  18.3  88.8   50.6
Title-to-section generation  41.1   15.2  66.1   40.8     45.4  17.9  84.6   49.3

Table 2: Pretext task ablation. We compare different pretext tasks used to jointly pre-train our models. Examples are randomly sampled from the training set of the KILT version of the data set. We report the exact match on Natural Questions, the F1 score on Wizard of Wikipedia and the accuracy on FEVER.

Finally, we consider different combinations of data sources (Wikipedia and Common Crawl) for the index and training data during pre-training. In all cases, we use the Wikipedia 2021 dump as the index when performing few-shot fine-tuning. We report results in Table 3. First, we observe that using a Wikipedia-based index leads to better downstream performance. There could be two explanations for this: first, as we use Wikipedia for the few-shot tasks, the model might be better adapted when trained using the same data. Another explanation might be that Wikipedia is a higher-quality and denser source of knowledge than Common Crawl. Second, when using a Common Crawl index, we observe that pre-training on Wikipedia data leads to lower performance than using Common Crawl data. We believe that the primary reason is that the distribution mismatch between the two domains leads to generally less relevant retrieved documents. In turn, this probably means that the pre-training is less efficient, because the language model does not leverage as much information from the documents. In the following, we decide to combine the data from both domains for the index and the pre-training data, to extend the coverage.

                           64-shot                     1024-shot
Index  Training data       NQ    WoW   FEVER  Avg.     NQ    WoW   FEVER  Avg.
Wiki   Wiki               42.7   14.9  69.7   42.4     44.7  18.3  88.8   50.6
Wiki   CCNet              40.9   15.3  67.3   41.2     44.8  18.4  88.1   50.4
CCNet  Wiki               32.9   14.5  72.1   39.8     37.8  17.1  85.8   46.9
CCNet  CCNet              38.4   14.9  70.1   41.1     42.0  17.3  88.9   49.4

Table 3: Index content ablation. We report results for models where the content of the index was changed between pre-training and fine-tuning.

4.4 Fine-tuning

In this section, we perform an ablation study on how to apply our models to downstream tasks, which relies on fine-tuning. In particular, we want to investigate the following research question:

(RQ 3) How to efficiently fine-tune Atlas on tasks with limited training data?

To answer this question, we compare the different strategies for fine-tuning the retriever module described in Section 2.4. We report results in Table 4.

                          64-shot                     1024-shot
                          NQ    WoW   FEVER  Avg.     NQ    WoW   FEVER  Avg.
Standard fine-tuning     44.3   14.9  73.2   44.1     47.0  18.4  89.7   51.7
Top-100 re-ranking       44.2   14.6  75.4   44.7     47.1  18.7  88.9   51.6
Query-side fine-tuning   45.0   15.0  77.0   45.7     44.9  17.9  90.2   51.0
Fixed retriever          36.8   14.5  72.0   41.1     38.0  17.7  89.3   48.3

Table 4: Retriever fine-tuning ablation. Here, we compare different strategies to fine-tune the retriever in a few-shot setting.
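As a rough sketch, the re-ranking strategy in row 2 of Table 4 retrieves a larger candidate set with the (possibly stale) index and re-scores it with the current retriever; the function below is illustrative, assuming a FAISS-style index and a query/document encoder pair, and is not Atlas's actual implementation:

```python
import numpy as np

def rerank_retrieve(query, index, passages, query_enc, doc_enc, L=100, K=20):
    """Retrieve top-L with the stale index, then re-embed and re-rank to
    top-K with the up-to-date retriever."""
    q = query_enc(query)                       # up-to-date query embedding
    _, ids = index.search(q[None, :], L)       # top-L from the stale index
    cands = [passages[i] for i in ids[0]]
    d = np.stack([doc_enc(p) for p in cands])  # re-embed with fresh weights
    scores = d @ q                             # inner-product relevance
    top = np.argsort(-scores)[:K]
    return [cands[i] for i in top]
```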
First, as for pre-training, we observe that keeping the retriever fixed during fine-tuning leads to a significant performance drop, in both the 64- and 1024-shot settings. Second, the re-ranking strategy (row 2) leads to very similar results to fully updating the index (row 1), while being significantly more efficient. Lastly, fine-tuning only the query encoder also leads to strong results: in particular, in the 64-shot setup, this is slightly stronger than performing full fine-tuning, which we attribute to there being less opportunity for over-fitting. On the other hand, in the 1024-shot setting, performing full fine-tuning leads to stronger results, especially on Natural Questions. In the following, we use query-side fine-tuning for experiments with fewer than 64 examples, and standard fine-tuning for larger data sets.

5. Training and Evaluating Atlas

In this section, we apply the findings from the ablations of the previous sections to train a family of Atlas models, ranging from 770M to 11B parameters. More specifically, we use the Likelihood Distillation objective function, along with the masked language modelling pretext task. We pre-train these models using a mix of Wikipedia and Common Crawl data, for both the training data and the content of the index. During pre-training, the reader generates based on 20 documents retrieved using the re-ranking strategy described in Section 2.4: we first retrieve 100 documents from the index, whose embeddings are potentially stale, and these documents are then re-embedded and re-ranked using the up-to-date retriever. The index is updated every 2,500 steps. We pre-train models for 10,000 iterations using AdamW with a batch size of 128. While training longer continued to improve perplexity, we did not observe further improvements on downstream tasks after fine-tuning by training longer.

5.1 MMLU Results

As mentioned in Section 4.1, we consider four settings for MMLU: 1) a zero-shot setting, where we directly apply the pre-trained model with no few-shot fine-tuning; 2) a 5-shot setting, where we fine-tune a model using 5 training examples for each of the 57 domains; 3) a 5-shot multitask setting, where, rather than fine-tuning a model independently for each domain, we train a single model to perform all tasks; and 4) a setting with access to a number of auxiliary data sets, with 95K total training examples. We train the models to generate the letter corresponding to the correct answer option (A, B, C or D), and pick the answer with the most likely of the 4 letters at test time. Full technical details can be found in Appendix A.1.

5.1.1 Performance vs Parameters

We start by comparing Atlas to closed-book models of different sizes in the 5-shot, 5-shot multitask and full settings, and report results in Table 5.

                 5-shot                5-shot (multi-task)    Full / Transfer
               770M    3B    11B      770M    3B    11B      770M    3B    11B
Closed-book T5 29.2   35.7   36.1     26.5   40.0   43.5     42.4   50.4   54.0
Atlas          38.9   42.3   43.4     42.1   48.7   56.4     56.3   59.9   65.8
               +9.8   +6.6   +7.3    +15.6   +8.7  +12.9    +13.9   +9.5  +11.8

Table 5: Performance on MMLU as a function of model size. We report the performance of Atlas on MMLU as a function of model size and compare it to closed-book T5.

Across these settings, Atlas outperforms the closed-book baselines by between 6.6 and 15.6 points, demonstrating the consistent utility of retrieval for few-shot language understanding across 57 domains. The closed-book T5 struggles to perform significantly better than random (25%) in few-shot settings with 770M parameters, whereas the equivalent Atlas achieves around 40%, significantly better than random, despite its small size.
All models improve with more data, but interestingly, the 770M models do not benefit as much from few-shot multitask learning compared to larger models (the closed-book model actually loses 3 points), suggesting that smaller models struggle to grasp the synergies between the tasks in the few-shot setting. Larger models exploit the multi-task setting well, with Atlas improving more than closed-book. For example, Atlas-11B improves by 13 points (43.4 to 56.4), whereas the equivalent closed-book model only improves by 7 (36.1 to 43.5). Finally, in the transfer learning setting, all models improve, but the relative gaps between closed-book and Atlas models remain similar.

5.1.2 De-biasing

When fine-tuning, we permute which answer option appears with which answer letter to reduce over-fitting and encourage a uniform prior over answer letters. However, the model may still exhibit a bias towards some letters, especially in few-shot settings, so we also include a second, de-biased inference mode in addition to the standard inference used above. Here, we run 4 forward passes, one for each cyclic permutation of the answer letter-answer option assignment in the question; for example, the answer option assigned to letter A becomes B, what was B becomes C, and so on.[3] We then sum the 4 probabilities to obtain the final prediction, which reduces spurious bias towards one of the answer letters (further details in Appendix A.1). The results are shown in Table 6. We find that in the zero-shot and 5-shot settings, de-biasing is very effective, improving results by 10.3 and 4.5 points respectively. When more training data is available, the need for de-biasing decreases, leading to only a 0.2 point improvement in the multi-task and full data settings.

[3] Exploring all answer option permutations would involve 24 forward passes, which improves results by an additional ~1% over the 4 cyclic permutations, but requires much more compute, so we exclude it here; see Appendix A.1.

                      Zero-shot   5-shot   5-shot (multi-task)   Full / Transfer
Standard Inference      36.8       43.4          56.4                 65.8
De-biased Inference     47.1       47.9          56.6                 66.0

Table 6: Standard versus de-biased inference for MMLU. These results are reported for Atlas-11B, using cyclic permutations for de-biasing, which increases inference costs by a factor of 4.
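A minimal sketch of this cyclic-permutation de-biasing, assuming a scoring function that returns a probability distribution over the four answer letters for a given letter-to-option assignment (our own illustration, with `score_fn` standing in for one forward pass of the model):

```python
import numpy as np

LETTERS = ["A", "B", "C", "D"]

def debiased_prediction(question, options, score_fn):
    """Sum answer probabilities over the 4 cyclic permutations of the
    letter-to-option assignment, then pick the best *option*."""
    total = np.zeros(len(options))  # probability mass per option, not per letter
    for shift in range(4):
        # Under this permutation, option i is shown under letter (i + shift) % 4.
        assignment = {LETTERS[(i + shift) % 4]: options[i] for i in range(4)}
        p_letters = score_fn(question, assignment)  # distribution over A-D
        for i in range(4):
            total[i] += p_letters[(i + shift) % 4]
    return int(np.argmax(total))  # index of the predicted answer option
```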
5.1.3 Comparison to Published Works

Next, we compare our Atlas-11B results with de-biasing to recently reported results with state-of-the-art large language models such as GPT-3 or Chinchilla, which required significantly more computation to train. We report results in Table 7. We find that Atlas is able to perform significantly better than random in zero-shot, and, in conjunction with de-biased inference, achieves zero-shot scores that exceed 5-shot results reported with GPT-3 in the literature (47.1% vs 43.9%) (Hendrycks et al., 2021). For the 5-shot setting, Atlas outperforms GPT-3 by 4%, while using 15x fewer parameters and 10x less pre-training compute.[4] When multitask-training on the combined 5-shot data, Atlas improves to 56.6%, close to the 5-shot performance of Gopher (60.0%). Finally, in the full data setting, where we train on the auxiliary data recommended by the MMLU authors, Atlas reaches an overall accuracy of 65.6%, close to the state-of-the-art. Interestingly, in this setup, Atlas significantly outperforms GPT-3, while in the 5-shot setting their performance is similar.

[4] Atlas's pre-training compute is dominated by the T5 pre-training. The computational requirements of the retrieval-augmented pre-training are orders of magnitude lower.

Setting    Model       Params  FLOPS    All   Hum.  Soc. Sci.  STEM  Other
zero-shot  Atlas       11B     3.5e22   47.1  43.6   54.1      38.0  54.4
5-shot     GPT-3       175B    3.1e23   43.9  40.8   50.4      36.7  48.8
           Gopher      280B    5.0e23   60.0  56.2   71.9      47.4  66.1
           Chinchilla  70B     5.0e23   67.5  63.6   79.3      55.0  73.9
           Atlas       11B     3.5e22   47.9  46.1   54.6      38.8  52.8
5-shot MT  Atlas       11B     3.5e22   56.6  50.1   66.4      46.4  66.2
Transfer   UnifiedQA   11B     3.3e22   48.9  45.6   56.6      40.2  54.6
           GPT-3       175B    3.1e23   53.9  52.5   63.9      41.4  57.9
           Atlas       11B     3.5e22   66.0  61.1   77.2      53.2  74.4

Table 7: Comparison to state-of-the-art on MMLU. For the 5-shot setting, Atlas uses fine-tuning, while previous works use in-context learning. The Atlas model uses de-biased inference. FLOPS refers to the total amount of computation necessary to train the model, including pre-training and/or fine-tuning. 5-shot MT refers to training a single model on multiple tasks, using 5 examples per task.

5.2 Open-domain Question Answering Results

Next we evaluate Atlas on two open-domain question answering benchmarks: Natural Questions and TriviaQA. We compare to prior work, both in a few-shot setting using 64 examples and using the full training set, and report results in Table 8. On these benchmarks, which require a high degree of memorisation, we clearly see the benefits of retrieval augmentation. Atlas-11B obtains state-of-the-art results on 64-shot question answering, for both Natural Questions and TriviaQA. In particular, it outperforms significantly larger models, such as PaLM, or models that required significantly more training compute, such as Chinchilla. When using the full training set, Atlas also obtains state-of-the-art results, for example improving the accuracy on Natural Questions from 55.9% to 60.4%. This result is obtained using an index comprised of CCNet and the December 2021 Wikipedia corpora, our default setting for the index. In Section 6.2 we consider using indexes composed of Wikipedia corpora archived at different dates, and demonstrate an additional +3.6% on Natural Questions when using an index which is temporally matched to Natural Questions. We report performance as a function of model size, as well as detailed hyperparameters, in Appendix A.2.

Atlas also compares favorably to recent work exploring retrieval-augmented few-shot question answering with very large models. Lazaridou et al. (2022) explore Natural Questions in a 15-shot setup using Gopher, augmenting questions with 50 passages retrieved using Google Search. This method consists of generating 4 candidate answers from each retrieved passage, and then re-ranking using either a score inspired by RAG (Lewis et al., 2020) or a more expensive approach. This method (not shown in our tables) achieves exact match scores of 32.7% (RAG) and 38.4% (Ensemble), requiring 50 (RAG) or 450 (Ensemble) forward passes of Gopher-280B per test-time question. Atlas, using the same 15 training examples and 50 passages, achieves 38.7 EM, despite having 25x fewer parameters and requiring comparatively negligible compute.
                                      NQ              TriviaQA filtered    TriviaQA unfiltered
Model                              64-shot  Full      64-shot  Full        64-shot  Full
GPT-3 (Brown et al., 2020)          29.9     -          -       -           71.2     -
Gopher (Rae et al., 2021)           28.2     -         57.2     -           61.3     -
Chinchilla (Hoffmann et al., 2022)  35.5     -         64.6     -           72.3     -
PaLM (Chowdhery et al., 2022)       39.6     -         81.4     -            -       -
RETRO (Borgeaud et al., 2021)        -      45.5        -       -            -       -
FiD (Izacard and Grave, 2021a)       -      51.4        -      67.6          -      80.1
FiD-KD (Izacard and Grave, 2021b)    -      54.7        -      73.3          -       -
R2-D2 (Fajcik et al., 2021)          -      55.9        -      69.9          -       -
Atlas                               42.4    60.4       74.5    79.8         84.7    89.4

Table 8: Comparison to state-of-the-art on question answering. We report results on Natural Questions, and on TriviaQA for both the filtered set, commonly used for open-domain question answering, and the unfiltered hidden set for which evaluation is accessible online: https://competitions.codalab.org/competitions/17208 . For the 64-shot setting, our model uses fine-tuning, while the other models use prompting.

5.3 FEVER Results

We report results on the original 3-class FEVER fact checking test set in Table 9. We consider a 64-shot setting, with training examples uniformly sampled from the full training set. Unlike the development and test sets, the train set is imbalanced, with more positive labels than negative, posing a challenge for few-shot learning. In this setting, we achieve an accuracy of 64.3%. We also report a 15-shot setting, with 5 examples uniformly sampled from each class, to compare with published results from Gopher (Rae et al., 2021); here Atlas scores 56.2%, outperforming Gopher by 5.1 points. Lastly, we fine-tune our model on the full training set, and achieve a score of 78.0%, within 1.5% of ProoFVer, which uses a specialized architecture, a retriever trained with sentence-level annotations, and is supplied with the Wikipedia corpus released with FEVER, whereas Atlas retrieves from CCNet and the December 2021 Wikipedia dump. If we give Atlas an index comprised of the FEVER Wikipedia corpus, we set a new state-of-the-art of 80.1%.

                                  15-shot   64-shot   Full data set
Gopher (Rae et al., 2021)          51.1       -           -
ProoFVer (Krishna et al., 2022)     -         -          79.5
Atlas                              56.2      64.3        78.0 / 80.1*

Table 9: Comparison to state-of-the-art on FEVER. We report accuracy on the FEVER test set, for which evaluation is available here: https://competitions.codalab.org/competitions/18814 . For the few-shot settings, our model uses fine-tuning while other models use prompting. *Uses an index composed of the FEVER Wikipedia corpus.

5.4 KILT Results

Finally we evaluate Atlas on KILT, a benchmark composed of several different knowledge-intensive tasks, which was described in Section 4.1. We report results on the test sets for which evaluation is available online[5] in Table 10. The KILT versions of the data sets are filtered, and thus results on Natural Questions, TriviaQA and FEVER reported elsewhere are not directly comparable on KILT. We consider both a 64-shot setting and a full fine-tuning setting; in both cases we train Atlas individually on each data set. More details on the hyperparameters and development set results are reported in Appendix A.3.

[5] https://eval.ai/web/challenges/challenge-page/689
Model                             AIDA   FEV    T-REx  zsRE   NQ     HoPo   TQA    WoW
                                  acc    acc    acc    acc    em     em     em     f1
GENRE (Cao et al., 2021)          89.9    -      -      -      -      -      -      -
Sphere (Piktus et al., 2021)       -     89.0   81.7   74.2   51.6   38.3   72.7   15.5
SEAL (Bevilacqua et al., 2022)     -     89.5   83.6   74.6   53.7   40.5   70.9   18.3
Re2G (Glass et al., 2022)          -     89.6   87.7    -     51.7    -     76.3   18.9
FID+RS (Hofstätter et al., 2022)   -     92.2   85.2   83.7   61.2   39.1   84.6   20.6
Atlas, 64-shot                    66.5   87.1   58.9   74.9   43.6   34.7   76.4   15.5
Atlas, full train set             90.6   93.5   85.1   80.8   61.3   50.6   84.0   21.6

Table 10: Downstream results on the KILT hidden test sets. Downstream metrics are accuracy (AIDA CoNLL-YAGO, FEVER, T-REx, zero-shot RE), exact match (Natural Questions, HotpotQA, TriviaQA), or F1 (Wizard of Wikipedia).

For 64-shot, we greatly exceed random performance, and are even competitive with some fully-fine-tuned models on the leaderboard: on FEVER, for example, our 64-shot Atlas is only 2-2.5 points behind Sphere, SEAL and Re2G, and it outperforms Sphere and SEAL on zero-shot RE. In the full data set setting, Atlas is within 3% of the state-of-the-art for 3 data sets, and sets the state-of-the-art on the remaining five data sets.

6. Analysis

In this section we discuss specific aspects of Atlas as a retrieval-augmented language model. First, we analyse retrieved documents to interpret Atlas's generations. Second, we probe the updateability and temporal sensitivity of Atlas when the content of the index is modified.

6.1 Interpretability and Leakage

An advantage of semi-parametric models like Atlas is the ability to inspect retrieved items to aid interpretability. To better understand how well Atlas retrieves, and how it uses retrieved passages, we examine the retrieved passages for multi-task few-shot MMLU.

[Figure 3: MMLU Retrieval Analysis. Left: fraction of sources of the top 30 retrieved passages for MMLU from CCNet, Wikipedia passages and infoboxes for the 5-shot multitask Atlas. Center: how often the text of the correct MMLU answer option appears in retrieved passages, as a function of the number of retrieved passages. Right: MMLU accuracy as a function of answer occurrence frequency in retrieved passages.]

As shown in the left panel of Figure 3, the model retrieves the majority of its passages from CCNet (85% on average). Wikipedia makes up about 15% of retrieved passages, which is higher than we would expect under a uniform prior, given that Wikipedia only makes up about 10% of the index. The fraction of Wikipedia retrieval varies between MMLU domains, with the model using Wikipedia to a greater extent for STEM domains, and least for social sciences. The domain making the greatest use of Wikipedia is abstract algebra (73%), and the least is moral scenarios (3%). We also note that the MMLU-fine-tuned Atlas does not make significant use of Wikipedia infobox passages. We can also analyse the content of passages to assess how they may be useful for accomplishing the downstream task. The middle panel of Figure 3 shows how often retrieved documents contain the text of the correct answer option.
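A sketch of how such an occurrence analysis can be computed, assuming retrieved passages are available as ranked strings (illustrative, not the paper's analysis code):

```python
def answer_hit_rate(examples, k=25):
    """Fraction of questions whose correct answer option text appears in at
    least one of the top-k retrieved passages. Each example is assumed to be
    a dict with 'answer' (str) and 'passages' (list of str, ranked)."""
    hits = 0
    for ex in examples:
        answer = ex["answer"].lower()
        if any(answer in p.lower() for p in ex["passages"][:k]):
            hits += 1
    return hits / len(examples)

def answer_frequency(example, k=25):
    """Number of top-k retrieved passages mentioning the answer text, used to
    bucket questions as in the right panel of Figure 3."""
    answer = example["answer"].lower()
    return sum(answer in p.lower() for p in example["passages"][:k])
```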
There is at least one mention of the correct answer choice in the top 25 passages for 30% of test questions.[6] The right panel shows that accuracy on MMLU increases when the correct answer option text occurs more frequently in retrieved passages, rising from 55% for questions where the answer option does not appear, to 77% for questions where it is mentioned more than 15 times.

[6] Note: depending on the question, it may not be important or useful to retrieve the exact text of the answer in MMLU, and as such, a hits@k value of 30% does not imply that retrieval fails to surface useful information in 70% of cases.

A human analysis of retrieved documents revealed that documents are helpful for answering questions in a number of different ways. Manual inspection of a sample of 50 correctly-answered questions revealed that 44% contained at least partially useful background information. These are documents that would improve the likelihood of a non-expert human answering correctly, such as contextual clues surrounding a quotation from a question, or helpful numerical figures for quantity-based questions, which help to narrow down the answer options to a smaller range. In a further 26% of cases, a passage contained all the necessary information to answer the question, stated in a straightforward way. If read competently, such passages make the question simple to answer, and often include information such as canonical definitions, or the exact numerical answer requested in the question. 28% of retrieval sets did not contain obvious information which would make the question easier. Finally, 2% contained the verbatim question in a passage, together with its answer.

Given that MMLU has been created from pre-existing exams, it is possible that these questions appear on the open web. Models trained on web data (or, in our case, retrieving from it) run the risk of answering correctly not through generalisation, but by verbatim memorisation, which could lead to misleadingly high scores. For some very large language models, which can verbatim memorize and recall large parts of their pre-training data (Carlini et al., 2021), efforts have sometimes been made to filter occurrences of downstream instances from pre-training data, but this has not been performed for MMLU in the literature. In order to assess the prevalence of MMLU leakage in our index, we manually checked retrieval results for questions where the longest n-gram overlap between the question (without answer options) and a passage was at least 75% of the length of the question. This resulted in an estimated leakage of 2.8% of questions from our CCNet corpus.

A benefit of retrieval-augmented models such as Atlas is the editability of their knowledge (see Section 6.2 for additional analysis). To estimate pure, non-leaked performance, we can filter out any potentially-leaked passages from retrieved results and rerun the language model. The MMLU score drops slightly when controlling for this leakage, from 56.4% to 55.8% (-0.5%). We note that our CCNet corpus is relatively small compared to the pre-training corpora of recent very large models, which are trained on up to 1.4 trillion tokens (Hoffmann et al., 2022), 35x the size of our index, making it likely that models trained on corpora of that size would observe more MMLU leaked examples; detecting such leakage, however, is challenging in non-retrieval-augmented models.
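A minimal sketch of the overlap heuristic used above to flag potential leakage, assuming simple whitespace tokenization (illustrative, with the 75% threshold taken from the text):

```python
def longest_common_ngram(question: str, passage: str) -> int:
    """Length (in tokens) of the longest contiguous token span shared
    between the question and the passage."""
    q, p = question.lower().split(), passage.lower().split()
    best = 0
    for i in range(len(q)):
        for j in range(len(p)):
            k = 0
            while i + k < len(q) and j + k < len(p) and q[i + k] == p[j + k]:
                k += 1
            best = max(best, k)
    return best

def potentially_leaked(question: str, passages: list[str]) -> bool:
    """Flag a question if some passage shares an n-gram covering at least
    75% of the question's tokens."""
    n = len(question.split())
    return any(longest_common_ngram(question, p) >= 0.75 * n for p in passages)
```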
6.2 Temporal Sensitivity and Updateability

A benefit of retrieval-augmented models is that they can be kept up-to-date without retraining, by updating or swapping their index at test time. To assess the effectiveness of this mechanism in Atlas, we first construct a data set of time-sensitive questions derived from TempLAMA (Dhingra et al., 2022). TempLAMA is a collection of templated cloze questions derived from Wikidata and Wikipedia where the correct answer changes over time. We select a subset of questions from this data set which have a different answer in 2017 and 2020, for example, Question: "Theo Walcott plays for ___" Answer: Arsenal F.C. (2017), Everton F.C. (2020), and form a small data set of 248 training, 112 development and 806 test questions.

Using this data set, we fine-tune closed-book T5-XXL and Atlas using the questions and the 2017 answers, supplying Atlas with a 2017 Wikipedia index, and then measure exact match accuracy on the 2017 test set. The results can be found in the first row and first two columns of Table 11. We first observe that, as expected, Atlas greatly outperforms T5 (57.7% vs 12.1%). We also note that, as desired, both T5 and Atlas almost never generate an answer from 2020 when trained with the 2017 answers, scoring 2.9% and 1.5% respectively (first row, second two columns of Table 11). However, as shown in row 2, we can swap the Atlas index to a 2020 Wikipedia index, without retraining, and find that Atlas updates its predictions accordingly, with 2020 accuracy rising to a similar level to its 2017 performance (53.1%), whereas the purely parametric T5 has no such updateability mechanism.

                               2017 Test Set Acc.       2020 Test Set Acc.
Train Set     Test-time Index  Closed-book   Atlas      Closed-book   Atlas
2017 answers  2017                12.1        57.7          2.9         1.5
              2020                12.1        10.2          2.9        53.1
2020 answers  2017                 4.8        50.1          3.6         4.2
              2020                 4.8         3.5          3.6        60.5

Table 11: Results on our TempLAMA-derived data set. We report performance for a static, closed-book T5-11B, as well as Atlas-11B supplied with a test-time Wikipedia index from 2017 or 2020. We evaluate models fine-tuned on a small training set of 248 time-sensitive cloze-question-answer pairs, using answers either from 2017 or 2020. Good models should score highly when the test set year matches the year of the test-time index, and score low otherwise.

This demonstrates that Atlas can be faithful and condition strongly on its supplied index. Furthermore, this zero-shot updateability mechanism has the useful property of staying up-to-date without requiring up-to-date annotated data, or continuous, lifelong pre-training, as may be required for a large parametric-only model. Rows 3 and 4 of Table 11 complete the picture: this time we train with 2020 answers, and demonstrate that Atlas can zero-shot transfer backwards in time to 2017 effectively too (50.1%). Interestingly, T5 is unable to answer questions from 2020 well, even when trained with 2020 answers (3.6%), likely because it was pre-trained on data pre-dating 2020 (Dodge et al., 2021).

          Dec. 2017   Dec. 2018   Aug. 2019   Dec. 2020   Dec. 2021
64-shot     44.7        45.1        44.1        44.0        41.3
Full        63.2        64.0        62.4        61.1        59.6

Table 12: Impact of index data temporality on Natural Questions. We report exact match performance on Natural Questions using different Wikipedia dumps in the index. We observe that the dump from December 2018, commonly used for Natural Questions, leads to the best result.
We also examine temporal effects for Natural Questions. Natural Questions is a data set composed of search queries collected via the Google search engine over a short period of time. The data thus have a strong temporal bias, with, for example, many questions about the 2018 World Cup. Moreover, some questions are ambiguous without specification of the temporal context. For instance, for the question "when did ireland last beat england at twickenham", the expected answer in Natural Questions is 2018, while Ireland also beat England at Twickenham in 2022, as well as many other times before. In Table 12, we report results obtained by fine-tuning Atlas using different Wikipedia dumps for the index. We observe that the December 2018 Wikipedia dump, which is close to the date of data collection, leads to the best results for both few-shot and full fine-tuning. In particular, it leads to a new state-of-the-art of 64.0 EM on Natural Questions.

6.2.1 Index Compression

Maintaining dense retrieval indices can be memory-intensive, especially as the number of indexed items is scaled. In this section, we briefly analyse the memory requirements of Atlas's index in the case of a) a Wikipedia index and b) the combined CCNet and Wikipedia index used in most of the experiments above. There are two sources of memory pressure for Atlas's retrieval component: the passages themselves, and the document embedding index. The tokenized passages, once binarized, require 11GB and 130GB of storage for the Wikipedia and combined indices respectively. These passages do not need to be stored in expensive GPU RAM, and could even be memory-mapped to disk, sharded across nodes or compressed if required, and thus do not represent a limiting hardware challenge in this context. The embedding index itself, however, must be stored in GPU RAM for fast search, and thus its size is more sensitive. In the above experiments, we perform exact search over our index, which is achieved by sharding the index over all the available GPUs and computing the search in parallel. The index is stored at fp16 precision, resulting in a total GPU memory requirement of 49 GB and 587 GB for the Wikipedia and combined indices, respectively.

This large GPU memory requirement for the index limits accessibility and ease of deployment. However, many index compression techniques are available for nearest neighbour search, which can often dramatically reduce memory requirements at the cost of some retrieval accuracy. Following Izacard et al. (2020), we explore the effect of Product Quantization (PQ, Jégou et al., 2010), a popular lossy compression technique, on Atlas-3B's accuracy for the 64-shot NQ task at different compression levels.

[Figure 4: Index Compression: Atlas-3B 64-shot NQ performance (left column: retrieval Recall@50, right column: QA exact match score) as a function of index size, for different levels of quantisation. The right-most point in each plot represents the uncompressed index. Top row: Wikipedia + CCNet index. Bottom row: Wikipedia index.]

The results are shown in Figure 4. We find that substantial compression is possible before the onset of significant performance degradation: the Wikipedia index can be compressed from 49GB to 4GB with a negligible drop in retrieval precision and exact match, and likewise the combined index can be compressed from 587GB to 50GB without serious degradation, indicating that the combined index could be loaded onto a single 80GB GPU.
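A minimal FAISS sketch of product-quantized search of the kind explored here (the dimensions and PQ parameters are illustrative; the paper does not specify its exact configuration):

```python
import faiss
import numpy as np

d = 768                                              # embedding dimension (BERT-base-sized)
xb = np.random.rand(100_000, d).astype("float32")    # stand-in passage embeddings
xq = np.random.rand(5, d).astype("float32")          # stand-in query embeddings

# Product quantization: split each vector into m sub-vectors, encode each with
# an 8-bit codebook. Memory per vector drops from 4*d bytes to m bytes.
m = 64                                               # number of sub-quantizers (must divide d)
index = faiss.IndexPQ(d, m, 8, faiss.METRIC_INNER_PRODUCT)
index.train(xb)                                      # learn the codebooks
index.add(xb)

scores, ids = index.search(xq, 50)                   # approximate top-50 (cf. Recall@50)
```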
7. Discussion

In this paper, we introduce Atlas, a large retrieval-augmented language model. By jointly pre-training the retriever module and the language model, we show that Atlas has strong few-shot learning capabilities on a wide range of knowledge-intensive tasks, including Natural Questions, TriviaQA, FEVER, 8 KILT tasks and 57 MMLU tasks. For example, Atlas-11B reaches more than 42% accuracy on Natural Questions and 84.7% on TriviaQA when training on 64 examples, which is an improvement of almost 3 points compared to PaLM, a 540B-parameter model that required 50x more pre-training compute. We also provide detailed ablations and analyses of the factors that are important when training such retrieval-augmented models, and demonstrate Atlas's updateability, interpretability and controllability. Lastly, we demonstrate that Atlas is also powerful in full data set settings, obtaining new state-of-the-art results on Natural Questions, TriviaQA, FEVER, and 5 KILT tasks.

The few-shot results presented in this paper were obtained by fine-tuning Atlas on few examples, rather than using in-context learning. In-context learning presents significant practical advantages over fine-tuning, as it does not change the model weights. The development of retrieval-augmented language models that preserve the ability of their non-augmented counterparts to generalize from few in-context examples and instructions is a crucial challenge toward general retrieval-augmented language models and their wider adoption.

Appendix A. Training Details and Additional Results

In this appendix we present additional results and provide details about the parameters used to fine-tune models on MMLU, question answering data sets and KILT tasks.

A.1 MMLU

Here, we report results on the 57 MMLU domains, details about the fine-tuning, and how the model predictions are de-biased.

A.1.1 Featurization

MMLU consists of multiple-choice questions with four possible lexicalized answer options. We represent the input using the following template:

    question: {question text} options: (A) {answer 1} (B) {answer 2} (C) {answer 3} (D) {answer 4} answer: [MASK_0]

and train the model to generate the mask token followed by the letter of the correct answer:

    [MASK_0] {correct answer option letter}

This format closely matches the format of the MLM pre-training objective, aiding few-shot learning. When training, we permute the order of the answer options, that is, we shuffle which answer option appears as letter A, and so on. This helps reduce overfitting, and encourages a uniform prior over the letters.

A.1.2 Standard Inference

Once trained, we obtain predictions from the model by selecting the pre-softmax logits for the tokens A, B, C and D, and performing a softmax over them to obtain a distribution over the 4 answer options. For standard inference, we then simply return the answer corresponding to the argmax of this distribution.
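A minimal sketch of this featurization and standard inference, assuming access to the pre-softmax logits for the four letter tokens (the function names are illustrative):

```python
import numpy as np

LETTERS = ["A", "B", "C", "D"]

def featurize(question: str, options: list[str]) -> str:
    """Build the MMLU input template described above."""
    opts = " ".join(f"({l}) {o}" for l, o in zip(LETTERS, options))
    return f"question: {question} options: {opts} answer: [MASK_0]"

def standard_inference(letter_logits: np.ndarray) -> str:
    """Softmax over the logits of the tokens A-D, then argmax.
    `letter_logits` stands in for the model's pre-softmax scores."""
    probs = np.exp(letter_logits - letter_logits.max())
    probs /= probs.sum()
    return LETTERS[int(np.argmax(probs))]
```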
A.1.3 De-biased Inference

As mentioned in the main text, even though our model is fine-tuned with data that encourages a uniform prior over answer letters (by permuting which answer option letter is used with which lexical answer option text in the training data), this may not be enough to ensure the model has no residual bias towards specific letters. Consider answers a, questions q and a nuisance variable z \in Z, which represents the ordering of the answer options or, equivalently, which answer letter gets assigned to which answer option text. There are 4 answer options in MMLU, and thus |Z| = 24 unique ways they can be ordered, or assigned to given letters. Running our model with standard inference for a question q corresponds to calculating p(a | q, z) for the answer ordering z that happens to appear in the data set. We can control for z by running the model with all possible answer orderings in the input and marginalizing:

    p(a | q) = \sum_{z \in Z} p(a | q, z) \, p(z | q),

and, assuming p(z | q) is uniform (no answer ordering is more likely than another), this reduces to simply

    p(a | q) \propto \sum_{z \in Z} p(a | q, z).

This procedure requires 24 forward passes, one for each answer ordering, so it is 24x slower than standard inference. Table 13 shows the result of applying the full permutation de-biasing, which leads to a 12% improvement in zero-shot and a 6% improvement in 5-shot performance overall. Empirically, using only the cyclic permutations of the answer order provided in the original data set (of which there are 4) works nearly as well, which is what we report in the main paper, and only increases inference compute by a factor of 4, rather than 24. Cyclic permutation de-biasing improves over standard inference by 10% in zero-shot and 5% in 5-shot. Empirically, de-biased inference is largely unnecessary when training in the 5-shot multitask or full data set setting, as there is enough data for the model to learn a more uniform prior over the letters.

Setting    Inference            All   Hum.  Soc. Sci.  STEM  Other
zero-shot  Standard             36.8  37.5   39.0      30.2  39.7
           All permutations     48.5  45.7   55.2      39.4  54.4
           Cyclic permutations  47.1  43.6   54.1      38.0  54.9
5-shot     Standard             43.4  41.8   49.3      33.9  48.8
           All permutations     49.0  46.0   56.1      40.5  54.6
           Cyclic permutations  47.9  46.1   54.6      38.8  52.8

Table 13: MMLU scores with de-biasing.

A.1.4 Evaluation

We evaluate by following the method of Hendrycks et al. (2021), namely, micro-averaging across all 57 domains to obtain overall accuracy. We quote the results of GPT-3 (Brown et al., 2020) and UnifiedQA (Khashabi et al., 2020) from the MMLU leaderboard at https://github.com/hendrycks/test . For Chinchilla and Gopher, we calculate the scores on the categories using the full MMLU results from Hoffmann et al. (2022).

A.1.5 Index

The index used for all MMLU experiments in the main paper comprised the concatenation of the Wikipedia passages, Wikipedia infoboxes and Common Crawl indices, for a total of 387M passages. We can assess the importance of the index by running a model without the Common Crawl data, leading to a 5-shot multitask result of 52.8%, compared to 56.4% for the full model, a drop of 3.6%. This indicates that whilst the Wikipedia data is sufficient to do well on the task, the addition of the CCNet data improves results further.
A.1.6 Hyperparameters and Development Data

Selecting hyperparameters is challenging in few-shot settings. We do not assume access to an in-domain development set for the 5-shot task. Instead, we determine a set of hyperparameters for the 5-shot task using data from RACE, one of the auxiliary data sets provided by MMLU. Here, we sample 5 sets of 5-shot training data and, for each model size, we explore batch size {32, 64}, learning rates for the language model and retriever {(5e-5, 1e-5), (4e-5, 4e-5)}, retriever temperature {0.1, 0.01} and a fixed number of training steps {16, 32, 64, 128}, picking the setting that achieves the strongest RACE validation scores. Having determined these hyperparameters, we apply them directly to the 5-shot MMLU task. For the 5-shot multi-task and full/transfer settings, we use the same batch size, temperatures and learning rates as for the 5-shot task, but use a set of 285 MMLU validation examples (5 per domain) in order to determine the total number of training steps and for early stopping. The hyperparameters selected in the MMLU experiments can be found in Table 14. We use query-side fine-tuning for the 5-shot and 5-shot multitask settings, and top-128 re-ranking for the full setting. For all MMLU runs we retrieve 30 documents.

                                      770M           3B             11B
Batch size                             64             64             64
Learning rate                     (5e-5, 1e-5)   (5e-5, 1e-5)   (5e-5, 1e-5)
Retriever temperature                 0.1            0.1            0.1
5-shot train steps                     64             32             16
5-shot (multitask) max train steps    2000            500            250
Full / transfer max train steps       5000           2000           2000

Table 14: Hyperparameters for MMLU.

A.1.7 Inter-run Variance

Few-shot learning is well known to suffer from high variance. In the main paper, we quote the result obtained with our first run. In order to assess the effect of noise and variance, we ran the 5-shot experiment with Atlas 5 times.[7] We observe high variance for individual domains, sometimes as high as 20%; however, once aggregated across all 57 domains, the inter-run variance is low. The overall scores for these different runs, obtained with the same hyperparameters, are shown in Table 15. Due to the effects of averaging over the many domains that comprise MMLU, the inter-run variance is quite modest on the aggregated metrics, with a standard deviation of 0.5 in this experiment.

[7] This experiment was performed with a slightly different index to the main experiments, which achieves a stronger result.

Run #     All        Hum.       Soc. Sci.   STEM       Other
1         45.2       40.6       54.1        37.1       51.1
2         45.1       39.8       54.4        37.1       52.0
3         45.0       40.0       54.1        37.7       51.1
4         45.6       41.3       54.7        37.0       51.6
5         44.3       40.6       50.7        38.1       49.8
Ave.      45.0±0.5   40.5±0.6   53.6±1.6    37.4±0.5   51.1±0.8

Table 15: Inter-run variance for 5-shot MMLU using Atlas-11B.

A.1.8 Closed-Book Baselines

The closed-book baselines we compare Atlas to in Table 5 are initialized from the same T5 model as their respective Atlas, and then pre-trained with MLM for the same number of steps (10K) using the same pre-training data as Atlas, for a fairer comparison. The same procedure as for Atlas was used to determine MMLU hyperparameters for the closed-book models.

A.1.9 Full Results

Tables 16 and 17 show the full MMLU scores for each domain for Atlas and the closed-book T5, respectively. The full results for the cyclic-permutation-de-biased Atlas-XXL can be found in Table 18.

A.2 Question Answering

We report additional training details on question answering tasks, as well as results obtained with models of different sizes.
A.2.1 Training Details

For question answering, similarly to the MMLU experiments, we format the input using the following template:

    question: {question text} answer: [MASK_0]

and train the model to generate the mask token followed by the answer: [MASK_0] {answer}. We generate answers using greedy decoding. For both training and testing, we retrieve 40 passages, and truncate the result of the concatenation between the query and the passages to 384 tokens. For few-shot fine-tuning, we train Atlas for 30 steps using 64 random samples from the train sets. The retriever is trained using query-side fine-tuning. We select the model after 30 training steps. We use AdamW with a batch size of 32 and a learning rate of 4 x 10^-5 with linear decay and 5 iterations of warmup, for both the language model and the retriever.

For fine-tuning on the full data sets, we train the model for 5k gradient steps and refresh the index every 500 steps for the first 1,000 training steps, and every 2k training steps afterwards. We use AdamW with a batch size of 64 and a learning rate of 4 x 10^-5 with linear decay and 5 iterations of warmup, for both the language model and the retriever. We evaluate models every 500 steps and select the best one on the validation set based on the exact match score.

A.2.2 Impact of Scaling

In Table 19, we report performance on Natural Questions and TriviaQA as a function of the number of parameters in the reader module. For both few-shot learning and full fine-tuning we observe strong improvements from scaling the size of the reader module. However, we notice signs of saturation when fine-tuning on full data sets, with limited gains when scaling from 3B to 11B parameters (+0.6% on Natural Questions, +0.5% on TriviaQA). By contrast, performance improves substantially when scaling from 3B to 11B parameters with 64 training samples, with +3.7% and +1.2% improvement on Natural Questions and TriviaQA respectively. For these experiments we use a setup similar to the one used in Table 8, except that we use an index composed of the December 2018 Wikipedia dump, processed as described in Section 4.2.
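A minimal sketch of the QA featurization and truncation described in A.2.1, with a generic tokenizer standing in for the real one; the "context:" prefix for passages is our own assumption, in the spirit of Fusion-in-Decoder, and is not specified in the text:

```python
def build_qa_input(question: str, passages: list[str], tokenizer,
                   max_len: int = 384) -> list[list[int]]:
    """Format the query as in A.2.1 and truncate each query + passage
    concatenation to max_len tokens. `tokenizer` is a placeholder for the
    model's actual tokenizer; the training target is "[MASK_0] {answer}"."""
    query = f"question: {question} answer: [MASK_0]"
    inputs = []
    for p in passages:  # one (query, passage) concatenation per retrieved passage
        ids = tokenizer.encode(f"{query} context: {p}")
        inputs.append(ids[:max_len])
    return inputs
```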
Domain                        5-shot           5-shot (multi-task)   Full / Transfer
                              770M 3B 11B      770M 3B 11B           770M 3B 11B
All 38.9 42.3 43.4 42.1 48.7 56.4 56.3 59.9 65.8
Humanities 37.3 40.0 41.9 37.7 46.4 50.0 50.9 53.0 60.3
Social Sciences 41.7 46.8 49.3 47.5 53.7 65.6 66.0 70.8 77.2
STEM 32.3 35.0 33.9 34.4 39.4 46.2 44.8 50.7 53.4
Other 44.9 48.1 48.8 50.4 55.9 66.6 65.5 68.1 74.4
abstract algebra 30.0 27.0 28.0 27.0 31.0 30.0 22.0 27.0 33.0
anatomy 28.9 50.4 45.2 44.4 57.8 64.4 57.8 68.9 69.6
astronomy 55.3 59.9 59.2 52.6 66.4 67.8 69.1 78.3 79.6
business ethics 49.0 51.0 48.0 50.0 62.0 60.0 51.0 70.0 68.0
clinical knowledge 41.9 44.9 40.0 46.8 54.3 64.9 64.2 72.5 74.0
college biology 38.2 45.8 50.0 36.8 52.1 63.2 63.2 72.2 78.5
college chemistry 32.0 29.0 29.0 31.0 33.0 38.0 45.0 39.0 45.0
college computer science 33.0 35.0 30.0 23.0 29.0 30.0 43.0 48.0 47.0
college mathematics 31.0 31.0 28.0 29.0 27.0 34.0 32.0 29.0 36.0
college medicine 31.2 35.8 38.2 50.3 40.5 52.0 60.1 59.5 63.6
college physics 20.6 26.5 31.4 21.6 28.4 39.2 27.5 44.1 42.2
computer security 53.0 50.0 55.0 49.0 61.0 64.0 69.0 71.0 76.0
conceptual physics 34.9 41.7 37.4 40.9 43.4 57.0 53.2 58.3 59.6
econometrics 28.9 21.1 27.2 26.3 25.4 34.2 28.9 37.7 36.8
electrical engineering 26.9 31.7 31.7 38.6 44.1 51.7 61.4 60.7 67.6
elementary mathematics 25.9 28.8 29.4 29.6 30.2 32.8 29.6 35.5 33.9
formal logic 34.9 33.3 33.3 23.0 30.2 29.4 34.1 38.9 34.1
global facts 28.0 34.0 34.0 36.0 40.0 49.0 50.0 49.0 52.0
high school biology 24.8 37.7 27.7 48.7 57.1 66.5 66.5 76.8 81.9
high school chemistry 34.5 31.0 31.0 31.5 36.5 48.3 44.8 52.2 52.2
high school computer science 31.0 39.0 28.0 37.0 42.0 42.0 50.0 59.0 57.0
high school european history 42.4 49.7 53.3 50.9 58.2 69.7 70.9 73.9 80.0
high school geography 38.9 42.4 50.0 46.5 56.6 69.2 74.2 80.8 82.8
high school gov. and pol. 57.5 60.6 60.1 52.9 64.8 76.7 80.8 85.5 91.7
57.5 60.6 60.1 52.9 64.8 76.7 80.8 85.5 91.7 high school macroeconomics 32.8 39.7 44.9 39.0 45.6 57.2 55.1 63.1 66.7 high school mathematics 30.7 33.0 35.6 28.1 27.8 37.0 30.7 34.8 37.0 high school microeconomics 34.5 42.9 45.4 44.1 51.7 68.9 63.4 70.2 81.1 high school physics 18.5 24.5 22.5 25.8 25.8 33.1 27.2 30.5 39.7 high school psychology 52.8 61.1 59.8 56.7 67.2 79.4 76.3 84.0 87.0 high school statistics 39.8 29.6 34.7 27.3 34.7 38.0 37.0 43.1 45.8 high school us history 43.6 49.0 55.9 46.1 57.8 59.8 62.7 72.5 76.5 high school world history 48.1 52.7 59.9 48.1 66.2 65.4 70.0 78.5 79.7 human aging 46.2 44.8 39.5 48.0 55.2 60.1 56.1 68.2 73.1 human sexuality 41.2 43.5 27.5 46.6 51.1 59.5 77.1 72.5 81.7 international law 54.5 57.9 60.3 55.4 72.7 73.6 81.8 82.6 85.1 jurisprudence 38.9 55.6 32.4 53.7 60.2 73.1 76.9 73.1 81.5 logical fallacies 43.6 54.0 57.1 44.2 58.3 70.6 64.4 73.0 76.7 machine learning 36.6 34.8 28.6 31.3 37.5 46.4 36.6 47.3 50.9 management 45.6 51.5 52.4 48.5 52.4 81.6 78.6 75.7 87.4 marketing 59.4 67.1 70.5 66.7 74.4 83.8 83.8 83.3 91.9 medical genetics 50.0 53.0 58.0 56.0 61.0 75.0 68.0 78.0 81.0 miscellaneous 63.0 64.2 68.8 64.0 72.4 84.3 85.4 83.9 90.9 moral disputes 37.0 41.3 41.3 40.8 50.3 60.1 61.9 66.2 73.7 moral scenarios 24.7 24.7 26.5 21.9 26.9 26.6 23.8 23.8 35.8 nutrition 40.9 45.1 45.1 49.0 52.3 67.0 64.7 68.6 76.8 philosophy 48.6 50.5 56.3 49.8 59.2 69.5 70.4 73.0 77.8 prehistory 45.7 50.0 52.8 54.9 64.8 74.4 69.8 75.0 80.6 professional accounting 28.4 33.0 34.0 35.1 34.0 45.7 43.6 46.1 51.8 professional law 32.4 33.5 34.8 30.4 37.6 39.1 41.5 41.5 50.5 professional medicine 29.4 26.1 27.6 34.6 40.8 52.2 47.8 43.4 59.6 professional psychology 37.7 43.0 50.2 45.1 51.0 60.6 59.5 62.4 74.0 public relations 40.0 46.4 44.5 51.8 54.5 66.4 63.6 66.4 68.2 security studies 35.1 33.5 38.8 44.1 39.6 57.6 60.8 61.6 72.2 sociology 45.3 51.2 51.2 52.7 60.2 69.2 74.1 78.6 85.1 us foreign policy 58.0 70.0 73.0 63.0 63.0 74.0 80.0 80.0 83.0 virology 34.3 34.3 32.5 38.0 42.8 45.2 47.6 49.4 53.0 world religions 65.5 69.0 71.9 70.2 82.5 80.1 83.6 83.6 87.1 Table 16: MMLU Test set scores for Atlasfor each model size and each of the 57 domains. 
Domain | 5-shot: 770M 3B 11B | 5-shot (multi-task): 770M 3B 11B | Full / Transfer: 770M 3B 11B
All | 29.2 35.7 36.1 | 26.5 40.0 43.5 | 42.4 50.4 54.0
Humanities | 30.5 35.4 35.5 | 27.3 38.5 41.6 | 41.0 48.6 51.3
Social Sciences | 29.7 38.0 39.4 | 24.8 43.8 48.9 | 48.6 57.8 64.7
STEM | 29.0 31.4 30.8 | 26.5 32.8 35.8 | 33.4 40.6 41.7
Other | 26.7 37.7 38.6 | 27.0 45.0 48.5 | 46.8 55.2 59.1
abstract algebra | 26.0 23.0 21.0 | 29.0 30.0 26.0 | 23.0 29.0 26.0
anatomy | 21.5 40.0 40.7 | 27.4 39.3 45.9 | 35.6 43.7 42.2
astronomy | 37.5 38.8 37.5 | 27.6 39.5 41.4 | 36.2 50.7 55.3
business ethics | 29.0 54.0 42.0 | 26.0 47.0 55.0 | 53.0 64.0 60.0
clinical knowledge | 32.5 33.6 40.0 | 28.7 44.2 47.9 | 45.3 52.8 57.7
college biology | 29.9 34.7 34.0 | 29.9 34.7 40.3 | 38.2 46.5 52.1
college chemistry | 37.0 22.0 32.0 | 20.0 35.0 33.0 | 36.0 34.0 36.0
college computer science | 28.0 35.0 34.0 | 28.0 27.0 36.0 | 31.0 44.0 35.0
college mathematics | 31.0 29.0 27.0 | 22.0 34.0 27.0 | 30.0 33.0 32.0
college medicine | 24.3 34.7 34.1 | 27.2 40.5 40.5 | 35.8 41.6 48.6
college physics | 33.3 23.5 23.5 | 22.5 19.6 26.5 | 22.5 32.4 24.5
computer security | 36.0 42.0 46.0 | 31.0 49.0 52.0 | 50.0 65.0 61.0
conceptual physics | 26.4 35.7 30.2 | 23.4 30.6 32.8 | 34.5 37.4 43.8
econometrics | 26.3 21.9 28.9 | 17.5 19.3 24.6 | 29.8 25.4 29.8
electrical engineering | 31.0 33.1 31.7 | 31.0 31.0 36.6 | 41.4 47.6 51.7
elementary mathematics | 26.2 27.5 28.0 | 27.0 31.2 33.3 | 25.9 31.2 35.5
formal logic | 34.1 34.1 31.7 | 15.1 34.9 31.0 | 31.7 38.1 42.1
global facts | 32.0 30.0 25.0 | 34.0 34.0 27.0 | 28.0 34.0 30.0
high school biology | 22.6 31.9 29.7 | 27.1 41.6 50.0 | 43.5 57.7 60.6
high school chemistry | 27.1 26.6 27.6 | 28.6 31.5 29.1 | 30.5 36.5 38.9
high school computer science | 26.0 32.0 25.0 | 33.0 37.0 45.0 | 45.0 55.0 48.0
high school european history | 34.5 43.0 42.4 | 24.2 60.0 59.4 | 58.2 69.1 76.4
high school geography | 31.3 40.4 36.9 | 24.7 45.5 50.5 | 56.1 66.7 74.2
high school gov. and pol. | 28.0 49.2 51.3 | 19.2 56.0 59.6 | 55.4 70.5 75.6
high school macroeconomics | 25.6 37.7 32.1 | 26.7 42.3 43.6 | 41.0 51.5 56.4
high school mathematics | 35.9 35.2 35.9 | 28.1 26.7 31.1 | 27.8 36.7 31.9
high school microeconomics | 27.3 29.8 36.1 | 20.6 35.7 42.9 | 42.9 50.8 60.5
high school physics | 21.9 25.2 22.5 | 24.5 28.5 29.1 | 27.8 31.1 27.8
high school psychology | 26.1 46.4 51.0 | 24.8 54.3 60.2 | 56.3 67.3 76.1
high school statistics | 27.8 33.3 33.3 | 17.6 30.6 33.8 | 32.9 33.3 37.0
high school us history | 30.4 39.7 45.6 | 27.5 46.1 58.3 | 51.0 63.2 72.5
high school world history | 42.6 50.6 41.8 | 29.1 54.0 64.6 | 66.7 72.2 73.8
human aging | 28.3 37.2 29.6 | 26.0 45.3 46.2 | 46.6 57.0 62.8
human sexuality | 29.8 34.4 41.2 | 25.2 42.0 44.3 | 51.1 58.0 59.5
international law | 57.9 57.9 41.3 | 44.6 57.9 58.7 | 62.8 71.9 71.1
jurisprudence | 30.6 33.3 34.3 | 32.4 49.1 52.8 | 55.6 67.6 74.1
logical fallacies | 40.5 55.8 46.6 | 25.8 51.5 62.0 | 43.6 69.3 71.2
machine learning | 33.0 34.8 36.6 | 29.5 35.7 37.5 | 32.1 37.5 42.9
management | 21.4 29.1 40.8 | 24.3 47.6 50.5 | 60.2 69.9 70.9
marketing | 38.9 58.5 60.7 | 31.2 67.9 75.6 | 69.2 79.9 85.9
medical genetics | 26.0 36.0 36.0 | 29.0 43.0 44.0 | 40.0 54.0 50.0
miscellaneous | 24.5 45.2 46.4 | 27.1 52.2 58.2 | 51.3 64.6 72.7
moral disputes | 32.4 37.3 38.7 | 28.6 43.4 43.4 | 49.7 64.7 64.7
moral scenarios | 24.7 24.7 24.7 | 23.0 23.9 24.7 | 23.8 24.0 23.8
nutrition | 30.1 33.0 34.6 | 25.8 42.5 44.1 | 50.3 55.6 61.1
philosophy | 28.6 32.5 37.3 | 31.2 38.9 45.0 | 44.1 56.6 59.2
prehistory | 33.6 37.0 41.4 | 27.5 39.8 50.6 | 41.0 51.5 57.7
professional accounting | 21.3 28.0 30.5 | 25.9 35.5 34.0 | 37.2 41.5 42.2
professional law | 28.2 33.4 34.0 | 27.6 35.4 35.5 | 38.3 43.0 45.6
professional medicine | 19.5 26.5 24.3 | 20.2 32.0 37.9 | 38.6 40.8 46.0
professional psychology | 27.8 32.8 32.8 | 26.6 39.5 43.6 | 38.4 48.0 58.3
public relations | 22.7 43.6 40.0 | 21.8 47.3 56.4 | 50.0 55.5 60.0
security studies | 37.6 26.1 31.0 | 20.4 34.7 44.1 | 56.3 61.6 66.9
sociology | 43.3 41.8 38.8 | 30.8 45.8 52.7 | 60.2 66.7 72.1
us foreign policy | 49.0 57.0 66.0 | 38.0 56.0 61.0 | 59.0 75.0 76.0
virology | 29.5 26.5 34.3 | 30.1 36.1 39.8 | 44.0 46.4 41.6
world religions | 24.0 40.9 47.4 | 32.7 49.1 57.3 | 48.0 63.7 70.2

Table 17: MMLU test set scores for the T5 closed book baseline for each model size and each of the 57 domains.

Domain | zero-shot | 5-shot | 5-shot (multi-task) | Full / Transfer
All | 47.1 | 47.9 | 56.6 | 66.0
Humanities | 43.6 | 46.1 | 50.1 | 61.1
Social Sciences | 54.1 | 54.6 | 66.4 | 77.2
STEM | 38.0 | 38.8 | 46.4 | 53.2
Other | 53.9 | 52.8 | 66.2 | 74.4
abstract algebra | 22.0 | 26.0 | 31.0 | 31.0
anatomy | 48.9 | 47.4 | 62.2 | 70.4
astronomy | 61.8 | 62.5 | 68.4 | 81.6
business ethics | 60.0 | 57.0 | 62.0 | 70.0
clinical knowledge | 50.6 | 49.4 | 66.4 | 72.8
college biology | 51.4 | 53.5 | 61.1 | 77.8
college chemistry | 36.0 | 39.0 | 39.0 | 45.0
college computer science | 32.0 | 32.0 | 33.0 | 49.0
college mathematics | 30.0 | 35.0 | 35.0 | 34.0
college medicine | 44.5 | 41.0 | 52.6 | 67.6
college physics | 24.5 | 26.5 | 37.3 | 42.2
computer security | 59.0 | 59.0 | 68.0 | 76.0
conceptual physics | 37.0 | 41.3 | 57.0 | 60.0
econometrics | 20.2 | 20.2 | 36.8 | 37.7
electrical engineering | 37.9 | 40.0 | 50.3 | 65.5
elementary mathematics | 31.2 | 28.0 | 30.7 | 36.5
formal logic | 27.8 | 27.0 | 32.5 | 35.7
global facts | 41.0 | 43.0 | 51.0 | 53.0
high school biology | 53.2 | 56.5 | 68.7 | 83.2
high school chemistry | 41.9 | 41.4 | 49.3 | 51.2
high school computer science | 40.0 | 36.0 | 46.0 | 60.0
high school european history | 56.4 | 58.8 | 68.5 | 80.6
high school geography | 57.1 | 59.6 | 71.2 | 81.3
high school gov. and pol. | 67.9 | 67.9 | 77.2 | 90.2
high school macroeconomics | 46.9 | 48.5 | 57.9 | 65.9
high school mathematics | 28.1 | 28.9 | 34.1 | 31.5
high school microeconomics | 51.7 | 51.7 | 68.9 | 82.4
high school physics | 26.5 | 25.8 | 32.5 | 41.1
high school psychology | 66.2 | 65.5 | 78.9 | 86.8
high school statistics | 31.5 | 30.1 | 43.1 | 45.8
high school us history | 57.8 | 54.9 | 64.7 | 77.5
high school world history | 59.1 | 62.9 | 65.4 | 79.3
human aging | 48.4 | 50.7 | 60.5 | 70.4
human sexuality | 55.7 | 54.2 | 61.8 | 84.0
international law | 66.1 | 72.7 | 71.9 | 84.3
jurisprudence | 61.1 | 64.8 | 72.2 | 81.5
logical fallacies | 54.6 | 57.7 | 71.2 | 77.9
machine learning | 37.5 | 39.3 | 43.8 | 44.6
management | 56.3 | 56.3 | 79.6 | 89.3
marketing | 72.2 | 73.1 | 84.6 | 91.9
medical genetics | 55.0 | 58.0 | 71.0 | 81.0
miscellaneous | 69.7 | 67.8 | 83.8 | 90.4
moral disputes | 45.1 | 46.8 | 60.1 | 72.3
moral scenarios | 24.5 | 30.3 | 25.8 | 38.5
nutrition | 56.5 | 53.9 | 67.0 | 77.1
philosophy | 56.3 | 57.6 | 70.7 | 77.2
prehistory | 59.3 | 60.5 | 71.6 | 78.7
professional accounting | 35.1 | 33.0 | 42.2 | 50.7
professional law | 36.3 | 38.4 | 39.4 | 51.7
professional medicine | 35.7 | 33.1 | 52.2 | 60.7
professional psychology | 47.7 | 49.3 | 60.9 | 74.0
public relations | 54.5 | 53.6 | 68.2 | 68.2
security studies | 47.3 | 45.7 | 59.2 | 73.9
sociology | 62.2 | 62.7 | 71.6 | 84.6
us foreign policy | 64.0 | 68.0 | 73.0 | 83.0
virology | 39.8 | 40.4 | 44.6 | 51.8
world religions | 77.2 | 74.9 | 80.7 | 87.1

Table 18: MMLU test set scores for the de-biased Atlas-XXL using cyclic permutations for each of the 57 domains, for the zero-shot, 5-shot, 5-shot multi-task, and transfer settings.

Number of parameters | 220M | 770M | 3B | 11B
Natural Questions 64-shot | 27.0 | 35.4 | 41.3 | 45.1
Natural Questions full | 54.1 | 60.8 | 63.4 | 64.0
TriviaQA 64-shot | 55.3 | 65.0 | 70.2 | 71.4
TriviaQA full | 71.8 | 74.9 | 77.5 | 78.0

Table 19: Impact of model size on question answering data sets. We report exact match performance on the test sets of Natural Questions and TriviaQA (filtered) depending on the number of parameters in the reader module. For these experiments the index contains the December 2018 Wikipedia dump.

A.3 KILT

For the results on KILT reported in Table 10 we fine-tune Atlas individually on each data set. We format the input using a template similar to the one used for question answering: question: {query text} answer: [MASK_0] and train the model to generate the mask token followed by the expected output: [MASK_0] {output} (see the sketch following Table 20). We retrieve 20 passages and generate answers using greedy decoding. In KILT, FEVER is a two-way classification task of claims. We lexicalize the SUPPORTS label into "true" and the REFUTES label into "false".

For few-shot fine-tuning we train Atlas for 30 steps using 64 random samples from the train sets. The retriever is trained using query-side fine-tuning. We evaluate models every 5 steps and select the best one on the development set based on the reported metric. We use AdamW with a batch size of 32 and a learning rate of 4 × 10^-5 with linear decay and 5 iterations of warmup for both the language model and the retriever.

For the fine-tuning on the full data sets, the model is trained for 5k gradient steps. We evaluate models every 500 steps and select the best one on the development set based on the reported metric. The index is refreshed every 500 steps for the first 1,000 iterations, and every 2k steps afterwards. We use AdamW with a batch size of 64 and a learning rate of 4 × 10^-5 with linear decay and 500 iterations of warmup for both the language model and the retriever. We report results on the development sets in Table 20.

Model | AIDA (acc) | FEV (acc) | T-REx (acc) | zsRE (acc) | NQ (em) | HoPo (em) | TQA (em) | WoW (f1)
Atlas 64-shot | 69.0 | 88.1 | 58.5 | 60.2 | 44.2 | 34.1 | 77.1 | 15.4
Atlas full data set | 92.7 | 94.4 | 84.8 | 80.9 | 63.4 | 51.4 | 84.4 | 21.0

Table 20: Downstream results on the KILT dev sets. Downstream metrics are accuracy (AIDA CoNLL-YAGO, FEVER, T-REx, zero-shot RE), exact match (Natural Questions, HotpotQA, TriviaQA), or F1 (Wizard of Wikipedia).
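The sketch below mirrors the KILT formatting described in A.3; the helper name is ours and, as above, [MASK_0] stands for a T5-style sentinel token.

```python
# Hypothetical helper for the KILT template. FEVER labels are lexicalized into
# words so that the two-way claim classification stays a text-generation task.
FEVER_LABELS = {"SUPPORTS": "true", "REFUTES": "false"}

def format_kilt_example(query: str, output: str, task: str) -> tuple:
    if task == "fever":
        output = FEVER_LABELS[output]  # claim label rendered as text
    source = f"question: {query} answer: [MASK_0]"
    target = f"[MASK_0] {output}"
    return source, target
```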
References

M. Bevilacqua, G. Ottaviano, P. Lewis, S. Yih, S. Riedel, and F. Petroni. Autoregressive search engines: Generating substrings as document identifiers. In NeurIPS, 2022. URL https://openreview.net/forum?id=Z4kZxAjg8Y.
S. Borgeaud, A. Mensch, J. Hoffmann, T. Cai, E. Rutherford, K. Millican, G. v. d. Driessche, J.-B. Lespiau, B. Damoc, A. Clark, D. d. L. Casas, A. Guy, J. Menick, R. Ring, T. Hennigan, S. Huang, L. Maggiore, C. Jones, A. Cassirer, A. Brock, M. Paganini, G. Irving, O. Vinyals, S. Osindero, K. Simonyan, J. W. Rae, E. Elsen, and L. Sifre. Improving language models by retrieving from trillions of tokens, 2021. URL https://arxiv.org/abs/2112.04426.
T. B. Brown, B. Mann, N. Ryder, M. Subbiah, J. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, S. Agarwal, A. Herbert-Voss, G. Krueger, T. Henighan, R. Child, A. Ramesh, D. M. Ziegler, J. Wu, C. Winter, C. Hesse, M. Chen, E. Sigler, M. Litwin, S. Gray, B. Chess, J. Clark, C. Berner, S. McCandlish, A. Radford, I. Sutskever, and D. Amodei. Language models are few-shot learners, 2020. URL https://arxiv.org/abs/2005.14165.
N. D. Cao, G. Izacard, S. Riedel, and F. Petroni. Autoregressive entity retrieval. In ICLR, 2021. URL https://openreview.net/forum?id=5k8F6UU39V.
N. Carlini, F. Tramèr, E. Wallace, M. Jagielski, A. Herbert-Voss, K. Lee, A. Roberts, T. B. Brown, D. Song, Úlfar Erlingsson, A. Oprea, and C. Raffel. Extracting training data from large language models. In USENIX Security Symposium, 2021. URL https://www.usenix.org/conference/usenixsecurity21/presentation/carlini-extracting.
D. Chen, A. Fisch, J. Weston, and A. Bordes. Reading Wikipedia to answer open-domain questions. In ACL, 2017. URL https://aclanthology.org/P17-1171.
A. Chowdhery, S. Narang, J. Devlin, M. Bosma, G. Mishra, A. Roberts, P. Barham, H. W. Chung, C. Sutton, S. Gehrmann, P. Schuh, K. Shi, S. Tsvyashchenko, J. Maynez, A. Rao, P. Barnes, Y. Tay, N. Shazeer, V. Prabhakaran, E. Reif, N. Du, B. Hutchinson, R. Pope, J. Bradbury, J. Austin, M. Isard, G. Gur-Ari, P. Yin, T. Duke, A. Levskaya, S. Ghemawat, S. Dev, H. Michalewski, X. Garcia, V. Misra, K. Robinson, L. Fedus, D. Zhou, D. Ippolito, D. Luan, H. Lim, B. Zoph, A. Spiridonov, R. Sepassi, D. Dohan, S. Agrawal, M. Omernick, A. M. Dai, T. S. Pillai, M. Pellat, A. Lewkowycz, E. Moreira, R. Child, O. Polozov, K. Lee, Z. Zhou, X. Wang, B. Saeta, M. Diaz, O. Firat, M. Catasta, J. Wei, K. Meier-Hellstern, D. Eck, J. Dean, S. Petrov, and N. Fiedel. PaLM: Scaling language modeling with pathways, 2022. URL https://arxiv.org/abs/2204.02311.
C. Clark and M. Gardner. Simple and effective multi-paragraph reading comprehension. In ACL, 2018. URL https://aclanthology.org/P18-1078.
P. Clark, I. Cowhey, O. Etzioni, T. Khot, A. Sabharwal, C. Schoenick, and O. Tafjord. Think you have solved question answering? Try ARC, the AI2 reasoning challenge, 2018. URL https://arxiv.org/abs/1803.05457.
J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL, 2019.
B. Dhingra, J. R. Cole, J. M. Eisenschlos, D. Gillick, J. Eisenstein, and W. W. Cohen. Time-aware language models as temporal knowledge bases. TACL, 2022. URL https://aclanthology.org/2022.tacl-1.15.
E. Dinan, S. Roller, K. Shuster, A. Fan, M. Auli, and J. Weston. Wizard of Wikipedia: Knowledge-powered conversational agents. In ICLR, 2019. URL https://openreview.net/forum?id=r1l73iRqKm.
J. Dodge, M. Sap, A. Marasović, W. Agnew, G. Ilharco, D. Groeneveld, M. Mitchell, and M. Gardner. Documenting large webtext corpora: A case study on the colossal clean crawled corpus. In EMNLP, 2021. URL https://aclanthology.org/2021.emnlp-main.98.
H. Elsahar, P. Vougiouklis, A. Remaci, C. Gravier, J. Hare, F. Laforest, and E. Simperl. T-REx: A large scale alignment of natural language with knowledge base triples. In LREC, 2018. URL https://aclanthology.org/L18-1544.
M. Fajcik, M. Docekal, K. Ondrej, and P. Smrz. R2-D2: A modular baseline for open-domain question answering. In Findings of EMNLP, 2021. URL https://aclanthology.org/2021.findings-emnlp.73.
M. Fink. Object classification from a single example utilizing class relevance metrics. In NIPS, 2005. URL https://proceedings.neurips.cc/paper/2004/file/ef1e491a766ce3127556063d49bc2f98-Paper.pdf.
T. Gao, A. Fisch, and D. Chen. Making pre-trained language models better few-shot learners. In ACL-IJCNLP, 2021. URL https://aclanthology.org/2021.acl-long.295.
M. Glass, G. Rossiello, M. F. M. Chowdhury, A. R. Naik, P. Cai, and A. Gliozzo. Re2G: Retrieve, rerank, generate, 2022. URL https://arxiv.org/abs/2207.06300.
E. Grave, M. Cisse, and A. Joulin. Unbounded cache model for online language modeling with open vocabulary, 2017a. URL https://arxiv.org/abs/1711.02604.
E. Grave, A. Joulin, and N. Usunier. Improving neural language models with a continuous cache. In ICLR, 2017b. URL https://openreview.net/forum?id=B184E5qee.
K. Guu, K. Lee, Z. Tung, P. Pasupat, and M.-W. Chang. REALM: Retrieval-augmented language model pre-training. arXiv:2002.08909, 2020. URL https://arxiv.org/abs/2002.08909.
K. He, H. Fan, Y. Wu, S. Xie, and R. Girshick. Momentum contrast for unsupervised visual representation learning. In CVPR, 2020.
D. Hendrycks, C. Burns, S. Basart, A. Zou, M. Mazeika, D. Song, and J. Steinhardt. Measuring massive multitask language understanding. In ICLR, 2021. URL https://openreview.net/forum?id=d7KBjmI3GmQ.
J. Hoffart, M. A. Yosef, I. Bordino, H. Fürstenau, M. Pinkal, M. Spaniol, B. Taneva, S. Thater, and G. Weikum. Robust disambiguation of named entities in text. In EMNLP, 2011. URL https://aclanthology.org/D11-1072.
J. Hoffmann, S. Borgeaud, A. Mensch, E. Buchatskaya, T. Cai, E. Rutherford, D. d. L. Casas, L. A. Hendricks, J. Welbl, A. Clark, T. Hennigan, E. Noland, K. Millican, G. v. d. Driessche, B. Damoc, A. Guy, S. Osindero, K. Simonyan, E. Elsen, J. W. Rae, O. Vinyals, and L. Sifre. Training compute-optimal large language models, 2022. URL https://arxiv.org/abs/2203.15556.
S. Hofstätter, J. Chen, K. Raman, and H. Zamani. Multi-task retrieval-augmented text generation with relevance sampling, 2022. URL https://arxiv.org/abs/2207.03030.
P.-S. Huang, X. He, J. Gao, L. Deng, A. Acero, and L. Heck. Learning deep structured semantic models for web search using clickthrough data. In CIKM, 2013.
G. Izacard and E. Grave. Leveraging passage retrieval with generative models for open domain question answering. In EACL, 2021a. URL https://aclanthology.org/2021.eacl-main.74.
G. Izacard and E. Grave. Distilling knowledge from reader to retriever for question answering. In ICLR, 2021b. URL https://openreview.net/forum?id=NTEz-6wysdb.
G. Izacard, F. Petroni, L. Hosseini, N. De Cao, S. Riedel, and E. Grave. A memory efficient baseline for open domain question answering, 2020. URL https://arxiv.org/abs/2012.15156.
G. Izacard, M. Caron, L. Hosseini, S. Riedel, P. Bojanowski, A. Joulin, and E. Grave. Unsupervised dense information retrieval with contrastive learning. TMLR, 2022. URL https://openreview.net/forum?id=jKN1pXi7b0.
H. Jégou, M. Douze, and C. Schmid. Product quantization for nearest neighbor search. IEEE TPAMI, 2010.
Z. Jiang, F. F. Xu, J. Araki, and G. Neubig. How can we know what language models know? TACL, 2020. URL https://aclanthology.org/2020.tacl-1.28.
K. S. Jones. A statistical interpretation of term specificity and its application in retrieval. Journal of Documentation, 1972.
M. Joshi, E. Choi, D. Weld, and L. Zettlemoyer. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. In ACL, 2017. URL https://aclanthology.org/P17-1147.
J. Kaplan, S. McCandlish, T. Henighan, T. B. Brown, B. Chess, R. Child, S. Gray, A. Radford, J. Wu, and D. Amodei. Scaling laws for neural language models, 2020. URL https://arxiv.org/abs/2001.08361.
V. Karpukhin, B. Oğuz, S. Min, L. Wu, S. Edunov, D. Chen, and W.-t. Yih. Dense passage retrieval for open-domain question answering. arXiv:2004.04906, 2020. URL https://arxiv.org/abs/2004.04906.
U. Khandelwal, O. Levy, D. Jurafsky, L. Zettlemoyer, and M. Lewis. Generalization through memorization: Nearest neighbor language models. In ICLR, 2020. URL https://openreview.net/forum?id=HklBjCEKvH.
D. Khashabi, S. Min, T. Khot, A. Sabharwal, O. Tafjord, P. Clark, and H. Hajishirzi. UNIFIEDQA: Crossing format boundaries with a single QA system. In Findings of EMNLP, 2020. URL https://aclanthology.org/2020.findings-emnlp.171.
A. Krishna, S. Riedel, and A. Vlachos. ProoFVer: Natural logic theorem proving for fact verification. TACL, 2022. URL https://aclanthology.org/2022.tacl-1.59.
T. Kwiatkowski, J. Palomaki, O. Redfield, M. Collins, A. Parikh, C. Alberti, D. Epstein, I. Polosukhin, J. Devlin, K. Lee, K. Toutanova, L. Jones, M. Kelcey, M.-W. Chang, A. M. Dai, J. Uszkoreit, Q. Le, and S. Petrov. Natural questions: A benchmark for question answering research. TACL, 2019. URL https://aclanthology.org/Q19-1026.
G. Lai, Q. Xie, H. Liu, Y. Yang, and E. Hovy. RACE: Large-scale ReAding comprehension dataset from examinations. In EMNLP, 2017. URL https://aclanthology.org/D17-1082.
A. Lazaridou, E. Gribovskaya, W. Stokowiec, and N. Grigorev. Internet-augmented language models through few-shot prompting for open-domain question answering, 2022. URL https://arxiv.org/abs/2203.05115.
T. Le Scao and A. Rush. How many data points is a prompt worth? In NAACL-HLT, 2021. URL https://aclanthology.org/2021.naacl-main.208.
H. Lee, A. Kedia, J. Lee, A. Paranjape, C. D. Manning, and K.-G. Woo. You only need one model for open-domain question answering, 2021a. URL https://arxiv.org/abs/2112.07381.
J. Lee, M. Sung, J. Kang, and D. Chen. Learning dense representations of phrases at scale. In ACL-IJCNLP, 2021b. URL https://aclanthology.org/2021.acl-long.518.
K. Lee, M.-W. Chang, and K. Toutanova. Latent retrieval for weakly supervised open domain question answering. In ACL, 2019. URL https://aclanthology.org/P19-1612.
B. Lester, R. Al-Rfou, and N. Constant. The power of scale for parameter-efficient prompt tuning. In EMNLP, 2021. URL https://aclanthology.org/2021.emnlp-main.243.
O. Levy, M. Seo, E. Choi, and L. Zettlemoyer. Zero-shot relation extraction via reading comprehension. In CoNLL, 2017. URL https://aclanthology.org/K17-1034.
P. Lewis, E. Perez, A. Piktus, F. Petroni, V. Karpukhin, N. Goyal, H. Küttler, M. Lewis, W.-t. Yih, T. Rocktäschel, S. Riedel, and D. Kiela. Retrieval-augmented generation for knowledge-intensive NLP tasks. In NeurIPS, 2020. URL https://proceedings.neurips.cc/paper_files/paper/2020/file/6b493230205f780e1bc26945df7481e5-Paper.pdf.
X. L. Li and P. Liang. Prefix-tuning: Optimizing continuous prompts for generation. In ACL-IJCNLP, 2021. URL https://aclanthology.org/2021.acl-long.353.
O. Lieber, O. Sharir, B. Lenz, and Y. Shoham. Jurassic-1: Technical details and evaluation. Technical report, AI21 Labs, 2021.
R. L. Logan IV, I. Balažević, E. Wallace, F. Petroni, S. Singh, and S. Riedel. Cutting down on prompts and parameters: Simple few-shot learning with language models, 2021. URL https://arxiv.org/abs/2106.13353.
T. Mihaylov, P. Clark, T. Khot, and A. Sabharwal. Can a suit of armor conduct electricity? A new dataset for open book question answering. In EMNLP, 2018. URL https://aclanthology.org/D18-1260.
S. Min, J. Michael, H. Hajishirzi, and L. Zettlemoyer. AmbigQA: Answering ambiguous open-domain questions. In EMNLP, 2020. URL https://aclanthology.org/2020.emnlp-main.466.
R. Nakano, J. Hilton, S. A. Balaji, J. Wu, L. Ouyang, C. Kim, C. Hesse, S. Jain, V. Kosaraju, W. Saunders, X. Jiang, K. Cobbe, T. Eloundou, G. Krueger, K. Button, M. Knight, B. Chess, and J. Schulman. WebGPT: Browser-assisted question-answering with human feedback, 2021. URL https://arxiv.org/abs/2112.09332.
A. Paranjape, O. Khattab, C. Potts, M. Zaharia, and C. D. Manning. Hindsight: Posterior-guided training of retrievers for improved open-ended generation, 2021. URL https://arxiv.org/abs/2110.07752.
F. Petroni, P. Lewis, A. Piktus, T. Rocktäschel, Y. Wu, A. H. Miller, and S. Riedel. How context affects language models' factual predictions. arXiv:2005.04611, 2020. URL https://arxiv.org/abs/2005.04611.
F. Petroni, S. Broscheit, A. Piktus, P. Lewis, G. Izacard, L. Hosseini, J. Dwivedi-Yu, M. Lomeli, T. Schick, P.-E. Mazaré, A. Joulin, E. Grave, and S. Riedel. Improving Wikipedia verifiability with AI, 2022. URL https://arxiv.org/abs/2207.06220.
A. Piktus, F. Petroni, V. Karpukhin, D. Okhonko, S. Broscheit, G. Izacard, P. Lewis, B. Oğuz, E. Grave, W.-t. Yih, and S. Riedel. The web is your oyster - knowledge-intensive NLP against a very large web corpus, 2021. URL https://arxiv.org/abs/2112.09924.
A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever. Language models are unsupervised multitask learners. OpenAI Technical Report, 2019.
J. W. Rae, S. Borgeaud, T. Cai, K. Millican, J. Hoffmann, F. Song, J. Aslanides, S. Henderson, R. Ring, S. Young, E. Rutherford, T. Hennigan, J. Menick, A. Cassirer, R. Powell, G. v. d. Driessche, L. A. Hendricks, M. Rauh, P.-S. Huang, A. Glaese, J. Welbl, S. Dathathri, S. Huang, J. Uesato, J. Mellor, I. Higgins, A. Creswell, N. McAleese, A. Wu, E. Elsen, S. Jayakumar, E. Buchatskaya, D. Budden, E. Sutherland, K. Simonyan, M. Paganini, L. Sifre, L. Martens, X. L. Li, A. Kuncoro, A. Nematzadeh, E. Gribovskaya, D. Donato, A. Lazaridou, A. Mensch, J.-B. Lespiau, M. Tsimpoukelli, N. Grigorev, D. Fritz, T. Sottiaux, M. Pajarskas, T. Pohlen, Z. Gong, D. Toyama, C. d. M. d'Autume, Y. Li, T. Terzi, V. Mikulik, I. Babuschkin, A. Clark, D. d. L. Casas, A. Guy, C. Jones, J. Bradbury, M. Johnson, B. Hechtman, L. Weidinger, I. Gabriel, W. Isaac, E. Lockhart, S. Osindero, L. Rimell, C. Dyer, O. Vinyals, K. Ayoub, J. Stanway, L. Bennett, D. Hassabis, K. Kavukcuoglu, and G. Irving. Scaling language models: Methods, analysis & insights from training Gopher, 2021. URL https://arxiv.org/abs/2112.11446.
C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li, and P. J. Liu. Exploring the limits of transfer learning with a unified text-to-text transformer, 2019. URL https://arxiv.org/abs/1910.10683.
P. Rajpurkar, J. Zhang, K. Lopyrev, and P. Liang. SQuAD: 100,000+ questions for machine comprehension of text. In EMNLP, 2016. URL https://aclanthology.org/D16-1264.
O. Ram, G. Shachaf, O. Levy, J. Berant, and A. Globerson. Learning to retrieve passages without supervision. In NAACL-HLT, 2022. URL https://aclanthology.org/2022.naacl-main.193.
M. Richardson, C. J. Burges, and E. Renshaw. MCTest: A challenge dataset for the open-domain machine comprehension of text. In EMNLP, 2013. URL https://aclanthology.org/D13-1020.
S. E. Robertson, S. Walker, S. Jones, M. M. Hancock-Beaulieu, M. Gatford, et al. Okapi at TREC-3. NIST Special Publication Sp, 1995.
D. S. Sachan, S. Reddy, W. Hamilton, C. Dyer, and D. Yogatama. End-to-end training of multi-document reader and retriever for open-domain question answering, 2021. URL https://arxiv.org/abs/2106.05346.
T. Schick and H. Schütze. It's not just size that matters: Small language models are also few-shot learners, 2021. URL https://arxiv.org/abs/2009.07118.
T. Schick and H. Schütze. Exploiting cloze-questions for few-shot text classification and natural language inference. In EACL, 2021a. URL https://aclanthology.org/2021.eacl-main.20.
T. Schick and H. Schütze. Few-shot text generation with natural language instructions. In EMNLP, 2021b. URL https://aclanthology.org/2021.emnlp-main.32.
Y. Shen, X. He, J. Gao, L. Deng, and G. Mesnil. Learning semantic representations using convolutional neural networks for web search. In WWW, 2014.
T. Shin, Y. Razeghi, R. L. Logan IV, E. Wallace, and S. Singh. AutoPrompt: Eliciting knowledge from language models with automatically generated prompts. In EMNLP, 2020. URL https://aclanthology.org/2020.emnlp-main.346.
K. Shuster, S. Poff, M. Chen, D. Kiela, and J. Weston. Retrieval augmentation reduces hallucination in conversation, 2021. URL https://arxiv.org/abs/2104.07567.
K. Shuster, M. Komeili, L. Adolphs, S. Roller, A. D. Szlam, and J. Weston. Language models that seek for knowledge: Modular search & generation for dialogue and prompt completion, 2022. URL https://arxiv.org/abs/2203.13224.
S. Smith, M. Patwary, B. Norick, P. LeGresley, S. Rajbhandari, J. Casper, Z. Liu, S. Prabhumoye, G. Zerveas, V. Korthikanti, E. Zhang, R. Child, R. Y. Aminabadi, J. Bernauer, X. Song, M. Shoeybi, Y. He, M. Houston, S. Tiwary, and B. Catanzaro. Using DeepSpeed and Megatron to train Megatron-Turing NLG 530B, a large-scale generative language model, 2022. URL https://arxiv.org/abs/2201.11990.
D. Tam, R. R. Menon, M. Bansal, S. Srivastava, and C. Raffel. Improving and simplifying pattern exploiting training, 2021. URL https://arxiv.org/abs/2103.11955.
R. Thoppilan, D. De Freitas, J. Hall, N. Shazeer, A. Kulshreshtha, H.-T. Cheng, A. Jin, T. Bos, L. Baker, Y. Du, Y. Li, H. Lee, H. S. Zheng, A. Ghafouri, M. Menegali, Y. Huang, M. Krikun, D. Lepikhin, J. Qin, D. Chen, Y. Xu, Z. Chen, A. Roberts, M. Bosma, V. Zhao, Y. Zhou, C.-C. Chang, I. Krivokon, W. Rusch, M. Pickett, P. Srinivasan, L. Man, K. Meier-Hellstern, M. R. Morris, T. Doshi, R. D. Santos, T. Duke, J. Soraker, B. Zevenbergen, V. Prabhakaran, M. Diaz, B. Hutchinson, K. Olson, A. Molina, E. Hoffman-John, J. Lee, L. Aroyo, R. Rajakumar, A. Butryna, M. Lamm, V. Kuzmina, J. Fenton, A. Cohen, R. Bernstein, R. Kurzweil, B. Aguera-Arcas, C. Cui, M. Croak, E. Chi, and Q. Le. LaMDA: Language models for dialog applications, 2022. URL https://arxiv.org/abs/2201.08239.
J. Thorne, A. Vlachos, C. Christodoulopoulos, and A. Mittal. FEVER: a large-scale dataset for fact extraction and VERification. In NAACL-HLT, 2018. URL https://aclanthology.org/N18-1074.
S. Thrun and L. Pratt. Learning to Learn: Introduction and Overview, page 3-17. Kluwer Academic Publishers, USA, 1998. ISBN 0792380479.
O. Vinyals, C. Blundell, T. Lillicrap, K. Kavukcuoglu, and D. Wierstra. Matching networks for one shot learning. In NIPS, 2016. URL https://proceedings.neurips.cc/paper/2016/file/90e1357833654983612fb05e3ec9148c-Paper.pdf.
E. M. Voorhees. The TREC-8 question answering track report. In LREC, 1999. URL http://www.lrec-conf.org/proceedings/lrec2000/pdf/26.pdf.
Z. Wang, P. Ng, X. Ma, R. Nallapati, and B. Xiang. Multi-passage BERT: A globally normalized BERT model for open-domain question answering. In EMNLP-IJCNLP, 2019. URL https://aclanthology.org/D19-1599.
G. Wenzek, M.-A. Lachaux, A. Conneau, V. Chaudhary, F. Guzmán, A. Joulin, and E. Grave. CCNet: Extracting high quality monolingual datasets from web crawl data. In LREC, 2020. URL https://aclanthology.org/2020.lrec-1.494.
L. Xiong, C. Xiong, Y. Li, K.-F. Tang, J. Liu, P. N. Bennett, J. Ahmed, and A. Overwijk. Approximate nearest neighbor negative contrastive learning for dense text retrieval. In ICLR, 2021. URL https://openreview.net/forum?id=zeFrfgyZln.
Z. Yang, P. Qi, S. Zhang, Y. Bengio, W. W. Cohen, R. Salakhutdinov, and C. D. Manning. HotpotQA: A dataset for diverse, explainable multi-hop question answering, 2018. URL https://arxiv.org/abs/1809.09600.
A. Yates, R. Nogueira, and J. Lin. Pretrained transformers for text ranking: BERT and beyond. In WSDM, 2021.
W.-t. Yih, K. Toutanova, J. C. Platt, and C. Meek. Learning discriminative projections for text similarity measures. In CoNLL, 2011. URL https://aclanthology.org/W11-0329.
D. Yogatama, C. de Masson d'Autume, and L. Kong. Adaptive semiparametric language models. TACL, 2021. URL https://aclanthology.org/2021.tacl-1.22.
2403.03950.pdf

Stop Regressing: Training Value Functions via Classification for Scalable Deep RL

Jesse Farebrother1,2,*, Jordi Orbay1,†, Quan Vuong1,†, Adrien Ali Taïga1,†, Yevgen Chebotar1, Ted Xiao1, Alex Irpan1, Sergey Levine1, Pablo Samuel Castro1,3,†, Aleksandra Faust1, Aviral Kumar1,†, Rishabh Agarwal1,3,*
*Equal Contribution, †Core Contribution, 1Google DeepMind, 2Mila, McGill University, 3Mila, Université de Montréal

Value functions are a central component of deep reinforcement learning (RL). These functions, parameterized by neural networks, are trained using a mean squared error regression objective to match bootstrapped target values. However, scaling value-based RL methods that use regression to large networks, such as high-capacity Transformers, has proven challenging. This difficulty is in stark contrast to supervised learning: by leveraging a cross-entropy classification loss, supervised methods have scaled reliably to massive networks. Observing this discrepancy, in this paper, we investigate whether the scalability of deep RL can also be improved simply by using classification in place of regression for training value functions. We demonstrate that value functions trained with categorical cross-entropy significantly improve performance and scalability in a variety of domains. These include: single-task RL on Atari 2600 games with SoftMoEs, multi-task RL on Atari with large-scale ResNets, robotic manipulation with Q-transformers, playing Chess without search, and a language-agent Wordle task with high-capacity Transformers, achieving state-of-the-art results on these domains. Through careful analysis, we show that the benefits of categorical cross-entropy primarily stem from its ability to mitigate issues inherent to value-based RL, such as noisy targets and non-stationarity. Overall, we argue that a simple shift to training value functions with categorical cross-entropy can yield substantial improvements in the scalability of deep RL at little-to-no cost.

1. Introduction

A clear pattern emerges in deep learning breakthroughs, from AlexNet (Krizhevsky et al., 2012) to Transformers (Vaswani et al., 2017): classification problems seem to be particularly amenable to effective training with large neural networks. Even in scenarios where a regression approach appears natural, framing the problem instead as a classification problem often improves performance (Torgo and Gama, 1996; Rothe et al., 2018; Rogez et al., 2019). This involves converting real-valued targets into categorical labels and minimizing categorical cross-entropy rather than the mean-squared error. Several hypotheses have been put forward to explain the superiority of this approach, including stable gradients (Imani and White, 2018; Imani et al., 2024), better representations (Zhang et al., 2023), implicit bias (Stewart et al., 2023), and dealing with imbalanced data (Pintea et al., 2023), suggesting their potential utility beyond supervised regression.

Unlike trends in supervised learning, value-based reinforcement learning (RL) methods primarily rely on regression. For example, deep RL methods such as deep Q-learning (Mnih et al., 2015) and actor-critic (Mnih et al., 2016) use a regression loss, such as mean-squared error, to train a value function from continuous scalar targets. While these value-based deep RL methods, powered by regression losses, have led to high-profile results (Silver et al., 2017), it has been challenging to scale them up to large networks, such as high-capacity transformers.
This lack of scalability has been attributed to several issues (Kumar et al., 2021, 2022; Agarwal et al., 2021; Lyle et al., 2022; Le Lan et al., 2023; Obando-Ceron et al., 2024), but what if simply reframing the regression problem as classification could enable the same level of scalability achieved in supervised learning?

Corresponding author(s): [email protected], [email protected], [email protected]

Figure 1 | Performance gains from the HL-Gauss cross-entropy loss (Section 3.1) over the MSE regression loss for training value networks with modern architectures, including MoEs (Section 4.2.1), ResNets (Section 4.2), and Transformers (Section 4.3). The x-axis labels correspond to domain name, with training method in brackets. (Panels: single-task RL with SoftMoE, 8 experts: +29% on online Atari (DQN); training generalist policies with ResNet-101: +82% on offline multi-game Atari (Scaled QL) and +115% on online multi-task Atari (IMPALA); scaling beyond Atari with high-capacity Transformers: +43% on the Wordle language agent (CQL), +67% on robotic manipulation (Q-Transformer), and +70% on Chess (Q-function distillation).) For multi-task RL results, we report gains with the ResNet-101 backbone, the largest network in our experiments. For Chess, we report the improvement in performance gap relative to the teacher Stockfish engine, for the 270M transformer. For Wordle, we report results with behavior regularization of 0.1.

In this paper, we perform an extensive study to answer this question by assessing the efficacy of various methods for deriving classification labels for training a value-function with a categorical cross-entropy loss. Our findings reveal that training value-functions with cross-entropy substantially improves the performance, robustness, and scalability of deep RL methods (Figure 1) compared to traditional regression-based approaches. The most notable method (HL-Gauss; Imani and White, 2018) leads to consistently 30% better performance when scaling parameters with Mixture-of-Experts in single-task RL on Atari (Obando-Ceron et al., 2024); 1.8-2.1x performance in multi-task setups on Atari (Kumar et al., 2023; Ali Taïga et al., 2023); 40% better performance in the language-agent task of Wordle (Snell et al., 2023); 70% improvement for playing chess without search (Ruoss et al., 2024); and 67% better performance on large-scale robotic manipulation with transformers (Chebotar et al., 2023). The consistent trend across diverse domains, network architectures, and algorithms highlights the substantial benefits of treating regression as classification in deep RL, underscoring its potential as a pivotal component as we move towards scaling up value-based RL.

With strong empirical results to support the use of cross-entropy as a drop-in replacement for the mean squared error (MSE) regression loss in deep RL, we also attempt to understand the source of these empirical gains. Based on careful diagnostic experiments, we show that the categorical cross-entropy loss offers a number of benefits over mean-squared regression. Our analysis suggests that the categorical cross-entropy loss mitigates several issues inherent to deep RL, including robustness to noisy targets and allowing the network to better use its capacity to fit non-stationary targets. These findings not only help explain the strong empirical advantages of categorical cross-entropy in deep RL but also provide insight into developing more effective learning algorithms for the field.
2. Preliminaries and Background

Regression as classification. We take a probabilistic view on regression: given input $x$ we seek to model the target as a conditional distribution $Y \mid x \sim \mathcal{N}(\mu = \hat{y}(x; \theta), \sigma^2)$ for some fixed variance $\sigma^2$ and predictor function $\hat{y} : \mathcal{X} \to \mathbb{R}$ parameterized by the vector $\theta$. The maximum likelihood estimator for data $\{x_i, y_i\}_{i=1}^{N}$ is characterized by the mean-squared error (MSE) objective,

$$\min_{\theta} \sum_{i=1}^{N} \big( y_i - \hat{y}(x_i; \theta) \big)^2,$$

with the optimal predictor being $\hat{y}(x; \theta^{*}) = \mathbb{E}[Y \mid x]$.

Figure 2 | Regression as Classification. Data points $x$ are transformed by a neural network to produce a categorical distribution via a softmax. The prediction $\hat{y}$ is taken to be the expectation of this categorical distribution. The logits of the network are reinforced by gradient descent on the cross-entropy loss with respect to a target distribution whose mean is the regression target $y$. Figure 3 depicts three methods for constructing and projecting the target distribution in RL.

Instead of learning the mean of the conditional distribution directly, an alternate approach is to learn a distribution over the target value, and then recover the prediction as a statistic of the distribution. To this end, we will construct the target distribution $Y \mid x$ with probability density function $p(y \mid x)$ such that our scalar target can be recovered as the mean of this distribution, $y = \mathbb{E}_{p}[Y \mid x]$. We can now frame the regression problem as learning a parameterized distribution $\hat{p}(y \mid x; \theta)$ that minimizes the KL divergence to the target $p(y \mid x)$,

$$\min_{\theta} \; -\sum_{i=1}^{N} \int_{\mathcal{Y}} p(y \mid x_i) \log \hat{p}(y \mid x_i; \theta) \, dy \qquad (2.1)$$

which is the cross-entropy objective. Finally, our prediction can be recovered as $\hat{y}(x; \theta) = \mathbb{E}_{\hat{p}}[Y \mid x; \theta]$.

Given this new problem formulation, in order to transform the distribution learning problem into a tractable loss we restrict $\hat{p}$ to the set of categorical distributions supported on $[v_{\min}, v_{\max}]$ with $m$ evenly spaced locations or classes, $v_{\min} \le z_1 < \cdots < z_m \le v_{\max}$, defined as

$$\mathcal{Z} = \Big\{ \sum_{i=1}^{m} p_i \, \delta_{z_i} \; : \; p_i \ge 0, \; \sum_{i=1}^{m} p_i = 1 \Big\}, \qquad (2.2)$$

where $p_i$ is the probability associated with location $z_i$ and $\delta_{z_i}$ is the Dirac delta function at location $z_i$. The final hurdle is to define a procedure to construct the target distribution $Y \mid x$ and its associated projection onto the set of categorical distributions $\mathcal{Z}$. We defer this discussion to Section 3, where we discuss various methods for performing these steps in the context of RL.
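The following is a minimal sketch of this regression-as-classification recipe (the function names are ours; any projection from Section 3 can supply the target probabilities):

```python
import numpy as np

def bin_centers(v_min: float, v_max: float, m: int) -> np.ndarray:
    return np.linspace(v_min, v_max, m)  # evenly spaced locations z_1..z_m

def log_softmax(logits: np.ndarray) -> np.ndarray:
    shifted = logits - logits.max()
    return shifted - np.log(np.exp(shifted).sum())

def categorical_mean(logits: np.ndarray, centers: np.ndarray) -> float:
    # The prediction is the expectation of the categorical distribution, E[Y | x].
    return float(np.exp(log_softmax(logits)) @ centers)

def cross_entropy(logits: np.ndarray, target_probs: np.ndarray) -> float:
    # Eq. (2.1) restricted to the categorical family Z.
    return float(-(target_probs * log_softmax(logits)).sum())
```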
Reinforcement Learning (RL). We consider the reinforcement learning (RL) problem where an agent interacts with an environment by taking an action $A_t \in \mathcal{A}$ in the current state $S_t \in \mathcal{S}$ and is subsequently prescribed a reward $R_{t+1}$ before transitioning to the next state $S_{t+1} \in \mathcal{S}$ according to the environment transition probabilities. The return numerically describes the quality of a sequence of actions as the cumulative discounted sum of rewards $G_t = \sum_{k=0}^{\infty} \gamma^k R_{t+k+1}$, where $\gamma \in [0, 1)$ is the discount factor. The agent's goal is to learn the policy $\pi : \mathcal{S} \to \mathscr{P}(\mathcal{A})$ that maximizes the expected return. The action-value function allows us to query the expected return from taking action $a$ in state $s$ and following policy $\pi$ thereafter: $Q^{\pi}(s, a) = \mathbb{E}_{\pi}[G_t \mid S_t = s, A_t = a]$.

Deep Q-Networks (DQN; Mnih et al., 2015) proposes to learn the approximately optimal state-action value function $Q(s, a; \theta) \approx Q^{*}(s, a)$ with a neural network parameterized by $\theta$. Specifically, DQN minimizes the mean-squared temporal difference (TD) error from transitions $(s_t, a_t, r_{t+1}, s_{t+1})$ sampled from dataset $\mathcal{D}$,

$$\mathrm{TD}_{\mathrm{MSE}}(\theta) = \mathbb{E}_{\mathcal{D}}\Big[ \big( (\widehat{\mathcal{T}} Q)(s, a; \theta^{-}) - Q(s, a; \theta) \big)^2 \Big] \qquad (2.3)$$

where $\theta^{-}$ is a slow moving copy of the parameters $\theta$ that parameterize the target network and

$$(\widehat{\mathcal{T}} Q)(s, a; \theta^{-}) = r_{t+1} + \gamma \max_{a'} Q(s_{t+1}, a'; \theta^{-})$$

is the sample version of the Bellman optimality operator which defines our scalar regression target. Most deep RL algorithms that learn value functions use variations of this basic recipe, notably regressing to predictions obtained from a target value network.

In addition to the standard online RL problem setting, we also explore the offline RL setting where we train agents using a fixed dataset of environment interactions (Agarwal et al., 2020; Levine et al., 2020). One widely-used offline RL method is CQL (Kumar et al., 2020) that jointly optimizes the TD error with a behavior regularization loss with strength $\alpha$, using the following training objective:

$$\min_{\theta} \; \alpha \Big( \mathbb{E}_{\mathcal{D}}\Big[ \log \Big( \sum_{a'} \exp\big( Q(s_{t+1}, a'; \theta) \big) \Big) \Big] - \mathbb{E}_{\mathcal{D}}\big[ Q(s, a; \theta) \big] \Big) + \mathrm{TD}_{\mathrm{MSE}}(\theta). \qquad (2.4)$$

This work aims to replace the fundamental mean-squared TD-error objective with a classification-style cross-entropy loss for both value-based and actor-critic methods, in both offline and online domains.

3. Value-Based RL with Classification

In this section, we describe our approach to cast the regression problem appearing in TD-learning as a classification problem. Concretely, instead of minimizing the squared distance between the scalar Q-value and its TD target (Equation 2.3), we will instead minimize the distance between categorical distributions representing these quantities. To employ this approach, we will first define the categorical representation for the action-value function $Q(s, a)$.

Categorical Representation. We choose to represent $Q$ as the expected value of a categorical distribution $Z \in \mathcal{Z}$. This distribution is parameterized by probabilities $\hat{p}_i(s, a; \theta)$ for each location or class $z_i$, which are derived from the logits $l_i(s, a; \theta)$ through the softmax function:

$$Q(s, a; \theta) = \mathbb{E}\big[ Z(s, a; \theta) \big], \qquad Z(s, a; \theta) = \sum_{i=1}^{m} \hat{p}_i(s, a; \theta) \, \delta_{z_i}, \qquad \hat{p}_i(s, a; \theta) = \frac{\exp\big( l_i(s, a; \theta) \big)}{\sum_{j=1}^{m} \exp\big( l_j(s, a; \theta) \big)}.$$

To employ the cross-entropy loss (Equation 2.1) for TD learning, it is necessary that the target distribution is also a categorical distribution, supported on the same locations $z_1, \ldots, z_m$. This allows for the direct computation of the cross-entropy loss as:

$$\mathrm{TD}_{\mathrm{CE}}(\theta) = \mathbb{E}_{\mathcal{D}}\Big[ -\sum_{i=1}^{m} p_i(s, a; \theta^{-}) \log \hat{p}_i(s, a; \theta) \Big], \qquad (3.1)$$

where the target probabilities are defined such that $\sum_{i=1}^{m} p_i(s, a; \theta^{-}) \, z_i \approx (\widehat{\mathcal{T}} Q)(s, a; \theta^{-})$. In the subsequent sections, we explore two strategies for obtaining the target probabilities $p_i(s, a; \theta^{-})$.

Figure 3 | Visualizing the target-value categorical distribution in cross-entropy based TD learning (panels: Two-Hot, HL-Gauss, Categorical Distributional RL). While Two-Hot (left; Section 3.1) puts probability mass on exactly two locations, HL-Gauss (middle; Section 3.1) distributes the probability mass to neighbouring locations (which is akin to smoothing the target value). CDRL (right; Section 3.2) models the categorical return distribution, distributing probability mass proportionally to neighboring locations.
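A minimal sketch of building the categorical TD target for Equation (3.1) is shown below; `project` stands in for any of the scalar-to-categorical mappings of Section 3 (Two-Hot, HL-Gauss), and the function names are ours.

```python
import numpy as np

def softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def td_target_probs(reward: float, gamma: float, next_logits: np.ndarray,
                    centers: np.ndarray, project) -> np.ndarray:
    # next_logits: (num_actions, m) target-network logits at s_{t+1}.
    next_q = softmax(next_logits) @ centers        # scalar Q(s', a') per action
    scalar_target = reward + gamma * next_q.max()  # sample Bellman optimality backup
    return project(scalar_target, centers)         # categorical target (p_1..p_m)
```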
3.1. Constructing Categorical Distributions from Scalars

The first set of methods we outline will project the scalar target $(\widehat{\mathcal{T}} Q)(s, a; \theta^{-})$ onto the categorical distribution supported on $\{z_i\}_{i=1}^{m}$. A prevalent but naïve approach for the projection step involves discretizing the scalar into one of $m$ bins where $z_i$ represents the center of the bin. The resulting one-hot distribution is lossy and induces errors in the Q-function. These errors would compound as more Bellman backups are performed, resulting in more biased estimates, and likely worse performance. To combat this, we first consider the two-hot approach (Schrittwieser et al., 2020) that represents a scalar target exactly via a unique categorical distribution that puts non-zero densities on the two locations that the target lies between (see Figure 3; Left).

A Two-Hot Categorical Distribution. Let $z_i$ and $z_{i+1}$ be the locations which lower and upper-bound the TD target, $z_i \le (\widehat{\mathcal{T}} Q)(s, a; \theta^{-}) \le z_{i+1}$. Then the probabilities $p_i$ and $p_{i+1}$ put on these locations are:

$$p_i(s, a; \theta^{-}) = \frac{z_{i+1} - (\widehat{\mathcal{T}} Q)(s, a; \theta^{-})}{z_{i+1} - z_i}, \qquad p_{i+1}(s, a; \theta^{-}) = \frac{(\widehat{\mathcal{T}} Q)(s, a; \theta^{-}) - z_i}{z_{i+1} - z_i}. \qquad (3.2)$$

For all other locations, the probability prescribed by the categorical distribution is exactly zero. In principle, this Two-Hot transformation provides a uniquely identifiable and non-lossy representation of the scalar TD target as a categorical distribution: the weights are chosen so that $p_i z_i + p_{i+1} z_{i+1}$ recovers the TD target exactly. However, Two-Hot does not fully harness the ordinal structure of discrete regression. Specifically, the classes are not independent and instead have a natural ordering, where each class intrinsically relates to its neighbors.

The class of Histogram Losses introduced by Imani and White (2018) seeks to exploit the ordinal structure of the regression task by distributing probability mass to neighboring bins, akin to label smoothing in supervised classification (Szegedy et al., 2016). This is done by transforming a noisy version of the target value into a categorical distribution where probability mass can span multiple bins near the target (see Figure 3; Center), rather than being restricted to two locations.

Histograms as Categorical Distributions. Formally, define the random variable $Y \mid s, a$ with probability density $f_{Y \mid s,a}$ and cumulative distribution function $F_{Y \mid s,a}$, whose expectation is $(\widehat{\mathcal{T}} Q)(s, a; \theta^{-})$. We can project the distribution $Y \mid s, a$ onto the histogram with bins of width $\varsigma = (v_{\max} - v_{\min}) / m$ centered at $z_i$ by integrating over the interval $[z_i - \varsigma/2, \, z_i + \varsigma/2]$ to obtain the probabilities,

$$p_i(s, a; \theta^{-}) = \int_{z_i - \varsigma/2}^{z_i + \varsigma/2} f_{Y \mid s,a}(y \mid s, a) \, dy = F_{Y \mid s,a}(z_i + \varsigma/2 \mid s, a) - F_{Y \mid s,a}(z_i - \varsigma/2 \mid s, a). \qquad (3.3)$$

We now have a choice for the distribution $Y \mid s, a$. We follow the suggestion of Imani and White (2018) in using the Gaussian distribution $Y \mid s, a \sim \mathcal{N}(\mu = (\widehat{\mathcal{T}} Q)(s, a; \theta^{-}), \sigma^2)$, where the variance $\sigma^2$ is a hyper-parameter that can control the amount of label smoothing applied to the resulting categorical distribution. We refer to this method as HL-Gauss.

How should we tune $\sigma$ in practice? HL-Gauss requires tuning the standard deviation $\sigma$, in addition to the bin width $\varsigma$ and distribution range $[v_{\min}, v_{\max}]$. Since 99.7% of the mass of a Gaussian lies within three standard deviations of the mean, a Gaussian centered at one of the bins spreads most of its probability mass over approximately $6\sigma / \varsigma$ bins. Thus, a more interpretable hyperparameter that we recommend tuning is the ratio $\sigma / \varsigma$: setting it to $n/6$ distributes most of the probability mass to $n + 1$ neighbouring locations for a mean value centered at one of the bins. Unless specified otherwise, we set $\sigma / \varsigma = 0.75$ for our experiments, which distributes mass to approximately 6 locations.
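The sketch below shows both projections; it is ours, not a reference implementation. For HL-Gauss, the renormalization handles the small amount of Gaussian mass falling outside $[v_{\min}, v_{\max}]$, a detail whose treatment we are assuming here.

```python
import numpy as np
from scipy.stats import norm

def hl_gauss_probs(target: float, centers: np.ndarray, sigma: float) -> np.ndarray:
    # Eq. (3.3): integrate a Gaussian centered at the TD target over each bin.
    width = centers[1] - centers[0]  # evenly spaced bins of width varsigma
    upper = norm.cdf(centers + width / 2.0, loc=target, scale=sigma)
    lower = norm.cdf(centers - width / 2.0, loc=target, scale=sigma)
    probs = upper - lower
    return probs / probs.sum()       # renormalize mass clipped by the support

def two_hot_probs(target: float, centers: np.ndarray) -> np.ndarray:
    # Eq. (3.2): split mass between the two locations bounding the target so
    # that the expectation recovers the scalar exactly.
    probs = np.zeros_like(centers)
    i = int(np.clip(np.searchsorted(centers, target) - 1, 0, len(centers) - 2))
    frac = np.clip((target - centers[i]) / (centers[i + 1] - centers[i]), 0.0, 1.0)
    probs[i], probs[i + 1] = 1.0 - frac, frac
    return probs
```

Note that taking $\sigma \to 0$ in HL-Gauss collapses all mass into the single bin containing the target (a one-hot), whereas Two-Hot always preserves the target's expectation exactly.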
3.2. Modelling the Categorical Return Distribution

In the previous section, we chose to construct a target distribution from the usual scalar regression target representing the expected return. Another option is to directly model the distribution over future returns using our categorical model $Z$, as done in distributional RL (Bellemare et al., 2023). Notably, C51 (Bellemare et al., 2017), an early distributional RL approach, uses the categorical representation along with minimizing the cross-entropy between the predicted distribution and the distributional analogue of the TD target. To this end, we also investigate C51 as an alternative to Two-Hot and HL-Gauss for constructing the target distribution for our cross-entropy objective.

Categorical Distributional RL. The first step to modelling the categorical return distribution is to define the analogous stochastic distributional Bellman operator on $Z$,

$$(\widehat{\mathcal{T}} Z)(s, a; \theta^{-}) = \sum_{i=1}^{m} \hat{p}_i(s_{t+1}, a_{t+1}; \theta^{-}) \, \delta_{r_{t+1} + \gamma z_i},$$

where $a_{t+1} = \arg\max_{a'} Q(s_{t+1}, a')$. As we can see, the stochastic distributional Bellman operator has the effect of shifting and scaling the locations $z_i$, necessitating the categorical projection first introduced by Bellemare et al. (2017). At a high level, this projection distributes the probability of each shifted atom proportionally to its immediate neighboring locations (see Figure 3; Right). To help us identify these neighboring locations we define $\lfloor y \rfloor = \arg\max \{ z_i : z_i \le y \}$ and $\lceil y \rceil = \arg\min \{ z_i : z_i \ge y \}$. Now the probabilities for location $z_i$ can be written as,

$$p_i(s, a; \theta^{-}) = \sum_{j=1}^{m} \hat{p}_j(s_{t+1}, a_{t+1}; \theta^{-}) \, \zeta_i\big( r_{t+1} + \gamma z_j \big), \qquad (3.4)$$

$$\zeta_i(y) = \frac{\lceil y \rceil - y}{\lceil y \rceil - \lfloor y \rfloor} \, \mathbb{1}\{ z_i = \lfloor y \rfloor \} + \frac{y - \lfloor y \rfloor}{\lceil y \rceil - \lfloor y \rfloor} \, \mathbb{1}\{ z_i = \lceil y \rceil \}.$$

For a complete exposition of the categorical projection, see Bellemare et al. (2023, Chapter 5).
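The following sketch of the categorical projection in Equation (3.4) is ours, written for clarity rather than speed; it shifts and scales the support by the Bellman backup and then splits each atom's mass between its two neighbouring locations in proportion to distance.

```python
import numpy as np

def c51_target_probs(reward: float, gamma: float, next_probs: np.ndarray,
                     centers: np.ndarray) -> np.ndarray:
    # next_probs: probabilities of Z(s_{t+1}, a_{t+1}; theta^-) at each z_j.
    m = len(centers)
    v_min, v_max = centers[0], centers[-1]
    width = centers[1] - centers[0]
    target = np.zeros(m)
    for p, z in zip(next_probs, centers):
        y = np.clip(reward + gamma * z, v_min, v_max)  # shifted, clipped atom
        b = (y - v_min) / width                        # fractional bin index
        lo, hi = int(np.floor(b)), int(np.ceil(b))
        if lo == hi:                                   # atom lands exactly on a bin
            target[lo] += p
        else:
            target[lo] += p * (hi - b)                 # mass to floor location
            target[hi] += p * (b - lo)                 # mass to ceil location
    return target
```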
4. Evaluating Classification Losses in RL

The goal of our experiments in this section is to evaluate the efficacy of the various target distributions discussed in Section 3, combined with the categorical cross-entropy loss (Equation 3.1), in improving the performance and scalability of value-based deep RL on a variety of problems. This includes several single-task and multi-task RL problems on Atari 2600 games as well as domains beyond Atari including language agents, chess, and robotic manipulation. These tasks consist of both online and offline RL problems. For each task, we instantiate our cross-entropy losses in conjunction with a strong value-based RL approach previously evaluated on that task. Full experimental methodologies including hyperparameters for each domain we consider can be found in Appendix B.

Figure 4 | Regression vs cross-entropy losses for (Left) online RL (IQM and optimality gap, human normalized score, Atari 200M aggregate statistics) and (Right) offline RL (IQM normalized score vs number of gradient updates, x62.5k), see Section 4.1. HL-Gauss and CDRL outperform MSE, with HL-Gauss performing the best. Moreover, the Two-Hot loss underperforms MSE but is more stable with prolonged training in offline RL, akin to other cross-entropy losses. See Section 4.1 for more details.

4.1. Single-Task RL on Atari Games

We first evaluate the efficacy of HL-Gauss, Two-Hot, and C51 (Bellemare et al., 2017), an instantiation of categorical distributional RL, on the Arcade Learning Environment (Bellemare et al., 2013). For our regression baseline we train DQN (Mnih et al., 2015) on the mean-squared error TD objective, which has been shown to outperform other regression-based losses (Ceron and Castro, 2021). Each method is trained with the Adam optimizer, which has been shown to reduce the performance discrepancy between regression-based methods and distributional RL approaches (Agarwal et al., 2021).

Evaluation. Following the recommendations by Agarwal et al. (2021), we report the interquartile mean (IQM) normalized scores with 95% stratified bootstrap confidence intervals (CIs), aggregated across games with multiple seeds each. We report human-normalized aggregated scores across 60 Atari games for online RL. For offline RL, we report behavior-policy normalized scores aggregated across 17 games, following the protocol in Kumar et al. (2021).

Online RL results. Following the setup of Mnih et al. (2015), we train DQN for 200M frames with the aforementioned losses. We report aggregated human-normalized IQM performance and optimality gap across 60 Atari games in Figure 4. Observe that HL-Gauss substantially outperforms the Two-Hot and MSE losses. Interestingly, HL-Gauss also improves upon categorical distributional RL (C51), despite not modelling the return distribution. This finding suggests that the loss (categorical cross-entropy) is perhaps the more crucial factor for C51, as compared to modelling the return distribution.

Offline RL results. The strong performance of HL-Gauss with online DQN, which involves learning from self-collected interactions, raises the question of whether it would also be effective in learning from offline datasets. To do so, we train agents with different losses on the 10% Atari DQN replay dataset (Agarwal et al., 2020) using CQL (Section 2) for 6.25M gradient steps. As shown in Figure 4, HL-Gauss and C51 consistently outperform MSE, while Two-Hot shows improved stability over MSE but underperforms the other classification methods. Notably, HL-Gauss again surpasses C51 in this setting. Furthermore, consistent with the findings of Kumar et al. (2021), utilizing the mean squared regression loss results in performance degradation with prolonged training. However, cross-entropy losses (both HL-Gauss and C51) do not show such degradation and generally remain stable.

4.2. Scaling Value-based RL to Large Networks

In supervised learning, particularly for language modeling (Kaplan et al., 2020), increasing the parameter count of a network typically improves performance. However, such scaling behavior remains elusive for value-based deep RL methods, where naïve parameter scaling can hurt performance (Ali Taïga et al., 2023; Kumar et al., 2023; Obando-Ceron et al., 2024). To this end, we investigate the efficacy of our classification methods, as an alternative to the MSE regression loss in deep RL, towards enabling better performance with parameter scaling for value networks.

Figure 5 | MoE scaling curves for HL-Gauss and MSE on online RL (x-axis: number of experts in {1, 2, 4, 8}; y-axis: IQM normalized score). HL-Gauss, with a single expert, outperforms all regression configurations. Both HL-Gauss and MSE scale similarly when employing SoftMoE, with HL-Gauss providing a roughly 30% IQM improvement. SoftMoE also mitigates the negative scaling observed with MSE alone. See Section 4.2.1 for more details.

Figure 6 | Scaling curves on multi-task online RL (x-axis: Impala-CNN through ResNet-101; y-axis: IQM normalized score on Asteroids, 63 variants). Results for actor-critic IMPALA with ResNets on Asteroids. HL-Gauss outperforms MSE and notably scales more reliably with larger networks. Since human scores are not available for variants, we report normalized scores using a baseline IMPALA agent with the MSE loss. See Section 4.2.2 for more details.

4.2.1. Scaling with Mixture-of-Experts
Recently, Obando-Ceron et al. (2024) demonstrate that while parameter scaling with convolutional networks hurts single-task RL performance on Atari, incorporating Mixture-of-Experts (MoE) modules in such networks improves performance. Following their setup, we replace the penultimate layer in the architecture employed by Impala (Espeholt et al., 2018) with a SoftMoE (Puigcerver et al., 2024) module and vary the number of experts in {1, 2, 4, 8}. Since each expert is a copy of the original penultimate layer, this layer's parameter count increases by a factor equal to the number of experts. The only change we make is to replace the MSE loss in SoftMoE DQN, as employed by Obando-Ceron et al. (2024), with the HL-Gauss cross-entropy loss. We train on the same subset of 20 Atari games used by Obando-Ceron et al. (2024) and report aggregate results over five seeds in Figure 5.

As shown in Figure 5, we find that HL-Gauss consistently improves performance over MSE by a constant factor independent of the number of experts. One can also observe that SoftMoE + MSE seems to mitigate some of the negative scaling effects observed with MSE alone. As SoftMoE + MSE uses a softmax in the penultimate layer, it could be providing benefits similar to those of a classification loss; but, as we will later see, the benefits of the classification loss cannot be explained by the addition of the softmax alone.

4.2.2. Training Generalist Policies with ResNets

Next, we consider scaling value-based ResNets (He et al., 2016) in both offline and online settings to train a generalist video game-playing policy on Atari. In each case, we train a family of differently sized Q-networks for multi-task RL, and report performance as a function of the network size.

Multi-task Online RL. Following Ali Taïga et al. (2023), we train a multi-task policy capable of playing Atari game variants with different environment dynamics and rewards (Farebrother et al., 2018). We evaluate two Atari games: 63 variants for Asteroids and 29 variants for Space Invaders. We employ a distributed actor-critic method, IMPALA (Espeholt et al., 2018), and compare the standard MSE critic loss with the cross-entropy based HL-Gauss loss. Our experiments investigate the scaling properties of these losses when moving from Impala-CNN (~2M parameters) to larger ResNets (He et al., 2016) up to ResNet-101 (44M parameters). We evaluate multi-task performance after training for 15 billion frames, and repeat each experiment with five seeds.

Results for Asteroids are presented in Figure 6, with additional results on Space Invaders presented in Figure D.3. We observe that in both environments HL-Gauss consistently outperforms MSE. Notably, HL-Gauss scales better, especially on Asteroids where it even slightly improves performance with larger networks beyond ResNet-18, while MSE performance significantly degrades.

Multi-game Offline RL. We consider the setup from Kumar et al. (2023), where we modify their recipe to use a non-distributional HL-Gauss loss in place of distributional C51. Specifically, we train a single generalist policy to play 40 different Atari games simultaneously, when learning from a near-optimal training dataset composed of replay buffers obtained from online RL agents trained independently on each game. This multi-game RL setup was originally proposed by Lee et al. (2022). The remaining design choices (e.g., feature normalization; the size of the network) are kept identical.
4.2.2. Training Generalist Policies with ResNets

Next, we consider scaling value-based ResNets (He et al., 2016) in both offline and online settings to train a generalist video game-playing policy on Atari. In each case, we train a family of differently sized Q-networks for multi-task RL and report performance as a function of network size.

Multi-task Online RL. Following Ali Taïga et al. (2023), we train a multi-task policy capable of playing Atari game variants with different environment dynamics and rewards (Farebrother et al., 2018). We evaluate two Atari games: 63 variants for Asteroids and 29 variants for Space Invaders. We employ a distributed actor-critic method, IMPALA (Espeholt et al., 2018), and compare the standard MSE critic loss with the cross-entropy-based HL-Gauss loss. Our experiments investigate the scaling properties of these losses when moving from Impala-CNN (2M parameters) to larger ResNets (He et al., 2016) up to ResNet-101 (44M parameters). We evaluate multi-task performance after training for 15 billion frames, and repeat each experiment with five seeds. Results for Asteroids are presented in Figure 6, with additional results on Space Invaders presented in Figure D.3. We observe that in both environments HL-Gauss consistently outperforms MSE. Notably, HL-Gauss scales better, especially on Asteroids, where it even slightly improves performance with larger networks beyond ResNet-18, while MSE performance significantly degrades.

Multi-game Offline RL. We consider the setup from Kumar et al. (2023), where we modify their recipe to use a non-distributional HL-Gauss loss in place of distributional C51. Specifically, we train a single generalist policy to play 40 different Atari games simultaneously, learning from a near-optimal training dataset composed of replay buffers obtained from online RL agents trained independently on each game. This multi-game RL setup was originally proposed by Lee et al. (2022). The remaining design choices (e.g., feature normalization; the size of the network) are kept identical.

As shown in Figure 7, HL-Gauss scales even better than the C51 results from Kumar et al. (2023), resulting in an improvement of about 45% over the best prior multi-game result available with ResNet-101 (80M parameters), as measured by the IQM human normalized score (Agarwal et al., 2021). Furthermore, while the performance of MSE regression losses typically plateaus upon increasing model capacity beyond ResNet-34, HL-Gauss is able to leverage this capacity to improve performance, indicating the efficacy of classification-based cross-entropy losses. Additionally, when normalizing against scores obtained by a DQN agent, we show in Figure D.4 that, in addition to final performance, the rate of improvement as the model scale increases also tends to be larger for the HL-Gauss loss compared to C51.

[Plot omitted: IQM normalized score vs. number of parameters (31M, 60M, 79M) for MSE, C51, Two-Hot, and HL-Gauss, with the Multi-Game Decision Transformer (200M) as a baseline.]
Figure 7 | Scaling curves on Multi-game Atari (Offline RL). IQM human normalized score for ResNet-{34, 50, 101}, with spatial embeddings, to play 40 Atari games simultaneously using a single value network (Kumar et al., 2023). HL-Gauss enables remarkable scaling, substantially outperforming the categorical distributional RL (C51) and regression (MSE) losses used by prior work, as well as the multi-game Decision Transformer (Lee et al., 2022). See 4.2.2 for more details and Figure D.4 for a version of these results reported in terms of DQN normalized scores, another commonly used metric.

4.3. Value-Based RL with Transformers

Next, we evaluate the applicability of the HL-Gauss cross-entropy loss beyond Atari. To do so, we consider several tasks that utilize high-capacity Transformers, namely, a language-agent task of playing Wordle, playing chess without inference-time search, and robotic manipulation.

4.3.1. Language Agent: Wordle

To evaluate whether classification losses enhance the performance of value-based RL approaches on language agent benchmarks, we compare HL-Gauss with MSE on the task of playing the game of Wordle (www.nytimes.com/games/wordle/index.html). Wordle is a word guessing game in which the agent gets six attempts to guess a word. Each turn, the agent receives environment feedback about whether the guessed letters are in the true word. The dynamics of this task are non-deterministic. More generally, the task follows a turn-based structure, reminiscent of dialogue tasks in natural language processing. This experiment is situated in the offline RL setting, where we utilize the dataset of suboptimal game-plays provided by Snell et al. (2023). Our goal is to train a GPT-like, decoder-only Transformer, with 125M parameters, representing the Q-network.

[Plot omitted: success rate vs. behavior regularizer strength (0.1, 0.3, 1.0) for HL-Gauss and MSE with a 125M-parameter transformer; the left panel illustrates the transformer value function on a partially played Wordle board.]
Figure 8 | Regression vs cross-entropy loss for Wordle (Offline RL). Comparing the HL-Gauss cross-entropy loss with the MSE regression loss for a transformer trained with offline RL on the Wordle dataset (Snell et al., 2023). Here, we evaluate the success rate of guessing the word in one turn given a partially played Wordle game (e.g., image on left). HL-Gauss leads to substantially higher success rates for varying strengths of behavior regularization. See 4.3.1 for more details.
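The CQL-style behavior regularizer referenced as Eq. (2) is defined earlier in the paper, outside this excerpt; for readers without that context, a minimal sketch of the standard CQL penalty from Kumar et al. (2020), in our own notation, is:

import jax.numpy as jnp
from jax.scipy.special import logsumexp

def cql_regularized_loss(q_values, actions, td_loss, alpha):
    """q_values: [batch, num_actions] predicted Q(s, .);
    actions: [batch] actions taken in the dataset;
    td_loss: scalar TD error term; alpha: regularizer strength."""
    q_data = jnp.take_along_axis(q_values, actions[:, None], axis=-1).squeeze(-1)
    # Push down Q-values across all actions, push up Q-values of dataset actions.
    penalty = logsumexp(q_values, axis=-1) - q_data
    return td_loss + alpha * penalty.mean()

The coefficient alpha here plays the role of the behavior regularizer strength swept in Figure 8.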
See Figure 8 (left) for how the transformer model is used for playing this game. On this task, we train the language-based transformer for 20K gradient steps with an offline RL approach combining Q-learning updates from DQN with a CQL-style behavior regularizer (2), which corresponds to the standard next-token prediction loss (in this particular problem). As shown in Figure 8, HL-Gauss outperforms MSE for multiple coefficients controlling the strength of CQL regularization.

4.3.2. Grandmaster-level Chess without Search

Transformers have demonstrated their effectiveness as general-purpose algorithm approximators, effectively amortizing expensive inference-time computation through distillation (Ruoss et al., 2024; Lehnert et al., 2024). In this context, we explore the potential benefits of using HL-Gauss to convert scalar action-values into classification targets for distilling a value-function. Using the setup of Ruoss et al. (2024), we evaluate HL-Gauss for distilling the action-value function of Stockfish 16, the strongest available chess engine (which uses a combination of complex heuristics and explicit search), into a causal transformer. The distillation dataset comprises 10 million chess games annotated by the Stockfish engine, yielding 15 billion data points (Figure 9, left).

We train three transformer models of varying capacity (9M, 137M, and 270M parameters) on this dataset, using either HL-Gauss or 1-Hot classification targets. We omit MSE because Ruoss et al. (2024) demonstrate that 1-Hot targets outperform MSE on this task. The effectiveness of each model is evaluated based on its ability to solve 10,000 chess puzzles from Lichess, with success measured by the accuracy of the generated action sequences compared to known solutions. Both the setup and results are presented in Figure 9 (right). While the one-hot target with the 270M Transformer from Ruoss et al. (2024) outperformed an AlphaZero baseline without search, HL-Gauss closes the performance gap with the substantially stronger AlphaZero with 400 MCTS simulations (Schrittwieser et al., 2020).

[Plot omitted: Lichess puzzle accuracy (10k puzzles) vs. number of parameters (9M, 137M, 270M) for HL-Gauss and the one-hot targets of Ruoss et al., with AlphaZero (w/ MCTS) as a reference.]
Figure 9 | Grandmaster-level Chess without Search. (Left) Dataset generation for Q-value distillation on Chess. (Right) Scaling curves. Following the setup from Ruoss et al. (2024), where they train transformer models to play chess via supervised learning on Stockfish 16 Q-values and then follow the greedy policy for evaluation. As the results show, HL-Gauss outperforms the one-hot targets used by Ruoss et al. (2024) and nearly matches the performance of AlphaZero with tree search.
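As a concrete sketch of this distillation target (our illustration, not the experiment code): Appendix B.2 states that the Stockfish action-values are projected onto 128 bins evenly dividing [0, 1] with smoothing ratio σ/ς = 0.75, which can be done directly with hl_gauss_transform from Listing 1 in Appendix A:

import jax.numpy as jnp

# Appendix B.2 settings: 128 bins over [0, 1], smoothing ratio sigma/bin_width = 0.75.
num_bins = 128
sigma = 0.75 * (1.0 / num_bins)

# hl_gauss_transform is defined in Listing 1 (Appendix A).
to_probs, from_probs = hl_gauss_transform(0.0, 1.0, num_bins, sigma)

stockfish_win_prob = jnp.float32(0.53)   # oracle action-value for one move
target = to_probs(stockfish_win_prob)    # soft categorical target, shape [128]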
4.3.3. Generalist Robotic Manipulation with Offline Data

Finally, we evaluate whether cross-entropy losses can improve performance on a set of large-scale vision-based robotic manipulation control tasks from Chebotar et al. (2023). These tasks present a simulated 7-DoF mobile manipulator placed in front of a countertop surface. The goal is to control this manipulator to successfully grasp and lift 17 different kitchen objects in the presence of distractor objects, clutter, and randomized initial poses. We generate a dataset of 500,000 (successful and failed) episodes, starting from a small amount of human-teleoperated demonstrations (40,000 episodes), by replaying expert demonstrations with added sampled action noise, reminiscent of failed autonomously-collected rollouts obtained during deployment or evaluations of a behavioral cloning policy trained on the human demonstration data.

We train a Q-Transformer model with 60M parameters, following the recipe in Chebotar et al. (2023), but replace the MSE regression loss with the HL-Gauss classification loss. As shown in Figure 10, HL-Gauss results in 67% higher peak performance over the regression baseline, while being much more sample-efficient, addressing a key limitation of the prior regression-based approach.

[Plot omitted: success rate vs. training steps (0 to 80k) for HL-Gauss and MSE with a Q-Transformer.]
Figure 10 | Generalist robotic manipulation with offline data: (Left) Robot platform and (Right) HL-Gauss vs MSE on simulated vision-based manipulation. The robotic manipulation problem (4.3.3) uses the setup from Chebotar et al. (2023). The image on the left shows the 7-degree-of-freedom mobile manipulator robot used for these experiments. In the plots, error bars show 95% CIs. Note that utilizing HL-Gauss enables significantly faster learning to a better final performance.

5. Why Does Classification Benefit RL?

Our experiments demonstrate that classification losses can significantly improve the performance and scalability of value-based deep RL. In this section, we perform controlled experiments to understand why classification benefits value-based RL. Specifically, we attempt to understand how the categorical cross-entropy loss can address several challenges specific to value-based RL, including representation learning, stability, and robustness. We will also perform ablation experiments to uncover the reasons behind the superiority of HL-Gauss over other categorical targets.

[Plot omitted: IQM normalized score for MSE+Softmax, MSE, Cross-Entropy (C51), and Cross-Entropy (HL-Gauss).]
Figure 11 | Evaluating the learning stability of the softmax parameterization (5.1.1) in online RL on Atari. A categorical representation of Q-values does not benefit MSE + Softmax relative to MSE, implying that the cross-entropy loss is critical.

[Plot omitted: IQM normalized scores for HL-Gauss, C51, MSE, and MSE (+ Softmax) under Double DQN and CQL.]
Figure 12 | Evaluating the learning stability of MSE+Softmax (5.1.1) in offline RL on Atari. We do not observe any substantial gains from using a softmax operator with the MSE loss in either setting. This implies that the cross-entropy loss is critical.

[Plot omitted: IQM normalized score vs. the ratio σ/ς for bin counts in {21, 51, 101, 201}, with Two-Hot as a reference.]
Figure 13 | Sweeping the ratio σ/ς for different numbers of bins in online RL on Atari. A wide range of σ/ς values outperform Two-Hot, which corresponds to not using any label smoothing, implying that HL-Gauss does benefit from a label-smoothing-like effect. Furthermore, the optimal amount of label smoothing, as prescribed by σ/ς, is independent of bin width. This implies that HL-Gauss is leveraging the structure of the regression problem and the gains cannot be purely attributed to reduced overfitting from label smoothing (5.1.2).

5.1. Ablation Study: What Components of Classification Losses Matter?
The classification losses presented in this paper differ from traditional regression losses used in value-based RL in two ways: (1) parameterizing the output of the value-network as a categorical distribution in place of a scalar, and (2) strategies for converting scalar targets into a categorical target. We now examine the relative contribution of each of these steps towards the performance of cross-entropy losses.

5.1.1. Are Categorical Representations More Performant?

As discussed in 3.1, we parameterize the Q-network to output logits that are converted to probabilities of a categorical distribution by applying the softmax operator. Using softmax leads to bounded Q-values and bounded output gradients, which can possibly improve RL training stability (Hansen et al., 2024). To investigate whether our Q-value parameterization alone results in improved performance without needing a cross-entropy loss, we train Q-functions with the same parameterization as Eq. (3.1) but with MSE. We do not observe any gains from using softmax in conjunction with the MSE loss in either online (Figure 11) or offline RL (Figure 12). This highlights that the use of the cross-entropy loss is responsible for the bulk of the performance improvements.

5.1.2. Why Do Some Cross-Entropy Losses Work Better Than Others?

Our results indicate that HL-Gauss outperforms Two-Hot, despite both methods using a cross-entropy loss. We hypothesize that the benefits of HL-Gauss could stem from two reasons: 1) HL-Gauss reduces overfitting by spreading probability mass to neighboring locations; and 2) HL-Gauss generalizes across a specific range of target values, exploiting the ordinal structure of the regression problem. The first hypothesis would be more consistent with how label smoothing addresses overfitting in classification problems (Szegedy et al., 2016).

[Plot omitted: HL-Gauss vs. MSE IQM normalized scores under reward noise U(0, ε) for ε = 0.1, 0.3, 1.0.]
Figure 14 | HL-Gauss vs. MSE when trained using noisy rewards in an offline RL setting on Atari (4.1). The performance of HL-Gauss degrades more slowly than MSE as noise increases. Details are in 5.2.1.

[Plot omitted: IQM normalized scores for MSE, C51, and HL-Gauss under deterministic and sticky-action dynamics.]
Figure 15 | Cross-entropy vs regression losses when varying environment stochasticity in online RL on Atari (4.1). HL-Gauss outperforms MSE under stochastic dynamics but not under deterministic dynamics. Details are in 5.2.1.

We test these hypotheses in the online RL setting across a subset of 13 Atari games. To do so, we fix the value range [v_min, v_max] while simultaneously varying the number of categorical bins in {21, 51, 101, 201} and the ratio of deviation σ to bin width ς in {0.25, 0.5, 0.75, 1.0, 2.0}. We find that a wide range of σ/ς values for HL-Gauss outperform Two-Hot, indicating that spreading probability mass to neighbouring locations likely results in less overfitting. Interestingly, we notice that the second hypothesis is also at play, as the optimal value of σ/ς seems to be independent of the number of bins, indicating that HL-Gauss generalizes best across a specific range of target values and is indeed leveraging the ordinal nature of the regression problem. Thus, the gains from HL-Gauss cannot be entirely attributed to reduced overfitting, as is believed to be the case for label smoothing.
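For reference, the MSE + Softmax ablation of 5.1.1 keeps the categorical parameterization but regresses its scalar expectation; a minimal sketch follows (our reconstruction, with the 51-location support over [-10, 10] from Table B.2; function names are ours):

import jax
import jax.numpy as jnp

def categorical_q_value(logits, min_value=-10.0, max_value=10.0):
    """Expected value of the categorical distribution defined by the logits."""
    centers = jnp.linspace(min_value, max_value, logits.shape[-1])
    return jnp.sum(jax.nn.softmax(logits, axis=-1) * centers, axis=-1)

def mse_softmax_loss(logits, td_target):
    # Same softmax parameterization as HL-Gauss/C51, but trained with MSE
    # on the scalar expectation instead of cross-entropy on the histogram.
    return jnp.mean((categorical_q_value(logits) - td_target) ** 2)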
5.2. What Challenges Does Classification Address in Value-Based RL?

Having seen that the performance gains of cross-entropy losses stem from both the use of a categorical representation of values and distributed targets, we now attempt to understand which challenges in value-based RL cross-entropy losses address, or at least partially alleviate.

5.2.1. Is Classification More Robust to Noisy Targets?

Classification is less prone to overfitting to noisy targets than regression, as it focuses on the categorical relationship between the input and target rather than their exact numerical relationship. We investigate whether classification can better deal with the noise induced by stochasticity in RL.

(a) Noisy Rewards. To test the robustness of classification to stochasticity in rewards, we consider an offline RL setup where we add random noise ε_i, sampled uniformly from U(0, ε), to each dataset reward r_i. We vary the noise scale ε ∈ {0.1, 0.3, 1.0} and compare the performance of cross-entropy-based HL-Gauss with the MSE loss. As shown in Figure 14, the performance of HL-Gauss degrades more gracefully than MSE as the noise scale increases.

(b) Stochasticity in Dynamics. Following Machado et al. (2018), our Atari experiments use sticky actions: with 25% probability, the environment executes the previous action again instead of the agent's chosen action, resulting in non-deterministic dynamics. Here, we turn off sticky actions to compare the different losses on deterministic Atari (60 games). As shown in Figure 15, while cross-entropy-based HL-Gauss outperforms MSE with stochastic dynamics, they perform comparably under deterministic dynamics, while both outperform distributional C51.

Overall, the benefits of cross-entropy losses can be partly attributed to less overfitting to noisy targets, an issue inherent to RL environments with stochastic dynamics or rewards. Such stochasticity issues may also arise as a result of dynamics mis-specification or action delays in real-world embodied RL problems, implying that a cross-entropy loss is a superior choice in those problems.

5.2.2. Does Classification Learn More Expressive Representations?

It is well known that the mean-squared regression error alone does not produce useful representations in value-based RL, often resulting in low-capacity representations (Kumar et al., 2021) that are incapable of fitting target values observed during subsequent training. Predicting a categorical distribution rather than a scalar target can lead to better representations (Zhang et al., 2023) that retain the representational power to model value functions of arbitrary policies that might be encountered over the course of value learning (Dabney et al., 2021). Lyle et al. (2019) showed that gains from C51 can be partially attributed to improved representations, but it remains unknown whether these gains stem from backing up distributions of returns or from the use of the cross-entropy loss. To investigate this question, following the protocol in Farebrother et al. (2023), we study whether a learned representation, corresponding to the penultimate feature vectors obtained from value-networks trained online on Atari for 200M frames, still retains the necessary information to re-learn a policy from scratch. To do so, we train a Q-function with a single linear layer on top of the frozen representation (Farebrother et al., 2023), akin to how self-supervised representations are evaluated in vision (He et al., 2020).
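A schematic of this linear-probing protocol, in our own notation (the feature and action dimensions below are placeholders): only a linear Q-head is trained, and gradients never reach the frozen encoder.

import jax
import jax.numpy as jnp

def probe_td_loss(head_params, frozen_features, actions, td_targets):
    """TD regression loss for a linear Q-head on frozen penultimate features."""
    q = frozen_features @ head_params["w"] + head_params["b"]  # [batch, num_actions]
    q_taken = jnp.take_along_axis(q, actions[:, None], axis=-1).squeeze(-1)
    return jnp.mean((q_taken - td_targets) ** 2)

key = jax.random.PRNGKey(0)
head_params = {"w": 0.01 * jax.random.normal(key, (512, 18)), "b": jnp.zeros(18)}
features = jax.random.normal(key, (32, 512))   # frozen representation (no encoder grads)
actions = jnp.zeros(32, dtype=jnp.int32)
td_targets = jnp.zeros(32)
head_grads = jax.grad(probe_td_loss)(head_params, features, actions, td_targets)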
As shown in Figure 16, cross-entropy losses result in better performance with linear probing. This indicates that their learned representations are indeed better in terms of supporting the value-improvement path of a policy trained from scratch (Dabney et al., 2021).

[Plot omitted: IQM and optimality gap (human normalized score) for HL-Gauss, C51, and MSE under linear probing.]
Figure 16 | Evaluating representations using linear probing (5.2.2) on Atari. This experiment follows the protocol of Farebrother et al. (2023). Optimality gap refers to the distance from human-level performance; lower is better. In both plots, HL-Gauss scores best, indicating that its learned representations are the most conducive to downstream tasks.

5.2.3. Does Classification Perform Better Amidst Non-Stationarity?

Non-stationarity is inherent to value-based RL, as the target computation involves a constantly evolving argmax policy and value function. Bellemare et al. (2017) hypothesized that classification might mitigate the difficulty of learning from a non-stationary policy, but did not empirically validate this. Here, we investigate whether classification can indeed handle target non-stationarity better than regression.

Synthetic setup: We first consider a synthetic regression task on CIFAR-10 presented in Lyle et al. (2024), where the regression target corresponds to mapping an input image x through a randomly initialized neural network to produce high-frequency targets y = sin(10^5 · f_θ*(x)) + b, where b is a constant bias that controls the magnitude of the targets. When learning a value function with TD, the prediction targets are non-stationary and often increase in magnitude over time as the policy improves. We simulate this setting by fitting a network with different losses on the increasing sequence of biases b ∈ {0, 8, 16, 24, 32}. See details in Appendix B.4. As shown in Figure 17, classification losses retain higher plasticity under non-stationary targets compared to regression.

[Plot omitted: MSE over gradient steps for MSE, HL-Gauss, Two-Hot, and L2 regression as the bias b increases from 0 to 24.]
Figure 17 | Synthetic magnitude prediction experiment to simulate non-stationarity on CIFAR-10 (5.2.3). Non-stationarity is simulated by fitting networks with different losses on an increasing sequence of biases over gradient steps. Cross-entropy losses are less likely to lose plasticity.

[Plot omitted: IQM normalized scores for HL-Gauss, Two-Hot, and MSE under offline SARSA and CQL.]
Figure 18 | Offline Q-learning vs SARSA to ablate policy non-stationarity on Atari (5.2.3). The gains of HL-Gauss over MSE vanish with SARSA. This is evidence that some of the benefits from classification stem from dealing with non-stationarity in value-based RL.

Offline RL: To control for non-stationarity in an RL context, we run offline SARSA, which estimates the value of the fixed data-collection policy, following the protocol in Kumar et al. (2022). Contrary to Q-learning, which uses the action that maximizes the learned Q-value at the next state s_{t+1} for computing the Bellman target (2), SARSA uses the action observed at the next timestep, (s_{t+1}, a_{t+1}), in the offline dataset. As shown in Figure 18, most of the benefit of HL-Gauss over the MSE loss vanishes in the offline SARSA setting, adding evidence that some of the benefits from classification stem from dealing with non-stationarity in value-based RL.
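The target swap in this SARSA ablation is small; a sketch in our notation (reward r, discount γ):

import jax.numpy as jnp

def q_learning_target(r, discount, next_q_values):
    # Bootstrap with the argmax policy: the target moves as Q changes (non-stationary).
    return r + discount * jnp.max(next_q_values, axis=-1)

def sarsa_target(r, discount, next_q_values, next_actions):
    # Bootstrap with the action logged in the offline dataset: fixed-policy evaluation.
    q_next = jnp.take_along_axis(next_q_values, next_actions[:, None], axis=-1).squeeze(-1)
    return r + discount * q_next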
To summarize, we find that the use of the cross-entropy loss itself is central to obtaining good performance in value-based RL, and while these methods do not fully resolve any single challenge, they enable value-based RL methods to deal better with non-stationarity, induce highly expressive representations, and provide robustness against noisy target values.

6. Related Work

Prior works in tabular regression (Weiss and Indurkhya, 1995; Torgo and Gama, 1996) and computer vision (Van Den Oord et al., 2016; Kendall et al., 2017; Rothe et al., 2018; Rogez et al., 2019) have replaced regression with classification to improve performance. Most notably, Imani and White (2018) proposed the HL-Gauss cross-entropy loss for regression and showed its efficacy on small-scale supervised regression tasks outside of RL. Our work complements these prior works by illustrating for the first time that a classification objective trained with cross-entropy, particularly HL-Gauss, can enable effective scaling for value-based RL on a variety of domains, including Atari, robotic manipulation, chess, and Wordle.

Several state-of-the-art methods in RL have used the Two-Hot cross-entropy loss without any analysis, either as an ad-hoc trick (Schrittwieser et al., 2020), citing benefits for sparse rewards (Hafner et al., 2023), or simply relying on folk wisdom (Hessel et al., 2021; Hansen et al., 2024). However, in our experiments, Two-Hot performs worse than other cross-entropy losses and MSE. We believe this is because Two-Hot does not effectively distribute probability to neighboring classes, unlike C51 and HL-Gauss (see 5.1.2 for an empirical investigation).

Closely related is the line of work on categorical distributional RL. Notably, Achab et al. (2023) offer an analysis of categorical one-step distributional RL, which corresponds precisely to the Two-Hot algorithm discussed herein, with the similarity of these two approaches not previously recognized. Additionally, the work of Bellemare et al. (2017) pioneered the C51 algorithm, and while their primary focus was not on framing RL as classification, our findings suggest that the specific loss function employed may play a more significant role in the algorithm's success than modeling the return distribution itself. Several methods find that categorical distributional RL losses are important for scaling offline value-based RL (Kumar et al., 2023; Springenberg et al., 2024), but these works do not attempt to isolate which components of this paradigm are crucial for attaining positive scaling trends. We also note that these findings do not contradict recent theoretical work (Wang et al., 2023; Rowland et al., 2023), which argues that distributional RL brings statistical benefits over standard RL that are orthogonal to the use of a cross-entropy objective or the categorical representation.

Prior works have characterized the representations learned by TD-learning (Bellemare et al., 2019; Lyle et al., 2021; Le Lan et al., 2022, 2023; Kumar et al., 2021, 2022), but these works focus entirely on MSE losses, with little to no work analyzing representations learned by cross-entropy-based losses in RL. Our linear probing experiments in 5.2.2 try to fill this void, demonstrating that value-networks trained with cross-entropy losses learn better representations than those trained with regression.
This finding is especially important since Imani and White (2018) did not find any representational benefits of HL-Gauss over MSE on supervised regression, indicating that the use of cross-entropy might have substantial benefits for TD-based learning methods in particular.

7. Conclusion

In this paper, we showed that framing regression as classification and minimizing the categorical cross-entropy instead of the mean squared error yields large improvements in the performance and scalability of value-based RL methods, on a wide variety of tasks and with several neural network architectures. We analyzed the source of these improvements and found that they stem specifically from the ability of the cross-entropy loss to enable more expressive representations and to better handle noise and non-stationarity in value-based RL. While the cross-entropy loss alone does not fully alleviate any of these problems, our results show the substantial difference this small change can make.

We believe that the strong results obtained with the categorical cross-entropy have implications for future algorithm design in deep RL, both in theory and practice. For instance, value-based RL approaches have been harder to scale and tune when the value function is represented by a transformer architecture, and our results hint that classification might provide a smooth path for translating innovations in value-based RL to transformers. From a theoretical perspective, analyzing the optimization dynamics of cross-entropy might help devise improved losses or target distribution representations. Finally, while we did explore a number of settings, further work is required to evaluate the efficacy of classification losses in other RL problems, such as those involving pre-training, fine-tuning, or continual RL.

Acknowledgements

We would like to thank Will Dabney for providing feedback on an early version of this paper. We'd also like to thank Clare Lyle, Mark Rowland, Marc Bellemare, Max Schwarzer, Pierluca D'Oro, Nate Rahn, Harley Wiltzer, Wesley Chung, and Dale Schuurmans for informative discussions. We'd also like to acknowledge Anian Ruoss, Grégoire Delétang, and Tim Genewein for their help with the Chess training infrastructure. This research was supported by the TPU resources at Google DeepMind, and the authors are grateful to Doina Precup and Joelle Barral for their support.

Author Contributions

JF led the project, implemented the histogram-based methods, ran all the single-task online RL experiments on Atari and the Q-distillation on Chess, jointly proposed and ran most of the analysis experiments, and contributed significantly to paper writing. JO and AAT set up and ran the multi-task RL experiments and helped with writing. QV ran the robotic manipulation experiments and YC helped with the initial set-up. TX helped with paper writing and AI was involved in discussions. SL advised on the robotics and Wordle experiments and provided feedback. PSC helped set up the SoftMoE experiments and hosted Jesse at GDM. PSC and AF sponsored the project and took part in discussions. AK advised the project, proposed the offline RL analysis for non-stationarity and representation learning, contributed significantly to writing, revising, and the narrative, and set up the robotics and multi-game scaling experiments. RA proposed the research direction, advised the project, led the paper writing, ran the offline RL and Wordle experiments, and helped set up all of the multi-task scaling and non-Atari experiments.
References

Mastane Achab, Réda Alami, Yasser Abdelaziz Dahou Djilali, Kirill Fedyanin, and Eric Moulines. One-step distributional reinforcement learning. CoRR, abs/2304.14421, 2023.

Rishabh Agarwal, Dale Schuurmans, and Mohammad Norouzi. An optimistic perspective on offline reinforcement learning. In International Conference on Machine Learning (ICML), 2020.

Rishabh Agarwal, Max Schwarzer, Pablo Samuel Castro, Aaron Courville, and Marc G. Bellemare. Deep reinforcement learning at the edge of the statistical precipice. Neural Information Processing Systems (NeurIPS), 2021.

Adrien Ali Taïga, Rishabh Agarwal, Jesse Farebrother, Aaron Courville, and Marc G. Bellemare. Investigating multi-task pretraining and generalization in reinforcement learning. In International Conference on Learning Representations (ICLR), 2023.

Marc G. Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The arcade learning environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research (JAIR), 47:253–279, 2013.

Marc G. Bellemare, Will Dabney, and Rémi Munos. A distributional perspective on reinforcement learning. In International Conference on Machine Learning (ICML), 2017.

Marc G. Bellemare, Will Dabney, Robert Dadashi, Adrien Ali Taïga, Pablo Samuel Castro, Nicolas Le Roux, Dale Schuurmans, Tor Lattimore, and Clare Lyle. A geometric perspective on optimal representations for reinforcement learning. In Neural Information Processing Systems (NeurIPS), 2019.

Marc G. Bellemare, Will Dabney, and Mark Rowland. Distributional reinforcement learning. MIT Press, 2023.

James Bradbury, Roy Frostig, Peter Hawkins, Matthew James Johnson, Chris Leary, Dougal Maclaurin, George Necula, Adam Paszke, Jake VanderPlas, Skye Wanderman-Milne, and Qiao Zhang. JAX: composable transformations of Python+NumPy programs, 2018. URL http://github.com/google/jax.

Pablo Samuel Castro, Subhodeep Moitra, Carles Gelada, Saurabh Kumar, and Marc G. Bellemare. Dopamine: A research framework for deep reinforcement learning. CoRR, abs/1812.06110, 2018.

Johan Samir Obando Ceron and Pablo Samuel Castro. Revisiting rainbow: Promoting more insightful and inclusive deep reinforcement learning research. In International Conference on Machine Learning (ICML), 2021.

Yevgen Chebotar, Quan Vuong, Karol Hausman, Fei Xia, Yao Lu, Alex Irpan, Aviral Kumar, Tianhe Yu, Alexander Herzog, Karl Pertsch, et al. Q-Transformer: Scalable offline reinforcement learning via autoregressive Q-functions. In Conference on Robot Learning (CoRL), 2023.

Will Dabney, André Barreto, Mark Rowland, Robert Dadashi, John Quan, Marc G. Bellemare, and David Silver. The value-improvement path: Towards better representations for reinforcement learning. In AAAI Conference on Artificial Intelligence, 2021.

Lasse Espeholt, Hubert Soyer, Remi Munos, Karen Simonyan, Vlad Mnih, Tom Ward, Yotam Doron, Vlad Firoiu, Tim Harley, Iain Dunning, Shane Legg, and Koray Kavukcuoglu. IMPALA: Scalable distributed deep-RL with importance weighted actor-learner architectures. In International Conference on Machine Learning (ICML), 2018.

Jesse Farebrother, Marlos C. Machado, and Michael Bowling. Generalization and regularization in DQN. CoRR, abs/1810.00123, 2018.

Jesse Farebrother, Joshua Greaves, Rishabh Agarwal, Charline Le Lan, Ross Goroshin, Pablo Samuel Castro, and Marc G. Bellemare. Proto-value networks: Scaling representation learning with auxiliary tasks.
In International Conference on Learning Representations (ICLR), 2023.

Danijar Hafner, Jurgis Pasukonis, Jimmy Ba, and Timothy P. Lillicrap. Mastering diverse domains through world models. CoRR, abs/2301.04104, 2023.

Nicklas Hansen, Hao Su, and Xiaolong Wang. TD-MPC2: Scalable, robust world models for continuous control. In International Conference on Learning Representations (ICLR), 2024.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.

Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020.

Matteo Hessel, Ivo Danihelka, Fabio Viola, Arthur Guez, Simon Schmitt, Laurent Sifre, Theophane Weber, David Silver, and Hado van Hasselt. Muesli: Combining improvements in policy optimization. In International Conference on Machine Learning (ICML), 2021.

Daniel Ho, Kanishka Rao, Zhuo Xu, Eric Jang, Mohi Khansari, and Yunfei Bai. RetinaGAN: An object-aware approach to sim-to-real transfer. In IEEE International Conference on Robotics and Automation (ICRA), 2021.

Ehsan Imani and Martha White. Improving regression performance with distributional losses. In International Conference on Machine Learning (ICML), 2018.

Ehsan Imani, Kai Luedemann, Sam Scholnick-Hughes, Esraa Elelimy, and Martha White. Investigating the histogram loss in regression. CoRR, abs/2402.13425, 2024.

Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B. Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling laws for neural language models. CoRR, abs/2001.08361, 2020.

Alex Kendall, Hayk Martirosyan, Saumitro Dasgupta, Peter Henry, Ryan Kennedy, Abraham Bachrach, and Adam Bry. End-to-end learning of geometry and context for deep stereo regression. In IEEE International Conference on Computer Vision (ICCV), 2017.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. Neural Information Processing Systems (NeurIPS), 2012.

Aviral Kumar, Aurick Zhou, George Tucker, and Sergey Levine. Conservative Q-learning for offline reinforcement learning. Neural Information Processing Systems (NeurIPS), 2020.

Aviral Kumar, Rishabh Agarwal, Dibya Ghosh, and Sergey Levine. Implicit under-parameterization inhibits data-efficient deep reinforcement learning. In International Conference on Learning Representations (ICLR), 2021.

Aviral Kumar, Rishabh Agarwal, Tengyu Ma, Aaron Courville, George Tucker, and Sergey Levine. DR3: Value-based deep reinforcement learning requires explicit regularization. In International Conference on Learning Representations (ICLR), 2022.

Aviral Kumar, Rishabh Agarwal, Xinyang Geng, George Tucker, and Sergey Levine. Offline Q-learning on diverse multi-task data both scales and generalizes. In International Conference on Learning Representations (ICLR), 2023.

Charline Le Lan, Stephen Tu, Adam Oberman, Rishabh Agarwal, and Marc G. Bellemare. On the generalization of representations in reinforcement learning. In International Conference on Artificial Intelligence and Statistics (AISTATS), 2022.

Charline Le Lan, Stephen Tu, Mark Rowland, Anna Harutyunyan, Rishabh Agarwal, Marc G. Bellemare, and Will Dabney. Bootstrapped representations in reinforcement learning.
In International Conference on Machine Learning (ICML), 2023.

Kuang-Huei Lee, Ofir Nachum, Mengjiao (Sherry) Yang, Lisa Lee, Daniel Freeman, Sergio Guadarrama, Ian Fischer, Winnie Xu, Eric Jang, Henryk Michalewski, and Igor Mordatch. Multi-game decision transformers. In Neural Information Processing Systems (NeurIPS), 2022.

Lucas Lehnert, Sainbayar Sukhbaatar, Paul Mcvay, Michael Rabbat, and Yuandong Tian. Beyond A*: Better planning with transformers via search dynamics bootstrapping. CoRR, abs/2402.14083, 2024.

Sergey Levine, Aviral Kumar, George Tucker, and Justin Fu. Offline reinforcement learning: Tutorial, review, and perspectives on open problems. CoRR, abs/2005.01643, 2020.

Clare Lyle, Marc G. Bellemare, and Pablo Samuel Castro. A comparative analysis of expected and distributional reinforcement learning. In AAAI Conference on Artificial Intelligence, 2019.

Clare Lyle, Mark Rowland, Georg Ostrovski, and Will Dabney. On the effect of auxiliary tasks on representation dynamics. In International Conference on Artificial Intelligence and Statistics (AISTATS), 2021.

Clare Lyle, Mark Rowland, and Will Dabney. Understanding and preventing capacity loss in reinforcement learning. In International Conference on Learning Representations (ICLR), 2022.

Clare Lyle, Zeyu Zheng, Khimya Khetarpal, Hado van Hasselt, Razvan Pascanu, James Martens, and Will Dabney. Disentangling the causes of plasticity loss in neural networks. CoRR, abs/2402.18762, 2024.

Marlos C. Machado, Marc G. Bellemare, Erik Talvitie, Joel Veness, Matthew Hausknecht, and Michael Bowling. Revisiting the arcade learning environment: Evaluation protocols and open problems for general agents. Journal of Artificial Intelligence Research (JAIR), 61:523–562, 2018.

Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.

Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In International Conference on Machine Learning (ICML), 2016.

Johan Obando-Ceron, Ghada Sokar, Timon Willi, Clare Lyle, Jesse Farebrother, Jakob Foerster, Gintare Karolina Dziugaite, Doina Precup, and Pablo Samuel Castro. Mixtures of experts unlock parameter scaling for deep RL. CoRR, abs/2402.08609, 2024.

Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. PyTorch: An imperative style, high-performance deep learning library. In Neural Information Processing Systems (NeurIPS), 2019.

Silvia L. Pintea, Yancong Lin, Jouke Dijkstra, and Jan C. van Gemert. A step towards understanding why classification helps regression. In IEEE International Conference on Computer Vision (ICCV), pages 19972–19981, 2023.

Joan Puigcerver, Carlos Riquelme Ruiz, Basil Mustafa, and Neil Houlsby. From sparse to soft mixtures of experts.
In International Conference on Learning Representations (ICLR), 2024.

Gregory Rogez, Philippe Weinzaepfel, and Cordelia Schmid. LCR-Net++: Multi-person 2D and 3D pose detection in natural images. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 42(5):1146–1161, 2019.

Rasmus Rothe, Radu Timofte, and Luc Van Gool. Deep expectation of real and apparent age from a single image without facial landmarks. International Journal of Computer Vision (IJCV), 126(2-4):144–157, 2018.

Mark Rowland, Yunhao Tang, Clare Lyle, Rémi Munos, Marc G. Bellemare, and Will Dabney. The statistical benefits of quantile temporal-difference learning for value estimation. In International Conference on Machine Learning (ICML), 2023.

Anian Ruoss, Grégoire Delétang, Sourabh Medapati, Jordi Grau-Moya, Li Kevin Wenliang, Elliot Catt, John Reid, and Tim Genewein. Grandmaster-level chess without search. CoRR, abs/2402.04494, 2024.

Julian Schrittwieser, Ioannis Antonoglou, Thomas Hubert, Karen Simonyan, Laurent Sifre, Simon Schmitt, Arthur Guez, Edward Lockhart, Demis Hassabis, Thore Graepel, Timothy P. Lillicrap, and David Silver. Mastering Atari, Go, chess and shogi by planning with a learned model. Nature, 588(7839):604–609, 2020.

David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, Yutian Chen, Timothy Lillicrap, Fan Hui, Laurent Sifre, George van den Driessche, Thore Graepel, and Demis Hassabis. Mastering the game of Go without human knowledge. Nature, 550(7676):354–359, 2017.

Charlie Victor Snell, Ilya Kostrikov, Yi Su, Sherry Yang, and Sergey Levine. Offline RL for natural language generation with implicit language Q-learning. In International Conference on Learning Representations (ICLR), 2023.

Jost Tobias Springenberg, Abbas Abdolmaleki, Jingwei Zhang, Oliver Groth, Michael Bloesch, Thomas Lampe, Philemon Brakel, Sarah Bechtle, Steven Kapturowski, Roland Hafner, et al. Offline actor-critic reinforcement learning scales to large models. CoRR, abs/2402.05546, 2024.

Lawrence Stewart, Francis Bach, Quentin Berthet, and Jean-Philippe Vert. Regression as classification: Influence of task formulation on neural network features. In International Conference on Artificial Intelligence and Statistics (AISTATS), 2023.

Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.

Luís Torgo and João Gama. Regression by classification. In Brazilian Symposium on Artificial Intelligence, pages 51–60. Springer, 1996.

Aäron Van Den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. In International Conference on Machine Learning (ICML), 2016.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Neural Information Processing Systems (NeurIPS), 2017.

Kaiwen Wang, Kevin Zhou, Runzhe Wu, Nathan Kallus, and Wen Sun. The benefits of being distributional: Small-loss bounds for reinforcement learning. In Neural Information Processing Systems (NeurIPS), 2023.

Sholom M. Weiss and Nitin Indurkhya. Rule-based machine learning methods for functional prediction. Journal of Artificial Intelligence Research (JAIR), 3:383–403, 1995.
Shihao Zhang, Linlin Yang, Michael Bi Mi, Xiaoxu Zheng, and Angela Yao. Improving deep regression with ordinal entropy. In International Conference on Learning Representations (ICLR), 2023.

A. Reference Implementations

import jax
import jax.scipy.special
import jax.numpy as jnp

def hl_gauss_transform(
    min_value: float,
    max_value: float,
    num_bins: int,
    sigma: float,
):
    """Histogram loss transform for a normal distribution."""
    support = jnp.linspace(min_value, max_value, num_bins + 1, dtype=jnp.float32)

    def transform_to_probs(target: jax.Array) -> jax.Array:
        cdf_evals = jax.scipy.special.erf((support - target) / (jnp.sqrt(2) * sigma))
        z = cdf_evals[-1] - cdf_evals[0]
        bin_probs = cdf_evals[1:] - cdf_evals[:-1]
        return bin_probs / z

    def transform_from_probs(probs: jax.Array) -> jax.Array:
        centers = (support[:-1] + support[1:]) / 2
        return jnp.sum(probs * centers)

    return transform_to_probs, transform_from_probs

Listing 1 | An implementation of HL-Gauss (Imani and White, 2018) in Jax (Bradbury et al., 2018).

import torch
import torch.special
import torch.nn as nn
import torch.nn.functional as F

class HLGaussLoss(nn.Module):
    def __init__(self, min_value: float, max_value: float, num_bins: int, sigma: float):
        super().__init__()
        self.min_value = min_value
        self.max_value = max_value
        self.num_bins = num_bins
        self.sigma = sigma
        self.support = torch.linspace(
            min_value, max_value, num_bins + 1, dtype=torch.float32
        )

    def forward(self, logits: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        return F.cross_entropy(logits, self.transform_to_probs(target))

    def transform_to_probs(self, target: torch.Tensor) -> torch.Tensor:
        cdf_evals = torch.special.erf(
            (self.support - target.unsqueeze(-1))
            / (torch.sqrt(torch.tensor(2.0)) * self.sigma)
        )
        z = cdf_evals[..., -1] - cdf_evals[..., 0]
        bin_probs = cdf_evals[..., 1:] - cdf_evals[..., :-1]
        return bin_probs / z.unsqueeze(-1)

    def transform_from_probs(self, probs: torch.Tensor) -> torch.Tensor:
        centers = (self.support[:-1] + self.support[1:]) / 2
        return torch.sum(probs * centers, dim=-1)

Listing 2 | An implementation of HL-Gauss (Imani and White, 2018) in PyTorch (Paszke et al., 2019).
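As a brief usage sketch (ours, not part of the original listings), combining hl_gauss_transform from Listing 1 with a cross-entropy loss, using the hyperparameters from Tables B.2 and B.3 (51 bins over [-10, 10], σ/ς = 0.75):

import jax
import jax.numpy as jnp

num_bins = 51
min_value, max_value = -10.0, 10.0
sigma = 0.75 * (max_value - min_value) / num_bins  # smoothing ratio sigma/bin_width = 0.75

to_probs, from_probs = hl_gauss_transform(min_value, max_value, num_bins, sigma)

td_target = jnp.float32(1.7)                 # scalar regression target
logits = jnp.zeros(num_bins)                 # e.g. Q-network output for one (s, a) pair
probs = to_probs(td_target)                  # soft categorical target
loss = -jnp.sum(probs * jax.nn.log_softmax(logits))  # HL-Gauss cross-entropy
q_value = from_probs(jax.nn.softmax(logits))         # scalar prediction used for acting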
B. Experimental Methodology

In the subsequent sections we outline the experimental methodology for each domain herein.

B.1. Atari

Both our online and offline RL regression baselines are built upon the Jax (Bradbury et al., 2018) implementation of DQN+Adam in Dopamine (Castro et al., 2018). Similarly, each of the classification methods (i.e., HL-Gauss and Two-Hot) was built upon the Jax (Bradbury et al., 2018) implementation of C51 in Dopamine (Castro et al., 2018). Hyperparameters for DQN+Adam are provided in Table B.1, along with any hyperparameter differences for C51 (Table B.2), Two-Hot (Table B.2), and HL-Gauss (Table B.3). Unless otherwise stated, the online RL results in the paper were run for 200M frames on 60 Atari games with five seeds per game. The offline RL results were run on the 17 games in Kumar et al. (2021) with three seeds per game. The network architecture for both the online and offline results is the standard DQN Nature architecture that employs three convolutional layers followed by a single non-linear fully-connected layer before outputting the action-values.

Table B.1 | DQN+Adam Hyperparameters.
  Discount Factor γ: 0.99
  n-step: 1
  Minimum Replay History: 20,000 agent steps
  Agent Update Frequency: 4 environment steps
  Target Network Update Frequency: 8,000 agent steps
  Exploration ε: 0.01
  Exploration ε decay: 250,000 agent steps
  Optimizer: Adam
  Learning Rate: 6.25 × 10^-5
  Adam ε: 1.5 × 10^-4
  Sticky Action Probability: 0.25
  Maximum Steps per Episode: 27,000 agent steps
  Replay Buffer Size: 1,000,000
  Batch Size: 32

Table B.2 | C51 & Two-Hot Hyperparameters. Differences in hyperparameters from DQN+Adam (Table B.1).
  Number of Locations: 51
  [v_min, v_max]: [-10, 10]
  Learning Rate: 0.00025
  Adam ε: 0.0003125

Table B.3 | HL-Gauss Hyperparameters. Differences in hyperparameters from C51 (Table B.2).
  Smoothing Ratio σ/ς: 0.75

B.1.1. Mixtures of Experts

All experiments run with SoftMoE reused the experimental methodology of Obando-Ceron et al. (2024). Specifically, we replace the penultimate layer of DQN+Adam in Dopamine (Castro et al., 2018) with a SoftMoE (Puigcerver et al., 2024) module. The MoE results were run with the Impala ResNet architecture (Espeholt et al., 2018). We reuse the same set of 20 games from Obando-Ceron et al. (2024) and run each configuration for five seeds per game. All classification methods reused the parameters from Table B.2 for C51 and Two-Hot, or Table B.3 for HL-Gauss.

B.1.2. Multi-Task & Multi-Game

The multi-task and multi-game results follow exactly the methodology outlined in Ali Taïga et al. (2023) and Kumar et al. (2023), respectively. We reuse the hyperparameters for HL-Gauss outlined in Table B.3. For the multi-task results, each agent is run for five seeds per game. Due to the prohibitive compute of the multi-game setup, we run each configuration for a single seed.

B.2. Chess

We follow exactly the setup in Ruoss et al. (2024), with the only difference being the use of HL-Gauss with a smoothing ratio σ/ς = 0.75. Specifically, we take the action-values produced by Stockfish and project them onto a categorical distribution using HL-Gauss. As Ruoss et al. (2024) were already performing classification, we reuse the parameters of their categorical distribution, those being 128 bins evenly divided over the range [0, 1]. For each parameter configuration we train a single agent and report the evaluation puzzle accuracy. Puzzle accuracy numbers for one-hot and AlphaZero w/ MCTS were taken directly from Ruoss et al. (2024, Table 6).

B.3. Robotic Manipulation Experiments

We study a large-scale vision-based robotic manipulation setting on a mobile manipulator robot with 7 degrees of freedom, which is visualized in Figure 10 (left). The tabletop robot manipulation domain consists of a tabletop with various randomized objects spawned on top of the countertop. A RetinaGAN is applied to transform the simulation images closer to real-world image distributions, following the method in Ho et al. (2021). We implement a Q-Transformer policy following the procedures in Chebotar et al. (2023). Specifically, we incorporate autoregressive Q-learning by learning Q-values per action dimension, incorporate conservative regularization to effectively learn from suboptimal data, and utilize Monte-Carlo returns.

Figure B.1 | Robot manipulation domain. The simulated robot manipulation setting (4.3.3) consists of a tabletop with randomized objects. A learned RetinaGAN transformation is applied to make the visual observation inputs more realistic.
B.4. Regression Target Magnitude & Loss of Plasticity

To assess whether classification losses are more robust when learning non-stationary targets of increasing magnitude, we leverage the synthetic setup from Lyle et al. (2024). Specifically, we train a convolutional neural network f_θ: R^{32×32×3} → R that takes CIFAR-10 images as input and outputs a scalar prediction. The goal is to fit the regression target

  y = sin(β · f_θ*(x)) + b,

where β = 10^5, θ* is a set of randomly sampled target parameters for the same convolutional architecture, and b is a bias that changes the magnitude of the prediction targets. It is clear that increasing b shouldn't result in a more challenging regression task. When learning a value function with TD methods, the regression targets are non-stationary and hopefully increasing in magnitude (corresponding to an improving policy). To simulate this setting, we consider fitting the network on the increasing sequence b ∈ {0, 8, 16, 24, 32}. For each value of b, we sample a new set of target parameters θ* and regress towards y for 5,000 gradient steps with a batch size of 512, using the Adam optimizer with a learning rate of 10^-3.

We evaluate the Mean Squared Error (MSE) throughout training for three methods: Two-Hot, HL-Gauss, and L2 regression. For both Two-Hot and HL-Gauss we use a support of [-40, 40] with 101 bins. Figure 17 depicts the MSE throughout training averaged over 30 seeds for each method. One can see that the network trained with L2 regression does indeed lose its ability to rapidly fit targets of increasing magnitude, consistent with Lyle et al. (2024). On the other hand, the classification methods are more robust and tend to converge to the same MSE irrespective of the target magnitude. Furthermore, we can see that HL-Gauss outperforms Two-Hot, consistent with our previous findings. These results help provide some evidence that perhaps one of the reasons classification outperforms regression is that the network remains more plastic under non-stationary targets.
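A runnable schematic of this protocol follows (our reconstruction: a small MLP stands in for the convolutional network, only the L2 variant is shown, and a single SGD step replaces the 5,000 Adam steps per bias):

import jax
import jax.numpy as jnp

BETA = 1e5

def init_mlp(key, sizes=(32 * 32 * 3, 64, 1)):
    keys = jax.random.split(key, len(sizes) - 1)
    return [(jax.random.normal(k, (m, n)) / jnp.sqrt(m), jnp.zeros(n))
            for k, m, n in zip(keys, sizes[:-1], sizes[1:])]

def mlp(params, x):
    for w, b in params[:-1]:
        x = jax.nn.relu(x @ w + b)
    w, b = params[-1]
    return (x @ w + b).squeeze(-1)

def l2_loss(params, x, y):
    return jnp.mean((mlp(params, x) - y) ** 2)

key = jax.random.PRNGKey(0)
x = jax.random.normal(key, (512, 32 * 32 * 3))  # stand-in for CIFAR-10 images
params = init_mlp(key)                          # the learner persists across biases
for i, bias in enumerate((0.0, 8.0, 16.0, 24.0, 32.0)):
    target_params = init_mlp(jax.random.fold_in(key, i + 1))  # fresh theta* per bias
    y = jnp.sin(BETA * mlp(target_params, x)) + bias          # high-frequency targets
    grads = jax.grad(l2_loss)(params, x, y)
    params = jax.tree_util.tree_map(lambda p, g: p - 1e-3 * g, params, grads)

The key point this makes concrete is that the same learner network must repeatedly re-fit targets whose magnitude grows with the bias, which is what stresses plasticity.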
C. Per-Game Atari Results

[Per-game training-curve panels omitted: each of the 60 panels plots Episode Return vs. Iteration for HL-Gauss, Two-Hot, C51, and MSE on one game, from AirRaid through Zaxxon.]

Figure C.1 | Training curves on single-task online RL (4.1) for all 60 Atari games. All games were run for 200M frames with DQN (Adam), C51, Two-Hot, and HL-Gauss.
[Per-game bar chart omitted: the left panel ranks all 60 games by the percent improvement in human-normalized score of HL-Gauss over MSE; the right panel shows IQM normalized training curves over 200 iterations for HL-Gauss, Two-Hot, C51, and MSE.]

Figure C.2 | HL-Gauss vs MSE per game in single-task online RL (4.2.2). (Left) Each column displays the relative final performance of HL-Gauss with respect to MSE in the single-task online RL training curves; this is a summary of the curves displayed in Figure C.1. Note that HL-Gauss outperforms MSE in 3/4 of all games, and that HL-Gauss scores at least 10% higher on 1/2 of all games. (Right) IQM normalized training curves throughout training.

D. Additional Results

[Training-curve plots omitted for Figures D.1 and D.2: IQM normalized score vs. number of frames (up to 1400M) for IMPALA-CNN and ResNet-{18, 34, 50, 101}, each with HL-Gauss and MSE.]

Figure D.1 | Multi-task online RL (4.2.2) training curves for Space Invaders trained concurrently on 29 game variants. Note that for every architecture, the HL-Gauss variant scales better than its respective MSE variant.

Figure D.2 | Multi-task online RL (4.2.2) training curves for Asteroids trained concurrently on 63 game variants. These results investigate the per-architecture scaling properties of the MSE critic loss and the cross-entropy HL-Gauss loss. Note that with architectures larger than ResNet-18, HL-Gauss keeps improving while MSE performance drops after 1300M frames. These larger architectures also all reach higher peak IQM scores with HL-Gauss.

[Plot omitted: IQM normalized score on Space Invaders (29 variants) across architectures for HL-Gauss and MSE.]

Figure D.3 | Scaling curves on Multi-task Online RL. Online RL scaling results with actor-critic IMPALA with ResNets on Space Invaders. HL-Gauss outperforms MSE for all models. Since human scores are not available for variants, we report normalized scores using a baseline IMPALA agent with MSE loss. See 4.2.2 for more details.

[Plot omitted: IQM normalized score vs. number of parameters (31M, 60M, 79M) for MSE, C51, Two-Hot, and HL-Gauss, with the Multi-Game Decision Transformer (200M) as a baseline.]

Figure D.4 | Multi-game Offline RL results presented in terms of DQN normalized scores. Note that when aggregate results are computed with DQN normalization, HL-Gauss exhibits a faster rate of improvement than C51 as the number of parameters scales up.
Grandmaster-Level Chess Without Search

Anian Ruoss*,1, Grégoire Delétang*,1, Sourabh Medapati1, Jordi Grau-Moya1, Li Kevin Wenliang1, Elliot Catt1, John Reid1 and Tim Genewein1
*Equal contributions, 1Google DeepMind

The recent breakthrough successes in machine learning are mainly attributed to scale: namely large-scale attention-based architectures and datasets of unprecedented scale. This paper investigates the impact of training at scale for chess. Unlike traditional chess engines that rely on complex heuristics, explicit search, or a combination of both, we train a 270M parameter transformer model with supervised learning on a dataset of 10 million chess games. We annotate each board in the dataset with action-values provided by the powerful Stockfish 16 engine, leading to roughly 15 billion data points. Our largest model reaches a Lichess blitz Elo of 2895 against humans, and successfully solves a series of challenging chess puzzles, without any domain-specific tweaks or explicit search algorithms. We also show that our model outperforms AlphaZero's policy and value networks (without MCTS) and GPT-3.5-turbo-instruct. A systematic investigation of model and dataset size shows that strong chess performance only arises at sufficient scale. To validate our results, we perform an extensive series of ablations of design choices and hyperparameters.

1. Introduction

One of the most iconic successes of AI is IBM's Deep Blue (Campbell et al., 2002) defeating the world chess champion Garry Kasparov in 1997. This was widely seen as the first major demonstration that machines are capable of out-competing humans in intellectual domains that require sophisticated rational reasoning and strategic planning: feats of intelligence that were long believed to be exclusive to humans. Deep Blue was an expert system that combined an extensive database of chess knowledge and heuristics with a strong tree search algorithm (alpha-beta pruning). Almost all modern and much stronger chess engines follow a similar recipe, with Stockfish 16 currently being the world's strongest (publicly available) engine. Notable exceptions are DeepMind's AlphaZero (Silver et al., 2017), which uses search and self-taught heuristics but no human chess knowledge, and its open-source replication Leela Chess Zero, which currently often comes in as a close second in chess computer competitions (Haworth and Hernandez, 2021).

Recent breakthroughs in scaling up AI systems have resulted in dramatic progress in cognitive domains that remained challenging for earlier-generation systems like Deep Blue. This progress has been driven by general-purpose techniques, in particular (self-) supervised training on expert data with attention-based architectures (Vaswani et al., 2017) applied at scale, resulting in the development of LLMs with impressive and unexpected cognitive abilities like OpenAI's GPT series (Brown et al., 2020; OpenAI, 2023), the LLaMA family of models (Touvron et al., 2023a,b), or Google DeepMind's Chinchilla (Hoffmann et al., 2022) and Gemini (Anil et al., 2023). However, it is unclear whether the same technique would work in a domain like chess, where successful policies typically rely on sophisticated algorithmic reasoning (search, dynamic programming) and complex heuristics. Thus, the main question of this paper is:

Is it possible to use supervised learning to obtain a chess policy that generalizes well and thus leads to strong play without explicit search?
To study this question we apply the success recipe of general supervised training at scale to chess (see Figure 1). We use a standard attention-based architecture and a standard supervised training protocol to learn to predict action-values (corresponding to win-percentages) for chess boards. The strength of the resulting chess policy thus depends entirely on the strength of the underlying action-value predictor. To get a large corpus of ground-truth action-values we use Stockfish 16 as an oracle to annotate millions of board states obtained from randomly drawn games on lichess.org, which are mostly played by humans varying significantly in playing strength. As we will show, this leads to a strong, grandmaster-level chess policy (Lichess blitz Elo 2895 against humans), driven by a modern transformer to predict action-values without any explicit search. This policy outperforms GPT-3.5-turbo-instruct (and, therefore, GPT-4 (Carlini, 2023)) and AlphaZero's policy and value networks, which reach Elo ratings of 1755, 1620, and 1853, respectively. Therefore, our work shows that it is possible to distill a good approximation of Stockfish 16 into a feed-forward neural network via standard supervised learning at sufficient scale, akin to the quote famously attributed to José Raúl Capablanca, world chess champion from 1921 to 1927: "I see only one move ahead, but it is always the correct one."

[Figure 1: overview schematic. Top (data annotation): N games are drawn from Lichess, all unique boards are extracted, and Stockfish 16 annotates each board state s (FEN) with a state-value VSF(s) (win %), each legal action a (UCI) with an action-value QSF(s, a), and the best action aSF(s). Bottom left (datasets): training sets of 10k to 10M games plus test sets, including a 10k-puzzle test set. Bottom right (predictors): three prediction targets (action-value, state-value, behavioral cloning), their losses, and the policies they induce.]

Figure 1 | Top (Data annotation): We extract all boards from randomly drawn games from Lichess, discard duplicate board states, and compute the state-value for each board as the win-probability via Stockfish. We compute action-values and the best action for all legal moves of a board state in the same way. Bottom left (Dataset creation): We construct training and test sets of various sizes (see Table A1). Our largest training set has 15.3B action-values. Drawing games i.i.d. from the game database for our test set leads to 14.7% of test-boards appearing in the largest training set (mostly very early game states). We also use a test set of 10k chess puzzles that come with a correct sequence of moves. Bottom right (Policies): We train predictors on three targets (state- or action-values, or oracle actions), each of which can be used for a chess policy. Our value predictors are discrete discriminators (classifiers) that predict into which bin $z \in \{0, \ldots, K-1\}$ the oracle value falls.
We make the following main contributions:

- We distill an approximation of Stockfish 16 into a neural predictor that generalizes well to novel board states.
- We construct a policy from our neural predictor and show that it plays chess at grandmaster level (Lichess blitz Elo 2895) against humans and successfully solves many challenging chess puzzles (up to Elo 2800). To the best of our knowledge this is currently the strongest chess engine without explicit search.
- We perform ablations of the model size and data set size, showing that robust generalization and strong chess play only arise at sufficient scale.

2. Methods

We now provide details on the dataset creation, the predictors and policies, and the evaluation (see Figure 1).

2.1. Data

To construct a dataset for supervised training we download 10 million games from Lichess (lichess.org) from February 2023. We extract all board states $s$ from these games and estimate the state-value $V^{\text{SF}}(s)$ for each state with Stockfish 16 using a time limit of 50ms per board (unbounded depth and level). The value of a state is the win percentage estimated by Stockfish, lying between 0% and 100%. (Stockfish returns a score in centipawns that we convert to the win percentage with the standard formula $\text{win}\% = 50\% \cdot 2 / (1 + \exp(-0.00368208 \cdot \text{centipawns}))$ from https://lichess.org/page/accuracy.) We also use Stockfish to estimate action-values $Q^{\text{SF}}(s, a)$ for all legal actions $a \in \mathcal{A}_\text{legal}(s)$ in each state. Here we use a time limit of 50ms per state-action pair (unbounded depth and max skill level), which corresponds to an oracle Lichess blitz Elo of 2713 (see Section 3.1). The action-values (win percentages) also determine the oracle best action $a^{\text{SF}}(s)$:

$$a^{\text{SF}}(s) = \arg\max_{a \in \mathcal{A}_\text{legal}(s)} Q^{\text{SF}}(s, a).$$

We rarely get time-outs when computing action-values via Stockfish, in which case we cannot determine the best action for a board state and drop the corresponding record from the behavioral cloning training set (see Table A1). Since we train on individual boards and not whole games we randomly shuffle the dataset after annotation. For our largest training dataset, based on 10M games, this results in 15.32B action-value estimates (or 530M state-value estimates and best oracle actions) to train on. To create test datasets we follow the same annotation procedure, but on 1k games downloaded from a different month (March 2023; 1.8M action-value estimates, 60k state-value estimates and best oracle actions). Since there is only a small number of early-game board-states and players often play popular openings, this i.i.d. test set contains 14.7% of boards that are also in the training set. We do not remove them, as doing so would introduce distributional shift and skew test-set metrics. Finally, we also create a puzzle test set, following the procedure in Carlini (2023), consisting of 10k challenging board states that come with a correct sequence of moves to solve the puzzle, which we compare against in our puzzle set accuracy evaluation. Only 1.33% of the puzzle set boards appear in the training set (i.e., the initial board states, not complete solution sequences). Since evaluation of puzzle solutions is slow, we use a subset of 1k puzzles in some of our evaluations (1.4% overlap with training set).
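For concreteness, the centipawn-to-win-percentage conversion above can be sketched in a few lines of Python (a minimal sketch; the function name is ours, and only the constant and formula come from the Lichess accuracy page cited above):

```python
import math

def centipawns_to_win_percent(centipawns: float) -> float:
    """Convert a Stockfish centipawn score into a win percentage in [0, 100],
    using the standard formula from https://lichess.org/page/accuracy."""
    return 50.0 * 2.0 / (1.0 + math.exp(-0.00368208 * centipawns))

# A score of 0 centipawns is an even position (50%); large positive scores
# approach 100%, large negative scores approach 0%.
assert abs(centipawns_to_win_percent(0.0) - 50.0) < 1e-9
```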
Value binning The predictors we train are discrete discriminators (classifiers), therefore we convert win-percentages (the ground-truth state- or action-values) into discrete classes via binning: we divide the interval between 0% and 100% uniformly into $K$ bins (non-overlapping sub-intervals) and assign a one-hot code to each bin $b_i \in \{0, \ldots, K-1\}$. If not mentioned otherwise, $K = 128$. For our behavioral cloning experiments we train to predict oracle actions directly, which are already discrete. We perform ablations for the number of bins in Section 3.4.

2.2. Model

For all our predictors we use a modern decoder-only transformer backbone (Touvron et al., 2023a,b; Vaswani et al., 2017) to parameterize a discrete probability distribution by normalizing the transformer's outputs with a log-softmax layer. The model thus outputs log-probabilities. The context-size is 79 for action-value prediction, and 78 for state-value prediction and behavioral cloning (see Tokenization below). The output size is $K$ (the number of bins) for action- and state-value prediction, and 1968 (the number of all possible legal actions) for behavioral cloning. We use learned positional encodings (Gehring et al., 2017) as the length of the input sequences is constant. Our largest model has roughly 270 million parameters. We provide all details for the model-size ablations in Section 3.3.

Tokenization Board states $s$ are encoded as FEN strings which we convert to fixed-length strings of 77 characters where the ASCII-code of each character is one token. A FEN string is a description of all pieces on the board, whose turn it is, the castling availability for both players, a potential en passant target, a half-move clock and a full-move counter. We essentially take any variable-length field in the FEN string, and convert it into a fixed-length sub-string by padding with '.' if needed. We never flip the board; the FEN string always starts at rank 1, even when it is black's turn. We store the actions in UCI notation (e.g., e2e4 for the well-known white opening move). To tokenize them we determine all possible legal actions across games, which is 1968, sort them alphanumerically (case-sensitive), and take the action's index as the token, meaning actions are always described by a single token (all details in Appendix A.1).

Training protocol Predictors are trained by minimizing cross-entropy loss (i.e., log-loss) via mini-batch based stochastic gradient descent using Adam (Kingma and Ba, 2015). We train for 10 million steps, which corresponds to 2.67 epochs for a batch size of 4096 with 15.32B data points (cf. Table A1). The target labels are either bin-indices in the case of state- or action-value prediction (see Section 2.1) or action indices for behavioral cloning, using a one-hot encoding in all cases (details in Appendices A.2 and A.3).

2.3. Predictors and Policies

Our predictors are discrete distributions parameterized by neural networks $P_\theta(z \mid x)$ that take a tokenized input $x$ and output a predictive distribution over discrete labels $\{0, \ldots, K-1\}$. Depending on the prediction target we distinguish between three tasks (see Figure 1 for an overview).

(AV) Action-value prediction The target label is the bin $z$ into which the ground-truth action-value estimate $Q^{\text{SF}}(s, a)$ falls. The input to the predictor is the concatenation of tokenized state and action. The loss for a single data point is:

$$-\log P_\theta^{\text{AV}}(z \mid s, a) \quad \text{with} \quad z := \text{bin}_K(Q^{\text{SF}}(s, a)), \qquad (1)$$

where $K$ is the number of bins and $\text{bin}_K(\cdot)$ is a function that computes the (one-hot) bin-index of its input value.
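As an illustration of $\text{bin}_K(\cdot)$ and the resulting classification target in Eq. (1), here is a minimal numpy sketch (function names are ours; it assumes values on the 0-100 win-percentage scale):

```python
import numpy as np

def bin_k(value: float, k: int = 128) -> int:
    """Map a win percentage in [0, 100] to a bin index in {0, ..., k-1},
    using k uniform, non-overlapping sub-intervals."""
    edges = np.linspace(0.0, 100.0, k + 1)
    # digitize returns 1..k for in-range values; shift to 0-based and clip
    # so that exactly 100.0 falls into the last bin.
    return int(np.clip(np.digitize(value, edges) - 1, 0, k - 1))

def one_hot(index: int, k: int = 128) -> np.ndarray:
    """One-hot target vector for the cross-entropy loss in Eq. (1)."""
    target = np.zeros(k, dtype=np.float32)
    target[index] = 1.0
    return target

z = bin_k(53.0)          # e.g. an action-value of 53% falls into bin 67
target = one_hot(z)
```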
To use the predictor in a policy, we evaluate the predictor for all legal actions in the current state and pick the action with maximal expected action-value (see the code sketch below):

$$\hat{a}^{\text{AV}}(s) = \arg\max_{a \in \mathcal{A}_\text{legal}(s)} \underbrace{\mathbb{E}_{P_\theta^{\text{AV}}(z \mid s, a)}[z]}_{=:\,\hat{Q}(s, a)}.$$

(SV) State-value prediction The target label is the bin $z$ that the ground-truth state-value $V^{\text{SF}}(s)$ falls into. The input to the predictor is the tokenized state. The loss for a single data point is:

$$-\log P_\theta^{\text{SV}}(z \mid s) \quad \text{with} \quad z := \text{bin}_K(V^{\text{SF}}(s)). \qquad (2)$$

To use the state-value predictor as a policy, we evaluate the predictor for all states $s' = T(s, a)$ that are reachable via legal actions from the current state (where $T(s, a)$ is the deterministic transition of taking action $a$ in state $s$). Since $s'$ implies that it is now the opponent's turn, the policy picks the action that leads to the state with the worst expected value for the opponent:

$$\hat{a}^{\text{SV}}(s) = \arg\min_{a \in \mathcal{A}_\text{legal}(s)} \underbrace{\mathbb{E}_{P_\theta^{\text{SV}}(z \mid s')}[z]}_{=:\,\hat{V}(s')}.$$

(BC) Behavioral cloning The target label is the (one-hot) action-index of the ground-truth action $a^{\text{SF}}(s)$ within the set of all possible actions (see Tokenization in Section 2.2). The input to the predictor is the tokenized state, which leads to the loss for a single data point:

$$-\log P_\theta^{\text{BC}}(a^{\text{SF}}(s) \mid s). \qquad (3)$$

This straightforwardly gives a policy that picks the highest-probability action:

$$\hat{a}^{\text{BC}}(s) = \arg\max_{a \in \mathcal{A}_\text{legal}(s)} P_\theta^{\text{BC}}(a \mid s).$$

2.4. Evaluation

We use the following evaluation metrics to compare our models against each other and/or measure training progress. The first two metrics evaluate the predictors only; the second two evaluate the policies constructed from our predictors.

Action-accuracy The test set percentage where the predictor policy picks the ground-truth best action: $\hat{a}(s) = a^{\text{SF}}(s)$.

Action-ranking (Kendall's $\tau$) The average Kendall rank correlation (a standard statistical test) across the test set, quantifying the correlation of the predicted actions with the ground-truth ranking by Stockfish in each state, ranging from -1 (exact inverse order) to 1 (exact same order), with 0 being no correlation. The predictor ranking is given by $\hat{Q}(s, a)$, $-\hat{V}(T(s, a))$, and $P_\theta^{\text{BC}}(a \mid s)$, respectively, for all legal actions. The ground-truth ranking is given by Stockfish's action-values $Q^{\text{SF}}(s, a)$ for all legal actions.

Puzzle-accuracy We evaluate our policies on their capability of solving puzzles from a collection of Lichess puzzles that are rated by Elo difficulty from 399 to 2867, calculated by Lichess based on how often each puzzle has been solved correctly. We use puzzle-accuracy as the percentage of puzzles where the policy's action-sequence exactly matches the known solution action-sequence. For our main puzzle result in Section 3.2 we use 10k puzzles to report puzzle-accuracy, otherwise we use the first 1k puzzles to speed up evaluation.

Game playing strength (Elo) We evaluate the playing strength (measured as an Elo rating) of the predictor policies in two different ways: (i) we play Blitz games on Lichess against either only humans or only bots, and (ii) we run an internal tournament between all the agents from Table 1 except for GPT-3.5-turbo-instruct. We play 400 games per pair of agents, yielding 8400 games in total, and compute Elo ratings with BayesElo (Coulom, 2008), with the default confidence parameter of 0.5. We anchor the relative BayesElo values to the Lichess Elo vs. bots of our 270M model.
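To make the three policies of Section 2.3 concrete, here is a minimal Python sketch. The names (av_model, sv_model, bc_model, transition, token_of) are stand-ins for the trained transformers and the environment, and using bin centers as representative values for the expectation over bins is our reading of $\mathbb{E}[z]$ (for the argmax/argmin, any monotone value assignment to bins is equivalent):

```python
import numpy as np

BIN_CENTERS = (np.arange(128) + 0.5) * (100.0 / 128)  # representative win % per bin

def expected_value(log_probs: np.ndarray) -> float:
    """Expected win percentage under a predicted distribution over K bins."""
    return float(np.exp(log_probs) @ BIN_CENTERS)

def av_policy(state, legal_actions, av_model):
    """(AV) Pick the action with maximal expected action-value."""
    return max(legal_actions,
               key=lambda a: expected_value(av_model(state, a)))

def sv_policy(state, legal_actions, sv_model, transition):
    """(SV) Pick the action minimizing the opponent's expected state-value
    in the successor state s' = T(s, a)."""
    return min(legal_actions,
               key=lambda a: expected_value(sv_model(transition(state, a))))

def bc_policy(state, legal_actions, bc_model, token_of):
    """(BC) Pick the legal action with the highest predicted probability;
    `token_of` maps a UCI action to its index among the 1968 action tokens."""
    log_p = bc_model(state)
    return max(legal_actions, key=lambda a: log_p[token_of(a)])
```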
For the game-playing strength evaluations only (i.e., not for determining the puzzle accuracy) we use a softmax policy for the first 5 full-moves, instead of the argmax policy described earlier, with a low temperature of 0.005 for the value or action-value functions, 0.05 for the action functions (like the policy network of AlphaZero), and 0.5 for the visit counts used in the full version of AlphaZero (a code sketch follows at the end of this section). This renders the policies stochastic, to both create variety in games and prevent simple exploits via repeated play.

2.5. Baselines

We compare the performance of our models against Stockfish 16 (with a time limit of 0.05s per legal move, i.e., the oracle used to generate our dataset), three variants of AlphaZero (Silver et al., 2017): (i) the original with 400 MCTS simulations, (ii) only the policy network, and (iii) only the value network (where (ii) and (iii) perform no additional search), and GPT-3.5-turbo-instruct from Carlini (2023). AlphaZero's networks have 27.6M parameters and are trained on 44M games (details in Schrittwieser et al. (2020)). Note that these baselines have access to the whole game history (via the PGN), in contrast to our models that only observe the current game state (which contains very limited historical information via the FEN). This helps the baseline policies, for instance, to easily deal with threefold repetition (games are drawn if the same board state appears three times throughout the game), which requires a workaround for us (described in Section 5). Moreover, GPT-3.5-turbo-instruct also requires whole games encoded via PGN to reduce hallucinations according to Carlini (2023), who also finds that GPT-4 struggles to play full games without making illegal moves, so we do not compare against GPT-4.
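The softmax sampling described above can be sketched as follows (a minimal sketch; the function name and the value scale, win probabilities in [0, 1], are our assumptions):

```python
import numpy as np

def softmax_sample(values, temperature, rng):
    """Sample an index from softmax(values / temperature).

    Used for the first 5 full-moves to create variety across games; with a
    low temperature this stays close to the argmax policy."""
    logits = np.asarray(values, dtype=np.float64) / temperature
    logits -= logits.max()  # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))

rng = np.random.default_rng(0)
q_values = [0.53, 0.52, 0.48]  # hypothetical expected win probabilities
move_index = softmax_sample(q_values, temperature=0.005, rng=rng)
```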
Table 1 | Prediction and playing strength comparison for our models (three different sizes) against Stockfish 16, variants of AlphaZero (with and without Monte-Carlo tree search), and GPT-3.5-turbo-instruct. Tournament Elo ratings are determined by having the models play against each other and cannot be directly compared to the Lichess Elo. Lichess (blitz) Elo ratings result from playing against human opponents or bots on Lichess. Stockfish 16 (time limit of 50ms per move) is our data-generating oracle, thus obtaining a Kendall's τ of 1 and 100% action accuracy. Models operating on the PGN observe the full move history, whereas FENs only contain very limited historical information. Best results without search in bold.

Agent                            | Search | Input | Tournament Elo | Lichess Elo vs. Bots | vs. Humans | Puzzles (%) | Actions (%) | Kendall's τ
9M Transformer (ours)            | -      | FEN   | 2007 (±15)     | 2054                 | -          | 85.5        | 64.2        | 0.269
136M Transformer (ours)          | -      | FEN   | 2224 (±14)     | 2156                 | -          | 92.1        | 68.5        | 0.295
270M Transformer (ours)          | -      | FEN   | 2299 (±14)     | 2299                 | 2895       | 93.5        | 69.4        | 0.300
GPT-3.5-turbo-instruct           | -      | PGN   | -              | 1755                 | -          | 66.5        | -           | -
AlphaZero (policy net only)      | -      | PGN   | 1620 (±22)     | -                    | -          | 61.0        | -           | -
AlphaZero (value net only)       | -      | PGN   | 1853 (±16)     | -                    | -          | 82.1        | -           | -
AlphaZero (400 MCTS simulations) | yes    | PGN   | 2502 (±15)     | -                    | -          | 95.8        | -           | -
Stockfish 16 (0.05s) [oracle]    | yes    | FEN   | 2706 (±20)     | 2713                 | -          | 99.1        | 100.0       | 1.000

3. Results

Here we present our comprehensive experimental evaluation. For all parameters not explicitly mentioned we use the same setting across our two main experiments (Section 3.1, Section 3.2); for investigating scaling behavior and all ablations in Section 3.3 and Section 3.4 we use a different set of default settings (geared towards getting representative results with better computational efficiency). We provide all details in Appendix A.2 and Appendix A.3, respectively.

3.1. Main Result

In Table 1 we show the playing strength (internal tournament Elo, external Lichess Elo, and puzzle solving competence) and predictor metrics of our large-scale transformer models when trained on the full (10M games) training set. Our main evaluation compares three transformer models with 9M, 136M, and 270M parameters after training (none of them overfit the training set, as shown in Appendix B.1). The results show that all three models exhibit non-trivial generalization to novel boards and can successfully solve a large fraction of puzzles. Across all metrics, having larger models consistently improves scores, confirming that model scale matters for strong chess performance. Our largest model achieves a blitz Elo of 2895 against human players, which places it into grandmaster territory. However, the Elo drops when playing against bots on Lichess, which may be a result of having a significantly different player-pool, some minor technical issues, and perhaps a qualitative difference in how bots exploit weaknesses compared to humans (see Section 5 for a detailed discussion of these issues).

3.2. Puzzles

In Figure 2 we compare the puzzle performance of our 270M parameter model against Stockfish 16 (time limit of 50ms per move), GPT-3.5-turbo-instruct, and AlphaZero's value network. We use our large puzzle set of 10k puzzles, grouped by their assigned Elo difficulty from Lichess. Stockfish 16 performs the best across all difficulty categories, followed by our 270M model. AlphaZero's value network (trained on 44M games) and GPT-3.5-turbo-instruct achieve non-trivial puzzle performance, but significantly lag behind our model. We emphasize that solving the puzzles requires a correct move sequence, and since our policy cannot explicitly plan ahead, solving the puzzle sequences relies entirely on having good value estimates that can be used greedily.

[Figure 2: bar chart of puzzle accuracy (%) per puzzle rating bucket (Elo 200-400 up to 2800-3000) for Stockfish 16 (0.05s) [oracle], 270M Transformer (ours), AlphaZero (value net only), and GPT-3.5-turbo-instruct.]

Figure 2 | Puzzle solving competency comparison between our 270M transformer, Stockfish 16 (time limit of 50ms per move), AlphaZero's value network, and GPT-3.5-turbo-instruct on 10000 Lichess puzzles (curated following Carlini (2023)).

3.3. Scaling Laws

Figure 3 shows our scaling analysis over the dataset and model size. We visualize the puzzle accuracy (training and test loss in Figure A4), which correlates well with the other metrics and the overall playing strength. For small training set size (10k games, left panel) larger architectures (≥7M) start to overfit as training progresses. This effect disappears as the dataset size is increased to 100k (middle panel) and 1M games (right panel). The results also show that the final accuracy of a model increases as the dataset size is increased (consistently across model sizes). Similarly, we observe the general trend of increased architecture size leading to increased overall performance regardless of dataset size (as also shown in our main result in Section 3.1).

3.4. Variants and Ablations

We test a series of experimental variants and perform extensive ablations using the 9M parameter model. The results and conclusions drawn are used to inform and justify our design choices and determine default model-, data-, and training-configurations. Table 2 summarizes all results.
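As a concrete reference for the Kendall's τ numbers reported in Tables 1 and 2, the per-state rank correlation can be computed with scipy and averaged over all test-set states (a minimal sketch; the example arrays are hypothetical):

```python
import numpy as np
from scipy.stats import kendalltau

# Per-move scores for one test-set board state: the model's expected
# action-values and Stockfish's ground-truth action-values (win %).
q_model = np.array([53.1, 48.0, 51.2, 62.4])
q_oracle = np.array([53.0, 47.0, 52.0, 62.0])

tau, _ = kendalltau(q_model, q_oracle)
# The reported metric is the average of `tau` over all test-set states.
```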
Predictor-targets By default we learn to predict action-values given a board state. Here we compare against using state-values or oracle actions (behavioral cloning) as the prediction targets. See Section 2.3 and Figure 1 for more details and how to construct policies from each of the predictors. As the results in Table 2 show, the action-value predictor is superior in terms of action-ranking (Kendall's τ), action accuracy, and puzzle accuracy. The same trend is also shown in Figure A5 (in Appendix B.2), which tracks puzzle accuracy over training iterations for the different predictors. This superior performance of action-value prediction might stem primarily from the significantly larger action-value dataset (15.3B state-action pairs vs. 530M states for our largest training set constructed from 10M games). We thus run an additional ablation where we train all three predictors on exactly the same amount of data; the results, shown in Appendix B.2, largely confirm this hypothesis. Please see our more detailed discussion of the different predictor targets in Appendix B.2, where we also discuss how the performance discrepancy between behavioral cloning and the state-value predictor based policy may be largely explained by the fact that we train on the expert's actions only instead of the full action distribution of the expert.

Table 2 | Ablations for different parameters (see Section 3.4).

Ablation            | Parameter    | Puzzles (%) | Actions (%) | Kendall's τ
Predictor-target    | AV           | 83.3        | 63.0        | 0.259
                    | SV           | 77.5        | 58.5        | 0.215
                    | BC           | 65.7        | 56.7        | 0.116
Network depth       | 2            | 62.3        | 54.4        | 0.219
                    | 4            | 76.2        | 59.9        | 0.242
                    | 8            | 81.3        | 62.3        | 0.254
                    | 16           | 80.4        | 62.3        | 0.255
Data sampler        | Uniform      | 83.3        | 63.0        | 0.259
                    | Weighted     | 49.9        | 48.2        | 0.192
Value bins          | 16           | 83.0        | 61.4        | 0.248
                    | 32           | 83.0        | 63.2        | 0.261
                    | 64           | 84.4        | 63.1        | 0.259
                    | 128          | 83.8        | 63.4        | 0.262
                    | 256          | 83.7        | 63.0        | 0.260
Loss function       | log (class.) | 81.3        | 62.3        | 0.254
                    | L2 (regr.)   | 82.6        | 58.9        | 0.235
Stockfish limit [s] | 0.05         | 84.0        | 62.2        | 0.256
                    | 0.1          | 85.4        | 62.5        | 0.254
                    | 0.2          | 84.3        | 62.6        | 0.259
                    | 0.5          | 83.3        | 63.0        | 0.259

[Figure 3: puzzle accuracy (%) vs. training step (0 to 5e6) for training sets of 10,000, 100,000, and 1,000,000 games (one panel each) and model sizes of 400K, 1M, 2M, 7M, 9M, and 34M parameters.]

Figure 3 | Puzzle accuracy for different training set sizes (stated above panels) and model sizes (color-coded), evaluated with our small puzzle set of 1k puzzles. Generally, larger models trained on larger datasets lead to higher performance (which strongly correlates with test set performance and general chess playing strength), highlighting the importance of scale for strong chess play. This effect cannot be explained by memorization since <1.41% of the initial puzzle board states appear in our training set. If the model size is too large in relation to the training set size, learning is impeded and overfitting occurs.

Network depth We show the influence of increasing the transformer's depth while keeping the number of parameters constant in Table 2. Since transformers may learn to roll out iterative computation (which arises in search) across layers, deeper networks may hold the potential for deeper unrolls. We compensate for having fewer layers by varying the embedding dimension and widening factor such that all models have the same number of parameters. The performance of our models increases with their depth but seems to saturate at around 8 layers, indicating that depth is important, but not beyond a certain point.

Data sampler We remove duplicate board states during the generation of the training and test sets.
This increases data diversity but introduces distributional shift compared to the natural game distribution of boards, where early board states and popular openings occur more frequently. To quantify the effect of this shift we use an alternative weighted data sampler that draws boards from our filtered training set according to the distribution that would occur if we had not removed duplicates. Results in Table 2 reveal that training on the natural distribution (via the weighted sampler) leads to significantly worse results compared to sampling uniformly randomly from the filtered training set (both trained models are evaluated on a filtered test set with uniform sampling, and the puzzle test set). We hypothesize that the increased performance is due to the increased data diversity seen under uniform sampling. As we train for very few epochs, the starting position and common opening positions are only seen a handful of times during training under uniform sampling, making it unlikely that strong early-game play of our models can be attributed to memorization.

Value binning Table 2 shows the impact of varying the number of bins used for state- and action-value discretization (from 16 to 256), demonstrating that more bins lead to improved performance. To strike a balance between performance and computational efficiency, we use $K = 32$ bins for our ablations and $K = 128$ for the main experiments.

Loss function We treat learning Stockfish action-values as a classification problem and thus train by minimizing cross-entropy loss (log-loss). This is as close as possible to the (tried and tested) standard LLM setup. An alternative is to treat the problem as a scalar regression problem. If we parameterize a fixed-variance Gaussian likelihood model with a transformer and perform maximum (log) likelihood estimation, this is equivalent to minimizing mean-squared error (L2 loss). To that end, we modify the architecture to output a scalar (without a final log-layer or similar). The log-loss outperforms the L2 loss on two out of the three metrics (Table 2).

Stockfish time limit We create training sets from 1 million games annotated by Stockfish with varying time limits to manipulate the playing strength of our oracle. We report scores on the puzzle set (same for all models) and a test set created using the same time limit as the training set (different for all models). Table 2 shows that a basic time-limit of 0.05 seconds gives only marginally worse puzzle performance. As a compromise between computational effort and final model performance we thus choose this as our default value (for our 10M games dataset we need about 15B action-evaluation calls with Stockfish, i.e., roughly 8680 days of unparallelized Stockfish evaluation time).

4. Related Work

Early chess AI research made heavy use of designing explicit search strategies coupled with heuristics, as evidenced by Turing's initial explorations (Burt, 1955) and implementations like NeuroChess (Thrun, 1994). This approach culminated in systems like Deep Blue (Campbell et al., 2002) and Stockfish (Romstad et al., 2008), known for their advanced search algorithms. The development of AlphaZero (Silver et al., 2017) marked a paradigm shift, employing deep RL with Monte Carlo Tree Search, thus learning its own heuristics (policy and value networks) instead of manually designing them. Neural networks play a significant role in chess AI (Klein, 2022), including enhancements to AlphaZero's self-play mechanisms
(V. et al., 2018), the use of deep RL (Lai, 2015), and a general trend of moving away from explicit search methods, by leveraging large-scale game datasets for training (David et al., 2016; Schrittwieser et al., 2020). The rise of large language models has also led to innovations in chess AI, cf. Kamlish's language-based models (Kamlish et al., 2019), the encoding of chess games via natural language (DeLeo and Guven, 2022; Toshniwal et al., 2022), and the evaluation of LLMs' ability to play chess (Carlini, 2023; Gramaje, 2023). Czech et al. (2023) show that strategic input representations and value loss enhancements significantly boost chess performance of vision transformers, and Alrdahi and Batista-Navarro (2023) and Feng et al. (2023) show that adding chess-specific data sources (e.g., chess textbooks) to language model training can improve their chess performance. Stöckl (2021) explored scaling effects of transformers on chess performance, which resonates with our emphasis on the importance of model and dataset scale.

5. Discussion

In order to use our state-based policies to play against humans and bots, two minor technical issues appear that can only be solved by having (some) access to game history. We briefly discuss both issues and present our workarounds.

[Figure 4: two chess boards from the same position, (a) a possible move (mate-in-3) and (b) the actual move played (mate-in-5).]

Figure 4 | Two options to win the game in 3 or 5 moves, respectively (more options exist). Since they both map into the highest-value bin, our bot ignores Nh6+, the fastest way to win (in 3), and instead plays Nd6+ (mate-in-5). Unfortunately, a state-based predictor without explicit search cannot guarantee that it will continue playing the Nd6+ strategy and thus might randomly alternate between different strategies. Overall this increases the risk of drawing the game or losing due to a subsequent (low-probability) mistake, such as a bad softmax sample. Board from a game between our 9M Transformer (white) and a human (blitz Elo of 2145).

Blindness to threefold repetition By construction, our state-based predictor cannot detect the risk of threefold repetition (drawing because the same board occurs three times), since it has no access to the game history (FENs contain minimal historical info, sufficient for the Fifty Move rule). To reduce draws from threefold repetitions, we check if the bot's next move would trigger the rule and set the corresponding action's win percentage to 50% before computing the softmax. However, our bots still cannot plan ahead to minimize the risk of being forced into threefold repetition.

Indecisiveness in the face of overwhelming victory If Stockfish detects a mate-in-k (e.g., 3 or 5) it outputs the distance to mate and not a centipawn score. We map all such outputs to the maximal value bin (i.e., a win percentage of 100%). Similarly, in a very strong position, several actions may end up in the maximum value bin. Thus, across time-steps this can lead to our agent playing somewhat randomly, rather than committing to one plan that finishes the game quickly (the agent has no knowledge of its past moves). This creates the paradoxical situation that our bot, despite being in a position of overwhelming win percentage, fails to take the (virtually) guaranteed win and might draw or even end up losing since small chances of a mistake accumulate with longer games (see Figure 4).
To prevent some of these situations, we check whether the predicted scores for all top five moves lie above a win percentage of 99% and double-check this condition with Stockfish, and if so, use Stockfish's top move (out of these) to have consistency in strategy across time-steps.

Elo: Humans vs. bots Table 1 shows a difference in Lichess Elo when playing against humans compared to bots. While the precise reasons are not entirely clear, we have three plausible hypotheses: (i) humans tend to resign when our bot has an overwhelming win percentage but many bots do not (meaning that the previously described problem gets amplified when playing against bots); (ii) humans on Lichess rarely play against bots, meaning that the two player pools (humans and bots) are hard to compare and Elo ratings between pools may be miscalibrated (Justaz, 2023); and (iii) based on preliminary (but thorough) anecdotal analysis by a chess NM, our models make the occasional tactical mistake which may be penalized qualitatively differently (and more severely) by other bots compared to humans (see some of this analysis in Appendices B.4 and B.5). While investigating this Elo discrepancy further is interesting, it is not central to our paper and does not impact our main claims.

5.1. Limitations

While our largest model achieves very good performance, it does not completely close the gap to Stockfish 16. All our scaling experiments point towards closing this gap eventually with a large enough model trained on enough data. However, the current results do not allow us to claim that the gap can certainly be closed. Another limitation, as discussed earlier, is that our predictors see the current state but not the complete game history. This leads to some fundamental technical limitations that cannot be overcome without small domain-specific heuristics or augmenting the training data and observable info. Finally, when using a state-value predictor to construct a policy, we consider all possible subsequent states that are reachable via legal actions. This requires having a transition model $T(s, a)$, and may be considered a version of 1-step search. While the main point is that our predictors do not explicitly search over action sequences, we limit the claim of "without search" to our action-value policy and behavioral cloning policy.

Note that the primary goal of this project was to investigate whether a complex, search-based algorithm, such as Stockfish 16, can be well approximated with a feedforward neural network. In the course of this, we have made a serious attempt to produce a strong chess policy and estimate its playing strength, but we have not exhausted every conceivable option to maximize playing strength; it may well be that further tweaks of our approach could lead to even stronger policies. Similarly, we have made a serious attempt at calibrating our policy's playing strength via Lichess, where the claim of grandmaster-level play currently holds against human opponents, but we have not calibrated our policy under official tournament conditions. We also cannot rule out that opponents, through extensive repeated play, may be able to find and exploit weaknesses reliably due to the fairly deterministic nature of our policy.

6. Conclusion

Our paper shows that it is possible to distill an approximation of Stockfish 16 into a feed-forward transformer via standard supervised training. The resulting predictor generalizes well to unseen board states, and, when used in a policy, leads to strong chess play (Lichess Elo of 2895 against humans).
We demonstrate that strong chess capabilities from supervised learning only emerge at sufficient dataset and model scale. Our work thus adds to a rapidly growing body of literature showing that complex and sophisticated algorithms can be distilled into feed-forward transformers, implying a paradigm-shift away from viewing large transformers as mere statistical pattern recognizers to viewing them as a powerful technique for general algorithm approximation.

Impact Statement

While the results of training transformer-based architectures at scale in a (self) supervised way will have significant societal consequences in the near future, these concerns do not apply to a closed domain like chess that has limited real-world impact and has been a domain of machine superiority for decades. Another advantage of supervised training on a single task over other forms of training (particularly self-play or reinforcement learning, and meta-learning) is that the method requires a strong oracle solution to begin with (for data annotation) and is unlikely to significantly outperform the oracle, so the potential for the method to rapidly introduce substantial unknown capabilities (with wide societal impacts) is very limited.

Acknowledgments

We thank Aurélien Pomini, Avraham Ruderman, Eric Malmi, Charlie Beattie, Chris Colen, Chris Wolff, David Budden, Dashiell Shaw, Guillaume Desjardins, Hamdanil Rasyid, Himanshu Raj, Joel Veness, John Schultz, Julian Schrittwieser, Laurent Orseau, Lisa Schut, Marc Lanctot, Marcus Hutter, Matthew Aitchison, Nando de Freitas, Nenad Tomasev, Nicholas Carlini, Nick Birnie, Nikolas De Giorgis, Ritvars Reimanis, Satinder Baveja, Thomas Fulmer, Tor Lattimore, Vincent Tjeng, Vivek Veeriah, and Zhengdong Wang for insightful discussions and their helpful feedback.

References

H. Alrdahi and R. Batista-Navarro. Learning to play chess from textbooks (LEAP): a corpus for evaluating chess moves based on sentiment analysis. arXiv:2310.20260, 2023.
R. Anil, S. Borgeaud, Y. Wu, J. Alayrac, J. Yu, R. Soricut, J. Schalkwyk, A. M. Dai, A. Hauth, K. Millican, D. Silver, S. Petrov, M. Johnson, I. Antonoglou, J. Schrittwieser, A. Glaese, J. Chen, E. Pitler, T. P. Lillicrap, A. Lazaridou, O. Firat, J. Molloy, M. Isard, P. R. Barham, T. Hennigan, B. Lee, F. Viola, M. Reynolds, Y. Xu, R. Doherty, E. Collins, C. Meyer, E. Rutherford, E. Moreira, K. Ayoub, M. Goel, G. Tucker, E. Piqueras, M. Krikun, I. Barr, N. Savinov, I. Danihelka, B. Roelofs, A. White, A. Andreassen, T. von Glehn, L. Yagati, M. Kazemi, L. Gonzalez, M. Khalman, J. Sygnowski, and et al. Gemini: A family of highly capable multimodal models. arXiv:2312.11805, 2023.
J. Bradbury, R. Frostig, P. Hawkins, M. J. Johnson, C. Leary, D. Maclaurin, G. Necula, A. Paszke, J. VanderPlas, S. Wanderman-Milne, and Q. Zhang. JAX: composable transformations of Python+NumPy programs, 2018. URL http://github.com/google/jax.
T. B. Brown, B. Mann, N. Ryder, M. Subbiah, J. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, S. Agarwal, A. Herbert-Voss, G. Krueger, T. Henighan, R. Child, A. Ramesh, D. M. Ziegler, J. Wu, C. Winter, C. Hesse, M. Chen, E. Sigler, M. Litwin, S. Gray, B. Chess, J. Clark, C. Berner, S. McCandlish, A. Radford, I. Sutskever, and D. Amodei. Language models are few-shot learners. In NeurIPS, 2020.
C. Burt. Faster than thought: A symposium on digital computing machines. Edited by B. V. Bowden. British Journal of Statistical Psychology, 1955.
M. Campbell, A. J. H. Jr., and F. Hsu. Deep Blue. Artif. Intell., 2002.
N. Carlini.
Playing chess with large language models. https://nicholas.carlini.com/writing/2023/chess-llm.html, 2023.
R. Coulom. Whole-history rating: A Bayesian rating system for players of time-varying strength. In Computers and Games, 2008.
J. Czech, J. Blüml, and K. Kersting. Representation matters: The game of chess poses a challenge to vision transformers. arXiv:2304.14918, 2023.
O. E. David, N. S. Netanyahu, and L. Wolf. DeepChess: End-to-end deep neural network for automatic learning in chess. In ICANN (2), 2016.
DeepMind, I. Babuschkin, K. Baumli, A. Bell, S. Bhupatiraju, J. Bruce, P. Buchlovsky, D. Budden, T. Cai, A. Clark, I. Danihelka, A. Dedieu, C. Fantacci, J. Godwin, C. Jones, R. Hemsley, T. Hennigan, M. Hessel, S. Hou, S. Kapturowski, T. Keck, I. Kemaev, M. King, M. Kunesch, L. Martens, H. Merzic, V. Mikulik, T. Norman, G. Papamakarios, J. Quan, R. Ring, F. Ruiz, A. Sanchez, L. Sartran, R. Schneider, E. Sezener, S. Spencer, S. Srinivasan, M. Stanojević, W. Stokowiec, L. Wang, G. Zhou, and F. Viola. The DeepMind JAX Ecosystem, 2020. URL http://github.com/google-deepmind.
M. DeLeo and E. Guven. Learning chess with language models and transformers. arXiv:2209.11902, 2022.
X. Feng, Y. Luo, Z. Wang, H. Tang, M. Yang, K. Shao, D. Mguni, Y. Du, and J. Wang. ChessGPT: Bridging policy learning and language modeling. arXiv:2306.09200, 2023.
J. Gehring, M. Auli, D. Grangier, D. Yarats, and Y. N. Dauphin. Convolutional sequence to sequence learning. In ICML, 2017.
B. A. Gramaje. Exploring GPT's capabilities in chess puzzles. Master's thesis, Universitat Politècnica de València, 2023.
G. Haworth and N. Hernandez. The 20th Top Chess Engine Championship, TCEC20. J. Int. Comput. Games Assoc., 2021.
T. Hennigan, T. Cai, T. Norman, L. Martens, and I. Babuschkin. Haiku: Sonnet for JAX, 2020. URL http://github.com/deepmind/dm-haiku.
J. Hoffmann, S. Borgeaud, A. Mensch, E. Buchatskaya, T. Cai, E. Rutherford, D. de Las Casas, L. A. Hendricks, J. Welbl, A. Clark, T. Hennigan, E. Noland, K. Millican, G. van den Driessche, B. Damoc, A. Guy, S. Osindero, K. Simonyan, E. Elsen, J. W. Rae, O. Vinyals, and L. Sifre. Training compute-optimal large language models. arXiv:2203.15556, 2022.
Justaz. Exact ratings for everyone on lichess. https://lichess.org/@/justaz/blog/exact-ratings-for-everyone-on-lichess/klIoAEAU, 2023.
I. Kamlish, I. B. Chocron, and N. McCarthy. SentiMATE: Learning to play chess through natural language processing. arXiv:1907.08321, 2019.
D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. In ICLR (Poster), 2015.
D. Klein. Neural networks for chess. arXiv:2209.01506, 2022.
M. Lai. Giraffe: Using deep reinforcement learning to play chess. arXiv:1509.01549, 2015.
OpenAI. GPT-4 technical report. arXiv:2303.08774, 2023.
T. Romstad, M. Costalba, J. Kiiski, G. Linscott, Y. Nasu, M. Isozaki, H. Noda, and et al. Stockfish, 2008. URL https://stockfishchess.org.
M. Sadler and N. Regan. Game Changer: AlphaZero's Groundbreaking Chess Strategies and the Promise of AI. New In Chess, 2019.
J. Schrittwieser, I. Antonoglou, T. Hubert, K. Simonyan, L. Sifre, S. Schmitt, A. Guez, E. Lockhart, D. Hassabis, T. Graepel, T. P. Lillicrap, and D. Silver. Mastering Atari, Go, chess and shogi by planning with a learned model. Nat., 2020.
N. Shazeer. GLU variants improve transformer. arXiv:2002.05202, 2020.
D. Silver, T. Hubert, J. Schrittwieser, I. Antonoglou, M. Lai, A. Guez, M. Lanctot, L. Sifre, D. Kumaran, T. Graepel, T. P. Lillicrap, K. Simonyan, and D. Hassabis.
Mastering chess and shogi by self-play with a general reinforcement learning algorithm. arXiv:1712.01815, 2017.
A. Stöckl. Watching a language model learning chess. In RANLP, 2021.
S. Thrun. Learning to play the game of chess. In NIPS, 1994.
S. Toshniwal, S. Wiseman, K. Livescu, and K. Gimpel. Chess as a testbed for language model state tracking. In AAAI, 2022.
H. Touvron, T. Lavril, G. Izacard, X. Martinet, M. Lachaux, T. Lacroix, B. Rozière, N. Goyal, E. Hambro, F. Azhar, A. Rodriguez, A. Joulin, E. Grave, and G. Lample. LLaMA: Open and efficient foundation language models. arXiv:2302.13971, 2023a.
H. Touvron, L. Martin, K. Stone, et al. Llama 2: Open foundation and fine-tuned chat models. arXiv:2307.09288, 2023b.
S. K. G. V., K. Goyette, A. Chamseddine, and B. Considine. Deep Pepper: Expert iteration based chess agent in the reinforcement learning setting. arXiv:1806.00683, 2018.
A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin. Attention is all you need. In NIPS, 2017.

A. Experimental Setup

A.1. Tokenization

The first part of a FEN string encodes the position of pieces rank-wise (row-wise). The only change we make is that we encode each empty square with a '.', which always gives us 64 characters for a board. The next character denotes the active player ('w' or 'b'). The next part of the FEN string denotes castling availability (up to four characters for King- and Queen-side for each color, or '-' for no availability); we take this string and if needed pad it with '.' such that it always has length 4. Next are two characters for the en passant target, which can be '-' for no target; we use the two characters literally or '-.' for no target. Finally we have the half-move clock (up to two digits) and the full-move number (up to three digits); we take the numbers as characters and pad them with '.' to make sure they are always tokenized into two and three characters, respectively.

A.2. Main Setup

We use the same basic setup for all our main experiments and only vary the model architecture. Concretely, our base setup is as follows: We train for 20 million steps with a batch size of 4096, meaning that we train for 5.35 epochs. We use the Adam optimizer (Kingma and Ba, 2015) with a learning rate of 1e-4. We train on the dataset generated from 10 million games (cf. Table A1) for the action-value policy with 128 return buckets and a Stockfish time limit of 0.05s. We use the unique sampler and Polyak averaging for evaluation, and evaluate on 1000 games (cf. Table A1) and 1000 puzzles from a different month than that used for training. We train a vanilla decoder-only transformer without causal masking (Vaswani et al., 2017), with the improvements proposed in LLaMA (Touvron et al., 2023a,b), i.e., post-normalization and SwiGLU (Shazeer, 2020). We use three different model configurations: (i) 8 heads, 8 layers, and an embedding dimension of 256, (ii) 8 heads, 8 layers, and an embedding dimension of 1024, and (iii) 8 heads, 16 layers, and an embedding dimension of 1024.
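A minimal Python sketch of the fixed-length FEN encoding described in A.1 (our implementation; note that the field-by-field description above accounts for 76 characters, while Section 2.2 quotes 77, so the exact packing of the remaining character, e.g., an extra padding position, is left open here and treated as an assumption):

```python
def tokenize_fen(fen: str) -> str:
    """Fixed-length FEN encoding following the field-by-field description
    in A.1; each character of the result is one token (its ASCII code)."""
    board, side, castling, en_passant, halfmove, fullmove = fen.split()
    # Board: expand each digit d into d copies of '.', drop rank separators.
    squares = "".join("." * int(c) if c.isdigit() else c
                      for c in board if c != "/")
    assert len(squares) == 64
    return (squares                       # 64 characters
            + side                        # 'w' or 'b'
            + castling.ljust(4, ".")      # e.g. 'KQkq', 'KQ..', '-...'
            + en_passant.ljust(2, ".")    # e.g. 'e3', or '-.' for no target
            + halfmove.ljust(2, ".")      # up to two digits
            + fullmove.ljust(3, "."))     # up to three digits

start = "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1"
assert len(tokenize_fen(start)) == 76  # cf. the 77 quoted in Section 2.2
```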
A.3. Ablation Setup

We use the same basic setup for all our ablation experiments and only vary the ablation parameters. Concretely, our base setup is as follows: We train for 5 million steps with a batch size of 1024, meaning that we train for 3.19 epochs. We use the Adam optimizer (Kingma and Ba, 2015) with a learning rate of 4e-4. We train on the dataset generated from 1 million games (cf. Table A1) for the action-value policy with 32 return buckets and a Stockfish time limit of 0.5s. We use the unique sampler and train a vanilla decoder-only transformer (Vaswani et al., 2017) with post-normalization, 8 heads, 8 layers, an embedding dimension of 256, and no causal masking. We use Polyak averaging for evaluation, and evaluate on 1000 games (cf. Table A1) and 1000 puzzles from a different month than that used for training.

A.4. Dataset Statistics

We visualize some dataset statistics in Figures A1 and A2.

[Figure A1: histogram (50 buckets) of win percentages in [0, 1] for the action-value dataset generated from 1000 games.]

Figure A1 | The win percentages for our action-value dataset generated from 1000 games (cf. Table A1). We use 50 buckets to generate the histogram. The distribution is skewed towards 0 as we consider all legal moves per board and most actions are not advantageous for the player.

[Figure A2: histogram (50 buckets) of move counts, moves sorted by frequency, for the action-value dataset generated from 1000 games.]

Figure A2 | The moves (sorted by frequency) for our action-value dataset generated from 1000 games (cf. Table A1). We use 50 buckets to generate the histogram. There are 1968 possible moves and the five most frequent ones are a2a3, g2g3, h2h3, a2a4, a7a6.

Table A1 | Dataset sizes. For simplicity, we typically refer to the datasets by the number of games they were created from.

Split | Games | State-Value Records | Bytes    | Behavioral Cloning Records | Bytes    | Action-Value Records | Bytes
Train | 10^4  | 591,897             | 43.7 MB  | 589,130                    | 41.1 MB  | 17,373,887           | 1.4 GB
Train | 10^5  | 5,747,753           | 422.0 MB | 5,720,672                  | 397.4 MB | 167,912,926          | 13.5 GB
Train | 10^6  | 55,259,971          | 4.0 GB   | 54,991,050                 | 3.8 GB   | 1,606,372,407        | 129.0 GB
Train | 10^7  | 530,310,443         | 38.6 GB  | 527,633,465                | 36.3 GB  | 15,316,914,724       | 1.2 TB
Test  | 10^3  | 62,829              | 4.6 MB   | 62,561                     | 4.4 MB   | 1,838,218            | 148.3 MB

A.5. Playing-strength evaluation on Lichess

We evaluate and calibrate the playing strength of our models by playing against humans and bots on Lichess. Our standard evaluation allows for both playing against bots and humans (see Table 1), but since humans tend to rarely play against bots, the Elo ratings in this case are dominated by playing against other bots (see our discussion of how this essentially creates two different, somewhat miscalibrated, player pools in Section 5). In our case the policies in the column denoted with "vs. Bots" in Table 1 have played against some humans, but the number of games against humans is <4.5% of total games played. To get better calibration against humans we let our largest model play exclusively against humans (by not accepting games with other bots), which leads to a significantly higher Elo ranking (see Table 1). Overall we have played the following numbers of games for the different policies shown in Table 1: 9M (553 games), 136M (169 games), 270M (228 games against bots, 174 games against humans), Stockfish (30 games), GPT-3.5-turbo-instruct (181 games).

A.6. Stockfish and AlphaZero Setup

Stockfish We use Stockfish 16 (the version from December 2023) throughout the paper. When we play, we use the oracle we used for training, which is an unconventional way to play with this engine: we evaluate each legal move in the position for 50ms, and return the best move based on these scores. This is not entirely equivalent to a standard thinking time of 50ms times the number of legal moves per position, as we force Stockfish to spend 50ms on moves that could be uninteresting and unexplored. We chose to keep this setup to have a comparison to the oracle we train on.
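A sketch of this per-move oracle loop using the python-chess engine bindings (our code; it assumes a local Stockfish binary on the PATH, and maps mate announcements to a large score, cf. the discussion in Section 5):

```python
import chess
import chess.engine

def best_move_by_oracle(board: chess.Board,
                        engine: chess.engine.SimpleEngine,
                        time_per_move: float = 0.05) -> chess.Move:
    """Evaluate every legal move for `time_per_move` seconds and return
    the one with the best score for the side to move."""
    color = board.turn
    scores = {}
    for move in board.legal_moves:
        board.push(move)
        info = engine.analyse(board, chess.engine.Limit(time=time_per_move))
        # Score of the resulting position from the mover's point of view.
        scores[move] = info["score"].pov(color).score(mate_score=1_000_000)
        board.pop()
    return max(scores, key=scores.get)

engine = chess.engine.SimpleEngine.popen_uci("stockfish")
move = best_move_by_oracle(chess.Board(), engine)
engine.quit()
```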
Note that, when comparing the legal moves in a given position, we do not clear Stockfish's cache between the moves. Therefore, due to the way the cache works, this biases the accuracy of Stockfish's evaluation to be weaker for the first moves considered. Finally, due to the nature of our internal hardware setup, we use two different kinds of chips to run Stockfish: (i) to compute the Lichess Elo, we use a 6-core Intel(R) Xeon(R) W-2135 CPU @ 3.70GHz, and (ii) to compute the tournament Elo, we use a single Tensor Processing Unit (V3), as for all the other agents.

AlphaZero We use the AlphaZero version from 2020, with a network trained at that time (Schrittwieser et al., 2020). We use three different versions: (i) policy network only, (ii) value network only, and (iii) the standard version with search. For (i), we are using the probability distribution over actions returned by the policy network, and take the argmax as the best action. For (ii), we do a search limited to depth 1, with 100 MCTS simulations, enough to cover all legal actions, and take the argmax over visit counts. For (iii), we use the standard search from the paper, with 400 MCTS simulations and the exact same UCB scaling parameters. We also take the argmax over visit counts. Note that AlphaZero's policy and value network have been trained on 44M games, whereas we trained our largest models on only 10M games.

A.7. Computational Resources

Our codebase is based on JAX (Bradbury et al., 2018) and the DeepMind JAX Ecosystem (DeepMind et al., 2020; Hennigan et al., 2020). We used 4 Tensor Processing Units (V5) per model for the ablation experiments. We used 128 Tensor Processing Units (V5) per model to train our large (9M, 136M and 270M) models. We used a single Tensor Processing Unit (V3) per agent for our Elo tournament.

B. Additional Results

B.1. Loss Curves

In Figure A3 we show the train and test loss curves (and the evolution of the puzzle accuracy) for the large models from Section 3.1. We observe that none of the models overfit and that larger models improve both the training and the test loss. In Figure A4 we visualize the train and test loss curves for the scaling experiment from Section 3.3. In line with the results shown in the main paper we observe that on the smallest training set, models with ≥7M parameters start to overfit, but not for the larger training sets.
Except for the overfitting cases we observe that larger models improve both the training and test loss, regardless of training set size, and that larger training set size improves the test loss when keeping the model size constant.

[Figure A3: (a) training loss, (b) test loss, and (c) puzzle accuracy (%) vs. training step (0 to 6e6) for the 9M, 136M, and 270M models.]

Figure A3 | Train and test loss curves and puzzle accuracy over time for the models from Section 3.1. We observe no overfitting, which justifies always using the fully trained model in our evaluations.

Table A2 | Ranking the policies that arise from our three different predictors by having them play against each other in a tournament and computing relative Elo rankings (200 games per pairing; i.e., 600 games per column). When constructing the training data for all three predictors based on the same number of games (middle column), the action-value dataset is much larger than the state-value / behavioral cloning set, which leads to a stronger policy. When correcting for this by forcing the same number of training data points for all three (right column), the difference between state- and action-value prediction disappears.

                                 Relative Tournament Elo
Prediction Target  | Same # of Games in Dataset | Same # of Data Points
Action-Value       | +492 (±31)                 | +252 (±22)
State-Value        | +257 (±23)                 | +264 (±22)
Behavioral-Cloning | 0 (±28)                    | 0 (±24)

[Figure A4: (a) training loss and (b) test loss (log scale) vs. training step (0 to 5e6) for training sets of 10,000, 100,000, and 1,000,000 games and model sizes from 400K to 34M parameters.]

Figure A4 | Loss curves when scaling model size and training set size.

B.2. Predictor-Target Comparison

In Figure A5 we compare the puzzle accuracy for the three different predictor targets (action-values, state-values, or best action) trained on 1 million games. As discussed in the main text, for a fixed number of games we have very different dataset sizes for state-value prediction (roughly 55 million states) and action-value prediction (roughly 1.6 billion states); see Table A1 for all dataset sizes. It seems plausible that learning action-values might pose a slightly harder learning problem, leading to slightly slower initial learning, but eventually this is compensated for by having much more data to train on compared to state-value learning (see Figure A5, which shows this trend). Also note that since we use the same time-budget per Stockfish call, all action-values for one state use more Stockfish computation time in total (due to one call per action) when compared to state-values (one call per board).

To control for the effect of dataset size, we train all three predictors (9M parameter model) on a fixed set of 40 million data points. Results are shown in Figure A6. As the results show, the state-value policy in this case slightly outperforms the action-value policy, except for action-ranking (Kendall's τ), which makes sense since the action-value predictor is implicitly trained to produce good action rankings. To see how this translates into playing strength, we pit all three policies (AV, SV, BC) against each other and determine their relative Elo rankings. Table A2 shows that when not controlling for the number of training data points, the action-value policy is strongest (in line with the findings in Table 2 and Figure A5), but when controlling for the number of training data points the action-value and state-value policies perform almost identically (in line with Figure A6).

Throughout all these results we observe lower performance of the behavioral cloning policy, despite it being trained on a comparable number of data points as the state-value policy. The main hypothesis for this is that the amount of information in the behavioral cloning dataset is lower than in the state-value dataset, since we throw away any information in the state- or action-values beyond the index of the oracle action. We suspect that training on the full action distribution of the oracle (with cross-entropy loss), rather than the best action only, would largely close this gap, but we consider this question beyond the scope of this paper and limit ourselves to simply reporting the observed effect in our setting.

[Figure A5: (a) Kendall's τ, (b) puzzle accuracy (%), and (c) action accuracy (%) vs. training step (0 to 5e6) for the three prediction targets.]

Figure A5 | Comparison of the three different prediction targets (action-value, state-value, and behavioral cloning) trained on the datasets generated from 1 million games.
Note that this means that the action-value network is trained on roughly 30times more data than the other two (cf. Table A1). Action-value learning (trained on 1.6B action-values) learns slightly slower but outperforms the other two variants in the long run (which are trained on roughly 55M states / best actions). Behavioral-cloning falls significantly short of state-value learning, even though both are trained on virtually the same amount of data. 15 Grandmaster-Level Chess Without Search 0 1 2 3 4 5 Step1e605101520Kendall's ActionV alue StateV alue Behavioral Cloning (a) Kendalls 0 1 2 3 4 5 Step1e601020304050607080Puzzle Accuracy (%) Action-V alue State-V alue Behavioral Cloning (b) Puzzles Accuracy (%) 0 1 2 3 4 5 Step1e6102030405060Action Accuracy (%) ActionV alue StateV alue Behavioral Cloning (c) Action Accuracy (%) Figure A6|Comparison of the three different prediction targets (action-value, state-value, and behavioral cloning) trained on exactly the same number of data points ( 40M). The superiority of action-value learning over state-value learning disappears (or even reverses to some degree), except when measuring the action-ranking correlation (Kendalls ) which the action-value policy is indirectly trained to perform well on. of the oracle (with cross-entropy loss), rather than the best action only would largely close this gap, but we consider this question beyond the scope of this paper and limit ourselves to simply reporting the observed effect in our setting. B.3. Polyak Averaging We investigate the impact of Polyak averaging, an optimization technique where parameters are set to a weighted average over the last iterations rather than just using the most recent value, using the same setup as for our ablation experiments (see Appendix A.3). When using Polyak averaging with an exponential moving average decay factor of 0.99, we obtain a Kendallsof 0.259, a puzzle accuracy of 83.3%, and an action accuracy of 63.0%. In contrast, standard evaluation, obtains a Kendalls of 0.258, a puzzle accuracy of 83.1%, and an action accuracy of 62.8%. Thus, we use Polyak averaging for all experiments. B.4. Tactics In Figure A7, we analyze the tactics learned by our 270M transformer used against a human with a blitz Elo of 2145. We observe that our model has learned to sacrifice material when it is advantageous to build a longer-term advantage. B.5. Playing Style We recruited chess players of National Master level and above to analyze our agents games against bots and humans on the Lichess platform. They made the following qualitative assessments of its playing style and highlighted specific examples (see Figure A8). Our agent has an aggressive enterprising style whereit frequently sacrifices material for long-term strategic gain. The agent plays optimistically: it prefers moves that give opponents difficult decisions to make even if they are not always objectively correct. It values king safety highly in that it only reluctantly exposes its own king to danger but also frequently sacrifices material and time to expose the opponents king. For example 17 .. Bg5 in game B.5.1 encouraged its opponent to weaken their king position. Its style incorporates strategic motifs employed by the most recent neural engines (Sadler and Regan, 2019; Silver et al., 2017). For example it pushes wing pawns in the middlegame when conditions permit (see game B.5.2). In game B.5.3 our agent executes a correct long-term exchange sacrifice. 
In game B.5.4 the bot uses a motif of a pin on the back rank to justify a pawn sacrifice for long term pressure. Game B.5.5 features a piece sacrifice to expose its opponents king. The sacrifice is not justified according to Stockfish although the opponent does not manage to tread the fine line to a permanent advantage and blunders six moves later with Bg7. Our agent has a distinct playing style to Stockfish: one analyzer commented it feels more enjoyable than playing a normal engine, as if you are not just hopelessly crushed. Indeed it does frequently agree with Stockfishs move choices suggesting that our agents action-value predictions match Stockfishs. However the disagreements can be telling: the piece sacrifice in the preceding paragraph is such an example. Also, game B.5.6 is interesting because our agent makes movesthatStockfishstronglydisagreeswith. Inparticularouragentstronglyfavours18.. Rxb4andbelieves black is better, in contrast Stockfish believes white is better and prefers Nd4. Subsequent analysis by the masters suggests Stockfish is objectively correct in this instance. Indeed on the very next move our agent has 16 Grandmaster-Level Chess Without Search Human (2145 Elo) (a) Blunder (Bd4 was best)270M Transformer Human (2145 Elo) 270M Transformer (b) Inaccuracy (d4 was best) (c) Checkmate is now unavoidable (Bf2 was best) Figure A7|Example of the learned tactics for our 270M transformer (vs. a human player with a blitz Elo of 2145). Our model decides to sacrifice two pawns since the white bishop will not be able to prevent it from promoting one of the pawns. The individual subfigure captions contain the Stockfish analysis from Lichess (i.e., our model plays optimally). 17 Grandmaster-Level Chess Without Search (a) King weakening (b) Wing pawn push (c) Exchange sacrifice for long term compensation (d) Long term sacrifice (e) Exposing the opponents king (f) Disagreement with Stockfish (g) Optimistic blunder Figure A8|Examples of our 270M transformers playing style against online human opponents. 18 Grandmaster-Level Chess Without Search reversed its opinion and agrees with Stockfish. Our agents aggressive style is highly successful againsthumanopponentsandachievesagrandmasterlevel Lichess Elo of 2895. However, we ran another instance of the bot and allowed other engines to play it. Its estimated Elo was far lower, i.e., 2299. Its aggressive playing style does not work as well against engines that are adept at tactical calculations, particularly when there is a tactical refutation to a suboptimal move. Most losses against bots can be explained by just one tactical blunder in the game that theopponentrefutes. ForexampleBxh3ingameB.5.7 loses a piece to g4. Finally, the recruited chess masters commented that ouragentsstylemakesitveryusefulforopeningrepertoire preparation. It is no longer feasible to surprise human opponents with opening novelties as all the best moves have been heavily over-analyzed. Modern opening preparation amongst professional chess playersnowfocusesondiscoveringsub-optimalmovesthat pose difficult problems for opponents. This aligns extremely well with our agents aggressive, enterprising playing style which does not always respect objective evaluations of positions. B.5.1. King weakening game 1. e4 c5 2. Nf3 Nc6 3. Bb5 g6 4. O-O Bg7 5. c3 Nf6 6. Re1 O-O 7. d4 d5 8. e5 Ne4 9. Bxc6 bxc6 10. Nbd2 Nxd2 11. Bxd2 Qb6 12. dxc5 Qxc5 13. h3 Qb5 14. b4 a5 15. a4 Qc4 16. Rc1 Bd7 17. Bg5 f6 18. Bd2 Bf5 19. exf6 exf6 20. Nd4 Bd7 21. Nb3 axb4 22. cxb4 Qh4 23. Nc5 Bf5 24. 
Ne6 Rfc8 25. Nxg7 Kxg7 26. Re7+ Kh8 27. a5 Re8 28. Qe2 Be4 29. Rxe8+ Rxe8 30. f3 1-0 B.5.2. Wing pawn push game 1. e4 c6 2. d4 d5 3. Nc3 dxe4 4. Nxe4 Nf6 5. Ng3 c5 6. Bb5+ Bd7 7. Bxd7+ Nbxd7 8. dxc5 Qa5+ 9. Qd2 Qxc5 10. Nf3 h5 11. O-O h4 12. Ne2 h3 13. g3 e5 14. Nc3 Qc6 15. Qe2 Bb4 16. Bd2 O-O 17. Rae1 Rfe8 18. Ne4 Bxd2 19. Qxd2 Nxe4 0-1 B.5.3. Exchange sacrifice game 1. d4 d5 2. c4 e6 3. Nc3 Bb4 4. cxd5 exd5 5. Nf3 Nf6 6. Bg5 h6 7. Bh4 g5 8. Bg3 Ne4 9. Rc1 h5 10. h3 Nxg3 11. fxg3 c6 12. e3 Bd6 13. Kf2 h4 14. g4 Bg3+ 15. Ke2 O-O 16. Kd2 Re8 17. Bd3 Nd7 18. Kc2 Rxe3 19. Kb1 Qe7 20. Qc2 Nf8 21. Rhf1 Ne6 22. Bh7+ Kg7 23. Bf5 Rxf3 24. gxf3 Nxd4 25. Qd3 Nxf5 26. gxf5 Qe5 27. Ka1 Bxf5 28. Qe2 Re8 29. Qxe5+ Rxe5 30. Rfd1 Bxh3 31. Rc2 Re3 32. Ne2 Bf5 33. Rcd2 Rxf3 34. Nxg3 hxg3 0-1B.5.4. Long term sacrifice game 1. d4 d5 2. c4 e6 3. Nf3 Nf6 4. Nc3 Bb4 5. Bg5 dxc4 6. e4 b5 7. a4 Bb7 8. axb5 Bxe4 9. Bxc4 h6 10. Bd2 Bb7 11. O-O O-O 12. Be3 c6 13. bxc6 Nxc6 14. Qb3 Qe7 15. Ra4 a5 16. Rd1 Rfd8 17. d5 exd5 18. Nxd5 Nxd5 19. Rxd5 Rxd5 20. Bxd5 Rd8 21. Ra1 a4 22. Rxa4 Qd7 23. Bc4 Qd1+ 24. Qxd1 Rxd1+ 25. Bf1 Ba5 26. Rc4 Rb1 27. Rc2 Nb4 28. Rc5 Nc6 29. Bc1 Bb4 30. Rc2 g5 31. h4 g4 32. Nh2 h5 33. Bd3 Ra1 34. Nf1 Ne5 35. Be2 Be4 36. Rc8+ Kh7 37. Be3 Re1 38. Bb5 Bd3 39. Bxd3+ Nxd3 40. Rd8 Nxb2 41. Rd5 Be7 42. Rd7 Bxh4 43. g3 Bf6 44. Rxf7+ Kg6 45. Rxf6+ Kxf6 46. Bd4+ Kg5 47. Bxb2 Rb1 48. Bc3 Kf5 49. Kg2 Rb3 50. Ne3+ Ke4 51. Bf6 Rb5 52. Kf1 Rb6 53. Bc3 Rb3 54. Bd2 Kd3 55. Be1 Rb5 56. Ng2 Ke4 57. Ke2 Rb2+ 58. Bd2 Rc2 59. Ne3 Ra2 60. Nc4 Kd4 61. Nd6 Ke5 62. Ne8 Kf5 63. Kd3 Ra6 64. Bc3 Rc6 65. Bb4 Kg6 66. Nd6 Ra6 67. Bc5 Ra5 68. Bd4 Ra6 69. Nc4 Ra4 70. Nb6 Ra5 71. Ke4 h4 72. gxh4 Kh5 73. Bf6 Ra2 74. Ke3 Ra3+ 75. Ke2 g3 76. Nd5 Ra2+ 77. Kf3 gxf2 78. Nf4+ Kh6 79. Kg2 f1=Q+ 80. Kxf1 Rc2 81. Bg5+ Kh7 82. Ne2 Kg6 83. Kf2 Ra2 84. Kf3 Ra4 85. Ng3 Rc4 86. Bf4 Rc3+ 87. Kg4 Rc4 88. h5+ Kf6 89. Nf5 Ra4 90. Ne3 Ra5 91. Nc4 Ra4 92. Ne5 Kg7 93. Kf5 Ra5 94. Kg5 Rb5 95. Kg4 Rb1 96. Kf5 Rb5 97. Ke4 Ra5 98. h6+ Kh7 99. Bd2 Ra2 100. Be3 Ra6 101. Ng4 Ra3 102. Bd2 Ra2 103. Bf4 Ra5 104. Kf3 Rf5 105. Ke3 Kg6 106. Ke4 Rh5 107. Kf3 Rh3+ 108. Kg2 Rh5 109. Kg3 Ra5 110. Be3 Ra3 111. Kf3 Rb3 112. Ke4 Rb4+ 113. Bd4 Ra4 114. Ke5 Rc4 115. Kd5 Ra4 116. Ke4 Rb4 117. Kd3 Ra4 118. Kc3 Ra3+ 119. Kc4 Rg3 120. Ne3 Rh3 121. Kd5 Rxh6 122. Bb6 Rh3 123. Nc4 Rh5+ 124. Ke6 Rg5 125. Nd2 Rg2 126. Nf1 Rb2 127. Bd8 Re2+ 128. Kd5 Re1 129. Ne3 Rxe3 130. Bh4 Kf5 131. Bf2 Rd3+ 132. Kc4 Ke4 133. Bc5 Rc3+ 134. Kxc3 1/2-1/2 B.5.5. Expose king game 1. e4 c5 2. Nf3 Nc6 3. Na3 Nf6 4. e5 Nd5 5. d4 cxd4 6. Nb5 a6 7. Nbxd4 g6 8. Bc4 Nc7 9. Nxc6 bxc6 10. Ng5 Ne6 11. Nxf7 Kxf7 12. Bxe6+ Kxe6 13. Bd2 Kf7 14. Qf3+ Kg8 15. e6 dxe6 16. O-O-O Qd5 17. Qe3 Bg7 18. Bc3 Qxa2 19. Rd8+ Kf7 20. Qf4+ Bf6 21. Rxh8 Qa1+ 22. Kd2 Qxh1 23. Bxf6 exf6 24. Qc7+ 1-0 B.5.6. Stockfish disagreement game 1. e4 c5 2. Nf3 Nc6 3. d4 cxd4 4. Nxd4 Nf6 5. Nc3 e6 6. Ndb5 d6 7. Bf4 e5 8. Bg5 a6 9. Na3 b5 10. Nd5 Qa5+ 11. Bd2 Qd8 12. Bg5 Be7 13. Bxf6 Bxf6 14. c4 b4 15. Nc2 Rb8 16. g3 b3 17. axb3 Rxb3 18. Ncb4 Rxb4 19. Nxb4 Nxb4 20. Qa4+ Kf8 21. Qxb4 g6 22. Bg2 h5 23. h4 Kg7 24. O-O g5 25. hxg5 Bxg5 26. f4 Be7 27. fxe5 dxe5 28. Qc3 Bc5+ 29. Kh2 Qg5 30. Rf5 19 Grandmaster-Level Chess Without Search Bxf5 31. Qxe5+ Qf6 32. Qxf6+ Kxf6 33. exf5 Kg5 34. Bd5 Rb8 35. Ra2 f6 36. Be6 Kg4 37. Kg2 Rb3 38. Bf7 Rxg3+ 39. Kf1 h4 40. Ra5 Bd4 41. b4 h3 42. Bd5 h2 43. Bg2 Rb3 44. Rxa6 Rb1+ 45. Ke2 Rb2+ 0-1 B.5.7. Blunder game 1. b3 e5 2. Bb2 Nc6 3. e3 d5 4. Bb5 Bd6 5. Bxc6+ bxc6 6. d3 Qg5 7. Nf3 Qe7 8. c4 Nh6 9. Nbd2 O-O 10. 
c5 Bxc5 11. Nxe5 Bb7 12. d4 Bd6 13. O-O c5 14. Qh5 cxd4 15. exd4 Rae8 16. Rfe1 f6 17. Nd3 Qf7 18. Qf3 Bc8 19. h3 Nf5 20. g3 Ne7 21. Bc3 Bxh3 22. g4 f5 23. Qxh3 fxg4 24. Qxg4 h5 25. Qe6 g5 26. Qxf7+ Rxf7 27. Bb4 Ref8 28. Bxd6 cxd6 29. b4 Nf5 30. Re6 Kg7 31. Rd1 Rc7 32. Nf3 g4 33. Nd2 h4 34. Nb3 Rc2 35. Nf4 g3 36. Nh5+ Kh7 37. fxg3 Nxg3 38. Nxg3 Rg8 39. Rd3 Rxa2 40. Rxd6 Rb2 41. Rxd5 Rxg3+ 42. Rxg3 hxg3 43. Nc5 Kg6 44. b5 Rxb5 45. Kg2 a5 46. Kxg3 a4 47. Rd6+ Kf5 48. Nxa4 Rb3+ 49. Kf2 Rh3 50. Nc5 Kg5 51. Rc6 Kf5 52. d5 Ke5 53. d6 Rh2+ 54. Kg3 Rd2 55. d7 Rxd7 56. Nxd7+ Ke4 57. Rd6 Ke3 58. Nf6 Ke2 59. Ng4 Ke1 60. Kf3 Kf1 61. Rd1# 1-0 20 |
2311.00088.pdf | Random coordinate descent: a simple alternative for optimizing parameterized quantum circuits Zhiyan Ding1, Taehee Ko2, Jiahao Yao1, Lin Lin1,4,5, and Xiantao Li3 1Department of Mathematics, University of California, Berkeley 2School of Computational Sciences, Korea Institute for Advanced Study 3Department of Mathematics, Pennsylvania State University 4Applied Mathematics and Computational Research Division, Lawrence Berkeley National Laboratory 5Challenge Institute for Quantum Computation, University of California, Berkeley November 2, 2023 Abstract Variational quantum algorithms rely on the optimization of parameterized quantum circuits in noisy settings. The commonly used back-propagation procedure in classical machine learning is not directly applicable in this setting due to the collapse of quantum states after measurements. Thus, gradient estimations constitute a significant overhead in a gradient-based optimization of such quantum circuits. This paper introduces a random coordinate descent algorithm as a practical and easy-to-implement alternative to the full gradient descent algorithm. This algorithm only requires one partial derivative at each iteration. Motivated by the behavior of measurement noise in the practical optimization of parameterized quantum circuits, this paper presents an optimization problem setting that is amenable to analysis. Under this setting, the random coordinate descent algorithm exhibits the same level of stochastic stability as the full gradient approach, making it as resilient to noise. The complexity of the random coordinate descent method is generally no worse than that of the gradient descent and can be much better for various quantum optimization problems with anisotropic Lipschitz constants. Theoretical analysis and extensive numerical experiments validate our findings. [email protected] [email protected] Ding and Ko are co-first authors with equal contribution. [email protected] [email protected] [email protected] 1arXiv:2311.00088v1 [quant-ph] 31 Oct 2023 1 Introduction Variational quantum algorithms have emerged as a promising application for near-term quantum devices, addressing various computational challenges with enhanced efficiency [46, 15]. These algorithms encompass several notable approaches, such as the variational quantum eigensolver , the quantum approximate optimization algorithm [22, 15, 40], and quantum machine learning [38, 8, 63, 14]. They are designed to operate in a hybrid quantum-classical fashion [43, 21]. In these algorithms, the quantum component involves the implementation of parameterized quantum gate operations. By performing measurements, a cost function (and optionally, its gradient) is obtained as the output. The classical computational procedure then utilizes an iterative method to produce updates for the parameters, which are subsequently leveraged to refine and reprogram the quantum circuits. This iterative process continues until convergence is achieved, forming a feedback loop that continues to improve the algorithms performance. In variational quantum algorithms, the optimizable parameters are defined within parameterized quantum circuits (PQCs) [7, 52, 64]. A PQC is a sequence of unitary operators represented by parameterized quantum gates that can be readily implemented on a quantum computer. Assuming we are working in an n-qubit Hilbert space, a parameterized quantum circuit can be expressed as follows: U() =JY j=1Uj(j)Wj. 
(1) Here, ={j}J j=1are the parameters that we need to optimize, Uj(j)C2n2nare the parameterized unitary operators, and WjC2n2nare fixed unitary operators. For instance, a simple example of a PQC consisting only of one-qubit Pauli rotation operators takes the form Uj(j) =MO m=1eij,kj,mj,kj,m, where j,kj,mC22is a single-qubit Pauli matrix that acts on kj,m-th qubit, j,kj,m represents one of the parameters in , and Wjs can be used to represent quantum gates that do not require parameterization, such as the controlled-NOT (CNOT) gate. Letdbe the dimension of the parameters, and we write = (1, 2,, d). We then optimize the parameter by minimizing a properly chosen cost function f(). As an example, the variation quantum eigensolvers (VQE) finds the smallest eigenvalue (groundstate energy) and its corresponding eigenvector (ground state) of a given Hamiltonian matrix Hby minimizing the energy of the state: = argminf() = argminU()0|H|U()0. (2) Here,|0 C2nis a predetermined initial state that can be easily prepared on a quantum computer. For each given ,U() is implemented on a quantum computer to evolve 2 |0, and the corresponding energy f() and its gradient f() can be consequently obtained with measurements. By solving the optimization problem (2), the minimum value gives an approximation to the smallest eigenvalue of H, while U()|0approximates the corresponding eigenvector. 1.1 Problem setup Although the problem of optimizing parameters in VQAs resembles classical optimization problems in machine learning, there exist key differences, particularly in how the cost function is evaluated and the level of accuracy that can be obtained for function and gradient evaluations. Firstly, quantum circuits used for estimating partial derivatives in various directions are typically different. This is predominantly because there is no straightforward method (in parallel to backpropagation) to estimate the entire gradient at once, given the inherent nature of quantum states. The predominant method for computing partial derivatives in a PQC is called the parameter-shift rule [18, 71, 5], which can only be applied to evaluate one component of the partial derivatives at a time. As a result, the estimation of the gradient, f(), typically incurs a cost that is dtimes greater than the cost associated with merely estimating a single partial derivative, if(). Secondly, the evaluation of any given quantity, a function value or a partial derivative, requires measurement from quantum computers and is subject to measurement noise. We note that this noise is associated with a finite sampling space. For example, a measurement of the Hamiltonian in (2), which is defined in a finite-dimensional Hilbert space, yields one of its eigenvalues corresponding to the ansatz. Thus, with an increased number of samples or measurements, the central limit theorem suggests that the distribution of the sample average of the function value or the partial derivative can be approximated by a Gaussian distribution, and as a result, the accuracy of function and gradient evaluations can be relatively low. Therefore, the optimization algorithm must be designed to be resilient to measure noise. In an idealized scenario, we may assume that both the function value and the partial derivatives incorporated into the optimization routine are subject to some Gaussian noise. But the magnitude of corresponding noises can differ up to a constant, especially in situations where the parameter shift rule is applicable (see ). 
With this consideration, the problem of optimizing PQCs can be stated as follows: Problem 1 (Optimizing parameterized quantum circuits) .Finding an efficient algorithm to solve the optimization problem, = argminRdf(), (3) under the following assumptions: 1. The cost of evaluating a partial derivative scales linearly with that of a function value. 3 2. Every evaluation of the function and partial derivative is susceptible to Gaussian noise: f()f() +N(0, 2 1()), if()if() +N(0, 2 2()). (4) Here, 1() and 2() depend on the real implementation and are not necessarily the same (see for example). For simplicity, in our later analysis, we assume that 2() has a uniform upper bound (see Assumption 2). 1.2 Optimization methods One widely used approach for optimizing VQA is through the application of gradient descent (GD) [67, 79]. The classical gradient descent method involves iteratively updating the parameters by utilizing the gradient of the cost function. n+1=nanf(n), (5) where andenotes the learning rate. In light of the measurement process in quantum computing, we consider the noisy gradient descent: Rather than implementing Eq. (5) with exact f(n), we apply an unbiased estimator g()1(for example, (4)). Consequently, the parameter update involves the following iteration, n+1=nang(n). (6) Since g(n) is an unbiased estimation, Eq. (6) is equivalent to Eq. (5) in the expectation sense. Specifically, by taking the conditional expectation on both sides, we have E(n+1|n) =nanf(n) (7) where E(|n) denotes the conditional expectation given n. While noisy gradient descent avoids the need for precise gradient information, it still requires the approximated full gradient information at each iteration. As argued before, in the context of VQA, it is often necessary to compute dpartial derivatives separately for each direction, which makes the cost of each updating step at least d. In this paper, we introduce an alternative optimization method called random coordinate descent (RCD) [73, 48, 60] for addressing Problem 1, with the goal of eliminating the cost dependency on din each step. RCD can be viewed as a variant of gradient descent (GD) where the full gradient in GD is approximated by a randomly selected component of f(n) in each iteration. Specifically, one RCD iteration can be expressed as: n+1=naneininf(n). (8) 1g() satisfies E[g()] =f() 4 Hereeinis the in-th unit direction, f in(n) is the corresponding partial derivative of the cost function, and inis a random index uniformly drawn from {1,2,, d}. Similar to Eq. (6), we can write the noisy RCD as: n+1=naneingin(n). (9) It is important to emphasize that in each iteration of RCD (9), only one partial derivative information is needed. Consequently, within the scope of VQA (as stated in the first assumption of Problem 1), the cost per step of RCD is dtimes smaller than that of GD. 1.3 Contribution This paper primarily focuses on the investigation of RCD in the context of noisy gradient evaluation. Our analysis is conducted in a specific comparison with GD, and we illustrate that, under specific conditions, RCD can serve as a favorable alternative for optimizing parameterized quantum circuits. The main contributions of this study can be summarized as follows: We show that RCD is theoretically no worse than GD when measuring the complexity by the number of partial derivative calculations (Theorems 3 and 4), assuming the presence of noise and the local PL condition. A summary of the complexities of the two methods is presented in Table 1 for comparison. 
It is important to highlight that the inequality LavgLdLavgalways holds. Consequently, when the optimization problem is highly anisotropic, i.e., LLavg, RCD is more cost-effective than GD. In the most extreme case when Lis nearly equal to dLavg, RCD can reduce the complexity by a factor of dcompared to GD. We demonstrate that (noisy) GD and RCD converge with high probability under the local PL condition (Assumption 3) and are stable under noisy gradient information. Specifically, if the initial parameter 0resides within the basin N(X) surrounding the global minimum, both noisy methods ensure that the subsequent parameters n will remain consistently within this basin until they converge with the same high probability (Lemmas 5 and 6). To the best of the our knowledge, such stochastic stability has not been established for the optimization methods in variational quantum algorithms. We provide extensive empirical evidence demonstrating that RCD consistently delivers superior performance compared to GD (Sections 1.5 and 4). Our numerical findings support the theoretical observation that RCD can take a larger learning rate than GD, leading to faster convergence. 5 Algorithm Iteration cost Iterations to reach tolerance Total cost GD ( d) O maxn L2 d 2,L log 1 o (L2 d2 ) RCD (1) O maxn Lavg2 d2 2,dLmax log 1 o (Lavg2 d2 ) Table 1: Comparison of the gradient descent and the randomized coordinate descent methods with an unbiased noisy gradient estimation. dis the dimension of the parameter, and the smoothness constants LandLavgare defined in (11) and (13), respectively. 2 is a bound for the measurement noise defined in (15a). In the table, we limit our attention to the situation where the learning rate is fixed. 1.4 Related works Gradient descent with noise The noisy gradient descent (6) is a popular optimization method in the classical machine learning community. Notable examples are the stochastic gradient descent (SGD) or the perturbed gradient descent (PGD) . The convergence properties of the noisy gradient descent method in (6) have been extensively studied [47, 50, 58, 51, 68, 41]. For classical machine learning, these previous works except established that when the cost function is Lsmooth, strong convex (or Polyak-ojasiewicz condition (PL) ) and satisfies an additional condition, f(n) converges linearly to an approximation of fmin. In the recent work , a similar theoretical result was shown for the noisy GD method applied to quantum optimization problems. Randomized coordinate descent The randomized coordinate descent method (RCD) has proven its efficiency over GD in many large-scale optimization problems. The convergence properties of RCD have been extensively explored in the fields of machine learning and optimization [73, 48, 60, 49, 39, 17]. For example, it was shown in that when fis strongly convex, the convergence complexity of RCD can be consistently lower than or equal to that of GD. Here, complexity refers to the total number of partial derivative calculations required for convergence. Later, for strongly convex functions, RCD accelerations were achieved with adaptive momentum-based strategies in various regimes [39, 49]. For the non-convex optimization, recent work shows the global convergence behavior of RCD with a focus on saddle point avoidance. Nevertheless, convergence rates of RCD have been scarcely studied for nonconvex optimization problems. 
More importantly, most related works focused on the case where partial derivatives are computed exactly, while in this work, we deal with the case where partial derivatives are estimated, which is subject to noise, and we will refer to it as noisy RCD (9). Locally-defined convex conditions for convergence analysis One limitation of the conventional convergence analysis is its reliance on assumptions of global convex or global PL conditions for the cost function f(). However, we show that such global assumptions are not satisfied in quantum problem applications with PQCs, as elaborated in Remark 2. Thus, one must weaken such a global assumption to a local one in the analysis. 6 Convergence analysis under local assumptions requires more sophisticated techniques (see [23, 34, 53, 45] and therein), but it provides important insights that help to interpret empirical results. In our work, we make a local non-convex condition based on the local PL condition . Under this condition and suitable assumptions for the cost function, we By employing a stochastic stability argument, we demonstrate that the noisy GD and RCD methods maintain a comparable convergence rate under our local PL condition with high probability (refer to Theorem 3 and Theorem 4). To the best of the authors knowledge, this paper is the first to provide a rigorous result for the complexity of noisy GD and RCD under a local PL condition designed for variational quantum algorithms built from PQCs. Other quantum optimization methods Another promising direction of research in variational quantum algorithms is zero-order optimization, more commonly known as gradientfree methods. Notably, policy gradient-based techniques have shown their effectiveness in noise robust optimization in the NISQ . Sung et al. construct models based on the previous method and further improve the sample efficiency of the methods. Furthermore, these zero-order optimization methods leverage the strengths of reinforcement learning [76, 24, 11, 12], Monte Carlo tree search [77, 44, 61], and natural evolutionary strategies [2, 80, 27], Bayesian [70, 69], as well as Gaussian processes . In addition to these zero-order methods, several other optimization methods have been proposed recently [31, 59, 65, 26, 25]. One interesting example is the quantum natural gradient (QNG), an approximate second-order method, that incorporates the quantum geometric tensor, which is similar to the natural gradient in classical machine learning. While an outcome of measurement is used as an estimate of the gradient in the QNG or the noisy gradient (6) from (1), the Jordan algorithm encodes the partial derivatives as binary numbers in the computational basis. This algorithm was later improved by Gilyen et al. using high-order finite difference approximations, and applications to VQAs for a certain class of smooth functions were considered. However, the methods [31, 26] require a significant number of ancilla qubits and complex control logics, due to the binary encoding of partial derivatives. Alternatively, proposed a quantum backpropagation algorithm, which uses log dcopies of the quantum state to compute dderivatives. The overhead for computing dderivatives is polylog( d) times that of function evaluation (therefore mimicking backpropagation). One of the main drawbacks of their algorithm is that there is an exponential classical cost associated with the process. 
For a more restrictive class of cost functions (polynomial functions), proposed a framework to implement the gradient descent and Newtons methods. This method also requires the coherent implementation of the cost function on a quantum computer using e.g., sparse input oracle, and thus can be challenging to implement in near-term devices. 7 1.5 A numerical illustration: Variational quantum eigenvalue solver As a brief illustration of the performance of noisy GD versus RCD methods, we consider the transverse-field Ising model, H=JN1X j=1ZjZj+1+ NX j=1Xj, (10) with the coefficient J= 1 and = 1 .5. Here, Ndenotes the number of qubits, and Xj, Zj are Pauli operators acting on the j-th qubit. In Fig. 1, we set N= 10. To implement the quantum circuits, we use Qiskit Aer-simulator with the command result.get counts that outputs measurement outcomes as classical bitstrings. We utilize the resulting classical bitstrings to compute partial derivatives by applying the parameter shift rule . Thus, the result in Fig. 1 takes into account the measurement noise. In each experiment, 10 independent simulations are used with a fixed initialization. The parameterized quantum circuit used for estimating the ground state energy of the Hamiltonian (10) is given in Figure 10 (Appendix D). We compare the optimization performance of the two methods in terms of the number of partial derivative evaluations. The optimization results in Fig. 1 suggest that RCD requires nearly 4 times fewer partial derivative evaluations than GD to converge to an energy ratio of 0.96 and a fidelity of 0.9, both of which are higher than the energy ratio and the fidelity obtained from GD. This observation can be explained by the analysis in Section 2.2, i.e., RCD can be more efficient than GD when the ratio of Lipschitz constants ( L/L avgor L/L max) is significantly larger than 1. Specifically, the ratio of the total computational cost of GD to RCD can be linked to the Lipschitz ratios, as summarized in Table 1. For instance, in the lower panels of Fig. 1, we observe that the ratio L/L avgandL/L max remains above 30 and 8 throughout the iterations. The faster convergence of RCD can be attributed to these large Lipschitz ratios. 2 Preliminaries and main results Before we establish results pertinent to the performance of RCD, we first establish consistent notations and assumptions, which are presented in Section 2.1. Following that, we outline our key theoretical findings in Section 2.2. 2.1 Notations and assumptions Given a vector vRd, we use standard norms for v, including the 2-norm v2:=qP iv2 i and the -norm v:= max i|vi|. In order to ensure the convergence of gradient-based methods, we list several technical assumptions. 8 Figure 1: The comparison of the performance of GD (red) and RCD (blue) for optimizing the Hamiltonian (10). The unit of the x-axis labels the number of partial derivative evaluations as an indication of the computational complexity. The top panels show the approximation of the ground state, including the energy ratio (left) and fidelity (right). In the bottom panels, we show the ratios of Lipschitz constants obtained from the two methods are compared:L Lavg(left) andL Lmax(right). We assume the cost function fsatisfies the L-smoothness. Specifically, it satisfies the following assumption: Assumption 1. The cost function fisL-smooth, in that, f() f()2L2,for all ,Rd. (11) Since the gradient is Lipschitz continuous, the partial derivatives are Lipschitz continuous as well. 
We define the componentwise Lipschitz constants, Definition 1. We say that a function fisLi-smooth with respect to the i-th component if |if(+eih)if()| Li|h| hR, (12) where if()denotes the partial derivative in the i-th direction. From these componentwise Lipschitz constants, we denote the maximum and average of those constants as Lmax:= max 1idLi, L avg=1 ddX i=1Li. (13) As shown in , in general we have, LiLavgLmaxLdLmax. (14) 9 Another interpretation is through the hessian: When fis twice continuously differentiable, the condition (11) is equivalent to 2f(x)LId, and similiarly, the condition (12) is equivalent to sup2 if()Li. We note that both the upper and lower bounds of Lin terms of Lmaxin (14) are tight. If 2fis a diagonal matrix, then Lmax=L, both being the largest diagonal element of 2f. (This is the case in which all coordinates are independent of each other, for example, f=P iix2 i.) On the other hand, if 2f=eewhere eRd satisfies ei= 1 for all i, then L=dLmax. This is a situation where fis highly anisotropic, e.g.,f= (P ixi)2/2, where L=dandLmax= 1. In addition, when Lavg=L, we see that Lavg=Lmax=Lifor all i. Next, it is important to note that the estimation of the gradients in quantum computing can be susceptible to noise, which stems from the inherent nature of quantum measurements. Consequently, in our analysis and comparative studies of different optimization methods, we will take into account the presence of noise. To facilitate such analysis, we make the following assumption: Assumption 2 (Bounds of the noise with respect to the 2-norm) .Given any Rd, we assume that we can find an unbiased random estimate g()for the gradient f(), meaning that E[g()] =f(). Furthermore, we assume that there exists a constant 2 >0such that 2 >sup Rdmax 1idE |if()gi()|2 . (15a) Here, we also assume g()is independent for different . Additionally, we assume the existence of a basin encompassing the global minimum, within which fsatisfies the PolyakLojasiewicz condition (PL) , equivalently, the local P L condition . Assumption 3 (Local PL condition) .Define Xas the set of global minima and fminas the global minimum value evaluated over X. Then there exists a f, > 0such that for any N(X) :=f1([fmin, f)), f()22(f()fmin). It is worthwhile to highlight that the PL condition is defined not on the entire space RdbutN(X), which is reasonable in the context of the variational quantum algorithm. We support this argument with the following remark. Remark 2. Letf()be a cost function defined by some parameterized quantum circuit (2). Note that fis periodic and smooth, due to its specialized form. By the extreme value theorem, we see that there exist global maximum and minimum of f, denoted by maxand 10 min. In general, fis not constant, which means that fmax> f min. Had fsatisfied the global PL condition, it would have followed that at the global maximum max, 0 =f(max)22(fmaxfmin)0, (16) which gives a contradiction to the general case that fmax> fmin. As another case study, if fis assumed to be convex, namely, f()f() + (f(),)for all ,Rd, (17) then setting =maxand=minresults in a contradiction. Therefore, the cost function fthat is constructed from an ansatz similar to (2), will not satisfy global PL or convex conditions in general. 2.2 Main result: complexity comparison of GD and RCD In this study, our main focus is to compare the complexity of noisy gradient descent (GD) and randomized coordinate descent (RCD) under the assumptions of a local Polyak Lojasiewicz (PL) condition 3. 
For the sake of simplicity, in the remaining part of this paper, we will refer to noisy gradient descent and noisy randomized coordinate descent as GD and RCD, respectively, without explicitly mentioning the term noisy. The main theoretical results are summarized in the following two theorems: Theorem 3 (Complexity of GD (7)) .Assume fis a L-smooth function that satisfies assumption 3 and gsatisfies assumption 2. Given >0small enough, if f(1)fand an= (min /(L2 d),1/L )in GD (7), then with probability 1f(1)/fo(1), there exists at least one n < N =(max L2 d/(2), L/ ) (18) such that f(n)fmin+. Theorem 4 (Complexity of RCD (8)) .Assume fis aL-smooth function that satisfies assumption 3 and gsatisfies assumption 2. Given >0small enough, if f(1)fand an= (max /(Lavg2 d),1/Lmax )in RCD (8), then with probability 1f(1)/f o(1), there exists at least one n < N =(max Lavg2 d2/(2), Lmaxd/ ) (19) such that f(n)fmin+. Based on the theorem mentioned above, to achieve f(n)fmin, we can select the learning rate an= L2dfor GD and an= Lavg2dfor RCD. Recalling equation (14), we observe that LavgL, which means that we could use a larger learning rate for RCD. This choice aligns with the learning rates utilized in the numerical experiments presented in Section 1.5 as well as those in Section 4. 11 We compare the complexity of the noisy GD and RCD methods with the estimates of the number of iterations. First, according to the above result, we conclude that the number of iterations required for GD is N= L2 d 2 2, while for RCD, we have N= O Lavg2 d2 2 . Notably, in RCD, there is an additional factor of d, which can be understood in the expectation sense: During each iteration of the noisy RCD, the randomness arises from two sources: the random direction inand the noisy partial derivative gin(n). By taking the conditional expectation with respect to n, we obtain: E(n+1|n) =nan df(n). (20) Compared with (7), there is an extra 1 /dfactor in the expectation of RCD. Consequently, in each iteration, the rate of decay of the cost function is smaller in RCD compared to GD. Consequently, we anticipate that RCD would necessitate more iteration steps to achieve convergence. On the other hand, it is also important to note that in certain scenarios where Lavgdis comparable to L, the number of iterations required for RCD is comparable to that of GD. Meanwhile, it is important to point out that a more practical criterion for comparing the two methods is the cumulative cost of each method, which is represented by the number of partial derivative calculations from the quantum circuits. This is because quantum algorithms for estimating the gradient have a cost proportional to d. Since each iteration of GD needs to calculate the full gradient ( dpartial derivatives), the total number of partial derivative estimations in GD is Npartial,GD =L2 d2 2 . In contrast, the number of partial derivative estimations in RCD is: Npartial,RCD =OLavg2 d2 2 . From equation (14), we can deduce that: (Npartial,RCD ) =Npartial,GD =dO(Npartial,RCD ). This suggests that the computational cost of RCD is L/L avgtimes cheaper than that of GD. In an extreme case where fis highly skewed, i.e., L/L avgd, RCD can reduce the computational cost by a factor of the dimension d, which will be a significant reduction for large quantum circuits. 2This complexity aligns with the classical theoretical results for gradient descent (GD), which typically assume the presence of strong convexity or a local PL condition for the function f. 
12 In addition to the complexity result, it is worth noting that the two methods exhibit similar success probability, which is approximately 1 f(1)/f, as indicated by the two aforementioned theorems. This observation is quite surprising, as each iteration of RCD appears noisier due to the random selection of the updating direction in. Intuitively, this suggests that we might need to choose a smaller learning rate anto ensure stability in RCD, which would consequently increase its complexity. However, our theory unveils that choosing a similar learning rate anis adequate to stabilize RCD. To elucidate this point, its important to recognize that, on average, RCD behaves equivalently to GD. By conducting more iterations, RCD can approximate its average behavior (expectation), effectively mitigating the extra randomness introduced by in. This compensation mechanism ensures that the success probabilities remain consistent between the two methods. 3 Proof of main results In this section, we provide the proofs for Theorems 3 and 4. We will start by showing the stochastic stability of the two methods in Section 3.1. This will guarantee that the parameter is likely to stay close to the global minimum until attaining a small loss. Following that, in Section 3.2, we utilize the local PolyakLojasiewicz (PL) condition around the global minimum to establish the convergence of f(n). In all of the following theoretical results and the corresponding proofs in the appendices, we assume fmin= 0 without loss of generality by modifying the original function as f()f()fmin. (21) Thus, all results in this section can be reformulated for the original cost function by the substitution (21), which will yield Theorems 3 and 4. 3.1 Stochastic stability In the context of optimization, stability and convergence are not separate properties. In a deterministic algorithm, convergence immediately guarantees stability. However, this connection does not hold for stochastic processes in general. For instance, when optimization methods such as noisy GD, SGD, or noisy RCD are applied, discrete-time stochastic processes are generated. In such cases, a convergence theory must be developed for a collection of random paths, which can exhibit different convergence behaviors among themselves. In our specific case, we anticipate that when nremains within the basin N(X) and the learning rate is correctly chosen, both the GD and the RCD methods, when the gradient is exactly calculated, converge to a global minimum due to the local PL condition stated in assumption 3. However, in the presence of noise in the gradient and the use of a constant learning rate, it is generally impossible to ensure that n N(X) almost surely, unless a different strategy is adopted such as the decreasing learning rates [53, 23, 34]. On the 13 other hand, the purpose of the optimization algorithm is to minimize the loss function, which means that it suffices to ensure stability until small loss is achieved. To quantify such a likelihood, in this section, we demonstrate that when 0 N(X), there exists a finite probability that nobtained from GD and RCD remain within N(X) until achieving a small loss. This provides a high probability of convergence for the two methods. We summarize the result for noisy GD in the following lemma. Lemma 5. Assume that fis aLsmooth function that satisfies the assumption 3 and g satisfies the assumption 2. 
If f(1)fand the learning rate is chosen as follows, an=a <min1 L,2f L2d , then, with high probability, iterations of noisy GD (7)remain in f1([0, f))until a small loss is achieved. Specifically, P N > 0such that f(N)/ N andf(n)>La2 d ,n < N f(1) f. (22) In light of equation (22), if we select the learning rate anto be sufficiently small, then with a probability of 1 f(1) f, the parameters are guaranteed to achieve a small loss before escaping the basin. Despite infrequent updates of the gradient components, RCD still demonstrates a similar level of stochastic stability. This key observation is summarized in the following lemma: Lemma 6. Assume that fis aL-smooth function that satisfies assumption 3 and gsatisfies assumption 2. Given any f(1)< f, if one chooses the learning rate an=a <min1 Lmax,d ,2f Lavg2d , then, with high probability, iterations from the noisy RCD (8)stay at f1([0, f))until achieving a small loss. Specifically, P N > 0such that f(N)/ N andf(n)>Lavga2 d ,n < N f(1) f. The proofs of Lemma 5 and 6 are provided in Appendices A and B, respectively. The core concept of these proofs is based on the construction of a specialized supermartingale and the utilization of Markovs inequality. For example, to prove Lemma 5, we define a stochastic process Vn=( f(n)In, n < f()I, n. 14 where the indicator random variable is given by, In=( 1,if{k}n1 k=1f1([0, f)) 0,otherwise ., and the stopping time = inf k:f(k)La2 d . We observe that Vnis a meticulously crafted supermartingale, allowing us to distinguish between stable and unstable events. In particular, we demonstrate that if nexits the basin before it reaches f(n) =La2 d (an unstable event), then supnVnf. Therefore, we can employ Vnas a categorizer and the probability of failure of GD can be characterized by the value of Vn. More specifically, P N > 0 such that f(N)/ N andf(n)>La2 d ,n < N P sup nVnf . Except for its use as a categorizer, we have designed Vnin such a way that it is a supermartingale, meaning E(Vn+1|kn)Vn. Therefore, we can use Markovs inequality for supermartingales to bound the supremum of Vnand achieve the desired result. 3.2 Convergence analysis In this section, we present the convergence properties of noisy GD and RCD methods. It is important to note that Theorems 3 and 4 directly follow from Theorems 7 and 8, respectively. Our first theorem shows the convergence performance of the noisy GD method, Theorem 7. Assume fis aL-smooth function that satisfies Assumption 3 and gsatisfies Assumption 2. Given any precision 0< < f, the initial guess f(1)< f, and the probability of failure f(1) 1 f,1 , we choose the learning rate in (5) an=a=O min1 L, L2d , and the total number of iterations N= 1 alog f(1) f(1) f . Then, with probability 1, we can find at least one mwith 1mNsuch that f(m). In particular, P{mN, f (m)} 1, 15 Next, we state the convergence property of the noisy RCD method in the following theorem, Theorem 8. Assume fis aL-smooth function that satisfies Assumption 3 and gsatisfies Assumption 2. Given any precision 0< < f, the initial guess f(1)< f, and the probability of failure f(1) 1 f,1 , we choose the learning rate in (9) an=a=O min1 Lmax,d , Lavg2d , and the total number of iterations N= d alog f(1) f(1) f . Then, with probability 1, we can find at least one mwith 1mNsuch that f(m). In particular, P{mN, f (m)} 1, The proofs of these theorems can be found in the appendix C. Remark 9. We emphasize that Theorem 7 and Theorem 8 are general convergence results that require only mild conditions. 
Specifically, Theorem 7 can be used to demonstrate the stability and convergence of the traditional SGD algorithm when the right assumptions are in place. A convergence result analogous to the one previously discussed has been investigated in [41, Theorem 7], where the authors impose a more stringent requirement on the cost function . In our work, we demonstrate the convergence of noisy GD using more sophisticated techniques in probability theory and adopt a weak version of probabilistic convergence . In addition, our approach can be directly extended to show the convergence of noisy RCD as in Theorem 8, which to the best of our knowledge, has not been established before. These two theorems suggest that with a high probability, the loss function can achieve small loss during the training process. In other words, it is likely that the parameter remains in the basin Nuntil the precision is attained at some point. After that, the optimization algorithm could diverge unless a certain strategy is applied, for example, a schedule of decreasing learning rates or an early stopping criterion. Remark 10. Our theoretical result clarifies a relation between the learning rate and the desired precision in optimization. For example, the precision is manifested in the upper bounds of the learning rates in Theorem 7 and Theorem 8. Thus, to reach precision , it is suggested to use an O()learning rate. Otherwise, due to the stability issue, the trajectory is no longer guaranteed to converge to the precision with positive probability. 16 We present the roadmap for proving Theorem 7 as follows: Define the stopping time = inf{k:f(k)}. To prove Theorem 7, it suffices to demonstrate that the probability of failure P( > N ) is small. Since the learning rate anis selected to be sufficiently small and, according to the lemma 5, it is likely that nwill remain within the basin until the loss is achieved3. Thus, informally, it suffices for us to assume n N. The next step is to find an upper bound for the probability of failure pfail=P( > N ). Using the local PL condition, we can show that when < f (n)< f, E(f(n+1)|n) 1a 2 f(n), meaning that the conditional expectation of f(n+1) decays to zero with rate 1a 2 . Inspired by this observation, we can construct a supermartingale to show that, if > N , with high probability, we have inf 1nNf(n). We note that this event is complementary to the failure event { > N }. Consequently, we obtain an upper bound for pfail. 4 Numerical results In Section 1.5, depicted in Fig. 1, we have demonstrated that the noisy RCD leads to faster convergence than the noisy GD for VQE problems. In this section, we extend our investigation to gauge the efficiency of noisy RCD applied to various other variational quantum algorithms, especially those involving non-convex optimization problems. The implementation of these algorithms is executed on classical computers. To emulate quantum measurement noise, the partial derivatives undergo perturbation through additive Gaussian noise as outlined in Section 1.14. Subsequently, we substantiate this approximation through a numerical experiment on a quantum simulator. This experiment further also proposes suitable values for the strength of the Gaussian noise that we will introduce in the upcoming numerical tests to appropriately mimic the measurement noise. In the experiment presented in Section 4.2, we utilize Qiskit-0.44.1 . The algorithms for subsequent examples are implemented using Numpy and Jax . 
We conducted each experiment ten times, employing different random initializations for each run. All tests are executed on an Intel(R) Xeon(R) CPU @ 2.20GHz, complemented by a T4 GPU. 3Rigorously, we must also take into account the possibility that the optimization algorithm does not reach loss in a finite number of iterations. 4The derivative with noise is computed by adding Gaussian noise to the original derivative: if(x) if(x) +, where follows a Gaussian distribution, denoted as N(0, ). In this notation, signifies the standard deviation, defining the intensity of the Gaussian noise. 17 4.1 Analyzing the noise distribution Building on the numerical experiment detailed in Section 1.5 and executed in Qiskit, we analyze the statistics of the partial derivatives derived from the quantum circuit. Fig. 2 showcases the histograms representing 10,000 estimates of partial derivatives with respect to the initial 12 directions, while the histograms for the remaining directions are presented in Appendix F. Each estimate of the partial derivatives is averaged over 1000 shots. From all histograms, we can clearly see that the distribution is closely approximated by a Gaussian distribution. In addition, the magnitude of the standard deviation of partial derivative estimates across all directions is comparable. These observations support assumptions of the noise model in Problem 1. For simplicity, we will employ the Gaussian noise model in our subsequent investigations to compare the performance of the noisy GD and RCD methods. In the next two sections, we conduct a comprehensive comparison between noisy RCD and GD across a broad spectrum of variational quantum algorithms and applications. 4.2 VQE with a varied circuit structure In Section 1.5, we utilize the VQE for the TFIM (10) employing both the noisy GD and the noisy RCD. In this section, we tackle the same optimization task but with a modified setup. Specifically, Fig. 3 depicts the PQC utilized in the experiments showcased in Fig. 4, distinct from that presented in Fig. 10. In the experiments illustrated in Fig. 4, each optimization outcome derives from 10 identical simulations with the same initial condition. We set the learning rates for the RCD and GD at 0 .3 and 0 .05, respectively. Each experiment utilizes 10,000 shots, with 18 trainable parameters. Results shown in Fig. 4 demonstrate that, compared to GD, RCD requires nearly three times fewer partial derivative evaluations to converge. 4.3 Quantum Approximate Optimization Algorithm (QAOA) for quantum Hamiltonians The Quantum Approximate Optimization Algorithm (QAOA) , originally devised for solving combinatorial problems, is a leading example for demonstrating quantum advantage on near-term quantum computers. As introduced in , the QAOA utilizes a parametrized quantum circuit (PQC), which naturally enables optimization through the variational quantum algorithm. 
In a generalized QAOA model, we begin with an initial quantum state |i, which can be easily prepared in experiments, and let it evolve by a parameterized unitary transformation, |(,)=U({j, j}p j=1)|i=eiH2peiH1peiH21eiH11|i, (23) where the vector (or) enumerates the parameters j(orj), and thus the total number of parameters is 2 pand the unitary transformation alternates between two kinds of param18 Figure 2: Histograms of the estimated partial derivatives: Each panel displays the histogram of 10000 partial derivative estimates in one of the first 12 directions, which are obtained by applying the parameter-shift rule for the ansatz in Fig. 10. The sampling of the partial derivatives is carried out at a suboptimal point chosen from one simulation used in Fig. 1, where the fidelity is about 0.889. 19 q0:RY() RZ() RY() RZ() RY() RZ() q1:RY() RZ() RY() RZ() RY() RZ() q2:RY() RZ() RY() RZ() RY() RZ() Figure 3: A variational circuit ansatz is employed for the Transverse-Field Ising Model expressed in Equation (10), utilizing 3 qubits. This circuit is a parameterized construct comprised of alternating rotation and entanglement layers. Each rotation layer involves the application of single qubit gates, specifically Rotation-y and Rotation-z gates, to all qubits. In contrast, the entanglement layer employs two-qubit gates, namely the controlled-X gate, to facilitate entanglement among the qubits. The ansatz is designated with 18 parameters. Figure 4: Performance comparison between GD (red) and RCD (blue) in terms of energy ratio and Lipschitz constant ratios for optimizing the Hamiltonian (10). The energy ratio E/EGSis presented in the left panel, while the Lipschitz constant ratios, denoted asL LavgandL Lmax, are shown in the middle and right panels respectively. The shaded areas in each panel represent variations observed across multiple trials. 20 eterized unitary transformations. With this ansatz, the optimization is performed with the parameters {j, j}associated with the application-dependent Hamiltonian matrices H1 andH2, respectively. In the subsequent sections, we will consider optimization problems based on the QAOA (23). We will conduct a comparative analysis of the noisy GD and RCD for various QAOA models that will span a range of systems, including the Ising model (refer to Section 4.3.1), the Heisenberg model (refer to Section 4.3.2), and Variational Quantum Factoring (refer to Section 4.4.3). 4.3.1 QAOA Ising Model In this section, we parameterize the transverse-field Ising model by a Hamiltonian H[h] =N1X j=1Zj+1Zj+NX j=1(Zj+hXj), (24) where Ndenotes the total number of qubits. The global control field h { 4}takes two discrete values, corresponding to the two alternating QAOA generators H1=H[4] and H2=H[+4] [13, 75]. The initial state |icorresponds to the ground state of H[2], while the desired target state |is selected as the ground state of H[+2]. The variational problem aims to optimize the fidelity5, max {i,i}p i=1F({i, i}p i=1) = max {i,i}p i=1||U({i, i}p i=1)|i|2, (25) where, U({i, i}p i=1)|i=eiH2peiH1peiH21eiH11|i. (26) We note that the fidelity optimization (25) is equivalent to the optimization of the form (2) by letting the Hamiltonian be ||. In the numerical test, we choose a system from (24) with three qubits ( N= 3), and then apply both GD and RCD methods in the optimization. Figure 5 shows the optimization results obtained from the noisy GD and RCD with the respective learning rates of 0.0045 and 0.015 by using an ansatz defined with 20 parameters. 
By adjusting the learning rate and tracking the stability, We observe that RCD permits a larger learning rate in comparison to GD, while maintaining the stability. Similar to the results presented in Fig. 1, we compare the performance of the two methods in terms of the number of partial derivative evaluations. From Fig. 5, We observe that noisy RCD converges much faster than noisy GD. While RCD achieves a fidelity near 1 with 500 partial derivative evaluations, GD only attains a fidelity below 0.25 with an equivalent number of evaluations. This computational 5Fidelity serves as a metric for optimization. However, one caveat of utilizing fidelity is its reliance on the ground state. In this context, we assume the presence of an oracle capable of producing the fidelity value. Subsequently, we also employ energy as an observable metric for optimization purposes. 21 effectiveness of RCD can be attributed to the large ratios of Lipschitz constants shown in Fig. 5, which are obtained along the optimization trajectories. Figure 5: Performance comparison between the noisy GD and RCD for the Ising model (24). The corresponding Lipschitz constant ratios, denoted asL LavgandL Lmax, are presented in the bottom figures. The shaded areas within the figures represent variations that have been observed across ten random realizations. The optimization is performed for parameters with dimension equal to 20. 4.3.2 QAOA Heisenberg Model Our second test problem with QAOA is the (anisotropic) spin-1 Heisenberg model, H= H1+H2, with the alternating Hamiltonians given by, H1=JNX j=1(Xj+1Xj+Yj+1Yj), H 2= NX j=1Zj+1Zj, with anisotropic parameter /J= 0.5 (topological/Haldane [16, 55, 37, 78]). For the Heisenberg model, we consider a system consisting of eight qubits ( N= 8) and choose the fidelity as a measure for optimization, similar to the setup for the results in Fig. 5. We set the antiferromagnetic initial state to |i=|10101010 . The target state is the ground state of the Hamiltonian H=H1+H2. We employ the QAOA ansatz represented by Eqn. (26) and carry out the fidelity optimization detailed in Eqn. (25). Figure 6 showcases the performance outcomes from noisy GD and RCD simulations with learning rates set to 0.01 and 0.1, respectively. This QAOA model involves 28 parameters. The fidelity result shows that RCD converges to the target state much faster than GD. 22 This phenomenon can be elucidated by noting that the ratios of Lipschitz constants derived from both noisy methods,L LavgandL Lmax, average around 10 and 6 along the trajectories, respectively. Especially, the magnitude of the ratioL Lmaxis similar to that of the ratio of the numbers of partial derivative evaluations to reach a high fidelity >0.8 from both noisy methods, as shown in Fig. 6. Based on the observed numerical results, a high ratio ofL Lmax is responsible for the efficiency of RCD in this optimization problem. Figure 6: Performance comparison between noisy GD and RCD for the Heisenberg model. The corresponding Lipschitz constant ratios, denoted asL LavgandL Lmax, are presented in the middle and right. The shaded areas within the figure represent variations that have been observed across ten random realizations. The optimization is performed in dimensions of 28. 4.4 QAOA for classical combinatorial optimization problems Quadratic Unconstrained Binary Optimization (QUBO) problems have significant applications in fields such as finance, logistics, and machine learning, etc. 
4.4 QAOA for classical combinatorial optimization problems

Quadratic Unconstrained Binary Optimization (QUBO) problems have significant applications in fields such as finance, logistics, and machine learning. Recognized as a prominent optimization model in quantum computing, QUBO consolidates a wide range of combinatorial optimization problems [20, 35, 4, 28] and translates them into the identification of the ground state of a classical Ising model. The goal of QUBO is to identify a sequence of binary variables (0 or 1) that minimizes a quadratic function. Specifically, a cost function f_Q is constructed over the set of binary vectors B^n:

f_Q(x) = x^⊤ Q x = Σ_{i,j=1}^{n} Q_{ij} x_i x_j.  (27)

In this context, B = {0, 1} signifies the set of binary values (or bits), and B^n represents the collection of binary vectors of length n > 0. A symmetric, real-valued matrix Q ∈ R^{n×n} is introduced, with each element Q_{ij} determining the weight for the corresponding pair of indices i, j ∈ {1, …, n}. For example, if i = j, the term Q_{ii} x_i² contributes Q_{ii} to the function value when x_i = 1. On the other hand, if i ≠ j, the term Q_{ij} x_i x_j contributes Q_{ij} to the function value when both x_i = 1 and x_j = 1. Overall, QUBO seeks to minimize f_Q over the set of binary vectors by determining an optimal minimizer x^*,

x^* = argmin_{x ∈ B^n} f_Q(x).  (28)

Incorporating the variational quantum algorithm into QUBO, we reformulate the cost function using the substitution

x_i = (1 − Z_i)/2 or (1 + Z_i)/2,  (29)

where the variable x_i is supplanted by the Pauli Z matrix operating on the i-th qubit. This replacement facilitates the formulation of a model Hamiltonian whose ground state can be approximated by minimizing the expected energy via the variational quantum algorithm, as elaborated in Section 4.4.1. In the following sections, we evaluate the performance of noisy GD and RCD across several QUBO applications, focusing on ground state energy estimation: Max-Cut in Section 4.4.1, the Traveling Salesman Problem in Section 4.4.2, and Variational Quantum Factoring in Section 4.4.3.

4.4.1 Max-Cut

For the Max-Cut problem, the graph employed in our numerical experiments has four vertices {0, 1, 2, 3} with edge set E = {(0,1), (0,2), (0,3), (1,2), (2,3)}, as read off from the QUBO below. The global cost function is designed to maximize C = Σ_{(i,j)∈E} x_i(1 − x_j), where E represents the edges of the graph. For the given graph, the QUBO problem can be formulated as

min_{x_i ∈ {0,1}} −3x_0² + 2x_0x_1 + 2x_0x_2 + 2x_0x_3 − 2x_1² + 2x_1x_2 − 3x_2² + 2x_2x_3 − 2x_3².

In order to construct the corresponding Hamiltonian, we associate the binary variables x_i with the Pauli Z matrices, denoted Z_i, acting on individual qubits. Taking into account the relationship x_i = (1 − Z_i)/2 between the binary variables x_i and the Pauli matrices Z_i, the cost function is articulated by the Hamiltonian

H = −(5/2) I + (1/2)(Z_0Z_1 + Z_0Z_2 + Z_0Z_3 + Z_1Z_2 + Z_2Z_3).  (30)

Using this Hamiltonian, we construct a parameterized quantum circuit with four qubits (N = 4) and 20 parameters. The circuit consists of alternating single-gate rotation layers, U_single(θ) = Π_{i=1}^{n} RY(θ_i), and entangler gates U_entangler. (Each rotation layer applies a rotation-Y gate to every qubit; the entanglement layer incorporates two-qubit gates for qubit entanglement without tunable parameters, here controlled-Z gates. For a comprehensive explanation, refer to the circuit architecture in Appendix E.) The configuration of the parameterized quantum circuit is illustrated in Figure 11 in Appendix E. This structure resembles the variational quantum circuit of QAOA, with the ansatz given by |ψ(θ)⟩ = [U_single(θ) U_entangler]^m |+⟩.

For the optimization process, we assign a learning rate of 0.1 for GD and 3.0 for RCD and select the energy as the optimization metric. As illustrated in Fig. 7, RCD again outperforms GD: it converges to an energy ratio of 1 with roughly 200 partial derivative evaluations, whereas GD achieves only an average of 0.75 with 1000 partial derivative evaluations. The superior performance of RCD in Fig. 7 can again be attributed to the significant values of L/L_avg and L/L_max, both exceeding an order of magnitude of 3. As observed from the optimization result, a high ratio L/L_avg is indicative of the rapid convergence of RCD in this application.

Figure 7: Performance comparison between noisy GD and RCD for the Max-Cut problem. The corresponding Lipschitz constant ratios, denoted L/L_avg and L/L_max, are presented in the middle and right panels. The shaded areas within the figure represent variations observed across ten random realizations. The optimization process is performed in 20 dimensions.
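The mapping from the Max-Cut QUBO to the Ising Hamiltonian can be checked numerically. Below is a small sketch (names ours) that builds H from the edge set implied by the QUBO above, using the fact that after the substitution x_i = (1 − Z_i)/2 each edge contributes (Z_iZ_j − 1)/2; the constant −(5/2)I follows our reconstruction of (30). Since H is diagonal in the computational basis, a brute-force scan of its diagonal recovers the maximum cut.

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
Zp = np.diag([1.0, -1.0])

def z_op(sites, n=4):
    """Tensor product of Pauli-Z on `sites`, identity elsewhere."""
    return reduce(np.kron, [Zp if k in sites else I2 for k in range(n)])

edges = [(0, 1), (0, 2), (0, 3), (1, 2), (2, 3)]

# -C(x) = sum_edges (2 x_i x_j - x_i - x_j); with x_i = (1 - Z_i)/2 each edge
# contributes (Z_i Z_j - 1)/2, giving H = -5/2 I + (1/2) sum Z_i Z_j, cf. Eq. (30).
H = sum(0.5 * z_op({i, j}) for i, j in edges) - 2.5 * np.eye(16)

# Brute-force check: the diagonal of H equals -C over all 16 bitstrings.
diag = np.real(np.diag(H))
best = int(np.argmin(diag))
print(f"max cut value = {-diag[best]:.1f}, bitstring = {best:04b}")
```

Running this prints a maximum cut of 4, attained e.g. by the partition 0101 (vertices {0, 2} against {1, 3}), which is consistent with the ground state energy −4 of (30).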
4.4.2 Traveling Salesman Problem (TSP)

We have designed a numerical test for the Traveling Salesman Problem (TSP) using three cities as an example, with intercity costs 48, 63, and 91. The TSP cost is defined as

C(x) = Σ_{i,j} w_{ij} Σ_p x_{i,p} x_{j,p+1} + A Σ_p (1 − Σ_i x_{i,p})² + A Σ_i (1 − Σ_p x_{i,p})²,

where i labels a city (node), p indicates its order in the tour, x_{i,p} ∈ {0, 1}, and the penalty parameter A is set sufficiently large to effectively enforce the constraints. More details regarding the expansion of C(x) can be found in Appendix G.

Utilizing the defined cost function, we establish a model Hamiltonian in the same manner as in Section 4.4.1 and aim to prepare its ground state to address the QUBO problem. A detailed representation of the Hamiltonian is available in Appendix G. We construct a parameterized quantum circuit comprising alternating single-gate rotation layers, U_single(θ) = Π_{i=1}^{n} RY(θ_i), and entangler gates U_entangler. This circuit resembles the one depicted in Figure 11 in Appendix E, albeit with a greater number of qubits. The total number of trainable parameters is 90, which requires nine qubits (N = 9) and ten alternating layers. We employ the energy as the measure for the optimization cost function.

In the left panel of Fig. 8, the optimization results obtained from noisy RCD and GD are plotted. Notably, GD exhibits slower convergence than RCD in achieving an energy ratio of 1. The 90 parameters employed in this optimization, a number markedly greater than in the prior applications, might account for this disparity: the increased parameter count likely requires additional iterations and partial derivative evaluations when applying GD. As in previous results, the two types of Lipschitz constant ratios are shown along the iterations in Fig. 8. Again, the values of the ratios are considerably large, especially during the initial stage of the optimization, underlining the efficiency of RCD in the optimization process.

Figure 8: Performance comparison between noisy GD and RCD for the TSP problem. The corresponding Lipschitz constant ratios, denoted L/L_avg and L/L_max, are presented in the middle and right panels. The shaded areas within the figure represent variations observed across ten random realizations. The optimization process is performed in 90 dimensions. In the first panel, E/E_GS is defined as (E − c)/(E_GS − c), where c/E_GS = 3000; for clarity of presentation, we shift the energy by a constant.
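At this size (9 binary variables) the TSP cost C(x) can be written down directly and minimized by brute force. The sketch below makes a few assumptions not fixed by the text: the tour is closed cyclically (index p + 1 is taken mod n), the penalty weight is set to A = 1000 as one "sufficiently large" choice, and the three intercity costs 48, 63, and 91 are assigned to the pairs (0,1), (0,2), and (1,2), respectively.

```python
import itertools
import numpy as np

# Assumed assignment of the symmetric intercity costs 48, 63, 91.
w = np.array([[0, 48, 63],
              [48, 0, 91],
              [63, 91, 0]])
A = 1000.0  # penalty weight (assumed; must dominate the tour costs)
n = 3

def tsp_cost(x):
    """C(x) with x[i][p] = 1 iff city i is visited at step p (cyclic closure assumed)."""
    tour = sum(w[i, j] * x[i][p] * x[j][(p + 1) % n]
               for i in range(n) for j in range(n) for p in range(n))
    col = sum((1 - sum(x[i][p] for i in range(n))) ** 2 for p in range(n))  # one city per step
    row = sum((1 - sum(x[i][p] for p in range(n))) ** 2 for i in range(n))  # each city once
    return tour + A * (col + row)

# Brute force over the 2^9 assignments of the 9 binary variables.
best = min(itertools.product([0, 1], repeat=9),
           key=lambda bits: tsp_cost([bits[0:3], bits[3:6], bits[6:9]]))
print(best, tsp_cost([best[0:3], best[3:6], best[6:9]]))
```

Every valid 3-city tour traverses all three edges, so the minimizers are exactly the permutation assignments with total cost 48 + 63 + 91 = 202; the penalty terms vanish there, which is a quick sanity check that A enforces the constraints.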
4.4.3 Variational Quantum Factoring

Our next QUBO problem is designed as a variational quantum factoring task, formulated within the framework of quantum adiabatic computation [19, 3]. For example, to factorize 143 into the product of two prime numbers, let 143 = pq, where

p = 8 + 4p_2 + 2p_1 + 1,  q = 8 + 4q_2 + 2q_1 + 1.

Upon direct computation, the relations simplify to

p_1 + q_1 − 1 = 0,  (31)
p_2 + q_2 − 1 = 0,  (32)
p_2 q_1 + p_1 q_2 − 1 = 0.  (33)

To solve this system of equations, we introduce the cost function

c(p_1, q_1, p_2, q_2) = (p_1 + q_1 − 1)² + (p_2 + q_2 − 1)² + (p_2 q_1 + p_1 q_2 − 1)².  (34)

By borrowing techniques from [74, 62] (see Appendix H for more details), the cost function can be reduced to

c(p_1, q_1, p_2, q_2) = 5 − 3p_1 − p_2 − q_1 + 2p_1 q_1 − 3p_2 q_1 + 2p_1 p_2 q_1 − 3q_2 + p_1 q_2 + 2p_2 q_2 + 2p_2 q_1 q_2.  (35)

Following the QUBO methodology, we treat (p_1, q_1, p_2, q_2) as Boolean variables and substitute each with (1 − Z_i)/2, as in the previous sections. The problem can then be reformulated as the Ising Hamiltonian

H = 3I + (1/2)Z_0 + (1/4)Z_1 + (3/4)Z_0Z_2 + (1/4)Z_2 − (1/4)Z_1Z_2 + (1/4)Z_0Z_1 − (1/4)Z_0Z_1Z_2 + (1/2)Z_3 + (1/4)Z_0Z_3 + (3/4)Z_1Z_3 + (1/4)Z_2Z_3 − (1/4)Z_1Z_2Z_3.  (36)

The ground states of this Hamiltonian are |0110⟩ and |1001⟩, which respectively correspond to the two solutions of the factorization of 143. We summarize this as follows:

(p_1, p_2, q_1, q_2) = (0, 1, 1, 0) ⟹ (p, q) = (13, 11),  (37)
(p_1, p_2, q_1, q_2) = (1, 0, 0, 1) ⟹ (p, q) = (11, 13),  (38)
with p = 8 + 4p_2 + 2p_1 + 1 and q = 8 + 4q_2 + 2q_1 + 1 as Boolean-valued expressions.  (39)

In our numerical experiment, we select the mixer Hamiltonian H_2 = Σ X_i and set up a 20-layer QAOA, which corresponds to 40 parameters. (The QAOA ansatz builds the variational circuit by alternating between the parameterized unitary evolution associated with the problem Hamiltonian H and the mixer Hamiltonian H_2.) We set the learning rates to 0.0001 for GD and 0.005 for RCD and choose the energy as the optimization measure. Even with this small step size, the variance of GD is notably large, and employing a larger step size for GD further exacerbates the results. Figure 9 depicts the optimization results for the Hamiltonian (36): the number of partial derivative evaluations for RCD to reach an energy ratio of 1 is about 400, whereas GD requires more than 1000 to reach the same tolerance. As discussed previously, this observation aligns with prior discussions, particularly given the pronounced magnitude of the Lipschitz constant ratios evident in Fig. 9.

Figure 9: Performance comparison between noisy GD and RCD for the quantum factoring problem. The corresponding Lipschitz constant ratios, denoted L/L_avg and L/L_max, are presented in the middle and right panels. The shaded areas within the figure represent variations observed across ten random realizations. The optimization process is performed in 40 dimensions.
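The reduced cost (35) and the solution set (37)-(38) can be verified by enumerating all 16 assignments of (p_1, q_1, p_2, q_2). The sketch below (function names ours) confirms that the reduced form vanishes exactly at the two bitstrings corresponding to 143 = 11 × 13, and that these also zero the original cost (34).

```python
import itertools

def c_full(p1, q1, p2, q2):
    """Original cost (34) built from the factoring constraints (31)-(33)."""
    return ((p1 + q1 - 1) ** 2 + (p2 + q2 - 1) ** 2
            + (p2 * q1 + p1 * q2 - 1) ** 2)

def c_reduced(p1, q1, p2, q2):
    """Reduced form, cf. our reconstruction of Eq. (35)."""
    return (5 - 3 * p1 - p2 - q1 + 2 * p1 * q1 - 3 * p2 * q1 + 2 * p1 * p2 * q1
            - 3 * q2 + p1 * q2 + 2 * p2 * q2 + 2 * p2 * q1 * q2)

for bits in itertools.product([0, 1], repeat=4):
    p1, q1, p2, q2 = bits
    if c_reduced(p1, q1, p2, q2) == 0:
        p = 8 + 4 * p2 + 2 * p1 + 1
        q = 8 + 4 * q2 + 2 * q1 + 1
        # The minimizers of (35) must also solve (34) and factor 143.
        assert c_full(p1, q1, p2, q2) == 0 and p * q == 143
        print(f"(p1,p2,q1,q2)=({p1},{p2},{q1},{q2})  ->  p={p}, q={q}")
```

The enumeration prints exactly the two assignments in (37)-(38), matching the degenerate ground states |0110⟩ and |1001⟩ of (36).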
5 Conclusion

We considered a noisy random coordinate descent method and analyzed its potential advantage over noisy gradient descent, which evaluates all partial derivatives at each step, in the context of variational quantum optimization. Most previous works on randomized coordinate descent algorithms studied the case of convex cost functions, which does not fit most variational quantum applications, whose cost functions are non-convex. In this work, we generalized the conventional convergence analysis of randomized coordinate descent to a local convergence analysis under a local-PL condition, which captures a large class of non-convex optimization problems. In particular, we proved that noisy randomized coordinate descent can converge faster than noisy gradient descent in terms of the total cost, measured by the total number of partial derivative estimations.

In addition, we conducted extensive numerical experiments implementing both methods for many interesting quantum optimization problems. We observed that noisy randomized coordinate descent typically demands a smaller measurement cost than noisy gradient descent, thereby demonstrating its efficiency in many non-convex quantum applications.

From an optimization standpoint, variational quantum optimization as outlined in Problem 1 raises many interesting questions. For example, can second-order or zeroth-order optimization methods (the latter using only function evaluations) be more efficient than the current gradient-based algorithms? From a technical viewpoint, another question is whether the stability result, Lemma 5, can be generalized so that the event covers the case in which the iteration exits at some time point but remains in the entire basin f^{−1}([0, f̄)) until then, not necessarily in the region above the set of global minima, f^{−1}([Laσ²d/μ, f̄)). If this can be shown, it would provide a stronger result, namely the stability of noisy GD and RCD within the entire basin, analogous to stability results for Markov chains.

Acknowledgements. TK and XL's research is supported by the National Science Foundation Grants DMS-2111221 and CCF-2312456. TK is supported by a KIAS Individual Grant CG096001 at Korea Institute for Advanced Study. This material is based upon work supported by the U.S. Department of Energy, Office of Science, National Quantum Information Science Research Centers, Quantum Systems Accelerator (ZD). Additional funding is provided by the SciAI Center funded by the Office of Naval Research (ONR), under Grant Number N00014-23-1-2729 (LL). LL is a Simons investigator in Mathematics.

A Stochastic stability of noisy GD

In this section, we prove Lemma 5.

Proof of Lemma 5. Define the filtration F_n = σ(θ_k | k ≤ n) and the stopping time

τ = inf{k : f(θ_k) ≤ Laσ²d/μ},

which is the smallest time point at which noisy GD achieves f(θ_k) ≤ Laσ²d/μ. (It is straightforward to see that {τ ≤ n} ∈ F_n and {τ > n} ∈ F_n.) Define the indicator function

I_n = 1 if {θ_k}_{k=1}^{n−1} ⊂ f^{−1}([0, f̄)), and I_n = 0 otherwise,  (40)

and the stochastic process

V_n = f(θ_n) I_n for n < τ,  V_n = f(θ_τ) I_τ for n ≥ τ.

According to the definition of V_n, there are two complementary and exclusive events (cases):

Case 1: There exists 0 < n < τ such that (i) θ_n ∉ N, and (ii) for every m < n, θ_m ∈ N and f(θ_m) > Laσ²d/μ. Then V_n ≥ f̄, and hence sup_n V_n ≥ f̄.

Case 2: For every n < τ, θ_n ∈ N.

We observe that Case 2 is the stable situation, indicating that θ_n remains in the basin of the global minimum until it achieves a small loss. (We emphasize that Case 2 also includes the situation where θ_n remains in the basin and never achieves the small loss.) To prove (22), it therefore suffices to show that

P(Ω_1) ≤ f(θ_1)/f̄,  (41)

where Ω_1 denotes the event associated with Case 1.

Now, we show that V_n is a supermartingale in order to bound sup_n V_n. Taking the conditional expectation, we obtain

E(V_{n+1} | F_n) = E(V_{n+1} | F_n, I_n = 1, τ ≤ n) P(τ ≤ n) + E(V_{n+1} | F_n, I_n = 1, τ > n) P(τ > n),

where we use I_{n+1} ≤ I_n. There are two terms in the above equation.

For the first term, when τ ≤ n, we obtain V_{n+1} = V_τ = V_n. This implies

E(V_{n+1} | F_n, I_n = 1, τ ≤ n) = V_n.  (42)

For the second term, when τ > n, we have f(θ_n) > Laσ²d/μ. Then, taking the conditional expectation yields

E[V_{n+1} | F_n, I_n = 1, τ > n] = E[f(θ_{n+1}) I_{n+1} | F_n, I_n = 1, τ > n]
 ≤ f(θ_n) − a‖∇f(θ_n)‖² + (La²/2)(‖∇f(θ_n)‖² + σ²d)
 ≤ (1 − aμ) f(θ_n) + La²σ²d/2
 < (1 − aμ) f(θ_n) + (aμ/2) f(θ_n)
 ≤ (1 − aμ/2) f(θ_n) I_n = (1 − aμ/2) V_n,  (43)
where we use Assumption 3 and a < 1/L in the second inequality, and τ > n in the third inequality. Combining (42) and (43), we obtain

E(V_{n+1} | F_n) = V_n P(τ ≤ n) + (1 − aμ/2) V_n P(τ > n) ≤ V_n.  (44)

Thus, V_n is a supermartingale. Now, we consider the Case 1 event:

Ω_1 = {∃ n > 1 : θ_n ∉ N and f(θ_m) > Laσ²d/μ with θ_m ∈ N for 1 ≤ m < n} ⊂ {sup_n V_n ≥ f̄}.

Because V_n is a supermartingale, Case 1 happens with small probability:

P(Ω_1) ≤ V_1/f̄ = f(θ_1)/f̄.

This concludes the proof.

B Stochastic stability of noisy RCD

In this section, we prove Lemma 6 with a slight modification of the proof in Appendix A. From a theoretical viewpoint, the difference between the noisy GD and RCD methods lies in the construction of the gradient estimate (see (5) and (8)). Compared to GD, the additional randomness of RCD comes from the random selection of a component, as in (8). This difference affects mainly the recursive inequality (43) in the previous proof, where we used the properties of the gradient estimator. From this observation, it suffices to derive a recursive inequality similar to (43) in order to prove Lemma 6.

Note that the sampling of a component within RCD is performed before estimating a partial derivative. Thus, the first step is to take the expectation over the partial derivative estimate,

E_{i_n}[f(θ_{n+1})] ≤ f(θ_n) − a E_{i_n}[∂_{i_n} f(θ_n) g_{i_n}(θ_n)] + (a²/2) E_{i_n}[L_{i_n} |g_{i_n}(θ_n)|²],  (45)

where i_n is the uniformly sampled index and g_{i_n} is the corresponding unbiased estimate of the partial derivative. Let F_n, τ, I_n, and V_n be as defined in the previous section. Using the inequality (45) in the conditional expectation step (43) and taking the expectation with respect to the random index i_n, we obtain

E[V_{n+1} | F_n, I_n = 1, τ > n] = E[f(θ_{n+1}) I_{n+1} | F_n, I_n = 1, τ > n]
 ≤ (f(θ_n) − (a/d)‖∇f(θ_n)‖² + (L_max a²/(2d))‖∇f(θ_n)‖² + L_avg σ²a²/2) I_{n+1}
 ≤ ((1 − aμ/d) f(θ_n) + L_avg a²σ²/2) I_{n+1}
 < ((1 − aμ/d) f(θ_n) + (aμ/(2d)) f(θ_n)) I_{n+1}
 = (1 − aμ/(2d)) f(θ_n) I_n = (1 − aμ/(2d)) V_n,  (46)

provided that f(θ_n) > L_avg aσ²d/μ and a_n = a < min{1/L_max, d/μ, 2f̄μ/(L_avg σ²d)}. Similar to (44), in the case of RCD, (46) implies

E(V_{n+1} | F_n) = V_n P(τ ≤ n) + (1 − aμ/(2d)) V_n P(τ > n) ≤ V_n,  (47)

which implies that V_n forms a supermartingale. The remaining proof of Lemma 6 follows the same steps as the proof of Lemma 5, so we do not repeat it here.

C The proofs of Theorem 7 and Theorem 8

We first show the convergence rate of the noisy GD method, followed by a similar analysis for the noisy RCD method. The following proofs are similar to those in Appendix A and Appendix B, with minor differences.

Proof of Theorem 7. Define the filtration F_n = σ(θ_k | k ≤ n) and the stopping time

τ = inf{k : f(θ_k) ≤ ε},

which is the smallest time point at which noisy GD achieves f(θ_k) ≤ ε. Our ultimate goal is to show that the event inf_{1≤n≤N} f(θ_n) ≤ ε occurs with high probability, say at least 1 − δ. Thus, we aim to show that for any δ ∈ (f(θ_1)/f̄, 1), there exists a sufficiently large N such that

p_fail := P(τ > N) ≤ δ.  (48)

Define the indicator function

I_n = 1 if {θ_k}_{k=1}^{n−1} ⊂ f^{−1}([0, f̄)), and I_n = 0 otherwise,

and the stochastic process

V_n = f(θ_n) I_n for n < τ,  V_n = f(θ_τ) I_τ for n ≥ τ.

Define the unstable event:

Ω = {∃ n > 1 : θ_n ∉ N and f(θ_m) > ε for 1 ≤ m < n} ⊂ {sup_n V_n ≥ f̄}.

According to Lemma 5 and the proof in Appendix A, for a learning rate a with Laσ²d/μ < ε, the event Ω happens with small probability:

P(Ω) ≤ V_1/f̄ = f(θ_1)/f̄.  (49)

Recalling (48), we note that, for any n ≤ N, P(τ > n) ≥ p_fail. Plugging this into (44), we obtain

E(V_{n+1} | F_n) = (1 · P(τ ≤ n) + (1 − aμ/2) P(τ > n)) V_n = (1 − (aμ/2) P(τ > n)) V_n ≤ (1 − aμ p_fail/2) V_n.  (50)

By taking the total expectation on both sides and telescoping, we achieve

E(V_{n+1}) ≤ (1 − aμ p_fail/2)^n V_1 = (1 − aμ p_fail/2)^n f(θ_1).  (51)

This means that if the probability of failure, p_fail, is large, the expectation of V_{n+1} decreases quickly.
By Markov's inequality, we have

P(V_N > ε) ≤ (1 − aμ p_fail/2)^{N−1} f(θ_1)/ε,

or equivalently,

P(V_N ≤ ε) ≥ 1 − (1 − aμ p_fail/2)^{N−1} f(θ_1)/ε.

Now, the event {V_N ≤ ε} is the union of the following two events (not necessarily exclusive or complementary), which differ slightly from the ones in Appendix A:

Ω_1: There exists n ≤ N such that f(θ_n) ≤ ε and θ_n ∈ N. This implies inf_{1≤n≤N} f(θ_n) ≤ ε. We want to show that Ω_1 happens with high probability.

Ω_2: There exists n < N such that f(θ_n) ≥ f̄ and f(θ_m) > ε for every m < n.

We note that when Ω_2 happens, we have V_{n+1} = 0 with f(θ_n) ≥ f̄, which implies Ω_2 ⊂ Ω. According to (49), we obtain

P(Ω_2) ≤ P(Ω) ≤ f(θ_1)/f̄.

Now, we give a lower bound for the event Ω_1:

P(inf_{1≤n≤N} f(θ_n) ≤ ε) ≥ P(Ω_1) ≥ P(V_N ≤ ε) − P(Ω_2) ≥ 1 − (1 − aμ p_fail/2)^N f(θ_1)/ε − f(θ_1)/f̄.  (52)

Notice that

P(inf_{1≤n≤N} f(θ_n) ≤ ε) ≤ P(τ ≤ N) = 1 − p_fail.

Combining the above two inequalities, we have

p_fail ≤ (1 − aμ p_fail/2)^N f(θ_1)/ε + f(θ_1)/f̄.  (53)

Next, we show (48) by contradiction. Assume that the conclusion of the theorem is not true, meaning that for some δ ∈ (f(θ_1)/f̄, 1) and every N, p_fail > δ. When p_fail > δ and

N = (2/(aμδ)) log( f(θ_1) / (ε(δ − f(θ_1)/f̄)) ),

then

(1 − aμ p_fail/2)^N f(θ_1)/ε + f(θ_1)/f̄ < δ,

which contradicts (53) together with the assumption p_fail > δ.