# Outrageously Large Neural Networks: The Sparsely-Gated Mixture-Of-Experts Layer

Noam Shazeer¹, Azalia Mirhoseini*†¹, Krzysztof Maziarz*², Andy Davis¹, Quoc Le¹, Geoffrey Hinton¹ and Jeff Dean¹
¹Google Brain, {noam,azalia,andydavis,qvl,geoffhinton,jeff}@google.com
²Jagiellonian University, Cracow, [email protected]

## Abstract

The capacity of a neural network to absorb information is limited by its number of parameters. Conditional computation, where parts of the network are active on a per-example basis, has been proposed in theory as a way of dramatically increasing model capacity without a proportional increase in computation. In practice, however, there are significant algorithmic and performance challenges. In this work, we address these challenges and finally realize the promise of conditional computation, achieving greater than 1000x improvements in model capacity with only minor losses in computational efficiency on modern GPU clusters. We introduce a Sparsely-Gated Mixture-of-Experts layer (MoE), consisting of up to thousands of feed-forward sub-networks. A trainable gating network determines a sparse combination of these experts to use for each example. We apply the MoE to the tasks of language modeling and machine translation, where model capacity is critical for absorbing the vast quantities of knowledge available in the training corpora. We present model architectures in which a MoE with up to 137 billion parameters is applied convolutionally between stacked LSTM layers. On large language modeling and machine translation benchmarks, these models achieve significantly better results than state-of-the-art at lower computational cost.

## 1 Introduction and Related Work

## 1.1 Conditional Computation

Exploiting scale in both training data and model size has been central to the success of deep learning. When datasets are sufficiently large, increasing the capacity (number of parameters) of neural networks can give much better prediction accuracy. This has been shown in domains such as text (Sutskever et al., 2014; Bahdanau et al., 2014; Jozefowicz et al., 2016; Wu et al., 2016), images (Krizhevsky et al., 2012; Le et al., 2012), and audio (Hinton et al., 2012; Amodei et al., 2015). For typical deep learning models, where the entire model is activated for every example, this leads to a roughly quadratic blow-up in training costs, as both the model size and the number of training examples increase. Unfortunately, the advances in computing power and distributed computation fall short of meeting such demand.

Various forms of conditional computation have been proposed as a way to increase model capacity without a proportional increase in computational costs (Davis & Arel, 2013; Bengio et al., 2013; Eigen et al., 2013; Ludovic Denoyer, 2014; Cho & Bengio, 2014; Bengio et al., 2015; Almahairi et al., 2015). In these schemes, large parts of a network are active or inactive on a per-example basis. The gating decisions may be binary or sparse and continuous, stochastic or deterministic. Various forms of reinforcement learning and back-propagation have been proposed for training the gating decisions.

![Figure 1: A Mixture-of-Experts (MoE) layer applied between stacked LSTM layers; the sparse gating network selects a small number of experts for each input.](1_image_0.png)

While these ideas are promising in theory, no work to date has yet demonstrated massive improvements in model capacity, training time, or model quality. We blame this on a combination of the following challenges:

- Modern computing devices, especially GPUs, are much faster at arithmetic than at branching. Most of the works above recognize this and propose turning on/off large chunks of the network with each gating decision.

- Large batch sizes are critical for performance, as they amortize the costs of parameter transfers and updates. Conditional computation reduces the batch sizes for the conditionally active chunks of the network.

- Network bandwidth can be a bottleneck. A cluster of GPUs may have computational power thousands of times greater than the aggregate inter-device network bandwidth. To be computationally efficient, the relative computational versus network demands of an algorithm must exceed this ratio. Embedding layers, which can be seen as a form of conditional computation, are handicapped by this very problem. Since the embeddings generally need to be sent across the network, the number of (example, parameter) interactions is limited by network bandwidth instead of computational capacity.

- Depending on the scheme, loss terms may be necessary to achieve the desired level of sparsity per-chunk and/or per-example. Bengio et al. (2015) use three such terms. These issues can affect both model quality and load-balancing.

- Model capacity is most critical for very large data sets. The existing literature on conditional computation deals with relatively small image recognition data sets consisting of up to 600,000 images. It is hard to imagine that the labels of these images provide a sufficient signal to adequately train a model with millions, let alone billions, of parameters.

In this work, we address all of the above challenges for the first time and finally realize the promise of conditional computation. We obtain greater than 1000x improvements in model capacity with only minor losses in computational efficiency, and significantly advance the state-of-the-art results on public language modeling and translation data sets.

## 1.2 Our Approach: The Sparsely-Gated Mixture-of-Experts Layer

Our approach to conditional computation is to introduce a new type of general purpose neural network component: a Sparsely-Gated Mixture-of-Experts Layer (MoE). The MoE consists of a number of experts, each a simple feed-forward neural network, and a trainable gating network which selects a sparse combination of the experts to process each input (see Figure 1). All parts of the network are trained jointly by back-propagation.

While the introduced technique is generic, in this paper we focus on language modeling and machine translation tasks, which are known to benefit from very large models. In particular, we apply a MoE convolutionally between stacked LSTM layers (Hochreiter & Schmidhuber, 1997), as in Figure 1. The MoE is called once for each position in the text, selecting a potentially different combination of experts at each position. The different experts tend to become highly specialized based on syntax and semantics (see Appendix E, Table 9). On both language modeling and machine translation benchmarks, we improve on the best published results at a fraction of the computational cost.

## 1.3 Related Work on Mixtures of Experts

Since its introduction more than two decades ago (Jacobs et al., 1991; Jordan & Jacobs, 1994), the mixture-of-experts approach has been the subject of much research. Different types of expert architectures have been proposed, such as SVMs (Collobert et al., 2002), Gaussian Processes (Tresp, 2001; Theis & Bethge, 2015; Deisenroth & Ng, 2015), Dirichlet Processes (Shahbaba & Neal, 2009), and deep networks. Other work has focused on different expert configurations, such as a hierarchical structure (Yao et al., 2009), infinite numbers of experts (Rasmussen & Ghahramani, 2002), and adding experts sequentially (Aljundi et al., 2016). Garmash & Monz (2016) suggest an ensemble model in the format of a mixture of experts for machine translation, where the gating network is trained on a pre-trained ensemble NMT model. The works above concern top-level mixtures of experts, in which the mixture of experts is the whole model. Eigen et al. (2013) introduce the idea of using multiple MoEs with their own gating networks as parts of a deep model. It is intuitive that the latter approach is more powerful, since complex problems may contain many sub-problems, each requiring different experts. They also allude in their conclusion to the potential to introduce sparsity, turning MoEs into a vehicle for conditional computation.

Our work builds on this use of MoEs as a general purpose neural network component. While Eigen et al. (2013) use two stacked MoEs allowing for two sets of gating decisions, our convolutional application of the MoE allows for different gating decisions at each position in the text. We also realize sparse gating and demonstrate its use as a practical way to massively increase model capacity.

## 2 The Structure of the Mixture-of-Experts Layer

The Mixture-of-Experts (MoE) layer consists of a set of $n$ "expert networks" $E_1, \cdots, E_n$, and a "gating network" $G$ whose output is a sparse $n$-dimensional vector. Figure 1 shows an overview of the MoE module. The experts are themselves neural networks, each with their own parameters. Although in principle we only require that the experts accept the same-sized inputs and produce the same-sized outputs, in our initial investigations in this paper, we restrict ourselves to the case where the models are feed-forward networks with identical architectures, but with separate parameters.

Let us denote by $G(x)$ and $E_i(x)$ the output of the gating network and the output of the $i$-th expert network for a given input $x$. The output $y$ of the MoE module can be written as follows:

$$y=\sum_{i=1}^{n}G(x)_{i}E_{i}(x)\tag{1}$$

We save computation based on the sparsity of the output of $G(x)$. Wherever $G(x)_i = 0$, we need not compute $E_i(x)$. In our experiments, we have up to thousands of experts, but only need to evaluate a handful of them for every example. If the number of experts is very large, we can reduce the branching factor by using a two-level hierarchical MoE. In a hierarchical MoE, a primary gating network chooses a sparse weighted combination of "experts", each of which is itself a secondary mixture-of-experts with its own gating network. In the following we focus on ordinary MoEs. We provide more details on hierarchical MoEs in Appendix B.

Our implementation is related to other models of conditional computation. A MoE whose experts are simple weight matrices is similar to the parameterized weight matrix proposed in (Cho & Bengio, 2014). A MoE whose experts have one hidden layer is similar to the block-wise dropout described in (Bengio et al., 2015), where the dropped-out layer is sandwiched between fully-activated layers.
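
To make the sparse combination in Equation 1 concrete, here is a minimal NumPy sketch of an MoE forward pass that skips experts whose gate value is zero. The dimensions, the number of experts, and the toy one-hidden-layer expert definition are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hidden, d_out, n_experts = 8, 16, 8, 4

# Toy experts: one-hidden-layer ReLU networks (illustrative sizes only).
experts = [
    {"w1": rng.standard_normal((d_in, d_hidden)) * 0.1,
     "w2": rng.standard_normal((d_hidden, d_out)) * 0.1}
    for _ in range(n_experts)
]

def expert_forward(p, x):
    return np.maximum(x @ p["w1"], 0.0) @ p["w2"]

def moe_forward(x, gates):
    """y = sum_i G(x)_i * E_i(x), skipping experts with zero gate (Eq. 1)."""
    y = np.zeros(d_out)
    for i, g in enumerate(gates):
        if g == 0.0:          # sparsity: unselected experts are never evaluated
            continue
        y += g * expert_forward(experts[i], x)
    return y

x = rng.standard_normal(d_in)
gates = np.array([0.0, 0.7, 0.0, 0.3])  # a sparse gate vector, e.g. from noisy top-k gating
print(moe_forward(x, gates).shape)      # (8,)
```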

## 2.1 Gating Network

Softmax Gating: A simple choice of non-sparse gating function (Jordan & Jacobs, 1994) is to multiply the input by a trainable weight matrix $W_g$ and then apply the $Softmax$ function.

$$G_{\sigma}(x)=Softmax(x\cdot W_{g})\tag{2}$$

Noisy Top-K Gating: We add two components to the Softmax gating network: sparsity and noise. Before taking the softmax function, we add tunable Gaussian noise, then keep only the top $k$ values, setting the rest to $-\infty$ (which causes the corresponding gate values to equal 0). The sparsity serves to save computation, as described above. While this form of sparsity creates some theoretically scary discontinuities in the output of the gating function, we have not yet observed this to be a problem in practice. The noise term helps with load balancing, as will be discussed in Appendix A. The amount of noise per component is controlled by a second trainable weight matrix $W_{noise}$.

$$G(x)=Softmax(KeepTopK(H(x),k))\tag{3}$$

$$H(x)_{i}=(x\cdot W_{g})_{i}+StandardNormal()\cdot Softplus((x\cdot W_{noise})_{i})\tag{4}$$

$$KeepTopK(v,k)_{i}=\begin{cases}v_{i}&\text{if }v_{i}\text{ is in the top }k\text{ elements of }v,\\ -\infty&\text{otherwise.}\end{cases}\tag{5}$$

Training the Gating Network: We train the gating network by simple back-propagation, along with the rest of the model. If we choose $k > 1$, the gate values for the top $k$ experts have nonzero derivatives with respect to the weights of the gating network. This type of occasionally-sensitive behavior is described in (Bengio et al., 2013) with respect to noisy rectifiers. Gradients also back-propagate through the gating network to its inputs. Our method differs here from (Bengio et al., 2015), who use boolean gates and a REINFORCE-style approach to train the gating network.
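
The following NumPy sketch shows how Equations 3 through 5 fit together for a single input; it is a minimal illustration, with `k`, the dimensions, and the weight initialization chosen arbitrarily rather than taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, n_experts, k = 8, 4, 2
W_g = rng.standard_normal((d_in, n_experts)) * 0.1
W_noise = rng.standard_normal((d_in, n_experts)) * 0.1

def softplus(z):
    return np.log1p(np.exp(z))

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

def noisy_top_k_gate(x, train=True):
    clean = x @ W_g                                   # (x . W_g)_i
    noise_std = softplus(x @ W_noise)                 # per-component noise scale
    h = clean + (rng.standard_normal(n_experts) * noise_std if train else 0.0)  # Eq. 4
    topk = np.argsort(h)[-k:]                         # indices of the top k logits
    masked = np.full(n_experts, -np.inf)
    masked[topk] = h[topk]                            # KeepTopK, Eq. 5
    return softmax(masked)                            # Eq. 3: exactly k nonzero gates

x = rng.standard_normal(d_in)
print(noisy_top_k_gate(x))   # k nonzero entries that sum to 1
```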

## 3 Addressing Performance Challenges

## 3.1 The Shrinking Batch Problem

On modern CPUs and GPUs, large batch sizes are necessary for computational efficiency, so as to amortize the overhead of parameter loads and updates. If the gating network chooses $k$ out of $n$ experts for each example, then for a batch of $b$ examples, each expert receives a much smaller batch of approximately $\frac{kb}{n}\ll b$ examples. This causes a naive MoE implementation to become very inefficient as the number of experts increases. The solution to this shrinking batch problem is to make the original batch size as large as possible. However, batch size tends to be limited by the memory necessary to store activations between the forwards and backwards passes. We propose the following techniques for increasing the batch size:

Mixing Data Parallelism and Model Parallelism: In a conventional distributed training setting, multiple copies of the model on different devices asynchronously process distinct batches of data, and parameters are synchronized through a set of parameter servers. In our technique, these different batches run synchronously so that they can be combined for the MoE layer. We distribute the standard layers of the model and the gating network according to conventional data-parallel schemes, but keep only one shared copy of each expert. Each expert in the MoE layer receives a combined batch consisting of the relevant examples from all of the data-parallel input batches. The same set of devices function as data-parallel replicas (for the standard layers and the gating networks) and as model-parallel shards (each hosting a subset of the experts). If the model is distributed over $d$ devices, and each device processes a batch of size $b$, each expert receives a batch of approximately $\frac{kbd}{n}$ examples. Thus, we achieve a factor of $d$ improvement in expert batch size.
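
As a quick sanity check on the batch-size arithmetic above, the snippet below plugs illustrative values (not the paper's actual cluster configuration) into the $kb/n$ and $kbd/n$ expressions.

```python
# Illustrative values only: b, n, k, d are assumptions for the sake of the arithmetic.
b = 1024      # examples per device per step
n = 256       # number of experts
k = 4         # experts chosen per example
d = 16        # number of devices

naive_per_expert = k * b / n          # single-device MoE: kb/n = 16 examples
combined_per_expert = k * b * d / n   # data + model parallelism: kbd/n = 256 examples

print(naive_per_expert, combined_per_expert)  # 16.0 256.0 -> a factor of d improvement
```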

In the case of a hierarchical MoE (Section B), the primary gating network employs data parallelism, and the secondary MoEs employ model parallelism. Each secondary MoE resides on one device.

This technique allows us to increase the number of experts (and hence the number of parameters) by proportionally increasing the number of devices in the training cluster. The total batch size increases, keeping the batch size per expert constant. The memory and bandwidth requirements per device also remain constant, as do the step times, as does the amount of time necessary to process a number of training examples equal to the number of parameters in the model. It is our goal to train a trillion-parameter model on a trillion-word corpus. We have not scaled our systems this far as of the writing of this paper, but it should be possible by adding more hardware.

Taking Advantage of Convolutionality: In our language models, we apply the same MoE to each time step of the previous layer. If we wait for the previous layer to finish, we can apply the MoE to all the time steps together as one big batch. Doing so increases the size of the input batch to the MoE layer by a factor of the number of unrolled time steps.

Increasing Batch Size for a Recurrent MoE: We suspect that even more powerful models may involve applying a MoE recurrently. For example, the weight matrices of an LSTM or other RNN could be replaced by a MoE. Sadly, such models break the convolutional trick from the last paragraph, since the input to the MoE at one timestep depends on the output of the MoE at the previous timestep. Gruslys et al. (2016) describe a technique for drastically reducing the number of stored activations in an unrolled RNN, at the cost of recomputing forward activations. This would allow for a large increase in batch size.

## 3.2 Network Bandwidth

Another major performance concern in distributed computing is network bandwidth. Since the experts are stationary (see above) and the number of gating parameters is small, most of the communication involves sending the inputs and outputs of the experts across the network. To maintain computational efficiency, the ratio of an expert's computation to the size of its input and output must exceed the ratio of computational to network capacity of the computing device. For GPUs, this may be thousands to one. In our experiments, we use experts with one hidden layer containing thousands of ReLU-activated units. Since the weight matrices in the expert have sizes input_size × hidden_size and hidden_size × output_size, the ratio of computation to input and output is equal to the size of the hidden layer. Conveniently, we can increase computational efficiency simply by using a larger hidden layer, or more hidden layers.
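
A back-of-the-envelope check of that ratio, using illustrative layer sizes rather than any configuration reported in the paper:

```python
# Illustrative sizes; the point is that the compute-to-I/O ratio equals hidden_size.
input_size, hidden_size, output_size = 512, 4096, 512

madds = input_size * hidden_size + hidden_size * output_size  # multiply-adds per example
io_values = input_size + output_size                          # activations sent over the network

print(madds / io_values)  # 4096.0 == hidden_size
```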

## 4 Balancing Expert Utilization

We have observed that the gating network tends to converge to a state where it always produces large weights for the same few experts. This imbalance is self-reinforcing, as the favored experts are trained more rapidly and thus are selected even more by the gating network. Eigen et al. (2013) describe the same phenomenon, and use a hard constraint at the beginning of training to avoid this local minimum. Bengio et al. (2015) include a soft constraint on the batch-wise average of each gate.¹

We take a soft constraint approach. We define the importance of an expert relative to a batch of training examples to be the batchwise sum of the gate values for that expert. We define an additional loss $L_{importance}$, which is added to the overall loss function for the model. This loss is equal to the square of the coefficient of variation of the set of importance values, multiplied by a hand-tuned scaling factor $w_{importance}$. This additional loss encourages all experts to have equal importance.

$$Importance(X)=\sum_{x\in X}G(x)\tag{6}$$

$$L_{importance}(X)=w_{importance}\cdot CV(Importance(X))^{2}\tag{7}$$

While this loss function can ensure equal importance, experts may still receive very different numbers of examples. For example, one expert may receive a few examples with large weights, and another may receive many examples with small weights. This can cause memory and performance problems on distributed hardware. To solve this problem, we introduce a second loss function, $L_{load}$, which ensures balanced loads. Appendix A contains the definition of this function, along with experimental results.
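
A minimal sketch of the importance loss in Equations 6 and 7, written with NumPy for clarity; the batch of gate vectors and the `w_importance` value are made-up inputs, and a real implementation would compute this on the differentiable gate outputs.

```python
import numpy as np

def cv_squared(v, eps=1e-10):
    """Squared coefficient of variation: variance / mean^2."""
    return v.var() / (v.mean() ** 2 + eps)

def importance_loss(gates, w_importance=0.1):
    """gates: [batch, n_experts] gate values G(x) for a batch X (Eq. 6-7)."""
    importance = gates.sum(axis=0)              # Importance(X): one value per expert
    return w_importance * cv_squared(importance)

# Made-up batch: expert 0 hogs most of the gate mass, so the loss is large.
gates = np.array([[0.9, 0.1, 0.0, 0.0],
                  [0.8, 0.0, 0.2, 0.0],
                  [0.7, 0.0, 0.0, 0.3]])
print(importance_loss(gates))
```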

## 5 Experiments

## 5.1 1 Billion Word Language Modeling Benchmark

Dataset: This dataset, introduced by (Chelba et al., 2013), consists of shuffled unique sentences from news articles, totaling approximately 829 million words, with a vocabulary of 793,471 words.

Previous State-of-the-Art: The best previously published results (Jozefowicz et al., 2016) use models consisting of one or more stacked Long Short-Term Memory (LSTM) layers (Hochreiter & Schmidhuber, 1997; Gers et al., 2000). The number of parameters in the LSTM layers of these models varies from 2 million to 151 million. Quality increases greatly with parameter count, as do computational costs. Results for these models form the top line of Figure 2-right.

MoE Models: Our models consist of two stacked LSTM layers with a MoE layer between them (see Figure 1). We vary the sizes of the layers and the number of experts. For full details on model architecture, training regimen, additional baselines and results, see Appendix C.

Low Computation, Varied Capacity: To investigate the effects of adding capacity, we trained a series of MoE models all with roughly equal computational costs: about 8 million multiply-and-adds per training example per timestep in the forwards pass, excluding the softmax layer. We call this metric (ops/timestep). We trained models with flat MoEs containing 4, 32, and 256 experts, and models with hierarchical MoEs containing 256, 1024, and 4096 experts. Each expert had about 1 million parameters. For all the MoE layers, 4 experts were active per input.

The results of these models are shown in Figure 2-left. The model with 4 always-active experts performed (unsurprisingly) similarly to the computationally-matched baseline models, while the largest of the models (4096 experts) achieved an impressive 24% lower perplexity on the test set.

![Figure 2](5_image_0.png)

Figure 2: Model comparison on 1-Billion-Word Language-Modeling Benchmark. On the left, we plot test perplexity as a function of model capacity for models with similar computational budgets of approximately 8-million-ops-per-timestep. On the right, we plot test perplexity as a function of computational budget. The top line represents the LSTM models from (Jozefowicz et al., 2016). The bottom line represents 4-billion parameter MoE models with different computational budgets.

Varied Computation, High Capacity: In addition to the largest model from the previous section, we trained two more MoE models with similarly high capacity (4 billion parameters), but higher computation budgets. These models had larger LSTMs, and fewer but larger experts. Details can be found in Appendix C.2. Results of these three models form the bottom line of Figure 2-right. Table 1 compares the results of these models to the best previously published result on this dataset. Even the fastest of these models beats the best published result (when controlling for the number of training epochs), despite requiring only 6% of the computation.

| Model | Test Perplexity (10 epochs) | Test Perplexity (100 epochs) | #Parameters excluding embedding and softmax layers | ops/timestep | Training Time (10 epochs) | TFLOPS/GPU |
|---|---|---|---|---|---|---|
| Best Published Results | 34.7 | 30.6 | 151 million | 151 million | 59 hours, 32 k40s | 1.09 |
| Low-Budget MoE Model | 34.1 | | 4303 million | 8.9 million | 15 hours, 16 k40s | 0.74 |
| Medium-Budget MoE Model | 31.3 | | 4313 million | 33.8 million | 17 hours, 32 k40s | 1.22 |
| High-Budget MoE Model | 28.0 | | 4371 million | 142.7 million | 47 hours, 32 k40s | 1.56 |

Table 1: Summary of high-capacity MoE-augmented models with varying computational budgets, vs. best previously published results (Jozefowicz et al., 2016). Details in Appendix C.

Computational Efficiency: We trained our models using TensorFlow (Abadi et al., 2016) on clusters containing 16-32 Tesla K40 GPUs. For each of our models, we determine computational efficiency in TFLOPS/GPU by dividing the number of floating point operations required to process one training batch by the observed step time and the number of GPUs in the cluster. The operation counts used here are higher than the ones we report in our ops/timestep numbers in that we include the backwards pass, we include the importance-sampling-based training of the softmax layer, and we count a multiply-and-add as two separate operations. For all of our MoE models, the floating point operations involved in the experts represent between 37% and 46% of the total.

For our baseline models with no MoE, observed computational efficiency ranged from 1.07-1.29 TFLOPS/GPU. For our low-computation MoE models, computational efficiency ranged from 0.74-0.90 TFLOPS/GPU, except for the 4-expert model, which did not make full use of the available parallelism. Our highest-computation MoE model was more efficient at 1.56 TFLOPS/GPU, likely due to the larger matrices. These numbers represent a significant fraction of the theoretical maximum of 4.29 TFLOPS/GPU claimed by NVIDIA. Detailed results are in Appendix C, Table 7.
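
For concreteness, the efficiency metric described above reduces to a one-line calculation; the numbers below are placeholders, not measurements from the paper.

```python
# Placeholder numbers, not the paper's measurements.
flops_per_batch = 3.2e12   # forward + backward + softmax training, mult-add counted as 2 ops
step_time_s = 0.8          # observed wall-clock time per training step
num_gpus = 32

tflops_per_gpu = flops_per_batch / (step_time_s * num_gpus) / 1e12
print(round(tflops_per_gpu, 2))  # 0.12 with these placeholder numbers
```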

## 5.2 100 Billion Word Google News Corpus

![Figure 3: Language modeling results on the 100 billion word Google News corpus (test perplexity vs. capacity after 10 billion and 100 billion training words).](6_image_0.png)

On the 1-billion-word corpus, adding additional capacity seems to produce diminishing returns as the number of parameters in the MoE layer exceeds 1 billion, as can be seen in Figure 2-left. We hypothesized that for a larger training set, even higher capacities would produce significant quality improvements. We constructed a similar training set consisting of shuffled unique sentences from Google's internal news corpus, totalling roughly 100 billion words. Similarly to the previous section, we tested a series of models with similar computational costs of about 8 million ops/timestep. In addition to a baseline LSTM model, we trained models augmented with MoE layers containing 32, 256, 1024, 4096, 16384, 65536, and 131072 experts. This corresponds to up to 137 billion parameters in the MoE layer. Details on architecture, training, and results are given in Appendix D.

Results: Figure 3 shows test perplexity as a function of capacity after training on 10 billion words (top line) and 100 billion words (bottom line). When training over the full 100 billion words, test perplexity improves significantly up to 65536 experts (68 billion parameters), dropping 39% lower than the computationally matched baseline, but degrades at 131072 experts, possibly a result of too much sparsity. The widening gap between the two lines demonstrates (unsurprisingly) that increased model capacity helps more on larger training sets. Even at 65536 experts (99.994% layer sparsity), computational efficiency for the model stays at a respectable 0.72 TFLOPS/GPU.

## 5.3 Machine Translation (Single Language Pair)

Model Architecture: Our model was a modified version of the GNMT model described in (Wu et al., 2016). To reduce computation, we decreased the number of LSTM layers in the encoder and decoder from 9 and 8 to 3 and 2 respectively. We inserted MoE layers in both the encoder (between layers 2 and 3) and the decoder (between layers 1 and 2). Each MoE layer contained up to 2048 experts each with about two million parameters, adding a total of about 8 billion parameters to the models. Further details on model architecture, testing procedure and results can be found in Appendix E.

Datasets: We benchmarked our method on the WMT'14 En→Fr and En→De corpora, whose training sets have 36M sentence pairs and 5M sentence pairs, respectively. The experimental protocols were also similar to those in (Wu et al., 2016): newstest2014 was used as the test set to compare against previous work (Luong et al., 2015a; Zhou et al., 2016; Wu et al., 2016), while the combination of newstest2012 and newstest2013 was used as the development set. We also tested the same model on Google's production English-to-French data.

Table 2: Results on WMT'14 En→Fr newstest2014 (bold values represent best results).

| Model | Test Perplexity | Test BLEU | ops/timestep | Total #Parameters | Training Time |
|---|---|---|---|---|---|
| MoE with 2048 Experts | 2.69 | 40.35 | 85M | 8.7B | 3 days/64 k40s |
| MoE with 2048 Experts (longer training) | **2.63** | **40.56** | 85M | 8.7B | 6 days/64 k40s |
| GNMT (Wu et al., 2016) | 2.79 | 39.22 | 214M | 278M | 6 days/96 k80s |
| GNMT+RL (Wu et al., 2016) | 2.96 | 39.92 | 214M | 278M | 6 days/96 k80s |
| PBMT (Durrani et al., 2014) | | 37.0 | | | |
| LSTM (6-layer) (Luong et al., 2015b) | | 31.5 | | | |
| LSTM (6-layer+PosUnk) (Luong et al., 2015b) | | 33.1 | | | |
| DeepAtt (Zhou et al., 2016) | | 37.7 | | | |
| DeepAtt+PosUnk (Zhou et al., 2016) | | 39.2 | | | |

Table 3: Results on WMT'14 En→De newstest2014 (bold values represent best results).

| Model | Test Perplexity | Test BLEU | ops/timestep | Total #Parameters | Training Time |
|---|---|---|---|---|---|
| MoE with 2048 Experts | **4.64** | **26.03** | 85M | 8.7B | 1 day/64 k40s |
| GNMT (Wu et al., 2016) | 5.25 | 24.91 | 214M | 278M | 1 day/96 k80s |
| GNMT+RL (Wu et al., 2016) | 8.08 | 24.66 | 214M | 278M | 1 day/96 k80s |
| PBMT (Durrani et al., 2014) | | 20.7 | | | |
| DeepAtt (Zhou et al., 2016) | | 20.6 | | | |

Table 4: Results on the Google Production En→Fr dataset (bold values represent best results).

| Model | Eval Perplexity | Eval BLEU | Test Perplexity | Test BLEU | ops/timestep | Total #Parameters | Training Time |
|---|---|---|---|---|---|---|---|
| MoE with 2048 Experts | **2.60** | **37.27** | **2.69** | **36.57** | 85M | 8.7B | 1 day/64 k40s |
| GNMT (Wu et al., 2016) | 2.78 | 35.80 | 2.87 | 35.56 | 214M | 278M | 6 days/96 k80s |

Results: Tables 2, 3, and 4 show the results of our largest models, compared with published results. Our approach achieved BLEU scores of 40.56 and 26.03 on the WMT'14 En→Fr and En→De benchmarks. As our models did not use RL refinement, these results constitute significant gains of 1.34 and 1.12 BLEU score on top of the strong baselines in (Wu et al., 2016). The perplexity scores are also better.² On the Google Production dataset, our model achieved a 1.01 higher test BLEU score even after training for only one sixth of the time.

## 5.4 Multilingual Machine Translation

Dataset: (Johnson et al., 2016) train a single GNMT (Wu et al., 2016) model on a very large combined dataset of twelve language pairs. Results are somewhat worse than those for 12 separately trained single-pair GNMT models. This is not surprising, given that the twelve models have 12 times the capacity and twelve times the aggregate training of the one model. We repeat this experiment with a single MoE-augmented model. See Appendix E for details on model architecture. We train our model on the same dataset as (Johnson et al., 2016) and process the same number of training examples (about 3 billion sentence pairs). Our training time was shorter due to the lower computational budget of our model.

Results: Results for the single-pair GNMT models, the multilingual GNMT model and the multilingual MoE model are given in Table 5. The MoE model achieves 19% lower perplexity on the dev set than the multilingual GNMT model. On BLEU score, the MoE model significantly beats the multilingual GNMT model on 11 of the 12 language pairs (by as much as 5.84 points), and even beats the monolingual GNMT models on 8 of 12 language pairs. The poor performance on English → Korean seems to be a result of severe overtraining, as for the rarer language pairs a small number of real examples were highly oversampled in the training corpus.

Table 5: Multilingual Machine Translation (bold values represent best results).

| | GNMT-Mono | GNMT-Multi | MoE-Multi | MoE-Multi vs. GNMT-Multi |
|---|---|---|---|---|
| Parameters | 278M / model | 278M | 8.7B | |
| ops/timestep | 212M | 212M | 102M | |
| training time, hardware | various | 21 days, 96 k20s | 12 days, 64 k40s | |
| Perplexity (dev) | | 4.14 | 3.35 | -19% |
| French → English Test BLEU | 36.47 | 34.40 | **37.46** | +3.06 |
| German → English Test BLEU | 31.77 | 31.17 | **34.80** | +3.63 |
| Japanese → English Test BLEU | 23.41 | 21.62 | **25.91** | +4.29 |
| Korean → English Test BLEU | 25.42 | 22.87 | **28.71** | +5.84 |
| Portuguese → English Test BLEU | 44.40 | 42.53 | **46.13** | +3.60 |
| Spanish → English Test BLEU | 38.00 | 36.04 | **39.39** | +3.35 |
| English → French Test BLEU | 35.37 | 34.00 | **36.59** | +2.59 |
| English → German Test BLEU | **26.43** | 23.15 | 24.53 | +1.38 |
| English → Japanese Test BLEU | **23.66** | 21.10 | 22.78 | +1.68 |
| English → Korean Test BLEU | **19.75** | 18.41 | 16.62 | -1.79 |
| English → Portuguese Test BLEU | **38.40** | 37.35 | 37.90 | +0.55 |
| English → Spanish Test BLEU | 34.50 | 34.25 | **36.21** | +1.96 |

## 6 Conclusion

This work is the first to demonstrate major wins from conditional computation in deep networks. We carefully identified the design considerations and challenges of conditional computing and addressed them with a combination of algorithmic and engineering solutions. While we focused on text, conditional computation may help in other domains as well, provided sufficiently large training sets. We look forward to seeing many novel implementations and applications of conditional computation in the years to come.

## Acknowledgments

We would like to thank all of the members of the Google Brain and Google Translate teams who helped us with this project, in particular Zhifeng Chen, Yonghui Wu, and Melvin Johnson. Thanks also to our anonymous ICLR reviewers for the helpful suggestions on making this paper better.

² Reported perplexities are relative to the tokenization used by both our models and GNMT.

## References

Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Gregory S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian J. Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Józefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mané, Rajat Monga, Sherry Moore, Derek Gordon Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul A. Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda B. Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learning on heterogeneous distributed systems. *CoRR*, abs/1603.04467, 2016. URL http://arxiv.org/abs/1603.04467.

Rahaf Aljundi, Punarjay Chakravarty, and Tinne Tuytelaars. Expert gate: Lifelong learning with a network of experts. *CoRR*, abs/1611.06194, 2016. URL http://arxiv.org/abs/1611.06194.

A. Almahairi, N. Ballas, T. Cooijmans, Y. Zheng, H. Larochelle, and A. Courville. Dynamic Capacity Networks. *ArXiv e-prints*, November 2015.

Dario Amodei, Rishita Anubhai, Eric Battenberg, Carl Case, Jared Casper, Bryan Catanzaro, Jingdong Chen, Mike Chrzanowski, Adam Coates, Greg Diamos, Erich Elsen, Jesse Engel, Linxi Fan, Christopher Fougner, Tony Han, Awni Y. Hannun, Billy Jun, Patrick LeGresley, Libby Lin, Sharan Narang, Andrew Y. Ng, Sherjil Ozair, Ryan Prenger, Jonathan Raiman, Sanjeev Satheesh, David Seetapun, Shubho Sengupta, Yi Wang, Zhiqian Wang, Chong Wang, Bo Xiao, Dani Yogatama, Jun Zhan, and Zhenyao Zhu. Deep Speech 2: End-to-end speech recognition in English and Mandarin. *arXiv preprint arXiv:1512.02595*, 2015.

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. *arXiv preprint arXiv:1409.0473*, 2014.

Emmanuel Bengio, Pierre-Luc Bacon, Joelle Pineau, and Doina Precup. Conditional computation in neural networks for faster models. *arXiv preprint arXiv:1511.06297*, 2015.

Yoshua Bengio, Nicholas Léonard, and Aaron Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. *arXiv preprint arXiv:1308.3432*, 2013.

Ciprian Chelba, Tomas Mikolov, Mike Schuster, Qi Ge, Thorsten Brants, Phillipp Koehn, and Tony Robinson. One billion word benchmark for measuring progress in statistical language modeling. *arXiv preprint arXiv:1312.3005*, 2013.

K. Cho and Y. Bengio. Exponentially Increasing the Capacity-to-Computation Ratio for Conditional Computation in Deep Learning. *ArXiv e-prints*, June 2014.

Ronan Collobert, Samy Bengio, and Yoshua Bengio. A parallel mixture of SVMs for very large scale problems. *Neural Computing*, 2002.

Andrew Davis and Itamar Arel. Low-rank approximations for conditional feedforward computation in deep neural networks. *arXiv preprint arXiv:1312.4461*, 2013.

Marc Peter Deisenroth and Jun Wei Ng. Distributed Gaussian processes. In *ICML*, 2015.

John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization, 2010.

Nadir Durrani, Barry Haddow, Philipp Koehn, and Kenneth Heafield. Edinburgh's phrase-based machine translation systems for WMT-14. In *Proceedings of the Ninth Workshop on Statistical Machine Translation*, 2014.

David Eigen, Marc'Aurelio Ranzato, and Ilya Sutskever. Learning factored representations in a deep mixture of experts. *arXiv preprint arXiv:1312.4314*, 2013.

Ekaterina Garmash and Christof Monz. Ensemble learning for multi-source neural machine translation. In *staff.science.uva.nl/c.monz*, 2016.

Felix A. Gers, Jürgen A. Schmidhuber, and Fred A. Cummins. Learning to forget: Continual prediction with LSTM. *Neural Computation*, 2000.

Audrunas Gruslys, Rémi Munos, Ivo Danihelka, Marc Lanctot, and Alex Graves. Memory-efficient backpropagation through time. *CoRR*, abs/1606.03401, 2016. URL http://arxiv.org/abs/1606.03401.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. *IEEE Conference on Computer Vision and Pattern Recognition*, 2015.

Geoffrey Hinton, Li Deng, Dong Yu, George E. Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara N. Sainath, et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. *IEEE Signal Processing Magazine*, 2012.

Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. *Neural Computation*, 1997.

Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. *arXiv preprint arXiv:1502.03167*, 2015.

Robert A. Jacobs, Michael I. Jordan, Steven J. Nowlan, and Geoffrey E. Hinton. Adaptive mixtures of local experts. *Neural Computing*, 1991.

Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda B. Viégas, Martin Wattenberg, Greg Corrado, Macduff Hughes, and Jeffrey Dean. Google's multilingual neural machine translation system: Enabling zero-shot translation. *CoRR*, abs/1611.04558, 2016. URL http://arxiv.org/abs/1611.04558.

Michael I. Jordan and Robert A. Jacobs. Hierarchical mixtures of experts and the EM algorithm. *Neural Computing*, 1994.

Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. Exploring the limits of language modeling. *arXiv preprint arXiv:1602.02410*, 2016.

Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In *ICLR*, 2015.

Reinhard Kneser and Hermann Ney. Improved backing-off for m-gram language modeling, 1995.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. In *NIPS*, 2012.

Quoc V. Le, Marc'Aurelio Ranzato, Rajat Monga, Matthieu Devin, Kai Chen, Greg S. Corrado, Jeffrey Dean, and Andrew Y. Ng. Building high-level features using large scale unsupervised learning. In *ICML*, 2012.

Ludovic Denoyer and Patrick Gallinari. Deep sequential neural network. *arXiv preprint arXiv:1410.0510*, 2014.

Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. Effective approaches to attention-based neural machine translation. *EMNLP*, 2015a.

Minh-Thang Luong, Ilya Sutskever, Quoc V. Le, Oriol Vinyals, and Wojciech Zaremba. Addressing the rare word problem in neural machine translation. *ACL*, 2015b.

Carl Edward Rasmussen and Zoubin Ghahramani. Infinite mixtures of Gaussian process experts. *NIPS*, 2002.

Hasim Sak, Andrew W. Senior, and Françoise Beaufays. Long short-term memory recurrent neural network architectures for large scale acoustic modeling. In *INTERSPEECH*, pp. 338-342, 2014.

Mike Schuster and Kaisuke Nakajima. Japanese and Korean voice search. *ICASSP*, 2012.

Babak Shahbaba and Radford Neal. Nonlinear models using Dirichlet process mixtures. *JMLR*, 2009.

Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. In *NIPS*, 2014.

Lucas Theis and Matthias Bethge. Generative image modeling using spatial LSTMs. In *NIPS*, 2015.

Volker Tresp. Mixtures of Gaussian Processes. In *NIPS*, 2001.

Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Łukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. Google's neural machine translation system: Bridging the gap between human and machine translation. *arXiv preprint arXiv:1609.08144*, 2016.

Bangpeng Yao, Dirk Walther, Diane Beck, and Li Fei-Fei. Hierarchical mixture of classification experts uncovers interactions between brain regions. In *NIPS*, 2009.

Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. Recurrent neural network regularization. *arXiv preprint arXiv:1409.2329*, 2014.

Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, and Wei Xu. Deep recurrent models with fast-forward connections for neural machine translation. *arXiv preprint arXiv:1606.04199*, 2016.

## Appendices

## A Load-Balancing Loss

As discussed in Section 4, for load-balancing purposes, we want to define an additional loss function to encourage experts to receive roughly equal numbers of training examples. Unfortunately, the number of examples received by an expert is a discrete quantity, so it can not be used in back-propagation. Instead, we define a smooth estimator $Load(X)$ of the number of examples assigned to each expert for a batch $X$ of inputs. The smoothness allows us to back-propagate gradients through the estimator. This is the purpose of the noise term in the gating function. We define $P(x, i)$ as the probability that $G(x)_i$ is nonzero, given a new random choice of noise on element $i$, but keeping the already-sampled choices of noise on the other elements. To compute $P(x, i)$, we note that $G(x)_i$ is nonzero if and only if $H(x)_i$ is greater than the $k$th-greatest element of $H(x)$ excluding itself. The probability works out to be:

$$P(x,i)=Pr\Big((x\cdot W_{g})_{i}+StandardNormal()\cdot Softplus((x\cdot W_{noise})_{i})>kth\_excluding(H(x),k,i)\Big)\tag{8}$$

where $kth\_excluding(v, k, i)$ means the $k$th highest component of $v$, excluding component $i$. Simplifying, we get:

$$P(x,i)=\Phi\Big(\frac{(x\cdot W_{g})_{i}-kth\_excluding(H(x),k,i)}{Softplus((x\cdot W_{noise})_{i})}\Big)\tag{9}$$

where $\Phi$ is the CDF of the standard normal distribution.

$$Load(X)_{i}=\sum_{x\in X}P(x,i)\tag{10}$$

We can now define the load loss to be the square of the coefficient of variation of the load vector, multiplied by a hand-tuned scaling factor $w_{load}$.

$$L_{load}(X)=w_{load}\cdot CV(Load(X))^{2}\tag{11}$$
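
A NumPy sketch of the smooth load estimator in Equations 8 through 10, using the same toy gating setup as the earlier gating sketch (redefined here so the snippet stands alone); SciPy is used only for the standard normal CDF Φ, and none of the sizes are the paper's.

```python
import numpy as np
from scipy.stats import norm   # Phi, the standard normal CDF

rng = np.random.default_rng(0)
d_in, n_experts, k = 8, 4, 2
W_g = rng.standard_normal((d_in, n_experts)) * 0.1
W_noise = rng.standard_normal((d_in, n_experts)) * 0.1

def softplus(z):
    return np.log1p(np.exp(z))

def load(X):
    """Smooth estimate of how many examples in X would select each expert (Eq. 10)."""
    totals = np.zeros(n_experts)
    for x in X:
        clean = x @ W_g
        noise_std = softplus(x @ W_noise)
        h = clean + rng.standard_normal(n_experts) * noise_std           # Eq. 4
        for i in range(n_experts):
            others = np.delete(h, i)
            threshold = np.sort(others)[-k]                              # kth_excluding(H(x), k, i)
            totals[i] += norm.cdf((clean[i] - threshold) / noise_std[i]) # Eq. 9
    return totals

X = rng.standard_normal((16, d_in))
print(load(X))   # one smooth "load" value per expert; Eq. 11 penalizes their CV^2
```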

Initial Load Imbalance: To avoid out-of-memory errors, we need to initialize the network in a state of approximately equal expert load (since the soft constraints need some time to work). To accomplish this, we initialize the matrices $W_g$ and $W_{noise}$ to all zeros, which yields no signal and some noise.

Experiments: We trained a set of models with identical architecture (the MoE-256 model described in Appendix C), using different values of $w_{importance}$ and $w_{load}$. We trained each model for 10 epochs, then measured perplexity on the test set. We also measured the coefficients of variation in $Importance$ and $Load$, as well as the ratio of the load on the most overloaded expert to the average load. This last value is significant for load balancing purposes on distributed hardware. All of these metrics were averaged over several training batches.

| $w_{importance}$ | $w_{load}$ | Test Perplexity | $CV(Importance(X))$ | $CV(Load(X))$ | $\frac{max(Load(X))}{mean(Load(X))}$ |
|---|---|---|---|---|---|
| 0.0 | 0.0 | 39.8 | 3.04 | 3.01 | 17.80 |
| 0.2 | 0.0 | 35.6 | 0.06 | 0.17 | 1.47 |
| 0.0 | 0.2 | 35.7 | 0.22 | 0.04 | 1.15 |
| 0.1 | 0.1 | 35.6 | 0.06 | 0.05 | 1.14 |
| 0.01 | 0.01 | 35.7 | 0.48 | 0.11 | 1.37 |
| 1.0 | 1.0 | 35.7 | 0.03 | 0.02 | 1.07 |

Table 6: Experiments with different combinations of losses.

Results: Results are reported in Table 6. All the combinations containing at least one of the two losses led to very similar model quality, whereas having no loss was much worse. Models with higher values of $w_{load}$ had lower loads on the most overloaded expert.

## B Hierarchical Mixture of Experts

If the number of experts is very large, we can reduce the branching factor by using a two-level hierarchical MoE. In a hierarchical MoE, a primary gating network chooses a sparse weighted combination of "experts", each of which is itself a secondary mixture-of-experts with its own gating network.³ If the hierarchical MoE consists of $a$ groups of $b$ experts each, we denote the primary gating network by $G_{primary}$, the secondary gating networks by $(G_1, G_2, \ldots, G_a)$, and the expert networks by $(E_{0,0}, E_{0,1}, \ldots, E_{a,b})$. The output of the MoE is given by:

$$y_{H}=\sum_{i=1}^{a}\sum_{j=1}^{b}G_{primary}(x)_{i}\cdot G_{i}(x)_{j}\cdot E_{i,j}(x)\tag{12}$$

Our metrics of expert utilization change to the following:

$$Importance_{H}(X)_{i,j}=\sum_{x\in X}G_{primary}(x)_{i}\cdot G_{i}(x)_{j}\tag{13}$$

$$Load_{H}(X)_{i,j}=\frac{Load_{primary}(X)_{i}\cdot Load_{i}(X^{(i)})_{j}}{|X^{(i)}|}\tag{14}$$

$Load_{primary}$ and $Load_i$ denote the $Load$ functions for the primary gating network and $i$-th secondary gating network respectively. $X^{(i)}$ denotes the subset of $X$ for which $G_{primary}(x)_i > 0$.

It would seem simpler to let $Load_H(X)_{i,j} = Load_i(X^{(i)})_j$, but this would not have a gradient with respect to the primary gating network, so we use the formulation above.

³ We have not found the need for deeper hierarchies.
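
A small NumPy sketch of the hierarchical combination in Equation 12, with toy gate vectors and random linear "experts" standing in for real sub-networks; all names and sizes here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
a, b_experts, d = 3, 4, 8          # a groups of b experts, toy dimension d

def y_hierarchical(x, g_primary, g_secondary, expert_fn):
    """y_H = sum_i sum_j G_primary(x)_i * G_i(x)_j * E_{i,j}(x)  (Eq. 12)."""
    y = np.zeros(d)
    for i in range(a):
        if g_primary[i] == 0.0:                 # skip whole groups the primary gate drops
            continue
        for j in range(b_experts):
            if g_secondary[i][j] == 0.0:        # skip unselected experts within the group
                continue
            y += g_primary[i] * g_secondary[i][j] * expert_fn(i, j, x)
    return y

# Toy stand-ins: sparse gates and one random linear "expert" per (i, j).
g_primary = np.array([0.6, 0.0, 0.4])
g_secondary = [np.array([0.0, 1.0, 0.0, 0.0]) for _ in range(a)]
weights = rng.standard_normal((a, b_experts, d, d)) * 0.1
expert_fn = lambda i, j, x: x @ weights[i, j]

x = rng.standard_normal(d)
print(y_hierarchical(x, g_primary, g_secondary, expert_fn).shape)  # (8,)
```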

## C 1 Billion Word Language Modeling Benchmark - Experimental Details

## C.1 8-Million-Operations-per-Timestep Models

Model Architecture: Our model consists of five layers: a word embedding layer, a recurrent Long Short-Term Memory (LSTM) layer (Hochreiter & Schmidhuber, 1997; Gers et al., 2000), a MoE layer, a second LSTM layer, and a softmax layer. The dimensionality of the embedding layer, the number of units in each LSTM layer, and the input and output dimensionality of the MoE layer are all equal to 512. For every layer other than the softmax, we apply dropout (Zaremba et al., 2014) to the layer output, dropping each activation with probability $DropProb$, otherwise dividing by $(1 - DropProb)$. After dropout, the output of the previous layer is added to the layer output. This residual connection encourages gradient flow (He et al., 2015).

MoE Layer Architecture: Each expert in the MoE layer is a feed-forward network with one ReLU-activated hidden layer of size 1024 and an output layer of size 512. Thus, each expert contains $[512 \times 1024] + [1024 \times 512] = 1M$ parameters. The output of the MoE layer is passed through a sigmoid function before dropout. We varied the number of experts between models, using ordinary MoE layers with 4, 32 and 256 experts and hierarchical MoE layers with 256, 1024 and 4096 experts. We call the resulting models MoE-4, MoE-32, MoE-256, MoE-256-h, MoE-1024-h and MoE-4096-h. For the hierarchical MoE layers, the first level branching factor was 16, corresponding to the number of GPUs in our cluster. We use Noisy Top-K Gating (see Section 2.1) with $k = 4$ for the ordinary MoE layers and $k = 2$ at each level of the hierarchical MoE layers. Thus, each example is processed by exactly 4 experts for a total of 4M ops/timestep. The two LSTM layers contribute 2M ops/timestep each for the desired total of 8M.

Computationally-Matched Baselines: The MoE-4 model does not employ sparsity, since all 4 experts are always used. In addition, we trained four more computationally-matched baseline models with no sparsity:

- MoE-1-Wide: The MoE layer consists of a single "expert" containing one ReLU-activated hidden layer of size 4096.

- MoE-1-Deep: The MoE layer consists of a single "expert" containing four ReLU-activated hidden layers, each with size 1024.

- 4xLSTM-512: We replace the MoE layer with two additional 512-unit LSTM layers.

- LSTM-2048-512: The model contains one 2048-unit LSTM layer (and no MoE). The output of the LSTM is projected down to 512 dimensions (Sak et al., 2014). The next timestep of the LSTM receives the projected output. This is identical to one of the models published in (Jozefowicz et al., 2016). We re-ran it to account for differences in training regimen, and obtained results very similar to the published ones.

Training: The models were trained on a cluster of 16 K40 GPUs using the synchronous method described in Section 3. Each batch consisted of a set of sentences totaling roughly 300,000 words. In the interest of time, we limited training to 10 epochs (27,000 steps). Training took 12-16 hours for all models, except for MoE-4, which took 18 hours (since all the expert computation was performed on only 4 of 16 GPUs). We used the Adam optimizer (Kingma & Ba, 2015). The base learning rate was increased linearly for the first 1000 training steps, and decreased after that so as to be proportional to the inverse square root of the step number. The Softmax output layer was trained efficiently using importance sampling similarly to the models in (Jozefowicz et al., 2016). For each model, we performed a hyper-parameter search to find the best dropout probability, in increments of 0.1.

To ensure balanced expert utilization we set $w_{importance} = 0.1$ and $w_{load} = 0.1$, as described in Section 4 and Appendix A.
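
The learning rate schedule described above (linear warmup followed by inverse square root decay) can be written compactly; the 1000-step warmup comes from the text, while the base rate is a placeholder.

```python
def learning_rate(step, base_lr=0.001, warmup_steps=1000):
    """Linear warmup for the first 1000 steps, then decay proportional to 1/sqrt(step).

    warmup_steps is from the text; base_lr is a placeholder value.
    """
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * (warmup_steps / step) ** 0.5

print([round(learning_rate(s), 5) for s in (100, 1000, 4000, 27000)])
```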

Results: We evaluate our model using perplexity on the holdout dataset, used by (Chelba et al., 2013; Jozefowicz et al., 2016). We follow the standard procedure and sum over all the words including the end of sentence symbol. Results are reported in Table 7. For each model, we report the test perplexity, the computational budget, the parameter counts, the value of $DropProb$, and the computational efficiency.

| Model | Test Perplexity (10 epochs) | Test Perplexity (final) | ops/timestep (millions) | #Params excluding embed. & softmax (millions) | Total #Params (billions) | DropProb | TFLOPS per GPU (observed) |
|---|---|---|---|---|---|---|---|
| Kneser-Ney 5-gram* | | 67.6 | 0.00001 | | 1.8 | | |
| LSTM-512-512* | | 54.1 | 2.4 | 2.4 | 0.8 | 0.1 | |
| LSTM-1024-512* | | 48.2 | 4.7 | 4.7 | 0.8 | 0.1 | |
| LSTM-2048-512* | 45.0 | 43.7 | 9.4 | 9.4 | 0.8 | 0.1 | 0.61 |
| LSTM-2048-512 | 44.7 | | 9.4 | 9.4 | 0.8 | 0.1 | 1.21 |
| 4xLSTM-512 | 46.0 | | 8.4 | 8.4 | 0.8 | 0.1 | 1.07 |
| MoE-1-Wide | 46.1 | | 8.4 | 8.4 | 0.8 | 0.1 | 1.29 |
| MoE-1-Deep | 45.7 | | 8.4 | 8.4 | 0.8 | 0.1 | 1.29 |
| MoE-4 | 45.0 | | 8.4 | 8.4 | 0.8 | 0.1 | 0.52 |
| MoE-32 | 39.7 | | 8.4 | 37.8 | 0.9 | 0.1 | 0.87 |
| MoE-256 | 35.7 | | 8.6 | 272.9 | 1.1 | 0.1 | 0.81 |
| MoE-256-h | 36.0 | | 8.4 | 272.9 | 1.1 | 0.1 | 0.89 |
| MoE-1024-h | 34.6 | | 8.5 | 1079.0 | 1.9 | 0.2 | 0.90 |
| MoE-4096-h | 34.1 | | 8.9 | 4303.4 | 5.1 | 0.2 | 0.74 |
| 2xLSTM-8192-1024* | 34.7 | 30.6 | 151.0 | 151.0 | 1.8 | 0.25 | 1.09 |
| MoE-34M | 31.3 | | 33.8 | 4313.9 | 6.0 | 0.3 | 1.22 |
| MoE-143M | 28.0 | | 142.7 | 4371.1 | 6.0 | 0.4 | 1.56 |

Table 7: Model comparison on the 1 Billion Word Language Modeling Benchmark.

## C.2 More Expensive Models

We ran two additional models (MoE-34M and MoE-143M) to investigate the effects of adding more computation in the presence of a large MoE layer. These models have computation budgets of 34M and 143M ops/timestep. Similar to the models above, these models use a MoE layer between two LSTM layers. The dimensionality of the embedding layer, and the input and output dimensionality of the MoE layer, are set to 1024 instead of 512. For MoE-34M, the LSTM layers have 1024 units. For MoE-143M, the LSTM layers have 4096 units and an output projection of size 1024 (Sak et al., 2014). MoE-34M uses a hierarchical MoE layer with 1024 experts, each with a hidden layer of size 2048. MoE-143M uses a hierarchical MoE layer with 256 experts, each with a hidden layer of size 8192. Both models have 4B parameters in the MoE layers. We searched for the best $DropProb$ for each model, and trained each model for 10 epochs.

The two models achieved test perplexity of 31.3 and 28.0 respectively, showing that even in the presence of a large MoE, more computation is still useful. Results are reported at the bottom of Table 7. The larger of the two models has a similar computational budget to the best published model from the literature, and training times are similar. Comparing after 10 epochs, our model has a lower test perplexity by 18%.
416
+
417
+ ## D 100 Billion Word Google News Corpus - Experimental Details
418
+ Model Architecture: The models are similar in structure to the 8-million-operations-per-timestep models described in the previous section. We vary the number of experts between models, using an ordinary MoE layer with 32 experts and hierarchical MoE layers with 256, 1024, 4096, 16384, 65536 and 131072 experts. For the hierarchical MoE layers, the first level branching factors are 32, 32, 64, 128, 256 and 256, respectively.
419
+
420
+ Training: Models are trained on a cluster of 32 Tesla K40 GPUs, except for the last two models, which are trained on clusters of 64 and 128 GPUs so as to have enough memory for all the parameters. For all models, training batch sizes are approximately 2.5 million words. Models are trained once-through over about 100 billion words. We implement several memory optimizations in order to fit up to 1 billion parameters per GPU. First, we do not store the activations of the hidden layers of the experts, but instead recompute them on the backwards pass. Secondly, we modify the optimizer on the expert parameters to require less auxiliary storage:
421
+ The Adam optimizer (Kingma & Ba, 2015) keeps first and second moment estimates of the per-parameter gradients. This triples the required memory. To avoid keeping a first-moment estimator, we set β1 = 0. To reduce the size of the second moment estimator, we replace it with a factored approximation. For a matrix of parameters, instead of maintaining a full matrix of second-moment estimators, we maintain vectors of row-wise and column-wise averages of that matrix. At each step, the matrix of estimators is taken to be the outer product of those two vectors divided by the mean of either one. This technique could similarly be applied to Adagrad (Duchi et al., 2010).
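+
+ A minimal NumPy sketch of this memory-saving optimizer variant (β1 = 0 plus a factored second-moment estimate) is given below; the function name, decay rate, learning rate, and epsilon are illustrative assumptions, and bias correction is omitted for brevity.
+
+ ```python
+ import numpy as np
+
+ def factored_adam_step(param, grad, v_row, v_col, lr=0.01, beta2=0.999, eps=1e-30):
+     """One update for a 2-D parameter matrix.
+
+     Only row-wise and column-wise running averages of grad**2 are stored; the
+     full second-moment estimate is their outer product divided by the mean of
+     either vector. With beta1 = 0, no first-moment accumulator is kept.
+     """
+     sq = grad ** 2 + eps
+     v_row = beta2 * v_row + (1.0 - beta2) * sq.mean(axis=1)   # per-row average
+     v_col = beta2 * v_col + (1.0 - beta2) * sq.mean(axis=0)   # per-column average
+     v_hat = np.outer(v_row, v_col) / v_row.mean()             # factored estimator
+     param = param - lr * grad / np.sqrt(v_hat)
+     return param, v_row, v_col
+ ```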
422
+
423
+ | Model | Test Perplexity (.1 epochs) | Test Perplexity (1 epoch) | ops/timestep (millions) | #Params excluding embed. & softmax (millions) | Total #Params (billions) | TFLOPS per GPU (observed) |
+ |-------|-----------------------------|---------------------------|-------------------------|-----------------------------------------------|--------------------------|---------------------------|
+ | Kneser-Ney 5-gram | 67.1 | 45.3 | 0.00001 | | 76.0 | |
+ | 4xLSTM-512 | 54.5 | 47.0 | 8.4 | 8.4 | 0.1 | 1.23 |
+ | MoE-32 | 48.5 | 40.4 | 8.4 | 37.8 | 0.1 | 0.83 |
+ | MoE-256-h | 42.8 | 35.3 | 8.4 | 272.9 | 0.4 | 1.11 |
+ | MoE-1024-h | 40.3 | 32.7 | 8.5 | 1079.0 | 1.2 | 1.14 |
+ | MoE-4096-h | 38.9 | 30.9 | 8.6 | 4303.4 | 4.4 | 1.07 |
+ | MoE-16384-h | 38.2 | 29.7 | 8.8 | 17201.0 | 17.3 | 0.96 |
+ | MoE-65536-h | 38.2 | 28.9 | 9.2 | 68791.0 | 68.9 | 0.72 |
+ | MoE-131072-h | 39.8 | 29.2 | 9.7 | 137577.6 | 137.7 | 0.30 |
436
+
437
+ Results: We evaluate our model using perplexity on a holdout dataset. Results are reported in Table 8. Perplexity after 100 billion training words is 39% lower for the 68-billion-parameter MoE
438
+ model than for the baseline model. It is notable that the measured computational efficiency of the largest model (0.30 TFLOPS/GPU) is very low compared to the other models. This is likely a result of the fact that, for purposes of comparison to the other models, we did not increase the training batch size proportionally to the number of GPUs. For comparison, we include results for a computationally matched baseline model consisting of 4 LSTMs, and for an unpruned 5-gram model with Kneser-Ney smoothing (Kneser & Ney, 1995).4
439
+
440
+ ## E Machine Translation - Experimental Details
441
+
442
+ Model Architecture for Single Language Pair MoE Models: Our model is a modified version of the GNMT model described in (Wu et al., 2016). To reduce computation, we decrease the number of LSTM layers in the encoder and decoder from 9 and 8 to 3 and 2 respectively. We insert MoE
443
+ layers in both the encoder (between layers 2 and 3) and the decoder (between layers 1 and 2). We use an attention mechanism between the encoder and decoder, with the first decoder LSTM receiving output from and providing input for the attention 5. All of the layers in our model have input and output dimensionality of 512. Our LSTM layers have 2048 hidden units, with a 512-dimensional output projection. We add residual connections around all LSTM and MoE layers to encourage gradient flow (He et al., 2015). Similar to GNMT, to effectively deal with rare words, we used subword units (also known as "wordpieces") (Schuster & Nakajima, 2012) for inputs and outputs in our system.
444
+
445
+ We use a shared source and target vocabulary of 32K wordpieces. We also used the same beam search technique as proposed in (Wu et al., 2016).
446
+
447
+ We train models with different numbers of experts in the MoE layers. In addition to a baseline model with no MoE layers, we train models with flat MoE layers containing 32 experts, and models with hierarchical MoE layers containing 512 and 2048 experts. The flat MoE layers use k = 4 and the hierarchical MoE models use k = 2 at each level of the gating network. Thus, each input is processed by exactly 4 experts in each MoE layer. Each expert in the MoE layer is a feed forward network with one hidden layer of size 2048 and ReLU activation. Thus, each expert contains [512 ∗ 2048] + [2048 ∗ 512] = 2M parameters. The output of the MoE layer is passed through a sigmoid function. We use the strictly-balanced gating function described in Appendix F.
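+
+ As a quick sanity check on the parameter count above, here is a minimal NumPy sketch of a single expert (hypothetical variable names; a feed-forward network with one hidden layer of size 2048 and ReLU activation):
+
+ ```python
+ import numpy as np
+
+ d_model, d_hidden = 512, 2048
+ W_in = np.zeros((d_model, d_hidden))    # first projection of one expert
+ W_out = np.zeros((d_hidden, d_model))   # second projection of one expert
+
+ def expert_ffn(x, W_in, W_out):
+     """One expert: ReLU feed-forward net with a 2048-unit hidden layer."""
+     return np.maximum(x @ W_in, 0.0) @ W_out
+
+ print(W_in.size + W_out.size)  # 2,097,152, i.e. the ~2M parameters per expert quoted above
+ ```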
448
+
449
+ Model Architecture for Multilingual MoE Model: We used the same model architecture as for the single-language-pair models, with the following exceptions: We used noisy-top-k gating as described in Section 2.1, not the scheme from Appendix F. The MoE layers in the encoder and decoder are non-hierarchical MoEs with n = 512 experts, and k = 2. Each expert has a larger hidden layer of size 8192. This doubles the amount of computation in the MoE layers, raising the computational budget of the entire model from 85M to 102M ops/timestep.
+
+ Training: We trained our networks using the Adam optimizer (Kingma & Ba, 2015). The base learning rate was increased linearly for the first 2000 training steps, held constant for an additional 8000 steps, and decreased after that so as to be proportional to the inverse square root of the step number. For the single-language-pair models, similarly to (Wu et al., 2016), we applied dropout
450
+ (Zaremba et al., 2014) to the output of all embedding, LSTM and MoE layers, using *DropP rob* = 0.4. Training was done synchronously on a cluster of up to 64 GPUs as described in section 3. Each training batch consisted of a set of sentence pairs containing roughly 16000 words per GPU.
451
+
452
+ To ensure balanced expert utilization we set $w_{importance} = 0.01$ and $w_{load} = 0.01$, as described in Section 4 and Appendix A.
+
+ Metrics: We evaluated our models using the perplexity and the standard BLEU score metric. We reported tokenized BLEU score as computed by the multi-bleu.pl script, downloaded from the public implementation of Moses (on Github), which was also used in (Luong et al., 2015a).
+
+ Results: Tables 2, 3 and 4 in Section 5.3 show comparisons of our results to other published methods. Figure 4 shows test perplexity as a function of number of words in the (training data's)
453
+ source sentences processed for models with different numbers of experts. As can be seen from the Figure, as we increased the number of experts to approach 2048, the test perplexity of our model continued to improve.
454
+
455
+ ![17_image_0.png](17_image_0.png)
456
+ We found that the experts indeed become highly specialized by syntax and/or semantics, as can be seen in Table 9. For example, one expert is used when the indefinite article "a" introduces the direct object in a verb phrase indicating importance or leadership.
457
+
458
+ | Expert 381 | Expert 752 | Expert 2004 |
459
+ |---------------------------------------|--------------------------------------|---------------------------------|
460
+ | ... with researchers , ... | ... plays a core ... | ... with rapidly growing ... |
461
+ | ... to innovation . | ... plays a critical ... | ... under static conditions ... |
462
+ | ... tics researchers . | ... provides a legislative ... | ... to swift ly ... |
463
+ | ... the generation of ... | ... play a leading ... | ... to dras tically ... |
464
+ | ... technology innovations is ... | ... assume a leadership ... | ... the rapid and ... |
465
+ | ... technological innovations , ... | ... plays a central ... | ... the fast est ... |
466
+ | ... support innovation throughout ... | ... taken a leading ... | ... the Quick Method ... |
467
+ | ... role innovation will ... | ... established a reconciliation ... | ... rec urrent ) ... |
468
+ | ... research scienti st ... | ... played a vital ... | ... provides quick access ... |
469
+ | ... promoting innovation where ... | ... have a central ... | ... of volatile organic ... |
470
+ | ... | ... | ... |
471
+
472
+ ![17_image_1.png](17_image_1.png)
473
+
474
+ ## F Strictly Balanced Gating
+
+ Due to some peculiarities in our infrastructure which have since been fixed, at the time we ran some of the machine translation experiments, our models ran faster if every expert received exactly the same batch size. To accommodate this, we used a different gating function, which we describe below. Recall that we define the softmax gating function to be:
475
+
476
+ $$G_{\sigma}(x)=\mathrm{Softmax}(x\cdot W_{g})\tag{15}$$
478
+ Sparse Gating (alternate formulation): To obtain a sparse gating vector, we multiply Gσ(x) component-wise with a sparse mask M(Gσ(x)) and normalize the output. The mask itself is a function of Gσ(x) and specifies which experts are assigned to each input example:
479
+
480
+ $$G(x)_{i}=\frac{G_{\sigma}(x)_{i}\,M(G_{\sigma}(x))_{i}}{\sum_{j=1}^{n}G_{\sigma}(x)_{j}\,M(G_{\sigma}(x))_{j}}\tag{16}$$
485
+
486
+ Top-K Mask: To implement top-k gating in this formulation, we would let $M(v) = TopK(v, k)$,
487
+ where:
488
+
489
+ $$TopK(v,k)_{i}=\begin{cases}1&\text{if }v_{i}\text{ is in the top }k\text{ elements of }v,\\ 0&\text{otherwise.}\end{cases}\tag{17}$$
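+
+ The following is a minimal NumPy sketch of Equations 15-17 for a single example; the function names are illustrative assumptions, and batching and noise terms are omitted.
+
+ ```python
+ import numpy as np
+
+ def topk_mask(v, k):
+     """TopK(v, k): 1 for the k largest entries of v, 0 elsewhere (Eq. 17)."""
+     mask = np.zeros_like(v)
+     mask[np.argsort(-v)[:k]] = 1.0
+     return mask
+
+ def sparse_gate(x, W_g, k):
+     """Softmax gate (Eq. 15), masked and renormalized as in Eq. 16."""
+     logits = x @ W_g
+     g = np.exp(logits - logits.max())
+     g /= g.sum()                      # G_sigma(x)
+     m = topk_mask(g, k)               # M(G_sigma(x))
+     return (g * m) / (g * m).sum()    # G(x)
+ ```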
491
+ Batchwise Mask: To force each expert to receive the exact same number of examples, we introduce an alternative mask function, $M_{batchwise}(X, m)$, which operates over batches of input vectors.
492
+
493
+ Instead of keeping the top k values per example, we keep the top m values per expert across the training batch, where $m = \frac{k|X|}{n}$, so that each example is sent to an average of k experts.
495
+
496
+ $$M_{batchwise}(X,m)_{j,i}=\begin{cases}1&\text{if }X_{j,i}\text{ is in the top }m\text{ values for expert }i,\\ 0&\text{otherwise.}\end{cases}\tag{18}$$
497
+
498
+ As our experiments suggest, and as also observed in (Ioffe & Szegedy, 2015), using a batchwise function during training (such as $M_{batchwise}$) requires modifications at inference time, when we may not have a large batch of examples. Our solution is to train a vector T of per-expert threshold values to approximate the effects of the batchwise mask. We use the following mask at inference time:
499
+
500
+ $$M_{threshold}(x,T)_{i}=\begin{cases}1&\text{if }x_{i}>T_{i},\\ 0&\text{otherwise.}\end{cases}\tag{19}$$
503
+
504
+ To learn the threshold values, we apply an additional loss at training time which is minimized when the batchwise mask and the threshold mask are identical.
505
+
506
+ $$L_{batchwise}(X,T,m)=\sum_{j=1}^{|X|}\sum_{i=1}^{n}(M_{threshold}(x,T)_{i}-M_{batchwise}(X,m)_{j,i})(X_{j,i}-T_{i})\tag{20}$$
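+
+ A minimal NumPy sketch of the batchwise mask, the threshold mask, and the threshold-learning loss (Eqs. 18-20) follows, with X taken to be an [examples, experts] matrix of gate values; the function names are illustrative assumptions.
+
+ ```python
+ import numpy as np
+
+ def batchwise_mask(X, m):
+     """M_batchwise (Eq. 18): per expert (column), keep the top-m values in the batch."""
+     M = np.zeros_like(X)
+     top_rows = np.argsort(-X, axis=0)[:m]        # indices of the m largest values per expert
+     np.put_along_axis(M, top_rows, 1.0, axis=0)
+     return M
+
+ def threshold_mask(X, T):
+     """M_threshold (Eq. 19): per-expert thresholds T, usable one example at a time."""
+     return (X > T).astype(X.dtype)
+
+ def threshold_loss(X, T, m):
+     """L_batchwise (Eq. 20): minimized when the two masks agree on the batch X."""
+     diff = threshold_mask(X, T) - batchwise_mask(X, m)
+     return np.sum(diff * (X - T))
+ ```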
507
+
508
+ ## G Attention Function
509
+ The attention mechanism described in GNMT (Wu et al., 2016) involves a learned "Attention Function" $A(x_i, y_j)$ which takes a "source vector" $x_i$ and a "target vector" $y_j$, and must be computed for every source time step i and target time step j. In GNMT, the attention function is implemented as a feed-forward neural network with a hidden layer of size n. It can be expressed as:
510
+
511
+ $$A_{GNMT}(x_{i},y_{j})=\sum_{d=1}^{n}V_{d}\tanh((x_{i}U)_{d}+(y_{j}W)_{d})\tag{21}$$
512
+
513
+ Where U and W are trainable weight matrices and V is a trainable weight vector.
514
+
515
+ For performance reasons, in our models, we used a slightly different attention function:
516
+
517
+ $$A(x_{i},y_{j})=\sum_{d=1}^{n}V_{d}\tanh((x_{i}U)_{d})\,\tanh((y_{j}W)_{d})\tag{22}$$
518
+
519
+ With our attention function, we can simultaneously compute the attention function on multiple source time steps and multiple target time steps using optimized matrix multiplications. We found little difference in quality between the two functions.
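+
+ To make the difference concrete, here is a minimal NumPy sketch contrasting the two attention functions; names and shapes are illustrative assumptions. Computing Eq. 22 for all source/target pairs reduces to two elementwise tanh transforms and one matrix multiplication.
+
+ ```python
+ import numpy as np
+
+ def attention_gnmt(x_i, y_j, U, W, V):
+     """A_GNMT(x_i, y_j) from Eq. 21, for one source/target pair."""
+     return np.sum(V * np.tanh(x_i @ U + y_j @ W))
+
+ def attention_factored(X_src, Y_tgt, U, W, V):
+     """Eq. 22 evaluated for every (i, j) at once: returns an [S, T] score matrix."""
+     return (np.tanh(X_src @ U) * V) @ np.tanh(Y_tgt @ W).T
+ ```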
520
+
521
2202.09368v2.md ADDED
@@ -0,0 +1,426 @@
1
+ # Mixture-Of-Experts With Expert Choice Routing
2
+
3
+ Yanqi Zhou, Tao Lei, Hanxiao Liu, Nan Du, Yanping Huang, Vincent Zhao, Andrew Dai, Zhifeng Chen, Quoc Le, and James Laudon Google, Mountain View, CA, USA
4
+ {yanqiz, taole, hanxiaol, dunan, huangyp, vzhao, adai, zhifengc, qvl, jlaudon}@google.com
5
+
6
+ ## Abstract
7
+
8
+ Sparsely-activated Mixture-of-experts (MoE) models allow the number of parameters to greatly increase while keeping the amount of computation for a given token or a given sample unchanged. However, a poor expert routing strategy can cause certain experts to be under-trained, leading to an expert being under or over-specialized. Prior work allocates a fixed number of experts to each token using a top-k function regardless of the relative importance of different tokens. To address this, we propose a heterogeneous mixture-of-experts employing an expert choice method. Instead of letting tokens select the top-k experts, we have experts selecting the top-k tokens. As a result, each token can be routed to a variable number of experts and each expert can have a fixed bucket size. We systematically study pre-training speedups using the same computational resources of the Switch Transformer top-1 and GShard top-2 gating of prior work and find that our method improves training convergence time by more than 2×. For the same computational cost, our method demonstrates higher performance in fine-tuning 11 selected tasks in the GLUE and SuperGLUE benchmarks. For a smaller activation cost, our method outperforms the T5 dense model in 7 out of the 11 tasks.
9
+
10
+ ## 1 Introduction
11
+
12
+ Scaling up model capacity, dataset size, and training time has demonstrated huge success in enhancing the performance of computer vision architectures [4, 11, 13, 14] as well as neural language models [2, 20, 26, 27]. The final model quality has been found to have a power-law relationship with the amount of data, model size, and compute time [16, 20]. However, training efficiency, which is defined as the total amount of computation used to achieve superior model quality than the state of the art system [21], should receive greater attention as we increase our efforts towards green AI [29].
13
+
14
+ Sparsely gated mixture-of-experts [31] (MoE) provides an effective way to scale model capacity given a fixed computational cost, and has recently played an important role in increasing the training efficiency of large-scale language models [10, 21]. MoE models operate by adopting a number of experts, each as a sub-network, and by activating only one or a few experts for each input token. A gating network must be chosen and optimized in order to route each token to the most suited expert(s). For example, recent work has implemented sparse routing via k-means clustering [12], linear assignment to maximize token-expert affinities [22], or hashing [8, 28]. Much of the prior work uses a routing strategy based on *token choice*, where each token selects the best one or two experts.
15
+
16
+ We argue that the independent token choice of prior work often leads to an imbalanced load of experts, which causes training inefficiency and sub-optimal training of the model. In order to mitigate this
17
+
18
+ ![1_image_0.png](1_image_0.png)
19
+
20
+ issue, previous sparsely gated networks introduce additional auxiliary losses as regularization to prevent too many tokens being routed to a single expert, but the effectiveness is still limited. Recent approaches [8, 22, 28] explore alternative strategies for routing, but they focus on pre-training only and do not demonstrate performance gain on downstream tasks. Moreover, none of the previous methods consider allocating a variable number of experts to each token based on importance, which can be beneficial.
21
+
22
+ We propose a very simple yet effective routing method we are calling *expert choice*. Unlike conventional MoE where tokens select one or two top-scoring experts, our method lets each *expert* pick the top-k tokens. Our method guarantees perfect load balancing, allows a variable number of experts for each token, and achieves substantial gains in training efficiency and downstream performance as demonstrated in our experiments. Our major contributions include:
23
+
24
+ - We identify common pitfalls in conventional MoE such as load imbalance as described in Section 3.1. We then propose a heterogeneous, expert choice method to provide a fluid allocation of model parameters based on a learnt token-to-expert importance. This method intrinsically guarantees load balance without imposing an auxiliary loss.
25
+
26
+ - We show our method provides over 2× faster training convergence in an 8B/64E (8 billion activated parameters, 64 experts) model, compared to the top-1 and top-2 gating counterparts in Switch Transformer [10] and GShard [21].
27
+
28
+ - We show our method demonstrates strong scaling when increasing the number of experts from 16 to 128, evaluated in training perplexity.
29
+
30
+ - We show our method demonstrates strong performance on downstream tasks selected from GLUE and SuperGLUE at all the evaluated scales. More specifically, our 8B/64E model outperforms a T5 11B dense model in 7 out of 11 tasks evaluated.
31
+
32
+ ## 2 Related Work
33
+
34
+ Scaling: Various approaches have been proposed to scale up neural network capacity to improve performance. Recent works have successfully scaled models to billions of parameters via various forms of model parallelism [2, 21, 26, 27, 33]. Model parallelism [30] splits weights and tensors across multiple cores while pipeline parallelism [18, 24] splits different layers across devices with micro-batches pipelined to the different layers. To enable continued scaling of neural networks, improving model training and serving efficiency has become a critical research area.
35
+
36
+ Conditional Computation: Computation decisions can be made dynamically based on the input [23, 25]. Conditional computation has been proposed as a way to increase the capacity of a deep neural network without increasing the amount of computation, by activating certain parameters and computation on demand, on a per-example or per-token basis [3]. Conditional convolution layers [1]
37
+ with task-specific gating has been used to combat catastrophic forgetting when a sequence of learning problems are optimized. The gating decisions may be binary or sparse and continuous, stochastic or deterministic.
38
+
39
+ Mixture of Experts: Sparsely-gated MoE [31] is the first model to demonstrate massive improvements in model capacity, training time, or model quality with gating. Switch Transformer [10]
40
+ simplifies the gating by selecting only the top expert per token using a softmax over the hidden state and demonstrates better scaling than previous work. All the prior work requires an auxiliary loss to explicitly encourage balancing. This loss term has to be carefully weighted to not overwhelm the primary loss. However, auxiliary loss does not guarantee balancing and a hard capacity factor has to be imposed. As a result, many tokens can still be unprocessed by the MoE layer. Hard MoE [12] with a single decoding layer can be efficiently trained to good effect on large scale hashtag prediction tasks.
41
+
42
+ Base Layers [22] formulate a linear assignment that maximizes token-expert affinities while ensuring each expert receives an equal number of tokens. Hash layers [8, 28] devise hashing techniques on input tokens. However, the evaluations are limited to pre-training perplexity. THOR [? ] randomly activates experts during training and inference and is trained with a consistency regularization loss.
43
+
44
+ THOR has demonstrated strong performance on translation tasks. Different from these prior works, our method is a learnt method that enables heterogeneous MoE and effectively improves downstream fine-tuning performance.
45
+
46
+ ## 3 Method
47
+
48
+ We first identify a few pitfalls in the routing method of conventional mixture-of-experts (MoE) models and then present our method using expert choice to tackle these problems.
49
+
50
+ ## 3.1 Pitfalls Of Token-Choice Routing
51
+
52
+ While MoE can be computationally advantageous compared to a dense model, a routing strategy must be used to assign each token to the most-suited experts. Conventional MoE models employ *token-choice* routing, which independently selects the top-k experts for each token [10, 21, 31]. We argue that this strategy has a few pitfalls that lead to sub-optimal training.
53
+
54
+ Load Imbalance: Token-choice routing often leads to poor load balancing across experts. That is, some experts may be trained with most tokens, leaving the remaining experts under-utilized. Experts can be under-specialized because much of the model capacity in the under-utilized experts is wasted. On the other hand, some tokens will not be processed, since over-utilized experts can only take a maximum number of tokens at each step in order to avoid running out of memory. Load imbalance can also hurt step latency, and thus inference time, as the step latency can be determined by the most loaded expert. Previous methods add an auxiliary loss on load balancing to mitigate the issue. However, this auxiliary loss does not guarantee a balanced load, especially during the important early stages of training. Indeed, **we empirically observe that the over-capacity ratio can reach 20%–40% for**
55
+ some experts in token choice routing, indicating that a significant portion of the tokens routed to these experts will be dropped.
56
+
57
+ Under Specialization: Each MoE layer uses a gating network to learn token-to-expert affinity.
58
+
59
+ Ideally, the learnt gating network should produce the affinity such that similar or relevant tokens are routed to the same expert. A sub-optimal strategy can produce redundant experts and/or experts that are not sufficiently specialized. Under-specialization may result from imposing a large auxiliary loss which favors more load-balanced but less effective routing. Finding the right balance on the auxiliary loss to promote both load balancing and specialization is challenging for token-choice routing.
60
+
61
+ Same Compute for Every Token: Finally, in a token-choice strategy each token receives exactly k experts and therefore occupies the same amount of compute. We hypothesize that this is neither necessary nor desirable. Instead, an MoE model should flexibly allocate its compute resources based on the complexity of the input. Motivated by the aforementioned observations, we next describe a simple yet effective method which produces load-balanced assignments based on *expert choice*.
62
+
63
+ ## 3.2 Heterogeneous Moe Via Expert Choice
64
+
65
+ Different from conventional routing, an expert choice method independently selects top-k tokens for each expert, where k is a fixed expert capacity (i.e. the number of tokens each expert can take).
66
+
67
+ Despite its simplicity, expert choice achieves perfect load balancing by design. It also enables more flexible allocation of model compute since tokens can be received by a variable number of experts.
68
+
69
70
+
71
+ $$k=\frac{n\times c}{e}\tag{1}$$
74
+ where n is the total number of tokens in the input batch (such as batch size × sequence length), c is the capacity factor, and e is the number of experts. The capacity factor c denotes on average how many experts are utilized by a token. Given input token representations $X \in \mathbb{R}^{n\times d}$ where d is the model hidden dimension, our method produces a token-to-expert assignment denoted by three output matrices I, G and P. The matrix I is an index matrix where I[i, j] specifies the j-th selected token of the i-th expert. The gating matrix $G \in \mathbb{R}^{e\times k}$ denotes the weight of the expert for the selected token, and $P \in \mathbb{R}^{e\times k\times n}$ refers to a one-hot version of I that will be used to gather tokens for each expert.
78
+
79
+ These matrices are computed using a gating function,
80
+
81
+ $$\begin{aligned}S&=\mathrm{Softmax}(X\cdot W_{g}),\qquad S\in\mathbb{R}^{n\times e}\\ G,I&=\mathrm{TopK}(S^{\top},k),\qquad P=\mathrm{Onehot}(I)\end{aligned}\tag{2}$$
84
+
85
+ where S denotes the token-to-expert affinity scores, $W_g \in \mathbb{R}^{d\times e}$ denotes the expert embeddings, and $\mathrm{TopK}(\cdot)$ selects the k largest entries for each row of $S^{\top}$.
87
+
88
+ Similar to Switch Transformer [10] and GShard [21], we apply the mixture of experts and the gating function in the dense feed-forward (FFN) layer, as it is the most computationally expensive part of a Transformer-based network. The input to the gated FFN, denoted by $X_{in} \in \mathbb{R}^{e\times k\times d}$, is produced using the permutation matrix P. Here $X_{in}[i] \in \mathbb{R}^{k\times d}$ denotes the input of the i-th expert. Similarly, let $W_1$ and $W_2$ denote the parameters of the gated FFN, in which $W_1[i]$ and $W_2[i] \in \mathbb{R}^{d\times d'}$ denote the parameter matrices of the i-th expert. We compute the output of each expert $X_e[i]$ as follows,
+
+ $$X_{in}=P\cdot X$$
+ $$\forall i:\;\;X_{e}[i]=\mathrm{GeLU}(X_{in}[i]\cdot W_{1}[i])\cdot W_{2}[i]^{\top}\tag{3}$$
100
+ We omit the bias terms here for brevity. The final output of the gated FFN layer $X_{out} \in \mathbb{R}^{n\times d}$ can be obtained given $X_e$ and the permutation and gating matrices P and G,
+
+ $$X_{out}[l,d]=\sum_{i,j}P[i,j,l]\;G[i,j]\;X_{e}[i,j,d]\tag{4}$$
108
+ Both $X_e$ and $X_{out}$ can be efficiently computed using Einstein summation (einsum) operations.
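+
+ For illustration, here is a minimal NumPy sketch of Eqs. (1)-(4) for a single MoE layer; the identity "expert" stands in for the per-expert GeLU FFNs, and the function and variable names are illustrative assumptions rather than the paper's implementation.
+
+ ```python
+ import numpy as np
+
+ def expert_choice_layer(X, W_g, capacity_factor=2.0):
+     """Each expert picks its own top-k tokens; gather and combine use einsums."""
+     n, d = X.shape
+     e = W_g.shape[1]
+     k = int(n * capacity_factor / e)                     # Eq. (1): expert capacity
+
+     logits = X @ W_g                                     # [n, e] token-to-expert scores
+     logits = logits - logits.max(axis=-1, keepdims=True)
+     S = np.exp(logits) / np.exp(logits).sum(-1, keepdims=True)   # softmax over experts
+
+     I = np.argsort(-S.T, axis=1)[:, :k]                  # [e, k] selected token indices (Eq. 2)
+     G = np.take_along_axis(S.T, I, axis=1)               # [e, k] gating weights
+     P = np.eye(n)[I]                                     # [e, k, n] one-hot gather matrix
+
+     X_in = np.einsum('ekn,nd->ekd', P, X)                # gather tokens per expert (Eq. 3)
+     X_e = X_in                                           # placeholder for the expert FFNs
+     return np.einsum('ekn,ek,ekd->nd', P, G, X_e)        # combine back to [n, d] (Eq. 4)
+ ```
+
+ Because every expert receives exactly k tokens, the buckets are balanced by construction, which is the property the method relies on.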
109
+
110
+ ## 3.3 Expert Choice With Additional Constraint
111
+
112
+ We also consider regularizing our expert choice routing by limiting the maximum number of experts for each token. We are interested in whether adding this constraint improves pre-training and finetuning results. More importantly, it helps in analyzing to what degree using a variable number of experts per token affects the model performance.
113
+
114
+ Let $A \in \mathbb{R}^{e\times n}$ be a positive matrix where A[i, j] represents whether the i-th expert selects the j-th token.
116
+
117
+ We solve the following entropy-regularized linear programming problem
118
+
119
+ $$\max_{A}\;\left\langle S^{\top},A\right\rangle+\lambda H(A)$$
+ $$\text{s.t.}\;\;\forall i:\;\sum_{j^{\prime}}A[i,j^{\prime}]=k;\;\;\forall j:\;\sum_{i^{\prime}}A[i^{\prime},j]\leq b;\;\;\forall i,j:\;0\leq A[i,j]\leq1$$
+
+ where $\left\langle S^{\top},A\right\rangle$ denotes the inner product, H(A) is the sum of element-wise entropy1, and b > 0 is an integer that upper bounds the selection for each token. Adding a small entropy term gives a near-integer solution while enabling a fast iterative solver we can run on TPUs. Specifically, the solution space is the intersection of three convex sets, each satisfying one of the linear constraints.
123
+
124
+ We use Dykstra's algorithm [9], which alternately projects the intermediate solution onto one of the convex sets.2 After A is computed, the routing indices I are selected using $TopK(A, k)$ instead.
125
+
126
+ 1H(A) = $\sum_{i,j} -A[i,j]\log A[i,j]$
127
+ 2We use λ = 0.001 and a maximum of 100 iterations.
128
+
129
+ | Model | Type | nparams | nact-params | L | M | H | nheads | dhead | E |
+ |-------|------|---------|-------------|---|---|---|--------|-------|---|
+ | 0.1B | Dense | 130M | 130M | 12 | 768 | 3,072 | 12 | 64 | - |
+ | 0.1B/16E | MoE | 548M | 145M | 12 | 768 | 3,072 | 12 | 64 | 16 |
+ | 0.1B/32E | MoE | 1.0B | 145M | 12 | 768 | 3,072 | 12 | 64 | 32 |
+ | 0.1B/64E | MoE | 1.9B | 145M | 12 | 768 | 3,072 | 12 | 64 | 64 |
+ | 0.1B/128E | MoE | 3.7B | 145M | 12 | 768 | 3,072 | 12 | 64 | 128 |
+ | 8B | Dense | 8.7B | 8.7B | 32 | 4,096 | 16,384 | 32 | 128 | - |
+ | 8B/64E | MoE | 143B | 9.8B | 32 | 4,096 | 16,384 | 32 | 128 | 64 |
138
+
139
+ ## 3.4 Model Architecture
140
+
141
+ At the high level, we adopt the idea of sparsely activated Mixture-of-Experts (MoE) [31]. We use a Transformer architecture and replace the feed-forward component of every other Transformer layer with a MoE layer, following recent practice [10, 21]. Interleaving regular Transformer layers and MoE layers empirically improves model performance and training efficiency, probably because forcing some shared components in between MoE layers can mitigate the negative effects of skipping tokens. Several additional modifications adopted in recent work have been applied in our experiments. For example, we replace the standard positional embedding with per-layer relative positional bias [5].
142
+
143
+ In the non-MoE feed-forward sub-layers (only every other layer is an MoE layer), we replace the first linear projection and the activation function with the Gated Linear Unit [6], which computes the component-wise product of two linear transformations of the input, followed by a Gaussian Error Linear Unit [15] activation function.
144
+
145
+ As described earlier, each MoE layer consists of a group of independent feed-forward networks denoted as "experts". The gating function in Eq. (2) uses a softmax activation function to model a probability distribution over these experts. This distribution denotes the preference over experts of each incoming token, which is computed similarly to a conventional gating network [10, 21, 31]. During training, each MoE layer's learnable gating network described in Eq. (2) is trained to use the input to activate the best subset of experts using a top-k function along the token dimension. A
146
+ "shuffle" stage and an "unshuffle" stage are inserted to the MoE layer, where the first stage gathers the tokens to their designated experts while the second stage permutes the tokens back to their original order in the input batch. This step is formulated in Eq. (3) and Eq. (4).
147
+
148
+ Similar to conventional MoE methods, there are more parameters in the MoE layer. However, the activated model size per token can be comparable to a dense layer because during training or inference, only a limited subset of experts is activated for any given token. For instance, Switch Transformer [10]
149
+ has only one activated expert while GShard [21] uses two experts per token. In our method, the number of activated experts can vary for each token but the overall computation is kept the same as the baseline architectures by fixing the capacity factor c in Eq. (1). Unless otherwise specified, we set c = 2 such that our method can be directly compared to the top-2 token-choice gating in GShard.
150
+
151
+ We train several variants of our architecture at the 100M scale (i.e. 100M expert size) by increasing the number of experts to understand the scaling effects of our method. We also train a 8B scale MoE model. The large MoE model is partitioned with a 2D sharding algorithm as presented in GSPMD [36], which fully exploits the 2D topology of the TPU cluster [19]. Across different scales and setups, our method outperforms related work and demonstrates strong downstream task performance on selected tasks in GLUE and SuperGLUE.
152
+
153
+ ## 4 Experiments 4.1 Setup
154
+
155
+ Table 1 summarizes the hyperparameter settings of different MoE models. As a reference point, we also include the respective dense model configurations with comparable numbers of activated parameters per token during inference. To study the effect of scaling the number of experts, we
156
+
157
+ ![5_image_0.png](5_image_0.png)
158
+
159
+ varied the number of experts while fixing the per-expert size to 100M parameters. For example, 0.1B/64E represents the architecture of an approximately 100M parameter dense model with every other layer replaced by a 64-expert MoE layer. The MoE model degenerates into a dense transformer architecture when each MoE layer only has one expert. While n_params is the total number of trainable parameters, n_act-params represents the number of activated parameters per token. L is the total number of Transformer layers, M is the model dimension, H is the hidden dimension after the projection in each transformer layer, n_heads is the number of attention heads, and d_head is the hidden dimension of each attention head.
160
+
161
+ Dataset: We use the high-quality dataset from GLaM [7] of 1.6 trillion tokens that are representative of a wide range of natural language use cases. An in-house classifier is trained to classify between a collection of curated text and other webpages and estimate the content quality of a webpage. A
162
+ high-quality filtered subset of webpages is combined with books, Wikipedia pages, conversations, forums, and news to create the final dataset. The data and mixture weights can be found in Table 3 in the GLaM paper.
163
+
164
+ Model Training: Our model training follows the setups of GLaM [7] where a maximum sequence length of 1024 tokens is adopted. We use an Adafactor optimizer [32] with first-moment decay β1 = 0 and second-moment decay β2 = 0.99. We keep the learning rate constant for the first 10K
165
+ training steps, and then decay it with an inverse square root schedule. Unlike most related works, we do not impose any auxiliary loss for load balance, such as those described in Switch Transformer [10] and GShard [21]. We use the SentencePiece subword tokenizer with a vocabulary size of 256K. The largest model (8B/64E) is trained on 512 TPU V4 chips. We use a dropout rate of 0 during training as the number of tokens in the training data corpus is much greater than the total number of tokens seen during training.
166
+
167
+ Model Evaluation: We mainly focus on evaluating the finetuning performance on the 11 selected tasks from GLUE and SuperGLUE benchmarks [34, 35].
168
+
169
+ ## 4.2 Training Efficiency
170
+
171
+ We first study training efficiency and convergence. We use expert choice with a capacity factor of 2
172
+ (EC-CF2) to match the activated model size and computational cost on a per-token basis in GShard top-2 gating and run both for a fixed number of steps. The results are shown in Fig. 2 (a). Compared to GShard top-2 gating, which showed stronger performance in both evaluation-set perplexity and downstream fine-tuning than Switch Transformer top-1 gating, EC-CF2 converges more than 2x faster during training. More specifically, EC-CF2 reaches the same perplexity as GShard top-2 in less than half the steps, and with each GShard top-2 step being 20% slower than our method. As explained in Section 3.1, the slower step time in top-2 gating is due to load imbalance
173
+
174
+ | 100M/128E | 100M/64E | | | | | | | |
175
+ |-------------|------------|-------|----------|----------|--------|----------|----------|--------|
176
+ | Name | Metric | Split | ST Top-1 | GS Top-2 | EC-CF2 | ST Top-1 | GS Top-2 | EC-CF2 |
177
+ | BoolQ | acc | dev | 77.4 | 76.5 | 76.9 | 73.2 | 77.5 | 79.7 |
178
+ | CB | acc | dev | 87.5 | 80.9 | 89.1 | 85.9 | 84.4 | 89.1 |
179
+ | CoLA | acc | dev | 78.9 | 84.0 | 86.7 | 64.1 | 85.2 | 88.3 |
180
+ | MNLI | acc | dev | 82.3 | 83.6 | 84.9 | 80.8 | 85.2 | 86.7 |
181
+ | MRPC | acc | dev | 82.6 | 81.0 | 83.1 | 81.3 | 81.3 | 84.4 |
182
+ | QNLI | acc | dev | 89.5 | 88.6 | 89.0 | 89.4 | 89.7 | 91.3 |
183
+ | QQP | acc | dev | 90.6 | 90.3 | 90.4 | 88.9 | 90.5 | 91.0 |
184
+ | RTE | acc | dev | 77.0 | 78.9 | 78.5 | 74.1 | 79.3 | 81.6 |
185
+ | SST2 | acc | dev | 92.0 | 94.5 | 94.6 | 91.8 | 95.1 | 95.1 |
186
+ | WiC | acc | dev | 67.8 | 65.5 | 68.1 | 64.4 | 67.8 | 65.6 |
187
+ | WNLI | acc | dev | 65.6 | 70.3 | 67.2 | 68.8 | 68.8 | 71.7 |
188
+ | Avg | - | - | 81.0 | 81.3 | 82.6 | 78.4 | 82.2 | 84.0 |
189
+ | 100M/32E | 8B/64E | | | | | | | |
190
+ | Name | Metric | Split | ST Top-1 | GS Top-2 | EC-CF2 | ST Top-1 | GS Top-2 | EC-CF2 |
191
+ | BoolQ | acc | dev | 74.5 | 79.0 | 79.3 | 89.1 | 89.5 | 89.2 |
192
+ | CB | acc | dev | 80.6 | 81.3 | 92.2 | 93.8 | 96.7 | 100 |
193
+ | CoLA | acc | dev | 87.5 | 92.2 | 93.8 | 88.3 | 87.5 | 89.1 |
194
+ | MNLI | acc | dev | 83.1 | 87.8 | 88.0 | 90.7 | 91.4 | 91.1 |
195
+ | MRPC | acc | dev | 82.3 | 85.2 | 84.4 | 89.3 | 91.7 | 90.6 |
196
+ | QNLI | acc | dev | 91.6 | 91.9 | 92.5 | 94.5 | 94.9 | 95.0 |
197
+ | QQP | acc | dev | 90.1 | 91.5 | 92.0 | 92.1 | 92.5 | 93.8 |
198
+ | RTE | acc | dev | 75.0 | 79.1 | 78.1 | 91.0 | 92.2 | 95.2 |
199
+ | SST2 | acc | dev | 93.3 | 94.4 | 95.4 | 97.1 | 98.0 | 97.7 |
200
+ | WiC | acc | dev | 62.5 | 65.9 | 69.8 | 74.5 | 76.4 | 83.8 |
201
+ | WNLI | acc | dev | 65.6 | 64.1 | 68.8 | 78.1 | 82.8 | 92.8 |
202
+ | Avg | - | - | 80.6 | 83.5 | 85.0 | 88.9 | 90.3 | 92.6 |
203
+
204
+ Table 2: Expert choice with capacity factor of 2 (EC-CF2) outperforms Top-1 gating in Switch Transformer (ST) and top-2 gating in GShard (GS) on GLUE and SuperGLUE tasks. Note that with an expert size of 100M parameters, 100M/32E works best for our method and GShard Top-2 while 100M/128E works better for Switch Transformer Top-1. Our method consistently outperforms the others across all the scales.
205
+
206
+ where some experts can receive a lot more tokens than the desired capacity. As a result, the step latency will be bottlenecked by the most loaded expert.
207
+
208
+ ## 4.3 Scaling The Number Of Experts
209
+
210
+ As presented in Table 1, increasing the number of experts effectively increases model capacity without increasing activated model size. We scale the number of experts while fixing the expert size to 100M
211
+ parameters for both expert choice (EC) and GShard (Top-2) methods and find both methods work well in terms of perplexity on the evaluation dataset during pre-training. As demonstrated in Fig. 2
212
+ (b), having more experts consistently improves training perplexity.
213
+
214
+ ## 4.4 Fine-Tuning On Glue And Superglue
215
+
216
+ To validate whether improved perplexity directly translates to better performance in downstream tasks, we perform fine-tuning on 11 selected tasks from GLUE and SuperGLUE. We compare three MoE
217
+ methods including Switch Transformer top-1 gating (ST Top-1), GShard top-2 gating (GS Top-2)
218
+ and our method (EC-CF2) that matches the activation memory size and computational cost of GS
219
+ Top-2. As indicated by the results in Table 2, our EC-CF2 method consistently outperforms the related methods and yields a more than 2% average accuracy increase in the large 8B/64E setting. Table 3 further compares our 8B/64E model against its dense counterpart. Again, our method achieves stronger fine-tuning results, increasing the average score by 3.4 points.
220
+
221
+ Interestingly, we observe the 100M/32E model setting works the best for both GS Top-2 and EC-CF2, even though the effective model capacity is smaller than that of 100M/64E and 100M/128E. This result indicates that a good training perplexity does not always translate to better performance of downstream tasks.
222
+
223
+ | Model | BoolQ | CB | CoLA | MNLI | MRPC | QNLI | QQP | RTE | SST2 | WiC | WNLI | Avg |
+ |-------|-------|----|------|------|------|------|-----|-----|------|-----|------|-----|
+ | Dense 8B | 88.2 | 100 | 86.4 | 91.3 | 86.7 | 94.7 | 91.2 | 92.2 | 97.2 | 75.6 | 78.1 | 89.2 |
+ | EC-CF2 8B/64E | 89.2 | 100 | 89.1 | 91.1 | 90.6 | 95.0 | 93.8 | 95.2 | 97.7 | 83.8 | 92.8 | 92.6 |
227
+
228
+ Table 3: Comparison between Dense 8B and Expert Choice (EC-CF2) 8B/64E models: Our method significantly outperforms the dense model in downstream tasks.
229
+
230
+ Figure 3: Distribution of the number of experts routed to per token in a 100M/64E model.
231
+
232
+ | Method | Max # of Experts | Avg acc. |
233
+ |-----------------|--------------------|------------|
234
+ | EC-CAP2 | 2 | 83.2 ± 0.4 |
235
+ | EC-CAP3 | 3 | 84.0 ± 0.4 |
236
+ | EC-CF2 | - | 84.0 ± 0.2 |
237
+ | Hash Layer | - | 81.3 ± 0.1 |
238
+
239
+ ![7_image_0.png](7_image_0.png)
240
+
241
+ ![7_image_1.png](7_image_1.png)
242
+
243
+ ## 4.5 Heterogeneity Matters
244
+
245
+ Capped Expert Choice: We regularized expert choice by limiting the maximum number of experts for each token, using the method described in Section 3.3. Table 4 reports the average accuracy on the 11 selected datasets. EC-CAP2 is the variant of our expert choice method that limits the number of experts for each token to 2. This decreases the fine-tuning accuracy by 0.8 points on average. In addition, EC-CAP3 allows a maximum of 3 experts per token and achieves on-par results compared to the vanilla expert choice method. This ablation study confirms that **allowing a variable number of**
246
+ experts per token is indeed helpful.
247
+
248
+ Variable Experts per Token: We compute statistics on token-to-expert routing, particularly on the ratio of tokens that have been routed to a certain number of experts. According to Fig. 3, a majority of tokens have been routed to one or two experts while 23% have been routed to three or four experts and only about 3% of tokens have been routed to more than 4 experts. This plot verifies our hypothesis that our method learns to allocate a variable number of experts to tokens, which can be beneficial for important tokens.
249
+
250
+ ## 4.6 Comparison With Hash Layer
251
+
252
+ In this section, we compare our method with Hash Layers [28]. We use mod x to map a token ID
253
+ to an expert ID. This ensures load balance and generates specialized experts. The fine-tuning results are presented in the last row in Table 4. Hashing based routing performs worse than expert choice in terms of average scores and variance. **This indicates that load balancing alone does not generate** all the benefits.
254
+
255
+ ## 4.7 Ablation
256
+
257
+ Capacity Factor: We study the capacity factor in our expert choice method and compare the training perplexity with the baseline top-1 gating method used in Switch Transformer. As described in Eq. (1),
258
+ the capacity factor determines how many experts on average each token can be routed to, thus the bucket size k of each expert. In all our previous experiments, we use a capacity factor of 2, which matches the computational footprint of the top-2 gating used in GShard method. To match the computation cost on a per-token basis fairly with top-1 gating used in Switch Transformer, we reduce the capacity factor to 1 and plot the training perplexity in Fig. 4 (a). Not surprisingly, using a smaller capacity factor yields higher perplexity, but our method still significantly outperforms top-1 gating.
259
+
260
+ We further push the capacity factor down to 0.5, and observe that it still outperforms the top-1 gating.
261
+
262
+ Comparison with Dense Models on Pre-training: We compare our method with dense models on pre-training. As shown in Fig. 4 (b), our method consistently outperforms the dense method in
263
+
264
+ ![8_image_0.png](8_image_0.png)
265
+
266
+ perplexity and convergence time. For a small expert size of 100M parameters, the benefit of sparse gating is even more significant. Orthogonal to results presented in Fig. 2 (b), where scaling the number of experts improves model performance, Fig. 4 (b) shows that increasing expert capacity also significantly increases model performance.
267
+
268
+ ## 5 Conclusion
269
+
270
+ We propose a new routing method for sparsely activated mixture-of-experts (MoE) models. This method addresses load imbalance and under-utilization of experts in conventional MoE methods, and enables selecting different numbers of experts for each token. Our model demonstrates more than 2x training efficiency improvements when compared to the state-of-the-art GShard and Switch Transformer models, and also achieves strong gains when finetuning on 11 datasets in the GLUE and SuperGLUE benchmark.
271
+
272
+ ## 6 Limitations
273
+
274
+ The expert choice method might not immediately apply to auto-regressive text generation as our current implementation takes in the past and future tokens to perform the top-k selection. One possible solution is to collect a large batch of input sequences, dispatch tokens of the same sequence into separate groups, and perform expert choice routing for each group. Another scenario where the expert choice method does not immediately apply is when the batch size becomes very small during serving or inference. A global top-k can be selected instead and we can cap the number of times each expert or token gets selected. We leave these possible improvements for future work.
275
+
276
+ Another long-standing issue with MoE has been the large memory footprint. Even though computational cost can be reduced using sparsely gated networks, the total number of parameters increases linearly or sub-linearly with the number of experts. Increasing the number of experts requires reservation of a large number of hardware devices. Therefore, dynamic (used) power is saved while static
277
+ (reserved) power is not. Power saving techniques such as the ability to put hardware devices into low power states while not in use [17] can help with reducing the reserved power requirements.
278
+
279
+ ## References
280
+
281
+ [1] Davide Abati, Jakub Tomczak, Tijmen Blankevoort, Simone Calderara, Rita Cucchiara, and Babak Ehteshami Bejnordi. Conditional channel gated networks for task-aware continual learning. In *CVPR*, pages 3930–3939. Computer Vision Foundation / IEEE, 2020.
282
+
283
+ [2] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. In *Advances in Neural Information Processing Systems*.
284
+
285
+ [3] Kyunghyun Cho and Yoshua Bengio. Exponentially increasing the capacity-to-computation ratio for conditional computation in deep learning, 2014.
286
+
287
+ [4] Zihang Dai, Hanxiao Liu, Quoc V. Le, and Mingxing Tan. CoAtNet: Marrying convolution and attention for all data sizes. In *Advances in Neural Information Processing Systems*, 2021.
288
+
289
+ [5] Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc Le, and Ruslan Salakhutdinov.
290
+
291
+ Transformer-XL: Attentive language models beyond a fixed-length context. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy, July 2019. Association for Computational Linguistics.
292
+
293
+ [6] Yann N. Dauphin, Angela Fan, Michael Auli, and David Grangier. Language modeling with gated convolutional networks. In Proceedings of the 34th International Conference on Machine Learning - Volume 70, ICML'17, page 933–941. JMLR.org, 2017.
294
+
295
+ [7] Nan Du, Yanping Huang, Andrew M. Dai, Simon Tong, Dmitry Lepikhin, Yuanzhong Xu, Maxim Krikun, Yanqi Zhou, Adams Wei Yu, Orhan Firat, Barret Zoph, Liam Fedus, Maarten Bosma, Zongwei Zhou, Tao Wang, Yu Emma Wang, Kellie Webster, Marie Pellat, Kevin Robinson, Kathy Meier-Hellstern, Toju Duke, Lucas Dixon, Kun Zhang, Quoc V Le, Yonghui Wu, Zhifeng Chen, and Claire Cui. Glam: Efficient scaling of language models with mixtureof-experts, 2021.
296
+
297
+ [8] Dheeru Dua, Shruti Bhosale, Vedanuj Goswami, James Cross, Mike Lewis, and Angela Fan.
298
+
299
+ Tricks for training sparse translation models, 2021.
300
+
301
+ [9] Richard L Dykstra. An iterative procedure for obtaining i-projections onto the intersection of convex sets. *The annals of Probability*, pages 975–984, 1985.
302
+
303
+ [10] William Fedus, Barret Zoph, and Noam Shazeer. Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity, 2021.
304
+
305
+ [11] Golnaz Ghiasi, Tsung-Yi Lin, and Quoc V. Le. NAS-FPN: learning scalable feature pyramid architecture for object detection. In *CVPR*, pages 7036–7045. Computer Vision Foundation /
306
+ IEEE, 2019.
307
+
308
+ [12] Sam Gross, Marc'Aurelio Ranzato, and Arthur Szlam. Hard mixtures of experts for large scale weakly supervised vision, 2017.
309
+
310
+ [13] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In *2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*,
311
+ pages 770–778, 2016.
312
+
313
+ [14] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. In Bastian Leibe, Jiri Matas, Nicu Sebe, and Max Welling, editors, Computer Vision –
314
+ ECCV 2016, pages 630–645, Cham, 2016. Springer International Publishing.
315
+
316
+ [15] Dan Hendrycks and Kevin Gimpel. Gaussian error linear units (GELUs), 2016.
317
+
318
+ [16] Joel Hestness, Sharan Narang, Newsha Ardalani, Gregory Diamos, Heewoo Jun, Hassan Kianinejad, Md. Mostofa Ali Patwary, Yang Yang, and Yanqi Zhou. Deep learning scaling is predictable, empirically, 2017.
319
+
320
+ [17] Ping Huang, Zuocheng Xing, Tianran Wang, Qiang Wei, Hongyan Wang, and Guitao Fu. A
321
+ brief survey on power gating design. In *2010 10th IEEE International Conference on Solid-State* and Integrated Circuit Technology, pages 788–790, 2010.
322
+
323
+ [18] Yanping Huang, Youlong Cheng, Ankur Bapna, Orhan Firat, Dehao Chen, Mia Xu Chen, HyoukJoong Lee, Jiquan Ngiam, Quoc V. Le, Yonghui Wu, and Zhifeng Chen. Gpipe: Efficient training of giant neural networks using pipeline parallelism. In Hanna M. Wallach, Hugo Larochelle, Alina Beygelzimer, Florence d'Alché-Buc, Emily B. Fox, and Roman Garnett, editors, *Advances in Neural Information Processing Systems 32: Annual Conference on Neural* Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC,
324
+ Canada, pages 103–112, 2019.
325
+
326
+ [19] Norman P. Jouppi, Doe Hyun Yoon, George Kurian, Sheng Li, Nishant Patil, James Laudon, Cliff Young, and David A. Patterson. A domain-specific supercomputer for training deep neural networks. *Commun. ACM*, 63(7):67–78, 2020.
327
+
328
+ [20] Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B. Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling laws for neural language models, 2020.
329
+
330
+ [21] Dmitry Lepikhin, HyoukJoong Lee, Yuanzhong Xu, Dehao Chen, Orhan Firat, Yanping Huang, Maxim Krikun, Noam Shazeer, and Zhifeng Chen. GShard: Scaling giant models with conditional computation and automatic sharding. In *International Conference on Learning Representations*, 2021.
331
+
332
+ [22] Mike Lewis, Shruti Bhosale, Tim Dettmers, Naman Goyal, and Luke Zettlemoyer. Base layers: Simplifying training of large, sparse models. In Marina Meila and Tong Zhang, editors, *Proceedings of the 38th International Conference on Machine Learning*, volume 139 of Proceedings of Machine Learning Research, pages 6265–6274. PMLR, 18–24 Jul 2021.
333
+
334
+ [23] Min Lin, Jie Fu, and Yoshua Bengio. Conditional computation for continual learning, 2019.
+
+ [24] Deepak Narayanan, Aaron Harlap, Amar Phanishayee, Vivek Seshadri, Nikhil R. Devanur, Gregory R. Ganger, Phillip B. Gibbons, and Matei Zaharia. Pipedream: Generalized pipeline parallelism for dnn training. New York, NY, USA, 2019. Association for Computing Machinery.
335
+
336
+ [25] Joan Puigcerver, Carlos Riquelme Ruiz, Basil Mustafa, Cédric Renggli, André Susano Pinto, Sylvain Gelly, Daniel Keysers, and Neil Houlsby. Scalable transfer learning with expert models. In *ICLR*. OpenReview.net, 2021.
337
+
338
+ [26] Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. Improving language understanding by generative pre-training. 2018.
339
+
340
+ [27] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. *J. Mach. Learn. Res.*, 21:140:1–140:67, 2020.
341
+
342
+ [28] Stephen Roller, Sainbayar Sukhbaatar, Arthur Szlam, and Jason Weston. Hash layers for large sparse models, 2021.
343
+
344
+ [29] Roy Schwartz, Jesse Dodge, Noah A. Smith, and Oren Etzioni. Green ai, 2019.
+
+ [30] Noam Shazeer, Youlong Cheng, Niki Parmar, Dustin Tran, Ashish Vaswani, Penporn Koanantakool, Peter Hawkins, HyoukJoong Lee, Mingsheng Hong, Cliff Young, Ryan Sepassi, and Blake Hechtman. Mesh-tensorflow: Deep learning for supercomputers. In Proceedings of the 32nd International Conference on Neural Information Processing Systems, NIPS'18, page 10435–10444, Red Hook, NY, USA, 2018. Curran Associates Inc.
345
+
346
+ [31] Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc V. Le, Geoffrey E.
347
+
348
+ Hinton, and Jeff Dean. Outrageously large neural networks: The sparsely-gated mixture-ofexperts layer. In *ICLR (Poster)*. OpenReview.net, 2017.
349
+
350
+ [32] Noam Shazeer and Mitchell Stern. Adafactor: Adaptive learning rates with sublinear memory cost. In Jennifer Dy and Andreas Krause, editors, *Proceedings of the 35th International* Conference on Machine Learning, volume 80 of *Proceedings of Machine Learning Research*,
351
+ pages 4596–4604. PMLR, 10–15 Jul 2018.
352
353
+
354
355
+
356
+ [33] Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper, and Bryan Catanzaro. Megatron-lm: Training multi-billion parameter language models using model parallelism, 2020.
357
+
358
+ [34] Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. Superglue: A stickier benchmark for general-purpose language understanding systems. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, *Advances in Neural Information Processing Systems*. Curran Associates, Inc.
359
+
360
+ [35] Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman.
361
+
362
+ GLUE: A multi-task benchmark and analysis platform for natural language understanding.
363
+
364
+ In *Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting* Neural Networks for NLP, Brussels, Belgium, November 2018. Association for Computational Linguistics.
365
+
366
+ [36] Yuanzhong Xu, HyoukJoong Lee, Dehao Chen, Blake A. Hechtman, Yanping Huang, Rahul Joshi, Maxim Krikun, Dmitry Lepikhin, Andy Ly, Marcello Maggioni, Ruoming Pang, Noam Shazeer, Shibo Wang, Tao Wang, Yonghui Wu, and Zhifeng Chen. GSPMD: general and scalable parallelization for ML computation graphs. *CoRR*, abs/2105.04663, 2021.
367
+
368
## 7 Checklist

(a) Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? Yes
(b) Have you read the ethics review guidelines and ensured that your paper conforms to them? Yes
(c) Did you discuss any potential negative societal impacts of your work? **N/A. We do not anticipate any.**
(d) Did you describe the limitations of your work? Yes

(a) Did you include the code, data, and instructions needed to reproduce the main experimental results? **Yes. We include details in the experiment setup to help reproduce the main results.**
(b) Did you specify all the training details? Yes
(c) Did you report error bars? Yes
(d) Did you include the amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? Yes

(a) If your work uses existing assets, did you cite the creators? Yes
(b) Did you mention the license of the assets? **No. The used dataset is not released yet.**
(c) Did you include any new assets either in the supplemental material or as a URL? **No. The dataset is not released yet.**
(d) Did you discuss whether and how consent was obtained from people whose data you're using/curating? **No. We do not use any individuals' data.**
(e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? **Yes. The dataset does not contain any personally identifiable information or offensive content.**

## A Comparison On Fine-Tuning With A Dense Model

Our 8B MoE model achieves stronger pre-training perplexity than its dense counterpart. However, a better perplexity does not always directly translate to downstream performance, as demonstrated in Section 4.4. To this end, we compare the fine-tuning performance of the 8B dense model and the MoE model in Table 1. As shown in the table, our MoE model using expert choice routing consistently outperforms the dense model across the 11 tasks in GLUE and SuperGLUE.

| Model         | BoolQ | CB  | CoLA | MNLI | MRPC | QNLI | QQP  | RTE  | SST2 | WiC  | WNLI | Avg  |
|---------------|-------|-----|------|------|------|------|------|------|------|------|------|------|
| Dense 8B      | 88.2  | 100 | 86.4 | 91.3 | 86.7 | 94.7 | 91.2 | 92.2 | 97.2 | 75.6 | 78.1 | 89.2 |
| EC-CF2 8B/64E | 89.2  | 100 | 89.1 | 91.1 | 90.6 | 95.0 | 93.8 | 95.2 | 97.7 | 83.8 | 92.8 | 92.6 |

Table 1: Comparison between Dense 8B and Expert Choice (EC-CF2) 8B/64E models: our method significantly outperforms the dense model on downstream tasks.

## B Capacity Factor

We evaluate downstream fine-tuning performance while varying the capacity factor. Note that a capacity factor of n indicates that each token is routed to n experts on average. EC-CF2 is our baseline expert choice configuration and matches the computational footprint of GShard top-2 gating, while EC-CF1 matches the computational footprint of Switch Transformer top-1 gating. EC-CF0.5 further verifies that an aggressively lowered capacity factor still provides strong performance, almost matching the top-2 gating baseline.
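
To make the relation between the capacity factor and the per-expert budget concrete, here is a minimal NumPy sketch of expert choice selection. It is not the paper's Mesh-TensorFlow implementation; the function name and toy sizes are ours, and only the relation k = capacity factor × tokens / experts comes from the text.

```python
import numpy as np

def expert_choice(scores, capacity_factor=2.0):
    """Each expert picks its own top-k tokens by affinity score.

    scores: [num_tokens, num_experts] token-to-expert affinities.
    A capacity factor of n gives k = n * num_tokens / num_experts,
    i.e. each token is routed to n experts on average.
    """
    num_tokens, num_experts = scores.shape
    k = int(capacity_factor * num_tokens / num_experts)
    # Token indices chosen by every expert (k per expert), plus their gates.
    chosen = np.argsort(-scores, axis=0)[:k, :]         # [k, num_experts]
    gates = np.take_along_axis(scores, chosen, axis=0)  # [k, num_experts]
    return chosen, gates

# EC-CF2, EC-CF1 and EC-CF0.5 differ only in the capacity_factor argument.
toy_scores = np.random.rand(16, 4)                              # 16 tokens, 4 experts
chosen, gates = expert_choice(toy_scores, capacity_factor=0.5)  # k = 2 tokens per expert
```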

| Model       | BoolQ | CB   | CoLA | MNLI | MRPC | QNLI | QQP  | RTE  | SST2 | WiC  | WNLI | Avg       |
|-------------|-------|------|------|------|------|------|------|------|------|------|------|-----------|
| Top-2       | 78.1  | 87.0 | 88.3 | 85.0 | 82.6 | 90.1 | 90.7 | 81.6 | 94.7 | 68.2 | 67.2 | 83.0±0.3  |
| EC-CAP2     | 78.2  | 88.0 | 88.5 | 85.7 | 83.0 | 90.8 | 91.1 | 80.0 | 95.4 | 70.4 | 64.1 | 83.2±0.4  |
| EC-CAP3     | 78.5  | 91.7 | 89.3 | 86.3 | 83.5 | 90.9 | 91.1 | 81.8 | 94.9 | 70.0 | 65.6 | 84.0±0.4  |
| EC-CF2      | 79.1  | 89.6 | 89.3 | 86.8 | 84.3 | 91.3 | 91.2 | 81.1 | 95.2 | 68.1 | 68.0 | 84.0±0.2  |
| EC-CF1      | 77.4  | 90.6 | 88.0 | 85.5 | 83.6 | 90.3 | 91.2 | 79.8 | 95.3 | 66.5 | 64.9 | 83.0±0.2  |
| EC-CF0.5    | 77.4  | 89.6 | 86.3 | 85.2 | 82.7 | 91.7 | 91.0 | 79.6 | 94.9 | 67.3 | 63.5 | 83.0±0.05 |
| Hash Layers | 76.1  | 85.2 | 86.7 | 83.4 | 82.5 | 90.0 | 90.3 | 75.7 | 94.0 | 67.4 | 63.3 | 81.3±1.0  |

Table 2: Comparison between different routing methods when fine-tuning 100M/64E models. We perform 3 independent fine-tuning runs for each method and report the average results, which gives a more accurate comparison between the variants of the expert choice method since they achieve close fine-tuning results. We do not report averaged results in other experiments.

## C Capped Expert Choice

As described in Section 4.5, the maximum number of experts assigned to each token can be capped via an entropy-regularized linear program. Figure 1 compares the validation perplexity when training the 100M/64E models using the base expert choice method (EC-BASE), expert choice capped at two experts per token (EC-CAP2), expert choice capped at three experts per token (EC-CAP3), and GShard top-2 gating.

As shown in the figure, restricting the number of experts to 2 degrades the perplexity compared to the base expert choice method. This suggests that a more flexible allocation of experts (e.g., more than 2 experts per token) can enhance model expressiveness. On the other hand, EC-CAP2 and EC-CAP3 still outperform top-2 gating by a clear margin, which we believe confirms the effectiveness of the load-balanced training provided by our method. Finally, EC-CAP3 obtains perplexity comparable to EC-BASE: as indicated by Figure 3, only a small fraction of tokens is routed to more than 3 experts, so we see little or no difference between the EC-BASE and EC-CAP3 variants. We present the fine-tuning results of these methods in Table 2.

![13_image_0.png](13_image_0.png)
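
For illustration only, the sketch below enforces the per-token cap greedily on top of the expert choice selection sketched in Appendix B; it is not the entropy-regularized linear program used in the paper, and all names are placeholders.

```python
import numpy as np

def expert_choice_with_cap(scores, capacity_factor=2.0, max_experts_per_token=2):
    """Greedy illustration of capping the number of experts per token.

    Each expert still takes up to k tokens by affinity, but skips tokens that
    have already reached the cap. The paper solves this allocation with an
    entropy-regularized linear program rather than this greedy pass.
    """
    num_tokens, num_experts = scores.shape
    k = int(capacity_factor * num_tokens / num_experts)
    uses = np.zeros(num_tokens, dtype=int)       # experts already assigned per token
    chosen = [[] for _ in range(num_experts)]    # token ids selected by each expert

    for e in range(num_experts):
        for t in np.argsort(-scores[:, e]):      # tokens ranked by affinity to expert e
            if len(chosen[e]) == k:
                break
            if uses[t] < max_experts_per_token:  # respect the per-token cap
                chosen[e].append(int(t))
                uses[t] += 1
    return chosen
```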

## D Comparison With Hash Layer

In this section, we compare our method with Hash Layers [28]. We use mod x to map a token ID to an expert ID, which to some extent ensures load balance and produces specialized experts. The fine-tuning results are presented in the last row of Table 2. Hashing-based routing performs much worse than expert choice in terms of both average score and variance.
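
A one-line sketch of this routing rule, assuming the modulus x is the number of experts (the text leaves x unspecified):

```python
def hash_route(token_id: int, num_experts: int) -> int:
    """Static mod-based routing: the same token ID always maps to the same expert."""
    return token_id % num_experts

# e.g. with 64 experts, token ID 1027 is always sent to expert 3
assert hash_route(1027, 64) == 3
```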

## E Fine-Tuning Details

We performed a hyperparameter search for both the baseline models and the expert choice method. For fine-tuning the 8B dense model, we use a constant learning rate of 0.0001 and a dropout rate of 0.1. We freeze the attention and feed-forward layers while leaving the embeddings and layer normalization trainable; this setting was found to be optimal for the 8B dense model. For the MoE 8B/64E models, including GShard top-2 gating and expert choice, we found that continuing the learning rate from the pre-trained model while using a square-root learning rate decay works better. In addition, we do not apply parameter freezing when fine-tuning MoE models. For models with a 100M expert size, we use a constant learning rate of 0.0001 and no dropout.
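
As a rough illustration of the square-root decay mentioned above (the starting rate and step offset are not given in the text, so the values below are placeholders):

```python
import math

def sqrt_decay_lr(step: int, init_lr: float, start_step: int = 1) -> float:
    """lr(step) = init_lr * sqrt(start_step / step): an inverse-square-root decay."""
    return init_lr * math.sqrt(start_step / max(step, start_step))

# Example: continue from a hypothetical pre-training rate of 1e-3 at step 100k.
lr_at_400k = sqrt_decay_lr(400_000, init_lr=1e-3, start_step=100_000)  # 5e-4
```
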
DeepSeekMoE_2.md ADDED
The diff for this file is too large to render. See raw diff
 
switch_transformers.md ADDED
The diff for this file is too large to render. See raw diff